# Exploring the Difficulty of Hiding Keys in Neural Networks

In order to defend neural networks against malicious attacks, recent approaches propose the use of secret keys in the training or inference pipelines of learning systems. While this concept is innovative and the results are promising in terms of attack mitigation and classification accuracy, its effectiveness rests on the secrecy of the key, an aspect that is often not discussed. In this short paper, we explore this issue for the case of a recently proposed key-based deep neural network. White-box experiments on multiple models and datasets, using both the original key-based method and our own extensions, show that secret key bits can currently be extracted with relatively limited effort.
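To make the general idea concrete: in key-based defenses, a secret key is embedded somewhere in the training or inference pipeline, so that only key-holders can use the model as intended. The README does not spell out the scheme studied here, so the snippet below is only a minimal sketch of one well-known family of such defenses (key-seeded block-wise input shuffling); all names are illustrative.

```python
import numpy as np

def keyed_block_shuffle(image, key, block_size=4):
    """Illustrative key-based input transform (not necessarily the
    scheme from this paper): permute the pixels inside each block
    using a permutation derived from a secret integer key.
    Assumes H and W are multiples of block_size."""
    rng = np.random.default_rng(key)  # the secret key seeds the permutation
    perm = rng.permutation(block_size * block_size)
    h, w, c = image.shape
    out = image.copy()
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = out[y:y + block_size, x:x + block_size].reshape(-1, c)
            out[y:y + block_size, x:x + block_size] = block[perm].reshape(
                block_size, block_size, c)
    return out
```

If the key leaks, an attacker can apply the same transform and mount ordinary white-box attacks, which is why the question of how hard the key is to extract matters in the first place.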

Link to project: UIBK Security & Privacy Lab

## Code Overview

//TODO

- Models and Datasets
- Taboo Trap (see the sketch after this list)
- Adversarial Attacks & Transferability
- Visualization
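For orientation on the Taboo Trap component: in the published scheme (Shumailov et al.), the network is trained so that clean inputs keep monitored activations below secret per-neuron thresholds, and any input that triggers a "taboo" activation at inference is flagged as adversarial; the secret choice of thresholds acts as the key. The sketch below illustrates that core idea in a PyTorch-style setup, assuming this repository follows the published scheme; the function names and the choice of monitored layers are illustrative, not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def taboo_penalty(activations, thresholds, weight=1.0):
    """Training-time regularizer: penalize activations on clean data
    that stray above their (secret) per-neuron thresholds, so the
    trained model learns to stay out of the taboo region."""
    penalty = sum(F.relu(a - t).pow(2).sum()
                  for a, t in zip(activations, thresholds))
    return weight * penalty

def taboo_detect(activations, thresholds):
    """Inference-time check: any activation above its threshold marks
    the input as suspicious (a 'taboo' firing)."""
    return any((a > t).any().item()
               for a, t in zip(activations, thresholds))
```

In this threat model, the key is which neurons and thresholds were profiled; the white-box experiments in the paper target exactly these bits.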

## Example Usage

//TODO
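Until the section above is filled in, here is a purely hypothetical sketch of how such an experiment might be driven; none of the module or function names below are taken from this repository.

```python
# Purely hypothetical sketch; names are placeholders, not this repo's API.
from keyhiding import train_taboo_model, extract_key_bits  # hypothetical modules

model, secret_key = train_taboo_model(dataset="CIFAR-10")  # defender side
recovered_bits = extract_key_bits(model)                   # attacker side, white-box
print(f"recovered {len(recovered_bits)} of {len(secret_key)} key bits")
```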

## Results

The final master's thesis can be found at [...] //TODO