
Stealing Neural Networks via Timing Side Channels

2019-02-05
Vasisht Duddu, Debasis Samanta, D Vijay Rao, Valentina E. Balas

Abstract

Deep learning is gaining importance in many applications. However, neural networks face several security and privacy threats. This is particularly significant when a cloud infrastructure deploys a service with a neural network model at the back end. Here, an adversary can extract the neural network parameters, infer the regularization hyperparameter, identify whether a data point was part of the training data, and generate effective transferable adversarial examples to evade classifiers. This paper shows how a neural network model is susceptible to a timing side channel attack. A black-box neural network extraction attack is proposed that exploits timing side channels to infer the depth of the network. Although constructing an equivalent architecture is a complex search problem, it is shown how reinforcement learning with knowledge distillation can effectively reduce the search space to infer a target model. The proposed approach has been tested with VGG (Visual Geometry Group) architectures on the CIFAR10 data set. It is observed that substitute models can be reconstructed with test accuracy close to that of the target models, and that the proposed approach is scalable and independent of the type of neural network architecture.
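The core signal behind the attack is that inference latency grows with network depth, so an attacker who can only time black-box queries still learns something about the architecture. The following is a minimal illustrative sketch of that idea, not the paper's implementation: it uses a toy fully connected model (instead of VGG) and hypothetical helper names (`make_mlp`, `median_query_time`) to show how per-query wall-clock time correlates with depth.

```python
import time
import numpy as np
import torch
import torch.nn as nn

def make_mlp(depth, width=512, in_dim=3 * 32 * 32, n_classes=10):
    """Toy stand-in for a target model: a plain MLP with `depth` hidden layers."""
    layers = [nn.Flatten()]
    dim = in_dim
    for _ in range(depth):
        layers += [nn.Linear(dim, width), nn.ReLU()]
        dim = width
    layers.append(nn.Linear(dim, n_classes))
    return nn.Sequential(*layers)

@torch.no_grad()
def median_query_time(model, n_queries=200):
    """Median wall-clock time of single-input forward passes (the timing side channel)."""
    model.eval()
    x = torch.randn(1, 3, 32, 32)  # CIFAR10-shaped dummy query
    model(x)  # warm-up pass so one-time setup cost does not skew the timings
    times = []
    for _ in range(n_queries):
        t0 = time.perf_counter()
        model(x)
        times.append(time.perf_counter() - t0)
    return float(np.median(times))

if __name__ == "__main__":
    # Deeper models show systematically larger inference latency; an attacker
    # uses this monotonic trend to estimate the depth of an unseen target model.
    for depth in (2, 6, 12, 20):
        ms = median_query_time(make_mlp(depth)) * 1e3
        print(f"depth={depth:2d}  median query time = {ms:.3f} ms")
```

In the paper's setting, an estimate of the depth obtained this way is then used to constrain the architecture search, which reinforcement learning with knowledge distillation explores to train the substitute model.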

URL

https://arxiv.org/abs/1812.11720

PDF

https://arxiv.org/pdf/1812.11720

