
Can Machine Learning Model with Static Features be Fooled: an Adversarial Machine Learning Approach

2019-04-20
Rahim Taheri, Reza Javidan, Mohammad Shojafar, Vinod P, Mauro Conti

Abstract

The widespread adoption of smartphones dramatically increases the risk of attacks and the spread of mobile malware, especially on the Android platform. Machine learning-based solutions have already been used as a tool to supersede signature-based anti-malware systems. However, malware authors leverage attributes from malicious and legitimate samples to estimate statistical differences in order to create adversarial examples. Hence, to evaluate the vulnerability of machine learning algorithms in malware detection, we propose five different attack scenarios to perturb malicious applications (apps). By doing this, the classification algorithm fits an inappropriate discriminant function on the set of data points, eventually yielding a higher misclassification rate. Further, to distinguish adversarial examples from benign samples, we propose two defense mechanisms to counter the attacks. To validate our attacks and solutions, we test our model on three different benchmark datasets. We also test our methods using various classifier algorithms and compare them with the state-of-the-art data poisoning method based on the Jacobian matrix. Promising results show that the generated adversarial samples can evade detection with a very high probability. Additionally, when the evasive variants generated by our attack models are used to harden the developed anti-malware system, the detection rate improves.
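The core evasion idea the abstract describes, perturbing the static features of a malicious app so that a classifier's discriminant function misfires, can be illustrated with a toy sketch. The code below is not the paper's algorithm: the weights, features, and greedy flipping strategy are all illustrative assumptions. It shows a linear detector over binary static features (e.g. permissions, API calls) being evaded by greedily *adding* benign-leaning features, which preserves the app's original (malicious) feature set.

```python
import numpy as np

# Toy linear malware detector: score = w @ x + b, malicious if score > 0.
# Weights are hand-picked for illustration, not learned from real data.
w = np.array([1.0, 0.8, 0.6, -0.7, -0.9, -1.1, -0.4])
b = -0.5

def is_malicious(x):
    return float(w @ x + b) > 0.0

# A malicious sample: it exhibits exactly the malicious-weighted features.
x = (w > 0).astype(float)          # score = 2.4 - 0.5 = 1.9 -> malicious

# Greedy evasion: turn on absent features, most benign-leaning weight first,
# until the detector flips its decision. Features are only added, never
# removed, so the original app behavior is notionally preserved.
x_adv = x.copy()
flips = 0
for j in np.argsort(w):            # ascending: most negative weights first
    if not is_malicious(x_adv):
        break
    if x_adv[j] == 0 and w[j] < 0:
        x_adv[j] = 1.0
        flips += 1

print(is_malicious(x), is_malicious(x_adv), flips)
```

With these weights, two added features (weights -1.1 and -0.9) are enough to push the score below zero. The paper's defense side, retraining the detector on such evasive variants, corresponds to adding `x_adv` back into the training set with a malicious label.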

URL

http://arxiv.org/abs/1904.09433

PDF

http://arxiv.org/pdf/1904.09433
