
Towards Thwarting Social Engineering Attacks

2019-01-03
Zheyuan Ryan Shi, Aaron Schlenker, Brian Hay, Fei Fang

Abstract

Social engineering attacks are an increasingly important attack vector, used by sophisticated hackers to compromise organizations. Water-hole attacks, in particular, have been leveraged in many recent high-profile hacks. These attacks compromise a legitimate website to execute drive-by download attacks, redirecting users to another domain that hosts an exploit kit. To prevent water-hole attacks, organizations use a slew of countermeasures that alter the environment information given out by employees' systems when visiting websites. In this paper, we explore this domain and introduce a game-theoretic model that captures the most relevant aspects of an organization protecting itself from a water-hole attack. This model provides a foundation for an organization to implement an automated protection policy that uses technology-based countermeasures. Our main contributions are (1) the Social Engineering Deception Game model, (2) a detailed analysis of the game model, (3) an algorithm to solve for the optimal protection policy, (4) heuristics to improve the scalability of our approach, and (5) detailed experiments analyzing the application of our approach.
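To give a rough feel for the kind of defender-attacker optimization such a protection policy involves, the following is a minimal toy sketch, not the paper's actual Social Engineering Deception Game: the defender picks which websites to apply countermeasures to under a budget, the attacker best-responds by targeting the site with the highest expected payoff, and the defender minimizes that worst-case loss. All data (losses, blocking probabilities, budget) are made-up illustrative values.

import itertools

# Hypothetical illustrative data, not from the paper:
losses = [10.0, 6.0, 3.0]       # loss to the organization if site i is exploited
block_prob = [0.9, 0.8, 0.7]    # chance a countermeasure blocks the attack on site i
budget = 1                      # number of sites that can be protected at once

def attacker_payoff(protected, site):
    # Expected attacker gain from targeting `site` given the protected set.
    p_success = (1 - block_prob[site]) if site in protected else 1.0
    return p_success * losses[site]

def defender_loss(protected):
    # The attacker best-responds, so the defender faces the maximum expected loss.
    return max(attacker_payoff(protected, s) for s in range(len(losses)))

# Enumerate all protection policies within the budget and keep the one that
# minimizes the worst-case (best-responding attacker) loss.
best = min(itertools.combinations(range(len(losses)), budget), key=defender_loss)
print("protect sites:", best, "worst-case loss:", defender_loss(best))

With these numbers the policy protects site 0 (the highest-value target), leaving a worst-case loss of 6.0 from the attacker shifting to site 1; the paper's actual model and algorithm handle the richer structure of water-hole attacks and scale beyond brute-force enumeration.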

URL

http://arxiv.org/abs/1901.00586

PDF

http://arxiv.org/pdf/1901.00586

