
CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning

2019-01-11
Yisroel Mirsky, Tom Mahler, Ilan Shelef, Yuval Elovici

Abstract

In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services. An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market. In this paper, we show how an attacker can use deep learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder. We implement the attack using a 3D conditional GAN and show how the framework (CT-GAN) can be automated. Although the body is complex and 3D medical scans are very large, CT-GAN achieves realistic results and can be executed in milliseconds. To evaluate the attack, we focus on injecting and removing lung cancer from CT scans. We show that three expert radiologists and a state-of-the-art deep learning AI could not differentiate between tampered and untampered scans. We also evaluate state-of-the-art countermeasures and propose our own. Finally, we discuss the possible attack vectors on modern radiology networks and demonstrate one of the attack vectors on an active CT scanner.

URL

http://arxiv.org/abs/1901.03597

PDF

http://arxiv.org/pdf/1901.03597
