Hacking Healthcare - Why Our Future Medical Data May Be Under Attack

This week's paper can be found here: Adversarial Attacks Against Medical Deep Learning Systems

Healthcare costs have been increasing to the point where medical care may become unaffordable for a significant portion of the US population in the near future. Many solutions have been proposed to slow or reverse these cost increases, ranging from federal healthcare programs (such as the ACA) intended to make medical care more efficient, to limiting medical appointments to absolute emergencies.

One of these solutions is the idea of integrating artificial intelligence into medical practice. Ideally, this would reduce time spent on menial or repetitive tasks, cut inefficiency out of the healthcare system, predict chronic conditions before they become irreversible, and provide better medical care to patients across the board. In fact, I've advocated for this solution in an opinion piece in the past.

However, researchers at MIT and Harvard Medical School have recently published work warning us that the healthcare system may not be ready for deep learning. As with any other software, medical deep learning systems can be attacked, whether to steal private patient data or to manipulate their outputs, and their work reveals that these systems may be more susceptible than most to one particular kind of attack: the adversarial attack.

The power of adversarial attacks has been demonstrated in many areas of deep learning, including in a test of Google's deep learning-based image recognition software (Spoiler: It failed). By making small changes to an input, changes so subtle that a person's reading of the data would not change, an attacker can fool a deep learning model into misclassifying it, which in a medical context could mean incorrect patient diagnoses and potential patient harm.
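To make the idea concrete, here is a minimal sketch of one well-known attack of this type, the fast gradient sign method (FGSM). This is my own illustration under assumed conditions (a PyTorch image classifier with pixel values scaled to [0, 1]), not the exact procedure used in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    """Fast gradient sign method: nudge every pixel a tiny amount in the
    direction that most increases the model's loss. The change is usually
    imperceptible to a human reader, but it can flip the model's prediction."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by +/- epsilon according to the sign of its gradient.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values in valid range
```

With a small epsilon (about 1% of the pixel range here), a clinician looking at the perturbed image would see essentially the same scan, yet the classifier's output can change entirely.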

This is not to say that we should not be developing deep learning models for healthcare - in fact, it may be in our best financial and health interests to do so. However, much more focus should be placed on developing defenses against such attacks to protect patient data and patient health. Medical information can be contained in many different data types and systems, so specific defenses will likely be needed for each modality/system. In addition, we should be developing ethical guidelines for implementing deep learning systems in healthcare, as there may be situations where the risk of an adversarial attack outweighs the benefits conferred by the medical deep learning system.
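As one concrete example of what such a defense might look like (again my own sketch, not a method from the paper), a commonly studied approach is adversarial training: perturbed inputs are generated on the fly and the model is trained to classify them correctly. The snippet below reuses the hypothetical fgsm_attack function from the earlier example and assumes a standard PyTorch model and optimizer.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One step of adversarial training: craft adversarial examples with the
    fgsm_attack sketch above, then update the model so it classifies them
    correctly, partially hardening it against this kind of perturbation."""
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()  # discard gradients accumulated while crafting adv_images
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```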