White Coats + AI: The Modern Human + Machine Recipe

By Sunidhi Ramesh

I wore my white coat for the first time seven months ago; the ceremony my medical school held for this momentous occasion was grand, perfumed with the hopes and fears that only a couple hundred beginnings to a couple hundred medical journeys could bring. Ten minutes after the keynote speaker, Dr. Paul DiMuzio, took the stage with his eloquent words of advice, he said something that has been on my mind ever since.

“We will teach you everything we know now to the best of our abilities, but here is the reality: half of what you will learn over these next four years will mean little to nothing to you ten years from now. And you will not be sure at this time as to which half; none of us are. The world is changing, and it’s changing quickly. Medicine will be changing too—not only the medicine but also how you will interact with the world as physicians.”

It is a thought-provoking (and oddly concerning) message, but it isn’t much of an exaggeration.

Figure 1: CRISPR/Cas9 edits genes by cutting DNA and letting the body mend the damage using its natural repair mechanisms.

Past forms of technology, then, may indeed soon become outdated.

We are in the midst of entering 4IR, the “Fourth Industrial Revolution,” the biggest such transition since the Industrial Revolution of the 1700s. With it, we will be forced to navigate the ins and outs of an amalgamation of our physical and digital worlds. And, as with every “revolution,” we will be asked to address the inevitable shifts in age-old social, economic, and political systems—shifts that will redefine the course of humanity in the same (yet very different) ways that the steam engine, mass production, and the advent of digital technology have in centuries past.

Ready or not, robotics, nanotechnology, and biotechnology are catapulting us into a new age.

It is no surprise, then, that my medical education will be archaic in a dozen years. How could it not be? Gene sequencing, CRISPR (Figure 1), and individualized treatments are already in the works, rendering obsolete the techniques currently printed in our textbooks. What exactly does the future look like? One thing is for sure: two letters are sprinkled throughout just about every discussion about the future of medicine: AI.


(Medicine - Physicians)

We are already more familiar with artificial intelligence (AI) than we think we are; in fact, many of us interact with AI on a daily basis—through the supercomputers in our pockets and virtual assistants such as Siri or Alexa. Using algorithms, statistical models, and mounds of historical data, AI can perform tasks without explicit instructions, relying on models and inference instead. Better yet, AI can take machine learning further by mimicking neuronal signaling; termed “deep learning,” these algorithms perform and re-perform tasks, adjusting and “learning” from their mistakes each time to improve results. So just how smart can artificial intelligence get?
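
To make that “learning from mistakes” loop concrete, here is a minimal sketch of my own (not drawn from any clinical system): a single artificial neuron nudges its weights by gradient descent each time its predictions miss. The data, labels, and learning rate are all invented for illustration.

```python
# A minimal, hypothetical sketch of the "learn from mistakes" loop that
# underlies deep learning: a single artificial neuron adjusts its
# weights a little each time its predictions are wrong.
import numpy as np

rng = np.random.default_rng(0)

# Toy, invented data: four "patients," two features each, one 0/1 label.
X = np.array([[0.2, 0.9], [0.8, 0.1], [0.9, 0.2], [0.1, 0.8]])
y = np.array([1.0, 0.0, 0.0, 1.0])

w = rng.normal(size=2)  # weights, adjusted as the model "learns"
b = 0.0                 # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    pred = sigmoid(X @ w + b)          # perform the task
    error = pred - y                   # measure the mistakes
    w -= 0.5 * (X.T @ error) / len(y)  # adjust weights (gradient step)
    b -= 0.5 * error.mean()            # adjust the bias the same way

print(np.round(sigmoid(X @ w + b)))    # approaches [1, 0, 0, 1]
```

Deep learning stacks many such units into layers, but the core rhythm (predict, measure the error, adjust) is the same.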

A little over a year ago, Stanford News published an article titled “Stanford algorithm, [CheXNet], can diagnose pneumonia better than radiologists.” In it, author Taylor Kubota outlines how CheXNet “outperformed… four Stanford radiologists in diagnosing pneumonia, [a lung infection that takes tens of thousands of American lives annually], accurately.” Cue a widespread media frenzy and a universal physician panic attack as the burning question popped up on thousands of forums across the country: will technology replace medical professionals?

Or, an even more alarming question for a future physician to be asking: would replacing physicians with AI really be a bad thing? Most of us who have been to a doctor’s office know what to expect; you wait far too long in the waiting room and then replay the charade in the exam room. Your doctor sees you for ten minutes, and you leave soon after, realizing you didn’t even bring up half the concerns you had when you first walked in the door. It’s a disappointing feeling and one that shouldn’t be so relatable. But the reality of medicine today is that doctors are busy—and not just with the sheer volume of patients they are required to see, as you might otherwise believe.


(Physicians + AI)

In December of 2016, the Annals of Internal Medicine published a study titled “Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in Four Specialties.” The authors found that, “for every hour physicians provide direct clinical face time to patients, nearly two additional hours are spent on the electronic health record and desk work within the clinic day. Outside office hours, physicians spend another 1-2 hours of personal time each night doing additional computer and other clerical work.” That’s almost twice as much time spent on paperwork as on anything remotely clinical.

Here’s where AI can step in—as a sort of personal assistant to every physician who is open to using it. AI can record office visits and take live notes in the process. Using these notes and cross-referencing them with similar past cases, as well as the subsequent outcomes of those cases, AI can make suggestions to adjust (or add to) a physician’s differential (a list of possible diseases or conditions that could explain a patient’s particular symptoms). From there, AI could ensure that all the required next steps (exam maneuvers, lab tests, imaging scans) are considered while the (statistically) unnecessary ones are recommended against. And all of this could happen through apps or health record tools that allow for ease of use and (ideally) convenient integration into the medical sphere.
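
As a rough illustration of that cross-referencing step, here is a hypothetical sketch: represent each visit as a vector of symptoms, retrieve the most similar historical cases, and surface their diagnoses as candidates for the differential. The symptom list, case records, and diagnoses are all invented, and a real system would work over far richer clinical data.

```python
# A hypothetical sketch of suggesting a differential by retrieving
# similar past cases. All symptoms, cases, and diagnoses are invented.
import numpy as np

# Feature order for every symptom vector below.
SYMPTOMS = ["fever", "cough", "dyspnea", "chest_pain", "fatigue"]

# Tiny invented "historical record": symptom vectors plus the final
# diagnosis each case received.
past_cases = np.array([
    [1, 1, 1, 0, 1],  # pneumonia
    [0, 0, 1, 1, 0],  # pulmonary embolism
    [1, 1, 0, 0, 1],  # influenza
], dtype=float)
diagnoses = ["pneumonia", "pulmonary embolism", "influenza"]

def suggest_differential(visit, k=2):
    """Return the diagnoses of the k past cases most similar to a visit."""
    # Cosine similarity between the new visit and every past case.
    sims = past_cases @ visit / (
        np.linalg.norm(past_cases, axis=1) * np.linalg.norm(visit))
    ranked = np.argsort(sims)[::-1][:k]
    return [(diagnoses[i], round(float(sims[i]), 2)) for i in ranked]

new_visit = np.array([1.0, 1.0, 1.0, 0.0, 0.0])  # fever, cough, dyspnea
print(suggest_differential(new_visit))  # pneumonia ranks first here
```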

Figure 2: AI can help identify non-traditional subgroups with similar features that can then aid in personalizing treatment.

AI, then, has the potential to minimize missed diagnoses and other overlooked red flags, to optimize a physician’s time, and to produce better outcomes throughout American healthcare in general. A 2015 article about Machine Learning in Medicine takes this possibility one step further, stating that “deep learning might actually realize the elusive goal of reclassifying patients according to more homogenous subgroups, with shared pathophysiology, and the potential of shared response to therapy” (Figure 2). With these implementations, AI and physicians could work together in an augmented, symbiotic relationship to deliver more personalized, effective, and medically accurate care.
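
One way to picture the subgroup idea in Figure 2 is as a clustering problem. The sketch below uses invented patient features and scikit-learn's KMeans as a simple stand-in for the deep-learning methods the article has in mind.

```python
# A hedged sketch of subgroup discovery: cluster patients on shared
# features so treatment can be tailored per cluster. The features and
# data are invented, and KMeans stands in for deep-learning approaches.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Invented features per patient: [age, biomarker_a, biomarker_b], with
# two latent subgroups baked in so the clustering has something to find.
group_a = rng.normal([45, 1.0, 0.2], [5, 0.1, 0.1], size=(20, 3))
group_b = rng.normal([70, 0.3, 0.9], [5, 0.1, 0.1], size=(20, 3))
patients = np.vstack([group_a, group_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(patients)
print(kmeans.labels_)           # subgroup assignment for each patient
print(kmeans.cluster_centers_)  # the shared "profile" of each subgroup
```

In practice, the hard part is not the clustering itself but choosing features that actually reflect shared pathophysiology rather than artifacts of how the data were collected.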

Perhaps human compassion and machine accuracy are not mutually exclusive but, rather, remarkably compatible. Better results. Fewer mistakes. More time for physicians to see their patients. Promising, right?


(Medicine + AI) = ?

It may not be so straightforward, as a large number of questions arise from the birth of AI within the medical sciences. What safeguards need to be put into place to maximize the efficacy (and safety) of a technology we ourselves have put together? Let’s say AI does begin acting as a personal assistant in the medical sphere. Let’s say AI is tasked with drafting physician notes by listening to physician-patient interactions. Let’s say AI is consistently relied on for reading MRI, CT, and X-ray scans. What can go wrong? What do we have to lose? At the moment, both of these questions prompt the same response: a lot.

Earlier this year, using national datasets from 2003-2005 and machine learning, six researchers from around the world set out to answer one overarching question: “Is it possible to reidentify data that have had protected health information removed by using machine learning?” The short answer? Yes, and frighteningly easily. AI was able to reidentify aggregated, deidentified data from up to 94.9% of the adults in the NHANES dataset with the use of “demographics, a quasi-identifier” as well as “online search data, movie rating data, social network data, and genetic data” (Figure 3). I found this disturbing primarily because, in a lot of ways, AI has the ability to reverse the decades-old methods we have put into place to maintain even a baseline level of privacy for our patients. Not to mention that our seemingly innocuous online “footprints” can easily be used against us.
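
The mechanics of such an attack can be surprisingly mundane. The study itself used machine learning, but even a plain database join on quasi-identifiers shows how re-linking happens; the sketch below uses entirely invented records and is only an illustration of the general idea.

```python
# A simplified, hypothetical linkage attack: join a "deidentified"
# health table to a public auxiliary table on quasi-identifiers.
# All records here are invented.
import pandas as pd

deidentified = pd.DataFrame({
    "age": [34, 52], "zip": ["30322", "19107"], "sex": ["F", "M"],
    "diagnosis": ["depression", "dementia"],
})

# An invented "public" dataset (think scraped profiles or movie
# ratings) that carries names alongside the same quasi-identifiers.
auxiliary = pd.DataFrame({
    "name": ["A. Smith", "B. Jones"],
    "age": [34, 52], "zip": ["30322", "19107"], "sex": ["F", "M"],
})

# When a combination of quasi-identifiers is unique, the join quietly
# re-attaches identities to the "anonymized" records.
reidentified = deidentified.merge(auxiliary, on=["age", "zip", "sex"])
print(reidentified[["name", "diagnosis"]])
```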

Figure 3: AI can use online data to take anonymized patient information and re-identify it.

What are the implications of this study—beyond making clear that our current practices are insufficient? Could AI’s ability to reidentify information be expanded to apply to other forms of protected medical information?

These considerations have recently come to light as AI has begun to play a role in psychiatry, where digital vendors such as Ginger.io use human-computer interaction patterns to evaluate and treat depression. Online listening service 7 Cups uses trained “active listeners” (supported by AI that laces together psychotherapy scripts to communicate empathy and acceptance) to care for people who are in need of emotional support. The potential effects of these new resources are tremendous—a spark of life to a field that has been stagnant for so long. But how are the data gleaned from these sites—data about a person’s intimate thoughts, feelings, emotions, personality, and self-concept—protected? What could happen if they aren’t?

And what if the NHANES study had used brain data? Would deidentification pose an even greater threat?

I would respond with a resounding yes. There is something about brain data that is intrinsically different—especially as we inch towards a future where brain scans may reveal more information about a person’s life, status, and thoughts than ever before. If these data can be reidentified, if the line between those diagnoses and the person to whom they were originally linked could be redrawn by AI, could brain data be put to use maliciously? Could what the brain says about your gender identity be used against you in a workplace setting or even in a social one? Could AI’s conclusions about your statistical propensity to develop dementia or brain cancer be used against you by life insurance companies?

So how do we keep AI’s role in the medical sphere quarantined from the outside world? And what other assumptions and safety nets have we failed to test and consider? Algorithms are man-made and often carry man-made stereotypes and biases with them; even the most well-intentioned programmers will undoubtedly face this issue. 

Figure 4: Medicine + AI = ?

Who is responsible for ensuring the integrity of AI? (The founders and programmers at OpenAI attempted to answer this when, fearing misuse, they withheld their program from public release this month.) How can we test these programs to ensure an adherence to neutrality—to the ethical standards we currently hold our medical professionals to? How do we train our physicians to understand how these models function and to guard against healthcare’s ultimate reliance on AI? Will social disparities in healthcare delivery arise as certain regions of the world are able to implement AI faster and more efficiently than others?

One thing is for sure—as the 4IR plows forward, elbowing and taunting us to catch up, we, the ethicists, scientists, physicians, lawyers, philosophers, and engineers of the modern world, must challenge ourselves to identify and address the loopholes in AI’s imminent role in medicine before they unravel into devastating and dangerous consequences—and to do so sooner rather than later.

______________


Sunidhi Ramesh is a medical student at the Sidney Kimmel Medical College at Thomas Jefferson University and the Co-Managing Editor of The Neuroethics Blog. She recently graduated from Emory University and holds degrees in both Neuroscience and Sociology. Her interest in neuroethics was sparked by a summer abroad program in Paris; it was there that she was introduced to what proved to be a field connecting all of her interests: human diversity, education, ethics, medicine, neuroscience, and writing.


Want to cite this post?

Ramesh, S. (2019). White Coats + AI: The Modern Human + Machine Recipe. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2019/03/white-coats-ai-modern-human-machine.html
