The Ethical Importance of Trust for a Patient’s Sense of Autonomy

By Ian Stevens

Image courtesy of Ryan Somma, Flickr
There is a rich medical history of deep-brain stimulation (DBS) devices, a kind of brain-computer interface (BCI), being used to treat Parkinson’s disease and other conditions (Kumar et al. 1998; for further reading, see Gardner 2013). Contemporary DBS research is beginning to explore closed-loop stimulation (Widge, Malone, and Dougherty 2018). Closed-loop stimulation devices track a patient’s brain states and stimulate specific brain regions once a certain pathological neural state is recorded. These devices improve on the stimulation efficiency and energy conservation of their predecessors, open-loop DBS devices, which deliver neural stimulation at a clinically set interval and intensity. For a thorough comparison of these devices, see Ghasemi, Sahraee, and Mohammadi 2018. A noteworthy advantage of closed-loop DBS devices over open-loop devices is their reduction of psychological side effects, which, in theory, increases a patient’s autonomy in life’s daily events.
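To make the contrast concrete, here is a minimal, purely illustrative sketch in Python. Nothing here corresponds to a real DBS device or clinical protocol; the sensing function, threshold, interval, and intensity values are all hypothetical stand-ins for the open- versus closed-loop control logic described above.

```python
# A minimal, purely illustrative sketch: no real DBS device API is used here.
# The sensing function, threshold, interval, and intensity values are all
# hypothetical stand-ins for the control logic described in the text.
import random
import time


def read_neural_state():
    """Stand-in for a sensing electrode; returns a mock biomarker value."""
    return random.random()


def stimulate(intensity):
    """Stand-in for delivering stimulation to a target brain region."""
    print(f"stimulating at intensity {intensity}")


def open_loop(interval_s=0.1, intensity=0.5, cycles=5):
    """Open loop: stimulate at a clinically preset interval and intensity,
    regardless of the patient's current brain state."""
    for _ in range(cycles):
        stimulate(intensity)
        time.sleep(interval_s)


def closed_loop(threshold=0.8, intensity=0.5, cycles=5):
    """Closed loop: stimulate only when the recorded state crosses a
    pathological threshold, conserving energy between events."""
    for _ in range(cycles):
        if read_neural_state() > threshold:  # pathological state detected
            stimulate(intensity)
        # otherwise the device stays silent


open_loop()
closed_loop()
```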

Image courtesy of Pixabay
The entire closed-loop stimulation process happens without any conscious effort or control by the patient and is directed largely by the device. Thus, these closed-loop DBS devices are said to exclude the patient from the decision-making process and undermine their sense of autonomy (SOA), which I am treating here as the capacity to make autonomous decisions about one’s daily treatment, not one’s daily events. However, placing a patient back ‘into the loop’ of the decision-making process provides interesting insights. Recently developed closed-loop advisory brain devices aim to address this concern about diminished SOA by utilizing continuous electroencephalography recordings to warn patients of an upcoming epileptic event (Goering et al. 2017; Kellmeyer et al. 2016). After being notified of the impending seizure, the patient has the liberty to react as they see fit. These devices are therefore said to bring the patient back into the decision-making process, increasing their SOA in their treatment decisions. A recent article by Gilbert et al. explored the significance of patients being kept in the decision-making process by their closed-loop advisory brain devices and the resulting effects on those patients’ SOA (Frederic Gilbert, O’Brien, and Cook 2018). Their findings suggest that trust is central to the implementation of these advisory brain devices.
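The advisory pattern can likewise be sketched in a few lines. Again, this is a hypothetical illustration: the prediction score, warning threshold, and alert mechanism are invented for clarity and stand in for whatever a real device would actually use. The key structural point is that the loop ends with a notification, leaving the decision to the patient.

```python
# Hypothetical sketch of an advisory device: the predictor, threshold, and
# alert mechanism are invented for illustration, not taken from a real system.
import random


def seizure_likelihood(eeg_window):
    """Stand-in for a trained seizure-prediction algorithm; maps a window
    of EEG samples to a risk score between 0 and 1."""
    return sum(abs(sample) for sample in eeg_window) / len(eeg_window)


def advisory_loop(eeg_stream, warn_threshold=0.7):
    """Unlike closed-loop stimulation, the device never acts on the brain:
    it only warns, and the patient decides how to respond."""
    for window in eeg_stream:
        if seizure_likelihood(window) > warn_threshold:
            print("WARNING: elevated seizure risk; patient reacts as they see fit")


# Mock EEG stream: ten windows of random samples.
mock_stream = [[random.uniform(-1, 1) for _ in range(100)] for _ in range(10)]
advisory_loop(mock_stream)
```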

The paper by Gilbert et al. presented first-person narrative interviews describing the phenomenological experiences six patients had with their closed-loop advisory brain devices. The narratives from some of these patients are as follows:

[1]
Patient 02: Well as I got more and more confident, I didn’t question it, no. But initially when the algorithm was first put in, then I had very little confidence that it was going to be of any assistance. But then over time, I got more and more confident and so, yeah, I trusted it. (emphasis added) (Frederic Gilbert, O’Brien, and Cook 2018)

Patient 06: The device took all of that insecurity away because now I’ve got to trust myself with that…I was more capable of making good decisions. (emphasis added)

[2]
Interviewer: Did you have a fit without a warning?
Patient 04: But a few times, yeah, so it did beep a few times as well. So yeah.
Interviewer: So with the device did you feel more confident for instance?
Patient 04: No I wasn’t trusting it….I just ignored it. (emphasis added)

One interview suggests that for a patient’s SOA to develop, trust must be established between the patient and the device. However, what ‘trusting a device’ explicitly means is vague. The authors’ interpretation is that for a device to be trusted, its advisory signals must be reliable; that is, the signals must accurately predict epileptic events. However, this interpretation of ‘trusting a device’ does not exclude the possibility that other kinds of trust, beyond reliability, are also necessary to enhance a patient’s SOA.

Image courtesy of Terry Johnston, Wikimedia Commons
Trust is a rich philosophical concept, and while a broad notion of trust has been shown to be important in biomedical research and healthcare, its application to the field of neurotechnology remains largely unexplored (Kerasidou 2017; Kraft et al. 2018). It should be noted that trust has been explored in the field of human-computer interaction, particularly how it can be fostered through modifications to devices (Hancock et al. 2011) and how it shapes the misuse and disuse of automated technologies, such as closed-loop devices (Lee and See 2004). This post hopes to explore, like human-computer interaction researchers, how trust is developed with automated neurotechnologies, but from a philosophical perspective. This direction is motivated by the ethical concerns that could arise in relation to a patient’s SOA in their treatment, but also by more explicit concerns about whether devices maintain patients’ privacy.

Trust is conventionally understood as the act of a truster being vulnerable toward a trustee (McLeod 2015). Multiple definitions of trust have been proposed, and in a previous post, I utilized the distinction between trust as reliability and trust as goodwill to advocate for privacy in brain recording technologies (Stevens 2018). This distinction rests on the need for the trustee to care for the truster beyond agreed-upon professional expectations (i.e., regulatory procedures) and to have goodwill toward them (Baier 1986; Jones 1996). Understanding these different interpretations of trust in the research setting is useful for clarifying what ‘trusting a device’ means for advisory brain devices.

Image courtesy of Pixabay
While there are expectations that the device will accurately notify patients about potential seizures, there could be other, implicit expectations that the device act with goodwill toward the patient. For example, if a device has the ability to violate a patient’s privacy, the patient could perceive it as lacking goodwill. Importantly, this violation of privacy could occur while the device is still functioning reliably and accurately. Thus, it seems reasonable for a patient to then lose trust in the device, or at least to trust it less. This theoretical distinction prompts future empirical studies on what these patients mean when they mention trust. Therefore, since a patient’s SOA is paramount in modern medical practice (Beauchamp and Childress 2013), I advise that research into the nature of trust in brain advisory technologies be pursued to either support or challenge the conclusions drawn here.

Research striving to make current closed-loop brain advisory devices more accurate, with the ethical concerns surrounding these devices shaping their design, will likely resolve much of the distrust patients feel. Contrastingly, on the human end of this human-computer interaction, there have already been recommendations to screen patients for a level of psychological health at which they are comfortable with their vulnerabilities, which would likely foster trusting relationships with these devices (F. Gilbert et al. 2017). Thus, there is still room for improvement on both ends of the human-computer interaction as closed-loop devices develop and patient privacy becomes more vulnerable. More empirical research must be conducted as closed-loop devices progress. This research will best be accomplished by philosophers, clinicians, and neuroscientists collaborating to solve the ethical challenges posed by these novel BCI technologies.
________________

Ian is an undergraduate student at Northern Arizona University. He is dual majoring in Biomedical Sciences and Philosophy with a minor in Psychology to pursue neuroethical research surrounding the use of neurotechnologies in medicine. 

References
  1. Baier, Annette. 1986. “Trust and Antitrust.” Ethics 96 (2): 231–60. https://doi.org/10.1086/292745
  2. Gardner, John. 2013. “A History of Deep Brain Stimulation: Technological Innovation and the Role of Clinical Assessment Tools.” Social Studies of Science 43 (5): 707–28. https://doi.org/10.1177/0306312713483678
  3. Ghasemi, P., T. Sahraee, and A. Mohammadi. 2018. “Closed- and Open-Loop Deep Brain Stimulation: Methods, Challenges, Current and Future Aspects.” Journal of Biomedical Physics & Engineering 8 (2): 209–16. 
  4. Gilbert, F., M. Cook, T. O’Brien, and J. Illes. 2017. “Embodiment and Estrangement: Results from a First-in-Human ‘Intelligent BCI’ Trial.” Science and Engineering Ethics, November. https://doi.org/10.1007/s11948-017-0001-5
  5. Gilbert, Frederic, Terence O’Brien, and Mark Cook. 2018. “The Effects of Closed-Loop Brain Implants on Autonomy and Deliberation: What Are the Risks of Being Kept in the Loop?” Cambridge Quarterly of Healthcare Ethics 27 (2): 316–25. https://doi.org/10.1017/S0963180117000640
  6. Goering, Sara, Eran Klein, Darin D. Dougherty, and Alik S. Widge. 2017. “Staying in the Loop: Relational Agency and Identity in Next-Generation DBS for Psychiatry.” AJOB Neuroscience 8 (2): 59–70. https://doi.org/10.1080/21507740.2017.1320320
  7. Hancock, Peter A., Deborah R. Billings, Kristin E. Schaefer, Jessie Y. C. Chen, Ewart J. de Visser, and Raja Parasuraman. 2011. “A Meta-Analysis of Factors Affecting Trust in Human-Robot Interaction.” Human Factors 53 (5): 517–27. https://doi.org/10.1177/0018720811417254
  8. Jones, Karen. 1996. “Trust as an Affective Attitude.” Ethics 107 (1): 4–25. 
  9. Kellmeyer, Philipp, Thomas Cochrane, Oliver Müller, Christine Mitchell, Tonio Ball, Joseph J. Fins, and Nikola Biller-Andorno. 2016. “The Effects of Closed-Loop Medical Devices on the Autonomy and Accountability of Persons and Systems.” Cambridge Quarterly of Healthcare Ethics 25 (4): 623–33. https://doi.org/10.1017/S0963180116000359
  10. Kerasidou, Angeliki. 2017. “Trust Me, I’m a Researcher!: The Role of Trust in Biomedical Research.” Medicine, Health Care and Philosophy 20 (1): 43–50. https://doi.org/10.1007/s11019-016-9721-6
  11. Kraft, Stephanie A., Mildred K. Cho, Katherine Gillespie, Meghan Halley, Nina Varsava, Kelly E. Ormond, Harold S. Luft, Benjamin S. Wilfond, and Sandra Soo-Jin Lee. 2018. “Beyond Consent: Building Trusting Relationships With Diverse Populations in Precision Medicine Research.” The American Journal of Bioethics 18 (4): 3–20. https://doi.org/10.1080/15265161.2018.1431322
  12. Kumar, R., A. M. Lozano, Y. J. Kim, W. D. Hutchison, E. Sime, E. Halket, and A. E. Lang. 1998. “Double-Blind Evaluation of Subthalamic Nucleus Deep Brain Stimulation in Advanced Parkinson’s Disease.” Neurology 51 (3): 850–55. 
  13. Beauchamp, Tom L., and James F. Childress. 2013. Principles of Biomedical Ethics. 7th ed. New York: Oxford University Press.
  14. Lee, John D., and Katrina A. See. 2004. “Trust in Automation: Designing for Appropriate Reliance.” Human Factors 46 (1): 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  15. McLeod, Carolyn. 2015. “Trust.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Fall 2015. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2015/entries/trust/
  16. Stevens, Ian. 2018. “Trust in the Privacy Concerns of Brain Recordings.” The Neuroethics Blog (blog). May 8, 2018. http://www.theneuroethicsblog.com/2018/05/trust-in-privacy-concerns-of-brain.html
  17. Widge, Alik S., Donald A. Malone Jr., and Darin D. Dougherty. 2018. “Closing the Loop on Deep Brain Stimulation for Treatment-Resistant Depression.” Frontiers in Neuroscience 12. https://doi.org/10.3389/fnins.2018.00175.

Want to cite this post?

Stevens, I. (2019). The Ethical Importance of Trust for a Patient’s Sense of Autonomy. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2019/01/the-ethical-importance-of-trust-for.html
