Brain Computer Interfaces and Agency

By Ishan Dasgupta, Andreas Schönau, Eran Klein, and Sara Goering

This post is part of a series featuring authors who have received the Neuroethics R01 (Research Project Grants) supported by the NIH BRAIN Initiative. These research projects specifically address prominent ethical issues arising from emerging technologies and advancements in human brain research.

I’ve begun to wonder what’s me and what’s the depression, and what’s the stimulator…it blurs to the point where I’m not sure…frankly, who I am.

Imagine the following cases: A person with paralysis is able to control a robotic arm, using his thoughts alone, and it allows him to reach and get his coffee cup. A person with Locked-In Syndrome is able to communicate with her family for the first time in five years. A teenager struggling with anorexia regains weight after her body dysmorphia dissipates. A war veteran who lost a limb is able to use a new haptic-enhanced robotic arm to feel her child’s hand in hers again. These examples illustrate the breadth of possibilities that brain computer interface (BCI) technologies offer our society. A BCI is a device that creates a communication channel between the central nervous system and a processing device. This device collects neural data, analyzes it using algorithms (which may incorporate artificial intelligence), and creates an output that is sent to another device, like a robotic arm, a computer cursor, or a stimulating device in an area of the brain, such as a deep brain stimulator (DBS).

What makes BCIs different from other neural implants, like DBS, is that they “read” neural activity and use this information to modulate their output, rather than delivering a fixed stimulation input. This extra layer, however, raises concerns about the ability of BCI technologies to influence (e.g., enhance, diminish, or confuse) human agency. Recently, Elon Musk announced that his company Neuralink intended to implant a BCI device involving thousands of individual electrodes in a human by the end of 2020. We must ask whether BCIs should be introduced into society broadly, given their capacity to alter people’s sense of agency, or control over their actions and thoughts.

Human agency is the ability to act and take ownership of experiences and actions. Agency gives people a sense of control over their minds and bodies and is an important part of the human experience. But agency is a complicated phenomenon that relies on, or is constituted by, several other features, including responsibility, privacy, authenticity, and trust. People want to feel that they are intentionally acting rather than merely being acted upon. This means that when a person uses a robotic arm to reach their coffee cup, they feel responsibility for the actions of the device. It means that when a person with Locked-In Syndrome types out a message using a BCI, she does so from a background of privacy, such that her thoughts and body are only accessible to those with whom she shares them. It means that the teenager with anorexia feels that the BCI does not undermine her authenticity, that she continues as a recognizable self. Finally, it means that the war veteran can trust that the feel of her child’s hand is reliable and not misleading. In our view, these concepts of responsibility, privacy, authenticity, and trust are crucial to understanding human agency and are uniquely implicated by different types of BCI technologies.

As the examples above show, BCI technologies have the ability to both improve and diminish human agency. While a BCI may allow a person with anorexia to change the way she perceives her own body, it may also lead to changes in her personality that are unwanted or that create anxiety. Similarly, when a paralyzed person uses a robotic arm to shake someone’s hand, and the device instead crushes that person’s hand, the user may be unsure of his responsibility for the action. Because different BCIs may implicate different aspects of agency in different types of end users, understanding and categorizing the problems that can occur is key. Failure to understand these issues could inhibit meaningful informed consent, end user acceptance of BCIs, measurement of agency, and user-centered design.

We have been awarded an NIH grant to study these issues of agency. In our R01 project, we aim to develop a conceptual map of human agency, drawing both on the philosophical literature on its relevant features and on end users’ phenomenal experience of BCIs in research studies, gathered through longitudinal interviews with participants using motor, sensory, communication, and psychiatric BCIs.

Unlike previous studies, we are exploring BCI technologies across modalities. Many of the early studies of end user perspectives on DBS were limited to a single disease condition. Understanding experiences across our four modalities—motor, sensory, communication, and psychiatric—will allow us to consider, for instance, whether privacy matters in the same ways to users addressing issues of communication as to those treating motor impairments, or whether concerns about authenticity arise in the context of movement in ways similar to how they come up in the psychiatric realm.

If we are successful, we believe this research will go a long way toward guiding the field of neuroethics in developing better mechanisms to protect end users from potential unwanted consequences of BCI devices, while encouraging the design of BCIs that promote user agency. This includes protocols that warn future users about the potential changes to agency they might experience and help them weigh the risks of a device against its potential therapeutic benefit. We also believe that working closely with BCI scientists will help develop a research enterprise that is better able to identify ethical issues raised by future technologies and to utilize end user perspectives in designing the next generation of BCIs.

Recommended readings 
  1. Goering et al. (2017). “Staying in the Loop: Relational Agency and Identity in Next-Generation DBS for Psychiatry.” AJOB Neuroscience 8(2): 59-70.
  2. Kellmeyer et al. (2016). “Effects of Closed-Loop Devices on the Autonomy and Accountability of Persons and Systems.” Cambridge Quarterly of Healthcare Ethics.
  3. Steinert et al. (2018). “Doing Things with Thoughts: Brain-Computer Interfaces and Disembodied Agency.” Philosophy and Technology. https://doi.org/10.1007/s13347-018-0308-4
______________

Ishan Dasgupta is a post-doctoral Research Associate in the Department of Philosophy and the Center of Neurotechnology at the University of Washington. He works at the intersection of law, ethics, and public health policy as it relates to emerging technology. His past work has focused on ethical issues surrounding induced pluripotent stem cells, inclusion of pregnant women in biomedical research, and the use of tissue samples in genetics research.

Sara Goering is Associate Professor of Philosophy at the University of Washington, Seattle, and co-leads the Neuroethics Thrust in the Center for Neurotechnology (CNT). She is also a member of the Program on Ethics, the Disability Studies Program, and adjunct faculty in the Bioethics & Humanities Department. Her work in the Neuroethics Thrust focuses on issues of agency and identity in relation to neural technology (both DBS and BCI), and emphasizes the importance of engagement with disabled people, who are often the intended end-users of the technology. 


Eran Klein is a neurologist specializing in dementia at Oregon Health and Sciences University (OHSU) and the Portland VA Health Care System (PVAHCS). He co-leads the Neuroethics thrust at the Center for Neurotechnology (CNT) at the University of Washington. He works at the intersection of neurology, neuroscience, and philosophy. 

Andreas Schönau is a post-doctoral Research Associate in the Department of Philosophy and Center for Neurotechnology at the University of Washington. His past research focused on the clarification of conceptual theories and empirical methods in philosophical and neuroscientific research, the interdisciplinary combination of their respective insights, and the generation of conclusions towards understanding the phenomenon of free will from an action-theoretical perspective.


Want to cite this post?

Dasgupta, I., Schönau, A., Klein, E., & Goering, S. (2019). Brain Computer Interfaces and Agency. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2019/12/brain-computer-interfaces-and-agency.html
