Tuesday, May 28, 2013

Let’s Put Our Heads Together and Think About This One: A Primer on Ethical Issues Surrounding Brain-to-Brain Interfacing

By John Trimper
Graduate Student, Psychology
Emory University
This post was written as part of the Contemporary Issues in Neuroethics course

Remember the precogs in Minority Report? The ones who could sync up their brains via the pale blue goo to see into the future?
The precogs from the movie Minority Report
Recent findings published in Scientific Reports (Pais-Vieira et al., 2013) suggest that the ability to sync up brains is no longer purely sci-fi fodder, and instead, has moved into the realm of laboratory reality. The relevant set of experiments, conducted primarily at the Nicolelis laboratory at Duke University, demonstrated that neural activity related to performance on a discrimination task could be recorded from one rat (“the encoder”) and transferred into a second rat’s brain (“the decoder”) via electrical stimulation. This brain-to-brain transfer of task-relevant information, provided the encoder rat was performing the task correctly, significantly enhanced the decoder’s ability to perform the task correctly (see Figure 2 for task description). That is, the decoder rat, who received no external clues as to which of two levers would provide a food reward, responded to the brain-to-brain transfer of information as if it cued him to choose the correct, food-rewarding lever. As a further proof of concept, the experimenters demonstrated that it wasn’t necessary for the rats to be hooked up to the same laboratory computer. In fact, it wasn’t even necessary for the rats to be on the same continent. Using the internet, the researchers were able to transfer information from the brain of an encoder rat at Duke University in real time to the brain of a decoder rat located in Brazil. Performance enhancements in this scenario were similar to those noted above (i.e., decoders chose the correct lever more often if brain-to-brain transfer was allowed).
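To make the paradigm concrete, here is a minimal Python sketch of the two-lever logic: a simulated decoder guessing at chance versus one receiving a cue decoded from the encoder's brain. The 85% cue accuracy is an illustrative assumption, not a figure reported in the paper.

```python
import random

random.seed(0)

def run_trials(n_trials, use_transfer, transfer_accuracy=0.85):
    """Simulate the decoder rat's two-lever choices.

    Without transfer the decoder guesses at chance. With transfer, a
    cue decoded from the encoder's brain activity is delivered; the
    85% cue accuracy is an invented value for illustration only.
    """
    correct = 0
    for _ in range(n_trials):
        rewarded = random.choice(["left", "right"])
        if use_transfer:
            # The stimulation cue usually matches the rewarded lever,
            # but is occasionally corrupted by decoding error.
            if random.random() < transfer_accuracy:
                choice = rewarded
            else:
                choice = "left" if rewarded == "right" else "right"
        else:
            choice = random.choice(["left", "right"])
        correct += (choice == rewarded)
    return correct / n_trials

baseline = run_trials(10_000, use_transfer=False)
with_btbi = run_trials(10_000, use_transfer=True)
print(f"chance performance:  {baseline:.2f}")   # roughly 0.50
print(f"with brain transfer: {with_btbi:.2f}")  # roughly 0.85
```

The toy model captures only the information-theoretic point of the experiment: any above-chance cue channel between the two brains lifts the decoder's performance above its solo baseline.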

The work by Pais-Vieira and colleagues (2013) is an important step forward for the field. As the authors suggest, the present findings mark progress towards “an organic computer capable of solving heuristic problems that would be deemed non-computable by a general Turing machine” (Pais-Vieira et al., 2013, pp. 8-9). Indeed, Nicolelis’s group continues to be at the forefront of brain interfacing technologies. For example, in previous experiments, Carmena, Nicolelis, and colleagues (Carmena et al., 2003) trained monkeys to reach for and grasp virtual objects with a robotic arm using only their brains and visual feedback (i.e., without moving their own arms). Task-relevant neuronal activity was recorded with implanted microelectrode arrays, and computer algorithms converted the recorded signals into commands for the robot arm. These technologies hold great promise for the future of prosthetics and stroke rehabilitation.

Two of the rats (an "encoder" and a "decoder") from the experiment

But do the present findings mean that a brain-net is right around the corner? That human brains can be synced up to exchange the kind of complex thoughts that one intuitively associates with terms like “telepathy”?

Well, no – not quite. Despite what the titles of many popular press articles seem to suggest (search Google for ‘telepathy rat brain transfer’ for a laugh), at present, progress is hampered by several considerable limitations. These include the number of neurons that can be sampled, their neuroanatomical locations, and neural decryption/encryption capabilities. Brain-to-brain transfer of complex human thoughts will have to remain a sci-fi fantasy for the time being.

But, given that we’re at least at the point where this sort of technology is being discussed in earnest, it’s appropriate for ethical discourse surrounding the topic to receive a proportional degree of attention. Brain-to-brain interfacing (BTBI), in its current embodiment, involves extracting information from one individual’s brain and delivering this information to a second individual via implanted microstimulation electrodes (microstimulation electrodes, which are similar to the recording microelectrode arrays noted above, allow for the delivery of highly spatio-temporally precise stimulation patterns into the brain). Thus, BTBI is associated with the same sorts of ethical issues that surround mind-reading (e.g., Kuebrich, 2012), deep brain stimulation (e.g., Schermer, 2011), and brain-computer interfacing (BCI) technologies (e.g., Vlek et al., 2012). However, given that BTBI involves a direct transfer of information between two individuals’ brains, the technique also raises its own host of ethical issues (e.g., legal and moral responsibility, issues of identity, and privacy).

Consider an illustrative example that is at least somewhat grounded within the framework of the technique’s current capacity. It’s not unreasonable to think that the military, with its appreciably liberal approach to “enhancement,” would be the first to employ BTBI technologies in humans. Imagine that one soldier in ground combat (“the decoder”), fitted with microstimulation electrodes and a helmet-mounted 360-degree camera, was able to neurally receive information directly from a second soldier (“the encoder”) watching the video feed in a separate location. When the encoder detected a threat on the video feed, this information could be immediately transferred to the decoder, who could respond appropriately. This brain-to-brain transfer of threat information has the potential to be far faster than verbal transmission, and could potentially save many lives. Now imagine that the encoder, watching the video feed, mistakenly identifies a fellow soldier as a threat and a friendly-fire incident ensues. The decoder soldier fires the bullet that ends his comrade’s life. Who is responsible for the soldier’s death – decoder or encoder? What if the neural stimulation pattern was misinterpreted by the decoder (or, importantly, by the computer’s transfer algorithm)? What if the transfer was intentional on the part of the encoder?

According to some, issues surrounding liability as it relates to BCI (i.e., one brain only) already have a strong framework for legal consideration. Tamburrini (2009) points out that, by using the technology, the decoder accepts some degree of responsibility for the actions of the machine he/she becomes integrated with. To extend this to the current context, one would assume that the encoder would also be acknowledging his/her responsibility by taking part. Does this suggest that both would be tried as equally responsible if something went awry?

Of extreme importance for assessing liability would be the transfer algorithm’s ability to accurately decode the extracted neural information. Recording this information would ideally facilitate the post-hoc dissociation between transferred information and decoded/interpreted information, as well as, perhaps, provide some information regarding intention. A feat such as this, however, capable of identifying neural information content with 100% confidence, may be even farther out of reach than the transfer technology itself. As is currently the case for brain-interfacing technologies, each decoding algorithm must be carefully and rigorously calibrated (and re-calibrated later on) for the individual implanted (e.g., “Nigel” in Vlek et al., 2012). One-hundred percent accuracy for the decoding of an intricate neural representation may not be feasible. Thus, a general decoding algorithm that could also extract intention, especially considering the brain regions that would be sampled from, seems unlikely (at present).
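The calibration problem can be illustrated with a toy decoder: a single-threshold classifier fit to one individual's simulated firing rates transfers poorly to a second individual whose neurons sit at different baseline rates. All firing rates and spreads below are invented for illustration; real BCI decoders operate on far richer signals.

```python
import random

random.seed(1)

def record_trials(mean_quiet, mean_active, n=500):
    """Simulate single-neuron firing rates (Hz) on labeled trials.
    The rates and spreads here are invented for illustration."""
    rates, labels = [], []
    for _ in range(n):
        label = random.choice([0, 1])
        mu = mean_active if label else mean_quiet
        rates.append(random.gauss(mu, 2.0))
        labels.append(label)
    return rates, labels

def calibrate(rates, labels):
    """Fit a one-parameter decoder: a threshold halfway between
    the two classes' mean firing rates."""
    m0 = sum(r for r, l in zip(rates, labels) if l == 0) / labels.count(0)
    m1 = sum(r for r, l in zip(rates, labels) if l == 1) / labels.count(1)
    return (m0 + m1) / 2

def accuracy(threshold, rates, labels):
    hits = sum((r > threshold) == bool(l) for r, l in zip(rates, labels))
    return hits / len(labels)

# Individual A: this neuron fires at ~10 Hz at rest, ~18 Hz when active.
rates_a, labels_a = record_trials(10.0, 18.0)
# Individual B: the comparable neuron sits at ~20 Hz rest, ~28 Hz active.
rates_b, labels_b = record_trials(20.0, 28.0)

thr_a = calibrate(rates_a, labels_a)
print(f"A's decoder on A: {accuracy(thr_a, rates_a, labels_a):.2f}")  # high
print(f"A's decoder on B: {accuracy(thr_a, rates_b, labels_b):.2f}")  # near chance

thr_b = calibrate(rates_b, labels_b)  # re-calibration restores accuracy
print(f"B's decoder on B: {accuracy(thr_b, rates_b, labels_b):.2f}")
```

Even in this one-dimensional caricature, a decoder tuned to individual A is no better than a coin flip on individual B until it is re-fit, which is the core of the per-individual calibration concern raised above.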

Of course, as is the case with any ethical dilemma, each scenario would need to be considered in terms of the specific conditional variables surrounding the event. The findings of Pais-Vieira and colleagues (2013) suggest an exciting future with BTBI technologies, and an equally spirited future of ethical discourse on the topic.


Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., & Nicolelis, M.A.L. (2013). A brain-to-brain interface for real-time sharing of sensorimotor information. Scientific Reports, 3, 1319.

Wessberg, J., Stambaugh, C.R., Kralik, J.D., Beck, P.D., Laubach, M., Chapin, J.K., Kim, J., Biggs, S.J., Srinivasan, M.A., & Nicolelis, M.A. (2000). Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408(6810), 361-365.

Carmena, J.M., Lebedev, M.A., Crist, R.E., O’Doherty, J.E., Santucci, D.M., Dimitrov, D.F., & Nicolelis, M.A. (2003). Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biology, 1(2), E42.

Kuebrich, B. (2012). When the government can read your mind. The Neuroethics Blog. Retrieved on April 13, 2013, from http://www.theneuroethicsblog.com/

Schermer, M. (2011). Ethical issues in deep brain stimulation. Frontiers in Integrative Neuroscience, 5, 17.

Vlek, R.J., Steines, D., Szibbo, D., Kübler, A., Schneider, M.-J., Haselager, P., & Nijboer, F. (2012). Ethical issues in brain-computer interface research, development, and dissemination. Journal of Neurologic Physical Therapy, 36(2), 94-99.

Tamburrini, G. (2009). Brain to computer communication: Ethical perspectives on interaction models. Neuroethics, 2, 137-149.

Want to cite this post?
Trimper, J. (2013). Let’s Put Our Heads Together and Think About This One: A Primer on Ethical Issues Surrounding Brain-to-Brain Interfacing. The Neuroethics Blog.
