The Social Impact of Brain Machine Interfaces: Value Sensitive Design in Neurotechnology

 By Tim Brown and Karen Rommelfanger

With the generous support of the MacArthur Foundation, the Center for Neurotechnology and Tech Policy Lab (both at the University of Washington) hosted a series of three online, exploratory workshops on the social impacts of brain-machine interfaces (BMIs). Existing work in this area examines the privacy and security dimensions of BMIs as well as questions of personal identity, authenticity, free will, and responsibility. Substantially less work interrogates BMI research, development, and commercialization from the perspective of social justice. These workshops were an attempt to do just that, foregrounding concerns such as human rights, civil liberties, structural inequality, representation in research, and inclusive design, and placing technologists and humanists in direct conversation about how current and future BMIs will impact society. Panelists and invited discussants included interdisciplinary neurotechnology researchers from the University of Washington as well as select national experts who approach neurotechnology from the perspectives of gender, sexuality, disability, and race. Our team’s goal is to produce a whitepaper on the social impacts of BMIs—inspired by the insights raised during the workshops—that we will make available to the broader community of stakeholders.

In our third and final workshop, we considered how brain–machine interfaces can be designed in ways that take human values into account—in order to recognize, prevent, or mitigate potential harms or injustices. To this end, we considered the role human values play (or don’t play) in medical device design broadly, whether those values align between stakeholders (patients, caretakers, clinicians, communities), what role these values play in the development of BMIs currently, and how to best incorporate human values into the design of future BMIs. In other words, we asked what values are embedded in neurotechnologies, who they benefit, and how the field can do better. Joining us for this conversation were panelists: Dr. Laura Y. Cabrera (Michigan State University), Dr. Batya Friedman (University of Washington), Dr. David G. Hendry (University of Washington), and Lassana Magassa (University of Washington). Here, I summarize the themes that emerged from our conversation, catalyzed by a series of five discussion questions.

Editor’s note: We previously published posts about the first and second of these workshops. 

1. What do we (collectively) mean when we talk about human values? What are they? 

Our panelists started with a simple working definition of value—as ideals and goals that we uphold and strive for. Very quickly, however, the group puzzled over the distinction between values held by individuals, values held by communities, and values shared universally. That is, values are (at least in part) socially constructed and highly contested. We asked if universal values, or values we share insofar as we are all human, exist in the first place. After all, our practices of making sense of what and how we value seem to vary widely between social contexts—across groups, communities, cultures, and generations. Many of the values that medical professionals try to preserve or respect—e.g., dignity—seem to take on a different meaning across those contexts. Even further, people from different cultures might have radically different ideas about what it means to preserve, respect, or uphold a value they believe they share.

Batya approached this puzzle by arguing that some values are universal, but they are implemented differently in each social context. That is, a value may take a different form in each context, but those forms are rooted in the same underlying value. Lassana raised the possibility that how values are implemented varies so greatly between cultures that they seem like different values altogether. These variations raise a number of problems. Mixed-race people already move between worlds, to use Maria Lugones’ turn of phrase, but world travelers often have to prioritize different values in different situations—read: they often must “code switch.” In one world, acting a certain way makes a person confrontational; in another, that same way of acting could be interpreted in the opposite way. These possibilities make it difficult to imagine any set of values as shared between cultures.

Laura’s approach was to isolate a set of simpler values that we are more likely to share between cultures. Dignity, for example, is a complex concept that has a life of its own in academic circles—with long lineages of thinkers defending different conceptions of dignity, arguments about who it applies to, and recommendations for respecting it. It’s possible, however, that simpler values become complicated eventually, or that values we think of as “simple” reveal complexities on further inspection. For example, many attempts to promote health as a value take for granted a narrow view of what constitutes health, and construe disability as a problem to fix. People with disabilities, however, maintain that there is value in disability, that their lives are good lives, and that a conception of health should capture lives like theirs. No matter our approach, we need to prevent ourselves from building stigmas into our system of values.

2. How are values exhibited (or not exhibited) in the design of medical technologies, and how are those exhibited (or not) in the design of BMIs specifically? 

We started by recognizing that values and technology have a reciprocal relationship: our values shape the technologies we create, and our technologies shape our values. BMIs for medical use, in particular, are not developed and used in isolation—they are, at the very least, the result of interactions between doctors, patients, and engineers. But these interactions are often predicated, again, on the clinical goal of “fixing” disability. We considered the case of cochlear implants as an example of this. Some deaf parents would rather their deaf (or hard of hearing) children not receive cochlear implants: they worry that cochlear implants are not good enough to give their children full access to hearing communities, and they worry that these implants will put their children at the margins of deaf communities. These parents face a backlash from hearing people—but this backlash is predicated on underlying value systems that cast deafness as a problem to fix. This is a deep problem for parents who would prefer their children to have full access to a deaf cultural identity instead of feeling stuck between two identities. We asked further: will other forms of BMI conflict with the cultural identities of people with disabilities? It seems very possible—and device designers will need to engage with communities to better understand these possibilities.

3. What are the risks of failing to recognize and incorporate human values into the design of BMIs?  

One possible risk Batya identified is analogous to what Martha Nussbaum calls “the tragic question”: a situation in which none of our available options are free from moral wrongdoing. BMI users could run into the tragic question when the constraints of their BMI force them to choose between several difficult outcomes. Designers should do everything they can to avoid posing the tragic question through their devices. Take, for example, how some technologies (like wireless networks and cloud computing platforms) saturate our lives in ways that make it almost impossible to opt out. Will we soon live in a world where we’ll rely on BMIs in the same ways? Will BMIs tie into the same infrastructures that we might want to opt out of for the sake of upholding our values?

Another risk is that there are some technologies we shouldn’t create because we (collectively) lack the moral maturity to wield them. That is, our moral maturity may not match our technical sophistication. One specific possibility is that BMIs could extend systems of oppression and smother our ability to uphold our values. Neurotechnological interventions, for example, could be used in prisons for rehabilitation—e.g., neurostimulation to change aggressive behavior and neural recordings to track changes. This use of BMIs would intersect with long-standing structural injustices: the school-to-prison pipeline, the overrepresentation of Black people in prison, the medicalization of Black people’s (often warranted) emotional states, and the oppression and devaluing of BIPOC and Queer lives. BMIs could also, however, make vulnerable groups forsake their own values by medicalizing and “fixing” them—the anger that would help BIPOC stand up for what they believe in might be seen as just another target for therapy.

4. Which values ought to play a role in the design of BMIs? Whose values should play a role? How do we decide? 

One thing was clear from our conversation: we are late, and the technology is already here. But it isn’t too late. We must determine how to make sure BMI design aligns with values we (as a society) endorse, promote, and prioritize. We need to think about individual and social values, and especially to ask how to support and propagate the right social values. That is, device designers and key members of supportive infrastructures—from academics to corporations—need to engage users and communities of potential users to ascertain their values. After all: recognition, inclusion, and justice are values that are in high demand but in short supply. If marginalized people are forgotten in the design process, and are denied a seat at the table, it is almost certain that BMIs will codify values that further marginalize them. As such, we not only need to decide which values play a role in the design of BMIs—we need to actively decide who will have the power to uphold or enforce values, and how that power will be expressed.

5. What practices should engineers and designers adopt to recognize human values and incorporate those values into future BMIs? 

Batya urged device designers to recognize the “value tensions” in the spaces their device will inhabit. Doing so will allow them to see how users’ values, as individuals and between groups, sit in tension with one another in a careful balance. Values, in her view, are like a web: if we try to address one, we have to address all the values connected to it. Design teams should try to think about that balance when they design BMIs. David reminded us that value-sensitive design offers guidance in the form of strategies that translate values into design requirements. This process requires that we (1) develop a working definition of the value in question, (2) specify requirements for a design to uphold that value, and (3) identify how the specific device can be designed in a way that upholds the value.

Beyond the design of technologies themselves, however, we must also excavate the social structures our technologies are developed and distributed in. Laura reminded us that infrastructures themselves are also technologies that should reflect the values of the people who interact with them. We need to have public conversations about technologies, their infrastructures, and human values in order to figure out what goals we should pursue, decide who is responsible for the consequences, and build structures of transparency and accountability. Further, the scientific community is responsible for producing the necessary literacy so that we can move forward on our own terms rather than through the ideas of capitalists.

Further Reading: 
  1. Cabrera, Laura Y. "How does enhancing cognition affect human values? How does this translate into social responsibility?" In Ethical Issues in Behavioral Neuroscience, pp. 223-241. 
  2. Friedman, Batya, and Hendry, David G. (2019). Value sensitive design: Shaping technology with moral imagination. MIT Press. 
  3. Sparrow, Robert. (2005). "Defending deaf culture: The case of cochlear implants." Journal of Political Philosophy, 13(2), 135-152. 

Timothy Brown is an NIH postdoctoral research associate in the Department of Philosophy at the University of Washington and the lead architect of the Social Impacts and BMI workshop series. His work explores the role neural technologies—like deep-brain stimulators and brain-computer interfaces—(will) play in our experiences of self, in our interpersonal relationships, and in our societies more broadly.

Dr. Karen S. Rommelfanger received her PhD in neuroscience and received postdoctoral training in neuroscience and neuroethics. Her research explores how evolving neuroscience and neurotechnologies challenge societal definitions of disease and medicine. Dr. Rommelfanger is an Associate Professor in the Departments of Neurology and Psychiatry and Behavioral Sciences, the Neuroethics Program Director at Emory University’s Center for Ethics, and Senior Associate Editor at the American Journal of Bioethics Neuroscience. She is dedicated to cross-cultural work in neuroethics and is co-chair of the Neuroethics Workgroup of the International Brain Initiative. She is an appointed member of the NIH BRAIN Initiative Neuroethics Working Group and is ambassador to the Human Brain Project’s Ethics Advisory Board. She also serves as a Neuroethics Subgroup member of the Advisory Committee to the Director at NIH for designing a roadmap for BRAIN 2025. She was recently appointed to the Global Futures Council on Neurotechnology of the World Economic Forum. A key part of her work is fostering communication across multiple stakeholders in neuroscience. As such, she edits the largest international online neuroethics discussion forum, The Neuroethics Blog, and she is a frequent contributor and commentator in popular media such as The New York Times, USA Today, and The Huffington Post.

Want to cite this post?

Brown, T. & Rommelfanger, K. (2020). The Social Impact of Brain Machine Interfaces: Value Sensitive Design in Neurotechnology. The Neuroethics Blog. Retrieved on , from

