The Social Impact of Brain Machine Interfaces: Bias and (Big) Neural Data

By Tim Brown, Karen Rommelfanger, and Laura Specker Sullivan.

The MacArthur Foundation was kind enough to support our series of three online, exploratory workshops on the social impacts of brain-machine interfaces (BMIs). Existing work in this area examines the privacy and security dimensions of BMIs as well as questions of personal identity, authenticity, free will, and responsibility. Substantially less work interrogates BMI research, development, and commercialization from the perspective of social and distributive justice. These workshops are an attempt to do just that by foregrounding concerns such as human rights, civil liberties, structural inequality, representation in research, and inclusive design. They are meant to place technologists and humanists in direct conversation about how current and future BMIs will impact society. Panelists and invited discussants include interdisciplinary neurotechnology researchers from the University of Washington as well as select national experts who approach neurotechnology from the perspectives of gender, sexuality, disability, and race. Inspired by the insights raised during the workshops, our team will produce a whitepaper on the social impacts of BMIs that we will make available to the broader community of stakeholders.

Editor’s note: For the next month, we will be featuring discussions and notes from the speakers.

The first workshop was on Biases and Big Neural Data. Here we feature a sample of the speakers’ comments on the prompted questions from Drs. Laura Specker Sullivan of Fordham University and Karen Rommelfanger of the Emory University Center for Ethics.

1. What do we (collectively) mean by ‘bias’? What does this concept signify, and what is its salience?  

Laura Specker Sullivan (LSS): The term bias can pick out so many different things. I find it helpful to start thinking about a concept by identifying all the different boxes it can fit into. In my mind, bias can be explicit or implicit, individual or structural, natural or unnatural, related to content or processes, cognitive or affective, blameworthy or not, and immutable or changeable. These categories interrelate – for example, biases that are natural are often thought to be immutable such that agents are not blameworthy or culpable for having them. While some people might use bias more generally to mean “perspective” or “lens,” such that a bias isn’t necessarily bad, in my understanding, bias is negative. I think of it like the definition of a disorder – for something to count as a bias, it must have a negative effect (on oneself or perhaps on others, a comment that came up during the workshop). Biases have distorting effects leading to, for example, incomplete evidence, inaccurate beliefs, and inappropriate emotions. As work on implicit bias and cognitive biases suggests, biases influence our thinking even when we are unaware of them.

Karen Rommelfanger (KSR): I suppose first that bias, in the context of people, is something no human is free of and that all of us have a tendency toward. Bias is a tendency or judgment, often and perhaps mostly unreasoned, toward an idea or outcome.
  1. In the biological sciences, we have a track record of underestimating our biases or discounting important ones. In science, the methodology itself necessarily insists upon attempting pure objectivity. But that’s really impossible for people, so we need to acknowledge how those biases might confound our scientific hypotheses, our scientific conduct, and our final interpretations of the data.

  2. Biological explanations of a social world (as has been noted by many in the field of critical neuroscience—see Suparna Choudhury at McGill) can be dangerously misleading, risking the reification of structurally embedded inequities and biases. What can be learned from critical neuroscience?

  3. Humans, including scientists, are not value free, and neither is any enterprise they undertake. It’s common for scientists to bow to the altar of statistical significance and the p-value without realizing that the differences they choose to analyze are often socially constructed (such as comparisons along axes of race and gender).

2. What biases are made possible or enacted through the collection or categorization of neural data? What are the mechanisms of these biases? 

LSS: Biases can influence how neural data is defined, identified, acquired, categorized, and analyzed. A great example is in Maggie Thompson’s paper on BCI (brain-computer interface) illiteracy. Some people cannot use brain-computer interfaces – their neural signals will not operate the BCI. While the assumption is that these users have a physiological or functional problem such that they cannot use a BCI, this assumption is problematic given that BCIs are not designed with neurodiversity in mind, but are largely developed by and tested on members of one social group. The neural data of this social group are set as the norm, and the neural data of other groups are abnormal. Without diverse representation of social groups in the process of creating and testing neurotechnology, it is easy to make these assumptions about what “normal” neural data looks like and how it can be used. The technology is thus biased towards the physiology and function of that one social group.
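
To make that mechanism concrete, here is a minimal, hypothetical sketch (the feature, the numbers, and the calibration rule are illustrative assumptions of ours, not drawn from Thompson’s paper or from any real BCI pipeline): a decoder threshold calibrated only on one group’s simulated neural data separates that group’s “rest” and “intent” trials well, but labels a second group’s equally well-structured signals as unusable, the kind of result that can then be read as “BCI illiteracy.”

```python
# Toy simulation (hypothetical): a decoder calibrated on Group A's neural
# feature distribution works for Group A but fails for Group B, whose signals
# are just as separable but centered differently. All values are made up.
import numpy as np

rng = np.random.default_rng(0)

def simulate_features(mean_rest, mean_intent, n=500):
    """Simulate a 1-D neural feature (e.g., band power) for rest vs. intent trials."""
    rest = rng.normal(mean_rest, 1.0, n)
    intent = rng.normal(mean_intent, 1.0, n)
    X = np.concatenate([rest, intent])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = rest, 1 = intended command
    return X, y

# Group A: the population the decoder is calibrated on.
X_a, y_a = simulate_features(mean_rest=0.0, mean_intent=2.0)
# Group B: same task, same separability, but a shifted feature distribution.
X_b, y_b = simulate_features(mean_rest=3.0, mean_intent=5.0)

# "Calibration" = midpoint threshold between Group A's class means.
threshold = (X_a[y_a == 0].mean() + X_a[y_a == 1].mean()) / 2

def accuracy(X, y, thr):
    """Fraction of trials the fixed-threshold decoder classifies correctly."""
    return np.mean((X > thr).astype(float) == y)

print(f"Group A accuracy: {accuracy(X_a, y_a, threshold):.2f}")  # high
print(f"Group B accuracy: {accuracy(X_b, y_b, threshold):.2f}")  # ~0.50: Group B looks "illiterate"
```

The point of the sketch is that nothing is wrong with Group B’s signals; only the norm baked into the calibration is wrong for them.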

Biases can also arise from institutional interests that create forces influencing neuroscience, such as the timelines attached to specific funding agencies and those agencies’ priorities for successful grants. If a project needs to produce a deliverable in three years, for example, then neural processes that unfold over longer timescales will not be studied because they do not fit this framework. Funding agencies also often have certain priorities about which issues are most in need of research, and projects that fit these priorities will be more likely to be funded. Without a diverse source of funding for research, projects that investigate novel hypotheses or unique topics might not come to fruition. As with the source of biases in the concept of BCI illiteracy above, this can be even more problematic when funding agencies reflect the interests of a dominant set of social groups that are not representative of all the interests within a given society.

KSR: A significant draw for me to neuroscience has been that understanding the brain requires true interdisciplinarity. I often stay away from that term because it’s overused and misused. But what I mean by interdisciplinary is that different disciplines come together and create new knowledge that is different from the initial inputs.
  1. As we (scientists, neuroethicists, and society) interpret neural data, we need to maintain humility about the limits of the tools of our disciplines. For instance, I don’t believe that science alone can tell us what the mind is—try to operationalize that in the lab. I certainly don’t believe science alone can or should tell us what identity is.

  2. Neural data can be erroneously interpreted as a moral truth about one’s lived experience. We need experts in philosophy and sociology, to name a few, and certainly the participants and users interfacing with the neurotech, to help us understand the full meaning of neural data.

  3. One mechanism for perpetuating bias in science is continuing to participate in a system that allows bias to go unchecked. A critical way to acknowledge or mitigate bias is to consider whose voices inform the scientific hypotheses, the conduct of the research, and the interpretation of the data. Who is at the table, not just on the table for scientific deconstruction?

3. What role do/will these biases play in how BMIs are developed, deployed, and used? How do/will users’ experiences be shaped by them? 

LSS: As I noted in response to the previous question, if BMIs are primarily developed, deployed, and used by members of one social group, this can create a self-reinforcing mechanism: if BMIs do not work for some social group, members of that group will not be interested in studying them, producing them, or being test subjects for them, so the devices will continue not to work for that group. If we want to develop BMIs such that they are interesting and useful to a wide range of people, then we need to ensure that we have diverse voices in their development. If not, then some people who have a bad experience with a BMI, or for whom it doesn’t work, might assume that the reason is that their brain doesn’t work “normally,” when really the reason is that their brain is different from those of BMI designers and testers.

KSR: It’s true that understanding brain function is a critical part of the puzzle in rehabilitating, restoring, or even enhancing someone’s lived experience with a BMI. But, as I said earlier, biological explanations of a social world can be dangerously misleading, risking the reification of structurally embedded inequities and biases.
  1. We must specifically identify and articulate biases in how technologies are designed, how and for whom they are scientifically developed, and who will have access to them when they are deployed. A critical question: what do we consider normal, and who goes into that ‘normal’ gold-standard baseline?

  2. My hope is that users’ experiences will be embedded into the scientific development of these technologies (similar to the work that Tim and UW colleagues have done, and to some of our own work). In addition, we must take equal care that, even as neuroethicists, our academic interests don’t dominate users’ needs and experiences.

  3. Neuroethicists and humanists have biases too. From the issues we choose to explore to the philosophical traditions we use to analyze ethical questions, we have a lot of self-work to do as disciplines to de-colonize our work and de-center the White Male Western perspective. How can we de-colonize neuroethics?

4. What is the possible social and political impact of these biases? How will they impact marginalized people or structural inequality? 

LSS: Biases can be inadvertently translated into BMIs in a way that reflects and reinforces social injustice and inequity. For instance, if the people who are interested in developing BMIs are mostly white Western men, then mostly white Western men will develop BMIs, and mostly white Western men will use BMIs. If BMIs are then a significant mechanism of social advancement (as computers have been), then mostly white Western men will reap those benefits. We need to ask at a more fundamental level what social influences lead white Western men to be more interested in these technologies in the first place and how to re-think science and technological development such that diverse viewpoints and experiences are built into the development process from the very beginning. 

KSR: Here the second question really answers the first, doesn’t it?
  1. The social and political impact of these biases is that they will perpetuate harm to marginalized peoples and will mirror and amplify structural inequalities.

  2. We risk further de-humanizing the participants (and even the engineers of these technologies) when we don’t acknowledge and address our biases. 

  3. It’s just not good science. The science, in the long term and even the short term, is ultimately harmful and not helpful.

5. What practices can the field adopt to prevent or mitigate these biases? How should the design of BMIs change to thwart bias? 

LSS: Even well-meaning attempts to acknowledge biases in the creation and use of BMIs can have unanticipated effects. For example, if men and women are found to have different neural mechanisms for some behavioral output, then it might make sense to create BMIs for men and BMIs for women. In the workshop, I described this as the “blue” and “pink” approach to diversity. While well-intentioned, this can reinforce a neurobiological reductionism of gender such that the difference between men and women reduces to the difference between the profiles of their neural data. As we move away from the assumption that the neurobiology of one social group is universal, the goal is not to further refine the neurobiological categories into which we place people, but to develop neurotechnology that is flexible and adaptable to a wide range of human physiology and function. 

There is also an important difference between labeling or acknowledging bias and attempting to mitigate its effects. While at the outset it might be helpful to label neural data that predominantly comes from one social group, the goal is really to ensure that BMIs are developed from a diverse or representative set of neural data. 

Finally, while bias can seem like a very big, structural problem, individuals still have agency. By thinking about the role they play in BMI development, they can identify areas where they are able to expand the range of perspectives, beliefs, and experiences that are included. These areas might include recruiting students, post-docs, and researchers from new communities; engaging with local partners through community-engaged research; asking different kinds of research questions and pursuing different sources of funding; assigning readings by a wide range of voices on syllabi; inviting new people to conferences and workshops, and so on. 

KSR: This is a big question and worth a deeper dive, but I’ll reference what I recently wrote in my AJOBN Editorial on the big brain initiatives and neuroethics. I asked: 
What can neuroscience or neuroethics offer in a time like this, and how can our communities reflect on our “individual and collective character” in society during a pandemic and global anti-racism protests? 

We need to challenge the current structures of embedded, unchecked bias. This challenge need not be limited to BMI communities.
  1. Work to facilitate change and empower groups to do so. Process is critical: who is at the table, why they are there (or aren’t), and the names and titles given by those in positions of authority matter a lot.

  2. You’ll keep getting the same outcome if you keep taking the same approach. Create new opportunities for meaningful training and engagement. Facilitate new conversations. This series of talks is a great start.

  3. Strive to be more inclusive and accessible in the products of our fields—in our scholarly work and our collaborations. Consider the long game: a legacy of work toward promoting a world in which generations ahead can live and thrive. I think academics forget why they publish in the first place. It’s a community good, not a line on a CV for promotion. When we think toward that common goal, the work is better, and so is the process, which is otherwise corrupted by a focus on individual, nearer-term benefit.
Neuroscience and ethics as fields aspire to have social impact and generate public goods. Appealing to that overarching goal to do deeper self-examination of our practices will only improve our chances of successfully delivering on high social impact and global public good. How can we be explicit about re-orienting our goals to align more toward integrity in our work? 

Summary questions to explore: 
  1. What can be learned from fields not commonly or robustly integrated into neuroethics discussions, i.e., critical neuroscience, sociology, anthropology, feminist technoscience and bioethics? 

  2. Can we de-colonize neuroethics? 

  3. How can we be explicit about re-orienting our goals to align more toward integrity in our work? 

The work is daunting, but we remain inspired.

Alice Walker: “The most common way people give up their power is by thinking they don't have any.”

Frederick Douglass’s great-great-great-great-grandchild: "Somebody once said that pessimism is a tool of white oppression."

Stay tuned for next week’s installment on Structural Inequality in BMI Study and Practice.

_____________

Timothy Brown is an NIH postdoctoral research associate in the Department of Philosophy at the University of Washington and the lead architect of the Social Impacts and BMI workshop series. His work explores the role neural technologies—like deep-brain stimulators and brain-computer interfaces—(will) play in our experiences of self, in our interpersonal relationships, and in our societies more broadly.


Dr. Karen S. Rommelfanger received her PhD in neuroscience and received postdoctoral training in neuroscience and neuroethics. Her research explores how evolving neuroscience and neurotechnologies challenge societal definitions of disease and medicine. Dr. Rommelfanger is an Associate Professor in the Departments of Neurology and Psychiatry and Behavioral Sciences, the Neuroethics Program Director at Emory University’s Center for Ethics, and Senior Associate Editor at the American Journal of Bioethics Neuroscience. She is dedicated to cross-cultural work in neuroethics and is co-chair of the Neuroethics Workgroup of the International Brain Initiative. She is an appointed member of the NIH BRAIN Initiative Neuroethics Working Group and an ambassador to the Human Brain Project’s Ethics Advisory Board. She also serves as a member of the Neuroethics Subgroup of the Advisory Committee to the Director at NIH for designing a roadmap for BRAIN 2025. She was recently appointed to the Global Futures Council on Neurotechnology of the World Economic Forum. A key part of her work is fostering communication across multiple stakeholders in neuroscience. As such, she edits the largest international online neuroethics discussion forum, The Neuroethics Blog, and she is a frequent contributor and commentator in popular media such as The New York Times, USA Today, and The Huffington Post.


Laura Specker Sullivan is Assistant Professor of Philosophy at the College of Charleston and Director of Ethics at the Medical University of South Carolina. Her work focuses on ethical issues at the intersection of culture, science, and medicine. She is the past chair of the Neuroethics Affinity Group for the American Society for Bioethics and Humanities, a current member of the American Philosophical Association's Philosophy and Medicine Committee, and a member of the Institute for Electrical and Electronics Engineers' TechEthics Committee. 


Want to cite this post?

Brown, T., Rommelfanger, K., & Sullivan, L. S. (2020). The Social Impact of Brain Machine Interfaces: Bias and (Big) Neural Data. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2020/08/the-social-impact-of-brain-machine.html
