The Social Impact of Brain Machine Interfaces: Bias and (Big) Neural Data
Image courtesy of Pixabay
- In the biological sciences, we have a track record of underestimating or discounting important biases. The scientific method aspires to pure objectivity, but that is impossible for human beings, so we need to acknowledge how our biases might confound scientific hypotheses, scientific conduct, and the final interpretation of data.
- Biological explanations of a social world (as many in the field of critical neuroscience have noted; see Suparna Choudhury at McGill) can be dangerously misleading, risking the reification of structurally embedded inequities and biases. What can be learned from critical neuroscience?
- Humans, including scientists, are not value-free, and neither is any enterprise they undertake. Scientists commonly bow at the altar of statistical significance and the p-value without realizing that the differences they choose to analyze are often socially constructed (for example, comparisons along axes of race and gender).
- As scientists, neuroethicists, and society interpret neural data, we need to maintain humility about the limits of our disciplinary tools. For instance, I don't believe that science alone can tell us what the mind is: try to operationalize that in the lab. I certainly don't believe science alone can or should tell us what identity is.
- Neural data can be erroneously interpreted as a moral truth about one's lived experience. We need experts in philosophy and sociology, among other fields, and certainly the participant or user interfacing with the neurotech, to help us understand the full meaning of neural data.
- One mechanism for perpetuating bias in science is continued participation in a system that lets bias go unchecked. A critical way to acknowledge and mitigate bias is to consider whose voices inform scientific hypotheses, conduct, and data interpretation. Who is at the table, not just on the table for scientific deconstruction?
Image courtesy of Pixabay
- We must specifically identify and articulate biases in how technologies are designed, how and for whom they are scientifically developed, and who will have access to them when they are deployed. A critical question is what we consider normal and who goes into that 'normal' gold-standard baseline.
- My hope is that users’ experiences will be embedded into the scientific development of these technologies (similar to the work that Tim and UW colleagues have done, and some of our own work). At the same time, we must take equal care that, even as neuroethicists, our academic interests do not dominate users’ needs and experiences.
- Neuroethicists and humanists have biases too: which issues we choose to explore, which philosophical traditions we use to analyze ethical issues. As disciplines, we have a lot of self-work to do to de-colonize our work, de-centering the White Male Western perspective. How can we de-colonize neuroethics?
- The social and political impact of these biases is that they will perpetuate harm to marginalized peoples and mirror and amplify structural inequalities.
- We risk further de-humanizing the participants (and even the engineers of these technologies) when we don’t acknowledge and address our biases.
- It’s just not good science. Science that ignores bias is, in the long and even short term, ultimately harmful rather than helpful.
Image courtesy of Pikist
Even well-meaning attempts to acknowledge biases in the creation and use of BMIs can have unanticipated effects. For example, if men and women are found to have different neural mechanisms for some behavioral output, then it might make sense to create BMIs for men and BMIs for women. In the workshop, I described this as the “blue” and “pink” approach to diversity. While well-intentioned, this can reinforce a neurobiological reductionism of gender such that the difference between men and women reduces to the difference between the profiles of their neural data. As we move away from the assumption that the neurobiology of one social group is universal, the goal is not to further refine the neurobiological categories into which we place people, but to develop neurotechnology that is flexible and adaptable to a wide range of human physiology and function.
- Work to facilitate change and empower groups to do so. Process is critical: who is at the table, why they are there (or aren’t), and the names and titles given by those in positions of authority matter a lot.
- You’ll keep getting the same outcome if you keep taking the same approach. Create new opportunities for meaningful training and engagement. Facilitate new conversations. This series of talks is a great start.
- Strive to be more inclusive and accessible in the products of our fields, in our scholarly work and our collaborations. Consider the long game: a legacy of work that promotes a world in which generations ahead can live and thrive. I think academics forget why they publish in the first place. Publishing is a community good, not a line on a CV for promotion. When we work toward that common goal, both the work and the process improve; otherwise the process is corrupted by a focus on individual, near-term benefit.
- What can be learned from fields not commonly or robustly integrated into neuroethics discussions, e.g., critical neuroscience, sociology, anthropology, feminist technoscience, and bioethics?
- Can we de-colonize neuroethics?
- How can we be explicit about re-orienting our goals to align more toward integrity in our work?
The work is daunting, but we remain inspired.
Alice Walker: “The most common way people give up their power is by thinking they don’t have any.”
Frederick Douglass’s great-great-great-great-grandchild: “Somebody once said that pessimism is a tool of white oppression.”
Laura Specker Sullivan is Assistant Professor of Philosophy at the College of Charleston and Director of Ethics at the Medical University of South Carolina. Her work focuses on ethical issues at the intersection of culture, science, and medicine. She is the past chair of the Neuroethics Affinity Group for the American Society for Bioethics and Humanities, a current member of the American Philosophical Association’s Philosophy and Medicine Committee, and a member of the Institute for Electrical and Electronics Engineers’ TechEthics Committee.
Want to cite this post?
Brown, T., Rommelfanger, K., & Sullivan, L. S. (2020). The Social Impact of Brain Machine Interfaces: Bias and (Big) Neural Data. The Neuroethics Blog. Retrieved from http://www.theneuroethicsblog.com/2020/08/the-social-impact-of-brain-machine.html