By Tim Brown and Hannah Martens
With the generous support of the MacArthur Foundation, the Center for Neurotechnology and the Tech Policy Lab (both at the University of Washington) hosted a series of three online, exploratory workshops on the social impacts of brain-machine interfaces (BMIs). Existing work in this area examines the privacy and security dimensions of BMIs as well as questions of personal identity, authenticity, free will, and responsibility. Substantially less work interrogates BMI research, development, and commercialization from the perspective of social justice. These workshops were an attempt to do just that by foregrounding concerns such as human rights, civil liberties, structural inequality, representation in research, and inclusive design. They placed technologists and humanists in direct conversation about how current and future BMIs will impact society. Panelists and invited discussants included interdisciplinary neurotechnology researchers from the University of Washington as well as select national experts who approach neurotechnology from the perspectives of gender, sexuality, disability, and race. Our team’s goal is to produce a whitepaper on the social impacts of BMIs—inspired by the insights raised during the workshops—that we will make available to the broader community of stakeholders.
In this final installment of our series of posts, we want to leave you with a summary of the themes that emerged across all of our conversations.
Image courtesy of Pixy
First, many of our panelists and discussants agreed that the design of BMIs is value-laden. That is to say, the values of device designers, the cultures they inhabit, and the institutions they represent are reflected in the design and function of neurotechnologies broadly. Some might take this as a problem to solve—insofar as those values stand in the way of objectivity and confound scientific and engineering research. Several panelists and discussants pushed back on this intuition. After all, the problem isn’t necessarily that engineers (for example) bring their perspectives to work, but rather that some perspectives are overrepresented, go underexamined, and/or are left unchecked. Instead of struggling to remove values from neuroengineering and its products, we should uncover, acknowledge, and interrogate those values and the biases they carry. Each person researching, developing, or distributing BMIs should ask themselves a series of questions: What assumptions guide my work? What are the sources of these assumptions? Are my assumptions justified? Are there other perspectives I should consider? It is not wrong to operate under assumptions, but it is important to challenge them. Further, each should consider the values they bring to their work: Whose values are reflected in the development of BMIs? Whose values ought to be reflected? Far too often, the people whose values ought to be reflected in the design of BMIs and the underlying engineering and science—people with disabilities, people of color, and LGBTQIA folks—are left out of the process almost entirely. Representation—in lab personnel, in study participants, in the problems considered, and in the values in play—is of paramount importance.
We also seemed to agree that marginalization is often the result of how power is configured—in ways that benefit some and denigrate others—within our institutions. Grant-awarding institutions like NIH and NSF shape neurotechnology through their policies and processes. Which grant programs they make available, how they constitute committees, the criteria they use to rank and award grants, and even the timeframe they specify for the application process can all privilege some over others. Professional organizations, like ASBH and INS, distribute power through who they select for their committees, who they invite to present at events, how they award prizes, and how they set membership prices. Companies distribute power through their hiring decisions, their business models, their choice of tools, their alliances with adjacent companies, and the pricing of devices downstream. We must interrogate all of these systems and reconfigure them so that they empower marginalized people enough to have a seat at the table (instead of on the table), receive uptake when they speak, and make the changes necessary for the design of BMIs to reflect their values.
Finally, several of us were struck by how it felt to think through the above issues. The task of our workshop seemed to be to excavate these oppressive systems, to bring them to light in the context of neurotechnology, and to trace their connections. But in doing so, we face the daunting task of dismantling these structures simply to conduct research on, or to design, BMIs. For example, we agreed that if a team conducting human-subjects research on a brain–computer interface wants to prevent racism from creeping into its assumptions, it should bring people of color onto the research team and recruit them as human subjects. There are, however, entire systems of oppression, embedded in the foundations of so many of our academic institutions, that keep people of color from joining STEM fields in the first place. Those same systems also make it difficult for people of color to trust medical, academic, or corporate studies of any kind. For example, as the tragic history of experimentation on Black Americans and the effects of structural racism in healthcare both demonstrate, Black Americans still have good reasons to steer clear of medical and corporate research. The prospect of contending with these systems felt daunting to several discussants. It seemed as though individual stakeholders must work against these systems all at once in order to make any progress at all. What should a researcher on the aforementioned team do? Reach out to disenfranchised communities directly to recruit subjects? Commit to participating in anti-racist activism? Sure: we should take on these activities if we can.
But we also shouldn’t fall into the trap of believing that the work of dismantling systems of oppression falls on individuals alone. It is more effective, and far less daunting, to work collectively toward solutions within institutions, across disciplinary boundaries, and in solidarity with communities in the margins.
______________
Timothy Brown is an NIH postdoctoral research associate in the Department of Philosophy at the University of Washington and the lead architect of the Social Impacts and BMI workshop series. His work explores the role neural technologies—in particular, deep-brain stimulators and brain–computer interfaces—(will) play in our experiences of self, our interpersonal relationships, and our societies more broadly.
Hannah Martens is a second year doctoral student in the Philosophy Department at the University of Illinois, Chicago. Her interests include feminist philosophy, bioethics, social epistemology, moral psychology, philosophy of emotions, and aesthetics.
Want to cite this post?
Brown, T. & Martens, H. (2020). The Social Impact of Brain Machine Interfaces: Closing Remarks. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2020/09/the-social-impact-of-brain-machine_22.html