Towards True Equity in Neurotechnology
By Jasmine Kwasa, Arnelle Etienne, and Pulkit Grover
Introduction
*Image courtesy of Wikimedia Commons*
Racism in Medicine
The racial disparities we see in bioscience today are tied to the history of the United States, a more than 400-year story of racial bias and outright racism: the “father of obstetrics,” James Marion Sims, experimented on enslaved women without anesthesia (and certainly without consent!); Henrietta Lacks was a Black woman whose tumor cells, which went on to revolutionize cell culture and cancer biology, were harvested without her knowledge or permission; and the U.S. Public Health Service withheld penicillin treatment from Black men in the infamous Tuskegee syphilis study in order to examine the effects of the disease. While protections for human research subjects became more standard in biomedical science after the 1991 Federal Policy for the Protection of Human Subjects was published, these historical abuses sowed medical mistrust in Black and Indigenous communities for generations.
Today, racial disparities are present in almost every subfield of medicine, from heart health (Davis et al., 2007), diabetes (Spanakis & Golden, 2013), psychiatry and mental health (McGuire & Miranda, 2008), and pain management (Wyatt, 2013) to, importantly for us, neurology (Betjemann et al., 2013) and basic neuroscience research (Abiodun, 2019). These disparities emerge from unequal access to quality care in racially oppressed communities and from implicit biases held by medical practitioners (Feagin & Bennefield, 2014). While several medical schools are trying to incorporate anti-racist coursework into their curricula, these efforts have met with mixed responses (Goldfarb, 2019), and the medical field as a whole is slow to change. As medicine comes to rely increasingly on artificial intelligence, device development, and other technological advances, it is doubly important for medical professionals to be aware of and combat bias at every step.
Racism in Technology
*Image courtesy of Wikimedia Commons*
Even more insidious is bias in hardware, the physical design of the medical devices that acquire data or deliver treatments. For instance, pulse oximetry, which measures blood oxygenation and heart rate, relies on shining light onto the skin and recording how much of that light the tissue scatters or absorbs. Darker skin contains more light-absorbing melanin, which requires adjustments to either the wavelength of light used or the analysis algorithm. Even though this has long been known in optics research, wearable heart-rate trackers still rely on biased versions of this technology, leading to inaccurate tracking for darker skin tones (Journal of Personalized Medicine, 2017) and complaints from consumers. Despite the good intentions behind these products, the negative impact on a large group of people cannot be ignored. It is therefore of paramount importance for designers to interrogate exactly whom their solutions help and whom they might exclude, especially as we continue to innovate at the intersection of medicine and technology.
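To see how such bias can enter, consider the arithmetic inside a pulse oximeter. The sketch below implements the textbook “ratio of ratios” estimate of blood-oxygen saturation; the linear calibration constants are approximate placeholders, since real devices use proprietary curves fitted empirically on human calibration cohorts, and it is precisely that fitting step that can encode skin-tone bias.

```python
# Minimal sketch of the "ratio of ratios" SpO2 estimate used in pulse
# oximetry. The calibration constants below are illustrative placeholders;
# real oximeters use curves fitted on human calibration cohorts, which is
# exactly where skin-tone bias can enter.

def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate blood-oxygen saturation (%) from the pulsatile (AC) and
    steady (DC) components of red and infrared light absorbance."""
    # Normalize the pulsatile signal by its steady baseline at each
    # wavelength, then take the ratio of the two ("ratio of ratios").
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    # Common textbook linearization of the calibration curve; the 110 and
    # 25 are approximate and device-specific.
    return 110.0 - 25.0 * r

# Melanin absorbs strongly at the red wavelength (~660 nm). If the
# calibration curve was not fitted on darker skin, the same physiology
# can yield a shifted ratio and a biased SpO2 reading.
print(spo2_estimate(red_ac=0.02, red_dc=1.0, ir_ac=0.03, ir_dc=1.2))
```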
Our example: EEG Technologies on Coarse, Curly Hair
Recently, our team identified and developed a solution to this form of exclusionary bias in electroencephalography (EEG). EEG devices are a first-line diagnostic tool for epilepsy, brain injuries, stroke, and numerous other neurological and psychiatric illnesses. Despite their widespread use, we found (Etienne et al., 2020) that EEG systems do not work consistently on the coarse and curly hair common among Black people. The springiness of kinky hair pushes back against the electrodes, degrading the scalp contact that is essential for a quality EEG recording. The consequences of EEG failing on Black populations are severe: not only can it lead to misdiagnosis, but it has also likely introduced biases into the existing EEG data on which our neuroscientific understanding of the healthy and diseased brain is based. In clinical settings, patients are typically asked to straighten their hair before they arrive at the clinic, a culturally insensitive recommendation that does not fully solve the problem: the hair can spring back if it gets wet, which is common because wet or gel electrode systems are the standard. And since straightening the hair takes longer than it takes to curl up again, many Black participants are denied participation in EEG research studies simply because of the hassle.
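Scalp contact quality is something labs can, and routinely do, quantify. As a concrete illustration (not a description of any particular vendor's software), the sketch below flags channels whose contact impedance exceeds a common rule-of-thumb threshold; the channel names follow the standard 10-20 layout, and the readings are invented.

```python
# Hypothetical pre-recording impedance screen for an EEG montage.
# Poor electrode-scalp contact (e.g., hair pushing an electrode off the
# scalp) shows up as high contact impedance. The 10 kOhm cutoff is a
# common rule of thumb; actual limits vary by lab and vendor.

IMPEDANCE_LIMIT_KOHM = 10.0

def flag_bad_channels(impedances_kohm: dict[str, float]) -> list[str]:
    """Return channel names whose contact impedance exceeds the limit."""
    return [ch for ch, z in impedances_kohm.items() if z > IMPEDANCE_LIMIT_KOHM]

# Example: 10-20 system channel names with made-up impedance readings.
measured = {"Fp1": 4.2, "Fp2": 5.1, "Cz": 38.0, "O1": 7.9, "O2": 55.3}
bad = flag_bad_channels(measured)
if bad:
    # In practice the operator would re-prep these sites (re-gel or
    # re-seat the electrode) before starting the recording.
    print(f"Re-prep before recording: {', '.join(bad)}")
```

When electrodes cannot reach the scalp through coarse, curly hair, every channel can fail a screen like this one, which is how an entire population ends up excluded from recordings.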
We have provided a first solution to this problem: Sevo systems. The figure below shows the step-by-step procedure for applying Sevo electrodes. The hair is braided into cornrows that expose the parts of the scalp dictated by the clinically standard EEG electrode locations. Electrodes are then placed on the scalp (wires not shown), leveraging the braids to improve scalp contact.
The Sevo solution uses the strength and springiness of curly hair as an aid, not a hindrance, to electrode-scalp contact. Like all engineering solutions, Sevo requires improvement and iteration before it can be deployed seamlessly in both research and clinical settings. More broadly, once identified, such design blindspots can often be addressed through creative use of, and adaptations to, existing technology. How can we build a culture in which blindspots in technology design are rapidly identified and addressed?
Call to action
Despite EEG being a critical technology in widespread clinical use today, only now, a century after its invention, are we beginning to address its inequitable reach. This is unacceptable. Designers, inventors, investors, medical professionals, and other stakeholders all need to understand that their design choices have real implications for who can use their solutions, and they need to ask whether the systems they are designing or using create barriers for specific populations. We have several suggestions:
- Prioritize diversity in design teams. In our case, until Etienne (who has coarse and curly hair herself) joined our research team and identified the issue, we were not aware of the inclusion problem with existing EEG. Prioritizing diversity goes beyond recruiting diverse team members: it means listening to underrepresented colleagues and actually incorporating their points of view into the team's vision. Diversity in design teams, in turn, requires a diverse, inclusive, and equitable STEM education pipeline leading to a workforce that values, welcomes, and promotes underrepresented scholars.
- Take the lead from the Fairness in Artificial Intelligence (AI) community, which is working to identify existing societal biases and to build AI systems that neither propagate old biases nor introduce new ones. The goal is for designers, engineers, and scientists to revisit the devices, systems, algorithms, and solutions that exist or are under development and ask where bias might lurk, paying attention to the needs of different populations of users; a minimal auditing sketch follows this list.
- Incorporate community and stakeholder feedback in the design process. As Mark Latonero points out, “Companies and their partners need to move from good intentions to accountable actions that mitigate risk. They should be transparent about both benefits and harms these AI tools may have in the long run… It should involve local people closest to the problem in the design process and conduct independent human rights assessments to determine if a project should move forward.” (“AI for Good Is Often Bad,” MIT Technology Review)
- Change how we educate STEM students so that they are equipped to question whether existing solutions are biased. Introduce ethics early in their training by including ethics discussions throughout undergraduate curricula, up to and including full courses on ethics, the humanities, and social science.
- Write to your U.S. representative about introducing and supporting federal policy on fairness in AI and in STEM generally. Congressional oversight of how algorithms and devices affect citizens can serve the same role that the Protection of Human Subjects guidelines serve for medical research. For example, the Future of Artificial Intelligence Act demands oversight of AI development via the establishment of a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence. A call to your congressional representative, especially if you identify as a scientist, can go a long way.
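To make the auditing habit from the second suggestion concrete: a standard first check in the fairness literature (see, e.g., Barocas, Hardt, & Narayanan, 2018) is to disaggregate a system's error rate by group rather than report one aggregate number. The sketch below is a minimal illustration with invented data and placeholder group labels, not a substitute for a full fairness analysis.

```python
# Minimal disaggregated-evaluation sketch: compare a system's error rate
# across groups instead of reporting a single aggregate number. A large
# gap is a signal to investigate the hardware, data, or algorithm.
# All data below is invented for illustration.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group_label, prediction, truth) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        errors[group] += int(pred != truth)
    return {g: errors[g] / totals[g] for g in totals}

results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]
for group, rate in sorted(error_rates_by_group(results).items()):
    print(f"{group}: error rate {rate:.0%}")
# An aggregate error rate of 50% would hide that group_a sits at 25%
# while group_b sits at 75%.
```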
We are left thinking about the ethics of all medical technologies. Will optical imaging technologies that can detect cancer work on darker skin? Will online healthcare in the age of COVID-19 be just as inaccessible to Black and lower-income communities as it was before? Will the benefits of dramatic improvements in neural technologies be limited to a few because of the implicit assumptions of their designers? Will solutions to our greatest medical problems be affordable to all? Once all designers are asking these questions, we will have taken a step toward true equity.
Based on discussions with Momi Afelin (Precision Neuro), Marlene Behrmann (CMU), Megan Kelly (Duke), Ashwati Krishnan (StimScience), Shawn Kelly (CMU), Christina Patterson (University of Pittsburgh Epilepsy Center).
References
- Abiodun, S. J. (2019). ‘Seeing Color,’ A Discussion of the Implications and Applications of Race in the Field of Neuroscience. Frontiers in Human Neuroscience, 13(August), 280.
- Barocas, S., & Hardt, M. (2017). Fairness in Machine Learning Tutorial. In Neural Information Processing Systems.
- Barocas, S., Hardt, M., & Narayanan, A. (2018). Fairness and Machine Learning. http://www.fairmlbook.org
- Betjemann, J. P., Thompson, A. C., Santos-Sánchez, C., Garcia, P. A., & Ivey, S. L. (2013). Distinguishing Language and Race Disparities in Epilepsy Surgery. Epilepsy & Behavior, 28(3), 444–449.
- Davis, A. M., Vinci, L. M., Okwuosa, T. M., Chase, A. R., & Huang, E. S. (2007). Cardiovascular Health Disparities: A Systematic Review of Health Care Interventions. Medical Care Research and Review, 64(5 Suppl), 29S–100S.
- Etienne, A., Laroia, T., Weigle, H., Kelly, S. K., Krishnan, A., & Grover, P. (2020). Novel Electrodes for Reliable EEG Recordings on Coarse and Curly Hair. IEEE Engineering in Medicine and Biology.
- Feagin, J., & Bennefield, Z. (2014). Systemic Racism and U.S. Health Care. Social Science & Medicine, 103(February), 7–14.
- Goldfarb, S. (2019, September 12). Take Two Aspirin and Call Me by My Pronouns. The Wall Street Journal. https://www.wsj.com/articles/take-two-aspirin-and-call-me-by-my-pronouns-11568325291
- Kim, P. (2019). Manipulating Opportunity. Virginia Law Review. https://papers.ssrn.com/abstract=3466933
- Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2018). Human Decisions and Machine Predictions. The Quarterly Journal of Economics, 133(1), 237–293.
- McGuire, T. G., & Miranda, J. (2008). New Evidence Regarding Racial and Ethnic Disparities in Mental Health: Policy Implications. Health Affairs, 27(2), 393–403.
- Spanakis, E. K., & Golden, S. H. (2013). Race/ethnic Difference in Diabetes and Diabetic Complications. Current Diabetes Reports, 13(6), 814–823.
- Wakefield, J. (2020, August 20). A-Levels: Ofqual’s ‘Cheating’ Algorithm under Review. BBC. https://www.bbc.com/news/technology-53836453
- Wyatt, R. (2013). Pain and Ethnicity. The Virtual Mentor, 15(5), 449–454.
- Zou, J., & Schiebinger, L. (2018). AI Can Be Sexist and Racist — It’s Time to Make It Fair. Nature, 559(7714), 324–326.
Want to cite this post?
Kwasa, J., Etienne, A., & Grover, P. (2020). Towards True Equity in Neurotechnology. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2020/12/towards-true-equity-in-neurotechnology.html