Towards True Equity in Neurotechnology

 By Jasmine Kwasa, Arnelle Etienne, and Pulkit Grover 


Image courtesy of Wikimedia Commons
Neuroscience, and medicine as a whole, has largely ignored or directly harmed underrepresented racial groups in the United States via several major mechanisms. A long, documented history of racism in medicine has destabilized Black and Indigenous communities’ trust in healthcare systems; systemic inequalities in access to healthcare have led to differential health outcomes across socioeconomic and racial lines; and implicit bias among medical practitioners has further exacerbated racial disparities in patient care (the American Medical Association recently recognized racism as a threat to public health). Likewise, technology has its own set of ingrained race issues, from biased datasets to biased algorithms and devices. In the wake of a new age of discovery in the brain sciences and corresponding advances in neurotechnology, the ethics of who neurotechnology serves must be directly addressed. We are a group of neurotechnologists who believe that we should aim to provide equitable care regardless of race, gender, sex, sexuality, ability, or any other protected identity. While some of these issues are receiving attention, there are still egregious disparities in neurotechnologies that must be addressed from multiple standpoints. Here we show an example of how to mitigate anti-Black bias in electroencephalography (EEG), a first-line tool for diagnosing epilepsy, brain injuries, stroke, and numerous other neurological and psychiatric illnesses, using a new electrode design. We also outline suggestions for making more inclusive neural technologies going forward as a step towards true equity in neuroscience.

Racism in Medicine 

The racial disparities we see in bioscience today are tied to the history of the United States, a more than 400-year-long story of racial bias and outright racism: the “father of obstetrics,” James Marion Sims, experimented on enslaved women without anesthesia (and certainly without consent!); Henrietta Lacks was a Black woman whose tumor cells, which went on to revolutionize cell culture and cancer biology, were harvested without her knowledge or permission; and the U.S. Public Health Service withheld penicillin treatment from Black men in the infamous Tuskegee Study in order to examine the effects of untreated syphilis. While protections for human subjects became more standard in biomedical science after the 1991 Federal Policy for the Protection of Human Subjects was published, these historical abuses sowed medical mistrust in Black and Indigenous communities for generations.

Today, racial disparities are present in almost every subfield of medicine, from heart health (Davis et al., 2007) and diabetes (Spanakis & Golden, 2013) to psychiatry and mental health (McGuire & Miranda, 2008), pain management (Wyatt, 2013), and, most relevant to us, neurology (Betjemann et al., 2013) and basic neuroscience research (Abiodun, 2019). These disparities emerge from unequal access to quality care in racially oppressed communities and from implicit biases held by medical practitioners (Feagin & Bennefield, 2014). While several medical schools are trying to incorporate anti-racist coursework into their curricula, these ideas have had mixed responses (Goldfarb, 2019), and the medical field as a whole is slow to change. As medicine increasingly relies on artificial intelligence, device development, and other technological advances, it is doubly important for medical professionals to be aware of and combat bias at every step.

Racism in Technology  

Image courtesy of Wikimedia Commons

Many assume that racism doesn’t exist in technology because the underlying mathematics of computing is unbiased. However, even as new solutions from Silicon Valley and the biotechnology sector appear to make our daily lives easier, these companies reproduce and even amplify racial disparities through historical biases perpetuated via algorithms and datasets. For instance, lack of fairness in artificial intelligence algorithms has led to countless examples of racism, sexism, classism, and more in fields such as childhood education, higher education, policing and justice, healthcare, and consumer technologies (Zou & Schiebinger, 2018; Wakefield, 2020; Barocas & Hardt, 2017; Barocas, Hardt, & Narayanan, 2018; Kim, 2019; Kleinberg et al., 2018).

Even more insidious is bias in the hardware (the physical design of a technological system) of the medical devices that acquire data or deliver treatments. For instance, pulse oximetry, which measures blood oxygenation and heart rate, relies on shining light onto skin and recording the amount of light scattered or absorbed by the tissue. Darker skin contains more light-absorbing melanin, which requires adjustments to either the wavelength of light used or the analysis algorithm. Even though this has long been known in optics research, wearable heart trackers still rely on biased versions of this technology, leading to inaccurate tracking for darker skin tones (Journal of Personalized Medicine, 2017) and complaints from consumers. Despite the good intentions behind these products, the negative impact on a large group of people cannot be ignored. Therefore, it is of paramount importance for designers to interrogate exactly who their solutions help and who they might exclude, especially as we continue to innovate at the intersection of medicine and technology.
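To make the mechanism concrete, here is a minimal sketch of the standard "ratio of ratios" calculation behind pulse oximetry. The linear calibration constants are generic textbook values, not from any real device; commercial oximeters use empirically fitted curves, and it is precisely this calibration step that can embed skin-tone bias if the fitting cohort lacks diversity.

```python
# Illustrative "ratio of ratios" SpO2 estimate used in pulse oximetry.
# The calibration constants below are generic textbook values, NOT taken
# from any real device; commercial oximeters use empirically fitted curves.

def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate blood oxygen saturation (%) from the pulsatile (AC) and
    steady (DC) components of red and infrared light absorption."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)  # ratio of ratios
    return 110.0 - 25.0 * r                  # linear calibration (illustrative)

# Melanin absorbs more light at red wavelengths, attenuating the red
# signal; if the calibration curve was fitted mostly on lighter skin,
# the same underlying physiology can yield a biased estimate on darker skin.
print(round(spo2_estimate(red_ac=0.02, red_dc=1.0, ir_ac=0.03, ir_dc=1.2), 1))
```

The arithmetic itself is trivially "unbiased"; the bias enters through the empirical calibration data, which is exactly why diverse validation cohorts matter.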

Our example: EEG Technologies on Coarse, Curly Hair 

Recently, our team identified and developed a solution to this form of exclusionary bias in the case of electroencephalography (EEG). These devices are a first line of defense against epilepsy, brain injuries, stroke, and numerous other neurological and psychiatric illnesses. Despite their widespread use, we found (Etienne et al., 2020) that EEG systems do not consistently work on the coarse and curly hair common among Black people. The springiness of kinky hair pushes back against electrodes, resulting in poor scalp contact, an essential factor for recording quality EEG. The consequences of EEG being unable to record from Black populations are severe: it can lead to misdiagnosis, and it has likely introduced biases into the existing EEG data on which our neuroscientific understanding of the healthy and diseased brain is based. In clinical settings, patients are typically asked to straighten their hair before they arrive at the clinic, a culturally insensitive recommendation that does not fully solve the problem: the hair can spring back if it gets wet, which is common because wet or gel electrode systems are the standard. Since straightening the hair takes longer than it takes to curl up again, many Black participants are effectively excluded from EEG research studies by the hassle.
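Poor scalp contact shows up directly in a measurable quantity: electrode impedance. As a hypothetical illustration (the threshold and channel labels are ours, not from any specific EEG system), a pre-recording quality check might flag channels whose contact impedance exceeds the common clinical rule of thumb of roughly 5–10 kilo-ohms:

```python
# Hypothetical pre-recording impedance check. Clinical practice commonly
# targets electrode-scalp impedances below roughly 5-10 kilo-ohms; the
# threshold and channel labels here are illustrative, not from any
# specific EEG system.

IMPEDANCE_LIMIT_KOHM = 10.0

def flag_poor_contact(impedances_kohm):
    """Return labels of channels whose impedance exceeds the limit,
    i.e., electrodes likely lifted off the scalp (e.g., by springy hair)."""
    return [ch for ch, z in impedances_kohm.items() if z > IMPEDANCE_LIMIT_KOHM]

# Example: coarse, curly hair pushing back against an electrode can raise
# its contact impedance far above the acceptable range.
readings = {"Fp1": 4.2, "Cz": 55.0, "O1": 8.9, "O2": 130.0}
print(flag_poor_contact(readings))  # channels needing re-seating
```

A technician would re-seat or re-gel the flagged electrodes before recording; when hair texture makes good contact impossible, whole channels end up unusable, which is the failure mode described above.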

We have developed the first solution to this problem: Sevo systems. The figure below shows the step-by-step procedure for applying Sevo electrodes. The hair is braided into cornrows that expose the parts of the scalp dictated by the clinically standard EEG locations. Electrodes are then placed on the scalp (wires not shown), leveraging the braids to improve scalp contact.

The Sevo solution uses the strength and springiness of curly hair as an aid, rather than a hindrance, to electrode-scalp contact. Like all engineering solutions, Sevo requires improvement and iteration before it can be deployed seamlessly in both research and clinical settings. More broadly, once such problems are identified, they can often be addressed through creative use and adaptation of existing technology. How can we build a culture in which blindspots in technology design are rapidly identified and addressed?

Call to action

Despite EEG being a critical technology in widespread clinical use today, we are only beginning to address its inequitable use, a century after its invention. This is unacceptable. Designers, inventors, investors, medical professionals, and stakeholders all need to understand that their design choices have real implications for who can use their solutions, and need to question whether the systems they are designing or using hinder specific populations. We have several suggestions:

  • Prioritize diversity in design teams. In our case, until Etienne (who has coarse and curly hair herself) joined our research team and identified the issue, we were not aware of the inclusion problem with existing EEG. Prioritizing diversity goes beyond just getting diverse team members. It’s important to listen to and actually incorporate their points of view into your team's vision. Diversity in design teams requires a diverse, inclusive, and equitable STEM education pipeline leading to a workforce that values, welcomes, and promotes underrepresented scholars. 
  • Take the lead from the Fairness in Artificial Intelligence (AI) community, which is helping to identify existing societal biases and attempting to create AI systems that do not propagate old biases or introduce new ones. The goal is to encourage designers, engineers, and scientists to go back to the devices, systems, algorithms, and solutions that exist or are being developed and ask themselves about potential biases, paying attention to the needs of different populations of users.
  • Incorporate community and stakeholder feedback in the design process. As Mark Latonero points out, “Companies and their partners need to move from good intentions to accountable actions that mitigate risk. They should be transparent about both benefits and harms these AI tools may have in the long run… It should involve local people closest to the problem in the design process and conduct independent human rights assessments to determine if a project should move forward” (“AI for Good Is Often Bad,” MIT Technology Review).
  • Change how we educate our STEM students so they are able to question whether existing solutions are biased. Introduce them to ethics early in their training by including ethics discussions throughout undergraduate curricula, including full courses on ethics, humanities, and social science. 
  • Write to your U.S. representative about introducing and supporting federal policy related to fairness in AI and in STEM generally. Congressional oversight of how algorithms and devices affect citizens can function like the Protection of Human Subjects guidelines for medical research. For example, the Future of Artificial Intelligence Act demands oversight of AI development via the establishment of a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence. A call to your congressional representative, especially if you identify as a scientist, can go a long way.

We are left thinking about the ethics of all medical technologies. Will optical imaging technologies that can detect cancer work on darker skin? Will online healthcare in the age of COVID-19 be just as inaccessible to Black and lower income communities as it was before? Will benefits of dramatic improvements in neural technologies be limited to a few because of implicit assumptions of the designers? Will solutions to our greatest medical problems be affordable to all? Once all designers are thinking about these questions, that will be a step toward true equity. 

Based on discussions with Momi Afelin (Precision Neuro), Marlene Behrmann (CMU), Megan Kelly (Duke), Ashwati Krishnan (StimScience), Shawn Kelly (CMU), Christina Patterson (University of Pittsburgh Epilepsy Center).


  1. Abiodun, S. J. (2019). ‘Seeing Color,’ A Discussion of the Implications and Applications of Race in the Field of Neuroscience. Frontiers in Human Neuroscience, 13(August), 280.
  2. Barocas, S., & Hardt, M. (2017). Fairness in Machine Learning Tutorial. In Neural Information Processing Systems.
  3. Barocas, S., Hardt, M., & Narayanan, A. (2018). Fairness and Machine Learning.
  4. Betjemann, J. P., Thompson, A. C., Santos-Sánchez, C., Garcia, P. A., & Ivey, S. L. (2013). Distinguishing Language and Race Disparities in Epilepsy Surgery. Epilepsy & Behavior, 28(3), 444–449.
  5. Davis, A. M., Vinci, L. M., Okwuosa, T. M., Chase, A. R., & Huang, E. S. (2007). Cardiovascular Health Disparities: A Systematic Review of Health Care Interventions. Medical Care Research and Review, 64(5 Suppl), 29S–100S.
  6. Etienne, A., Laroia, T., Weigle, H., Kelly, S. K., Krishnan, A., & Grover, P. (2020). Novel Electrodes for Reliable EEG Recordings on Coarse and Curly Hair. IEEE Engineering in Medicine and Biology. 
  7. Feagin, J., & Bennefield, Z. (2014). Systemic Racism and U.S. Health Care. Social Science & Medicine, 103(February), 7–14. 
  8. Goldfarb, S. (2019, September 12). Take Two Aspirin and Call Me by My Pronouns. The Wall Street Journal.
  9. Kim, P. (2019). Manipulating Opportunity. Virginia Law Review.
  10. Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2018). Human Decisions and Machine Predictions. The Quarterly Journal of Economics, 133(1), 237–293.
  11. McGuire, T. G., & Miranda, J. (2008). New Evidence Regarding Racial and Ethnic Disparities in Mental Health: Policy Implications. Health Affairs, 27(2), 393–403.
  12. Spanakis, E. K., & Golden, S. H. (2013). Race/ethnic Difference in Diabetes and Diabetic Complications. Current Diabetes Reports, 13(6), 814–823.
  13. Wakefield, J. (2020, August 20). A-Levels: Ofqual’s ‘Cheating’ Algorithm under Review. BBC.
  14. Wyatt, R. (2013). Pain and Ethnicity. The Virtual Mentor, 15(5), 449–454. 
  15. Zou, J., & Schiebinger, L. (2018). AI Can Be Sexist and Racist — It’s Time to Make It Fair. Nature, 559(7714), 324–326.

Arnelle Etienne is a young innovator and recent graduate of Carnegie Mellon University's School of Engineering, with a self-defined degree in Technology and Humanistic Studies. She was a 2018 Undergraduate Research Fellow for the Center for the Neural Basis of Cognition in Pittsburgh, PA. Her emerging work addresses a one-hundred-year-old problem in EEG: collecting unbiased data from Black patients. Her passion is to illuminate and eradicate bias by designing hardware and protocols to address disparities in tech. In her free time, she enjoys cooking Caribbean food, arguing about whether music genres even exist, and having conversations about community and healing.

Pulkit Grover (Ph.D. UC Berkeley'10, B.Tech, M.Tech IIT Kanpur) is an associate professor at CMU. His main contributions to science are towards developing and experimentally validating a new theory of information (fundamental limits, practical designs) for efficient and reliable communication, computing, sensing, and control, e.g. by incorporating novel circuit-energy models and developing new mathematical tools for information flow analyses. To apply these ideas to a variety of problems including ethical AI and novel biomedical systems, his lab works extensively with data scientists, system and device engineers, neuroscientists, and clinicians. Pulkit is leading the SharpFocus team in response to DARPA's Next-generation Nonsurgical Neurotechnology (N3) challenge. When not fretting about the state of the world, he enjoys playing the sax and spending his free time with his wife, Kristen, and son, Utsah.

Jasmine Kwasa is a PhD candidate in Electrical and Computer Engineering at Carnegie Mellon University, where she is an NIH D-SPAN Neuroscience F99/K00 fellow. Her research investigates how hearing is influenced by attention and other higher-order cognitive skills using non-invasive neurotechnology, including high-density EEG. Outside of the lab, she has conducted educational research and led several K-12 STEM education programs for underrepresented students, particularly women of color. She has been named an NSF GRFP, Ford Foundation Fellow, Society for Neuroscience fellow, and "Rising Star in Biomedical" by MIT. Jasmine earned a B.S. from Washington University in St. Louis and an M.S. from Boston University, both in Biomedical Engineering, and enjoys teaching dance fitness classes, traveling, and spending time with her family.

Want to cite this post?

Kwasa, J., Etienne, A., & Grover, P. (2020). Towards True Equity in Neurotechnology. The Neuroethics Blog.



