The Coded Bias within Neurotechnology
By Ankita Moss
The Coded Gaze
Image courtesy of The Noun Project
While researching social robots as an undergraduate at Georgia Tech, Joy Buolamwini could not complete her assignment because her robot did not recognize her. She soon realized that the robot failed to recognize her because of the color of her skin; in fact, she had to ask a friend to stand in just so she could complete the assignment. As a PhD candidate at MIT, Buolamwini faced the same discrimination from computer vision software, this time from a system employing facial recognition technology. At the MIT Media Lab, Buolamwini was working on a confidence-boosting, technology-for-social-good project called the Aspire Mirror, which projects motivational quotes, fun images, and the like onto an individual’s reflection. What started as a positive and uplifting project devolved into something sad, oppressive, and destructive: the Aspire Mirror could not recognize Buolamwini - that is, until she put on a white mask. Across the world in Hong Kong, China, during a demo at a startup, Buolamwini experienced the exact same thing. Algorithmic bias is an international issue, and the coded gaze blankets the world as it adopts these machine-learning models.
Image courtesy of Flickr
In 2020, director and producer Shalini Kantayya released Coded Bias, a documentary following leaders such as Buolamwini who are fighting for algorithmic justice in a world blatantly employing racially-biased datasets in facial recognition systems. The film premiered at the Sundance Film Festival and garnered international acclaim for uncovering the harsh truth that improperly trained algorithms are being deployed to make decisions that affect individuals’ lives every day. From recognizing faces and identifying criminal suspects to screening college applications, the data used for algorithmic training reflects the blatant lack of inclusion in our society. Underneath the coded gaze is a system rampant with explicit and implicit bias.
How can data be so destructive, and how can we expect machines to be unbiased if we can’t be?
Artificial intelligence (AI) and its subset, machine-learning (ML), harness data to make decisions. Artificial intelligence is employed when “a machine mimics ‘cognitive functions,’” such as learning. You can think of ML algorithms as a feedback loop: they learn from the data fed to them and can then act on new data based on what has been learned. Now, the datasets themselves are not destructive. Humans are destructive when they, without reflection and awareness of their own biases, train machine-learning models on data that is unrepresentative of the target user group.
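To make that feedback loop concrete, here is a minimal sketch in Python (using synthetic data and scikit-learn; the groups, features, and sample sizes are all hypothetical, not drawn from any real system). A classifier trained on data in which one group is heavily underrepresented performs noticeably worse on that group, even though nothing in the code is explicitly biased.

```python
# Minimal sketch: a model only learns the patterns present in the data it is fed.
# "Group B" is underrepresented in training, so the learned rule serves it poorly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the relationship between features and label differs by group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Unrepresentative training set: 1,000 samples from group A, only 20 from group B.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out data from each group.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=2.0)
print("Accuracy, group A:", model.score(Xa_test, ya_test))
print("Accuracy, group B:", model.score(Xb_test, yb_test))
```

Running the sketch typically shows high accuracy for the well-represented group and near-chance accuracy for the underrepresented one - the model simply never saw enough of the second group’s pattern to learn it.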
Coded Bias reveals how historical discriminatory practices are being infused into futuristic devices, such as these facial recognition systems and other decision-making software that mimics cognition. By feeding algorithms racially-biased datasets, we are imposing our own biases and teaching our devices of the future to discriminate. Once a machine-learning model learns through iterations, it is nearly impossible for it to unlearn. These iterations, and specifically this repeated form of learning, largely contribute to the mystery behind these destructive algorithms. They are undoubtedly destructive and pervasive because what is done cannot be undone. It is clear that we must embed ethics, fairness, representation, and justice within the development process itself to ensure that we are intentionally creating ethical technology and that we do not continue discriminatory cycles. Buolamwini advocates for “social change as a priority and not an afterthought” throughout the ML lifecycle. While this approach may seem obvious to some, ethical considerations are often deployed only once the technology is already created or in response to a crisis moment.
Prioritizing social good and maximizing equality upfront, instead of merely return on investment (ROI), can help identify potential technologies and unrepresentative datasets that might perpetuate inequality in the long run. Cathy O’Neil, data scientist and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, explains how even math isn’t neutral and how algorithms are trained to look for patterns. Which patterns a model finds depends on how it is trained and on the datasets chosen by those creating the technology. It is imperative that we ensure that the data is representative of all possible end-users and that the innovators and stakeholders in the room who choose the datasets are oriented toward social good. According to Buolamwini, apart from the data, “who codes matters” as well. The founder of the Algorithmic Justice League advocates for coding oriented toward equality with “full spectrum teams with diverse individuals who can check each other’s blind spots.”
Neurotech and Neuroscience Applications
In 2020, Boston, MA banned the municipal use of facial recognition technology amid concerns about racial bias. Buolamwini testified, detailing how facial recognition technology perpetuates racial bias, discriminates against individuals, and has the potential to make unjust and life-altering decisions. Although facial recognition is one example of algorithmic injustice, artificial intelligence and machine-learning apply to any and every industry and are only growing in adoption across sectors. Of course, this growth in adoption does not exclude the neurotechnology and neuroscience sectors.
Image courtesy of Wikimedia Commons
I recently participated in the University of Washington Center for Neurotechnology’s annual hackathon. In this virtual, 36-hour event, I sharpened my ML skills and networked with incredible innovators in the neurotechnology space. Participants ranged from video game design specialists to seasoned neuroscientists at the height of their careers. For our final hack presentation, I was assigned the “ethical considerations” slide. As I researched and wrote this part of my team’s presentation, I realized that machine-learning used within the neurotech and neuroscience spaces raises a host of ethical concerns. Machine-learning, when applied to any field, adds further layers of complexity and opacity. For example, machine-learning and data science are applied when collecting patient and end-user datasets for a project or study. As the machine “learns,” it establishes a pattern. The process continues and repeats for new conditions, and it becomes increasingly unclear whether the pattern is fair, just, or representative of incoming data. In convolutional neural networks, a form of deep learning, for example, features are continuously extracted from previous layers and applied to new data as it is fed through. Although machine-learning has the potential to accelerate innovation and impact in the neuroscience and neurotech spaces, its application often raises a host of ethical concerns due to the high susceptibility to error in the data workflow and, therefore, a high chance of adverse consequences.
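As a rough illustration of that layered feature extraction, here is a minimal sketch of a convolutional network in PyTorch (the layer sizes, input shape, and two-class output are arbitrary assumptions, not taken from any real neurotech system). Each convolutional layer transforms the output of the layer before it, so whatever patterns the early layers pick up from the training data propagate through everything the network later decides.

```python
# Minimal sketch of stacked feature extraction in a convolutional neural network.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Each convolution extracts features from the output of the previous layer.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low-level edges/textures
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),   # higher-level combinations
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # the learned features depend entirely on the training data
        return self.classifier(torch.flatten(x, 1))

# Example forward pass on a random single-channel 28x28 "image"
model = TinyCNN()
print(model(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 2])
```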
Yes, ML and data science are enormously useful for analyzing and extracting insight from large amounts of information. There are significant applications for ML and data science within neuroscience and neurotech, particularly within connectomics, behavioral analysis, and the quantification of neural activity - but we must be careful. Consider a digital health solution that uses a classifier to identify whether a user is at high risk for schizophrenia. Such a model must be accurate, as it can either aid with preventative measures or, due to the slightest mis-tuning, incorrectly classify individuals, leading to a host of consequences. ML is especially useful when dealing with complex data such as that in the brain-imaging field; however, the mysterious layers of machine-learning models raise significant questions in any space. These concerns will only be heightened in a space such as neurotechnology, which is deeply personal in nature and encourages the interfacing of technology with the nervous system. One can see how even applying machine-learning models to a project or study creates a hazy gray area.
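To see how “the slightest mis-tuning” can matter, here is a minimal sketch using purely synthetic risk scores (no real clinical data or model; the score distributions and thresholds are hypothetical). A risk classifier ultimately turns a continuous score into a yes/no flag, and small shifts in that cutoff trade missed high-risk users against false alarms.

```python
# Minimal sketch: how a small change in a decision threshold shifts error rates.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical risk scores produced by some upstream model (0 = low risk, 1 = high risk).
scores_low_risk  = rng.beta(2, 5, size=1000)   # true low-risk users cluster low
scores_high_risk = rng.beta(5, 2, size=1000)   # true high-risk users cluster high

def error_rates(threshold):
    false_alarms = np.mean(scores_low_risk  >= threshold)  # low-risk users flagged
    missed       = np.mean(scores_high_risk <  threshold)  # high-risk users missed
    return false_alarms, missed

for t in (0.40, 0.50, 0.60):
    fa, miss = error_rates(t)
    print(f"threshold={t:.2f}  false-alarm rate={fa:.2%}  miss rate={miss:.2%}")
```

Which trade-off is acceptable is an ethical judgment as much as a technical one, which is exactly why such choices belong inside the development process rather than after it.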
Ethical Machine-Learning Practices: A Call to Action for the Neuro-Sectors
AI, machine-learning, and data science undisputedly widen and intensify the possible impact of any project - in any industry. For example, Netflix is able to curate content so precisely for its users because of massive data crunching - in other words, the business model survives off of data. Verge Genomics, an AI-driven neuro-therapeutic company, harnesses machine learning and data science to identify scalable therapies for patients with neurodegenerative diseases. There is no doubt that artificial intelligence will continue to merge with neuroscience and neurotechnology. As tasks are automated, energy can be allocated to the most impactful areas, and ROI will increase. AI will accelerate the rise of neurotech companies through more efficient workflows and better-allocated effort.
Image courtesy of Wikimedia Commons
As neuroscience and neurotechnology rapidly advance with the help of artificial intelligence, we must embed responsibility, fairness, and ethics within the data acquisition process. The IEEE has advocated for the standardization of the data acquisition and sharing cycle and has acknowledged the need for neuroethics within the neurotechnology development process. Both neuroethics and data representation have been acknowledged as IEEE standardization priorities, demonstrating the possible positive impact of integrating neuroethics within ML-oriented neurotechnology. In fact, the IEEE is spearheading an effort to create a framework for addressing the ethical, legal, and social implications of neurotechnology. Within the Neuroethics and Neurotech Innovation Collaboratory at Emory University (Dr. Karen Rommelfanger’s Neuroethics Lab), machine-learning applications are being researched alongside the assessment of attitudes toward integrating ethics within the neurotech innovation process. At the University of Washington’s CNT Hackathon, incorporating ethical considerations is encouraged, if not required. The Center for Neurotechnology’s Neuroethics Thrust publishes on embedding neuroethics within neurotechnology, and while participating in the hackathon, I saw that the thrust has also infused ethical responsibility into all of the Center’s efforts.
We can learn from the consequences of inequitable facial recognition technology. We must continue efforts to ensure that the neurotechnological future we face is equitable and representative of our global mosaic of experiences, ideas, backgrounds, and identities. Neuroethics and neurotech innovation need not be separate efforts. If neuroethics is embedded within ML-oriented neurotech and neurotech more broadly, both spaces can grow together and benefit from one another. Placing neuroethics within the context of neuro-innovation will benefit both the end-users from whom data is collected and those leading the data-oriented projects themselves. Neuroethics can protect the user while identifying bottlenecks and future consequences before they occur. I look forward to understanding how the two spaces can work together in the future and am hopeful for the continued interweaving of artificial intelligence and the neuro-sectors.
______________
Ankita Moss graduated from Emory University Summa Cum Laude with a Bachelor of Science in Neuroscience and Behavioral Biology and a minor in Ethics. She is a former copy editor for The Neuroethics Blog and intern at The American Journal of Bioethics Neuroscience. She is currently a researcher at the Neuroethics and Neurotech Innovation Co-Lab and an incoming medical student.
Want to cite this post?
Moss, A. (2021). The Coded Bias within Neurotechnology. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2021/04/the-coded-bias-within-neurotechnology.html