
Addressing the Social Dilemma: Identification and Examination of Western Individualistic Biases in Neurotech Innovation

By Linzie Taylor

Image courtesy of pxfuel

Netflix recently released the documentary The Social Dilemma, which features interviews with experts in technology who were involved in the creation of a variety of social media platforms, like Facebook and Twitter. The experts urge viewers to take a more extensive look at how technology is being made and used, as well as how that tech is affecting society. In the opening remarks, Tristan Harris, a former employee of Google, states that there is an unnamed problem prevailing within the tech industry: that developers did not intend to create platforms and products that would result in mass surveillance, increased disinformation, and increased psychological dependency—but the current reality is that they did. So, what is the underlying issue behind these effects? 

I believe I have an answer to what the driving force behind this prevailing problem may be.  

Western individualist philosophy originated in the period following the Dark Ages (Mhlambi, 2020). Europeans sought a way out of that era through the Enlightenment, a period that emphasized rational thought; through rationalization, a space opened for Europeans to place their own goals and advancement above those of others. This orientation resulted in global colonization and the enslavement of others for the benefit of Europeans. Fast-forward hundreds of years, and biases toward this way of thinking remain prevalent in how technology is handled in a global context. 

Within the documentary, the tech developers mention how their creations eventually fell out of their hands and began snowballing into what they are today. One example is Facebook selling user data without users' knowledge to Cambridge Analytica, which used this information to influence elections. We are entering a world in which technology is the new avenue for surveillance and domination. While watching the documentary, I could not help but notice that an overwhelming number of the speakers were white men, with a few exceptions. Perhaps this overrepresentation of white men can partly explain why technology has so easily been re-tooled for surveillance and control. Such a narrow perspective, neglectful of the past and devoid of reflection, makes plain how institutional biases toward Euro-American goals perpetuate our all-too-common systems of oppression and inequality; tech is no exception. 

It is important to note that I, too, consider myself a Western individual. It is not that this way of thinking is necessarily wrong—rather, if it goes unchecked and unbalanced, then unintended harm towards humanity will continue to arise, like what is explained in The Social Dilemma. Thus, it is imperative to investigate how Western individualist biases manifest within neurotechnology and what they look like. 

As part of my graduate neuroscience program’s directed study with Dr. Karen Rommelfanger, I am exploring the influence of white, Western individualistic biases in neuroinnovation. This is a part of a broader project in the Rommelfanger Neuroethics and Neuroinnovation Collaboratory exploring ethics roadblocks in private sector neurotechnology innovation. 

Western biases 

Dr. Rommelfanger and others have advocated for a culturally aware neuroethics (Global Neuroethics Summit Delegates et al., 2018; Rommelfanger et al., 2019). Within these articles, the authors explore how the values and practice of neuroscience, and of science in general, carry a bias toward Western bioethical and philosophical perspectives. Specifically, they use East Asian philosophical viewpoints to illustrate how the brain is conceptualized and the predominant Western bias of centering and privileging the individual over the collective. 

Western individualism defines a person in terms of rationality (Mhlambi, 2020). Within this framework, the goals and values of Europeans and Euro-Americans are centered over those of the "other," so that subjugation of the other is rationalized on the understanding that it will yield profit; that profit, in turn, serves as justification for the rationale (Mhlambi, 2020). Western individualist thinking produced colonialism and capitalism, and its influence is visible in the dominant culture of global society. 


With this understanding of individualism, we can examine how these narratives of who a person is, and thus whose goals matter, impact ethical decision-making, specifically in neurotech innovation. The emphasis on self-advancement over the other and over society at large is reminiscent of Western individualism. Direct-to-consumer (DTC) neurotechnologies highlight how monetary goals and incentives are valued more highly than reducing societal harm through regulation (Wexler & Reiner, 2020). 

Cost vs. benefit is another way ethical consideration is applied to neurotechnology. For example, it has been proposed that neurotechnology be used to enhance eyewitness testimony (Chang & Buccafurni, 2010); however, this overlooks power imbalances within the justice system, which shape who carries the burden of costs and who benefits. 

There is another problem, and it revolves around how science understands its role in society. Without proper reflection on the power and responsibility inherent in the authority held by science, abuses can happen to individuals and societies alike. For example, DTC neurotechnology companies can abuse the status of science as a credible sphere of knowledge dissemination by vouching for their products even when there is no specific research to support them (Coates McCall et al., 2019). Another important consideration is how neuroscience and neurotechnology, as a collection of individuals in a field, view themselves in relation to society. One could argue that the prominent orientation of neuroscientists is to treat society largely as a separate consumer of neuroscience, rather than to consider themselves part of that end-user community. How neuroscientists think about the other in relation to the self is at play when considering how to create neurotechnology with societal good in mind. 

How might biases be manifesting? 

Bias is a part of being human, so biases within neurotechnology reflect biases of neuroinnovators and the dominant views of the culture in which they operate. Identifying and examining the values of the dominant culture can give insight into how these biases develop and are propagated in technology.  

Challenging these unhealthy biases in our culture can provide insight for solutions on how to mitigate bias for more impactful, societally beneficial neurotechnology. 

I conducted a literature review examining white, Western bias in neurotech innovation. On a large scale, the main issues noted in the texts are the influence of monetary incentives, adoption of the market model, lack of regulation, and increasing disparities (Chen et al., 2019; Coates McCall et al., 2019; Garcia-Rojas, 2016; Mohamed et al., 2020; Wexler & Reiner, 2020). To a lesser degree, problems within neuroinnovation revolve around what it means to be a person and what constitutes "normal." These biases produce a disconnect between science and technology and the communities they are intended to serve. This separatist understanding of science and communities is reminiscent of Western individualist thinking. 

What might some solutions be? 

Image courtesy of Wikimedia Commons
Solutions provided by neuroethicists to address these biases highlight the importance of reflection, inclusion of a wide range of entities (including communities), regulation, and openness and transparency throughout the innovation process. Going further, developers of neurotechnology should reflect critically on their individual ethics. To address ethical flaws, Mhlambi proposes (in the context of AI tech) a shift from personhood based on rationality to a relation-based personhood with foundations in Ubuntu, a South African philosophy (Mhlambi, 2020). In this context, the self is understood in relation to others, such that the understanding of being a person is tied to the community (Mhlambi, 2020). This thinking emphasizes the interconnectedness of life, and through this lens, we can see how historical decisions impact the present. Given the historical dominance of Western individualism, and thus of colonialism and capitalism, it is not surprising to see present-day inequalities, even hundreds of years after colonialism began. 

In the context of neurotechnology, adopting a more relation-oriented definition of personhood would mean emphasizing social cohesion, reducing harm and inequality, and upholding the human right to be part of the collective. This can be achieved by including and elevating voices from disenfranchised communities. Challenging how we have traditionally been taught to conceptualize what it means to be human, and who or what gets to be human, can profoundly impact our ethical considerations and decision-making within neurotech innovation. 

There is a present, urgent need to finally learn from history. Through proper reflection on the past and how it influences decision-making in the present, and thus the creation of the future, we can avert questions like those asked at the beginning of The Social Dilemma (i.e., “How did we get here?”). Instead, knowledge of the past can be used to inform future decisions so that societal good is emphasized—not just what is good for a few individuals. In this way, years down the road, we will not still be wondering how technology got away from us.  


  1. Amadio, J., Bi, G. Q., Boshears, P. F., Carter, A., Devor, A., Doya, K., Garden, H., Illes, J., Johnson, L. S. M., Jorgenson, L., Jun, B. O., Lee, I., Michie, P., Miyakawa, T., Nakazawa, E., Sakura, O., Sarkissian, H., Sullivan, L. S., Uh, S., … Singh, I. (2018). Neuroethics Questions to Guide Ethical Research in the International Brain Initiatives. Neuron, 100(1), 19-36. 
  2. Chang, P. L., & Buccafurni, D. (2010). Is invading the sacred for the sake of justice justified? AJOB Neuroscience, 1(3), 48-50. 
  3. Chen, I., Szolovits, P., & Ghassemi, M. (2019). Can AI help reduce disparities in general medical and mental health care? AMA Journal of Ethics, 21(2), E167-179. 
  4. Coates McCall, I., Lau, C., Minielly, N., & Illes, J. (2019). Owning Ethical Innovation: Claims about Commercial Wearable Brain Technologies. Neuron, 102(4), 728-731. 
  5. Garcia-Rojas, C. (2016). The Surveillance of Blackness: From the Trans-Atlantic Slave Trade to Contemporary Surveillance Technologies.
  6. Mhlambi, S. (2020). From rationality to relationality: Ubuntu as an ethical & human intelligence governance. Carr Center. 
  7. Mohamed, S., Png, M. T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy and Technology. 
  8. Wexler, A., & Reiner, P. (2020). Oversight of direct-to-consumer neurotechnologies. Science, 363(6424), 234-235. 

Linzie Taylor (they/she) is a doctoral student in Emory's Neuroscience program. Their research interests include investigating the neurological impact of racial trauma on the individual. Linzie aims to develop a Black Feminist Thought-informed neuroethical framework to apply to their research examining the neurophysiological and psychological influence of race-based stressors. Overall, Linzie hopes to introduce Black Feminist Thought to neuroscience so that research is designed in a more inclusive manner from the inception of experimentation. 

Want to cite this post?

Taylor, L. (2020). Addressing the Social Dilemma: Identification and Examination of Western Individualistic Biases in Neurotech Innovation. The Neuroethics Blog. Retrieved on , from

