
The Seven Principles for Ethical Consumer Neurotechnologies: How to Develop Consumer Neurotechnologies that Contribute to Human Flourishing

By Karola Kreitmair 

Karola Kreitmair, PhD, is a Clinical Ethics Fellow at the Stanford Center for Biomedical Ethics. She received her PhD in philosophy from Stanford University in 2013 and was a postdoctoral fellow in Stanford’s Thinking Matters program from 2013-2016. Her research interests include neuroethics, especially new technologies, deep brain stimulation, and the minimally-conscious state, as well as ethical issues associated with wearable technology and citizen science.  

Brain-computer interfaces, neurostimulation devices, virtual reality systems, wearables, and smartphone apps are increasingly available as consumer technologies intended to promote health and wellness, entertainment, productivity, enhancement, communication, and education. At the same time, a growing body of literature addresses ethical considerations with respect to these neurotechnologies (Wexler 2016; Ienca & Andorno 2017; Kreitmair & Cho 2017). The ultimate goal of ethical consumer products is to contribute to human flourishing. As such, there are seven principles that developers must respect if they are to develop ethical consumer neurotechnologies. I take these considerations to be necessary for the development of ethical consumer neurotechnologies, i.e. technologies that contribute to human flourishing, but I am not committed to claiming that they are also jointly sufficient. 

The seven principles are: 
1. Safety 
2. Veracity 
3. Privacy 
4. Epistemic appropriateness 
5. Existential authenticity 
6. Just distribution 
7. Oversight 

1. Safety

Consumer neurotechnologies must be safe! 


Technology should be safe both when it is used as intended and when under threat from cybersecurity attacks. The bar for ensuring safety is relative to the degree of risk of harm inherent in the technology. For technology to be safe when used as intended, its development and production must be based on valid scientific principles and methods. Failure to do so risks harm to users. Security breaches are particularly risky because neurotechnology stands in an intimate relationship with the brain. 

Consider, for example, neurostimulation devices like the tDCS device Thync. Such devices are advertised as enhancing cognition, relieving symptoms of anxiety and depression, and combating cravings. However, there are considerable risks associated with such technologies, including that unintended areas may be affected, that enhancing one area might hurt another, that effects may be longer-lasting than expected, and that tDCS may cause contact dermatitis and skin burns (Riedel, Kabisch, Ragert, & von Kriegstein 2012; Wurzman, Hamilton, Pascual-Leone & Fox 2016). 

At the same time, neurotechnologies must be safe from cybersecurity threats. In August 2017, the FDA recalled 465,000 pacemakers because they were vulnerable to hacking, which would have allowed malicious actors to deliver inappropriate shocks to the heart or rapidly drain batteries. It is not hard to imagine similar attacks being launched on consumer neurotechnologies. For instance, Nervana is a neurostimulation device that stimulates the vagus nerve with gentle electrical impulses. If hackers were to hijack the device in order to amplify or intensify the stimulation, this could be extremely harmful.
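One standard mitigation for command-injection attacks of this kind is to authenticate every command sent to the device, so that a hijacked connection cannot substitute its own stimulation parameters. The sketch below illustrates the general idea with an HMAC over a shared secret; the command fields and pairing scheme are hypothetical illustrations, not a description of any actual device's protocol.

```python
import hashlib
import hmac
import json

# Hypothetical device-specific secret, provisioned once at pairing time.
SHARED_KEY = b"device-specific secret provisioned at pairing"

def sign_command(command: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the device can reject forged commands."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": command, "tag": tag}

def verify_command(signed: dict) -> bool:
    """Device-side check: recompute the tag and compare in constant time."""
    payload = json.dumps(signed["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

msg = sign_command({"stimulation_mA": 1.5, "duration_s": 120})
assert verify_command(msg)

# A tampered command (e.g. an attacker boosting the current) fails verification.
msg["payload"]["stimulation_mA"] = 9.0
assert not verify_command(msg)
```

Authentication of this sort does not by itself prevent replay or battery-drain attacks, which is part of why safety review must consider the whole attack surface, not a single mechanism.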

2. Veracity

Consumer neurotechnologies must not promise results they do not deliver! 

Flouting the principle of veracity violates a user’s autonomy, because it prevents her from making an informed decision about whether to use a particular technology. In an ideal world, all consumer neurotechnology would provide a valuable benefit to the user. While this may be too high a bar for consumer products in our actual world, requiring honesty with respect to the value the technology provides is not. 


There are currently a number of EEG sensor headsets on the market, such as the Neurosky Mindwave, that purport to measure attention, calmness, mental effort, appreciation, pleasantness, cognitive preparedness, and creativity. These devices can be combined with diverse apps that claim to allow users to train their performance along these dimensions. Take, for instance, apps meant to increase a user’s creativity through neurofeedback. There is limited evidence that ‘creativity’, rather than a broader ‘heightened internal awareness’, can be reliably captured through EEG (Fink, Schwab & Papousek 2011). Moreover, research that supports the claim that EEG neurofeedback training protocols can improve creativity tends to focus on effects in musical performance (Gruzelier et al 2014), a fact that is generally not stated explicitly on the EEG headset training websites.
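For context on how such scores are typically derived: consumer EEG metrics like ‘attention’ or ‘calmness’ are generally computed from the relative power in frequency bands of the raw signal (e.g. alpha, roughly 8-12 Hz). The sketch below shows that computation in simplified form; the band boundaries, single channel, and synthetic signal are illustrative assumptions, not Neurosky’s actual algorithm.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Fraction of the signal's total spectral power in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = power[(freqs >= lo) & (freqs <= hi)].sum()
    return band / power.sum()

# Synthetic one-second 'EEG' trace dominated by a 10 Hz (alpha) oscillation.
fs = 256.0
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

alpha = band_power(eeg, fs, 8, 12)  # most of the power lands in the alpha band
```

The gap the veracity principle points at is precisely here: a number like `alpha` is easy to compute and display, but labelling it ‘creativity’ requires evidence the computation does not itself supply.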

3. Privacy

Consumer neurotechnologies must be private! 

Neurotechnology can capture massive amounts of highly sensitive data about users. Brain wave data may soon give insight into contentful mental states, i.e. ‘what’ a particular user may be thinking or experiencing, making it a new class of data. There have been calls to regulate the handling of this exploding trove of neurodata (Yuste et al 2017), and I agree with this sentiment. In the meantime, however, users still have a right to have their brain data remain private, which includes, for instance, not having their brain data sold or shared in order to target products at them. Concerns regarding cybersecurity threats also factor into privacy considerations, as such data can be a prime target for hackers.

EEG BCI devices, such as the Emotiv device, gather vast amounts of information regarding a user’s arousal, valence, frustration, and focus, as well as vast amounts of raw EEG data. These data are used for a number of purposes, such as real-time modelling of brain activity, flying drones with one’s mind, or translating thoughts to speech.

Such data must be handled in a responsible manner that respects a user’s privacy. Absent new regulation, it is the developers’ responsibility to ensure this.
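One concrete piece of what ‘responsible handling’ might mean is that stored or shared records should be keyed to a pseudonym rather than a user’s identity. The sketch below shows one common approach, a salted hash; it is an illustration of the general technique, not a claim about what Emotiv or any other vendor actually does.

```python
import hashlib
import secrets

# The salt is generated once and kept separate from the stored records.
SALT = secrets.token_bytes(16)

def pseudonymize(user_id: str) -> str:
    """Replace a user identity with a salted hash before storing brain data."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()

record = {
    "user": pseudonymize("alice@example.com"),
    "channel": "AF3",
    "raw_eeg_uV": [4.1, -2.3, 0.7],
}

# The same user maps to the same pseudonym under one salt, so records can
# still be linked for analysis...
assert record["user"] == pseudonymize("alice@example.com")
# ...but the stored record no longer contains the identity itself.
assert "alice" not in str(record)
```

Pseudonymization is not anonymization: raw EEG itself may prove re-identifying, which is part of why calls for regulation of neurodata go beyond what developers can solve technically.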

4. Epistemic appropriateness

Consumer neurotechnology should preserve epistemic appropriateness! 


Much of this technology functions by upending traditional epistemic pathways. It mediates how we acquire information either about ourselves or about the world. Traditionally, we gain such information through direct perception or introspection, relying on our embeddedness and embodiedness to inform us. Neurotechnology instead encourages the user to gather information through tracking and measurement, such as wearable tracking or neurofeedback, or through the generation of visual or tactile sensory input. A literature is emerging suggesting that these altered epistemic pathways may have profound impacts on an individual’s psychology, phenomenology, and even physiology. 

Tracking technology, including wearable devices and neurotracking devices such as Hexoskin smart clothing or the Interaxon Muse, can track location, activity, heart rate, breathing volume, EEG, EKG, and sweat composition. It is this kind of technology that permits so-called ‘self-quantification’ (Wolf 2009). Evidence is emerging, however, that tracking an activity and focusing on measuring its output can diminish the enjoyment of the experience. In studies, individuals who tracked the number of steps on their walks through a forest using a fitness tracker did accumulate more steps, but also enjoyed the experience less, because it felt more like work (Etkin 2016). Moreover, there is concern that tracking and focusing on external means of gaining self-knowledge may be counterproductive to experiencing phenomena such as ‘flow’ and ‘being-in-the-moment’, and may contribute to alienation from embodiedness and embeddedness (Kreitmair, Cho & Magnus 2017). 

Consider also virtual reality systems, such as the Oculus Go, which retails at $199 and is hitting the market in 2018. Such systems generate alternative visual and auditory phenomenological experiences. They can also be extended to generate embodied virtual reality experiences, including ‘tactile’ or ‘haptic responsiveness’. (For instance, the NullSpace haptic suit includes 32 independently activated vibration pads on the chest, abdomen, shoulders, arms, and hands, which can activate 117 built-in haptic effects.) Such virtual reality systems may have effects beyond what is intended. Physiologically, evidence from studies involving the rubber hand illusion suggests that perceptual illusions can have effects on the immune system, such as increasing the histamine reactivity of the real body part (Barnsley et al 2011). Such illusions are also induced in virtual reality and thus are likely to cause similar immune responses in these settings. Moreover, the phenomenological effects of tactile responsiveness in virtual reality are not yet known. It is possible that embodied virtual reality may affect a user’s being-in-the-world in ways that are disorienting and alienating. 

5. Existential authenticity

Consumer neurotechnology should respect existential authenticity. 


This consideration also arises from my previous observation regarding the shift in epistemic access, namely that neurotechnology encourages the user to gather information through tracking and measurements, e.g. wearable tracking and neurofeedback, or through the generation of visual, auditory, or tactile sensory input. 

Neurotechnology mediates experiences. In a sense, a user is experiencing the representation of an experience (rather than the experience itself). This is either a quantified representation, when she accesses states of the self and the world through tracking technology, or a virtual representation, when she accesses them through virtual reality technology. It is a different sort of experience, in that the user is not engaging in an authentic way with reality. 

This raises existential concerns. Experiences are that within which we ground our self-fashioning. As Kierkegaard says, the project of being a human is that of ‘becoming what one is’. We are always a ‘becoming’, never a ‘being’. What happens if we fashion ourselves on the basis of inauthentic experiences? Specifically, can we authentically fashion ourselves when the tactile, auditory, and visual input we receive is incongruous with reality? Can we authentically fashion ourselves when we acquire beliefs about ourselves through quantified data? 

Take the example of virtual reality. Sartre, in Anti-Semite and Jew (1948), states that “Authenticity consists in having a true and lucid consciousness of the situation, in assuming the responsibilities and risks it involves”. This raises the question of what effect the shift in the kind of thing we are experiencing might have on our moral sensibilities. What happens when we fashion our moral sensibilities in an experience space that is unbound by the constraints of reality? What happens when this technology becomes widespread among children? These are the kinds of issues that we, as neuroethicists, should be thinking about.

6. Just distribution

Consumer neurotechnologies must be justly distributed. 

These technologies constitute a good. As such, they must be justly distributed. Without committing myself to a particular theory of just distribution, it is nonetheless the case that how widely an individual technology ought to be accessible depends on the value it bestows. Very beneficial technologies must be widely accessible, while less beneficial technologies may be available only to a niche market. The ‘digital divide’ already raises concerns about equitable access to the internet across socioeconomic groups. A similar divide will likely arise with neurotechnologies. 


An example is neurostimulation devices, such as the Starstim neurostimulator, which is advertised as performing transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS), with the aim of enhancing, among other things, executive functions, language, attention, learning, memory, mental arithmetic, and social cognition. 

Justice demands that if this, or any other, direct-to-consumer neurotechnology is an effective intervention that can be used to gain a benefit in a competitive environment, then it should be accessible in a just fashion.

7. Oversight

Consumer neurotechnologies must be subject to oversight! 

Oversight should address the six dimensions discussed above and should be proportional to the extent to which a technology is implicated in them. Of course, certain oversight mechanisms already exist. In the US, for instance, some of the technologies discussed most likely ought to be regulated by the FDA. The situation here resembles that of the early days of direct-to-consumer genomics. When companies like 23andMe began, they operated in an unregulated market. However, thanks in part to the work of bioethicists (Magnus, Cho & Cook-Deegan 2009), the FDA stepped in and began to regulate 23andMe and others like it. 

For other technologies, those that would not be covered by FDA regulations, we need stakeholders to develop industry guidelines. Specifically, stakeholders need to make judgment calls about where along the six dimensions the thresholds of acceptability should fall. If a particular technology falls below such a threshold, that consumer neurotechnology should not be made available. Stakeholders here include users, parents, developers, medical experts, cybersecurity experts, and neuroethicists.

In conclusion, these are the seven dimensions that the development of ethical consumer neurotechnologies must take into consideration: safety, veracity, privacy, epistemic appropriateness, existential authenticity, just distribution, and oversight. Only when these dimensions are considered will consumer neurotechnologies truly contribute to human flourishing. 

Note: These principles are limited to consumer neurotechnologies. They may therefore not hold for clinical, military, or third-party applications. They also do not necessarily apply to pharmacological technologies. 


1. Barnsley, N., McAuley, J. H., Mohan, R., Dey, A., Thomas, P., & Moseley, G. L. (2011). The rubber hand illusion increases histamine reactivity in the real arm. Current Biology, 21(23), R945-R946. 

2. Etkin, J. (2016). The hidden cost of personal quantification. Journal of Consumer Research, 42(6), 967-984. 

3. Fink, A., Schwab, D., & Papousek, I. (2011). Sensitivity of EEG upper alpha activity to cognitive and affective creativity interventions. International Journal of Psychophysiology, 82(3), 233-239. 

4. Gruzelier, J. H., Foks, M., Steffert, T., Chen, M. L., & Ros, T. (2014). Beneficial outcome from EEG-neurofeedback on creative music performance, attention and well-being in school children. Biological Psychology, 95, 86-95. 

5. Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), 5. 

6. Kreitmair, K. V., & Cho, M. K. (2017). The neuroethical future of wearable and mobile health technology. Neuroethics: Anticipating the Future, 80-107. 

7. Kreitmair, K. V., Cho, M. K., & Magnus, D. C. (2017). Consent and engagement, security, and authentic living using wearable and mobile health technology. Nature Biotechnology, 35(7), 617-620. 

8. Magnus, D., Cho, M. K., & Cook-Deegan, R. (2009). Direct-to-consumer genetic tests: beyond medical regulation? Genome Medicine, 1(2), 17. 

9. Riedel, P., Kabisch, S., Ragert, P., & von Kriegstein, K. (2012). Contact dermatitis after transcranial direct current stimulation. Brain Stimulation, 5(3), 432-434. 

10. Sartre, J. P. (1948). Anti-Semite and Jew (G. J. Becker, Trans.). New York: Schocken Books. 

11. Wexler, A. (2016). The practices of do-it-yourself brain stimulation: implications for ethical considerations and regulatory proposals. Journal of Medical Ethics, 42(4), 211-215. 

12. Wolf, G. (2009, June 22). Know thyself: Tracking every facet of life, from sleep to mood to pain, 24/7/365. Wired. 

13. Wurzman, R., Hamilton, R. H., Pascual-Leone, A., & Fox, M. D. (2016). An open letter concerning do-it-yourself users of transcranial direct current stimulation. Annals of Neurology, 80(1), 1-4. 

14. Yuste, R., Goering, S., Bi, G., Carmena, J. M., Carter, A., Fins, J. J., … & Kellmeyer, P. (2017). Four ethical priorities for neurotechnologies and AI. Nature, 551(7679), 159.

Want to cite this post?

Kreitmair, K. (2018). The Seven Principles for Ethical Consumer Neurotechnologies: How to Develop Consumer Neurotechnologies that Contribute to Human Flourishing. The Neuroethics Blog. Retrieved on , from


  1. The principle of veracity is one that is easily overlooked, but in my view it is especially important in the scientific world. Confirmation bias is easy to fall into, so the principle of veracity must be enforced not only against people knowingly fabricating claims and propaganda about their product, but also for people who believe they are doing things the right way. Let me illustrate veracity with an example of how easily scientists can create false data without even noticing, and how some strategies can help them overcome it. Take ant research: it is commonly thought that ants are more aggressive toward ants from other nests than toward ants from their own nest. Nonetheless, as reported in a journal article [1] titled Confirmation Bias in Studies of Nestmate Recognition: A Cautionary Note for Research into the Behaviour of Animals, “Non-blinded experiments were less likely to report activity that fell outside what they believed to be regular, and reported twice as much aggression by ants that were surrounded by non-nestmates.” So researchers tended to ignore aggression among ants from the same nest and to exaggerate aggression between ants from different nests. Mistakes like this in science are dangerous because they create false discoveries that can affect future research or even the development of a cure for some disease. Removing bias (for example, by a regulation requiring double-blinded experiments where possible) is a key factor in improving the veracity of scientific data.

    1. Van Wilgenburg E, Elgar MA (2013) Confirmation Bias in Studies of Nestmate Recognition: A Cautionary Note for Research into the Behaviour of Animals. PLoS ONE 8(1): e53548.

