
Grounding Neural Data Privacy on Personal Integrity: A Neurocognitive Proposal

By Abel Wajnerman Paz

This piece is part of a series of featured posts from the 2020 International Neuroethics Society Meeting. It is based on an abstract titled “The Extended Global Workspace: A Proposal for Grounding Neural Data Privacy on Mental Integrity” that won the award for the “Best Oral Presentation.”

Informational privacy, neural data protection and the Chilean Bill

Image courtesy of Guido Girardi Lavín on Youtube.
Given the risks it poses to patients and research subjects, such as denial of health insurance or bank-issued credit (Glannon 2017), third-party access to neural data (ND; data about a person’s neural states, structures, or processes) requires appropriate regulation. A possible approach is to say that, given that ND can be considered ‘personal information’ and that the mentioned risks are prima facie similar to those associated with other kinds of health-related data, it would be enough to protect them through our general right to informational privacy (e.g., Shen 2013).

However, it has been suggested that the specific risks associated with ND collection, analysis, and application will be increased by the rapid development of non-invasive, scalable, and potentially ubiquitous neurotechnologies oriented to healthy users and having various non-clinical (e.g., educational or work-related) applications (the so-called “pervasive neurotechnologies”; Fernandez, Sriraman, Gurewitz, & Oullier 2015). These applications may involve violations of mental privacy (i.e., personal control over access to information about our mental states and processes), and this privacy dimension may not be appropriately protected by existing privacy and data regulations (Ienca & Andorno 2017).

The Morningside Group, a global consortium of interdisciplinary experts advocating for the ethical use of neurotechnology, suggested making ND regulation more stringent than the regulation of other kinds of personal information by protecting it through bodily integrity (Yuste et al. 2017, Goering & Yuste 2016). Treating ND as a body organ or as organic tissue entails, for instance, that consent from individual patients or research subjects is necessary for ND sharing (so opting out of sharing is the default choice), and that these parties can consent only to ND donation for altruistic purposes, not to its commercialization. Interestingly, Chile is on its way to implementing this idea. On October 7th, 2020, the Chilean Senate introduced a Constitutional Reform and a Bill of Law for Neuroprotection. The bill establishes that neural data is a special category of sensitive health data. Specifically, Article 7 of the bulletin presenting the bill (N° 13.828-19) states that:

“the collection, storage, treatment, and dissemination of neuronal data and the neuronal activity of individuals will comply with the provisions contained in Law No. 19.451 regarding transplantation and organ donation, as applicable, and the provisions of the respective health code”1.  

In a previous blog entry, I suggested that the substantial differences between ND and body organs cast doubt on the idea that the former should be covered by bodily integrity. Nevertheless, I argue here that the ND of a subject s may be analogous to neurocognitive properties of her brain. I claim that s’ ND constitutes a domain that is unique to her neurocognitive architecture. If this is correct, ND protection would be grounded on psychological integrity, a right that plausibly entails the same restrictions recommended by the Morningside Group and the Chilean bill.

Mental integrity and informational domains 

Image courtesy of Wikimedia Commons
We can define mental integrity as the principle that no one may alter or manipulate the mental states, processes, or structures of an individual (e.g., modulate her neural computation or information through electrical or magnetic brain stimulation) without her consent (e.g., Ienca & Haselager 2016, Lavazza 2018). The states, processes, or structures covered by a subject s’ psychological integrity are often simply properties that happen to be instantiated in her brain. A token m of a mental state, process, or structure type M (e.g., a particular perceptual experience, decision-making process, or memory system) is part of the mind in which it is instantiated. The information that constitutes s’ ND seems to satisfy this condition. Her ability to think and reason about the mental states of her brain (for instance, through her meta-cognitive processes) suggests that her brain actually instantiates and processes this kind of information. However, this is insufficient for determining that her ND belong to her in the relevant sense.

The problem is that when we say that some piece of personal information I about s belongs to s, i.e., that she has the exclusive right to control this information, we are not talking about a particular instantiation i of I. Any instantiation of I belongs to her in this sense. However, many instantiations of s’ ND are realized in physical structures that are different from her brain/mind, such as digital registers in brain recording devices or other brains (e.g., other brains can process information about our brain through their mind-reading systems). Thus, given that the instantiation relation is not an exclusive relation between s’ ND and her brain, it cannot ground her exclusive control over this information in the same way that it grounds her exclusive control over her mental states and processes. 

This exclusive right to control information about us is most often identified with informational privacy. In this case, we do have an exclusive connection between I and s, which is given by I’s semantic content: any token of I belongs to s because it is about s and not about any other subject2. By contrast, I will argue that s’ ownership of her ND can be grounded on psychological integrity because there is another ontological connection between this information and her brain that satisfies the exclusivity condition. 

There is a second sense in which information can be part of our brain systems, a sense that involves more than the mere instantiation of information. Different kinds of information can shape our cognitive architecture, that is, the set of relatively fixed or stable structures through which mental capacities are implemented (Pylyshyn 1998). A notion often employed in characterizing how different types of information mold our cognitive systems is that of ‘domain specificity’. A system has an informational domain to the extent that there is a kind of information it is dedicated to process (Robbins 2017). 

Of course, defining a domain in s’ brain is often not an exclusive relation between a kind of information I and her brain. The domains of many cognitive systems (e.g. information about visual shape, motion, color, etc.) are widely shared by different brains. This relation can ground s’ exclusive rights regarding I only if I defines a domain in her brain that is unique to her cognitive architecture. I will try to determine whether information about s’ brain satisfies this condition.  

Interoception and the inter-subject variability of cognitive domains

Image courtesy of Clipart Library and Wikimedia Commons

The domains of many cognitive systems are shared not only in the sense that the same mechanisms in different brains are dedicated to process information about the same kinds of properties (what we can call the ‘intensional’ dimension of a domain), but also in the sense that they process information about the instantiation of those properties in the same set of objects (what we can call the ‘extensional’ dimension of a domain). For instance, visual sub-systems of different brains can process information about a common set of properties, such as shape, position, motion, color, etc. (shared intensional domains), instantiated in a common set of objects that are available for all of them in the external environment (shared extensional domains). 

By contrast, the domains of the interoceptive mechanisms of different brains are extensionally different. Although these mechanisms carry information about the same properties (e.g. states of cardiovascular, respiratory, gastrointestinal or endocrine systems), each mechanism will only process information about the instantiation of those properties in the particular subject to which it belongs. For instance, my cardioperceptive mechanism is dedicated to gather and process information about (pressure, rate, rhythm and hormones of) my heart whereas your cardioperceptive mechanism is dedicated to gather and process information about yours. It is in this sense that the domains of interoceptive mechanisms are unique to each organism. That is, these mechanisms have an exclusive ontological relation with the kinds of information defining those domains. 

Image courtesy of Wikimedia Commons

The same idea would apply to ND if the domain of a neural mechanism were defined by information about the particular brain to which it belongs.

The global neuronal workspace as the home of neural data

Dehaene, Changeux and colleagues postulate the existence of a ‘global neuronal workspace’ (GNW), which is constituted by a set of cortical neurons (mostly pyramidal cells) that send projections to many distant areas through long-range excitatory axons, maximizing the ability to exchange information between specialized brain systems. More specifically, the GNW produces a sustained global or large-scale signal reaching and connecting distant processors (Dehaene 2014, pp. 223-225) characterized by high-frequency (gamma-band) oscillations and a massive long-distance phase synchrony (Dehaene & Naccache 2001, Dehaene 2014, pp. 216, 262). Any given brain system can have access, through this global signal, to information about the cognitive states of any other system of the same brain. Information about, say, a perceptual state in my visual system, can be sent through the GNW to, for instance, language, long-term memory, attention, and intention systems. Dehaene hypothesizes that the entry of inputs into this workspace constitutes the neural basis of access to consciousness.

Image courtesy of Dehaene et al.
Thus, just as information about the states of a particular organism constitutes a domain that is unique to the interoceptive mechanisms of that organism, information about the neurocognitive states of a given brain constitutes a domain that is unique to the GNW of that brain. The domain of a particular instantiation of the GNW is extensionally unique because its signals only broadcast information about the neurocognitive states of the brain to which the mechanism belongs. It is in this sense that any particular brain has an exclusive ontological connection to its ND.

The exclusive control a subject has over her ND is not merely grounded on informational privacy but also on psychological integrity (i.e., on the fact that her ND are part of her mind in some sense). This could justify the idea of making ND protection more stringent than the protection of other kinds of personal information (e.g. supporting the prohibition or restriction of its commercialization), as the Morningside Group and the Chilean bill propose. 

1. This is part of the original text; the bulletin on the Chilean Senate’s website includes an English version.

2. For instance, the EU Charter of Fundamental Rights describes privacy as the general right to the protection of ‘personal data concerning him or her’ (Article 8).


  1. Dehaene S, Naccache L (2001) Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition 79:1-37. 
  2. Dehaene, S. (2014). Consciousness and the brain: Deciphering how the brain codes our thoughts. Penguin.
  3. Fernandez, A., Sriraman, N., Gurewitz, B., & Oullier, O. (2015). Pervasive Neurotechnology: A Groundbreaking Analysis of 10,000+ Patent Filings Transforming Medicine, Health, Entertainment, and Business. SharpBrains.
  4. Glannon, W. (2017). The evolution of neuroethics. In Racine, E., & Aspler, J. (eds.) Debates About Neuroethics, 19-44. Springer, Cham.
  5. Goering, S., & Yuste, R. (2016). On the necessity of ethical guidelines for novel neurotechnologies. Cell, 167(4), 882-885.
  6. Ienca, M., & Haselager, P. (2016). Hacking the brain: brain-computer interfacing technology and the ethics of neurosecurity. Ethics and Information Technology, 18(2), 117-129.
  7. Ienca, M., & Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13(1), 1-27.
  8. Lavazza, A. (2018). Freedom of thought and mental integrity: The moral requirements for any neural prosthesis. Frontiers in neuroscience, 12, 82.
  9. Pylyshyn, Z. W. (1998) Cognitive architecture, in Craig, E. (ed.) Routledge Encyclopedia of Philosophy, Taylor and Francis.   
  10. Robbins, P. (2017). Modularity of Mind. In Zalta, E. N. (ed.) The Stanford Encyclopedia of Philosophy (Winter 2017 Edition).
  11. Shen, F. X. (2013). Neuroscience, mental privacy, and the law. Harvard Journal of Law & Public Policy, 36(2), 653-713.
  12. Yuste, R., Goering, S., et al. (2017). Four ethical priorities for neurotechnologies and AI. Nature, 551(7679), 159-163.


    Abel Wajnerman Paz is an Assistant Professor in the Department of Philosophy at the Alberto Hurtado University, Santiago de Chile. He obtained his PhD in Philosophy at the University of Buenos Aires (2015), and was a CONICET (2015-2017) and a FONDECYT Postdoctoral Fellow (2018-2021). His main areas of interest are the Philosophy of Cognitive Neuroscience and Neuroethics. He focuses on epistemic and conceptual issues related to neural coding, computation and information processing, their relation to mental capacities (specifically, perception, thought and consciousness) and their neuroethical implications regarding mental privacy, psychological integrity, and autonomy. 

Want to cite this post?

Wajnerman Paz, A. (2021). Grounding Neural Data Privacy on Personal Integrity: A Neurocognitive Proposal. The Neuroethics Blog.
