
Caveats in Quantifying Consciousness

This piece belongs to a series of student posts written during the Neuroscience and Behavioral Biology Paris study abroad program taught by Dr. Karen Rommelfanger in June 2018.

By Ankita Moss

Image courtesy of Flickr user, Mike MacKenzie.

As I was listening to a presentation during the 2018 Neuroethics Network Conference in Paris, a particular phrase resonated with me: we must now contemplate the existence of “the minds of those that never lived.”

Dr. John Harris, a professor at the University of Manchester, discussed both the philosophical and practical considerations of emerging artificial intelligence technologies and their relationship to human notions of the theory of mind, or the ability to interpret the mental states of both oneself and others and use this to predict behavior.

Upon hearing this phrase and relating it to theory of mind, I immediately began to question my notions of “the self” and consciousness. For UC Berkeley philosopher Dr. Alva Noë, one manifests consciousness by building relationships with others and acting deliberately on the external environment in some capacity. Conversely, a group of Harvard scientists claim they have found the mechanistic origin of consciousness: a connection between the brainstem region responsible for arousal and regions of the brain that contribute to awareness.

Having explored theory of mind in my introductory psychology class, I had assumed that I would be somewhat familiar with the material presented during the talks. However, Dr. John Harris offered a scenario that I had never considered, despite its plausibility: how will humans convince emerging artificial minds that we, too, can independently act on the world, and thus have a consciousness as well? What if these artificially constructed, conscious forms of matter come to believe that they “discovered” humans, just as the Europeans decided that they had “discovered” the Americas? What if history repeats itself as innovations progress?

As Dr. Harris posed these piercingly thought-provoking questions to the audience, I suddenly stopped taking notes and attempted to grapple with the potential soon-to-be reality. What was once seemingly far-fetched was now a plausible reality that humanity will have to evaluate using both philosophical and practical measures. This evaluation must encompass profound considerations that will redefine what it means to be human.

Humans make the argument that animals may have a lower level of consciousness in an attempt to justify animal testing and cruelty. Some may say that this is justified given that humans have a “higher level” of consciousness. In making this argument, we unconsciously strip animals of some of the rights that we take for granted. For example, some non-human primates are seen as models for human consciousness, while early invertebrates are labeled as having very low conscious capacity. Dr. Michio Kaku, in his book “The Future of the Mind,” defines his space-time theory of consciousness as the “process of creating a model of the world using multiple feedback loops in various parameters in order to accomplish a goal.” In this system, organisms are assigned a number indicating a low or high level of consciousness. Humans have the highest level because of our developed prefrontal cortex and ability to construct abstract thoughts and bring them to fruition in our external environment. If consciousness is rooted in theory of mind, this “higher” and “more advanced” cognitive state is essentially defined by our ability to interact with our surroundings. This definition justifies the argument that humans have more autonomy and control over the external environment than any other living organism.

Image courtesy of Flickr user, Bovee and Thill.

However, Dr. Harris’s point made me question this accepted view of consciousness. As the race to develop “more-human” artificial intelligence continues, humanity will have to define what it means to have a higher theory of mind and whether an artificially intelligent being could one day gain a higher level of rationality and awareness than a human being. This possibility raises concerns about the very idea of quantifying consciousness. If we were to create a more rational, partly artificial being with a higher theory of mind or integrative intelligence, such as the interface proposed by Elon Musk’s Neuralink, the coin could very well be flipped, and humans would become the animals with the definitive “lower” assigned number of consciousness. A perceived lesser consciousness would strip humans of control and give more complex beings a greater power, one justifiable by the very model that humanity constructed. Humanity has yet to encounter this dilemma, but it may become unavoidable if this technology advances through the ill-advised attempt to birth more intelligent and complex beings.

In considering the trajectory of artificial intelligence research and its consequences, we must also take into account a possible loss of control resulting from the stripping or lessening of human rights as we now know them. Turning back to the considerations offered by Dr. Harris, one must take note of how the rights of peoples have been violated over the course of history. The Europeans stripped Native Americans of basic human rights, as well as rights to the land they inhabited first. Through the lens of the Europeans, “higher level” weapons and tools granted them superior power. If history does indeed repeat itself, and if superiority and dominance are inevitable, the prospect of fully conscious artificial intelligence threatens basic human rights and autonomy. If we already justify our actions towards those with “lesser intelligence” on the grounds of our own supposed “superiority,” then the same logic should have us seriously considering the possibility that we are creating a toxic template that will one day put us on the losing side.

As of now, one of the most controversial feats in artificial intelligence is being undertaken by the 2045 Initiative, which claims it will create a sentient robot that can house human personality. The initiative’s aim is to create a race of enhanced humans, and with that, reinvent the fields of ethics, psychology, science, and even metaphysics. There may come a time when humans will have to question what it truly means to be conscious. Perhaps, it is not a matter of “if it will happen” but “when it will happen.”

So I ask, do emerging artificial intelligence technologies and initiatives proposing to create hybrid humans serve as the platform for the extension of humanity or as a sentence for its end?

Ankita Moss is an undergraduate student at Emory University majoring in Neuroscience and Behavioral Biology. Ankita has had a strong interest in neuroethics since high school and hopes to contribute to the field professionally in the future. Aside from neuroscience and neuroethics, she is also very passionate about start-ups and entrepreneurship and founded the Catalyst biotechnology think-tank at Emory Entrepreneurship and Venture Management. Ankita hopes to one day navigate the ethical implications of neurotechnology startups and their impact on issues of identity and personhood.

Want to cite this post?

Moss, A. (2018). Caveats in Quantifying Consciousness. The Neuroethics Blog. Retrieved on , from

