The Neuroethics Blog Series on Black Mirror: White Christmas
By Yunmiao Wang

Humans in the 21st century have an intimate relationship with technology. Much of our lives are spent being informed and entertained by screens. Technological advancements in science and medicine have helped and healed in ways we previously couldn’t dream of. But what unanticipated consequences may be lurking behind our rapid expansion into new technological territory? This question is continually explored in the British sci-fi TV series Black Mirror, which provides a glimpse into the not-so-distant future and warns us to be mindful of how we treat our technology and how it can affect us in return. This piece is the final installment in a series of posts that discuss the ethical issues surrounding the neurotechnologies featured in the show and compare them with similar technologies already affecting us in the real world.
SPOILER ALERT: The following contains plot spoilers for the Netflix television series, Black Mirror.
Plot Summary
“White Christmas” begins with a man, Joe Potter, waking up at a small, isolated outpost in the snowy wilderness while “I Wish It Could Be Christmas Everyday” plays in the background. Joe walks into the kitchen and finds Matt Trent cooking Christmas dinner. Matt, who seems bored with the mundane lifestyle of the outpost, asks Joe how he ended up there, a conversation they have somehow never had in their five years together. Joe becomes defensive and is reluctant to share his past, asking Matt the same question in return. To encourage Joe to open up, Matt shares a few stories about himself.
Matt first tells a story in which he worked as a dating coach, teaching socially awkward men like Harry how to seduce women. A remote technology called EYE-LINK enables Matt, along with eight other students, to watch through Harry’s eyes and coach him as he approaches women. In this fashion, Harry meets a woman named Jennifer at a corporate Christmas party. Through a series of tragically ironic circumstances, Jennifer kills both Harry and herself, believing that they are both troubled by voices in their heads. Matt and the rest of the students watching through EYE-LINK panic as Harry dies, and they scramble to destroy any evidence that they were ever involved with him.
[Image courtesy of Flickr user Lindsay Silveira.]
Joe finally shares what brought him to the outpost, beginning with the admission that his girlfriend’s father never liked him. Joe and Beth were in a serious relationship until his drinking problem slowly pushed Beth away. On a double date with their friends Tim and Gita, Beth seems upset, which drives Joe to drink more. After the dinner, a drunken Joe discovers that Beth is pregnant and congratulates her. Instead of being happy, Beth says she does not want to keep the baby, which angers Joe. After a heated argument, Beth blocks Joe through a technology called Z-EYE and leaves him. While blocked, Joe can only see a blurry grey silhouette of Beth and cannot hear her. He spends months looking for her and writing apology letters without receiving any response. Joe also finds out that Beth has kept the baby, but the Z-EYE block prevents him from seeing the child as well. One day he sees Beth’s image in the news and learns that she has died. Because the block is lifted upon Beth’s death, the grieving Joe resolves to meet his child for the first time. He waits outside Beth’s father’s cabin at Christmastime with a gift for the child. To his surprise, the child he has been longing to see is of Asian heritage, which neither he nor Beth has. Joe realizes that Beth was having an affair with their friend Tim. Shocked, he follows the child and confronts Beth’s father. Out of anger, Joe kills Beth’s father with the snow globe he brought as a gift and runs away in a panic, leaving the little girl alone on a snowy day.
In the present, Matt asks Joe if he knows what happened to the kid. Joe finally breaks down and confesses that he is responsible for the deaths of both Beth’s father and the child. Matt disappears soon after the confession, and Joe realizes that the outpost is the same cabin where Beth’s father and daughter died. It turns out that everything so far has taken place inside a “cookie,” a digital copy of Joe’s consciousness, created so that he would confess his crime. Matt, who faced charges for his involvement in Harry’s death, helped the officers extract Joe’s confession in exchange for his own freedom. Although Matt is released from the police station, he is blocked from everyone through Z-EYE and will never be able to interact with anyone in the real world. Back in the police station, as an officer leaves work for Christmas, he sets the time perception of Joe’s cookie to 1,000 years per minute, leaving the copy of Joe wandering the cabin as “I Wish It Could Be Christmas Everyday” plays endlessly in the background.
Current Technology
[Google Glass Enterprise Edition; image courtesy of Wikimedia Commons.]
The EYE-LINK that allowed Harry to livestream his view to multiple people and the Z-EYE that blocks Matt from the rest of the world are closer to reality than fiction. Google Glass, despite the failure of its first version, made a second attempt and returned this year as the Glass Enterprise Edition [1]. Given the privacy controversy and the criticism of its predecessor’s wearability, the newer version has switched gears to become an augmented reality tool for enterprise. For example, according to a report by Steven Levy, the Glass has been adopted by an agricultural equipment manufacturer to provide workers with detailed instructions on the assembly line, which has dramatically increased both yield and quality [1]. However, this pivot toward industrial partners does not necessarily mean the end of smart glasses for general consumers. If anything, it might be the beginning of an evolution for smart glasses.
While Google Glass is not an implanted device, visual prosthetics built into the visual system are no longer a dream. There has been success in restoring near-normal eyesight in blind mice [2], as well as human trials of vision rehabilitation through implants [3]. It may be just a matter of time before we see the birth of technology similar to EYE-LINK. After all, many people are already used to sharing their lives on social media, in real time, through their phones. If built-in sensory devices that augment our perception become reality, blocking others through signal manipulation would not be much of a challenge either.
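To see why the blocking itself would be the easy part, consider a toy sketch (not from the show; the frame representation, function names, and grey value are all illustrative assumptions). If a device can already segment out which pixels in the visual feed belong to a blocked person, reducing that person to a flat grey silhouette is a trivial per-pixel substitution:

```python
# Toy illustration of Z-EYE-style "blocking". Assumes the hard part,
# segmenting the blocked person, is already done and given as a mask.
# A frame is a grid of (R, G, B) tuples; masked pixels are greyed out.

GREY = (128, 128, 128)

def apply_block(frame, mask):
    """Return a copy of the frame with masked pixels replaced by grey."""
    return [
        [GREY if mask[y][x] else pixel for x, pixel in enumerate(row)]
        for y, row in enumerate(frame)
    ]

# A tiny 2x3 "frame" in which the centre column is a blocked person.
frame = [[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
         [(255, 255, 0), (0, 255, 255), (255, 0, 255)]]
mask = [[False, True, False],
        [False, True, False]]

blocked = apply_block(frame, mask)
print(blocked[0][1])  # -> (128, 128, 128): the blocked pixel is grey
print(blocked[0][0])  # -> (255, 0, 0): unmasked pixels are untouched
```

The point of the sketch is where the difficulty lies: the filtering is one line of logic, while the perceptual machinery around it (segmentation, tracking, audio suppression) is what the episode quietly assumes.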
Compared with EYE-LINK and Z-EYE, the cookie technology from the episode seems far more implausible given our current understanding of neuroscience. The roots of consciousness and the mind remain a mystery, despite how much we now know about the nervous system. Yet while we are likely decades away from copying our own minds, current developments in AI are still startling. AlphaGo has made the news over the past few years by defeating top professional Go players from around the world. Although Deep Blue, another AI system, defeated world chess champion Garry Kasparov back in 1997, beating top humans at Go is a much harder problem. Go, a classic abstract strategy board game that dates back some 3,000 years, has long been viewed as the most challenging classical game for AI because of its enormous number of possible board configurations [4]. Given that many possibilities, the traditional approach of exhaustively searching every position with a search tree does not work for Go. Earlier versions of AlphaGo combined deep neural networks with advanced tree search and were trained in part on large numbers of human games. What makes the recent win by AlphaGo Zero so striking is that it learned to master the game without any human knowledge [5]: it trained far more efficiently by playing only against itself. AlphaGo’s triumph represents more than winning a board game; it marks progress on long-standing challenges in machine learning, and its algorithmic advances could be a step toward harder learning tasks such as emotion recognition and social learning.
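The self-play idea behind AlphaGo Zero can be illustrated on a much smaller game. The sketch below is a deliberately minimal stand-in, tabular learning on one-pile Nim rather than deep networks and Monte Carlo tree search on Go, and all names and parameters in it are illustrative assumptions, but the core loop is the same: the program generates its own training data by playing against itself, with no human examples.

```python
# Toy self-play learner for one-pile Nim (take 1-3 stones; whoever
# takes the last stone wins). Illustrates only the *idea* of learning
# from self-play, not AlphaGo Zero's actual algorithm.
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(stones_left, action)] -> estimated value
ALPHA, EPSILON = 0.5, 0.1

def legal_moves(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def choose(stones, explore=True):
    """Epsilon-greedy move selection from the shared value table."""
    moves = legal_moves(stones)
    if explore and random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda a: Q[(stones, a)])

def self_play_episode(start):
    """Play one game against itself, then update Q from the outcome."""
    history = []          # (state, action) pairs, both sides mixed
    stones = start
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    # Whoever moved last won; propagate alternating +1/-1 backwards.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

random.seed(0)
for _ in range(5000):
    self_play_episode(start=random.randint(1, 12))

print(choose(3, explore=False))   # taking all 3 stones wins immediately
print(choose(10, explore=False))  # optimal Nim play takes 2, leaving a multiple of 4
```

Both "players" read and write the same table, so every game improves the very opponent the learner must beat next, which is the self-play bootstrap that let AlphaGo Zero dispense with human game records.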
[AlphaGo competing in a Go game; image courtesy of Flickr user lewcpe.]
Despite the impressive progress scientists have made in the field of machine learning and artificial intelligence, we are still far away from anything like the cookie that would be able to copy a person’s consciousness and manipulate it to our advantage.
Ethical Considerations
In another of his stories, Matt explains how he coerced Cookie Greta, a digital copy of a woman’s mind, into working for the real Greta. Hearing this, Joe feels empathy for the cookie and calls the technology slavery and barbaric. Matt argues that since Cookie Greta is only made of code, she is not real, and hence the practice is not barbaric. The disagreement between the two raises a fundamental question: is the copy of a person’s mind merely lines of code? If not, should these mind-copies have rights as we do? Similar discussions can be found in this previous post on the blog.
Let’s employ the concepts of agency and experience to help us understand why people tend not to think AI, including the cookie, is conscious. One might agree that the cookie has a high level of intelligence, in other words agency, given the power of algorithms in this futuristic world, but still find it difficult to imagine that code has feelings too. Matt gives Cookie Greta a virtual body to help “her” cope with “her” distress. While this may be a filming tactic to help the audience visualize the cookie, the embodiment also seems to give Cookie Greta an outlet to feel, sense, and better understand “her” own existence. Moreover, Matt has to alter Cookie Greta’s perception of time and leave “her” in prolonged solitary confinement to force “her” into compliance, exploiting “her” fear of boredom. The fact that Matt cannot simply adjust the code to make the cookie obedient, but has to manipulate “her” through fear, an emotion, suggests that the cookie has the ability to feel and experience. Similarly, Matt exploits Cookie Joe’s empathy and guilt in order to extract a confession. Even if one argues that these seemingly human emotions are nothing but simulation, how can we be certain that the simulated mind does not experience these feelings? If cookies feel as we do, forcing Joe’s cookie to listen to the same Christmas carol for millions of years in isolation is an utterly brutal and unfair punishment.
If we assume that the cookie indeed has some form of consciousness, the next question is: should cookies bear the consequences of their originals’ actions? Both Cookie Greta and Cookie Joe clearly have the same memories and ways of thinking as their real selves (the term “real” is used loosely here to differentiate the cookie from its original, not to imply that the former is unreal). Based on the confession, Joe is indeed responsible for two deaths. But should the copy of his mind be held responsible for his crime? Do we view the copy as an extension of him, or as an independent individual? Similarly, if Neuralink does succeed in creating a hybrid of human brain and AI, how do we define the identity of such an individual, and who should be responsible for its wrongdoing?
Conclusion
[Darling's robot dinosaur; image courtesy of Wikimedia Commons.]
References
1. Levy, S. (2017, July 18). Google Glass 2.0 is a startling second act. Wired. Retrieved from https://www.wired.com/story/google-glass-2-is-here/
2. Nirenberg, S., & Pandarinath, C. (2012). Retinal prosthetic strategy with the capacity to restore normal vision. Proc Natl Acad Sci USA, 109(37), 15012-15017. doi:10.1073/pnas.1207035109
3. Lewis, P. M., Ackland, H. M., Lowery, A. J., & Rosenfeld, J. V. (2015). Restoration of vision in blind individuals using bionic devices: a review with a focus on cortical visual prostheses. Brain res, 1595, 51-73. doi:10.1016/j.brainres.2014.11.020
4. The story of AlphaGo so far. Retrieved from https://deepmind.com/research/alphago/
5. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., … Hassabis, D. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354-359. doi:10.1038/nature24270
6. Lyons, M. K. (2011). Deep brain stimulation: current and future clinical applications. Mayo Clin Proc, 86(7), 662-672. doi:10.4065/mcp.2011.0045
7. Wegner, D. M., & Gray, K. J. (2017). The mind club: who thinks, what feels, and why it matters. New York, NY: Penguin Books.
8. Can robots teach us what it means to be human? (2017, July 10). Retrieved from https://www.npr.org/2017/07/10/536424647/can-robots-teach-us-what-it-means-to-be-human
9. Darling, K. (2012). Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior toward robotic objects. Robot Law, Calo, Froomkin, Kerr ed., Edward Elgar 2016; We robot Conference. Available at https://ssrn.com/abstract=2044797 or http://dx.doi.org/10.2139/ssrn.2044797
Want to cite this post?
Wang, Y. (2017). The Neuroethics Blog Series on Black Mirror: White Christmas. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/12/the-neuroethics-blog-series-on-black.html