Neuroethics, the Predictive Brain, and Hallucinating Neural Networks
The approach is often known as ‘predictive processing’ and, as the name suggests, it depicts brains as multi-area, multi-level engines of prediction. Such devices (for some introductions, see Hohwy (2013) and Clark (2013, 2016)) are constantly trying to self-generate the sensory stream – to re-create it ‘from the top down’ using stored knowledge (‘prior information’) about the world. When the attempt at top-down matching fails, so-called ‘prediction errors’ result. These residual errors flag whatever remains unexplained by the current best predictive guess, and are thus excellent guides for the recruitment of new predictions and/or the refinement of old ones. A multi-level exchange involving predictions, prediction errors, and new predictions then ensues, until a kind of equilibrium is reached.
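That settling process can be caricatured in a few lines of code. This is an illustrative sketch only: the single-level, linear set-up and the names `settle` and `prior_guess` are my own simplifications, not a model drawn from the predictive processing literature.

```python
def settle(sensory_input, prior_guess, n_steps=200, lr=0.1):
    """Revise an internal estimate until its top-down prediction
    explains away the residual prediction error on the input."""
    estimate = prior_guess
    for _ in range(n_steps):
        prediction = estimate               # top-down guess at the signal
        error = sensory_input - prediction  # residual prediction error
        estimate = estimate + lr * error    # refine the guess to shrink the error
    return estimate

# The exchange of predictions and errors settles into equilibrium:
# the final estimate sits (almost) exactly on the sensory input.
settled = settle(sensory_input=3.0, prior_guess=0.0)
```

At equilibrium the residual error is negligible; in the multi-level story, the same exchange is running simultaneously at every level of the processing hierarchy.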
According to these ‘predictive processing’ accounts, the process is one in which you start off with inadequate prior knowledge; so, when you first hear the sine wave version, you are unable to meet the incoming signal with the right wave of top-down predictions. After hearing the sentence, your model improves and you can match the sine wave skeleton with a rich flow of top-down prediction. Once you are expert enough, you can even recruit those apt top-down flows without hearing the specific sentence first. This corresponds to having learnt a generalizable world-model that now powers top-down prediction across new instances.
This corresponds to assigning too little weight to sensory prediction error, or (though from a Bayesian perspective this amounts to the same thing) assigning too much weight to top-down predictions. Do that, and you will start to hallucinate patterns that are not there, just because you strongly predict them. We can do this on demand, as when we set out to spot faces in the clouds. But if we don’t know we are upping the weight of our own predictions, we may believe our own hallucinations. Indeed, just this was shown in healthy undergraduates whose task was to try to detect the faint onset of Bing Crosby singing ‘White Christmas’ in a noisy sound file. Unknown to them, the sound file was pure white noise (no faint trace of ‘White Christmas’ at all). Yet a significant number of students claimed to hear the onset of the song (Merckelbach and van de Ven (2001); for a follow-up study showing that the effect is increased by caffeine, see Crowe et al. (2011)).
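In the simplest Gaussian case, that weighting is just precision-weighted averaging of prediction and evidence. A minimal sketch (the function and parameter names are mine, chosen for illustration):

```python
def percept(prior_mean, prior_precision, sensory_mean, sensory_precision):
    """Precision-weighted Bayesian blend of top-down prediction
    (the prior) and bottom-up evidence (the sensory signal)."""
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mean
            + sensory_precision * sensory_mean) / total

# Balanced weighting: the percept lies midway between prediction (1.0,
# 'the song is starting') and evidence (0.0, pure noise).
balanced = percept(prior_mean=1.0, prior_precision=1.0,
                   sensory_mean=0.0, sensory_precision=1.0)

# Overweight the prior (equivalently, underweight sensory error) and the
# percept is captured by the prediction: hearing the song in pure noise.
overconfident = percept(prior_mean=1.0, prior_precision=100.0,
                        sensory_mean=0.0, sensory_precision=1.0)
```

Here `balanced` comes out at 0.5, while `overconfident` lands near 0.99: the evidence barely moves a percept dominated by its own prediction.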
|A hallucinating multi-layer neural network looks at the University of Sussex campus (work by Suzuki et al. (2017); image reproduced by permission).
Our experiential worlds, all this suggests, are a kind of shifting mosaic in which top-down predictions meet sensory evidence. This is a delicate mechanism, prone to environmental, physiological, and pharmacological upset. Using the multi-level neural network architecture behind Deep Dream as a base, Suzuki et al. (2017) created an immersive VR (Virtual Reality) environment in which subjects could experience visual effects remarkably similar to those reported by users of hallucinogenic drugs. Translated (as suggested by Suzuki et al.) into predictive processing terms, the networks were in effect being told strongly to predict certain kinds of object or feature in the input stream, thereby warping the processing of the raw visual information along those specific dimensions. For example, the network that generated the image shown in Fig. 1 was (in predictive processing terms) forced chronically to predict ‘seeing dogs’ while taking input from the Sussex campus. The results were then replayed to subjects using a head-mounted display and 360-degree immersive VR. Here’s a video clip of what the viewers experienced.
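The underlying Deep Dream move is gradient ascent on the input itself: nudge the ‘image’ so that a chosen feature detector fires ever more strongly. A toy numpy sketch (the linear ‘dog detector’ here is a stand-in I invented; the real system used a trained deep convolutional network):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=16)         # stand-in for raw visual input
dog_detector = rng.normal(size=16)  # stand-in for a learned 'dog' feature

def amplify(image, feature, n_steps=50, lr=0.05):
    """Warp the input by gradient ascent so the chosen detector
    fires harder, i.e. force the net to 'see' its preferred pattern."""
    x = image.copy()
    for _ in range(n_steps):
        # activation = feature . x, so its gradient w.r.t. x is `feature`
        x = x + lr * feature        # ascend the activation gradient
    return x

dreamed = amplify(image, dog_detector)
# The warped input now drives the 'dog' detector far more strongly.
grew = (dog_detector @ dreamed) > (dog_detector @ image)
```

In the real pipeline, the same ascent runs through many convolutional layers, so the injected ‘predictions’ emerge as dog-like textures smeared across the scene rather than a simple shift in one feature score.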
Predictive perception is endemically hostage to the quality of its training data. Immersion in statistically unrepresentative worlds will therefore yield real-seeming but distorting percepts. Barrett and Wormwood, in a high-profile New York Times piece, suggest that skewed predictions may play a role in some police shootings of unarmed black men. In the wrong context, visual evidence that ought to lead us to perceive a handheld cell-phone in a dark alley is trumped by top-down predictions that instead deliver a percept as of a handgun. Skewed environments build bad perceivers (not just bad reasoners or actors).
Want to cite this post?
Clark, A. (2017). Neuroethics, the Predictive Brain, and Hallucinating Neural Networks. The Neuroethics Blog. Retrieved on , from http://www.theneuroethicsblog.com/2017/12/neuroethics-predictive-brain-and.html