Tuesday, December 5, 2017

Neuroethics, the Predictive Brain, and Hallucinating Neural Networks

By Andy Clark

Andy Clark is Professor of Logic and Metaphysics in the School of Philosophy, Psychology and Language Sciences, at Edinburgh University in Scotland. He is the author of several books including Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, 2016). Andy is currently PI on a 4-year ERC-funded project Expecting Ourselves: Prediction, Action, and the Construction of Conscious Experience.

In this post, I’d like to explore an emerging neurocomputational story that has implications for how we should think about ourselves and about the relations between normal and atypical forms of human experience.

Predictive Processing: From Peeps to Phrases

The approach is often known as ‘predictive processing’ and, as the name suggests, it depicts brains as multi-area, multi-level engines of prediction. Such devices (for some introductions, see Hohwy (2013), Clark (2013, 2016)) are constantly trying to self-generate the sensory stream – to re-create it ‘from the top-down’ using stored knowledge (‘prior information’) about the world. When the attempt at top-down matching fails, so-called ‘prediction errors’ result. These ‘residual errors’ flag whatever remains unexplained by the current best predictive guess and are thus excellent guides for the recruitment of new predictions and/or the refinement of old ones. A multi-level exchange involving predictions, prediction errors, and new predictions then ensues, until a kind of equilibrium is reached.
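That exchange can be caricatured in a few lines of Python. This is a deliberately minimal sketch: the single scalar ‘prediction’, the learning rate, and the convergence threshold are illustrative assumptions, not claims about neural implementation.

```python
def settle(sensory_input, prediction, lr=0.1, tol=1e-6, max_steps=1000):
    """Iteratively refine a top-down prediction until the residual
    prediction error is (almost) fully explained away."""
    for _ in range(max_steps):
        error = sensory_input - prediction   # residual prediction error
        if abs(error) < tol:                 # equilibrium: nothing left to explain
            break
        prediction += lr * error             # refine the predictive guess
    return prediction

# A prediction of 0.0 is gradually revised to match a sensory signal of 2.0.
print(settle(2.0, 0.0))  # converges to ~2.0
```

Real predictive processing models run many such loops in parallel, across multiple levels of a hierarchy, but the error-driven settling dynamic is the same.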

That’s pretty abstract and highly compressed. But a compelling example involves the hearing of ‘sine-wave speech.’ This is speech with much of the usual signal cut out, so that all that remains is a series of ‘peeps and whistles.’ You can hear an example by clicking on the first loudspeaker icon here. You probably won’t make much sense of what you hear. But now click on the next loudspeaker and listen to the original sentence before revisiting the sine-wave replica. Now, your experiential world has altered. It sounds like odd but clearly intelligible speech. In one sense, you are now able to hallucinate the richer meaning-bearing structure despite that poor sensory signal. In another (equally valid) sense, you are now simply hearing what is there, but through a process that starts with better prior information, and so is better able to sift the interesting signal from the distracting noise. For some more demos like this, try here, or here.

According to these ‘predictive processing’ accounts, the process is one in which you start off with inadequate prior knowledge; so, when you first hear the sine wave version, you are unable to meet the incoming signal with the right wave of top-down predictions. After hearing the sentence, your model improves and you can match the sine wave skeleton with a rich flow of top-down prediction. Once you are expert enough, you can even recruit those apt top-down flows without hearing the specific sentence first. This corresponds to having learnt a generalizable world-model that now powers top-down prediction across new instances.

Finally – but crucially for present purposes – the balance between top-down prediction and bottom-up sensory evidence is itself controlled and variable, so that sometimes we rely more on the sensory evidence, and sometimes more on the top-down predictions. This is the process known as the ‘precision-weighting’ of the predictions and prediction error signals (see FitzGerald et al (2015)).
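For two Gaussian sources of information, this balancing act has a standard Bayesian form: the settled estimate is a precision-weighted average of prediction and evidence, where precision is inverse variance. A minimal sketch (the numbers are invented for illustration):

```python
def percept(prediction, evidence, prior_precision, sensory_precision):
    """Posterior mean for two Gaussian sources: each contribution is
    weighted by its precision (reliability), then normalized."""
    total = prior_precision + sensory_precision
    return (prior_precision * prediction + sensory_precision * evidence) / total

# Reliable senses dominate the settled percept...
print(percept(0.0, 1.0, prior_precision=1.0, sensory_precision=9.0))  # 0.9
# ...while noisy senses leave the prior in charge.
print(percept(0.0, 1.0, prior_precision=9.0, sensory_precision=1.0))  # 0.1
```

The same sensory evidence yields very different percepts depending purely on how the two precisions are set – which is exactly the dial the disturbances below are supposed to turn.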

Perturbing Predictions

Or rather, that’s what happens when all works as it should. But what happens when such systems go wrong? Consider some of the options:

Over-weighting the sensory evidence

This corresponds to assigning too much weight (precision) to the errors flagging unexplained sensory information or (what here amounts to the very same thing) assigning too little weight to top-down predictions. Do that, and you won’t be able to detect faint patterns in a noisy environment, missing the famous Dalmatian dog hidden in the play of light and shadow, or the true sentences hidden in the peeps and pops of sine-wave speech. Could it be that autism spectrum disorder involves this kind of failure, making the incoming sensory stream seem full of unexplained details and hard to master? (For works that explore this and related ideas, see Pellicano and Burr (2012), Brock (2012), Friston et al (2013).)

Under-weighting the sensory evidence

This corresponds to assigning too little weight to sensory prediction error, or (though from a Bayesian perspective this amounts to the same thing) assigning too much weight to top-down predictions. Do that, and you will start to hallucinate patterns that are not there, just because you strongly predict them. We can do this on demand, as when we set out to spot faces in the clouds. But if we don’t know we are upping the value of our own predictions, we may believe our own hallucinations. Indeed, just this was shown in healthy undergraduates whose task was to try to detect the faint onset of Bing Crosby singing ‘White Christmas’ in a noisy sound file. Unknown to them, the sound file was just white noise (no faint trace of White Christmas at all). Yet a significant number of students claimed to hear the onset of the song (Merckelbach and van de Ven (2001) – and for a follow-up study showing that the effect is increased by caffeine, see Crowe et al (2011)).
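Both failure modes can be mimicked in a toy signal-detection model. Everything here – the threshold, the noise level, the weights – is an invented illustration, not a model of the actual experiments:

```python
import random

def detects(signal_strength, prior_bias, sensory_weight, trials=1000, seed=0):
    """Fraction of trials on which a simple detector reports 'pattern
    present'. Each trial's evidence is a faint signal buried in noise;
    the verdict combines a top-down bias with weighted sensory evidence."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        evidence = signal_strength + rng.gauss(0.0, 1.0)  # signal + noise
        if prior_bias + sensory_weight * evidence > 1.0:  # fixed report threshold
            hits += 1
    return hits / trials

# Under-weighted senses, strong prior: 'White Christmas' heard in pure noise
# (signal_strength=0.0 means there is literally nothing there).
print(detects(signal_strength=0.0, prior_bias=1.5, sensory_weight=0.2))
# Over-weighted senses, no prior help: the faint Dalmatian mostly goes unseen.
print(detects(signal_strength=0.3, prior_bias=0.0, sensory_weight=1.0))
```

The first detector ‘hears’ a song that isn’t in the input; the second misses a pattern that is. Same machinery, opposite mis-settings of the precision dial.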

More Complex Disturbances

Fletcher and Frith (2009) use the Bayesian/Predictive Processing apparatus to account for the emergence of delusions and hallucinations (the so-called ‘positive symptoms’) in schizophrenia. The basic idea is that both these symptoms might flow from a single underlying cause: falsely generated and highly-weighted (high-precision) waves of prediction error. The high weighting assigned to these falsely generated error signals renders them functionally potent, positioning them to drive the system towards increasingly bizarre hypotheses so as to accommodate them. Once such hypotheses take hold, new low-level sensory stimulation may be interpreted falsely. From the emerging ‘predictive brain’ perspective, this is no stranger than prior expectations making pure white noise sound like White Christmas.
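The logic of that proposal can be caricatured in a toy belief-update loop: give pure-noise ‘errors’ an abnormally high precision-weight, and the system’s hypothesis drifts far from anything the world supports. All numbers below are arbitrary illustrations:

```python
import random

def drifted_belief(true_value=0.0, spurious_gain=5.0, steps=200, seed=1):
    """A belief updated by prediction errors. Genuine errors (belief vs.
    world) get normal weight; spurious errors (internally generated noise)
    arrive with an abnormally high precision-weight."""
    rng = random.Random(seed)
    belief = 0.0
    for _ in range(steps):
        genuine_error = true_value - belief
        spurious_error = rng.gauss(0.5, 0.1)  # falsely generated, biased 'error'
        belief += 0.05 * (genuine_error + spurious_gain * spurious_error)
    return belief

# With the spurious errors highly weighted, the belief settles far from the
# true value of 0.0; with spurious_gain=0.0 it would stay exactly on target.
print(round(drifted_belief(), 2))
```

The point of the caricature: nothing in the update rule is broken. The system is doing ordinary error-driven inference; it is the mis-assigned precision on the error signals that drags it somewhere bizarre.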

Fig 1: A hallucinating multi-layer neural network looks at the University of Sussex campus (work by Suzuki et al. (2017); image reproduced by permission).
Our experiential worlds, all this suggests, are a kind of shifting mosaic in which top-down predictions meet sensory evidence. This is a delicate mechanism prone to environmental, physiological, and pharmacological upset. Building on the multi-level neural network technique Deep Dream, Suzuki et al (2017) created an immersive VR (Virtual Reality) environment in which subjects could experience visual effects remarkably similar to those reported by users of hallucinogenic drugs. Translated (as suggested by Suzuki et al) into predictive processing terms, the networks were in effect being told strongly to predict certain kinds of object or feature in the input stream, thereby warping the processing of the raw visual information along those specific dimensions. For example, the network that generated the image shown in Fig 1 was (in predictive processing terms) forced chronically to predict ‘seeing dogs’ while taking input from the Sussex campus. The results were then replayed to subjects using a head-mounted display and 360-degree immersive VR. Here’s a video clip of what the viewers experienced.

Predictive processing accounts link directly to psychopharmacological models and speculations. Corlett et al (2009) (2010) relate the chemical mechanisms associated with a variety of psychoses to specific impairments in the precision-weighted top-down/bottom-up balancing act: impairments echoed, the same authors note, by the action of different psychotomimetic drugs.

Implications for Neuroethics

All this has implications both for the nature and practice of neuroscience and for the social and political frameworks in which we live and work.

Predictive perception is endemically hostage to good training data. So immersion in statistically unrepresentative worlds will yield real-seeming but distortive percepts. Barrett and Wormwood, in a high-profile New York Times piece, suggest that skewed predictions may play a role in some police shootings of unarmed black men. In the right context, visual evidence that ought to lead us to perceive a handheld cell-phone in a dark alley is trumped by top-down predictions that instead deliver a percept as of a handgun. Skewed environments build bad perceivers (not just bad reasoners or actors).
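Put in simple Bayesian terms (with probabilities invented purely for illustration): when the evidence is ambiguous, the prior settles the percept, so a prior skewed by an unrepresentative environment settles it wrongly.

```python
def posterior_gun(p_prior_gun, p_evidence_given_gun, p_evidence_given_phone):
    """Bayes' rule for a two-way percept: gun vs. phone."""
    num = p_prior_gun * p_evidence_given_gun
    den = num + (1.0 - p_prior_gun) * p_evidence_given_phone
    return num / den

# Ambiguous glint in a dark alley: the evidence only weakly favors 'phone'.
likelihood_gun, likelihood_phone = 0.4, 0.6

# A representative prior yields a (correct) phone percept...
print(posterior_gun(0.1, likelihood_gun, likelihood_phone))  # ~0.07
# ...a skewed prior flips the very same evidence into a 'gun' percept.
print(posterior_gun(0.8, likelihood_gun, likelihood_phone))  # ~0.73
```

Nothing about the sensory input changes between the two cases; only the statistics the perceiver brings to it do.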

Above all, we should get used to a simple but transformative fact – the idea of raw sensory experience is radically mistaken. Where we might sometimes think we are seeing or smelling or tasting what’s simply ‘given in the signal,’ we are instead seeing, tasting, or smelling only what’s there relative to an expectation. This picture of the roots of experience is the topic of our ongoing ERC-funded project Expecting Ourselves. We are all, in this limited sense, hallucinating all the time. When others hallucinate or fall prey to delusions, they are not doing anything radically different from the neurotypical case.

* This post was prepared thanks to support from the European Research Council (XSPECT - DLV-692739). Thanks to Anil Seth and Keisuke Suzuki for letting me use their work on the Hallucination Machine, and to David Carmel, Frank Schumann and the X-SPECT team for helpful comments on an earlier version.


Barrett, L.F. and Wormwood, J (2015) When a Gun is Not a Gun, New York Times, April 17

Brock, J (2012) Alternative Bayesian accounts of autistic perception: comment on Pellicano and Burr Trends in Cognitive Sciences, Volume 16, Issue 12, 573-574 doi:10.1016/j.tics.2012.10.005

Clark, A (2013) Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science Behavioral and Brain Sciences 36: 3:  p. 181-204

Clark, A (2016) Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, NY)

Corlett PR, Frith CD, and Fletcher PC (2009) From drugs to deprivation: a Bayesian framework for understanding models of psychosis. Psychopharmacology (Berl) 206:4: p.515-30

Corlett PR, Taylor JR, Wang XJ, Fletcher PC, and Krystal JH (2010) Toward a neurobiology of delusions. Progress In Neurobiology. 92: 3 p.345-369

Crowe, S., Barot, J., Caldow, S., D’Aspromonte, J., Dell’Orso, J., Di Clemente, A., Hanson, K., Kellett, M., Makhlota, S., McIvor, B., McKenzie, L., Norman, R., Thiru, A., Twyerould, M., and Sapega, S. (2011) The effect of caffeine and stress on auditory hallucinations in a non-clinical sample. Personality and Individual Differences 50:5: 626-630

Feldman H and Friston K (2010) Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience 4:215 (doi: 10.3389/fnhum.2010.00215)

FitzGerald, T. H. B., Dolan, R. J., & Friston, K. (2015). Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 9, 136.

Fletcher, P and Frith, C (2009) Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nature Reviews: Neuroscience 10: 48-58

Friston K. (2005). A theory of cortical responses. Philos Trans R Soc Lond B Biol Sci 360(1456):815-36.

Friston, K., Lawson, R., & Frith, C.D. (2013). On hyperpriors and hypopriors: Comment on Pellicano and Burr. Trends in Cognitive Sciences, 17(1), 1.

Happé, F (2013) Embedded Figures Test (EFT) Encyclopedia of Autism Spectrum Disorders pp 1077-1078

Hohwy, J (2013) The Predictive Mind (Oxford University Press, NY)

Merckelbach, H. & van de Ven, V. (2001). Another White Christmas: fantasy proneness and reports of 'hallucinatory experiences' in undergraduate students. Journal of Behavior Therapy and Experimental Psychiatry, 32, 137-144.

Pellicano, E. & Burr, D. (2012) When the world becomes too real: A Bayesian explanation of autistic perception. Trends in Cognitive Sciences 16:504–510. doi: 10.1016/j.tics.2012.08.009

Suzuki, K., Roseboom, W., Schwartzman, D., and Seth, A. (2017) A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology Scientific Reports 7, Article number: 15982 doi:10.1038/s41598-017-16316-2

Want to cite this post?

Clark, A. (2017). Neuroethics, the Predictive Brain, and Hallucinating Neural Networks. The Neuroethics Blog.
