Neuroethics, the Predictive Brain, and Hallucinating Neural Networks
By Andy Clark
Andy Clark is Professor of Logic and Metaphysics in the School of Philosophy, Psychology and Language Sciences, at Edinburgh University in Scotland. He is the author of several books including Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, 2016). Andy is currently PI on a 4-year ERC-funded project Expecting Ourselves: Prediction, Action, and the Construction of Conscious Experience.
In this post, I’d like to explore an emerging neurocomputational story that has implications for how we should think about ourselves and about the relations between normal and atypical forms of human experience.
Predictive Processing: From Peeps to Phrases
The approach is often known as ‘predictive processing’ and, as the name suggests, it depicts brains as multi-area, multi-level engines of prediction. Such devices (for some introductions, see Hohwy (2013), Clark (2013) (2016)) are constantly trying to self-generate the sensory stream – to re-create it ‘from the top-down’ using stored knowledge (‘prior information’) about the world. When the attempt at top-down matching fails, so-called ‘prediction errors’ result. These ‘residual errors’ flag whatever remains unexplained by the current best predictive guess and are thus excellent guides for the recruitment of new predictions and/or the refinement of old ones. A multi-level exchange involving predictions, prediction errors, and new predictions then ensues, until a kind of equilibrium is reached.
That’s pretty abstract and highly compressed. But a compelling example involves the hearing of ‘sine-wave speech.’ This is speech with much of the usual signal cut out, so that all that remains is a series of ‘peeps and whistles.’ You can hear an example by clicking on the first loudspeaker icon here. You probably won’t make much sense of what you hear. But now click on the next loudspeaker and listen to the original sentence before revisiting the sine-wave replica. Now, your experiential world has altered. It sounds like odd but clearly intelligible speech. In one sense, you are now able to hallucinate the richer meaning-bearing structure despite the impoverished sensory signal. In another (equally valid) sense, you are now simply hearing what is there, but through a process that starts with better prior information, and so is better able to sift the interesting signal from the distracting noise. For some more demos like this, try here or here.
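To make the predict-compare-update loop concrete, here is a deliberately minimal sketch in Python. Everything in it – the linear generative model `g`, the learning rate, the numbers – is my own illustrative assumption, not anything drawn from the predictive processing literature; it shows only the basic shape of the computation: a hidden-cause estimate generates a prediction, and the residual prediction error nudges the estimate until prediction and signal agree.

```python
# A toy, one-level predictive processing loop (illustrative only).
def g(mu):
    """Generative model: map the hidden-cause estimate to a predicted signal."""
    return 2.0 * mu  # assume a simple linear 'world' for the sketch

sensory_input = 4.2   # the incoming signal to be explained
mu = 0.0              # initial top-down guess about the hidden cause
learning_rate = 0.1

for step in range(100):
    prediction = g(mu)
    error = sensory_input - prediction   # residual prediction error
    mu += learning_rate * 2.0 * error    # nudge the guess to shrink the error (2.0 = dg/dmu)
    if abs(error) < 1e-6:                # equilibrium: the signal is 'explained'
        break

print(f"settled estimate mu = {mu:.3f}, residual error = {error:.2e}")
```

Run it and the estimate settles at mu = 2.1, the value whose prediction exactly matches the input: the multi-level story in the brain is vastly richer, but the core exchange of predictions and errors has this shape.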
Finally – but crucially for present purposes – the balance between top-down prediction and bottom-up sensory evidence is itself controlled and variable, so that sometimes we rely more on the sensory evidence, and sometimes more on the top-down predictions. This is the process known as the ‘precision-weighting’ of the predictions and prediction error signals (see FitzGerald et al (2015)).
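In the simplest Gaussian case, precision-weighting amounts to inverse-variance weighting: the percept is a compromise between the prior prediction and the sensory evidence, each weighted by how reliable (precise) the system takes it to be. Here is a minimal sketch of that trade-off; the function name and the numbers are mine, chosen purely for illustration.

```python
# Precision-weighting as inverse-variance weighting of two Gaussian sources.
def precision_weighted_percept(prior_mean, prior_precision,
                               sensory_mean, sensory_precision):
    """Posterior mean: each source counts in proportion to its precision."""
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mean +
            sensory_precision * sensory_mean) / total

# Evidence twice as precise as the prior: the percept leans toward the
# evidence, but the prior still pulls it back a little.
print(precision_weighted_percept(0.0, 1.0, 10.0, 2.0))  # -> 6.67
```

Shift the two precision values and the very same evidence yields a very different percept – which is exactly the lever the perturbations below pull on.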
Perturbing Predictions
Or rather, that’s what happens when all works as it should. But what happens when such systems go wrong? Consider some of the options:
Over-weighting the sensory evidence
This corresponds to assigning too much weight (precision) to the errors flagging unexplained sensory information or (what here amounts to the very same thing) assigning too little weight to top-down predictions. Do that, and you won’t be able to detect faint patterns in a noisy environment, missing the famous Dalmatian dog hidden in the play of light and shadow, or the true sentences hidden in the peeps and whistles of sine-wave speech. Could it be that autism spectrum disorder involves this kind of failure, making the incoming sensory stream seem full of unexplained details and hard to master? (For works that explore this and related ideas, see Pellicano and Burr (2012), Brock (2012), Friston et al (2013).)
Under-weighting the sensory evidence

This is the converse failure: assigning too little weight (precision) to the prediction errors flagging unexplained sensory information or (equivalently) assigning too much weight to the top-down predictions. Do that, and strong expectations can dominate experience, delivering percepts that are only weakly supported by the incoming signal. Something like this seems to occur when subjects led to expect the song ‘White Christmas’ report hearing fragments of it in pure white noise (Merckelbach and van de Ven (2001)) – an effect that caffeine and stress appear to enhance (Crowe et al (2011)). Outside the laboratory, the same imbalance may carry tragic costs, as when an officer primed to expect a weapon perceives a gun that is not there (Barrett and Wormwood (2015)).
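Both failure modes fall out of the same toy trade-off sketched above. In the fragment below (the ‘template,’ the noise, and the precision values are all invented for illustration), featureless noise arrives alongside a strongly expected pattern, and the percept tracks whichever source is assigned the higher precision.

```python
# Illustrative only: pure noise combined with a precision-weighted 'template'.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=8)           # featureless sensory input
template = np.sin(np.linspace(0.0, np.pi, 8))  # the pattern the system expects

def percept(prior_precision, sensory_precision):
    total = prior_precision + sensory_precision
    return (prior_precision * template + sensory_precision * noise) / total

# Under-weighted senses: the percept echoes the expected template,
# a toy 'White Christmas' effect.
print(percept(prior_precision=50.0, sensory_precision=1.0))
# Over-weighted senses: nothing but unexplained detail comes through.
print(percept(prior_precision=1.0, sensory_precision=50.0))
```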
More Complex Disturbances
Fletcher and Frith (2009) use the Bayesian/Predictive Processing apparatus to account for the emergence of delusions and hallucinations (the so-called ‘positive symptoms’) in schizophrenia. The basic idea is that both these symptoms might flow from a single underlying cause: falsely generated and highly-weighted (high-precision) waves of prediction error. The high weighting assigned to these falsely generated error signals renders them functionally potent, positioning them to drive the system towards increasingly bizarre hypotheses that accommodate them. Once such hypotheses take hold, new low-level sensory stimulation may be interpreted falsely. From the emerging ‘predictive brain’ perspective, this is no stranger than prior expectations making pure white noise sound like ‘White Christmas.’
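A toy rendering of that dynamic (once more with invented numbers, not a model taken from Fletcher and Frith) can be grafted onto the earlier updating loop: inject a falsely generated error signal, weight it heavily, and the system’s hypothesis drifts far from anything the input itself supports.

```python
# Spurious, highly weighted prediction errors drag the hypothesis mu away
# from a value that initially explained the input perfectly.
import numpy as np

rng = np.random.default_rng(1)
sensory_input = 1.0
mu = 1.0                                   # initially the correct hypothesis
for step in range(500):
    true_error = sensory_input - mu        # genuine residual error
    spurious_error = rng.normal(0.5, 0.1)  # falsely generated error signal
    # The spurious errors are granted ten times the weight (precision):
    mu += 0.01 * (1.0 * true_error + 10.0 * spurious_error)

print(f"hypothesis mu = {mu:.2f}, though the input was {sensory_input}")
```

The loop settles near mu = 6: the only way the system can ‘explain away’ the highly weighted false errors is to adopt a hypothesis remote from the data, a crude echo of the drift toward bizarre hypotheses described above.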
[Image: A hallucinating multi-layer neural network looks at the University of Sussex campus. Work by Suzuki et al. (2017); reproduced by permission.]
Predictive processing accounts link directly to psychopharmacological models and speculations. Corlett et al (2009) (2010) relate the chemical mechanisms associated with a variety of psychoses to specific impairments in the precision-weighted top-down/bottom-up balancing act: impairments echoed, the same authors note, by the action of different psychotomimetic drugs.
Implications for Neuroethics
All this has implications both for the nature and practice of neuroscience and for the social and political frameworks in which we live and work.
Above all, we should get used to a simple but transformative fact – the idea of raw sensory experience is radically mistaken. Where we might sometimes think we are seeing or smelling or tasting what’s simply ‘given in the signal,’ we are instead seeing, tasting, or smelling only what’s there relative to an expectation. This picture of the roots of experience is the topic of our on-going ERC-funded project Expecting Ourselves. We are all, in this limited sense, hallucinating all the time. When others hallucinate or fall prey to delusions, they are not doing anything radically different from the neurotypical case.
* This post was prepared thanks to support from the European Research Council (XSPECT - DLV-692739). Thanks to Anil Seth and Keisuke Suzuki for letting me use their work on the Hallucination Machine, and to David Carmel, Frank Schumann and the X-SPECT team for helpful comments on an earlier version.
References
Barrett, L.F. and Wormwood, J. (2015) When a Gun is Not a Gun. New York Times, April 17.

Brock, J. (2012) Alternative Bayesian accounts of autistic perception: comment on Pellicano and Burr. Trends in Cognitive Sciences 16(12): 573-574. doi:10.1016/j.tics.2012.10.005

Clark, A. (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3): 181-204.

Clark, A. (2016) Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press, NY.

Corlett, P.R., Frith, C.D., and Fletcher, P.C. (2009) From drugs to deprivation: a Bayesian framework for understanding models of psychosis. Psychopharmacology (Berl) 206(4): 515-530.

Corlett, P.R., Taylor, J.R., Wang, X.J., Fletcher, P.C., and Krystal, J.H. (2010) Toward a neurobiology of delusions. Progress in Neurobiology 92(3): 345-369.

Crowe, S., Barot, J., Caldow, S., D’Aspromonte, J., Dell’Orso, J., Di Clemente, A., Hanson, K., Kellett, M., Makhlota, S., McIvor, B., McKenzie, L., Norman, R., Thiru, A., Twyerould, M., and Sapega, S. (2011) The effect of caffeine and stress on auditory hallucinations in a non-clinical sample. Personality and Individual Differences 50(5): 626-630.

Feldman, H. and Friston, K. (2010) Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience 4: article 215. doi:10.3389/fnhum.2010.00215

FitzGerald, T.H.B., Dolan, R.J., and Friston, K. (2015) Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience 9: 136. doi:10.3389/fncom.2015.00136

Fletcher, P. and Frith, C. (2009) Perceiving is believing: a Bayesian approach to explaining the positive symptoms of schizophrenia. Nature Reviews Neuroscience 10: 48-58.

Friston, K. (2005) A theory of cortical responses. Philosophical Transactions of the Royal Society B: Biological Sciences 360(1456): 815-836.

Friston, K., Lawson, R., and Frith, C.D. (2013) On hyperpriors and hypopriors: comment on Pellicano and Burr. Trends in Cognitive Sciences 17(1): 1.

Happé, F. (2013) Embedded Figures Test (EFT). In Encyclopedia of Autism Spectrum Disorders, pp. 1077-1078.

Hohwy, J. (2013) The Predictive Mind. Oxford University Press, NY.

Merckelbach, H. and van de Ven, V. (2001) Another White Christmas: fantasy proneness and reports of ‘hallucinatory experiences’ in undergraduate students. Journal of Behavior Therapy and Experimental Psychiatry 32: 137-144.

Pellicano, E. and Burr, D. (2012) When the world becomes ‘too real’: a Bayesian explanation of autistic perception. Trends in Cognitive Sciences 16: 504-510. doi:10.1016/j.tics.2012.08.009

Suzuki, K., Roseboom, W., Schwartzman, D., and Seth, A. (2017) A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology. Scientific Reports 7: 15982. doi:10.1038/s41598-017-16316-2
Clark, A. (2017) Neuroethics, the Predictive Brain, and Hallucinating Neural Networks. The Neuroethics Blog, December 2017. http://www.theneuroethicsblog.com/2017/12/neuroethics-predictive-brain-and.html