
Hubris and Hope for Engineering Brains

“Living organisms are nothing more than complex biochemical machines.” [1]

The above statement, or at least the thought, is something that gets thrown around in biology and especially bioengineering. And it’s a very empowering thought: the living thing in front of me is governed by the same physical laws that govern the rest of the universe, including the machines we build.  It doesn’t have some sort of supernatural vital force flowing through it, just fats and proteins and DNA and small molecules.  We can use this.  We can fix ourselves when we are sick, we can design new life forms to do our bidding.  It’s all very exciting.

We might even go so far as to engineer a brain. The task would be difficult, but the reward tremendous. Brains are very good at performing difficult computations that top-of-the-line AI is struggling with, and biological neurons tend to use fewer resources than their artificial cousins. Building a wet, squishy, thinking machine, designed to perform one specific task and to perform that task very well, would be a great boon to autonomous robots, power grid management, and thousands of other applications. Hey, we’ve already engineered neural tissue to be part of an art project.

Ionat Zurr and Oron Catts of SymbioticA

However, Ionat Zurr and Oron Catts of the SymbioticA biological art center in Perth, Australia, are quick to bring up the fact that thinking of living systems as machines brings with it a lot of additional baggage. [2] Machines don’t get a lot of respect. They are ‘tools’, ‘playthings’, to be used without reverence or regard to their own desires or wants.  What could a hammer possibly want?  If somehow it did want something, why should we care?  What could we do to an electrical circuit that could make us feel guilty?  By extension, why should we care about the wants of individual cells, or organs, or even living organisms?

One critique (of many possible critiques) of this mechanical view of neural systems is that it supposes an impossible level of understanding. This isn’t an attack on determinism or naturalism, so much as an acknowledgement of how much we don’t currently know, and how little we are likely to learn in the near future. It is simple enough to understand a hammer to the point that it can be used effectively – you physically move it as an extension of your hand, and the metal end drives in a nail. A sled dog, on the other hand, requires an immense understanding to ‘use’ properly, so that the dog acts as an extension of your own will. [3] The argument follows that we should be prepared to think of biological systems, and neural systems especially, as being too difficult to engineer in their entirety. Without this perfect knowledge, how can we be sure that we haven’t created a slave that can’t even express its own suffering?

Tools can be adorable in addition to being functional.

This living complexity, and the attendant danger of moral atrocity, is most severe in neural systems. As with all biological systems, much about the behavior of neural systems remains unknown. But even the ‘knowns’ in neural systems seem to act to prevent understanding. Neural systems interconnect directly with every aspect of the body, and the environment beyond the body, so the effective environment that neural systems respond to (and thus, the number of variables engineers need to account for) is especially vast. Additionally, neural systems are famously malleable – they adapt themselves to meet the demands of their environment, meaning that even if a neural system is understood to some level of satisfaction at one point in time, it could very well surprise you later on. This, combined with the general scientific and cultural notion that neural systems are responsible for such ethically nasty things as pain and suffering, makes it very difficult to be sure something terrible won’t happen when we start to design with neural tissue.

However, the discussion of what is unknown, or perhaps (practically) unknowable, about biology and neural systems should also include mention of the techniques that engineering has created in order to deal with overwhelming complexity. For example, modern software engineering projects are so large that it is impossible for any one person to understand the entire project all at once. Instead, the insurmountably large project is broken down into individual components or objects, which in turn are built of smaller and smaller objects. The idea is to test the system at each level of description, so that unexpected behaviors can be identified at their origin and fixed.
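As a toy illustration of that idea, here is a minimal Python sketch of testing at each level of description. Every function and value here is invented for the example, not drawn from any real project:

```python
# Two small components, each simple enough to understand in isolation.

def denoise(signal):
    """Component 1: clip each sample to the valid range [0, 1]."""
    return [max(0.0, min(1.0, s)) for s in signal]

def normalize(signal):
    """Component 2: rescale values so they sum to 1 (if nonzero)."""
    total = sum(signal)
    return [s / total for s in signal] if total else signal

def pipeline(signal):
    """Top level: composed entirely from already-tested components."""
    return normalize(denoise(signal))

# Test each component first, so an unexpected behavior can be
# identified at its origin...
assert denoise([-0.5, 0.5, 1.5]) == [0.0, 0.5, 1.0]
assert sum(normalize([2.0, 2.0])) == 1.0

# ...then test the composed system as a whole.
assert pipeline([2.0, 2.0]) == [0.5, 0.5]
```

Nobody needs to hold the whole pipeline in their head at once; each level is verified against its own, smaller contract.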

The fly olfactory system, from [4].

The beginnings of this compartmentalization can even be seen in neuroscience work. A fantastic example is Professor Larry Abbott’s work to provide functional descriptions of different levels of the fly olfactory system [5], where three successive layers of connections were described as three different functional blocks (noise reduction, normalization, and a reservoir-computing-style state expansion). While there wasn’t any discussion of what processes are necessary to maintain those functions in the face of unexpected interactions with other systems, or of plastic changes in the blocks themselves (not to mention that this occurred in the humble fruit fly), the modular architecture used implies that such techniques could be used to engineer neural systems to behave in similar ways. So perhaps the engineers are up to the challenge of harnessing the computational capabilities of living tissue, in a way that is robust against the construction of entities that we would look upon with pity.
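Those three functional blocks compose naturally in code. Below is a deliberately simplified Python sketch of that kind of modular architecture; the functions, dimensions, and numbers are illustrative assumptions, not Abbott’s actual models:

```python
import random

random.seed(1)  # fixed seed so the random projection is reproducible

def reduce_noise(trials):
    """Block 1 (noise reduction): average repeated noisy measurements
    of each input channel."""
    return [sum(channel) / len(channel) for channel in trials]

def divisive_normalize(x):
    """Block 2 (normalization): divide each channel by total activity,
    so responses report relative rather than absolute intensity."""
    total = sum(x)
    return [v / total for v in x] if total else x

def expand(x, out_dim=12):
    """Block 3 (state expansion): project into a larger random basis
    and threshold at zero, reservoir-computing style."""
    weights = [[random.gauss(0, 1) for _ in x] for _ in range(out_dim)]
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in weights]

# Hypothetical input: 3 channels, each measured 4 times with noise.
trials = [[0.9, 1.1, 1.0, 1.0],
          [0.2, 0.1, 0.3, 0.2],
          [0.5, 0.4, 0.6, 0.5]]
state = expand(divisive_normalize(reduce_noise(trials)))

assert len(state) == 12              # higher-dimensional representation
assert all(v >= 0.0 for v in state)  # thresholded (rectified) code
```

The point is not biological accuracy but the engineering shape of the description: each block has a contract that can be stated, and tested, on its own.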



[1] Note that this quote refers to how biologists, not the author, view living organisms.

[2] Catts, Oron, and Ionat Zurr. “The Vitality of Matter and the Instrumentalisation of Life.” Architectural Design 83.1 (2013): 70-75.

[3] The mechanical view of life can be made more practical, and perhaps even less cold, by making sure to include things like “exceeding the physical limits of a tool degrades performance” and “suffering decreases efficiency.” The ideal living tool wouldn’t suffer from its use, but would enjoy pleasing its user (or perhaps master is a better word here) and seek to improve its own performance. Of course this could easily horrify someone with the imagery of a slave that participates in its own slavery, unable to even perceive what a better life might be.

[4] Keene, Alex C., and Scott Waddell. “Drosophila olfactory memory: single genes to complex neural circuits.” Nature Reviews Neuroscience 8.5 (2007): 341-354.

[5] This work includes the paper “Generating sparse and selective third-order responses in the olfactory system of the fly”, as well as work that is currently under review. Dr. Abbott gives a fantastic talk.

Want to cite this post?

Zeller-Townson, RT. (2013). Hubris and Hope for Engineering Brains. The Neuroethics Blog.

