Why the brain is not a computer, and why it matters for science and politics by Francesco P. Battaglia
As a practicing experimental neuroscientist, I have come to reflect more and more on foundational issues in brain research. Over a multi-decade career of looking at brain data, I have grown increasingly uneasy with the mainstream view of the brain. While it is certainly possible to carry out experiments that validate the current views, sometimes with solid, statistically strong results, we also observe many things that are very difficult to explain within that framework. Even more worrying, it is often unclear how the results of those experiments help answer deep questions about our brain, our behaviour, and our mental health.
Of course, much progress has been achieved: neuroscience is methodologically flourishing, with a number of amazing techniques for the detailed investigation of the function of the nervous system developed in the past 20-30 years. Still, it feels like we are stuck from the conceptual point of view, a feeling that I share with an increasing minority of experts in the field.
In order to see where a renewal may come from, it is helpful to reflect on the origins of the current view of the brain. As is regularly the case in science, those ideas do not stem purely from experiments and observation of nature. Because its object of study is the brain, the seat of our mind, neuroscience has, more than other scientific disciplines, major philosophical as well as ideological and political repercussions. My claim here is that, unfortunately, many current ideas in neuroscience lend implicit but important support to ideologies of economic and social oppression. Those same political implications also have a major effect on neuroscience research (via funding, publication, and hiring decisions, for example). Thus, criticizing the science, the philosophy and the politics of neuroscience can only go hand-in-hand, and this is required in order to make the science progress and become an inspiration for more liberating perspectives on life, humans and the world.
During my NIAS fellowship, I have for a moment taken some distance from lab work to reflect on these themes. Here, I will start from some historical thoughts on the evolution of neuroscience, and then highlight some alternative ways of thinking that have so far remained minoritarian, but may be developed into an infusion of new conceptual blood for the field.
A view of the brain requires, or at least implies, a theory of what the brain “does”, and therefore a view of how humans and other animals interact with one another and with the environment. Thus, more than is true for other subjects, a scientific statement about the brain is at the same time a statement about human nature and about societal relationships. Throughout history, theoretical views of the brain have been influenced by the dominant technology of each epoch: clockwork and hydraulics in Descartes’s time, electricity and telegraph networks in the nineteenth century.
For modern neuroscience the link with technology has been even more intimate: its conceptual foundations originated from the same intellectual milieu that generated cybernetics, computer science and artificial intelligence as disciplines in a short period of time, starting roughly at the end of World War II. [1] All these disciplines have at their core the concept of “computation” – the idea that the brain (or a brain-like artifact) receives “information” about the external world, processes (“computes”) it, and produces a result (for example, determining a behaviour) in a way that is “optimal” with respect to some “objective” criterion. The brain does so, the theory goes, based on a “modular” organization in which every piece of the machinery subserves a specific function. The conventional wisdom suggests that this same architecture may be reproduced in machines, resulting in “brain-inspired” AI that would work just like our brain. It is a major selling point for AI researchers and tech companies that they are making something that may be (at least in perspective) a valid substitute for human brains, and can therefore evaluate X-ray medical images, write newspaper articles, or maybe one day perform scientific research (in other words, replace skilled labour).
I will argue that it is the other way around: modern cognitive science and neuroscience propose a computer-based idea of the brain, and by making the brain “computer-like”, they make “brain-like” computers plausible (with all the PR advantages mentioned above). In turn, as proposed by philosopher Matteo Pasquinelli, [2] AI takes inspiration from the mechanisms of (neoliberal) economics, social organization, production and division of labour, much as Marx had already proposed for the automatic machines of the industrial revolution era.
The computation revolution that I alluded to above – which gave us computers, AI and neuroscience – rests on a few conceptual pillars that are also useful entry points for this discussion. First, the concept of information: the idea that an immaterial “code” exists and can be empirically and quantitatively studied, and on which computations may be performed by following an algorithm. The idea entered biology not least through the influence of Erwin Schrödinger (a founding father of quantum mechanics) and his essay “What is life?”, [3] where he essentially answered the question by saying “life is information”. His book was a massive influence on the discoverers of DNA and on the burgeoning field of molecular biology. Information is immaterial but can be implemented, encoded in matter, for example in DNA molecules.
Neurons encode information about what is currently being “processed” in their electrical activity, and information about past experiences in their structure (their synaptic connections). Information and algorithms reign over matter: as philosopher Hilary Putnam put it while defending the computational (technically, functionalist) standpoint, “the brain could be made out of Swiss cheese” and it wouldn’t matter, as long as it performed the right algorithms. Matter is therefore just an “implementational detail”. [4] There are powerful technical objections to this view, both in its implications for genetics [5] and for neuroscience [6], which would take too long to describe here. But from the political point of view, besides opening the door to “silicon brains” as a perfectly viable alternative to the old-fashioned biological counterpart, functionalism implies a clear-cut hierarchy (going back to Pasquinelli’s thesis) between episteme (“knowledge”) and techne (“craft”, following Aristotle). That hierarchy reaffirms a social hierarchy with mental labour at the top and manual labour as its subordinate, and this hierarchy is then hard-wired into our biology and can seamlessly be translated into AI machines.
The second, related conceptual pillar is the idea that the mind works by performing statistical inference, based on probability distributions that are “sampled”, that is, constructed from experience. This is the so-called Bayesian framework and, again, it is not immune to technical criticism, regarding its very falsifiability as a scientific theory [7] or the existence of meaningful probability distributions in a constantly changing world, which would deserve a full treatment. Here, I would rather emphasize that building probability distributions by sampling comes, almost unavoidably, with a somewhat arbitrary selection of which data go into the distribution and which are left out. The ever-present risk is then to capture and crystallize prejudice: both making a theory of a prejudiced brain and making a theory of the brain imbued with prejudice. When the same concept is applied to artificial systems, we see how it results in prejudice permeating AI, with deleterious effects for women, minorities and whoever is not part of the social group that controls the algorithm and the data selection, as powerfully described by data scientist Cathy O’Neil in her book Weapons of Math Destruction.
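The point can be made concrete with a minimal, hypothetical illustration (toy numbers, not real data): if a probability distribution is built only from records that pass some selection filter, every inference drawn from it quietly inherits that filter.

```python
import random

random.seed(42)

# Toy population: 10,000 scores with a true mean of 50.
population = [random.gauss(50, 10) for _ in range(10_000)]

# An arbitrary selection operation: only records above a threshold
# enter the sampled distribution (e.g. only candidates who were
# already admitted are ever observed).
observed = [x for x in population if x > 55]

true_mean = sum(population) / len(population)
sampled_mean = sum(observed) / len(observed)

# The distribution "learned" from the filtered sample is
# systematically shifted; any Bayesian-style inference built on it
# inherits the selection bias as if it were a fact about the world.
print(f"true mean:    {true_mean:.1f}")
print(f"sampled mean: {sampled_mean:.1f}")
```

Nothing in the fitted numbers reveals that a filter was ever applied: the prejudice is invisible once the distribution is built.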
The third pillar is cooperativity: the idea that algorithms can be performed by networks of simple agents, each with limited access to information. This is the key idea behind self-organizing neural networks, which form the basis both of the computational brain and of contemporary AI. In a revealing quirk of history, an early, extremely influential version of this theory was provided by the economist Friedrich Hayek in his book The Sensory Order. Now, why would a founding father of neoliberal economics write a book on brain and computation theory?
Hayek was interested in the parallel between brains and free markets, both interconnected systems of agents (neurons, or economic operators) whose coordinated activity produces emergent behaviours going above and beyond the scope of any single agent. In that sense, sensory perception (in brains) and price determination (in markets) are not too different from one another, and neoliberal economics becomes simply what biology does. A major point (we will return to it shortly) is that, from Hayek’s standpoint as well as that of the computational brain, this self-organization obeys an external criterion: utility maximization for economic systems, which translates into some notion of fitness for biological and neural systems. Reinforcement learning, a leading formalism for modelling learning in the brain, and the basis of AIs like Google DeepMind’s AlphaGo, which beat the world Go champion, is essentially a utility-maximization algorithm that, roughly speaking, works just as neoliberal economics teaches. Later, neuroscientists following in Hayek’s steps found empirical evidence supporting a mapping between “utility” itself and the activity of certain neurons in the brain, again hard-wiring economic principles into biology. Here too, space prevents me from formulating a technical critique of this argument.
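To make the analogy tangible, here is a minimal sketch (with made-up payoffs) of reinforcement learning in its simplest textbook form, the two-armed bandit: the agent’s entire “psychology” reduces to maintaining running utility estimates for each available action and acting to maximize them.

```python
import random

random.seed(0)

# Two actions with different (hidden) expected payoffs.
TRUE_PAYOFF = {0: 0.3, 1: 0.7}  # reward probability per arm

def pull(arm):
    """Return a reward of 1.0 with the arm's payoff probability."""
    return 1.0 if random.random() < TRUE_PAYOFF[arm] else 0.0

q = {0: 0.0, 1: 0.0}  # the agent's running "utility" estimates
alpha = 0.1           # learning rate
epsilon = 0.1         # probability of exploring a random arm

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.choice([0, 1])   # explore
    else:
        arm = max(q, key=q.get)       # exploit: maximize utility
    reward = pull(arm)
    q[arm] += alpha * (reward - q[arm])  # update utility estimate

best = max(q, key=q.get)
print("utility estimates:", q, "-> preferred action:", best)
```

The whole value of an action, in this framework, is a single scalar “utility”; everything else about the world is invisible to the agent.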
In sum, “neuro-AI”, the synergy of artificial intelligence and neurobiology, subserves multiple ideological tasks: 1) it reproposes, in a “science-acceptable” way, the Cartesian dualism between “mind” and “matter”, at the same time validating the hierarchy between “abstract” and “manual” labour; 2) it validates capitalist and colonial views by making them hard-wired into brain biology; and 3) it eliminates any fundamental distinction between our biological nature as humans and the computer realm, making our very essence amenable to mechanization and capitalist exploitation.
If neuroscience is not ideologically neutral (and cannot be, in my view), the question is then: can we find a conceptual foundation for the brain that avoids these traps and may be compatible with a non-capitalist, non-colonial, liberating perspective, while at the same time providing a better fit to experimental data and leading to more significant experimental paradigms?
We can begin to address this problem by examining the role of evolution in shaping the brain. As we saw before, in the computational view the brain solves “problems” posed by the environment by translating them into a computational framework and finding an optimal solution. The reason why the solution found by the brain is optimal is simply natural selection, regarded as a deus ex machina that shapes life towards ever more complex, perfect and higher forms, where “perfection” is equated with algorithmic optimality. In this view, evolution practically replaces the role of the intelligent engineer. I would argue that this dominant view is naïve and scientifically a non-starter (or should be): not only are the “ends” of this finalistic stance typically chosen arbitrarily (and ideologically), but the idea is at odds with modern biological findings on the complexity of inheritance mechanisms and the role of DNA in cell biochemistry. [8]
A potential way out is offered by the autopoietic (‘self-generating’) view of biology, of which Humberto Maturana was a proponent. [9] In this view, living systems are physical, material systems of a special kind, because their structure and dynamics are such that they can maintain and reproduce the conditions of their own organization (think of a flame that could ensure that it keeps on getting fuel and air). While the autopoietic view is completely rooted in the laws of physics, it defines its own laws, which become the proper explanatory level: laws about structure, dynamics, ontogenesis and interaction with the environment determine life and its evolution. This idea has three implications. First, natural selection is still important, weeding out obviously unviable mutations, but it is no longer the single force explaining everything in biology. As proposed by Maturana’s former student Francisco Varela and colleagues [10], rather than selecting an optimum, natural selection leaves standing a diverse range of sufficiently viable life forms, potentially all the alternatives allowed by the laws of living matter (for example, morphogenesis, homeostasis, energy balance). That means that evolution is not optimally solving the problem of adapting to a pre-existing ecological niche: living organisms dictate, by their structure, the modalities of their interaction with the environment (they are themselves the niche), and when they evolve, they open up new possibilities for themselves and for other life forms.
Second, complexity is generated naturally in living systems, but not as an ascent towards perfection or optimality. As evolutionary biologist Stephen Jay Gould already noted [11], bacteria, rather than human beings, represent the optimum of natural selection, understood as the fastest and most efficient way to create copies of a certain gene. Rather, complexity is generated, for example, by accretion, by new synergetic or symbiotic relationships, or simply by juxtaposition, once again along the paths made available by the laws of living systems. Borrowing the language of the physics of complex systems, we may say that the history of life (of organisms as well as species) is a sequence of spontaneous symmetry breakings: points at which the system “chooses” between similarly adaptive alternatives. It breaks a symmetry, and in doing so it creates structure. These breakings are often irreversible, and their sequence is in fact the history of the organism. You may remember from high-school biology how vertebrate embryogenesis proceeds: we begin as a single cell, then become a spherical ball of cells, then something with a front and a back, and by breaking more and more symmetries we become a full-blown animal. You may also remember how rigid embryogenesis is, with all vertebrates going through essentially the same steps. If you are a developing vertebrate, the laws of embryology (themselves determined by evolutionary history) are as inescapable as the laws of physics, and probably more consequential. Countless similar examples could be drawn from neuroanatomy and neurophysiology.
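Spontaneous symmetry breaking can be sketched with a toy simulation (the dynamics are assumed for illustration, not taken from biology): a system starting at an unstable symmetric point is pushed by small fluctuations towards one of two equally viable stable states, and which one it “chooses” is historical accident.

```python
import random

random.seed(7)

def develop(steps=400, noise=0.05):
    """Noisy dynamics dx = x - x^3: symmetric start at x = 0,
    two equally stable endpoints at x = +1 and x = -1."""
    x = 0.0
    for _ in range(steps):
        x += 0.1 * (x - x**3) + random.gauss(0, noise)
    return x

# Each run breaks the symmetry one way or the other; once a side is
# chosen, small fluctuations can no longer undo the choice.
outcomes = [develop() for _ in range(20)]
print([round(x) for x in outcomes])
```

The two endpoints are equally “adaptive”; the structure (left or right) is created by the breaking itself, and the sequence of such choices is the system’s history.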
Which brings me to my third point: living organisms live at the complex interface between the mathematical rules of living matter (which include concepts from, for example, disordered systems and chaos theory, and are much richer than the laws of physics as you may imagine them) and the details of their being that are historically determined (the history of the organism, of the species, of life). In this sense, with an intended pun on Marx, we may talk about a “historical materialism of living matter”, in which teleology, after we chase it out the door, does not come back through the window, as we say in Italy. I realize that this puts me epistemologically at odds with many biology colleagues, as I believe that teleology does not belong in the natural sciences and always risks being a vehicle for ideology.
Doing neuroscience research from this new starting point means deeply modifying our practices and conceptual frameworks: rather than focussing on problems and the computational algorithms to solve them, we should think in terms of the constraints and possibilities that come from living matter and its histories. We should try to reconstruct those histories (of species, organisms, behaviours), and when faced with a complex behaviour, understand both the “laws” that enabled it and the historical path that yielded it. Naturally expressed behaviours, comparative work across animal species, and links across levels of analysis (brain, body, environments, social sphere) should be emphasized. Single-case studies that take into account the history and uniqueness of individuals should be combined with more conventional statistical approaches.
This will not be easy, and in many cases it will require experimental and analytical tools that are not yet available. It is time for theoretical ideas, however speculative, to drive experimental and methodological development, rather than the other way around. Outside of the lab, neuroscientists should also become aware of the role that they and their science play in society, and realize that, whatever they do, it will not be neutral.
Notes
[1] Matthew Cobb, The idea of the brain (Profile Books 2021).
[2] Matteo Pasquinelli, The eye of the master, a social history of artificial intelligence (Verso 2023).
[3] Erwin Schrödinger, What is life? (Cambridge UP 2012).
[4] David Marr, Vision (MIT Press 2010).
[5] Dan Nicholson, What is life? Revisited (Cambridge UP 2025).
[6] Vicente Raja, “The Motifs of Radical Embodied Neuroscience”, European Journal of Neuroscience 60/5 (2024): 4738-55; https://doi.org/10.1111/ejn.16434.
[7] Madhur Mangalam, “The Myth of the Bayesian Brain”, European Journal of Applied Physiology (ahead of print, June 26, 2025); https://doi.org/10.1007/s00421-025-05855-6.
[8] James A. Shapiro, Evolution: A View from the 21st Century (Cognition Press 2022).
[9] Humberto Maturana, Autopoiesis and Cognition (Springer 1980; 1991).
[10] Francisco J. Varela, Evan Thompson and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (MIT Press 2017).
[11] Stephen Jay Gould, Full House: The Spread of Excellence from Plato to Darwin (Three Rivers Press 1997).
This article originally appeared in Dutch in The Dutch Review of Books.