My Concise Oxford Dictionary defines “free-will” as “the power of acting without necessity or constraints”. A much-debated question is whether human beings have this capability. Most answers are based on easy-going introspection: “Surely it is evident that we can make up our own minds on any question and in any situation we find ourselves?” However, over past centuries and decades, various thinkers, for various reasons, have concluded that believers in free-will deceive themselves. According to their way of thinking, it can only be an illusion: all is determined by forces outside our control.
In essence, there have been two main arguments in support of this determinism. They are:
- The theoretical impossibility of mental liberty coexisting with an all-powerful deity (see the doctrine of predestination).
- A belief that the neural systems that underpin human action and thought operate in a machine-like manner.
For those whose premise is the supremacy of God, free-will could only occur if the Deity were to give up power voluntarily. The argument continues that this is a step it could not take because doing so would mean cancelling out the most basic fact of its existence, namely its all-powerful nature.
For those who see brains as machines, all must be explained in terms of mechanical processes. They ask what they assume to be a rhetorical question: “How could a mere machine be endowed with free-will?” Both of these arguments can be treated as cases of special pleading, leaving fundamental questions unanswered.
Free-will as a functional reality
However, there is an intermediate possibility that depends on the notion of free-will as a functional reality. This is attractive not only because it overcomes the objections of those who insist on a mechanistic explanation, but also because it fits with what introspection tells us. Let me explain.
Earlier in this chapter, under the heading “modes of description”, I described my first viewing of the powers of an electron microscope and being amazed to see how unrecognisable the image of the same minute portion of a leaf could be when viewed at different levels of magnification. There seemed to be absolutely nothing in common between them. However, the specialist giving the demonstration seemed to have no difficulty in describing both their functions and the links between them.
But that was many years ago and, no matter how seemingly complete the explanations he gave at the time, by now they would have had to be revised in all sorts of ways. It could hardly be otherwise, for the relatively new and rapidly blooming science of molecular biology, aided by ever more sophisticated technology, has been revealing ever-increasing levels of complexity and creating a mushrooming of questions to ask. Accordingly, it would be surprising to find any serious scientist who currently believes that it will be possible, in anything like the near future, to arrive at a definitive description of the multiplicity of neural processes and interconnections that enable our brains not only to classify and recognise but also to learn and use motor and intellectual skills so effectively.
Computers competing with the human brain
For analogous reasons, a similar situation obtains in the field of computer-based brain-modelling. Despite all the astonishing progress made in this field, computer scientists still have far to go before realising the goal of constructing a machine capable of mimicking the full extent of the intellectual and functional capacities of a human brain. Simply put, the problem is the daunting degree of interconnectivity within the brain’s neural networks. To model this, amongst other things, it would be necessary to take account of:
- The estimated 100 billion neurons in the brain.
- The extensive interlinking of each neuron to numerous other neurons, including those belonging to systems that provide sensory and somatosensory inputs, always by means of multitudes of neural processes.
- The requirements of neurophysiological plausibility.
No wonder I keep hearing computer scientists saying that the task of competing with the human brain on its own terms will remain well beyond their resources for the foreseeable future.
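The scale of the interconnectivity in question can be suggested by a rough back-of-envelope calculation. The sketch below is illustrative only: the figure of 100 billion neurons is the estimate quoted above, but the number of synapses per neuron is an assumed round figure, not a claim made in this essay.

```python
# Back-of-envelope estimate of the brain's connectivity.
# Assumptions (for illustration only): ~100 billion neurons, and a
# conservative round figure of ~1,000 synapses per neuron.
neurons = 100e9               # ~10^11 neurons, as estimated above
synapses_per_neuron = 1_000   # assumed low-end figure

total_synapses = neurons * synapses_per_neuron
print(f"Total connections: {total_synapses:.0e}")  # on the order of 10^14

# Even at a single byte per connection, a static wiring table alone
# would need ~100 terabytes -- before modelling any of the dynamics,
# plasticity or chemistry of even one synapse.
storage_tb = total_synapses * 1 / 1e12
print(f"Storage at 1 byte/connection: {storage_tb:.0f} TB")
```

Whatever the exact figures, the point stands: the sheer combinatorics of the wiring, quite apart from its dynamics, dwarf the resources of any current modelling effort.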
Characteristics of hypothetical brain modelling machines
Even if there are some computer scientists who are more optimistic, this would not be of any consequence for my explanation of functional free-will, for it does not depend on the existence of actual brain-modelling machines. Rather, it involves thought-experiments relating to hypothetical creations whose operational principles are based on known characteristics of the brain. Accordingly, the machines would have to use a considerable number of different sensor-types, each responding to a different modality of information (light, sound, scent, taste, various kinds of pressure, etc.), feeding a vast number of extensively interlinked mini-processors (taking on the role of neurons). These would have to be capable of:
- Separating out and usefully recombining relevant aspects of the sensory information extracted from the environment by means of the multiplicity of sensors with task-specific characteristics, appropriately situated in a wide range of locations (multimodal processing).
- Providing contextual information derived not only from relevant parts of long-term memory, as built up through the agency of a number of interacting subsystems over a lifetime of experience, but also from the totality of the current environment, as captured and interpreted by sensory systems taking information from all parts of the body (temporal and spatial context).
- Monitoring their own behaviour, using the feedback (provided by relevant sensory systems, memory stores or, much more likely, a combination of the two) that is required by analytic processes for both consciousness and learning.
- Organising and implementing actions (involving the coordination of complex muscle systems) and thought-processes (motor and mind control).
- Generating feeling-based criteria upon which to make choices (decision making).
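The five capacities just listed can be caricatured as a toy processing loop. The sketch below is purely illustrative scaffolding for the thought-experiment: every class, method name and value is invented for the purpose, and nothing in the essay specifies such an interface.

```python
# A toy caricature of the five capacities, reduced to one loop.
# All names here are invented for illustration.

class BrainMachine:
    def __init__(self):
        self.memory = []          # stands in for long-term memory

    def sense(self, inputs):      # (1) multimodal processing
        # combine readings from many sensor modalities into one record
        return {modality: value for modality, value in inputs}

    def contextualise(self, percept):   # (2) temporal and spatial context
        # interpret the percept against everything stored so far
        return {"percept": percept, "history": list(self.memory)}

    def act(self, context):       # (4) motor and mind control
        return f"acted on {len(context['percept'])} modalities"

    def monitor(self, outcome):   # (3) feedback for consciousness/learning
        self.memory.append(outcome)

    def decide(self, options):    # (5) feeling-based decision making
        # a stand-in "feeling": prefer the tersest option
        return min(options, key=len)

machine = BrainMachine()
percept = machine.sense([("light", 0.8), ("sound", 0.3)])
context = machine.contextualise(percept)
action = machine.act(context)
machine.monitor(action)
choice = machine.decide(["flee", "investigate"])
print(action, "| chose:", choice)
```

Of course, the whole force of the argument is that scaling such a loop to the interconnectivity described earlier is precisely what lies beyond present resources; the sketch only fixes the shape of the capacities, not their realisation.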
Equipped in various ways with these five capacities, the brain-mimicking computers would have to be able:
- To make useful syntheses of the mass of data extracted from the multiple sources of sensory input, with a view both to making sense of it and, subsequently, to enabling recognition.
- To do the above in any context, no matter what the domain of description, or how many variables have to be taken into consideration.
- To learn from both positive and negative feedback (particularly from mistakes, using previously acquired, task-specific error-correction skills).
The machine must also be capable of making sense of:
- Information derived from within the relatively easy (but nevertheless potentially fiendishly complex) domains researched by practitioners of the so-called “hard sciences”, such as mathematicians, physicists and molecular biologists.
- Much less easily classified material relating to the disciplines traditionally placed under the umbrellas of the social sciences and the arts.
In short, the brain-machine envisaged in the thought-experiment would have to be at ease with making use of input pertaining to any realm of ideas whatsoever, however fanciful, simple-minded or far-fetched. It would also need to be capable of self-deception and of crises of confidence in its own findings.
But this is far from all. To be like the human brain, every brain machine would have to have an ever-evolving memory-store, based on a ceaseless stream of ongoing inputs and capable of creating a unique internal world (analogous to “personal experience”). Accordingly, each machine would have a ‘personalised’ reaction to each and every contingency. In addition, like Antoni Tàpies and myself, it would have to be capable of having fun with the idea of creativity, however absurd its premise.
In the light of all these requirements (and no doubt many more), it is clear that neither the computer hardware designers nor the computer programmers responsible for creating them would be able to predict the behaviour of the brain-modelling machines envisaged in our thought-experiment. Only beings or groups of beings equipped with capacities comparable with those of the second of the hypothesised Gods* (the one capable of preplanning everything, from the evolution of species down to the trajectory of every floating dust particle, for all eternity) would be able to unscramble an omelette of such complexity.
Moreover, even assuming that:
- The self-monitoring aspect of the brain-modelling machines could be equated with consciousness.
- The implied awareness of self could be programmed to incorporate both a sense of agency and a means of ranking the levels of both the credibility and the desirability of conclusions reached.
The machines would still be like human brains in the sense that they could deal with only an extremely limited part of the information provided by the massively complex arrays and sequences of processes involved in determining their current behaviour.
All in all, it is safe to conclude that, even if machines could be made that meet these extremely exacting and, at present, far from attainable criteria, they would be unable to perceive the mechanically and contextually determined origins of their actions or thoughts. Accordingly, assuming the self-monitoring capacity of such machines could be equated with introspection, they would have no choice but to consider themselves in possession of free-will.
Moreover, if all traces of determinism remain obscure to the machines themselves, how would their output appear to other, similarly constructed and programmed machines? Clearly, from the perspective of any one machine seeing itself as having free-will, all other machines created and evolved in accordance with the same principles would likewise be seen as in possession of their own free-wills (or, possibly, dismissed as “just machines”).
Functional free-will and experiential reality
Since all the above arguments apply to any mechanistic way of thinking, whether it is focused on hypothetical computers or biological brains, they must also have relevance to speculations about the nature of free-will in our species. Just as no theory of the solar system or the universe, however indisputably correct, can stop us experiencing the sun as rising in the morning and setting in the evening (see Post on “Why I am a flat Earther”), so no mechanistic theory of brain function can deprive us of our sense of possessing free-will. It may be an illusion, but it is with us to stay, along with the sense of self, personal feelings and motivation it can provide.
Finally, a word on the future of machines that mimic human brains. Since the functional free-will argued for above is predicated upon the idea that all machines, human or electronic, evolve in idiosyncratic ways, their diversity would be ensured. Accordingly, so would be their role in evolutionary processes that favour the survival of the fittest (whether as individuals, as contributing members of groups or as friends of the environment), with all their possible implications and risks.
Posts from “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”
- Playful fancies as a stimulus to creativity
- An inspirational story: a child draws a potato
- The case for being a flat-earther
- The nature of truth
- Tapies advocates playing games
- Cézanne falls short
- False confidence
- Self deception
- Free will and determinism
Other Posts from “Fresh Perspectives on Creativity”
- Chapter 6: “The Modernist experiment”.
- Chapter 7: “The first Modern Painter”: A surprise suggestion.
* A reference to an earlier passage in the chapter from which this post is an extract, namely “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”. It consists of a not-too-serious run-through of the hypothetical choices that would have faced an all-powerful deity when sitting at his/her desk planning the Big Bang. It is scheduled to appear in a later Post.