Michael Kidner, the artist, was a teacher at the Bath Academy of Art when I was a student there. In my view, he was one of the most interesting and important artists of the late 20th and early 21st centuries. It was my great luck that I was able to form a friendship with him that lasted over forty years, until his death in 2009. In 2007 he had a one-man show at the Flowers East Gallery, London, to which he gave the title “No goals in a quicksand”. I was asked to help with the writing of the catalogue.* One of my contributions was a slightly edited version of a chapter from “Fresh Perspectives on Creativity”, a book I was working on at the time and which is now one of the four books I am currently publishing in instalments on the Posts Page of this website. My other main contribution to the catalogue was an introductory piece called “Michael Kidner the man”, which you will find reproduced after Figure 1 below. To complete this Post, I have also included a link to the chapter on Michael (“The big bang, chaos and the butterfly”) and an image of Michael’s last finished painting (Figure 2):
Michael Kidner the man
When considering Michael’s work, it is easy to concentrate attention too much on the science-based ideas and too little on the man and his feelings. Having got to know him well, I have been impressed by a quality which I am tempted to call ‘naivete’, since it reminds me of a quotation from Matisse: “The effort to see without distortion takes something like courage and this courage is essential to the artist, who has to look at everything as though he saw it for the first time.” Michael looks at mathematically-based systems in this spirit and is not daunted by his shortcomings as a mathematician, of which he is only too aware. Thus, though characterising himself as groping towards an understanding of matters beyond his realistic grasp, Michael does not see this as a reason for abandoning his obsession with mathematical propositions. He sees these as relating to the fundamental mysteries of science and he looks at the evolution of systems, generated by a simple logic, in much the same way as Cézanne must have looked at natural objects, with total concentration and never-ending wonder at the seemingly endless layers of novelty that open up before him. In other words, though he works with the ideas of science, he responds to them with the sensibility of an artist, experiencing them with innocent and ever-inquisitive eyes. It is his personal and, therefore, quintessentially different visual response that reveals and creates, not only the perceptual excitements but also the metaphors for the human condition that are to be found in his work.

Michael works slowly and doggedly. Daily he clambers up to his attic studio where he spends untold hours, apparently oblivious of time and human needs, patiently searching for the key that will give meaning to his latest quest.
His explanations of what he is doing are delivered in a slightly hesitant, even seemingly self-deprecating manner which completely fails to obscure the steadfast determination to go further that imbues his every word. There is no thought of calling it a day, even at the age of ninety and despite his physical handicaps. Cézanne wanted to die painting: one feels that Michael has the same level of commitment.
The purpose of this Post is to provide the link below to “The perception of surface”, Chapter 7 of my book “Painting with Light and Colour”. This provides illustrations and explanations of ways we perceive: (a) reflected light as opposed to transmitted light, (b) matt surfaces as opposed to glossy ones, and (c) the complexities of interreflections. Apart from their intrinsic interest, the function of these in the book is to prepare the ground for the next chapters, which explain how Georges Seurat’s ideas about “painting with light”, either directly or, more often, indirectly, were to revolutionise the use of colour in paintings in a multiplicity of ways. Thus, the next Post will provide a link to Chapter 8, which, after introducing Seurat’s ideas and methods, starts the process of going more deeply into their game-changing ramifications. What follows below gives a foretaste of the nature of these.
Not nearly enough importance is given to the impact of Georges Seurat’s ideas concerning the depiction of reflected light. Their significance lies in the fact that, either directly or indirectly, they were to have a transformative, game-changing influence on the way later artists:
Painted reflected light.
Approached the depiction of illusory pictorial space.
Explored whole-field colour relations.
Ramped up the colourfulness of their paintings.
Too often in the past the focus has been on Pointillism as a method, treating it as a fascinating, but not so very important phase in art history. In contrast, in my book, I show that the ideas behind Seurat’s innovations, as developed and transformed by his successors, were to open up possibilities of permanent value for anyone who makes paintings of virtually any kind. With hindsight we can see that Seurat’s ideas:
Furnished one of the two pillars that underpin the transformative use of colour found in the work of numbers of progressive artists, including Gauguin and Bonnard. As we shall see in later chapters, when we come to the subject of whole-field colour relations, the artist primarily responsible for the other pillar was Cézanne. It was these two sets of ideas that were to be synthesised in the dogmas of Marian Bohusz-Szyszko.
Opened the way to scientific experiments that supplied coherent insights into the working principles of the eye/brain systems that enable the perception of surface-solidity, surface-form, in-front/behind relations and the qualities of light (reflected light and ambient illumination). Since it is the operation of these that makes possible the only way that artists can use colour/lightness relationships to deceive viewers into interpreting the content of paintings as existing in illusory pictorial space, it is hard to exaggerate their practical value for painters wishing to represent any of the above-mentioned qualities in their work.
To prepare for grappling with all this, it is helpful to be clear about the role that the light reflected from surfaces has in creating our sense of their solidity and our perception of both their form and their interconnectedness.
Chapter 1 of my book “Painting with Light and Colour” told of the dogmas of Professor Bohusz-Szyszko and his claim that they were “all you need to know about painting”. It also praised their value as a practical guide.
Chapter 2 is about doubts that arose concerning their theoretical basis. It was the experience of living with these that prepared me for a critical moment in my life. This came several years later while I was reading an article in the Scientific American that had been brought to my attention by one of my colleagues in the Psychology Department at the University of Stirling. The purpose of the article was to present what the author, Edwin Land, fervently believed to be a mould-breaking understanding of the neural computations used by the eye/brain to produce the phenomenon of “colour constancy”. Actually Gaspard Monge, a French mathematician, had beaten him to the post by nearly two hundred years. But this did not stop the contents of Land’s article from being the catalyst to the evaporation of my worries. More importantly, my efforts to better understand the significance of Land’s ideas were eventually to open the way for collaborations with colleagues in the University of Stirling Vision Group (see link below*). Without their help, few of the new insights relating to the use of colour in paintings that can be found in my book would have materialised.
But this is jumping the gun. First click on the link below to access the chapter on the doubts that had haunted me and on the process of questioning they set in motion. Its function is to explain why there is a need for the new ways of thinking and doing that play such an important part in the chapters that follow.
*Above and in many places in my books, I acknowledge the importance of the role of colleagues in the development of the new science-based ideas put forward in them. As well as acknowledging the help of various individual scientists at the University of Stirling, I call attention to the role played by the University of Stirling Vision Group. For more on this please click here to access the Post I have written on its personnel and its activities.
Posts relating to other chapters from “Painting with Light and Colour”:
It is well known that the Impressionists and their immediate successors (often referred to in my books as the Early Modernists) reacted strongly against what they saw as the straitjacket of the traditional ideas taught in the academies. The purpose of this Post is to publish Chapter 4 of “Painting with Light and Colour”, which provides a short introduction to what these actually were, with comments on the pros and cons of following them uncritically. Normally, I have been writing a separate introduction for my Posts but on this occasion I have used the Introduction from the chapter itself. Accordingly, when you open the link to the chapter below, you may want to avoid reading the same thing twice.
Traditional ideas and their limitations
This chapter has four main purposes. These are to:
Introduce some traditional ideas about the depiction of space and light.
Discuss their limitations.
Suggest more comprehensive and satisfactory alternatives.
Prepare the way for a better understanding of the significance of Seurat’s science and his colour-based innovations.
The first of these objectives is met by elaborating on three aspects of painting which, after being explored in some depth by the Renaissance artists, became embedded in the academic tradition. Although satisfactorily serving their purpose for the artists who followed them, it was these that were found wanting by the Impressionists. More importantly in the present context, it was also these that were given a new dimension by Seurat and those who built upon his ideas. The three aspects were detailed in the last chapter:
Significantly, as we shall see, it is only with respect to the first item on the list (atmosphere) that colour of any sort was seen as having a role to play. Even then only blue was required.
In contrast, the academic rules guiding the depiction of the quality of light and shading assigned no function to colour. The practice of the Renaissance artists and the teaching of the Academies placed the emphasis exclusively on variations in “lightness” (what the English call “tone” and the Americans term “value”).
The science referred to in the title of this post had a lot to do with the revolution in understanding that gave birth to what we now know as the science of “visual perception”. The first intimations that an important change was afoot came in the later part of the seventeenth century with Isaac Newton’s work on the composition of light. However, the paradigm shift came in the late eighteenth century when the work of Gaspard Monge and others made it clear that colour is not a property of surfaces but is made in the head. This completely new understanding of the nature of visual perception was to be fleshed out in the next century by a flood of confirmatory studies. A milestone was the publication by Hermann von Helmholtz of a three-volume review of the new domain of study. It was a magisterial achievement that showed why, despite his considerable debt to others, he has been described as the “Father of the Psychology of Perception”. The third and last of these volumes was published in 1867, just in time to have a profound influence, first on the young Impressionists and, then, in the remainder of the nineteenth and in the early twentieth centuries, on many of their Modernist Painter successors.
The new science misrepresented
One of the purposes of “Painting with Light and Colour”, my book on the theory and practice of painting, is to provide a better account of the hugely important role of the new sciences of vision and visual perception in the history of painting. In this post I am publishing Chapter 5, which continues the process of setting the scene started in the Introduction to the science at the beginning of the book. It does so by revisiting and shedding new light on important aspects of colour theory. It has four objectives:
To question the widespread dissemination of half-truths and falsehoods in how-to-do-it books and articles on painting.
To sort out misconceptions about colour theory that I have found to be common amongst my students.
To show how well-known concepts are given new significance when considered in the context of the realisation that colour is not a property of surfaces but is made in the head.
To introduce other more recent ideas that will play a key role in the chapters that follow. These are likely to be unfamiliar to most people, as they are the fruit of little known, late twentieth century experimental clarifications, which enable sense to be made of formerly unsolved mysteries.
This post focuses on the revolution in painting that gathered momentum in the latter part of the nineteenth century. A key factor in its genesis was an earlier and still ongoing revolution in the then emerging science of visual perception (more posts on aspects of this to follow). At the core of this was an accumulation of evidence that demonstrated that colour is not a property of surfaces in the external world but a construction by the eye/brain. In Chapter 6 of my book “Fresh Insights into Creativity”, I have described what occurred as “The Modernist Experiment”. The word “experiment” is used because the discoveries of science, the threat of the recently invented photograph and the challenge to well-embedded assumptions posed by the Japanese print, led to:
A root and branch questioning of just about every aspect of painting.
A concerted effort to make paintings that would push forward the search for answers.
More than ever before, the thought-processes and working practice of artists illustrated the earlier groundbreaking contention of John Constable that “paintings should be regarded as experiments”.
A link to the chapter
Please click on the link below to access the chapter in question. In it you will read how the revolution in painting evolved between the 1860s, when the young Impressionists met with now celebrated poets and writers in the Café Guerbois, Paris, and the 1960s, when an exhibition called “The Art of the Real”, at the Museum of Modern Art, New York, prepared the way for the arrival of so-called “Post Modernism” (to be the subject of a later Post).
My Concise Oxford Dictionary defines “free-will” as “the power of acting without necessity or constraints”. A much debated question is whether human beings have this capability. Most answers are based on an easy-going introspection. “Surely, it is evident that we can make up our own mind on any question and in any situation we find ourselves?” However, over past centuries and decades various thinkers, for various reasons, have come to the conclusion that believers in free-will deceive themselves. According to their way of thinking, it can only be an illusion: All is determined by forces outside their control.
In essence, there have been two main arguments in support of this determinism. They are:
The theoretical impossibility of mental liberty coexisting with an all-powerful deity (see the doctrine of predestination).
A belief that the neural systems that underpin human action and thought operate in a machine-like manner.
For those whose premise is the supremacy of God, free-will could only occur if the Deity were to give up power voluntarily. The argument continues that this is a step it could not take because doing so would mean cancelling out the most basic fact of its existence, namely its all-powerful nature.
For those who see brains as machines, all must be explained in terms of mechanical processes. They ask what they assume to be a rhetorical question: “How could a mere machine be endowed with free-will?” Both of these arguments can be treated as cases of special pleading, leaving fundamental questions unanswered. As might be expected, there have been many attempts to confront these, including the suggestion that follows, which depends on the notion of free-will as a functional reality.
Free-will as a functional reality
This possibility, as outlined below, is attractive not only because it has the advantage of overcoming the objections of those who insist on a mechanistic explanation, but also because it fits with what introspection tells us. Let me explain.
Earlier in this chapter, under the heading “modes of description”, I described my first demonstration of the powers of an electron microscope and my amazement at seeing how unrecognisable images of the same minute portion of a leaf could be when viewed at different levels of magnification. There seemed to be absolutely nothing in common between them. However, the specialist giving the demonstration seemed to have no difficulty in describing both their functions and the links between them.
But that was many years ago and no matter how seemingly complete the explanations he gave at the time, by now they would have had to be revised in all sorts of ways. It could hardly be otherwise, for the relatively new and rapidly blooming science of molecular biology, aided by ever more sophisticated technology, has been revealing ever-increasing levels of complexity and creating a mushrooming of questions to ask. Accordingly, it would be surprising to find any serious scientist who currently believes that it will be possible, in anything like the near future, to arrive at a definitive description of the multiplicity of neural processes and interconnections that enable our brains not only to classify and recognise but also to learn and use motor and intellectual skills so effectively.
Computers competing with the human brain
For analogous reasons, a similar situation obtains in the field of computer-based brain-modelling. Despite all the astonishing progress that has been made in this field, computer scientists have still far to go before realising the goal of constructing a machine capable of mimicking the full extent of the intellectual and functional capacities of a human brain. Simply put, the problem is the daunting degree of interconnectivity within the brain’s neural networks. To model this, amongst other things, it would be necessary to take account of:
The estimated 86 billion neurons in the brain, each with an average of 1,750 connections to other neurons, including those belonging to systems that are fed by both sensory and somatosensory inputs.
The requirements of neurophysiological plausibility.
No wonder I keep hearing computer scientists saying that the task of competing with the human brain on its own terms will remain well beyond their resources for the foreseeable future.*
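To give a rough sense of the scale involved, a simple back-of-the-envelope calculation, using the estimates quoted above (which are broad assumptions rather than precise figures), puts the number of connections that would have to be modelled in the hundreds of trillions:

```python
# Rough estimates quoted in the text above (assumptions, not precise figures)
NEURONS = 86_000_000_000            # estimated neurons in a human brain
AVG_CONNECTIONS_PER_NEURON = 1_750  # average connections per neuron

total_connections = NEURONS * AVG_CONNECTIONS_PER_NEURON
print(f"Connections to model: {total_connections:,}")
# prints "Connections to model: 150,500,000,000,000" (about 1.5 × 10^14)
```

That figure counts only the static wiring; it takes no account of the dynamic, ever-changing strengths of those connections, which is where the real difficulty lies.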
Characteristics of hypothetical brain modelling machines
Even if there are some computer scientists who are more optimistic, this would not be of any consequence for my explanation of functional free-will, for it does not depend on the existence of actual brain-modelling machines. Rather, it involves thought-experiments relating to hypothetical creations whose operational principles are based on known characteristics of the brain. Accordingly, the machines would have to use a considerable number of different sensor-types, each responding to a different modality of information (light, sound, scent, taste, various kinds of pressure, etc.), feeding a vast number of extensively interlinked mini-processors (taking on the role of neurons). These would have to be capable of:
Separating out and usefully recombining relevant aspects of the sensory information extracted from the environment by means of the multiplicity of sensors with task-specific characteristics, appropriately situated in a wide range of locations (multimodal processing).
Providing contextual information derived, not only from relevant parts of long-term memory, as built up through the agency of numbers of interacting subsystems, over a lifetime of experience, but also from the totality of the current environment, as captured and interpreted by sensory-systems, taking information from all parts of the body (temporal and spatial context).
Monitoring their own behaviour, using the feedback (provided by relevant sensory systems, memory stores or, much more likely, a combination of the two) that is required by analytic processes for both consciousness and learning.
Organising and implementing actions (involving the coordination of complex muscle systems) and thought-processes (motor and mind control).
Generating feeling-based criteria upon which to make choices (decision making).
Equipped in various ways with these five capacities, the brain-mimicking computers would have to be able:
To make useful syntheses of the mass of data that has been extracted from the multiple sources of sensory input, with a view to both making sense and, subsequently, enabling recognition.
To do the above in any context, no matter what the domain of description, or how many variables have to be taken into consideration.
To learn from both positive and negative feedback (particularly from mistakes, using previously acquired, task specific error-correction skills).
The machine must also be capable of making sense of:
Information derived from within the relatively easy (but nevertheless potentially fiendishly complex) domains researched by practitioners of the so-called “hard sciences”, such as mathematicians, physicists and molecular biologists.
Much less easily classified material relating to the disciplines traditionally placed under the umbrellas of the social sciences and the arts.
In short, the brain-machine envisaged in the thought experiment would have to be at ease with making use of input pertaining to any realm of ideas whatsoever, however fanciful, simple-minded or far-fetched. It would also need to be capable of self-deception and crises of confidence in its own findings.
But this is far from all. To be like the human brain, every brain machine would have to have an ever-evolving memory-store, based on a ceaseless stream of ongoing inputs and capable of creating a unique internal world (analogous to “personal experience”). Accordingly, each machine would have a ‘personalised’ reaction to each and every contingency. In addition, like Antoni Tàpies and myself, it would have to be capable of having fun with the idea of creativity, however absurd its premise.
In the light of all these requirements (and no doubt many more), it is clear that neither the computer hardware designers nor the computer programmers responsible for creating them would be able to predict the behaviour of the brain-modelling machines envisaged in our thought-experiment. Only beings or groups of beings equipped with capacities comparable with the second of the hypothesised Gods** (the one capable of preplanning everything, from the evolution of species down to the trajectory of every floating dust particle, for all eternity) would be able to unscramble an omelette of such complexity.
Moreover, even assuming that:
The self-monitoring aspect of the brain-modelling machines could be equated with consciousness.
The implied awareness of self could be programmed to incorporate both a sense of agency and a means of ranking the levels of both the credibility and the desirability of conclusions reached.
The outcome would be machines resembling human brains in the sense that they could only deal with an extremely limited part of the information being provided by the massively complex arrays and sequences of processes involved in determining their current behaviour.
All in all, it is safe to conclude that, even if machines could be made that meet these extremely exacting and, at present, far from attainable criteria, they would be unable to perceive the mechanically and contextually determined origins of their actions or thoughts. Accordingly, assuming the self-monitoring capacity of such machines could be equated with introspection, they would have no choice but to consider themselves as being in possession of free-will.
Moreover, if all traces of determinism remain obscure to the machines themselves, how would their output appear to other, similarly constructed and programmed machines? Clearly, from the perspective of any one machine seeing itself as having free-will, all other machines that have been created and evolved in accordance with the same principles would likewise be seen as in possession of their own free wills (or, possibly, dismissed as “just machines“).
Functional free-will and experiential reality
Since all the above arguments apply to any mechanistic way of thinking, whether it is focused on hypothetical computers or biological brains, they must also have relevance to speculations about the nature of free-will in our species. Just as no theory of the solar system or the universe, however indisputably correct, can stop us experiencing the sun as rising in the morning and setting in the evening (see Post on “Why I am a flat Earther”), so no mechanistic theory of brain function can deprive us of our sense of possessing free-will. It may be an illusion, but it is with us to stay, along with the sense of self, personal feelings and motivation it can provide.
Finally, a word on the future of machines that mimic human brains. Since the functional free-will argued for above is predicated upon the idea that all machines, human or electronic, evolve in idiosyncratic ways, their diversity would be ensured. Accordingly, so would be their role in evolutionary processes that favour the survival of the fittest (whether as individuals, as contributing members of groups or as friends of the environment), with all their possible implications and risks.
Posts from “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”
* However, the European Union is currently committing €1,200,000,000 over 10 years to “The Human Brain Project”, with the stated objective of finding ways of modelling the human brain.
** A reference to an earlier passage in the chapter from which this post is an extract, namely “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”. It consists of a not-too-serious run-through of the hypothetical choices that would have faced an all-powerful deity when sitting at his/her desk planning the Big Bang. It is scheduled to appear in a later Post.
This “Post” makes available the introduction to my book “Painting with Light and Colour” (please find link below). As is the case with all introductions, it comes at the beginning of the book. However, I have delayed publishing it on this “Posts Page” until after making available many of the chapters it introduces, as well as some additional material. As a result, when writing this “Posts Page” introduction to the actual book, it has been possible to refer to what comes after it. As I can see that doing so may have many advantages for those who have read the previously published chapters and material, and maybe even for new readers, I have not hesitated to do so.
Two earlier Posts draw attention to the historical importance of Seurat’s science-based ideas on the practice of painting light and colour. In the “Venetian Colourists”, it is argued that the artists known by this label and those who built upon their ideas were not “colourists” at all. Rather they were “lightists”, whose reputation as “colourists” was based on their mastery of whole-field lightness/darkness relations (“chiaroscuro“). Colour did not enter into the theory of painting light until Seurat introduced his idea of using optically-mixed arrays of separate dots of complementary pigment-colours to give a new kind of luminosity to his paintings. This step proved to be the precursor of a transformative jump from “lightists” to “colourists”.
The next steps, which were taken by such artists as Cézanne, Gauguin and Bonnard, were later to inspire the synthesis of my teacher Marian Bohusz-Szyszko. It is these that provide the main subject matter of the first two chapters of my book “Painting with Light and Colour”. Chapter 1, “The Dogmas”, along with my Posts Page introduction to it, can be obtained by clicking on this link. In it I explain how, in addition to having an abiding influence on both my own painting and my teaching, the synthesis was to:
Provide the questions that led to my scientific research into the perception of surface, space, light and harmony in paintings (see link below).
Lead to the gamut of practical insights on the use of colour in painting that distinguish my books from others on the same subjects.
An introduction to key ideas
To help readers to navigate the considerable quantity of unfamiliar science-based ideas contained in my book “Painting with Light and Colour”, I decided to preface its main content with an “Introduction to the science”, which can be obtained by clicking below.
The purpose of this Post is to make available “The nature of painting”, Chapter 3 of my book “Painting with Light and Colour”. It provides a quick run-through of some basic factors, which are so evident that some of their practical implications are too often overlooked. These are presented under four headings:
Real surface/illusory pictorial space ambiguities.
Whole-field colour/lightness interactions.
What paintings can do that nature cannot.
The human element.
All the chapters in my books have an “Introduction”. Below is the Introduction to Chapter 3. You can choose to read it now or when you click on the link to Chapter 3 that follows it.
It is difficult to imagine a more useful first guide to painting than the dogmas of Professor Marian Bohusz-Szyszko. However, they have their limits. Fortunately, as I believe the remainder of this book will make clear, it is both possible and worthwhile to go much more deeply into the reasons for both their strengths and their limitations. One approach to doing this is to trace the roots of the Professor’s assertions by reference to the work and ideas of his artist predecessors. Another is to focus on the history of science and how it illuminated the subject of picture perception. Whichever our choice, it is inevitable that there will be much overlapping. The reason is that, in the nineteenth century, a particularly high proportion of the ideas influencing the community of progressive artists were rooted in the new ways of thinking about the world we live in that were emerging from science.
To prepare the way for the combination of theory and practice which provides the subject matter of the remainder of this book, this chapter offers a first introduction to basic factors that are necessarily in play when selections of artists’ pigments, mixed with various mediums, are arranged on a circumscribed, flat picture-surface in such a way as to excite the feelings of people. The main reasons for starting with these fundamentals are that:
Taking them into consideration can help artists to achieve a surprising number of widely sought after goals.
They provide reference points and context for so much of what follows.
Their importance is too often overlooked by practicing artists.
The basic factors in question will be presented under the headings “real surface/illusory pictorial space ambiguities”, “whole-field colour/lightness interactions”, “what paintings can do that nature cannot” and “the human element”.
Continuing in the spirit of Tàpies’ game-playing approach to creativity, we find ourselves jumping sideways to false confidence and self-deception, two closely interrelated subjects of great pertinence to both artists and scientists. These I will spread over two Posts. Both can be approached via episodes in my personal history.
The first anecdote, which is on the subject of confidence, concerns a flight of fancy that popped into my head at the time I was meditating on the mysteries of recognition, and how on earth the eye/brain systems could enable it. My reverie took the form of what I came to call the “Abstraction-Hierarchy Model”. It was a simplistic conception relating to brain-system processing that will be explained in more detail in a later Post.