It is well known that the Impressionists and their immediate successors (often referred to in my books as the Early Modernists) reacted strongly against what they saw as the straitjacket of the traditional ideas taught in the academies. The purpose of this Post is to publish Chapter 4 of “Painting with Light and Colour”, which provides a short introduction to what these actually were, with comments on the pros and cons of following them uncritically. Normally I write a separate introduction for my Posts, but on this occasion I have used the Introduction from the chapter itself. Accordingly, when you open the link to the chapter below, you may want to avoid reading the same thing twice.
Traditional ideas and their limitations
This chapter has four main purposes. These are to:
Introduce some traditional ideas about the depiction of space and light.
Discuss their limitations.
Suggest more comprehensive and satisfactory alternatives.
Prepare the way for a better understanding of the significance of Seurat’s science and his colour based innovations.
The first of these objectives is met by elaborating on three aspects of painting which, after being explored in some depth by the Renaissance artists, became embedded in the academic tradition. Although they served their purpose satisfactorily for the artists who followed them, it was these that were found wanting by the Impressionists. More importantly in the present context, it was also these that were given a new dimension by Seurat and those who built upon his ideas. The three aspects were detailed in the last chapter.
Significantly, as we shall see, it is only with respect to the first item on the list (atmosphere) that colour of any sort was seen as having a role to play. Even then, only blue was required.
In contrast, the academic rules guiding the depiction of the quality of light and shading assigned no function to colour. The practice of the Renaissance artists and the teaching of the Academies placed the emphasis exclusively on variations in “lightness” (what the English call “tone” and the Americans term “value”).
The science referred to in the title of this post had a lot to do with the revolution in understanding that gave birth to what we now know as the science of “visual perception”. The first intimations that an important change was afoot came in the later part of the seventeenth century with Isaac Newton’s work on the composition of light. However, the paradigm shift came in the late eighteenth century when the work of Gaspard Monge and others made it clear that colour is not a property of surfaces but is made in the head. This completely new understanding of the nature of visual perception was to be fleshed out in the next century by a flood of confirmatory studies. A milestone was the publication by Hermann von Helmholtz of a three-volume review of the new domain of study. It was a magisterial achievement that showed why, despite his considerable debt to others, he has been described as the “Father of the Psychology of Perception”. The third and last of these volumes was published in 1867, just in time to have a profound influence, first on the young Impressionists and, then, in the remainder of the nineteenth and in the early twentieth centuries, on many of their Modernist Painter successors.
The new science misrepresented
One of the purposes of “Painting with Light and Colour”, my book on the theory and practice of painting, is to provide a better account of the hugely important role of the new sciences of vision and visual perception in the history of painting. In this post I am publishing Chapter 5, which continues the process of setting the scene started in the Introduction to the science at the beginning of the book. It does so by revisiting and shedding new light on important aspects of colour theory. It has four objectives:
To question the widespread dissemination of half-truths and falsehoods in how-to-do-it books and articles on painting.
To sort out misconceptions about colour theory that I have found to be common amongst my students.
To show how well-known concepts are given new significance when considered in the context of the realisation that colour is not a property of surfaces but is made in the head.
To introduce other more recent ideas that will play a key role in the chapters that follow. These are likely to be unfamiliar to most people, as they are the fruit of little-known, late-twentieth-century experimental clarifications, which enable sense to be made of formerly unsolved mysteries.
This post focuses on the revolution in painting that gathered momentum in the latter part of the nineteenth century. A key factor in its genesis was an earlier and still ongoing revolution in the then emerging science of visual perception (more posts on aspects of this to follow). At the core of this was an accumulation of evidence that demonstrated that colour is not a property of surfaces in the external world but a construction by the eye/brain. In Chapter 6 of my book “Fresh Insights into Creativity”, I have described what occurred as “The Modernist Experiment”. The word “experiment” is used because the discoveries of science, the threat of the recently invented photograph and the challenge to well-embedded assumptions posed by the Japanese print led to:
A root and branch questioning of just about every aspect of painting.
A concerted effort to make paintings that would push forward the search for answers.
More than ever before, the thought-processes and working practices of artists illustrated the earlier groundbreaking contention of John Constable that “paintings should be regarded as experiments”.
A link to the chapter
Please click on the link below to access the chapter in question. In it you will read how the revolution in painting evolved between the 1860s, when the young Impressionists met with now celebrated poets and writers in the Café Guerbois, Paris, and the 1960s, when an exhibition called “The Art of the Real”, at the Museum of Modern Art, New York, prepared the way for the arrival of so-called “Post Modernism” (to be the subject of a later Post).
My Concise Oxford Dictionary defines “free-will” as “the power of acting without necessity or constraints”. A much debated question is whether human beings have this capability. Most answers are based on an easy-going introspection. “Surely, it is evident that we can make up our own mind on any question and in any situation we find ourselves?” However, over past centuries and decades various thinkers, for various reasons, have come to the conclusion that believers in free-will deceive themselves. According to their way of thinking, it can only be an illusion: All is determined by forces outside their control.
In essence, there have been two main arguments in support of this determinism. They are:
The theoretical impossibility of mental liberty coexisting with an all-powerful deity (see the doctrine of predestination).
A belief that the neural systems that underpin human action and thought operate in a machine-like manner.
For those whose premise is the supremacy of God, free-will could only occur if the Deity were to give up power voluntarily. The argument continues that this is a step it could not take because doing so would mean cancelling out the most basic fact of its existence, namely its all-powerful nature.
For those who see brains as machines, all must be explained in terms of mechanical processes. They ask what they assume to be a rhetorical question: “How could a mere machine be endowed with free-will?” Both of these arguments can be treated as cases of special pleading, leaving fundamental questions unanswered. As might be expected, there have been many attempts to confront these, including the suggestion that follows, which depends on the notion of free-will as a functional reality.
Free-will as a functional reality
This possibility, as outlined below, is attractive not only because it has the advantage of overcoming the objections of those who insist on a mechanistic explanation, but also because it fits with what introspection tells us. Let me explain.
Earlier in this chapter, under the heading “modes of description”, I described my first viewing of the powers of an electron microscope and my amazement at seeing how unrecognisable the image of the same minute portion of a leaf could be when viewed at different levels of magnification. There seemed to be absolutely nothing in common between the images. However, the specialist doing the demonstration seemed to have no difficulty in describing both their functions and the links between them.
But that was many years ago and, no matter how seemingly complete the explanations he gave at the time, by now they would have had to be revised in all sorts of ways. It could hardly be otherwise, for the relatively new and rapidly blooming science of molecular biology, aided by ever more sophisticated technology, has been revealing ever-increasing levels of complexity and creating a mushrooming of questions to ask. Accordingly, it would be surprising to find any serious scientist who currently believes that it will be possible, in anything like the near future, to arrive at a definitive description of the multiplicity of neural processes and interconnections that enable our brains not only to classify and recognise but also to learn and use motor and intellectual skills so effectively.
Computers competing with the human brain
For analogous reasons, a similar situation obtains in the field of computer-based brain-modelling. Despite all the astonishing progress that has been made in this field, computer scientists have still far to go before realising the goal of constructing a machine capable of mimicking the full extent of the intellectual and functional capacities of a human brain. Simply put, the problem is the daunting degree of interconnectivity within the brain’s neural networks. To model this, amongst other things, it would be necessary to take account of:
The estimated 86 billion neurons in the brain, each with an average 1,750 connections to other neurons, including those belonging to systems that are fed by both sensory and somatosensory inputs.
The requirements of neurophysiological plausibility.
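The first of these figures alone gives a feel for the scale of the task. As a back-of-envelope sketch (using only the estimates quoted above, and saying nothing about any actual model), the number of connections to be represented works out as follows:

```python
# Rough scale of the connectivity a brain model would need to represent,
# using the estimates quoted above: 86 billion neurons with an average
# of 1,750 connections each.
neurons = 86_000_000_000
connections_per_neuron = 1_750

# Counting each connection once per neuron it leaves, the total number
# of directed links is simply the product of the two figures.
total_connections = neurons * connections_per_neuron
print(total_connections)  # on the order of 1.5 * 10**14 links
```

Even before the requirements of neurophysiological plausibility are considered, then, a model would have to account for some 150 trillion links.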
No wonder I keep hearing computer scientists saying that the task of competing with the human brain on its own terms will remain well beyond their resources for the foreseeable future.*
Characteristics of hypothetical brain modelling machines
Even if there are some computer scientists who are more optimistic, this would not be of any consequence for my explanation of functional free-will, for it does not depend on the existence of actual brain-modelling machines. Rather it involves thought-experiments relating to hypothetical creations whose operational principles are based on known characteristics of the brain. Accordingly, the machines would have to use a considerable number of different sensor-types, each responding to a different modality of information (light, sound, scent, taste, various kinds of pressure, etc.), feeding a vast number of extensively interlinked mini-processors (taking on the role of neurons). These would have to be capable of:
Separating out and usefully recombining relevant aspects of the sensory information extracted from the environment by means of the multiplicity of sensors with task-specific characteristics, appropriately situated in a wide range of locations (multimodal processing).
Providing contextual information derived, not only from relevant parts of long-term memory, as built up through the agency of numbers of interacting subsystems, over a lifetime of experience, but also from the totality of the current environment, as captured and interpreted by sensory-systems, taking information from all parts of the body (temporal and spatial context).
Monitoring their own behaviour, using the feedback (provided by relevant sensory systems, memory stores or, much more likely, a combination of the two) that is required by analytic processes for both consciousness and learning.
Organising and implementing actions (involving the coordination of complex muscle systems) and thought-processes (motor and mind control).
Generating feeling-based criteria upon which to make choices (decision making).
Equipped in various ways with these five capacities, the brain-mimicking computers would have to be able:
To make useful syntheses of the mass of data that has been extracted from the multiple sources of sensory input, with a view to both making sense and, subsequently, enabling recognition.
To do the above in any context, no matter what the domain of description, or how many variables have to be taken into consideration.
To learn from both positive and negative feedback (particularly from mistakes, using previously acquired, task-specific error-correction skills).
The machine must also be capable of making sense of:
Information derived from within the relatively easy (but nevertheless potentially fiendishly complex) domains researched by practitioners of the so-called “hard sciences”, such as mathematicians, physicists and molecular biologists.
Much less easily classified material relating to the disciplines traditionally placed under the umbrellas of the social sciences and the arts.
In short, the brain-machine envisaged in the thought experiment would have to be at ease with making use of input pertaining to any realm of ideas whatsoever, however fanciful, simple-minded or far-fetched. It would also need to be capable of self-deception and crises of confidence in its own findings.
But this is far from all. To be like the human brain, every brain machine would have to have an ever-evolving memory-store, based on a ceaseless stream of ongoing inputs and capable of creating a unique internal world (analogous to “personal experience”). Accordingly, each machine would have a ‘personalised’ reaction to each and every contingency. In addition, like Antoni Tàpies and myself, it would have to be capable of having fun with the idea of creativity, however absurd its premise.
In the light of all these requirements (and no doubt many more), it is clear that neither the computer hardware designers nor the computer programmers who were responsible for creating them would be able to predict the behaviour of the brain-modelling machines envisaged in our thought-experiment. Only beings or groups of beings equipped with capacities comparable with the second of the hypothesised Gods** (the one capable of preplanning everything, from the evolution of species down to the trajectory of every floating dust particle, for all eternity) would be able to unscramble an omelette of such complexity.
Moreover, even assuming that:
The self-monitoring aspect of the brain-modelling machines could be equated with consciousness.
The implied awareness of self could be programmed to incorporate both a sense of agency and a means of ranking the levels of both the credibility and the desirability of conclusions reached.
The outcome would be machines like human brains in the sense that they could only deal with an extremely limited part of the information being provided by the massively complex arrays and sequences of processes involved in determining their current behaviour.
All in all, it is safe to conclude that, even if machines could be made that meet these extremely exacting and, at the present, far from obtainable criteria, they would be unable to perceive the mechanically and contextually determined origins of their actions or thoughts. Accordingly, assuming the self-monitoring capacity of such machines could be equated with introspection, they would have no choice but to consider themselves as being in possession of free-will.
Moreover, if all traces of determinism remain obscure to the machines themselves, how would their output appear to other, similarly constructed and programmed machines? Clearly, from the perspective of any one machine seeing itself as having free-will, all other machines that have been created and evolved in accordance with the same principles would likewise be seen as in possession of their own free wills (or, possibly, dismissed as “just machines“).
Functional free-will and experiential reality
Since all the above arguments apply to any mechanistic way of thinking, whether it is focused on hypothetical computers or biological brains, they must also have relevance to speculations about the nature of free-will in our species. Just as no theory of the solar system or the universe, however indisputably correct, can stop us experiencing the sun as rising in the morning and setting in the evening (see Post on “Why I am a flat Earther”), so no mechanistic theory of brain function can deprive us of our sense of possessing free-will. It may be an illusion, but it is with us to stay, along with the sense-of-self, personal feelings and motivation it can provide.
Finally, a word on the future of machines that mimic human brains. Since the functional free-will argued for above is predicated upon the idea that all machines, human or electronic, evolve in idiosyncratic ways, their diversity would be ensured. Accordingly, so would their role in evolutionary processes that favour the survival of the fittest (whether as individuals, as contributing members of groups or as friends of the environment), with all their possible implications and risks.
Posts from “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”
* However, the European Union is currently committing €1,200,000,000 over 10 years to “The Human Brain Project” with the stated objective of finding ways of modelling the human brain.
** A reference to an earlier passage in the chapter from which this post is an extract, namely “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”. It consists of a not-too-serious run-through of the hypothetical choices that would have faced an all-powerful deity when sitting at his/her desk planning the Big Bang. It is scheduled to appear in a later Post.
Two earlier Posts draw attention to the historical importance of Seurat’s science-based ideas on the practice of painting light and colour. In the “Venetian Colourists”, it is argued that the artists known by this label and those who built upon their ideas were not “colourists” at all. Rather they were “lightists”, whose reputation as “colourists” was based on their mastery of whole-field lightness/darkness relations (“chiaroscuro”). Colour did not enter into the theory of painting light until Seurat introduced his idea of using optically-mixed arrays of separate dots of complementary pigment-colours to give a new kind of luminosity to his paintings. This step proved to be the precursor of a transformative jump from “lightists” to “colourists”.
The next steps, which were taken by such artists as Cézanne, Gauguin and Bonnard, were later to inspire the synthesis of my teacher Marian Bohusz-Szyszko. It is these that provide the main subject matter of the second post mentioned above, namely “The Dogmas”, Chapter 1 of my book “Painting with Light and Colour”. There I explain how, as well as having an abiding influence on my own painting and my teaching, they were to:
Provide the questions that led to my scientific research into the perception of surface, space, light and harmony in paintings (see link below).
Lead to the gamut of practical insights on the use of colour in painting that distinguish my books from others on the same subjects.
An introduction to key ideas
To help readers to navigate the considerable quantity of unfamiliar science-based ideas contained in my book “Painting with Light and Colour”, I decided to preface its main content with an “Introduction to the science”, which can be obtained by clicking below.
The purpose of this Post is to make available “The Nature of Painting”, Chapter 3 of my book “Painting with Light and Colour”. It provides a quick run through of some basic factors, which are so evident that some of their practical implications are too often overlooked. These are presented under four headings:
Real surface/illusory pictorial space ambiguities.
Whole-field colour/lightness interactions.
What paintings can do that nature cannot.
The human element.
All the chapters in my books have an “Introduction”. Below is the Introduction to Chapter 3. You can choose to read it now or when you click on the link to Chapter 3 that follows it.
It is difficult to imagine a more useful first guide to painting than the dogmas of Professor Marian Bohusz-Szyszko. However, they have their limits. Fortunately, as I believe the remainder of this book will make clear, it is both possible and worthwhile to go much more deeply into the reasons for both their strengths and their limitations. One approach to doing this is to trace the roots of the Professor’s assertions by reference to the work and ideas of his artist predecessors. Another is to focus on the history of science and how it illuminated the subject of picture perception. Whichever our choice, it is inevitable that there will be much overlapping. The reason is that, in the nineteenth century, a particularly high proportion of the ideas influencing the community of progressive artists were rooted in the new ways of thinking about the world we live in that were emerging from science.
To prepare the way for the combination of theory and practice which provides the subject matter of the remainder of this book, this chapter offers a first introduction to basic factors that are necessarily in play when selections of artists’ pigments, mixed with various mediums, are arranged on a circumscribed, flat picture-surface in such a way as to excite the feelings of people. The main reasons for starting with these fundamentals are:
Taking them into consideration can help artists to achieve a surprising number of widely sought after goals.
They provide reference points and context for so much of what follows.
Their importance is too often overlooked by practicing artists.
The basic factors in question will be presented under the headings “real surface/illusory pictorial space ambiguities”, “whole-field colour/lightness interactions”, “what paintings can do that nature cannot” and “the human element”.
Continuing in the spirit of Tàpies’ game-playing approach to creativity, we find ourselves jumping sideways to false confidence and self-deception, two closely interrelated subjects of great pertinence to both artists and scientists. These I will spread over two Posts. Both can be approached via episodes in my personal history.
The first anecdote, which is on the subject of confidence, concerns a flight of fancy that popped into my head at the time I was meditating on the mysteries of recognition, and how on earth the eye/brain systems could enable it. My reverie took the form of what I came to call the “Abstraction-Hierarchy Model”. It was a simplistic conception relating to brain-system processing that will be explained in more detail in a later Post.
The possibility that artists can build substantial castles on insubstantial foundations leads naturally to the subject of “self-deception”. During a recent conversation, the French artist Xavier Krebs confided that, during the process of making a painting, there sometimes comes what he described as a moment of massive self-deception. Suddenly, to his delight, the painting that he is working on seems to come alive in a way that is thrilling beyond belief. The experience is extremely potent and only too real. The balloon is not pricked until the next morning when Xavier rushes excitedly to the studio. There he finds himself confronted, not by the “masterpiece” he was expecting, but by what he now experiences as a spirit-crushing “disaster”. Nothing has changed but the artist’s experience of what he is seeing. Yet he assures me that there is no room for doubt: The scales of self-deception have dropped permanently from his eyes.
Figure 1 : The “Big Bang” (NB. black is a colour sensation made by the eye/brain)
Usually the word “chaos” has the connotation of referring to something rather grand and all-engulfing, as in the case of the chaos created by the Big Bang. However, in my books, great emphasis has been placed on a much less spectacular manifestation of it: the mini-chaos that occurs when anyone, including artists drawing from observation, makes a comparison. This is because the same/different judgments required can hardly avoid revealing unpredictable differences. Let me elaborate:
My dictionary defines chaos as “a state in which no order can be perceived”. Clearly, this phrase can be applied to any differences discovered by same/difference judgments, since, at least for the time being, these have no property other than that of being different. Logically, this is equivalent to saying that they can have no order. The only way that they can be given a place in an ordered description is by relating them to something else. It follows that all comparisons, except the rare ones that find no differences, will bring a mini-chaos in their train.
Figure 2 : You will experience a mini-chaos if you catch your attention being drawn to a difference
Since, by definition, lack of order cannot constitute “sense”, the mini-chaos produced by any comparison that reveals differences will confront the sense-seeking eye/brain systems with the problem of finding some form of coherence: something that can only be found by looking at it either in a different way or in relation to something else. As explained in my book “What the Scientists can Learn from Artists”, the solution requires either the use of analytic looking systems or a transition to other levels or modes of description, where the meaningless difference no longer exists. Speaking extremely generally, this change of levels, which might also be described as a “transformation of context”, can be achieved in two ways, either by going up the system or by going down it.
Going up accesses higher and cruder levels of description, where details are ignored. Doing so provides a filtering out process that, if allowed to run its course, will ultimately lead to the same seemingly banal outcome, namely that all objects will be perceived by the eye/brain as being identical, undifferentiated lumps, devoid of transformational context. At first sight this might seem a pretty useless outcome but, as explained below, this is far from the case.
Going down means focusing attention on progressively lower levels of description, a procedure which must eventually lead to the rediscovery of the basic building-blocks of appearances (the visual primitives or their equivalents in other domains of description). Again, the tendency can only be for the system to move in the direction of discovering that all objects are made from a small number of similar components. Looked at in this way, the process is analogous to the physicist discovering that all matter is made up of the same bunch of subatomic particles.
What we learn from the search for sameness
As suggested above, it would be easy to suppose that a process which leads to the discovery that everything is the same as everything else is a bit pointless. But for two reasons this is very far from the case:
Firstly, the fact of everything being the same, means that everything will be familiar and, accordingly, capable of setting in motion the eye/brain’s analytic-looking systems that are used for dealing with familiarity.
Secondly, the eye/brain only arrives at its uninteresting conclusion by means of a sequence of steps that provide a priceless outcome, namely a hierarchy of connections that link a number of formerly disparate “sames”. It can be described as “priceless” because, taken together, the resulting assemblage of links can constitute a powerful characterization of the object concerned. As explained in “What the Scientists can Learn from Artists”, it is such information-containing assemblages that underpin the brain’s capacity for description-building in all the domains of its activity. It is the basis of skill acquisition, including the skills required for thought.
An analogy with mathematics
The value of having a mechanism for searching for that which is the same has something in common with the value of the equation in mathematics. This becomes apparent when we realise that this most basic of mathematical tools is predicated upon its capacity for producing the essentially uninteresting conclusion that two things (the two sides of the equation) that at first sight look different are in fact identical. However, the tautological nature of this outcome is only discovered by a series of procedures which, over and over again, have proved their power for generating extremely useful information and highly significant insights.
In sum, the very fact that the eye/brain automatically filters out differences in the search for samenesses locks its processing systems into activity that is capable of harnessing chaos, and it is the manner in which it does so that enables it to become the engine of creative description-building.
Other extracts from “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”
This Post discusses the relationship between fast drawing, learning and personal expression. It is an important subject because there seems to be a connection in many people’s minds between speed and expression. Various questions arise. The most basic one is whether there is any necessary connection at all.
In all my books I assume that personal expression can come in a multitude of ways: fast, slow, passionate, quietly sensitive, and all gradations between these extremes. This Post concentrates on the use of fast drawing. The main arguments are found in Chapter 8 of my book, “Drawing on Both Sides of the Brain”.
The questions raised in Chapter 8 provide a means of taking a critical look at the widespread practice of starting life-drawing sessions with poses that are so short that they force fast drawing. Those who advocate this practice believe that the shortness of the poses will increase the likelihood of creativity and personal expression. In Chapter 8, I question this belief.
Chapter 8 and subsequent chapters between them explain how to use accuracy as a means of enhancing information pickup speed and, thereby, to learn to draw faster, with more authority and in ways that foster personal expression.
NB. In the chapter, reference is made both to illustrations found in a later chapter of the book, and to texts in another book in the series. As neither of these is as yet available to the reader, I have added them below.
Texts and illustrations referred to in Chapter 8
Four drawings of pollarded trees on the esplanade, Castelnau de Montmiral. They were made in 3 hours, 30 minutes, 10 minutes and from memory respectively. They are extracted from Chapter 11 of “Drawing on Both Sides of the Brain”.
A computer-controlled experiment: an extract from Chapter 8 of “What Scientists can Learn from Artists”:
This extract comprises a summary of ideas coming from the main experiments and how they led to the computer controlled experiment which showed that preparatory looking helped rapidity of information pick-up:
“These ideas were amongst those that we had in mind when we came to consider the results from the main experiments. In particular, they influenced our thinking when we reflected upon the revelations of the video-tape record. One result was a hypothesis that needed testing. The argument that gave rise to this depended on a variety of factors. If both comparison and the organisation of actions disrupt aspects of visual-memory, then copying must require a longer-term memory-store to guide a coordinated and efficient looking strategy. The superior performance of the skilled adults for drawing familiar objects from memory indicated that this function could be performed by long-term memory. However, what about unfamiliar objects or the complex curves which describe the ever-changing shapes of familiar ones? As suggested above, efficient visual analysis of these might require the creation of a purpose-specific memory store, structured with the help of longer looks, such as those recorded on the videotape. Thus, our hypothesis was that the function of the longer looks is to create a memory store containing knowledge of what to look for later. The advantage would be reaped in terms of the pick-up efficiency of the inter-saccadic glances. Given that the time taken for each of these is fixed, it follows that the learning process enables more information to be picked up in the same time. Such a feat could only be achieved if appropriate, purpose-specific memory structures had been created.
The computer-controlled experiment in question was used to test these ideas. A sequence of different two-line RSL models was displayed on a computer screen. At a given time after a model appeared, one of the two lines disappeared and the subjects were asked to copy the one that remained. The time before the disappearance was either one-third of a second or five seconds. When the subjects had completed drawing the visible line, they pressed a button which caused the second line to reappear for either one-third of a second (allowing time for one glance) or two-thirds of a second (allowing time for two glances). The question was whether the information collected in the five-second preliminary look would lead to better pick-up of information by the final glance or glances. The answer was a clear ‘yes’. Without the preliminary five-second look, the subjects were all-over-the-place when doing their best to copy the second line, whereas with it, they performed almost as well as if the image was there in front of their eyes.
This result gave strong support to the hypothesis that temporary knowledge, acquired as a result of appropriately organised looking behaviour, could play a vital role in achieving copying accuracy.”
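For readers who find a schematic helpful, the 2 × 2 design described in the extract can be restated as a simple condition grid. This is a sketch only: the variable names are mine, and the timings merely repeat those given in the text; it is not the original experimental code.

```python
from itertools import product

# The two factors of the design, as described in the extract:
# how long the full two-line model was visible before one line vanished,
# and how long the second line reappeared for after the button press.
preliminary_look_s = (1 / 3, 5.0)   # short vs long preliminary look
reappearance_s = (1 / 3, 2 / 3)     # time for one glance vs two glances

# Cross the two factors to enumerate all four trial conditions.
conditions = [
    {
        "preliminary_look_s": look,
        "reappearance_s": reappear,
        "glances_allowed": 1 if reappear < 0.5 else 2,
    }
    for look, reappear in product(preliminary_look_s, reappearance_s)
]

for c in conditions:
    print(c)
```

The question, restated in these terms, was whether accuracy in the two long-preliminary-look conditions would approach that achieved with the model continuously in view, which is what the result reported above confirmed.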
Chapters from “Drawing on Both Sides of the Brain”.