This post focuses on the revolution in painting that gathered momentum in the latter part of the nineteenth century. A key factor in its genesis was an earlier and still ongoing revolution in the then emerging science of visual perception (more posts on aspects of this to follow). At the core of this was an accumulation of evidence demonstrating that colour is not a property of surfaces in the external world but a construction by the eye/brain. In Chapter 6 of my book “Fresh Insights into Creativity”, I have described what occurred as “The Modernist Experiment”. The word “experiment” is used because the discoveries of science, the threat of the recently invented photograph and the challenge to well-embedded assumptions posed by the Japanese print, led to:
A root and branch questioning of just about every aspect of painting.
A concerted effort to make paintings that would push forward the search for answers.
More than ever before, the thought-processes and working practice of artists illustrated the earlier groundbreaking contention of John Constable that “paintings should be regarded as experiments”.
A link to the chapter
Please click on the link below to access the chapter in question. In it you will read how the revolution in painting evolved between the 1860s, when the young Impressionists met with now celebrated poets and writers in the Café Guerbois, Paris, and the 1960s, when an exhibition called “The Art of the Real”, at the Museum of Modern Art, New York, prepared the way for the arrival of so-called “Post Modernism” (to be the subject of a later Post).
My Concise Oxford Dictionary defines “free-will” as “the power of acting without necessity or constraints”. A much debated question is whether human beings have this capability. Most answers are based on easy-going introspection: “Surely, it is evident that we can make up our own minds on any question and in any situation we find ourselves?” However, over past centuries and decades, various thinkers, for various reasons, have come to the conclusion that believers in free-will deceive themselves. According to their way of thinking, it can only be an illusion: all is determined by forces outside their control.
In essence, there have been two main arguments in support of this determinism. They are:
The theoretical impossibility of mental liberty coexisting with an all-powerful deity (see the doctrine of predestination).
A belief that the neural systems that underpin human action and thought operate in a machine-like manner.
For those whose premise is the supremacy of God, free-will could only occur if the Deity were to give up power voluntarily. The argument continues that this is a step it could not take because doing so would mean cancelling out the most basic fact of its existence, namely its all-powerful nature.
For those who see brains as machines, all must be explained in terms of mechanical processes. They ask what they assume to be a rhetorical question: “How could a mere machine be endowed with free-will?” Both of these arguments can be treated as cases of special pleading, leaving fundamental questions unanswered. As might be expected, there have been many attempts to confront these, including the suggestion that follows, which depends on the notion of free-will as a functional reality.
Free-will as a functional reality
This possibility, as outlined below, is attractive not only because it has the advantage of overcoming the objections of those who insist on a mechanistic explanation, but also because it fits with what introspection tells us. Let me explain.
Earlier in this chapter, under the heading “modes of description”, I described my first viewing of the powers of an electron microscope and my amazement at how unrecognisable images of the same minute portion of a leaf could be when viewed at different levels of magnification. There seemed to be absolutely nothing in common between them. However, the specialist giving the demonstration seemed to have no difficulty in describing both their functions and the links between them.
But that was many years ago and, no matter how seemingly complete the explanations he gave at the time, by now they would have had to be revised in all sorts of ways. It could hardly be otherwise, for the relatively new and rapidly blooming science of molecular biology, aided by ever more sophisticated technology, has been revealing ever-increasing levels of complexity and creating a mushrooming of questions to ask. Accordingly, it would be surprising to find any serious scientist who currently believes that it will be possible, in anything like the near future, to arrive at a definitive description of the multiplicity of neural processes and interconnections that enable our brains not only to classify and recognise but also to learn and use motor and intellectual skills so effectively.
Computers competing with the human brain
For analogous reasons, a similar situation obtains in the field of computer-based brain-modelling. Despite all the astonishing progress that has been made in this field, computer scientists have still far to go before realising the goal of constructing a machine capable of mimicking the full extent of the intellectual and functional capacities of a human brain. Simply put, the problem is the daunting degree of interconnectivity within the brain’s neural networks. To model this, amongst other things, it would be necessary to take account of:
The estimated 86 billion neurons in the brain, each with an average 1,750 connections to other neurons, including those belonging to systems that are fed by both sensory and somatosensory inputs.
The requirements of neurophysiological plausibility.
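The scale implied by the first of these figures can be illustrated with a back-of-envelope calculation (a minimal sketch using only the numbers quoted above; the exact per-neuron connection count varies widely between estimates):

```python
# Rough estimate of the number of connections a brain-modelling
# machine would need to represent, using the figures quoted above.
NEURONS = 86_000_000_000            # estimated neurons in the human brain
CONNECTIONS_PER_NEURON = 1_750      # average connections per neuron (as quoted)

total_connections = NEURONS * CONNECTIONS_PER_NEURON
print(f"Approximate number of connections: {total_connections:.2e}")
# On the order of 1.5e14, i.e. roughly 150 trillion connections,
# before any account is taken of neurophysiological plausibility.
```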
No wonder I keep hearing computer scientists saying that the task of competing with the human brain on its own terms will remain well beyond their resources for the foreseeable future.*
Characteristics of hypothetical brain modelling machines
Even if there are some computer scientists who are more optimistic, this would not be of any consequence for my explanation of functional free-will, for it does not depend on the existence of actual brain-modelling machines. Rather, it involves thought-experiments relating to hypothetical creations whose operational principles are based on known characteristics of the brain. Accordingly, the machines would have to use a considerable number of different sensor-types, each responding to a different modality of information (light, sound, scent, taste, various kinds of pressure, etc.), feeding a vast number of extensively interlinked mini-processors (taking on the role of neurons). These would have to be capable of:
Separating out and usefully recombining relevant aspects of the sensory information extracted from the environment by means of the multiplicity of sensors with task-specific characteristics, appropriately situated in a wide range of locations (multimodal processing).
Providing contextual information derived, not only from relevant parts of long-term memory, as built up through the agency of numbers of interacting subsystems, over a lifetime of experience, but also from the totality of the current environment, as captured and interpreted by sensory-systems, taking information from all parts of the body (temporal and spatial context).
Monitoring their own behaviour, using the feedback (provided by relevant sensory systems, memory stores or, much more likely, a combination of the two) that is required by analytic processes for both consciousness and learning.
Organising and implementing actions (involving the coordination of complex muscle systems) and thought-processes (motor and mind control).
Generating feeling-based criteria upon which to make choices (decision making).
Equipped in various ways with these five capacities, the brain-mimicking computers would have to be able:
To make useful syntheses of the mass of data extracted from the multiple sources of sensory input, with a view both to making sense of it and, subsequently, to enabling recognition.
To do the above in any context, no matter what the domain of description, or how many variables have to be taken into consideration.
To learn from both positive and negative feedback (particularly from mistakes, using previously acquired, task specific error-correction skills).
The machines must also be capable of making sense of:
Information derived from within the relatively easy (but nevertheless potentially fiendishly complex) domains researched by practitioners of the so-called “hard sciences”, such as mathematicians, physicists and molecular biologists.
Much less easily classified material relating to the disciplines traditionally placed under the umbrellas of the social sciences and the arts.
In short, the brain-machine envisaged in the thought experiment would have to be at ease with making use of input pertaining to any realm of ideas whatsoever, however fanciful, simple-minded or far-fetched. It would also need to be capable of self-deception and crises of confidence in its own findings.
But this is far from all. To be like the human brain, every brain machine would have to have an ever-evolving memory-store, based on a ceaseless stream of ongoing inputs and capable of creating a unique internal world (analogous to “personal experience”). Accordingly, each machine would have a ‘personalised’ reaction to each and every contingency. In addition, like Antoni Tàpies and myself, it would have to be capable of having fun with the idea of creativity, however absurd its premise.
In the light of all these requirements (and no doubt many more), it is clear that neither the computer hardware designers nor the computer programmers responsible for creating them would be able to predict the behaviour of the brain-modelling machines envisaged in our thought-experiment. Only beings or groups of beings equipped with capacities comparable with the second of the hypothesised Gods** (the one capable of preplanning everything, from the evolution of species down to the trajectory of every floating dust particle, for all eternity) would be able to unscramble an omelette of such complexity.
Moreover, even assuming that:
The self-monitoring aspect of the brain-modeling machines could be equated with consciousness.
The implied awareness of self could be programmed to incorporate both a sense of agency and a means of ranking the levels of both the credibility and the desirability of conclusions reached.
The outcome would be machines like human brains in the sense that they could deal with only an extremely limited part of the information being provided by the massively complex arrays and sequences of processes involved in determining their current behaviour.
All in all, it is safe to conclude that, even if machines could be made that meet these extremely exacting and, at present, far from attainable criteria, they would be unable to perceive the mechanically and contextually determined origins of their actions or thoughts. Accordingly, assuming the self-monitoring capacity of such machines could be equated with introspection, they would have no choice but to consider themselves as being in possession of free-will.
Moreover, if all traces of determinism remain obscure to the machines themselves, how would their output appear to other, similarly constructed and programmed machines? Clearly, from the perspective of any one machine seeing itself as having free-will, all other machines that have been created and evolved in accordance with the same principles would likewise be seen as in possession of their own free wills (or, possibly, dismissed as “just machines“).
Functional free-will and experiential reality
Since all the above arguments apply to any mechanistic way of thinking, whether it is focused on hypothetical computers or biological brains, they must also have relevance to speculations about the nature of free-will in our species. Just as no theory of the solar system or the universe, however indisputably correct, can stop us experiencing the sun as rising in the morning and setting in the evening (see the Post on “Why I am a flat Earther”), so no mechanistic theory of brain function can deprive us of our sense of possessing free-will. It may be an illusion, but it is with us to stay, along with the sense of self, personal feelings and motivation it can provide.
Finally, a word on the future of machines that mimic human brains. Since the functional free-will argued for above is predicated upon the idea that all machines, human or electronic, evolve in idiosyncratic ways, their diversity would be ensured. Accordingly, so would be their role in evolutionary processes that favour the survival of the fittest (whether as individuals, as contributing members of groups or as friends of the environment), with all their possible implications and risks.
Posts from “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”
* However, the European Union is currently committing €1,200,000,000 over 10 years to “The Human Brain Project” with the stated objective of finding ways of modeling the human brain.
** A reference to an earlier passage in the chapter from which this post is an extract, namely, “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”. It consists of a not too serious run-through of the hypothetical choices that would have faced an all-powerful deity when sitting at his/her desk planning the Big Bang. It is scheduled to appear in a later Post.
Two earlier Posts draw attention to the historical importance of Seurat’s science-based ideas on the practice of painting light and colour. In the “Venetian Colourists”, it is argued that the artists known by this label and those who built upon their ideas were not “colourists” at all. Rather they were “lightists”, whose reputation as “colourists” was based on their mastery of whole-field lightness/darkness relations (“chiaroscuro”). Colour did not enter into the theory of painting light until Seurat introduced his idea of using optically-mixed arrays of separate dots of complementary pigment-colours to give a new kind of luminosity to his paintings. This step proved to be the precursor of a transformative jump from “lightists” to “colourists”.
The next steps, which were taken by such artists as Cézanne, Gauguin and Bonnard, were later to inspire the synthesis of my teacher Marian Bohusz-Szyszko. It is these that provide the main subject matter of the second post mentioned above, namely “The Dogmas”, Chapter 1 of my book “Painting with Light and Colour”. There I explain how, as well as having an abiding influence on my own painting and my teaching, they were to:
Provide the questions that led to my scientific research into the perception of surface, space, light and harmony in paintings (see link below).
Lead to the gamut of practical insights on the use of colour in painting that distinguish my books from others on the same subjects.
An introduction to key ideas
To help readers to navigate the considerable quantity of unfamiliar science-based ideas contained in my book “Painting with Light and Colour”, I decided to preface its main content with an “Introduction to the science”, which can be obtained by clicking below.
The purpose of this Post is to make available “The Nature of Painting”, Chapter 3 of my book “Painting with Light and Colour”. It provides a quick run-through of some basic factors, which are so evident that some of their practical implications are too often overlooked. These are presented under four headings:
Real surface/illusory pictorial space ambiguities.
Whole-field colour/lightness interactions.
What paintings can do that nature cannot.
The human element.
All the chapters in my books have an “Introduction”. Below is the Introduction to Chapter 3. You can choose to read it now or when you click on the link to Chapter 3 that follows it.
It is difficult to imagine a more useful first guide to painting than the dogmas of Professor Marian Bohusz-Szyszko. However, they have their limits. Fortunately, as I believe the remainder of this book will make clear, it is both possible and worthwhile to go much more deeply into the reasons for both their strengths and their limitations. One approach to doing this is to trace the roots of the Professor’s assertions by reference to the work and ideas of his artist predecessors. Another is to focus on the history of science and how it illuminated the subject of picture perception. Whichever we choose, it is inevitable that there will be much overlapping. The reason is that, in the nineteenth century, a particularly high proportion of the ideas influencing the community of progressive artists were rooted in the new ways of thinking about the world we live in that were emerging from science.
To prepare the way for the combination of theory and practice which provides the subject matter of the remainder of this book, this chapter offers a first introduction to basic factors that are necessarily in play when selections of artists’ pigments, mixed with various mediums, are arranged on a circumscribed, flat picture-surface in such a way as to excite the feelings of people. The main reasons for starting with these fundamentals are that:
Taking them into consideration can help artists to achieve a surprising number of widely sought after goals.
They provide reference points and context for so much of what follows.
Their importance is too often overlooked by practicing artists.
The basic factors in question will be presented under the headings,“real surface/illusory pictorial space ambiguities”, “whole-field colour/lightness interactions”, “what paintings can do that nature cannot” and “the human element”.
Continuing in the spirit of Tàpies’ game-playing approach to creativity, we find ourselves jumping sideways to false confidence and self-deception, two closely interrelated subjects of great pertinence to both artists and scientists. These I will spread over two Posts. Both can be approached via episodes in my personal history.
The first anecdote, which is on the subject of confidence, concerns a flight of fancy that popped into my head at the time I was meditating on the mysteries of recognition, and how on earth the eye/brain systems could enable it. My reverie took the form of what I came to call the “Abstraction-Hierarchy Model”. It was a simplistic conception relating to brain-system processing that will be explained in more detail in a later Post.
The possibility that artists can build substantial castles on insubstantial foundations leads naturally to the subject of “self-deception”. During a recent conversation, the French artist Xavier Krebs confided that, during the process of making a painting, there sometimes comes what he described as a moment of massive self-deception. Suddenly, to his delight, the painting he is working on seems to come alive in a way that is thrilling beyond belief. The experience is extremely potent and only too real. The balloon is not pricked until the next morning, when Xavier rushes excitedly to the studio. There he finds himself confronted, not by the “masterpiece” he was expecting, but by what he now experiences as a spirit-crushing “disaster”. Nothing has changed but the artist’s experience of what he is seeing. Yet he assures me that there is no room for doubt: the scales of self-deception have dropped permanently from his eyes.
This Post discusses the relationship between fast drawing, learning and personal expression. It is an important subject because there seems to be a connection in many people’s minds between speed and expression. Various questions arise. The most basic one is whether there is any necessary connection at all.
In all my books I assume that personal expression can come in a multitude of ways: fast, slow, passionate, quietly sensitive, and all gradations between these extremes. This Post concentrates on the use of fast drawing. The main arguments are found in Chapter 8 of my book, “Drawing on Both Sides of the Brain”.
The questions raised in Chapter 8 provide a means of taking a critical look at the widespread practice of starting life-drawing sessions with poses that are so short that they force fast drawing. Those who advocate this practice believe that the shortness of the poses will increase the likelihood of creativity and personal expression. In Chapter 8, I question this belief.
Between them, Chapter 8 and the subsequent chapters explain how to use accuracy as a means of enhancing the speed of information pickup and, thereby, how to learn to draw faster, with more authority and in ways that foster personal expression.
NB. In the chapter, reference is made both to illustrations found in a later chapter of the book, and to texts in another book in the series. As neither of these is as yet available to the reader, I have added them below.
Texts and illustrations referred to in Chapter 8
Four drawings of pollarded trees on the esplanade, Castelnau de Montmiral. They were made in 3 hours, 30 minutes, 10 minutes and from memory respectively. They are extracted from Chapter 11 of “Drawing on Both Sides of the Brain”.
A computer controlled experiment: an extract from Chapter 8 of “What Scientists can Learn from Artists”:
This extract comprises a summary of ideas coming from the main experiments and how they led to the computer controlled experiment which showed that preparatory looking helped rapidity of information pick-up:
“These ideas were amongst those that we had in mind when we came to consider the results from the main experiments. In particular they influenced our thought when we reflected upon the revelations of the video-tape record. One result was a hypothesis that needed testing. The argument that gave rise to this depended on a variety of factors. If both comparison and the organisation of actions disrupt aspects of visual-memory, then copying must require a longer-term memory-store to guide a coordinated and efficient looking strategy. The superior performance of the skilled adults for drawing familiar objects from memory indicated that this function could be performed by long term memory. However, what about unfamiliar objects or the complex curves which describe the ever changing shapes of familiar ones? As suggested above, efficient visual analysis of these might require the creation of a purpose-specific memory store, structured with the help of longer looks, such as those recorded on the videotape. Thus, our hypothesis was that the function of the longer looks is to create a memory store containing knowledge of what to look for later. The advantage would be reaped in terms of the pick-up efficiency of the inter-saccadic glances. Given that time taken for each of these is fixed, it follows that the learning process enables more information to be picked up in the same time. Such a feat could only be achieved if appropriate, purpose specific memory structures had been created.
The computer-controlled experiment in question was used to test these ideas. A sequence of different two-line RSL models was displayed on a computer screen. At a given time after a model appeared, one of the two lines disappeared and the subjects were asked to copy the one that remained. The time before the disappearance was either one-third of a second or five seconds. When the subjects had completed drawing the visible line, they pressed a button which caused the second line to reappear for either one-third of a second (allowing time for one glance) or two-thirds of a second (allowing time for two glances). The question was whether the information collected in the five-second preliminary look would lead to better pick-up of information by the final glance or glances. The answer was a clear ‘yes’. Without the preliminary five-second look, the subjects were all-over-the-place when doing their best to copy the second line, whereas with it, they performed almost as well as if the image was there in front of their eyes.
This result gave strong support to the hypothesis that temporary knowledge, acquired as a result of appropriately organised looking behaviour, could play a vital role in achieving copying accuracy.”
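The design of the experiment quoted above can be summarised as a small sketch. This is my own illustrative reconstruction of the trial conditions, not code from the original study; the variable names and the glance-count rule are assumptions based on the description:

```python
from itertools import product

# The two timing variables described in the extract, in seconds.
preview_times = [1 / 3, 5.0]    # time the full two-line model is visible
reappear_times = [1 / 3, 2 / 3]  # time the second line reappears after drawing

# Enumerate the four trial conditions of the 2 x 2 design.
# Per the extract, a 1/3 s reappearance allows one glance, 2/3 s allows two.
for preview, reappear in product(preview_times, reappear_times):
    glances = 1 if reappear < 0.5 else 2
    print(f"preview {preview:.2f}s, reappearance {reappear:.2f}s "
          f"({glances} glance(s))")
```

The reported finding was that performance in the long-preview conditions approached that of copying with the model in full view, supporting the hypothesis that the preliminary look builds a purpose-specific memory store.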
Other posts including chapters from “Drawing on Both Sides of the Brain”.
I have met many people who think that copying photographs is somehow cheating. Certainly it can be used as an easy way of sidestepping the challenges (and opportunities) provided by copying directly from nature. But this does not mean that it can never be justified.
The main purpose of this Post is to publish Chapter 7 of my book “Drawing on Both Sides of the Brain”, which discusses the advantages and disadvantages of copying small, static, two-dimensional photographic images, as compared with confronting the full force of nature, in all its dimensions. Its conclusion is that both possibilities have their place. Rather than condemning the practice of copying photographs out of hand, artists might be well advised to work out what is the best option in the circumstances of the moment.
The chapter also considers an earlier and, for many years, much used memory-based alternative to copying photographic images.
CLAM is an acronym for “continuously looking at the model”. It describes a teaching method, suggested by Kimon Nicolaïdes and popularised by Betty Edwards. However, these authors describe it as “contour drawing”.
Since 1941, when Nicolaïdes’ book “The Natural Way to Draw” was published posthumously and began its life as the most influential book on drawing of the twentieth century, his method has proved its value as a powerful teaching tool. However, in addition to its well-established advantages, the way Nicolaïdes and Edwards taught it has significant disadvantages. Chapter 6 of my book “Drawing on Both Sides of the Brain” explains both the strengths and the limitations of the method.
Why avoid talking of “negative spaces” or “negative shapes”?
The title of Chapter 6 of my book “Drawing on Both Sides of the Brain” is “Negative Shapes”. Some people may be surprised to find that I question the widespread use by art teachers of the phrase “negative shapes” and of its equivalent, “negative spaces”. After explaining the reasons for the popularity of its use as a means of bypassing the problems due to familiarity, I argue that it has significant shortcomings. In the light of these, I suggest that there are alternatives which avoid its disadvantages without relinquishing any of its advantages. Perhaps more importantly, these provide better ways of using drawing from observation as a tool for discovering the unique characteristics of objects in the world around us.