This website provides a way of making a list of categories. The one I have created can be found at the top of the left-hand margin, written in brown typeface. The categories (in upper case) and subcategories (in lower case) are arranged in alphabetical order. The categories are: ‘Creativity’, ‘Drawing’, ‘Extracts from my books’, ‘Miscellaneous subjects’, ‘Painting’, ‘Painting School news’, ‘Science’ and ‘The Glossary’. Click on any of these to access all the posts in that category.
Experience shows that many readers find it difficult to locate specific Posts by this method. To make it easier, I have created an up-to-date ‘Contents List’, divided into five categories. Most of the material in the categories “drawing”, “painting” and “creativity” comes from my books on those subjects.
Contents list, listing the five categories and the Posts to be found within each of them:
Chapter 1 of my book “Painting with Light and Colour” told of the dogmas of Professor Bohusz-Szyszko and his claim that they were “all you need to know about painting”. It also praised their value as a practical guide.
Chapter 2 is about doubts that arose concerning their theoretical basis. It was the experience of living with these doubts that prepared me for a critical moment in my life. This came several years later, while I was reading an article in the Scientific American that had been brought to my attention by one of my colleagues in the Psychology Department at the University of Stirling. The purpose of the article was to present what its author, Edwin Land, fervently believed to be a mould-breaking understanding of the neural computations used by the eye/brain to produce the phenomenon of “colour constancy”. Actually, Gaspard Monge, a French mathematician, had beaten him to the post by nearly two hundred years. But this did not stop the contents of Land’s article from being the catalyst for the evaporation of my worries. More importantly, my efforts to better understand the significance of Land’s ideas were eventually to open the way for collaborations with colleagues in the University of Stirling Vision Group (see below).* Without their help, few of the new insights relating to the use of colour in paintings that can be found in my book would have materialised.
But this is jumping the gun. First click on the link below to access the chapter on the doubts that had haunted me and on the process of questioning they set in motion. Its function is to explain why there is a need for the new ways of thinking and doing that play such an important part in the chapters that follow.
Above and in many places in my books, I acknowledge the important role played by colleagues in the development of the new science-based ideas put forward in them. In particular, I mention collaborations with various scientists at the University of Stirling. The most important of these were:
Alistair Watson (physics, psychology and computer imagery).
Also, although Peter Brophy* did not join our group, he was an ever-available and important source of information on the biochemistry of the brain.
In the autumn of 1984, Alistair, Leslie and I took the first steps in setting up the University of Stirling Vision Group, which was to hold many meetings attended by the above-named colleagues and other members of the various interested Departments. Its starting point was a package of ideas developed by Alistair and myself, together with two core algorithms based on them, produced by Alistair. These were:
A “colour constancy algorithm”, capable of modelling both spatial and temporal colour constancy, which was inspired by our interpretation of how this phenomenon is achieved by human eye/brain systems. As a preliminary step towards this main objective, the algorithm has to pick off the information about surface reflection. Since it was obvious that the reflected light contained further information, we speculated upon how it might be used by the eye/brain. Because of my interest in picture perception, we focused on its potential for computing surface form, in-front/behind relations, and the wavelength composition of the ambient illumination.**
A “classification/recognition algorithm”, based on our interpretation of how human eye/brain systems achieve their primary task of enabling recognition.***
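Neither algorithm is specified in the text, but the central idea of colour constancy, “discounting the illuminant” so that surface colours appear stable, can be illustrated with a deliberately simplified sketch. The grey-world estimate and von Kries-style channel scaling used below are standard textbook devices, not the Stirling group’s actual method:

```python
# Illustrative only: a grey-world / von Kries-style sketch of
# "discounting the illuminant". This is NOT the Stirling Vision
# Group's colour constancy algorithm, whose details are not given
# in the text.

def grey_world(pixels):
    """Estimate the illuminant as the per-channel mean and rescale
    so that the scene average becomes a neutral grey."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3.0
    # Von Kries-style scaling: divide each channel by its estimated
    # illuminant component (here, the channel mean).
    return [tuple(p[c] * grey / means[c] for c in range(3)) for p in pixels]

# Two grey surfaces seen under a reddish illuminant: the red channel
# of every pixel is inflated by the light, not by the surfaces.
scene = [(0.9, 0.6, 0.6), (0.3, 0.2, 0.2)]
corrected = grey_world(scene)
# After correction, both pixels come out neutral (R = G = B):
# the reddish cast of the illuminant has been discounted.
```

Real colour constancy in the eye/brain is, of course, far more sophisticated, involving spatial comparisons across the whole visual field (as in Land’s Retinex work) rather than a single global average.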
We could not help being excited by the early tests of these algorithms and the speculations concerning their potential. In our enthusiasm to push matters further, Alistair suggested we should seek the help of other researchers, particularly ones with expertise in:
Mathematics and computing.
Visual perception, with special reference to visual memory.
It was at this juncture that, having decided on a name for what we were hoping would become a collaborative group, we contacted Leslie Smith for his mathematical and computing skills. But this was only a start. Once Leslie was on board, we approached Bill Phillips, whose long-standing interest in visual memory had led him to take the plunge into the recently emerging domain of neural networks and learning algorithms. After many Vision Group meetings, much sharing of ideas, many hours spent working on implementations of algorithms, and the writing of a number of working papers, we decided to submit a suite of five grant applications to the Science and Engineering Research Council, which had let it be known that it was looking for groups of researchers working on the use of computers to model the functional principles of neural systems. The stated aim of the SERC was to set up a small number of “Centres of Excellence” in this domain. Not only were two of our grant applications accepted (one submitted by Bill Phillips and one by Leslie Smith), but our university was also encouraged to create a brand-new Centre for Cognitive and Computational Neuroscience. This empire absorbed the University of Stirling Vision Group, which ceased to have an independent existence. The Centre’s creation also coincided with my departure from Stirling on my way to founding my Painting School of Montmiral, where I intended to put theory into practice both in my own work and in my teaching. I also had hopes of confirming and, with any luck, extending the theory.
* The links to Bill, Leslie, Lindsay and Peter relate to their current status. Alistair, Karel and Ranald all retired or died before the Internet became the essential information source it is today.
** My book is full of examples of how fruitful this speculation proved to be.
It is well known that the Impressionists and their immediate successors (often referred to in my books as the Early Modernists) reacted strongly against what they saw as the straitjacket of the traditional ideas taught in the academies. The purpose of this Post is to publish Chapter 4 of “Painting with Light and Colour”, which provides a short introduction to what these ideas actually were, with comments on the pros and cons of following them uncritically. Normally I write a separate introduction for my Posts, but on this occasion I have used the Introduction from the chapter itself. Accordingly, when you open the link to the chapter below, you may want to avoid reading the same thing twice.
Traditional ideas and their limitations
This chapter has four main purposes. These are to:
Introduce some traditional ideas about the depiction of space and light.
Discuss their limitations.
Suggest that there are more comprehensive and satisfactory alternatives.
Prepare the way for a better understanding of the significance of Seurat’s science and his colour based innovations.
The first of these objectives is met by elaborating on three aspects of painting which, after being explored in some depth by the Renaissance artists, became embedded in the academic tradition. Although they satisfactorily served their purpose for the artists who followed them, it was these that were found wanting by the Impressionists. More importantly in the present context, it was also these that were given a new dimension by Seurat and those who built upon his ideas. The three aspects were detailed in the last chapter.
Significantly, as we shall see, it is only with respect to the first item on the list (atmosphere) that colour of any sort was seen as having a role to play. Even then only blue was required.
In contrast, the academic rules guiding the depiction of the quality of light and shading assigned no function to colour. The practice of the Renaissance artists and the teaching of the Academies placed the emphasis exclusively on variations in “lightness” (what the English call “tone” and the Americans “value”).
The science referred to in the title of this post had a lot to do with the revolution in understanding that gave birth to what we now know as the science of “visual perception”. The first intimations that an important change was afoot came in the later part of the seventeenth century, with Isaac Newton’s work on the composition of light. However, the paradigm shift came in the late eighteenth century, when the work of Gaspard Monge and others made it clear that colour is not a property of surfaces but is made in the head. This completely new understanding of the nature of visual perception was to be fleshed out in the next century by a flood of confirmatory studies. A milestone was the publication by Hermann von Helmholtz of a three-volume review of the new domain of study. It was a magisterial achievement that showed why, despite his considerable debt to others, he has been described as the “Father of the Psychology of Perception”. The third and last of these volumes was published in 1867, just in time to have a profound influence, first on the young Impressionists and then, in the remainder of the nineteenth century and the early twentieth, on many of their Modernist successors.
The new science misrepresented
One of the purposes of “Painting with Light and Colour”, my book on the theory and practice of painting, is to provide a better account of the hugely important role of the new sciences of vision and visual perception in the history of painting. In this post I am publishing Chapter 5, which continues the process of setting the scene begun in the introduction to the science at the beginning of the book. It does so by revisiting, and shedding new light on, important aspects of colour theory. It has four objectives:
To question the widespread dissemination of half-truths and falsehoods in how-to-do-it books and articles on painting.
To sort out misconceptions about colour theory that I have found to be common amongst my students.
To show how well-known concepts are given new significance when considered in the context of the realisation that colour is not a property of surfaces but is made in the head.
To introduce other more recent ideas that will play a key role in the chapters that follow. These are likely to be unfamiliar to most people, as they are the fruit of little known, late twentieth century experimental clarifications, which enable sense to be made of formerly unsolved mysteries.
This post focuses on the revolution in painting that gathered momentum in the latter part of the nineteenth century. A key factor in its genesis was an earlier, and still ongoing, revolution in the then emerging science of visual perception (more posts on aspects of this to follow). At the core of this was an accumulation of evidence demonstrating that colour is not a property of surfaces in the external world but a construction of the eye/brain. In Chapter 6 of my book “Fresh Insights into Creativity“, I have described what occurred as “The Modernist Experiment”. The word “experiment” is used because the discoveries of science, the threat of the recently invented photograph and the challenge to well-embedded assumptions posed by the Japanese print led to:
A root and branch questioning of just about every aspect of painting.
A concerted effort to make paintings that would push forward the search for answers.
More than ever before, the thought processes and working practices of artists illustrated John Constable’s earlier, groundbreaking contention that “paintings should be regarded as experiments“.
A link to the chapter
Please click on the link below to access the chapter in question. In it you will read how the revolution in painting evolved between the 1860s, when the young Impressionists met with now-celebrated poets and writers in the Café Guerbois, Paris, and the 1960s, when an exhibition called “The Art of the Real“, at the Museum of Modern Art, New York, prepared the way for the arrival of so-called “Post-Modernism” (to be the subject of a later Post).
My Concise Oxford Dictionary defines “free-will” as “the power of acting without necessity or constraints”. A much-debated question is whether human beings have this capability. Most answers are based on easy-going introspection: “Surely it is evident that we can make up our own minds on any question, in any situation we find ourselves in?” However, over past centuries and decades, various thinkers have, for various reasons, come to the conclusion that believers in free-will deceive themselves. According to their way of thinking, it can only be an illusion: all is determined by forces outside their control.
In essence, there have been two main arguments in support of this determinism. They are:
The theoretical impossibility of mental liberty coexisting with an all-powerful deity (see the doctrine of predestination).
A belief that the neural systems that underpin human action and thought operate in a machine-like manner.
For those whose premise is the supremacy of God, free-will could only occur if the Deity were to give up power voluntarily. The argument continues that this is a step it could not take because doing so would mean cancelling out the most basic fact of its existence, namely its all-powerful nature.
For those who see brains as machines, all must be explained in terms of mechanical processes. They ask what they assume to be a rhetorical question: “How could a mere machine be endowed with free-will?” Both of these arguments can be treated as cases of special pleading, leaving fundamental questions unanswered.
Free-will as a functional reality
However, there is an intermediate possibility that depends on the notion of free-will as a functional reality. This is attractive not only because it has the advantage of overcoming the objections of those who insist on a mechanistic explanation, but also because it fits with what introspection tells us. Let me explain.
Earlier in this chapter, under the heading “modes of description”, I described my first encounter with the powers of an electron microscope, and my amazement at seeing how unrecognisable the image of the same minute portion of a leaf could be when viewed at different levels of magnification. There seemed to be absolutely nothing in common between the images. However, the specialist giving the demonstration seemed to have no difficulty in describing both their functions and the links between them.
But that was many years ago and, no matter how seemingly complete the explanations he gave at the time, by now they would have had to be revised in all sorts of ways. It could hardly be otherwise, for the relatively new and rapidly blooming science of molecular biology, aided by ever more sophisticated technology, has been revealing ever-increasing levels of complexity and creating a mushrooming of questions to ask. Accordingly, it would be surprising to find any serious scientist who currently believes that it will be possible, in anything like the near future, to arrive at a definitive description of the multiplicity of neural processes and interconnections that enable our brains not only to classify and recognise but also to learn and use motor and intellectual skills so effectively.
Computers competing with the human brain
For analogous reasons, a similar situation obtains in the field of computer-based brain-modelling. Despite all the astonishing progress that has been made in this field, computer scientists have still far to go before realising the goal of constructing a machine capable of mimicking the full extent of the intellectual and functional capacities of a human brain. Simply put, the problem is the daunting degree of interconnectivity within the brain’s neural networks. To model this, amongst other things, it would be necessary to take account of:
The estimated 100 billion neurons in the brain.
The extensive interlinking of each neuron to numerous other neurons, including those belonging to systems that provide sensory and somatosensory inputs, always by means of multitudes of neural processes.
The requirements of neurophysiological plausibility.
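The scale of the first two items can be made concrete with a piece of back-of-envelope arithmetic. The 100 billion neuron figure is the one given above; the figure of several thousand synapses per neuron is a commonly quoted rough estimate, used here only to indicate the order of magnitude:

```python
# Back-of-envelope estimate of brain connectivity.
# neurons: the ~100 billion figure cited in the text.
# synapses_per_neuron: a rough, commonly quoted estimate (assumed).
neurons = 100e9
synapses_per_neuron = 7_000
connections = neurons * synapses_per_neuron
print(f"~{connections:.0e} connections to model")  # prints ~7e+14 connections to model
```

Even at one parameter per connection, a faithful model would have to track hundreds of trillions of quantities, before the requirements of neurophysiological plausibility are even considered.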
No wonder I keep hearing computer scientists saying that the task of competing with the human brain on its own terms will remain well beyond their resources for the foreseeable future.
Characteristics of hypothetical brain modelling machines
Even if there are some computer scientists who are more optimistic, this would not be of any consequence for my explanation of functional free will, for it does not depend on the existence of actual brain-modelling machines. Rather it involves thought-experiments relating to hypothetical creations whose operational principles are based on known characteristics of the brain. Accordingly the machines will have to use a considerable number of different sensor-types, each responding to a different modality of information (light, sound, scent, taste, various kinds of pressure, etc.), feeding a vast number of extensively interlinked, mini processors (taking on the role of neurons). These would have to be capable of:
Separating out and usefully recombining relevant aspects of the sensory information extracted from the environment by means of the multiplicity of sensors with task-specific characteristics, appropriately situated in a wide range of locations (multimodal processing).
Providing contextual information derived, not only from relevant parts of long-term memory, as built up through the agency of numbers of interacting subsystems, over a lifetime of experience, but also from the totality of the current environment, as captured and interpreted by sensory-systems, taking information from all parts of the body (temporal and spatial context).
Monitoring their own behaviour, using the feedback (provided by relevant sensory systems, memory stores or, much more likely, a combination of the two) that is required by analytic processes for both consciousness and learning.
Organising and implementing actions (involving the coordination of complex muscle systems) and thought processes (motor and mind control).
Generating feeling-based criteria upon which to make choices (decision making).
Equipped in various ways with these five capacities, the brain-mimicking computers would have to be able:
To make useful syntheses of the mass of data that has been extracted from the multiple sources of sensory input, with a view to both making sense and, subsequently, enabling recognition.
To do the above in any context, no matter what the domain of description, or how many variables have to be taken into consideration.
To learn from both positive and negative feedback (particularly from mistakes, using previously acquired, task specific error-correction skills).
The machine must also be capable of making sense of:
Information derived from within the relatively easy (but nevertheless potentially fiendishly complex) domains researched by practitioners of the so-called “hard sciences”, such as mathematicians, physicists and molecular biologists.
Much less easily classified material relating to the disciplines traditionally placed under the umbrellas of the social sciences and the arts.
In short, the brain-machine envisaged in the thought experiment would have to be at ease making use of input pertaining to any realm of ideas whatsoever, however fanciful, simple-minded or far-fetched. It would also need to be capable of self-deception and of crises of confidence in its own findings.
But this is far from all. To be like the human brain, every brain machine would have to have an ever-evolving memory-store, based on a ceaseless stream of ongoing inputs and capable of creating a unique internal world (analogous to “personal experience”). Accordingly, each machine would have a ‘personalised’ reaction to each and every contingency. In addition, like Antoni Tàpies and myself, it would have to be capable of having fun with the idea of creativity, however absurd its premise.
In the light of all these requirements (and no doubt many more), it is clear that neither the computer hardware designers nor the computer programmers responsible for creating them would be able to predict the behaviour of the brain-modelling machines envisaged in our thought-experiment. Only beings, or groups of beings, equipped with capacities comparable to those of the second of the hypothesised Gods* (the one capable of preplanning everything, from the evolution of species down to the trajectory of every floating dust particle, for all eternity) would be able to unscramble an omelette of such complexity.
Moreover, even assuming that:
The self-monitoring aspect of the brain-modelling machines could be equated with consciousness.
The implied awareness of self could be programmed to incorporate both a sense of agency and a means of ranking the levels of both the credibility and the desirability of conclusions reached.
The outcome would be like human brains in the sense that the machines could only deal with an extremely limited part of the information provided by the massively complex arrays and sequences of processes involved in determining their current behaviour.
All in all, it is safe to conclude that, even if machines could be made that meet these extremely exacting and, at present, far from attainable criteria, they would be unable to perceive the mechanically and contextually determined origins of their actions or thoughts. Accordingly, assuming the self-monitoring capacity of such machines could be equated with introspection, they would have no choice but to consider themselves to be in possession of free-will.
Moreover, if all traces of determinism remain obscure to the machines themselves, how would their output appear to other, similarly constructed and programmed machines? Clearly, from the perspective of any one machine seeing itself as having free-will, all other machines that have been created and evolved in accordance with the same principles would likewise be seen as in possession of their own free wills (or, possibly, dismissed as “just machines“).
Functional free-will and experiential reality
Since all the above arguments apply to any mechanistic way of thinking, whether it is focused on hypothetical computers or on biological brains, they must also have relevance to speculations about the nature of free-will in our species. Just as no theory of the solar system or the universe, however indisputably correct, can stop us experiencing the sun as rising in the morning and setting in the evening (see the Post “Why I am a flat Earther”), so no mechanistic theory of brain function can deprive us of our sense of possessing free-will. It may be an illusion, but it is here to stay, along with the sense of self, personal feelings and motivation it provides.
Finally, a word on the future of machines that mimic human brains. Since the functional free-will argued for above is predicated upon the idea that all machines, human or electronic, evolve in idiosyncratic ways, their diversity would be ensured. Accordingly, so would their role in evolutionary processes that favour the survival of the fittest (whether as individuals, as contributing members of groups or as friends of the environment), with all their possible implications and risks.
Posts from “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”
* A reference to an earlier passage in the chapter from which this post is an extract, namely “Having Fun with Creativity”, Chapter 10 of “Fresh Perspectives on Creativity”. It consists of a not-too-serious run-through of the hypothetical choices that would have faced an all-powerful deity when sitting at his/her desk planning the Big Bang. It is scheduled to appear in a later Post.
Two earlier Posts draw attention to the historical importance of Seurat’s science-based ideas for the practice of painting light and colour. In the one on the “Venetian Colourists”, it is argued that the artists known by this label, and those who built upon their ideas, were not “colourists” at all. Rather, they were “lightists”, whose reputation as “colourists” was based on their mastery of whole-field lightness/darkness relations (“chiaroscuro“). Colour did not enter into the theory of painting light until Seurat introduced his idea of using optically-mixed arrays of separate dots of complementary pigment-colours to give a new kind of luminosity to his paintings. This step proved to be the precursor of a transformative jump from “lightists” to “colourists”. The next steps, which were taken by such artists as Cézanne, Gauguin and Bonnard, were later to inspire the synthesis of my teacher Marian Bohusz-Szyszko. It is these that provide the main subject matter of the second Post mentioned above, namely “The Dogmas”, Chapter 1 of my book “Painting with Light and Colour”. There I explain how, as well as having an abiding influence on my own painting and my teaching, they were to:
The purpose of this Post is to make available “The Nature of Painting”, Chapter 3 of my book “Painting with Light and Colour”. It provides a quick run through some basic factors which are so evident that some of their practical implications are too often overlooked. These are presented under four headings:
Real surface/illusory pictorial space ambiguities.
Whole-field colour/lightness interactions.
What paintings can do that nature cannot.
The human element.
All the chapters in my books have an “Introduction”. From now on I will be using these to introduce the Posts that provide links to the .PDF versions of them. Accordingly, you can choose to read the one for this Post either immediately below or after clicking on the capitalised link in brown that follows it. The text of these “Introductions” will be italicised.
It is difficult to imagine a more useful first guide to painting than the dogmas of Professor Marian Bohusz-Szyszko. However, they have their limits. Fortunately, as I believe the remainder of this book will make clear, it is both possible and worthwhile to go much more deeply into the reasons for both their strengths and their limitations. One approach to doing this is to trace the roots of the Professor’s assertions by reference to the work and ideas of his artist predecessors. Another is to focus on the history of science and how it has illuminated the subject of picture perception. Whichever we choose, there will inevitably be much overlap. The reason is that, in the nineteenth century, a particularly high proportion of the ideas influencing the community of progressive artists were rooted in the new ways of thinking about the world we live in that were emerging from science.
To prepare the way for the combination of theory and practice which provides the subject matter of the remainder of this book, this chapter offers a first introduction to the basic factors that are necessarily in play when selections of artists’ pigments, mixed with various mediums, are arranged on a circumscribed, flat picture-surface in such a way as to excite the feelings of people. The main reasons for starting with these fundamentals are that:
Taking them into consideration can help artists to achieve a surprising number of widely sought after goals.
They provide reference points and context for so much of what follows.
Their importance is too often overlooked by practicing artists.
The basic factors in question will be presented under the headings,“real surface/illusory pictorial space ambiguities”, “whole-field colour/lightness interactions”, “what paintings can do that nature cannot” and “the human element”.
Continuing in the spirit of Tàpies’ game-playing approach to creativity, we find ourselves jumping sideways to false confidence and self-deception, two closely interrelated subjects of great pertinence to both artists and scientists. These I will spread over two Posts. Both can be approached via episodes in my personal history.
The first anecdote, which is on the subject of confidence, concerns a flight of fancy that popped into my head at the time I was meditating on the mysteries of recognition, and how on earth the eye/brain systems could enable it. My reverie took the form of what I came to call the “Abstraction-Hierarchy Model”. It was a simplistic conception relating to brain-system processing that will be explained in more detail in a later Post.
The possibility that artists can build substantial castles on insubstantial foundations leads naturally to the subject of “self-deception”. During a recent conversation, the French artist Xavier Krebs confided that, during the process of making a painting, there sometimes comes a moment of what he described as massive self-deception. Suddenly, to his delight, the painting he is working on seems to come alive in a way that is thrilling beyond belief. The experience is extremely potent and only too real. The balloon is not pricked until the next morning, when Xavier rushes excitedly to the studio. There he finds himself confronted, not by the “masterpiece” he was expecting, but by what he now experiences as a spirit-crushing “disaster”. Nothing has changed but the artist’s experience of what he is seeing. Yet he assures me that there is no room for doubt: the scales of self-deception have dropped permanently from his eyes.
This Post is about the connection between fast drawing, learning and personal expression. It is an important subject because there seems to be a link in many people’s minds between speed and expression. Various questions arise. Perhaps the main one is whether there is any necessary connection at all. In all my books I assume that personal expression can come in a multitude of ways: fast, slow, passionate, quietly sensitive, and all gradations between these extremes. This Post concentrates on using fast drawing. At its core is “Movement, speed and memory”, Chapter 8 of my book “Drawing on Both Sides of the Brain” (see below).
The question raised by Chapter 8 relates to the widespread practice of starting life-drawing sessions with poses so short that they force fast drawing. Those who advocate this practice believe that the shortness of the poses will increase the likelihood of creativity and personal expression. In Chapter 8, I question this belief.