Sympathetic Vibratory Physics - It's a Musical Universe!
 

CONSCIOUS MACHINES

When the Robots Take Over
Is that a qualia in your circuits?
by Paul Davies

If robots could enter the Olympics, they would win hands down at weightlifting, shot put and swimming, and put in a strong performance at jumping. Nobody would be shocked. When it comes to sheer brawn, machines long ago overtook humans in ability. Since nobody could ever run like a cheetah or hug like a bear, Homo sapiens has never laid claim to any absolute record of physical achievement. But when, in the intellectual equivalent of an Olympic event, the chess grand master Garry Kasparov was beaten by the mental gymnastics of an IBM computer known as Deep Blue in 1997, a collective shudder ran through society. Physically we may be only average among animals, but humans have always prided themselves on their mental pre-eminence. It is brain power that singles out our species so definitively from the others. Now, it seems, that longstanding intellectual hegemony is threatened.

The idea that machines could think, and perhaps outsmart humans, has been around in science fiction for a long time. Isaac Asimov's 1950 classic I, Robot set the trend, and before long humanoid robots, androids and wilful computers began cropping up regularly in books and films. The subject entered the realm of real science in 1950, with a famous journal article entitled "Computing Machinery and Intelligence", in which Alan Turing, the British mathematician and war hero who helped crack the German World War II cipher machine known as Enigma, posed the question "Can machines think?" Turing is credited with the invention of the modern electronic computer. He was also a brilliant logician and visionary, who thought deeply about the nature of life and consciousness.

In the 1960s the power of computers rose dramatically and engineers began using the term "artificial intelligence", or AI, for machines that could perform certain human-like tasks. Foremost among them was the ability to process visual information and respond accordingly.
A succession of clumsy robots that could detect target objects, and even pick them up, was demonstrated to a sceptical public. A rather more demanding goal was to build a computer that could transform speech into text, or even translate from one language into another. In spite of much early hype, progress has been disappointingly slow, although in recent years there has been reasonable success. Few people would credit today's robots or computers, whatever their amazing qualities, with genuine consciousness. Rather, they are little more than ultra-fast information-processing systems that have to be cleverly programmed by human operators before they can function.

But all that may be about to change. Researchers are busy designing new forms of computing systems that more closely resemble the brain in the architecture of their circuitry. Progress with nanotechnology has ushered in the prospect of harvesting brain cells from living animals, cloning them and amalgamating them with conventional electronic devices. Meanwhile, physicists are racing towards the Holy Grail of the quantum computer, an information processor that could function at the atomic level and harness the bizarre properties of quantum physics to achieve staggering computational speed and power.

If these technological advances succeed in creating truly conscious machines, all sorts of vexing questions follow. Would a conscious machine have feelings and possess a sense of personal identity? Could it have free will? Notions of right and wrong? There are no agreed answers to these puzzles. Sceptics point out that a computer might be capable of closely imitating human actions and responses yet still be a mindless automaton, lacking any inner subjective experience. Such a device would not be sentient in the normal sense of the word. In colloquial terms, the machine might do some ingenious things, but there may be nobody at home.
To attribute humanlike consciousness to a mimicking machine might be to commit the fallacy that if it looks like a duck and quacks like a duck, then it must be a duck. How could we know that a machine experiences sensations, is aware of its surroundings, cares about its existence and so on? Turing took a rather pragmatic view of machine consciousness, believing that if a computer could imitate a person well enough to fool a human inquisitor in a sight-unseen, question-and-answer session, then there would be no more reason to deny the computer consciousness than to deny another person's consciousness. But many philosophers are uneasy about this so-called Turing Test definition, because it leaves out any discussion of what is going on inside the machine, in the mental sense.

In spite of this, the subject of consciousness has recently enjoyed something of a resurgence in scientific and philosophical circles, having been sidelined for decades as a not-quite-respectable topic for serious study. The worldwide success of books such as Daniel Dennett's Consciousness Explained and Roger Penrose's The Emperor's New Mind has provoked a flurry of new theories aimed at understanding exactly what constitutes a sentient being. These new ideas are the latest contributions to centuries of theorising on the so-called mind-body problem.

The mysterious relationship between mind and matter has fascinated philosophers for at least 2,500 years. In the Western world, most people have an image of the human person as a duality of mind and body, with the mind playing a role analogous to the driver of a vehicle, steering the body according to the dictates of the will. It is a view that was propelled to prominence by René Descartes in the 17th century. Descartes speculated that the elusive mind-stuff somehow prods the material of our brains to do its bidding, and the brain in turn controls the body. Today, few experts have a good word to say for this so-called Cartesian dualism.
Gilbert Ryle famously derided Descartes' image of the mind as "the ghost in the machine". The problem with Cartesian dualism is that it explains very little. Without having a theory of how the mind itself works, and what physical laws it obeys, extending the brain via a bridge to some elusive thing called "the mind" merely compounds one mystery with another. There is also the difficulty of how the mind, consisting of non-material entities such as thoughts, intentions and so on, can affect the material substance of the brain. Put crudely, how can thoughts move atoms around? If I decide to raise my arm, and my arm obliges, how can this chain of events be described in terms of cause and effect? Today we know that arm muscles are controlled by nerve impulses that originate in the brain, but what triggers the nerve activity in the first place? Descartes believed that the non-material mind initiates the action, and is connected to the brain via a small feature called the pineal gland. But the nature of the mind-brain coupling remained utterly mysterious and scientists have been unable to find any convincing trace of it. In the opinion of most investigators, physical processes in the brain are produced by other physical processes in the brain; there does not seem to be room for anything extra.

Dennett has been at pains to demolish another myth stemming from dualism, a misconception he dubs "the Cartesian theatre". In popular imagery, each of us possesses a self, or inner observer, who sits somewhere inside our head and watches the show presented to it by the senses. This sort of talk is, for Dennett, vacuous nonsense. He points out that the mind, or the self, is the show, not an observer of it. Following this reasoning to extremes, you and I don't really exist, we just hallucinate our existence, along with much of the external world. But this reductionistic approach leaves some uneasy questions about how and why consciousness evolved.
After all, most of what human beings do is carried out unconsciously. When I walk across the room, I don't think about putting one foot in front of another. I can ride a bicycle or drive a car while preoccupied with a scientific topic. Refined physical activities such as piano playing and tennis take place far too quickly for the strokes to enter the conscious mind before execution. Even in higher cerebral activity such as verbal conversation, we normally hit on only the general idea we wish to convey. Rarely is it necessary to consciously frame the exact words in advance of delivering them, an ability famously expressed by Gertrude Stein in the rejoinder: "How do I know what I think till I hear what I say?" From my own experience, I have found I can read a book convincingly to my children while musing on a physics problem and taking in nothing of the story myself.

The amazing capabilities of the unconscious mind are startlingly illustrated by the curious phenomenon of blindsight. Some stroke victims who have lost part of their visual field are nevertheless able to visually process a certain amount of information in that region. For example, a notice saying "Your hat is crooked", placed in the blind region, may cause the patient to adjust the hat, even though he is sure he cannot see the sign.

If humans can move, converse and even read without being directly conscious of the relevant details, why is consciousness needed at all? In terms of biological evolution, what counts is behaviour that promotes survival; that behaviour doesn't have to be conscious. A zombie that behaved just like a human being, but had no inner mental life, would be our equal in the Darwinian game. Philosophers use the term "qualia" for the experienced sensations of things. The redness of red seems starkly different from the greenness of green or the sound of a piano or the feel of running water. These distinctive qualia are the key factors that give human experience its richness.
But why do we have qualia at all? What use are they in evolutionary terms? It is easy to make a machine that can distinguish between red and green and respond accordingly, even though no machine yet built has a "redness" sensation when red is detected. In principle, a person possessing good blindsight extending across the whole visual field could navigate around, knowing what the world looks like without seeing it. What, then, is the function of "seeing red" as opposed to merely detecting the wavelength of light coming from an object?

Few clues come from studying other organisms. In a celebrated paper entitled "What is it like to be a bat?" the philosopher Thomas Nagel invited us to reflect on how the world would appear viewed through a bat's exotic sound-echo sensory system. But is it like anything to be a bat? Maybe bats' brains are just complicated machines without any bat qualia attached. It is unclear at what stage in the evolutionary path from microbes to humans consciousness in general, and qualia in particular, emerged. It seems unduly chauvinistic to assert that only humans are conscious. Monkeys seem to have a notion of self, and cats and dogs act purposively as if they know what they are doing, but they differ only in degree from rats, mice and birds. So is a mouse conscious? How about a goldfish? An ant? A bacterium? Where on the scale of biological complexity does consciousness first arise? If engineers could make a machine with ant capability (a reasonable prospect in the near future), would we attribute consciousness to the ant, but not to the machine?

As William James long ago pointed out, natural selection can act only on physical material. Organisms, not minds, succumb to the law of the jungle, so consciousness can evolve biologically only if it has a physical basis. Something going on among atoms must produce qualia. But what?
What, exactly, distinguishes a swirling electrical pattern that generates, say, the sensation of red from one that has no mental correlation at all? Neuroscientists can map what goes on in brains when subjects report certain experiences, and the time may soon come when, from a readout of brain activity, it would be possible to say, "He is feeling a pain in his toe", or suchlike. But why that particular complicated electrochemical pattern is pain when another has no experience associated with it seems not only hard to explain, but entirely outside the scope of scientific concepts. Thoughts, feelings, sensations and qualia are one sort of thing; electrical currents and chemical processes are a completely different sort of thing. How can one ever explain the other?

Most hard-nosed investigators sidestep these philosophical posers, and regard the human brain as merely an extremely elaborate machine that happens to have the property of generating consciousness, even if nobody can quite put their finger on precisely what it is in the brain that creates mental events. There seems to be no fundamental reason why an artificial device couldn't be built to do the same. In principle, we could engineer a synthetic brain from the bottom up, step by step. But rudimentary consciousness could probably be generated in a much simpler device, though whether it would resemble the present-day computer is another matter. Some researchers favour building a tangle of wires and switches similar to the neural architecture of the brain, rather than the orderly array of circuits to be found on a microchip. These systems are known, appropriately enough, as neural nets. They have the important facility that the interconnections can be repeatedly adjusted, enabling the nets to be trained to perform certain tasks better and better. Neural computers have been studied for applications as diverse as "smart" rover vehicle guidance systems and predicting the stock market.
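[Editor's note: the "repeatedly adjusted interconnections" that let a neural net be trained can be illustrated with a minimal sketch, not from the article itself: a single artificial neuron whose connection weights are nudged toward the desired output whenever it errs, here learning the logical AND function via the classic perceptron rule.]

```python
# A single artificial neuron trained by the perceptron rule to compute
# logical AND. Illustrative sketch only; real neural nets use many such
# units and smoother learning rules.

def step(x):
    """Threshold activation: the neuron fires (1) if its net input is positive."""
    return 1 if x > 0 else 0

def predict(weights, bias, inputs):
    """Weighted sum of inputs plus bias, passed through the threshold."""
    return step(sum(w * i for w, i in zip(weights, inputs)) + bias)

def train(samples, epochs=20, lr=0.1):
    """Adjust the connection weights a little toward the target on each error."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + lr * error * i for w, i in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# The AND task: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(AND)
for inputs, target in AND:
    assert predict(weights, bias, inputs) == target
```

The point of the sketch is the training loop: nothing is programmed in advance about AND; the behaviour emerges from repeated small adjustments of the connections, which is what distinguishes neural nets from conventionally programmed circuits.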
Paul Werbos, a former president of the International Neural Network Society, believes that we could build a machine with the intelligence of a mouse within a decade. Other researchers think the hardware is incidental, because consciousness is ultimately a property of function and process rather than of special stuff. Conscious machines could, they believe, be assembled from almost anything, as long as the appropriate information-processing protocols are established and the system is sufficiently complex. This "mind is software" approach is vaguely reminiscent of Cartesian dualism, since software, or information, is non-material. Where it differs is in causal efficacy: Descartes thought the mind caused the brain to do this or that; no computer engineer would say the program causes the computer's circuits to fire.

Why should we care about something as arcane and outlandish as conscious machines? AI specialists are motivated by two distinct goals. On the one hand is the mind-boggling potential of machines that could match or even outperform human intellectual achievements. Imagine highly intelligent robots replacing humans in uncomfortable or hazardous situations. Forty years of manned space flight, for example, have exposed the vulnerability of Homo sapiens to prolonged periods in space. If robots could fly on a one-way mission to Mars, and have the wit and knowledge to seek out evidence for life, the cost savings over a two-way manned expedition would be enormous. Machines might one day think profoundly enough to formulate scientific theories, compose music or literature, assess evidence in place of judges and juries, diagnose complex diseases, or run countries instead of politicians. In many cases, researchers are less concerned with the practical applications of intelligent machines than with the light that they would cast on the nature of consciousness: the deepest scientific riddle of all time.
The mind is also the central entity in all world religions and all mystical and spiritual belief systems. Progress in creating artificial minds carries challenging implications for many traditional religious ideas. Mindful of the sweeping implications of his work, Rodney Brooks, the director of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, has engaged the services of Anne Foerst, a theologian. Foerst, who is both an engineer and a Lutheran scholar, believes it is essential that we confront the question of what it means to be a person, and at what point a robot could be said to acquire personhood and rights. Although "soul" doesn't enter Foerst's vocabulary, there is no doubt that many religious people see the distinctive characteristic of a human being not in terms of qualia or a sense of self, but as arising from a soul or spirit. Such ideas are anathema to most scientists, but until they can be replaced by a new concept to dignify human existence, there will be unease among religious people about AI research.

The solution of the mind-body problem will probably come from a scientific breakthrough rather than the musings of armchair theorists. When the original Games were being held in ancient Olympia, the Greek philosophers down the road were puzzling deeply not only about the mind, but about space, time, motion and the nature of matter. All the latter topics now form part of mainstream physics. Unless consciousness represents the outer limits of scientific inquiry, it will also one day be part of physics, and that day could come sooner rather than later.

Paul Davies is a physicist and writer. His latest book is The Fifth Miracle: the search for the origin of life (Penguin).

Sydney Morning Herald, September 16, 2000

