Searle’s definition of ‘strong AI’ can be summarised as the claim that “a machine can think simply as a result of instantiating a computer program” (Searle, p.203).
On this view, if a program can perform and respond in the same way as a human, so as to be indistinguishable from one, then running that program creates a mind – which is to say that the computer, given the right program, can think and understand. Searle calls this strong AI, in contrast with weak AI, the view that computer models are a useful way of studying the human mind. The weak version acknowledges the similarities between computers and the mind but does not accept that a computer model could ever be a mind – just a model of one.
Searle uses an illustration – the Chinese room argument – to refute strong AI. The person standing in a room in his illustration is like the computer, and this person has a basket of symbols (the ‘database’). He does not know Chinese, but is given a rule book explaining how to manipulate Chinese symbols (the ‘computer program’). He is handed Chinese symbols (the ‘questions’) by the people outside the room (the ‘programmers’), who do know Chinese, and he processes these symbols according to the rule book and hands them back (the ‘answers’). At no time does he attach any meaning to the symbols – he never ‘understands’ Chinese; he only knows how to arrange the symbols in the correct form and hand them back. To the people outside the room, however, it would appear, indistinguishably, that he understands Chinese. Searle’s illustration aims to show that the computer does not really think, and he summarises his argument as follows:
(Axiom 1) Computer programs are formal (syntactic) – i.e. they manipulate symbols without any reference to meanings.
(Axiom 2) Human minds have mental contents (semantics). When we learn to speak, we don’t just get very good at putting sounds in the right order (like memorising the Chinese rule book) – we attach meanings to the words and put them together in such a way as to express ourselves.
(Axiom 3) Searle argues that “syntax is neither constitutive of nor sufficient for semantics” and so
(Conclusion 1) “programs are neither constitutive of nor sufficient for minds” (Searle, p.206).
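The purely formal manipulation described in Axiom 1 can be sketched in a few lines of code. The ‘rule book’ below is simply a lookup table from input symbols to output symbols (the entries are hypothetical placeholders, not real Chinese); the program consults only the identities of the symbols, never their meanings – which is precisely Searle’s point.

```python
# A toy sketch of Axiom 1: the 'rule book' is a purely formal mapping
# from input symbols to output symbols. These entries are hypothetical
# placeholders standing in for Chinese characters.
RULE_BOOK = {
    "symbol-A": "symbol-X",   # rule: if handed A, hand back X
    "symbol-B": "symbol-Y",
    "symbol-C": "symbol-Z",
}

def room(question: str) -> str:
    """Return the 'answer' the rule book dictates for a 'question'.

    The function never consults any meanings -- only the shapes
    (identities) of the symbols it is handed.
    """
    return RULE_BOOK.get(question, "symbol-?")

print(room("symbol-B"))  # -> symbol-Y
```

To an observer outside, the right symbols come back; inside, nothing in the program ever touches what any symbol means.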
He further argues that computer programs merely simulate reality – for example, a computer model of the economy is not the economy itself, but just a model of it. Therefore a computer modelling a mind (and thus giving all the correct responses in, say, a Turing test) is not actually a mind, but just a model of one. Computers provide models of processes, but the processes themselves are not ‘real’ (Searle, p.210).
But what is reality? The eye, for example, is essentially a mechanism for receiving light: its receptors transform light into electrical signals, which are passed on to the brain. But the eye can be ‘fooled’ into passing on false signals – by an optical illusion, for example. What we experience is merely the brain’s interpretation of the electrical signals it receives, and is therefore no more real than the messages sent to and from a computer program. Searle speaks of a computer simulating the digestion of pizza, but this is not greatly different from our experience of eating pizza – the taste and heat are electrical signals, and if the taste nerves running to the brain were cut, we would have no sense of taste at all.
Granted, nobody thinks a computer program would actually digest anything, but the messages to the program say that it does, in the same way that the messages to our brains tell us an optical illusion is doing something it isn’t – the two cases are as ‘real’ as each other.
Searle argues that since all mental phenomena are caused by neurophysiological processes in the brain, brains cause minds (Axiom 4), and any other system capable of causing minds would have to have causal powers at least equivalent to those of brains (Conclusion 2 – Searle, p.210). He argues that running a computer program cannot produce the necessary phenomena. Wilkinson agrees because, he says, not everything is computable. He argues that having skills, or ‘knowing how’, is a part of human intelligence which cannot be reduced to ‘knowledge that’ (propositional knowledge), and that no matter how much input it is fed, a computer cannot know what information is relevant or how to apply it (Wilkinson, p.123).
I disagree that this obstacle rules out AI because, basic instincts aside, a newborn starts with little more information than a computer does. From the moment a baby opens its eyes it receives input from the world around it (and arguably this starts even before birth). The amount of input is vast – beginning with the first smells, sounds and sights, and, as the child grows older, its experiences of the world and the people around it. Everything the child knows is absorbed (or programmed) from its experience of the world.
Obviously, no computer or program that exists today has the capacity or the ability to absorb information on this scale. But Simon states that “intuition, insight and learning are no longer the exclusive possessions of humans” (H. Simon, p.120). Intel has recently released a software package which it claims helps computers ‘learn’ through advanced predictive algorithms. The software ‘helps’ the computer to better predict the outcome of certain events, and the more data the application has access to, the better its predictions become (www.extremetech.com). This casts doubt on Searle’s Axiom 3 and Conclusion 2. To assess past performance and learn from it, the computer must be doing more than simply performing a set of instructions – it must ‘know’ what the relevance of an outcome is (based on its programming and on past examples), ‘know’ that an outcome is undesirable, and ‘know’ what changes to make. The number of examples improves its accuracy, and this is not much different from the way we learn.
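The idea that accuracy grows with the number of examples can be illustrated with a deliberately minimal sketch (the weather data here is made up, and Intel’s actual software is of course far more sophisticated): a predictor that keeps counts of past outcomes and predicts whichever it has seen most often.

```python
# A minimal sketch of 'learning from examples': the predictor keeps
# counts of past outcomes and predicts the one seen most often so far.
# The data is synthetic, purely for illustration.
def make_predictor():
    counts = {}

    def observe(outcome):
        counts[outcome] = counts.get(outcome, 0) + 1

    def predict():
        # with no examples there is nothing to go on
        return max(counts, key=counts.get) if counts else None

    return observe, predict

observe, predict = make_predictor()
for outcome in ["rain"] * 70 + ["sun"] * 30:   # 100 past observations
    observe(outcome)
print(predict())  # -> rain (the majority outcome in the examples)
```

The rule the program follows is not written by the programmer; it emerges from the examples, and more examples make the estimate of the underlying pattern more reliable.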
A child starts off like a computer with an operating system – the capacity to be programmed, to absorb information and process it meaningfully, and to learn how to learn, via small basic routines common to us all, much like Intel’s toolbox. Activities such as riding a bike, which Wilkinson argues come down to skill, in fact come down to millions of pieces of information acquired by the person. We fail to ride the bike on day one because we don’t possess a full set of instructions about riding bikes – some of the information (balance, speed and so on) has to be acquired by trial and error. Having had several million years to evolve, our operating system, and thus our program, is far more advanced than that of computers, which have only been around for a few decades and are relatively primitive.
This means that simulating a human mind is not impossible – it is just not possible at the moment. An example of positive steps in this direction can be found in a study of whether a computer could tell the difference between a male and a female face. Given the right program (and sufficient examples), its accuracy in recognising faces was almost 100% – as good as a human’s. Wilkinson spends considerable time discussing the impossibility of explaining to a computer the simplest of things, like how to react to a chair (Wilkinson, p.125). But how does one describe a human face? Without a list of formal rules or a description, the computer in the experiment was somehow able to ‘learn’, absorbing millions of pieces of information which humans simply don’t have the capacity to put accurately into words. Similarly, the company Ai has created HAL, a program which is being taught to speak English just by being spoken and read to. According to the company, people reading transcripts of HAL’s conversations have been unable to tell them apart from a toddler’s (www.wired.com). This would seemingly pass the Turing test, devised by Alan Turing to sidestep the problem of what constitutes ‘thinking’, which holds that once a person cannot distinguish the conversation of a real person from that of a computer, the computer is ‘thinking’ (Crane, Audio Cassette 5).
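The face study is not specified in detail, but the principle – inducing a classification from labelled examples rather than from explicit rules – can be sketched in miniature. Here faces are reduced to made-up two-number feature vectors and labels are assigned by nearest centroid; the real study would have used images and a far richer model, but in both cases no one ever writes down a rule for what makes a face male or female.

```python
# A minimal sketch of learning a male/female classification from labelled
# examples rather than explicit rules. The 'faces' are made-up two-number
# feature vectors, purely for illustration.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    # examples: list of (features, label); learn one centroid per label
    by_label = {}
    for feats, label in examples:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, feats):
    # assign the label whose centroid is nearest (squared distance)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(model, key=lambda label: dist(model[label]))

examples = [((1.0, 2.0), "female"), ((1.2, 1.8), "female"),
            ((5.0, 6.0), "male"),   ((5.2, 5.8), "male")]
model = train(examples)
print(classify(model, (1.1, 2.1)))  # -> female
```

The ‘rule’ that separates the two classes exists only implicitly, as a by-product of the examples – which is exactly the point being made against the demand for formal descriptions.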
Dreyfus argues that to have general intelligence (like the ability to distinguish male from female faces, or to assess how to act in any given situation), a computer would have to have common-sense knowledge (Crane, Audio Tape 5). But what is common-sense knowledge? Does a baby have it? No: despite a number of uniform mechanisms common to everybody, common-sense knowledge is acquired through input. The input is not always complete and is often fragmented (and sometimes entirely flawed) – that is why we have to practise driving the car or riding the bike. But every single action we take could in theory be specified entirely without ambiguity; this would be an incredibly complex task, but not necessarily an impossible one. “All human mental attributes… are algorithmically specifiable forms of symbol manipulation” (Wilkinson, p.102).
Further, with new technology in the form of PDP (parallel distributed processing), it is possible for computers to work in the same multi-level way as a human brain. Searle argues that this does not afford a way round the Chinese room argument and is, in effect, just expanding the room into what he calls a “Chinese gym”. He believes that increasing the size of the program does not mean it will function any differently from a small version (Wilkinson, pp.108–109; Searle, p.208). More of the same, no matter how much more, will not produce understanding (Dennett, p.113). But how do we know? Firstly, we have never been able to give a program even a fraction of the capacity or ability of the human brain, so we have no idea how such a program would behave. Secondly, Trefil gives the example of a pile of sand grains which behaves unremarkably until it reaches a certain point of complexity, at which it begins to display unusual features: it has an emergent property, and we cannot know that a program will not display the same kind of emergent property, given sufficient input and capacity (Trefil/Wilkinson, p.113). Finally, we know so very little about minds that we cannot make the sweeping assumption that this is not the way to create one. We cannot really define what it is to ‘think’ (and neither, apparently, could or would Turing, since he avoided defining it, saying this would be ‘dangerous’). If a thinking being is one displaying the apparent qualities of a human being, then all we have to establish is how to replicate a human mind with a computer program.
In conclusion, Searle argues that the brain’s mind is not a computer program because a program merely manipulates symbols without attaching meaning to them, much like a simple calculator. It can, however, be argued that we already have simple AI programs which appear to imitate human behaviour, and further programs which go some way towards analysing their own behaviour and ‘learning’ from it. As thinking is subjective, the only way we can establish that this constitutes thinking is by an objective test such as the Turing test.
Further, there is nothing in human consciousness – feelings and desires – which cannot be identified and understood as a computational function, similar to anything found inside a sufficiently complex computer. In a recent documentary, Professor Winston (BBC) explained that our choice of partners is largely down to pheromones, which makes a mockery of what we know as ‘love’. Indeed, humans believe that they are in some way exclusive in their feelings, beliefs and desires, but these are summed up as ‘folk psychology’ by Churchland, who believes such mental phenomena are just theories which will cease to exist in time with scientific progress (Churchland, pp.194–201). Ultimately, while even the most powerful computers boil down to on/off switches, it seems likely that human brains have, at least in parallel, similar functionality. In strong AI, the hardware – the computer itself, or the system on which the program runs – is irrelevant; it is the program that matters. The view that it is possible to create something with functionality equivalent to the human brain therefore leads to the conclusion that the human mind is separable from the body and that the same type of mind could operate in any equivalent ‘hardware’. Therefore artificial intelligence, in what Searle calls the strong sense, is possible.
Churchland, P. M., ‘Eliminative Materialism and the Propositional Attitudes’, Journal of Philosophy, vol. 78, no. 2, Feb 1981, pp.67–90, in Wilkinson, R., Minds and Bodies (1999), Reading 6, pp.194–201, The Open University
Crane, T., Artificial Intelligence, Audio Tape 5, Side 1, Band 1 (quoting Hubert Dreyfus)
Dennett, D. C., Consciousness Explained, Allen Lane: The Penguin Press, cited in Wilkinson, R., Minds and Bodies (1999), p.113, The Open University
Newell, A. & Simon, H. A., ‘Heuristic Problem Solving: The Next Advance in Operations Research’, Operations Research, 6, pp.1–10, cited in Searle, J. R., ‘Is the Brain’s Mind a Computer Program?’ (1990), Scientific American, January, pp.20–25, in Wilkinson, R., Minds and Bodies (1999), Reading 7, p.202, The Open University
Searle, J. R., ‘Is the Brain’s Mind a Computer Program?’ (1990), Scientific American, January, pp.20–25, in Wilkinson, R., Minds and Bodies (1999), Reading 7, p.202, The Open University
Trefil, J., Are We Unique?, Wiley, cited in Wilkinson, R., Minds and Bodies (1999), p.113, The Open University
Wilkinson, R., Minds and Bodies (1999), The Open University
Information on the Intel software can be found at www.extremetech.com
Information on HAL can be found at www.wired.com
Information on Professor Winston’s documentary can be found at www.bbc.co.uk