Artificial intelligence is our future, and many modern philosophers, including Nick Bostrom and Calum Chace, argue that this new generation of machines could arrive any time now. However, a prominent question runs through all of their arguments and discussions: what form of artificial intelligence is possible? This leads me to my research question: to what extent are strong artificial intelligence and Nick Bostrom’s concept of superintelligence possible? Three forms of artificial intelligence are predicted in current technological discussions: weak, strong and super intelligence. Weak artificial intelligence is defined as a machine that can be intelligent and solve problems but does not have full consciousness, whereas strong artificial intelligence is fully conscious. This is arguably the most important question facing our generation, for if strong artificial intelligence is possible and such machines are able to function just as humans do, we must consider what new morals and rights we should adopt for these computers, and question whether we should therefore treat them with the same responsibility and appreciation.

In “Superintelligence: Paths, Dangers, Strategies”, Nick Bostrom defines superintelligence as ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’. Intellect in this context refers to ‘the faculty of reasoning and understanding objectively, especially with regard to abstract matters.’ This sense of intelligence already seems applicable to many of the artificial intelligences we have now, and I believe they will eventually become more advanced than humans in this respect. However, does Bostrom’s reference to ‘cognitive performance’ cover only physical and behavioural attributes? Is it plausible to say that cognitive performance includes consciousness, and if so, to what extent can consciousness be reproduced in a non-natural being?
These are the issues confronting Nick Bostrom’s concept of superintelligence. In this essay I will first show that strong artificial intelligence and superintelligence are able to reach, if not exceed, human intelligence. However, I will then show that since neither strong artificial intelligence nor superintelligence will ever reach the same level of consciousness as humans, neither is possible.
A potentially convincing argument for strong artificial intelligence and superintelligence is Alan Turing’s (1912–54) test for intelligence, which allows for machines to reach the same level of understanding as humans. By defining his own Turing machine as capable of learning and of becoming as intelligent as a human brain, Turing suggests that machines are able to progress into strong artificial intelligence. He claimed that to create a learning mind, only two inputs are required: pleasure and pain. With the machine able to experience pleasure and pain, ‘reward and punishment’ can be implemented, and therefore you can teach a computer, just as you can teach an infant, vicariously through its environment. According to Stephen Hawking, ‘intelligence is central to what it means to be human’, and this would imply, similarly to Turing’s beliefs, that for superintelligence and strong artificial intelligence to be human, all they would need is to behave intelligently. Therefore, they would not require cognitive performance and consciousness as Nick Bostrom and others suggest. In 1950 Turing wrote a paper, “Computing Machinery and Intelligence”, in which he proposed a test for intelligence using the structure of the imitation game. He placed a computer (A) and a human (B) on one side and a human tester (C) on the other. If, after a series of questions, the tester cannot recognize which candidate, A or B, is the human, then the computer has successfully passed the Turing test. In modern technology, the test is run through a chat interface through which the candidates and tester can communicate. Through this test, Turing suggests, like Stephen Hawking, that artificial intelligence does not need to ‘greatly exceed’ human cognitive performance, but rather merely surpass humans intellectually in order to count as conscious.
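The imitation-game set-up described above can be sketched as a toy program. The function names and the judge/candidate interfaces here are my own invention for illustration, not Turing’s:

```python
import random

def turing_test(judge, human_reply, machine_reply, questions):
    """Run one round of the imitation game over a text-only channel.

    `human_reply` and `machine_reply` each map a question string to an
    answer string; `judge` sees both answers under anonymous labels and
    must guess which label ("A" or "B") hides the machine.
    """
    # Hide the candidates' identities behind randomly assigned labels.
    candidates = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(candidates)
    transcript = []
    for q in questions:
        answer_a = candidates[0][1](q)   # answers from candidate "A"
        answer_b = candidates[1][1](q)   # answers from candidate "B"
        transcript.append((q, answer_a, answer_b))
    guess = judge(transcript)            # judge returns "A" or "B"
    truth = "A" if candidates[0][0] == "machine" else "B"
    # The machine "passes" the test when the judge's guess is no better
    # than chance; a single wrong guess is one piece of such evidence.
    return guess != truth
```

The point the sketch makes concrete is that the judge only ever sees text: on Turing’s proposal, nothing about the candidates’ inner constitution enters the verdict.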
Whilst Turing strengthened his theory by effectively rebutting many critiques, such as Lady Lovelace’s originality objection, he still fails to consider the emotional innovation which defines humans. One of the most famous objections is Lady Lovelace’s, stating that computers are incapable of originality, largely because they are unable to learn independently. In the section of his paper addressing Lovelace’s objection, Turing understands her counter as the claim that computers are never able to surprise us. In response, Turing argues that computers can still surprise humans, despite being programmed. Bostrom further counters Lovelace’s objection by arguing that an artificial intelligence’s values need to match our values, and that it must learn to understand what we would accept as correct, suggesting that he believes it is possible for a machine to learn. Additionally, the context in which Lovelace formulated her argument lacked contemporary scientific knowledge, as her paper was written around 1840, so the full scope of science and engineering was not considered in her argument. Indeed, Bostrom cites surveys suggesting a 90% probability that human-level machine intelligence (AGI) will be attained by 2065, suggesting that context is key for arguments deliberating this very modern development. In 1949 a more modern philosophical counter was proposed by Professor Geoffrey Jefferson, who claimed that ‘not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain’. This has not yet been achieved in artificial intelligence, and so, with this reference to emotion, strong artificial intelligence in this modern era is not yet possible.
These arguments suggest that the Turing test only looks at the surface of strong artificial intelligence, and ‘to be intelligent is not enough to be a thinker or a doer’; therefore, if a machine is only intelligent, strong AI is not possible.
Underlying the Turing test is a broader philosophy of mind, behaviourism, a physicalist view that identifies the mind with environmental inputs and behavioural outputs. However, like the Turing test, behaviourism is too simplistic a theory and so does not show that strong artificial intelligence is possible. Behaviourism implies that all that is needed to produce strong artificial intelligence is to programme computers to behave in a certain manner so as to present emotions and understanding. It is the belief that ‘one man can never know that another man is in a given state of mind’, as this would require them to share the same experiences, and sense experiences are always private and personal. On this view, a human is defined by the behaviours and physical attributes it displays; therefore strong artificial intelligence and superintelligence are able to be conscious and so to ‘exceed’ human cognition, as cognition is only physical and can be programmed. Fleenor, at the University of Utah, was given a virtual hand that was able to sense and experience things, suggesting that behaviour is all that is needed to make something human and conscious, as the hand has now become part of the man’s identity: ‘He literally, biologically and neurologically, feels’. Analytic behaviourism defines one’s mental states, and therefore one’s consciousness, in terms of the patterns of behaviour that one exhibits; statements regarding states of mind can be translated without loss of meaning into statements about behaviour. Calum Chace holds that the ‘existence of our own brains, producing rich conscious lives… is proof that consciousness can be generated by a material entity’, showing behaviourist influences in his view that it is the physical body and not an immaterial consciousness that makes a human, and therefore that strong artificial intelligence is plausible and possible.
However, behaviourism effectively denies the existence of many ‘inner’ aspects of mental states, and it ultimately fails because there is no necessary link between behaviour and mental states. Putnam (1926–2016) highlights the distinction between synthetic and analytic statements in order to produce his strong argument against behaviourism, drawing a distinction between necessary and contingent connections between statements. For example, ‘the house is on fire’ and ‘there is smoke coming out of that house’ are two statements with a clear connection between them; however, the connection is only contingent, as it is possible for the second statement to be true without the first being true. These are synthetic statements, whereas analytic statements have much tighter connections: ‘Paul is a bachelor’ and ‘Paul is unmarried’ are two statements between which a necessary connection holds. Putnam uses this distinction to claim that if mental states just are behavioural patterns, then the connection between mental states and behaviour should be necessary. However, through Putnam’s multiple realizability argument, it is evident that a given mental property can be realised by many different physical properties. Putnam further uses a science-fiction example to strengthen his counter-argument: ‘imagine a community of “super-spartans” or “super-stoics” – a community in which the adults have the ability to successfully suppress all voluntary pain behaviour. They may, on occasion, admit that they feel pain, but always in pleasant well-modulated voices… they do not wince, scream, flinch, sob, grit their teeth, clench their fists, exhibit beads of sweat, or otherwise act like people in pain… however, they do feel pain, and they dislike it (just as we do)... they even admit that it takes a great effort of will to behave as they do.’ This scenario shows that the link between pain and at least some pain behaviour is contingent, rather than necessary as the behaviourist claims. Another key argument against behaviourism is that it gives no role to the brain, and that someone such as an actor could produce all the outward behaviour associated with a mental state merely by pretending to have the feelings. Functionalism, however, is a development of behaviourism that allows for an inner working of mental and brain states whilst still accepting the possibility of strong artificial intelligence.
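Putnam’s multiple realizability point, that one and the same functional role can be realised by quite different underlying mechanisms, can be illustrated with a toy sketch (the ‘doubling’ task and both implementations are invented purely for illustration):

```python
# Two "physically" different realisations of one and the same function:
# the functional role (doubling a number) is identical, but the internal
# mechanism differs -- a toy analogue of Putnam's multiple realizability.

def double_by_multiplication(n):
    return n * 2                      # one mechanism: a single multiply

def double_by_repeated_addition(n):
    total = 0
    for _ in range(2):                # a quite different mechanism
        total += n
    return total

# Identical input-output profile across all inputs tested, so from the
# functionalist's point of view they occupy the same "functional state".
assert all(double_by_multiplication(n) == double_by_repeated_addition(n)
           for n in range(100))
```

If the same role can be played by arbitrarily different internals, then no particular physical description, and by extension no particular pattern of behaviour, is necessarily tied to the mental state it realises.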
Putnam himself formed an argument which makes use of the state table of the Turing machine to argue that our mental states are ‘functional states’, and that we are defined not by the physicality of an object but by its functionality. For him, ‘a functional state is defined by its causal relations to inputs, outputs and other functional organisations’. Putnam held the functionalist belief that the mind is defined by its function. Both functionalism and analytic behaviourism reject the idea that the mind is a distinct substance, whether a soul or a brain. A computer’s whole purpose is to compute functions: it is designed to take a certain input and, according to a set of instructions and algorithms, produce an output. Computational functionalism says that the mind is the software of the brain, so that mental states resemble computer states, which have been programmed and controlled. Turing’s machine is a blueprint illustrating how algorithms can be computed: ‘The action is completely given by what Turing calls a “table of behaviour” for the machine, dictating what it will do for every configuration’. According to functionalism, we can conceive of a complex machine table for the human mind, as human beings, like computers, take inputs in the form of sensory and perceptual information and produce outputs in the form of behaviour. Therefore, according to functionalism, since mental states are functional states, the brain mimics a computer, a computer can be a mind, and strong artificial intelligence is possible.
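The ‘table of behaviour’ idea can be made concrete with a toy Turing machine (the states, symbols and task below are invented for illustration, not drawn from Turing’s paper): every step is dictated entirely by looking up the current state and the symbol under the head.

```python
# A toy Turing machine driven entirely by a "table of behaviour":
# for each (state, symbol) pair the table dictates what to write,
# which way to move the head, and which state to enter next.
# This particular machine inverts a binary string, then halts.

TABLE = {
    ("invert", "0"): ("1", +1, "invert"),
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", "_"): ("_",  0, "halt"),   # "_" is the blank symbol
}

def run(tape, state="invert", pos=0):
    cells = list(tape) + ["_"]            # blank marks the end of input
    while state != "halt":
        write, move, state = TABLE[(state, cells[pos])]
        cells[pos] = write
        pos += move
    return "".join(cells).rstrip("_")

print(run("0110"))                        # -> 1001
```

On the functionalist picture sketched in the paragraph above, the claim is that a (vastly larger) table of this kind could in principle specify a mind, since the table captures causal relations between inputs, internal states and outputs.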
Whilst functionalism avoids behaviourism’s error of identifying mental states with actual outward behaviour, and conforms with common sense in thinking of a mental state in terms of its typical cause-and-effect relationships, there are also significant weaknesses to the theory, which suggest that strong artificial intelligence cannot be possible if a machine merely functions in the same way as a human. The main weakness of functionalism is its inability to account for personal sensations within the mind, which cannot be replicated by a machine. This failure to consider qualia and intentionality is highlighted in John Searle’s Chinese room argument.
John Searle’s Chinese room is an analogy intended to show that no computer can be superintelligent or have strong artificial intelligence, since intelligence requires understanding and, countering the behaviourist and functionalist views, is more than just external appearances. If one claimed to be fluent in English but not in French, it would seem obvious that one would understand a story in English but not in French. Advocates of strong artificial intelligence would argue that if a programme allowing that individual to understand French were given to any sufficiently complex machine, it too would be able to understand French stories. Searle disagrees. He imagines a monolingual English speaker inside a room with a rule-book and sheets of paper. The rule-book contains instructions for manipulating Chinese symbols, of the form: ‘If you see Chinese symbol X on one sheet of paper and Chinese symbol Y on another, then write down Chinese symbol Z’. People outside the room pass in pieces of paper with Chinese writing, and the man inside follows the rules to compose a reply and passes the paper back out. Imagine that the instructions are so sophisticated that the responses from the man inside the room are indistinguishable from those of a native speaker. Searle argues that it is not plausible to say that the man inside the room understands Chinese, and therefore that no computer could ever achieve strong artificial intelligence, as there is no way of knowing whether it would fully understand information, and so it would be incapable of becoming conscious. There is no way that the man in the room could ever learn Chinese; no amount of syntax could ever equal semantics.
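Searle’s rule-book can be sketched as a bare lookup table (the symbols and canned replies below are invented placeholders): the program produces fluent-looking answers whilst, on Searle’s view, manipulating only syntax.

```python
# Searle's rule-book reduced to a lookup table: input symbols are
# mapped to output symbols with no grasp of what either side means.
# The entries are invented placeholders for illustration.

RULE_BOOK = {
    "你好吗": "我很好",       # "How are you?" -> "I am fine"
    "你会说中文吗": "会",     # "Do you speak Chinese?" -> "Yes"
}

def chinese_room(message):
    # Pure symbol manipulation: on Searle's view this is syntax alone,
    # however convincing the output looks from outside the room.
    return RULE_BOOK.get(message, "请再说一遍")   # "Please say that again"

print(chinese_room("你好吗"))   # replies appropriately, understands nothing
```

The sketch makes the dialectic vivid: the input-output profile could in principle be extended until it passes a Turing test, yet nothing in the table is a candidate for understanding.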
However, whilst the man inside the room does not understand Chinese, the system as a whole arguably does, which could suggest that an entire system of artificial intelligence could become strong. Searle replied that even if the man memorised the entire system and left the room, he would still be unable to understand Chinese. Nevertheless, once the man starts interacting with the external world again, he would associate the symbols with things and begin to understand Chinese, suggesting that strong artificial intelligence is possible in a matter of time, as computers evolve and begin to learn from and interact with their environments. Whilst Searle shows the importance of understanding for genuine intelligence, he still does not give an adequate account of qualia and of what it feels like to be conscious, as his analogy is too simplistic. The brain cannot be reduced to one room, as the mind is complex and much harder to understand; computers may therefore understand and behave in a certain way, but they will never feel what it is like to be a fully functioning conscious being.
If consciousness is needed to produce strong artificial intelligence and superintelligence, there are two key possibilities to consider. First, consciousness could be explicable by science, in which case functionalism might be true and strong artificial intelligence would be possible, since consciousness would be wholly physical and so could be created. On the other hand, consciousness might be explained by a non-physical aspect of the universe, which again links to the wider philosophy of mind.
Both functionalist and behaviourist philosophies view humans as consisting of a single physical entity, which allows for strong artificial intelligence to be possible; however, this does not provide a full explanation of what makes a human being distinctive. Dualists believe that humans are a composite of two entities: a physical body and a non-physical mind. Since they hold that physical death is the separation of body and mind, and given that a machine cannot die and that a ‘mind’ (soul) cannot be artificially created, strong artificial intelligence can only ever appear to perform human cognitions, but can never actually be conscious and alive, as it is only a body and not a mind as well. Richard Swinburne (b. 1934) is a substance dualist who holds that there are two kinds of substance, the physical and the non-physical; the non-physical substance exists in addition to brains and physical matter as an active, thinking entity. He conducted a thought experiment in which a man’s brain is split in two, each half placed into a different body, and asked which person is the original man. From this, Swinburne concluded that in identifying as ‘me’, one must be something other than just a body, as the mind defines most of one’s identity.
Descartes (1596–1650) was also a substance dualist, along with Eccles; both claimed that human cognitive processes cannot be embodied in physical brains and machines, and that there is therefore no possible way to create strong, conscious artificial intelligence. Descartes introduced Cartesian dualism to support his ideas and to show the distinction between humans and other physical bodies. He used scepticism to doubt everything in the objective world, with the exception of ‘cogito ergo sum’ (“I think, therefore I am”), which he claimed proved that he was a thinking, conscious being, the only fact of which he could be certain. By the noun “thought”, Descartes was referring to ‘everything that takes place within us so that we are conscious of it, in so far as it is an object of consciousness’. Since Descartes saw the body, res extensa, as a machine, machines can never be conscious and so can never exceed human cognitive performance, as this would require a mind as well as the physical brain and its structure. For Descartes, animals, though living, are mere automata: they may behave as thinking things, but they are not conscious. In a radio broadcast in 1952, Turing asserted that ‘we are not interested in the fact that the brain has the consistency of cold porridge. We don’t want to say “this machine’s quite hard, so it isn’t a brain, so it can’t think”’. A machine may therefore, as Turing suggested, be presumed alive and thinking; however, it will never attain human consciousness and emotions.
Unlike other philosophical objections to strong artificial intelligence, dualism remains a live issue in modern discussions. Leibniz (1646–1716) argues for dualism in terms of states of knowledge rather than states of possibility. He claimed that even if a machine is supposed to be a thinking thing, there would be no way to explain how it thinks even by examining its physical parts. Leibniz claimed that if you went inside a “thinking, feeling and perceiving” machine, all you would find is ‘nothing but parts which push and move each other, and never anything that could explain perception’. This analogy shows that no matter how much information one knows about the physical object, modern technology and science have not yet discovered a reason or explanation for consciousness. Leibniz still influences modern philosophers such as Alvin Plantinga (b. 1932), who claimed that ‘a physical object is… just not the sort of thing that can think’. This highlights the main obstacle facing the progression of artificial intelligence: finding the other substance which makes humans distinctive. Dualism, whilst presenting an explanation for the feeling of consciousness, requires belief in God and a non-physical realm, going against many of our intuitions.
On the other hand, scientific discovery has found physical explanations for our emotions and thoughts, rejecting substance dualism and suggesting that, with further technological advancement, strong artificial intelligence is possible. Paul Churchland argued for neural dependency by saying that humans are a ‘purely physical outcome of a purely physical process’. In his book ‘Matter and Consciousness’, Churchland refers to the effects that drugs, alcohol and senile degeneration of nerve tissue have on human rationality and thought. He adds that ‘the vulnerability of consciousness to the anaesthetics, to caffeine and to something as simple as a sharp blow to the head, shows its very close dependence on neural activity in the brain’. Churchland’s eliminative materialist philosophy does not allow for multiple realisability, and suggests that computers could only be conscious with a human brain. However, this would more closely resemble genetic cloning, and would no longer be a case of artificial objects becoming intelligent and conscious, as ordinary computers would have different mental states from such genetically cloned beings. Therefore, if Paul Churchland’s explanation of consciousness is implemented in artificial intelligence, different rights will need to be considered, as machines will not have the same level of consciousness as humans if their structure does not resemble that of the biological brain. Whilst Descartes insinuates that the pineal gland links the mind and the physical brain, physicalism argues that the mind simply is the brain, placing a stronger emphasis on the physical importance of the brain. Neither argument conclusively explains consciousness, and this motivates a further position: property dualism.
Since substance dualism requires belief in a non-physical God or religion, and science fails to account for the feeling of consciousness, property dualism seems the most plausible philosophical position of this generation on what distinguishes humans, and it suggests that strong artificial intelligence is not currently possible. Property dualism holds that a person consists of one physical substance which has two separate kinds of property: physical and mental. Property dualists, who are substance materialists, use the example of colour and shape to show that, whilst the two properties are independent of one another, a ‘red ball’, for example, could not be what it is without both its colour and its shape. Since property dualism disagrees with substance dualism in holding that the mental and the physical, like colour and shape, are properties of one substance, it follows that if we were to completely understand the brain, we would at least in part have enough information to explain the mind. With this, property dualism leaves open a possibility for strong artificial intelligence and superintelligence, depending on future scientific discoveries.
Overall, I hold that whilst computers will eventually transcend human intelligence, they will never exhibit the same emotion or consciousness that humans seem to have, as these will never be natural properties that can evolve in machines as they have in humans. Having explored the weaknesses of behaviourism and functionalism, as well as the Turing test, I feel these physicalist explanations cannot capture what it feels like to be conscious; I do not describe my pain in terms of physical descriptions, as Paul Churchland would argue. However, as the history of science highlights, we may eventually reach a clearer and greater understanding of the nature of our consciousness, leading me to believe that strong artificial intelligence may be possible in the future, if consciousness turns out to be something physical. For now, I believe the strongest philosophy is property dualism, as it both identifies the distinguishing feature of humans and allows a more natural explanation of consciousness. As a property dualist, I do not believe mental states can be fully reduced to physical substances and functions, as science has not yet discovered the root of our consciousness. Whilst property dualism implies that strong artificial intelligence is not possible at the moment, I believe that as science develops there remains a possibility for strong artificial intelligence and superintelligence, depending on the information and technological advancements made.