Whilst the increased prevalence of artificial intelligence can be explained by the opportunities it offers businesses to maximise their potential, its development raises multiple ethical challenges and questions.
One such question is whether we can assign responsibility to Artificial Intelligence, or, rephrased, whether we can assign it agency. If it thinks and acts in ways similar to human beings, can it be punished in the same way? Agency, defined as ‘the capacity of an individual to act independently and to make their own free choices', is essential in assigning responsibility; yet if the Artificial Intelligence is merely following its programming, can it really be called a moral agent? Whilst the human conscience can understand subtle nuances, such as sarcasm and ambiguity, Artificial Intelligence still needs to be programmed, and programming languages, built from rigid syntax, lack the semantics of ordinary communication. Therefore, if a military robot programmed to eliminate hostiles sees a child exhibiting what it believes to be hostile behaviour, when in reality the child is playing with a toy gun, and kills the child, can the robot be held responsible? Moral dilemmas like this prompt inquiries into the ethical programming of Artificial Intelligence. Despite the exponential growth of computational power in accordance with “Moore's Law”, ‘experts continue to worry about whether it is humanly possible to create software sophisticated enough for armed military robots to discriminate combatants from non-combatants'. It can therefore be argued that, without the ability to differentiate based on sensor input alone, Artificial Intelligence must have an ‘ethically correct way' of choosing its own actions. In designing and programming ethical Artificial Intelligence, ‘the limitations of current ethical theories for developing the control architecture of artificial moral agents (AMAs) highlight deep questions about the purpose of such theories'.
Current ethical viewpoints present difficulties. A deontological approach such as Kant's, when applied to AI, ‘would simply see roboticists or the robots themselves acting in accord with some finite set of (presumably algorithmic, programmable) rules, and moral decision making would thus consist simply in computing the proper outcome of the rules'. Kant's approach can be seen in Asimov's ‘Three Laws of Robotics'. And yet, in implementing this deontological approach, we open up a world where the rules can be abused: a robot which may not harm a human being, for example, would be helpless to stop someone being assaulted in the street. Such abuse of the rules is just one disadvantage of deontological approaches. Utilitarian approaches also fail, owing to the lack of computational power. A robot programmed to ‘act so as to maximise the greatest amount of utility for the largest number of people', also known as the “Greatest Happiness Principle”, needs copious amounts of information to make its choice. In each situation, the vast amount of information to be processed, the calculations to be carried out, and the need to define which actions result in greater happiness could leave Artificial Intelligence unable to act. Instead of taking a traditional approach to ethics, a method suggested by Colin Allen and Wendell Wallach in their book ‘Moral Machines: Teaching Robots Right from Wrong' is more useful. Their approach to the design of Artificial Moral Agents, which weighs ‘the top-down imposition of an ethical theory' against the ‘bottom-up building of systems that aim at goals or standards', acknowledges the difficulties of top-down approaches whilst also introducing the use of bottom-up building, focused on ‘learning, developmental, and evolutionary processes'.
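The computational burden of the utilitarian calculus described above can be illustrated with a minimal sketch. The actions, people, and utility figures below are purely hypothetical: the point is that even a toy agent must obtain an estimate for every person under every candidate action before the “Greatest Happiness Principle” can be applied at all.

```python
# Toy utilitarian chooser: pick the action whose summed utility across
# everyone affected is greatest. All actions, people, and utility
# estimates are hypothetical illustrations, not a real decision model.

def choose_action(actions, people, utility):
    """Return the action maximising total utility across all people."""
    return max(actions, key=lambda a: sum(utility[a][p] for p in people))

# Hypothetical estimates: utility[action][person].
utility = {
    "brake":  {"passenger": -1, "pedestrian": 9},
    "swerve": {"passenger": -4, "pedestrian": 6},
    "ignore": {"passenger":  2, "pedestrian": -9},
}
people = ["passenger", "pedestrian"]

# Even this toy requires len(actions) * len(people) estimates; a real
# agent would need one for every person plausibly affected.
best = choose_action(list(utility), people, utility)
```

With these made-up figures the agent selects "brake"; the difficulty the essay raises is that, outside a toy, the utility table itself is precisely what cannot feasibly be computed.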
The ability to reason and the focus on moral decision making emphasised by Allen and Wallach, as the route towards AI with full moral agency, form the baseline for the ethical discussions posed by the factors in this essay.
Whilst Allen and Wallach's proposed approach lays the foundation, an ethical problem remains in the design and programming of Artificial Intelligence. To fully utilise its potential, it is necessary either to program a code of ethics into the Artificial Intelligence or to allow it to develop its own ethical ideas, which raises the further question of whose ideals Artificial Intelligence should align to. In the words of Wallach and Allen, “When moving toward advanced autonomous systems, the systems should themselves be able to do ethical decision making to reduce the risk of unwanted behaviour”. The two previously mentioned theories for the ethical design of artificial intelligence each come with their own advantages and disadvantages. In simple terms, the bottom-up approach would consist of giving the mechanism which drives the thought process of the artificial intelligence, such as a neural network, a large number of varied ethical situations together with the accepted right answer for each. With a suitable number of situations and enough variation, the artificial intelligence would, theoretically, be able to make an informed decision when presented with an entirely new ethical problem, since data from the previous problems would have been collated and analysed for trends. One metric, for example, could be the number of human deaths in a scenario: the more people that die, the less ethically viable a solution it is. The problem with this approach is deciding who is responsible for choosing the right ethical answers; after all, there are many different schools of thought surrounding ethics. This creates problems in the development of Artificial Intelligence, in the sense that it is difficult to program without knowing what limits and bounds are to be put in place.
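The bottom-up idea above can be sketched minimally, assuming (purely for illustration) that each scenario is reduced to the single metric mentioned, the number of human deaths, and that a human teacher has labelled a handful of past scenarios as acceptable or not. A nearest-neighbour rule then judges a new scenario by the most similar labelled example; it is a stand-in for the neural network the text mentions, not a serious model.

```python
# Bottom-up sketch: judge new scenarios from human-labelled examples.
# The single feature (deaths) and the labels are hypothetical.

# Training data: (number_of_deaths, acceptable?)
training = [(0, True), (0, True), (1, False), (2, False), (5, False)]

def judge(deaths):
    """Label a new scenario by its nearest labelled example."""
    nearest = min(training, key=lambda example: abs(example[0] - deaths))
    return nearest[1]
```

The sketch also exposes the objection raised above: whoever chooses the labels in `training` has already decided the ethics the system will generalise.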
The other proposed theory for the ethical design of Artificial Intelligence is a top-down approach. Again, in simple terms, this would mean coding the basis of a system of ethics into the Artificial Intelligence and then letting it develop its own “thoughts”. The difficulty here is deciding which of the previously described ethical theories should be used, or whether different theories could be used for different Artificial Intelligences, depending on where each was developed or what it was developed for.
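A top-down system can likewise be sketched as an ordered list of rules, loosely in the spirit of Asimov's Three Laws; the first rule that fires decides. The rules and scenario fields below are hypothetical illustrations, and the example deliberately reproduces the rigidity noted earlier: restraining an attacker counts as harming a human, so the rule set forbids intervening in an assault.

```python
# Top-down sketch: ordered deontological rules, first match wins.
# Rules and scenario fields are illustrative, not a real control system.

RULES = [
    (lambda s: s["harms_human"],    "forbid"),  # never harm a human
    (lambda s: s["disobeys_order"], "forbid"),  # obey human orders
]

def evaluate(scenario):
    """Return the verdict of the first rule whose condition holds."""
    for condition, verdict in RULES:
        if condition(scenario):
            return verdict
    return "permit"

# Rigidity in action: intervening in an assault involves harming the
# attacker, so this scenario is forbidden outright.
scenario = {"harms_human": True, "disobeys_order": False}
```

The appeal of the top-down form is its predictability; the cost, as the essay argues, is that every hard case must already be anticipated in the rules.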
Assuming we overcome the hurdle of correctly programming Artificial Intelligence, and that the method used is suitable for granting it complete moral agency, there remains the matter that Artificial Intelligence, like humanity, will ‘evolve'. The age-old problem of ‘playing God' arises, and with it questions over the extinction of humanity. Technology itself has yet to cause a global catastrophe, whilst human error has been responsible for many disasters, such as the 1986 explosion of the nuclear reactor at Chernobyl, in what was then the Soviet Union, due to too many control rods being removed. Examples like this are just one reason why a developed form of Artificial Intelligence could decide that humanity is a danger and, in true human-like fashion, choose that the best way to deal with a problem is to eliminate it. This misalignment between the goals of humanity and those of Artificial Intelligence is just one ethical problem which would arise from its increased prevalence. In his book, Shanahan explores the decision-making processes of three different Artificial Intelligences, each below human-level intelligence, in what he titles “Unintended Consequences”. Encompassing marketing events, gas masks, armed factory raids, death, and the renouncement of wealth by a senior executive of a large company, the story shows the unpredictability of the actions of Artificial Intelligence, as well as how those actions differ from what a human might do. Whilst the actions of the Artificial Intelligence may be genuinely directed towards its set purpose, and may, strictly speaking, remain within the bounds of its programming, the severity of the consequences may not register for it at the same level as it would for a human. Lastly, there is the danger of the misuse of Artificial Intelligence.
The technological adage that “nothing is unhackable” holds true for Artificial Intelligence as well, and with it comes the danger that those with malicious intent will turn it to unintended purposes. The amount of data needed for Artificial Intelligence to make informed decisions, together with its intended purpose, could make it a prime target for those with bad intentions. Take, for example, lethal autonomous weapons systems, over whose use many have expressed concern. Whilst steps are being made within the field of ethics to include Artificial Intelligence, such as ETHICBOTS and the Euronet Roboethics Atelier project, a concrete system will need to be put in place to ensure that Artificial Intelligence does not infringe on the rights of humanity.