A Review of Artificial Intelligence
Benjamin Koo, Christine Nguyen, Genevieve Grossman, Nse O’Dean, Brendan Schuster
ABSTRACT
Artificial intelligence has come a long way since its realization in the 1950s. The idea of man-made intelligence dates back to ancient Greece, but it has only materialized over the past 70 years. While some could still argue that AI is in its infancy today, it has already driven great advances in science and medicine. The advances AI enables in nanotechnology also feed back into the development of artificial intelligence, because of the increased capacity and complexity achievable on a circuit. Recent AI applications have improved nano-imaging and analysis techniques as well. Artificial neural networks, a form of AI, have advanced image processing at the nanoscale. Nanoscale surface analysis of materials has also improved, helping to determine the material properties of nanomaterials more accurately. These extremely powerful tools are used to optimize manufacturing techniques for other small-scale devices and materials. There are, however, ethical concerns to consider with this new technology. As AI continues to improve, gathering data on ever larger scales can become a problem, especially in the medical field when information is gathered on large pools of people. Security in information gathering is becoming a major issue in the new age of information. Furthermore, the uncharted territory science is approaching ever faster with highly intelligent, self-aware systems presents many concerns that the world has never had to consider. AI has proven extremely beneficial to society by enabling vast improvements in medicine and science, so it is worth the current risks to keep developing it. As long as more advanced AI is developed ethically and safely, it will benefit society.
1. Introduction:
Artificial Intelligence, also known as AI, is a field that is expected to grow exponentially as computing power continues to grow. Before going into the history, applications, and finally ethics of AI, it is helpful to define what artificial intelligence is. AI can be described as an agent of intelligence; all living things are agents of intelligence as well. Intelligence helps each of these agents survive and achieve its respective goals by flexibly changing and adapting to its environment. This includes gaining experience through trial and error, observing others, and adjusting behavior to best accomplish its own goals. In order to develop artificial intelligence, one needs to understand what makes a system intelligent, in this case a computer. Reasoning is the main instrument of intelligent systems in the world today, and one hypothesis is that integrating such logical thinking methods into computational tools is the main goal in the development of artificial intelligence [1].
This raises the question of whether artificial intelligence is real intelligence. Can a collection of transistors arranged in a certain configuration manifest an intelligent system capable of complex reasoning like that of humans? A good analogy is artificially synthesized diamonds: even though they are created in a lab and differ from naturally occurring diamonds, they are still “real” diamonds. AI might not be like any intelligence humans have ever seen before and may have prominent differences, but it would still be real. One might assume that the goal of engineering AI is to create intelligence much like that of a human, and because AI is still a widely misunderstood topic, many people believe this to be true. The history of flight offers a counterexample: in the beginning of human flight, many designs tried to copy birds, flies, and other animals that could fly. Once there was enough knowledge of the subject, namely aerodynamics, engineers realized flight in the form of planes. This type of flight is nothing like the earlier examples, but it is the most efficient for such large objects. For the large-scale processing science hopes to achieve with AI, one might hypothesize that the result will not closely resemble human intelligence [1][2].
Artificial Intelligence has been studied and developed over many years by people across many disciplines. Even in ancient Greece, Homer wrote about mechanical “tripods” serving the gods, so the idea that a machine may be capable of “intelligent thought” goes far back in history. Philosophers have shaped the concept of intelligent machines in relation to humans and challenged what it truly means to be human. Science fiction writers such as Isaac Asimov have painted fantasies of these machines dramatically changing human lives for the better [3].
The designs of intelligent machines of the past were purely mechanical. Chess-playing machines of all sorts sprang up in the eighteenth and nineteenth centuries [3]. The most famous one, “The Turk”, was not actually a machine player at all: a gifted chess player was hidden inside The Turk’s enclosure, moving the pieces from within. However, The Turk did showcase the potential learning aspect of intelligent machines. The idea that a machine could perform an action based on previous experience remained hypothetical until the middle of the twentieth century, when advances in computing theory and technology allowed the construction of such intelligent machines, which are given life through what is known colloquially today as AI.
After WWII, the establishment of computing lab facilities, such as Alan Turing’s lab in Manchester, bolstered the ability to theorize, build, and experiment with algorithms for AI. Various countries around the world built large computing labs, and leading scientists attended conferences to exchange ideas, such as the Cybernetics Conference shown below.
Figure 1. The Cybernetics Conference [4]
The early development of AI drew on multiple disciplines from its inception. Norbert Wiener (engineering), Walter Pitts (biology), Claude Shannon (information theory), and John von Neumann (game theory) all shaped the progress and direction of AI research. One of the most important developments in computing theory is the Turing Machine, invented by Alan Turing and described in his 1936 paper. He also published a paper on the Turing Test in 1950 in a philosophy periodical [3]. A computer is said to pass the Turing Test if a human cannot distinguish between text output from the computer and text output from another human. This greatly influenced the philosophical, ethical, and moral implications of creating AI that can precisely mimic human behavior.
A shift to knowledge-based programs happened in the 1960s and 1970s. This “knowledge” is stored electronically and can be manipulated via programming, which was also in its infancy. The ability to store data allowed the production of machines able to perform mass spectroscopy analysis, give weather predictions, forecast stock markets, and recognize human speech [5].
In the 1980s, the advent of personal computers allowed commercial use of computing, such as manufacturing companies utilizing AI for quality control. Around this time, the neural-network approach to AI learning was developing quickly and became a discipline of study and practice that is still widely used today. The 1990s introduced the Internet to computer communications and allowed broader distributed learning for AI across many computers [5]. Combined with constant improvements in computing speed, efficiency, and cost, integrating AI into all kinds of machines has become a sharp interest for many disciplines and technologies. Some important applications are discussed in the following sections.
2. Applications
In recent years there has been growing interest in nanotechnology across fields and areas of research for its mechanical, chemical, and biological applications. Within this relatively new field there is an even newer development: the convergence of nanotechnology and artificial intelligence. Traditionally AI was thought of as solely a subject of science fiction, but with the development of nanotechnology it is becoming more and more possible to realize AI functionality using current nanotechnology. Nanotechnology combines physics, chemistry, and engineering, while AI to this point has primarily been used in biologically inspired applications to produce computational models such as neural networks. The intent is to use these two technologies to enhance the research and implementation of nano-, bio-, and information technology in the real world, along with a variety of other disciplines. However, this integration is still in its infancy. If it is possible to bridge the gap between nanoscience and AI, it may one day be possible to merge biology and technology on a larger scale (Fig 2).
Figure 2. Areas of STEM that could be affected by the merging of AI and nanotechnology.
A recent application of AI and nanotech has been the use of Artificial Neural Networks (ANNs) in scanning probe microscopy (SPM). ANNs are “a set of interconnected nodes whose connection weights are determined through a supervised or unsupervised algorithm in order to learn these types of input–output functions” [6]. Over the years a wide range of SPM techniques have been developed for different parameters and grades of resolution depending on the size of the object being scanned; examples are scanning capacitance microscopy [7], scanning near-field optical microscopy [8], and atomic force microscopy (AFM) [9]. Despite years of attempts to improve image resolution [10,11] and atom manipulation [12,13], SPM signals are still difficult to interpret [14], with tip-sample interactions posing the main problem because of the many parameters involved. Artificial intelligence can help clarify the resulting images. Recently much progress has been made in multimodal SPM imaging, which uses multiple channels to derive data about an image; examples are dual excitation frequency SPM [15], multiple harmonic imaging [16], 3D modes [17], band excitation SPM [18], and the use of rapid digital lock-ins [19]. In response, an SPM approach called functional recognition imaging (FR-SPM) was developed by Nikiforov et al [20] to be used in conjunction with principal component analysis (PCA) to quickly and accurately render an image. This is demonstrated by the effective images taken of the bacteria Micrococcus lysodeikticus and Pseudomonas fluorescens using FR-SPM (Figure 3). ANNs are also able to identify unknown parameters and characterize them accurately. This was demonstrated using electrostatic force microscopy (EFM), in which an ANN was trained with numerical simulations to estimate the sample distance and dielectric constant while a carbon nanotube (CNT) was being scanned [21]. Figure 4 illustrates the results and the similarity in error between the real and simulated samples.
Figure 3. Identification of bacteria on a PLL-coated mica substrate with ANN. (a) Topography image of the bacteria on PLL mica (the area within the blue rectangle was used for training the neural net). Output of the neural net in the form of recognition maps for the background (b), P. fluorescens (c) and M. lysodeikticus (d). Value 0 corresponds to the minimum likelihood of the point being the target, while value 1 corresponds to the maximum. Reprinted with permission from [22]. Copyright 2009 IOP Publishing.
Figure 4. (a) Difference between the real and ANN predicted dielectric constant values from an electrostatic force microscopy experiment. Dashed line corresponds to the error allowed in the training phase. (b) Normalized error of the experimental curves that were used in this work. Reprinted with permission from [23]. Copyright 2009 IOP Publishing.
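The per-pixel recognition idea behind approaches like FR-SPM can be illustrated with a minimal neural-network sketch: each pixel’s measured response vector is mapped to a likelihood of belonging to a target class. The data, network size, and training settings below are entirely synthetic and hypothetical, not taken from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel SPM response vectors (8 samples of a response
# curve per pixel): class 0 = bare substrate, class 1 = bacterium.
n, d = 200, 8
X = np.vstack([rng.normal(0.0, 0.3, (n, d)),   # substrate responses
               rng.normal(1.0, 0.3, (n, d))])  # bacterial responses
y = np.concatenate([np.zeros(n), np.ones(n)])

# One-hidden-layer network trained with full-batch gradient descent
# on the logistic loss.
H, lr = 4, 0.3
W1 = rng.normal(0, 0.5, (d, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0

for _ in range(1000):
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # P(pixel is bacterium)
    g = (p - y) / len(y)                       # dLoss/dlogit
    gh = np.outer(g, W2) * (1.0 - h ** 2)      # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

acc = float(((p > 0.5) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

In a real recognition pipeline, training responses would come from a labeled region of the image, as in Figure 3, and the trained net would then be applied pixel by pixel to build the recognition maps.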
ANNs are also used to help determine the material properties of nanomaterials. The morphology of CNT turfs can be characterized through the extrapolation of structural properties like alignment and curvature. Stereological analysis (three-dimensional interpretation of two-dimensional cross sections) of CNT images from a scanning electron microscope (SEM) was catalogued using ANNs. This organization was accurate and effective, allowing for proper analysis of CNT structures. To ensure these results were usable, two training sets were used: defined values from rope images, then a secondary set of CNT images analyzed by hand to train the ANNs properly. Chemical reactions are another aspect that can be characterized by ANNs, through “kinetic data parameters” like rate constants and the concentrations of species in a reaction. Without using differential equations, scientists are now able to determine conversion rates of heterogeneous oxidation reactions, because ANNs are able to predict many results and parameters accurately in most cases. Genetic algorithms have previously been used to understand nanoparticle clusters, because unlike other models, GAs consider the whole lattice structure without the need for prior inputs.
Artificial neural networks also perform optimization functions in nanotechnology. The processes used to manufacture thin-film optoelectronics (solar cells, flat panel displays, organic light-emitting diodes) are governed by complex parameters [24]. By using ANN models to optimize the effect of processing on electrical properties and deposition rates, the materials produced were of higher quality, with optimized carrier concentrations, mobility, and resistivity in the thin films [6]. What was once a complex nanotechnology process with nonlinear parameters became much easier and more efficient once artificial intelligence was used to optimize the many variables involved.
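The surrogate-modeling idea is simple to illustrate: fit a cheap model to a handful of noisy process measurements, then optimize the model instead of rerunning the process. The sketch below uses a quadratic fit in place of an ANN, and the “process” (a film-quality curve peaked near 300 C) is entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonlinear process response: film quality (arbitrary units)
# versus substrate temperature, peaked near 300 C, with measurement noise.
def run_experiment(temp_c):
    return -((temp_c - 300.0) / 100.0) ** 2 + rng.normal(0.0, 0.01)

temps = np.linspace(150.0, 450.0, 25)             # trial settings
quality = np.array([run_experiment(t) for t in temps])

# Fit a quadratic surrogate to the trials, then query it densely to pick
# the best setting without running further experiments.
coeffs = np.polyfit(temps, quality, 2)
grid = np.linspace(150.0, 450.0, 1001)
best_temp = grid[np.polyval(coeffs, grid).argmax()]
print(f"predicted optimal temperature: {best_temp:.0f} C")
```

A real deposition process has many coupled, nonlinear parameters rather than one, which is why ANN surrogates are used instead of a simple polynomial, but the optimize-the-model loop is the same.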
Genetic algorithms work in a similar way to neural networks: they use technology to replicate a biological process for the enhancement of other fields of science. Genetic algorithms (GAs) and evolutionary optimization have been used to find the best configurations for nanoantenna structures. Using a GA, researchers were able to simulate 100 generations of testing on 20 individual structures. They compared the maximal near-field intensity enhancement of the antennas, finding that the best one after 100 generations had an intensity twice as high as the reference antenna [25]. Genetic algorithms have similarly been used to improve carbon nanotube emission devices: GAs were implemented to improve e-beam focusability by optimizing the set of structural and electrical parameters exhibited by the carbon nanotubes [26]. Another example of genetic algorithms and evolutionary optimization in nanotechnology can be found in the synthesis of metallic nanowires. Evolutionary optimization is used to select for ideal optical properties (such as maximal scattering at a given wavelength) in order to tune nanowires for a given purpose. While this is currently only utilized in nanowires, there is potential for use in other areas of interest in the field of plasmonics [26]. Researchers suggest that future applications might include using this technique to optimize other electro-optical systems, including micro- and nanofabrication processes such as parallel electron beam lithography; electron microscope imaging for applications in materials science, displays, and lighting (e.g. field-emission pixels); and accelerator physics [6].
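A minimal GA of the kind described (a population of 20 candidate structures evolved for 100 generations) might look like the following sketch. A toy quadratic fitness stands in for the electromagnetic simulation used in the actual studies, and the normalized parameters and their optimum are hypothetical:

```python
import random

# Toy stand-in for a full electromagnetic simulation: near-field enhancement
# is assumed to peak when each normalized geometric parameter (e.g. arm
# length, gap, width; all hypothetical) hits an assumed optimum.
OPTIMUM = [0.62, 0.31, 0.85]

def fitness(genes):
    # Higher is better; 0 is the best possible score.
    return -sum((g - o) ** 2 for g, o in zip(genes, OPTIMUM))

def crossover(a, b):
    # Uniform crossover: each gene comes from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genes, rate=0.3, scale=0.05):
    # Perturb some genes with small Gaussian noise, clamped to [0, 1].
    return [min(1.0, max(0.0, g + random.gauss(0.0, scale)))
            if random.random() < rate else g for g in genes]

random.seed(0)
population = [[random.random() for _ in range(3)] for _ in range(20)]

for _ in range(100):  # 100 generations of 20 structures, as in the text
    population.sort(key=fitness, reverse=True)
    elite = population[:5]  # keep the best designs unchanged
    population = elite + [
        mutate(crossover(random.choice(elite), random.choice(elite)))
        for _ in range(15)
    ]

best = max(population, key=fitness)
print("best design:", [round(g, 3) for g in best])
```

The expensive part in practice is the fitness evaluation, which is why GA studies in nanophotonics report results in terms of how few simulated generations are needed.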
3. Ethics
With rapidly emerging and innovative technological applications at the intersection of artificial intelligence and nanotechnology, it can be easy to develop transformative technologies faster than researchers and scientists can begin to understand their associated ethical ramifications. The marriage of nanotechnology and artificial intelligence prompts a thorough consideration and evaluation of the potential benefits, risks, and dangers of these new applications, because a myriad of stakeholders are impacted. In areas of scientific advancement such as robotics, biology, and medicine, researchers and scientists must understand the ethical and legal dimensions of new artificial nano-applications to protect stakeholders, which include the environment, lawmakers, and society [27].
The application of AI in bio- and nanomedicine paves the way for new innovations but also raises new ethical and legal considerations and concerns. For instance, rapid and accurate detection of diseases becomes feasible with a diagnostic nanodevice, depicted in Figure 5, that analyzes cells, fluids, and molecules in the human body [28]. One of the benefits of this microchip is its ability to perform accurate, low-cost diagnostics [29].
Figure 5. Diagram depicting the intersectional nanochip device application of nanotechnology, biology, and artificial intelligence: a programmable bio-nano-sensor chip that utilizes artificial intelligence and algorithms to detect biomarkers for disease. [3]
For example, “cardiovascular disease (CVD) has been the leading killer of Americans for decades [30],” and the implementation of a diagnostic bio-nano-sensor chip can promote early diagnosis of cardiovascular disease and ideally decrease the rates at which people die from cardiovascular diseases [29]. Improved diagnosis and implementation of more accurate preventative measures can cut costs of treatments, medications and healthcare significantly.
However, with this nanochip technology, many ethical dilemmas can arise. Privacy is one of the major concerns associated with the bio-nano-sensor chip, because the analysis component of this technology handles extremely sensitive and valuable information about the patient. Additionally, developers of the AI component do not have a holistic understanding of its learning capabilities, which means the algorithm could process sensitive user data in unintended or even harmful ways. Because of AI’s advanced learning capabilities, a system can be programmed to achieve a goal but may find and learn a destructive methodology for achieving it. For example, machine learning can parse and flag valuable information about patients’ genetic predisposition to certain diseases and perhaps determine the likelihood of a disease manifesting. If an insurance provider has access to this information, the data can negatively affect a patient’s insurance coverage, compromising the patient’s opportunity to obtain fair health coverage. Furthermore, because this technology exposes sensitive health information, it can also open the door to genetic discrimination on a more widespread level. A possible way to balance the benefits and dangers of this technology is to evolve laws that police the expanding boundaries of nanotechnology and AI. Regulation plays a crucial role in examining the bioethics and computer ethics involved, because it weighs the benefits of cost reduction and better preventative and treatment tactics against the risks to user privacy in the collection and sharing of user data, and against the potential resulting dangers of genetic discrimination by healthcare providers, insurance companies, and employers. Researchers and developers of this new AI nanochip must work to understand the domain in which they work and how it positively and negatively affects the stakeholders of this technology.
Another revolutionary application of AI and nanotechnology is the use of genetic algorithms to modify chromosomes for selective generational breeding, which raises inquiries about the ethics of human nature, enhancement, and value [28], and presents both advantages and disadvantages. One benefit of this technology is the potential to eliminate diseases and disorders by using nanodevices to perform genetic mutation and to ensure that these diseases and disorders are not passed down after long-term generational breeding. However, using nanodevices to perform selective genetic programming can promote genetic engineering for unethical motivations and biases. For example, genetic engineering can introduce or deepen already present social, racial, and economic inequalities if this technology is used to create an “ideal” class of genetically engineered people. Utilizing AI and nanodevices to develop genetic algorithms is extremely new, and there is still more to be explored and understood by researchers before this technology can become widely available. Haphazardly and rapidly distributing innovative technologies without a thorough understanding of the legal and ethical dimensions of the problem can be extremely dangerous. Before deploying nanodevices for genetic engineering, regulations covering how the algorithm is developed and tested for biases, who uses this technology, and how it can be used must be defined in order to balance benefits and hazards. Enforcing this balance is crucial to protecting the stakeholders of genetic algorithm nanotechnology, who include but are not limited to doctors, patients, and insurance companies. It is also important to consider the bias of the developers of this AI. Biases from those who develop the algorithm and those who select test data and cases can pose dangers to the implementation and utilization of machine learning, because they skew the results toward a specific perspective or conclusion.
Therefore, the meticulous consideration of ethics, regulation, and law plays a crucial role in protecting and mediating between stakeholders and in weighing the benefits and ramifications of genetic engineering with AI nanodevices.
As AI continues to grow, extremely intelligent, self-aware systems may eventually be synthesized. This completely uncharted territory could pose extremely serious consequences: a system smarter than all of humanity combined could have the ability to cause serious harm on a global scale. This seems like a far-fetched idea seen only in movies and science fiction books, but it is based on a real concern. There are no studies on this topic yet because the technology does not exist, but for scientists and engineers, extreme caution and safety are paramount. As the technology stands right now, it is clear that the benefits of using artificial intelligence are grounds enough to keep developing it. Since AI is not yet deployed on a large scale, many of the risks of implementing this technology are merely theoretical. As long as AI continues to be developed in a safe and ethical way, many of the issues being raised should remain manageable.
4. Conclusion
Artificial intelligence has come a long way since its start in the 1950s. Science is already seeing serious benefits from integrating this emerging technology into many different processes to increase accuracy and reduce error. Image processing has improved greatly through the use of AI to more accurately depict images at the nanoscale. These self-learning algorithms are able to clarify an image based on past and present data, delivering more accurate images than ever before. As these algorithms improve, one can only expect the advancements in such technology to improve as well. Furthermore, using AI to help optimize nanoscale manufacturing techniques will also feed back into AI technology as computing power per unit area improves.
The applications of primitive artificial intelligence have proven very useful so far for image processing in nanoscale technologies. Artificial neural networks have been shown to diagnose various diseases in humans more accurately than doctors through their image processing. This new technology raises ethical concerns for its stakeholders: doctors, patients, and the companies that gather the data. The legal and ethical dimensions of AI and nanotechnology must be thoroughly specified and evaluated to create a well-informed system of counterbalances that carefully examines: the benefits that nanotechnology can offer to improve users’ wellbeing and standard of living at a low cost; the potential risks, which include protecting the integrity and privacy of user information when users disclose their information to the technology itself; and finally, the dangerous implications that can arise from AI and nanotechnology, which include genetic discrimination, increased socioeconomic disparities, and the potentially destructive nature of AI due to developer-to-technology goal misalignment.
While there are serious risks to developing AI technology further, if scientists and engineers practice safety and follow ethical guidelines, the benefits will continue to outweigh the disadvantages. One area of artificial intelligence of great concern is the development of highly intelligent, self-aware systems. While the technology to realize such a system is not yet available, a huge amount of research and analysis of these technologies will be needed when science does arrive at that point.
References
[1] Poole, David L., et al. Computational Intelligence: a Logical Approach. Oxford University Press, 1998.
[2] E. S. Brunette, R. C. Flemmer and C. L. Flemmer, "A review of artificial intelligence," 2009 4th International Conference on Autonomous Robots and Agents, Wellington, 2009, pp. 385-392., doi: 10.1109/ICARA.2000.4804025.
[3] Buchanan, Bruce G. “A (Very) Brief History of Artificial Intelligence.” AI Magazine, vol. 26, no. 4, Winter 2005, pp. 53–60.
[4] Bruderer H. (2016) The Birth of Artificial Intelligence: First Conference on Artificial Intelligence in Paris in 1951?. In: Tatnall A., Leslie C. (eds) International Communities of Invention and Innovation. HC 2016. IFIP Advances in Information and Communication Technology, vol 491. Springer, Cham.
[5] Ning et al. “Discussion on research and development of artificial intelligence”. 2010 IEEE International Conference on Advanced Management Science (ICAMS 2010). IEEE. 23 August 2010.
[6] Sacha, G M, and P Varona. “Artificial Intelligence in Nanotechnology.” Nanotechnology, vol. 24, no. 45, Nov. 2013, p. 452002., doi:10.1088/0957-4484/24/45/452002.
[7] Matey, J. R., and J. Blanc. “Scanning Capacitance Microscopy.” Journal of Applied Physics, vol. 57, no. 5, 1985, pp. 1437–1444., doi:10.1063/1.334506.
[8] Pohl, D. W., et al. “Optical Stethoscopy: Image Recording with Resolution λ/20.” Applied Physics Letters, vol. 44, no. 7, 1984, pp. 651–653., doi:10.1063/1.94865.
[9] Binnig, G., et al. “Atomic Force Microscope.” Physical Review Letters, vol. 56, no. 9, Mar. 1986, pp. 930–933., doi:10.1103/physrevlett.56.930.
[10] “Masthead.” The Journal of Organic Chemistry, vol. 28, no. 9, 1963, doi:10.1021/jo01044a700.
[11] Enriquez-Flores, C I, et al. “Fast Frequency Sweeping in Resonance-Tracking SPM for High-Resolution AFAM and PFM Imaging.” Nanotechnology, vol. 23, no. 49, 2012, p. 495705., doi:10.1088/0957-4484/23/49/495705.
[12] Piner, Richard D., et al. “‘Dip-Pen’ Nanolithography.” Science, vol. 283, no. 5402, 1999, pp. 661–663., doi:10.1126/science.283.5402.661.
[13] Oyabu, Noriaki, et al. “Mechanical Vertical Manipulation of Selected Single Atoms by Soft Nanoindentation Using Near Contact Atomic Force Microscopy.” Physical Review Letters, vol. 90, no. 17, Feb. 2003, doi:10.1103/physrevlett.90.176102.
[14] Miotto, R, et al. “Changes in a Nanoparticle’s Spectroscopic Signal Mediated by the Local Environment.” Nanotechnology, vol. 23, no. 48, June 2012, p. 485202., doi:10.1088/0957-4484/23/48/485202.
[15] Hung, Huey-Shan, and Shan-Hui Hsu. “Biological Performances of Poly(Ether)Urethane–Silver Nanocomposites.” Nanotechnology, vol. 18, no. 47, 2007, p. 475101., doi:10.1088/0957-4484/18/47/475101.
[16] Raman, Arvind. “Cantilever Dynamics in Atomic Force Microscopy.” Nano Today, 2008, pp. 20–27.
[17] Gómez-Navarro, C, et al. “Scanning Force Microscopy Three-Dimensional Modes Applied to the Study of the Dielectric Response of Adsorbed DNA Molecules.” Nanotechnology, vol. 13, no. 3, 2002, pp. 314–317., doi:10.1088/0957-4484/13/3/315
[18] Kos, A B, and D C Hurley. “Nanomechanical Mapping with Resonance Tracking Scanned Probe Microscope.” Measurement Science and Technology, vol. 19, no. 1, 2007, p. 015504., doi:10.1088/0957-0233/19/1/015504.
[19-23] Nikiforov, M P, et al. “Functional Recognition Imaging Using Artificial Neural Networks: Applications to Rapid Cellular Identification via Broadband Electromechanical Response.” Nanotechnology, vol. 20, no. 40, 2009, p. 405708., doi:10.1088/0957-4484/20/40/405708.
[24] Bhosle, V., et al. “Metallic Conductivity and Metal-Semiconductor Transition in Ga-Doped ZnO.” Applied Physics Letters, vol. 88, no. 3, 2006, doi:10.1063/1.2165281.
[25] Feichtner, Thorsten, et al. “Evolutionary Optimization of Optical Antennas.” Physical Review Letters, vol. 109, no. 12, 2012, doi:10.1103/physrevlett.109.127701.
[26] Chen, P Y, et al. “Optimal Design of Integrally Gated CNT Field-Emission Devices Using a Genetic Algorithm.” Nanotechnology, vol. 18, no. 39, Apr. 2007, p. 395203., doi:10.1088/0957-4484/18/39/395203.
[26] Macías, D., et al. “Heuristic Optimization for the Design of Plasmonic Nanowires with Specific Resonant and Scattering Properties.” Optics Express, vol. 20, no. 12, 2012, p. 13146., doi:10.1364/oe.20.013146.
[27] Tate, Jitendra S., et al. “Military and National Security Implications of Nanotechnology.” Journal of Technology Studies, vol. 41, no. 1, Spring 2015, pp. 20–28. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&AuthType=ip,uid&db=aph&AN=111189495&scope=site.
[28] Alexiou A., Psixa M., Vlamos P. (2011) Ethical Issues of Artificial Biomedical Applications. In: Iliadis L., Maglogiannis I., Papadopoulos H. (eds) Artificial Intelligence Applications and Innovations. EANN 2011, AIAI 2011. IFIP Advances in Information and Communication Technology, vol 364. Springer, Berlin, Heidelberg
[29] Christodoulides, Nicolaos et al. “Programmable bio-nanochip technology for the diagnosis of cardiovascular disease at the point-of-care” Methodist DeBakey cardiovascular journal vol. 8,1 (2012): 6-12.
[30] American Heart Association. “Cardiovascular Disease: A Costly Burden.” https://healthmetrics.heart.org/wp-content/uploads/2017/10/Cardiovascular-Disease-A-Costly-Burden.pdf