About Artificial Intelligence

Artificial Intelligence (AI), despite being prevalent in the everyday life of most individuals and present in almost every sector of modern industry in some capacity, curiously lacks a precise, universally accepted definition.

AI was first named in the 1950s, when Minsky, McCarthy, and colleagues described artificial intelligence as “that of making a machine behave in ways that would be called intelligent if a human were so behaving” (source).

Artificial intelligence has been characterized as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together” (Winston, n.d.); as “a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason and take action” (Panel, 2016); as “the activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” (Nilsson, 2010); and by what AI researchers do: “AI is primarily a branch of computer science that studies the properties of intelligence by synthesizing intelligence” (Simon, 1995).

At its core, Artificial Intelligence is the ability of a machine to complete a task that, if done by a human, would require intelligence.

Types of AI

The general definition of AI is broad, as is the range of ways it can be classified. AI can currently be classified under two separate systems: one classifies AI by its similarity to the human mind; the other, a broader scheme more commonly used in the technology industry, puts AI into three separate categories.

AI classified based on its relation to the human mind falls into four separate categories:

Reactive: This is the original form of AI, and it operates in an extremely limited capacity. Reactive machines emulate the ability to respond to different stimuli but have no memory-based functionality: they do not use previous experience to inform their current actions. In basic terms, they cannot learn; they can only respond to a limited range of inputs.

Limited memory: Unlike reactive machines, this type of AI has the ability to learn from historical data to make decisions. These machines are trained using data stored in their memory as a reference model for solving problems. Almost all current AI fits into this category.

Theory of mind: This type of AI currently exists only in theory. Theory of mind “is the ability to attribute mental states — beliefs, intents, desires, emotions, knowledge, etc. — to oneself and to others” (Wikipedia, n.d.).

Self-awareness: This type of AI also exists only hypothetically and is largely self-explanatory: it is an AI that has developed self-awareness.

These four types of AI can also be grouped under three broader classifications: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI).

ANI: the only form of AI that exists in our world today, often referred to as “weak AI”. These are intelligent systems that operate within a limited context, learning to carry out specific tasks without being explicitly programmed to do so, driven by sets of self-learning algorithms; at best, they are a basic simulation of human intelligence.

Narrow AI is generally focused on performing a single task extremely well, often at much faster speeds and with higher accuracy than humans. While this form of AI seems intelligent, it operates under a far larger set of constraints and limitations than even the most basic human intelligence: it is only capable of performing the specific tasks it was built for, which is where the name “narrow AI” comes from. Reactive and limited-memory AI fit into this category.

AGI: this type of AI has the same abilities as a human being; it can learn, perceive, and understand independently, and build connections and generalizations across multiple fields in the same manner that humans can. This form of AI currently exists only in theory.

ASI: a theoretical type of AI that surpasses human intelligence and ability in every facet. An example of this would be Skynet from the Terminator series.

How does AI work?

As stated, the field of AI concerns creating machines capable of executing tasks that would otherwise require human intelligence. Machine learning is a subset of that field, one that allows machines to “learn” independently, and deep learning is a further subset of machine learning, and the area currently producing the greatest advances in the field.

“Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques.” – Frank Chen (Source).

Machine learning

Machine learning is a subset of AI that allows a system to learn from data without needing to be explicitly programmed to do so. It does this through sets of rules – or “algorithms” – that the system is able to follow.

This is achieved by training the system: it is fed data, in which it finds patterns using statistical techniques, and from those patterns it derives a rule or procedure that explains the data or can predict future data. More simply put, the system learns.

“In essence, you could build an AI consisting of many different rules and it would also be able to be AI. But instead of programming all the rules, you feed the algorithm data and let the algorithm adjust itself to improve the accuracy of the algorithm. Traditional science algorithms mainly process, whereas machine learning is about applying an algorithm to fit a model to the data. Examples of machine-learning algorithms that are used a lot and that you might be familiar with are decision trees, random forest, Bayesian networks, K-mean clustering, neural networks, regression, artificial neural networks, deep learning and reinforcement learning. “ (IBM, 2018)
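To make this concrete, here is a minimal sketch (a hypothetical illustration, not taken from any of the sources cited above) of a system deriving a rule from data rather than being programmed with it: a one-variable least-squares regression that recovers the rule y = 2x + 1 from example points.

```python
# Minimal sketch: "learning" a rule (y = slope*x + intercept) from data
# via least squares, rather than programming the rule in directly.

def fit_line(xs, ys):
    """Derive the slope and intercept that best explain the (x, y) data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Statistical technique: minimise squared error between rule and data.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data that happens to follow y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

slope, intercept = fit_line(xs, ys)

def predict(x):
    """The learned rule, now usable to predict future data."""
    return slope * x + intercept

print(slope, intercept)  # the parameters the system "learned"
print(predict(6))        # predicting a value it was never shown: 13.0
```

The point of the sketch is the one named in the quote above: the algorithm adjusts itself (here, the slope and intercept) to fit a model to the data, instead of a programmer writing the rule by hand.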

Machine learning methods are usually divided into two broad categories: supervised and unsupervised.

Supervised learning is where algorithms are trained using labelled examples. It is similar to learning by example: the system is given a data set with labels that act as the “answers”, and it eventually learns to tell the labels apart by comparing its outputs with the correct outputs – the answers – to find errors and adjust itself accordingly.

For example, a system might be shown pictures of cats and dogs and, given enough data, will learn to differentiate them by, say, the structure of the ears or the shape of the face.

Once the system has been “trained”, it can then be applied to new data and classify it using the rules it has learnt.

The problem with supervised learning is that it usually requires enormous amounts of labelled data to work effectively; a system might need millions of images to, say, identify pictures of cats and dogs accurately.
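As a toy illustration of supervised learning (the feature names and values below are invented for this sketch, not real image data), a simple nearest-centroid classifier can be trained on labelled examples and then applied to new, unlabelled data:

```python
# Minimal supervised-learning sketch: a nearest-centroid classifier.
# The labelled examples (hypothetical "ear pointiness" / "snout length"
# features) act as the "answers" the system trains against.
import math

def train(examples):
    """Compute one centroid (average feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            s[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(centroids, features):
    """Label new data with the nearest learned centroid."""
    return min(centroids,
               key=lambda label: math.dist(centroids[label], features))

labelled = [([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
            ([0.2, 0.9], "dog"), ([0.3, 0.8], "dog")]

centroids = train(labelled)
print(classify(centroids, [0.85, 0.25]))  # -> cat
print(classify(centroids, [0.25, 0.85]))  # -> dog
```

A real image classifier is vastly more complex, but the shape of the process is the same: labelled answers in, an adjustable model, and classification of new inputs by the rules learnt.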

Unsupervised learning is where algorithms are trained using unlabelled data sets: the system is not given the correct “answer” for the data and instead must figure out what it is being shown. The aim of unsupervised learning is for the system to explore the data and try to identify patterns that can be used to classify and categorize it.

For example, an unsupervised system might cluster together data that can be grouped by similarities, such as a news website grouping together stories on similar topics.
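A classic clustering algorithm of this kind is k-means. The sketch below (with made-up 2-D points) groups unlabelled data by similarity without ever being told the “answers”:

```python
# Minimal unsupervised-learning sketch: k-means clustering.
# No labels are given; the algorithm must discover the groups itself.
import math

def kmeans(points, k, iterations=10):
    centroids = points[:k]  # naive initialisation: first k points
    clusters = []
    for _ in range(iterations):
        # Step 1: assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Step 2: move each centroid to the mean of its assigned points.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = [sum(c) / len(cluster) for c in zip(*cluster)]
    return clusters

# Two unlabelled groups of points; the similarity structure is discovered.
points = [[1, 1], [1.2, 0.8], [0.9, 1.1], [8, 8], [8.2, 7.9], [7.8, 8.1]]
for cluster in kmeans(points, k=2):
    print(cluster)  # one line per discovered group
```

Each point ends up in the group whose members it most resembles, which is exactly the news-story-grouping behaviour described above, scaled down to toy data.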

Deep learning

Deep learning is a subset of machine learning that employs a system inspired by the human brain: neural networks. It operates using progressive layers, each of which extracts and composites information from the one before it. As data is passed through the layers:

“each unit combines a set of input values to produce an output value, which in turn is passed on to other neurons downstream. For example, in an image recognition application, a first layer of units might combine the raw data of the image to recognize simple patterns in the image; a second layer of units might combine the results of the first layer to recognize patterns-of-patterns; a third layer might combine the results of the second layer; and so on.”

This allows systems to process large amounts of uncategorized and complex data efficiently by breaking it down into smaller, simpler parts and using those parts to recognize complex, precise patterns that would not be detectable using traditional machine learning techniques.
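The layered “patterns-of-patterns” process quoted above can be sketched as a tiny forward pass through a fully connected network. The weights below are illustrative placeholders, not trained values:

```python
# Minimal sketch of a deep network's forward pass: each layer combines the
# outputs of the previous layer, building progressively higher-level patterns.

def relu(x):
    """Standard activation: pass positive signals, zero out negative ones."""
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One layer: every unit combines all input values into one output value."""
    return [relu(sum(w * x for w, x in zip(unit_weights, inputs)) + b)
            for unit_weights, b in zip(weights, biases)]

# Illustrative (untrained) weights for a 3 -> 2 -> 1 network.
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
output_w = [[1.0, -1.0]]
output_b = [0.2]

raw_input = [0.9, 0.4, 0.7]                    # e.g. raw pixel values
hidden = layer(raw_input, hidden_w, hidden_b)  # layer 1: simple patterns
output = layer(hidden, output_w, output_b)     # layer 2: patterns-of-patterns
print(hidden, output)
```

Real networks stack many such layers with millions of learned weights, but each unit is doing no more than this: combining the previous layer's outputs into a new value and passing it downstream.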

The larger the neural network and the more data it has access to, the better the performance of the system. Deep learning, however, requires enormous amounts of processing power and specialized hardware (GPUs have made the recent advancements possible), long training times, and large amounts of data to work effectively.

In addition, one of the problems facing deep learning is known as the “black box” problem: it is often next to impossible to determine how the system came to a particular conclusion, which in turn makes it difficult to gain the insight required to refine and improve the system.

Development of AI

Despite AI having existed for more than half a century since the term was coined in the 1950s, the field has only recently seen large breakthroughs and interest from modern industries. This is due to advancements in computing power (notably GPUs) and the exponential growth in the volume and variety of data, which in turn has increased the potential value of – and advancement in – algorithms.

As the need to implement AI systems becomes more pressing, due to the rise of big data and AI providing a greater return on investment, more research and development has been directed into the field.

Challenges for development

The main challenge in the development of increasingly advanced AI is computing power. Until recently there was a technical brick wall: plenty of theoretical ideas existed, but not enough computing power was available to implement or develop them effectively.

Modern cloud computing and parallel processing systems have helped for now, but they are nothing more than a stopgap as complex deep learning algorithms and data volumes continue to grow and more power is required.

Another problem in the development of AI is that current systems can only learn from the data they are given; knowledge cannot be integrated in any other way. This means, for example, that any inaccuracies in the data will be reflected in the results.

This is partly because modern AI operates with a one-track mind: it is only capable of performing a specific task, and thus unable to draw on learning and data from tasks other than the one it is performing.

There is also a lack of professionals in the field: despite the increased demand for AI experts, machine and deep learning developers, and data scientists, the talent supply remains at a deficit – as of early 2019 there were estimated to be fewer than 40,000 AI specialists in the world (Source).