Artificial intelligence (AI) is technology that gives machines the power to perform specific tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, learning, and language translation. AI, and its definition, continues to evolve. Part of AI’s development is contingent on how its creators interpret and set expectations for human intelligence, as well as how they imagine nonhuman intelligence. As a result, stages, classifications, and levels for defining AI have begun to take shape.
AI continues to be featured in the news, in conversations, and in product development. As with other innovative technologies that individuals choose to ignore out of a lack of understanding, fear, or disbelief, one thing remains constant: it is not going away.
When starting to discuss AI and corporate compliance, there are two aspects to consider. One is understanding how AI will improve corporate compliance in efficiently identifying, monitoring, and addressing compliance issues. The other is how corporate compliance will now need to identify risks, develop guidelines, and monitor the use and implementation of AI.
In this section, we will (1) introduce and define AI, (2) identify varied use cases, (3) discuss key benefits to corporate compliance, and (4) explore what to consider in embracing AI and keeping its use in compliance.
What Is AI?
AI encompasses a variety of technologies that aim to imitate human thought processes and reasoning. The term “artificial intelligence” was coined in the mid-1950s but has grown in popularity thanks to advances in data storage and algorithms and heavy investment in consumer products. Early AI explored topics like problem solving. In the 1960s, the U.S. Department of Defense began training computers to mimic basic human reasoning. In the 1970s, the Defense Advanced Research Projects Agency (DARPA) applied the early technology to street-mapping projects. DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa, or Cortana were household names.
This work paved the way for the automation and formal reasoning we see in technology today, including decision-making and smart search systems that can be developed to complement and augment human abilities. The creative minds in Hollywood sell action and drama related to artificial intelligence, but we are far from the menacing AI they depict. AI has evolved to provide many benefits across industries such as healthcare, transportation, defense, finance, manufacturing, retail, and more.
The internet of things (IoT), the network of computing devices that connects everyday objects via the internet, enables the sending and receiving of data, and the proliferation of smartphone technology allows the collection of large volumes of valuable information. However, searching, analyzing, and understanding that data for our benefit can take a lot of time and effort. Another challenge is that we can only absorb a finite amount of information, whereas AI can digest and process large amounts of data in a fraction of the time it would take us. AI brings human-level intelligence and structure to electronic data. Extracting meaningful insights is the next obvious technological evolution, so we can focus on what to do with that data.
The simplest way to look at AI is as a newborn child; in the technology world, the equivalent of experiential knowledge is data. A child needs to learn various basic skills before it can hold an educated conversation, and it needs patience and the guidance of a teacher. Similarly, the professional or expert must work hand in hand with AI so it may learn how to function properly. Unlike the child, computers were not created with the cognitive ability to distinguish visual objects, sound, speech, smell, or touch.
However, various technologies are developing attributes that mimic human senses, and AI will be the combination of these and similar technologies. Key definitions include:
- Machine learning automates analytical model building. It uses methods from neural networks, statistics, operations research, and physics to find hidden insights in data without explicitly being programmed for where to look or what to conclude.
- Rule-based machine learning (RBML) is a term in computer science intended to encompass any machine learning method that identifies, learns, or evolves “rules” to store, manipulate, or apply. For example, a rule-based approach might say, “If there is a transaction that is more than $1,000,000, activate a fraud review.” The people who wrote the program would have built that rule into the program (a minimal sketch of such a rule appears after this list).
- A neural network is a type of machine learning made up of interconnected units (like neurons) that process information by responding to external inputs and relaying information between units. The process requires multiple passes at the data to find connections and derive meaning from undefined data.
- Deep learning uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
- Cognitive computing is a subfield of AI that strives for a natural, human-like interaction with machines. Using AI and cognitive computing, the goal is for a machine to simulate human processes through the ability to interpret images and speech, and then speak coherently in response.
- Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze, and understand images, they can capture images or videos in real time and interpret their surroundings.
- Natural language processing (NLP) is the ability of computers to analyze, understand, and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
- A chatbot is a computer program that uses NLP and AI to simulate human conversation and derive a response. Essentially, it’s a machine that can chat or respond to chatter.
- Graphics processing units (GPUs) are key to AI because they provide the heavy computing power required for iterative processing. Training neural networks requires big data plus computing power.
- The internet of things generates massive amounts of data from connected devices, such as Google Home, Ring cameras, and Alexa, although the data are mostly unanalyzed. Automating models with AI will allow us to use more of it.
- Advanced algorithms are being developed and combined in new ways to analyze more data faster and at multiple levels. This intelligent processing is key to identifying and predicting rare events, understanding complex systems, and optimizing unique scenarios.
- Application programming interfaces (APIs) are portable packages of code that make it possible to add AI functionality to existing products and software packages. They can add image recognition capabilities to home security systems and Q&A capabilities that describe data, create captions and headlines, or call out interesting patterns and insights in data.
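To make the rule-based example above concrete, here is a minimal sketch of the kind of hand-written rule a compliance program might encode. The $1,000,000 threshold, field names, and sample transactions are illustrative assumptions, not a real system’s schema.

```python
# Illustrative only: a hand-written compliance rule, as described in the
# rule-based machine learning definition above. The threshold and field
# names are assumptions made for this sketch.

FRAUD_REVIEW_THRESHOLD = 1_000_000  # dollars

def needs_fraud_review(transaction: dict) -> bool:
    """Return True when the fixed rule flags the transaction for review."""
    return transaction.get("amount", 0) > FRAUD_REVIEW_THRESHOLD

transactions = [
    {"id": "T-001", "amount": 250_000},
    {"id": "T-002", "amount": 1_500_000},
]

for tx in transactions:
    action = "activate fraud review" if needs_fraud_review(tx) else "no action"
    print(f"{tx['id']}: {action}")
```

A rule like this is transparent and easy to audit, but it only catches what it was explicitly told to look for; that limitation is what machine learning is meant to complement.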
Like a decision tree, data rules are basic instructions for how to interpret or validate data. However, these rules or algorithms lack the cognitive features to interpret gray areas. Therefore, machine learning or rule-based machine learning must be incorporated so the system can continue to interpret data without being explicitly told what to do. Like the human brain, a neural network provides the connection paths that allow the system to draw on data and form assumptions or conclusions. Deep learning uses multilayered neural networks. Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video; we see this in face-recognition technology. NLP provides the ability to analyze, understand, and generate human language, including speech. It is the combination of all these technologies, including cognitive computing, that endows AI with the capacity to have a human-like conversation with a user.
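As a rough illustration of those interconnected units, the sketch below runs a single forward pass through a tiny neural network. The inputs, layer sizes, and random weights are illustrative assumptions; a real model would learn its weights over many passes through training data.

```python
import numpy as np

# Illustrative only: one forward pass through a tiny two-layer neural network.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Three made-up input features (e.g., transaction size, vendor risk, country risk).
inputs = np.array([0.8, 0.2, 0.5])

# Layer 1: 3 inputs -> 4 hidden units; layer 2: 4 hidden units -> 1 output.
# Random weights stand in for values a trained network would have learned.
w1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
w2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

hidden = sigmoid(w1 @ inputs + b1)   # each unit responds to the external inputs
output = sigmoid(w2 @ hidden + b2)   # units relay information toward the output

print(f"Illustrative score: {output[0]:.2f}")
```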
Stages of AI
As AI continues to evolve, so does the way we view it. The example of a newborn child was given earlier to explain the development of AI; following that same approach, we will discuss its overall stages.
- Stage 1: Reactive. In this stage, the AI follows a set of rules, as when IBM’s Deep Blue beat chess grandmaster Garry Kasparov. Put more simply, it is like an Excel VLOOKUP formula: if cell A1 matches a value in the lookup table, return the corresponding value from column B; if not, return 0. This is a simple reaction, much like a child whose favorite toy is abruptly taken away by a stranger; the child’s reaction would likely be to cry.
- Stage 2: Limited memory. In this stage, the AI can learn from historical data when making decisions, in addition to following reactive or rule-based protocols. The child again has their favorite toy and now sees the same stranger; having learned that there is a risk of losing the toy, the child may clutch it tighter or cry before it is taken (a brief sketch contrasting this stage with Stage 1 appears after this list).
- Stage 3: Theory of mind. This stage, far less developed than the first two, would focus on emotion, reasoning, and the exploration of known and unknown needs. It is best compared to a young child comprehending that options exist, understanding their place in society, and weighing the benefits and consequences of their actions. It is similar to Ivan Pavlov’s work on temperament, conditioning, and involuntary reflexes, but without the human emotional component. However, work on artificial emotional intelligence, which involves facial recognition, could contribute greatly to this stage.
- Stage 4: Self-aware. In this stage, which remains hypothetical, the AI becomes aware of its own existence and can independently make decisions based on its own method of rationalization. If the AI can do this only for a specific task or objective, it may be labeled Artificial Narrow Intelligence (ANI). If it can learn additional topics, making it comparable to human capability, it is referred to as Artificial General Intelligence (AGI). The last is Artificial Superintelligence (ASI), which would be more capable than a human; this is also referred to as the Singularity.
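As referenced in Stage 2, the sketch below contrasts the first two stages: a purely reactive check that applies a fixed rule, and a limited-memory check that also draws on historical data. The thresholds, field names, and history values are illustrative assumptions, not a prescribed method.

```python
# Illustrative only: contrasting Stage 1 (reactive) with Stage 2 (limited memory).

# Stage 1 (reactive): a fixed rule, like the VLOOKUP example above.
# The same input always produces the same response, with no memory of past cases.
def reactive_check(amount: float) -> str:
    return "review" if amount > 1_000_000 else "approve"

# Stage 2 (limited memory): the decision also draws on historical data,
# here a list of past transaction amounts used to spot an unusually large outlier.
def limited_memory_check(amount: float, history: list[float]) -> str:
    if not history:
        return reactive_check(amount)
    average = sum(history) / len(history)
    return "review" if amount > 3 * average else reactive_check(amount)

history = [40_000, 55_000, 62_000]
print(reactive_check(500_000))                 # approve: below the fixed threshold
print(limited_memory_check(500_000, history))  # review: far above the historical average
```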