What is AI: Everything You Need to Know

Quick answer:

Put simply, at the cost of some precision: AI is the study of automating intelligent decision-making.

In order to train and test the methodologies they come up with, scientists need some sort of playground to experiment in. A playground is an environment inspired by reality, and agents are the decision-makers within it.

Different Definitions of AI Across the Years

Artificial intelligence is one of the most controversial yet trendy topics nowadays. Opinions about the evolution of AI run the whole range: some have high hopes, some are intimidated by this type of technology, and some even fear it. Researchers have trouble agreeing on a definition for AI; every scientist defines it with regard to their own field and point of view.

In this article, I’m going to brief you on what AI is in a high-level and easy-to-understand way.

First, let’s start with some definitions.

  • Charniak and McDermott in Introduction to Artificial Intelligence
  • Shapiro in Artificial Intelligence
  • Rich and Knight in Artificial Intelligence
  • Russell and Norvig in Artificial Intelligence: A Modern Approach

What does it take to be an (intelligent) agent?

According to Russell and Norvig, agents are entities that can be viewed as perceiving and acting upon their environment. Agents use sensors to perceive, and effectors to act (e.g. animals, robots, thermostats, etc.).
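
The perceive-then-act loop can be sketched in a few lines of Python. The thermostat below is a toy illustration; the class and method names are assumptions made for this example, not part of any library.

```python
# A minimal sketch of an agent's sense/act cycle, using the thermostat
# example above. All names here are illustrative.
class Thermostat:
    def __init__(self, target=21.0):
        self.target = target  # desired temperature in degrees Celsius
        self.current = None

    def perceive(self, temperature):
        # Sensor reading: the current room temperature.
        self.current = temperature

    def act(self):
        # Effector output: a heating command chosen from the percept.
        return "heat_on" if self.current < self.target else "heat_off"

agent = Thermostat(target=21.0)
agent.perceive(18.5)
print(agent.act())  # heat_on
```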

Rational Agents

Rational agents do “the right thing”. Here, “right” means whatever causes the agent to be “most successful”. Consequently, an agent needs a measure of success.

The Performance Measure

  • Define a success criterion — the performance measure.
  • A generic criterion (e.g., the agent’s own opinion of its success) will be imprecise, unreliable, and probably unattainable.
  • For precision, the measure of success will be defined in terms of a particular task the agent is supposed to perform.
  • Thus, different criteria for different agents.

Example: Coffee Delivery Agent

An agent operates in a suite of offices: it picks up coffee from the kitchenette and delivers it to offices.

  • What is a suitable measure of success?
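
One plausible answer, sketched in Python: reward correct deliveries while penalizing wrong rooms and wasted time. The function and its weights are assumptions made for illustration, not a standard definition.

```python
# A hedged sketch of a task-specific performance measure for the
# coffee-delivery agent. The weights (10, 5, 1) are arbitrary choices.
def performance(correct_deliveries, wrong_deliveries, steps_taken):
    # Reward correct deliveries; penalize wrong rooms and elapsed steps.
    return 10 * correct_deliveries - 5 * wrong_deliveries - steps_taken

print(performance(correct_deliveries=3, wrong_deliveries=1, steps_taken=12))  # 13
```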

Rationality vs. Success
Note that a rational agent is not necessarily a successful one. Rationality depends not only on the performance measure but also on:

  1. Agent’s sensory capabilities (what it can sense). Coffee agents that detect obstacles visually vs. those that use collision detection.
  2. Agent’s actuator capabilities (what it can do). Coffee agents that can heat coffee with a built-in coil vs. those that cannot.
  3. Agent’s knowledge (what it knows). Omniscient agents, uninformed agents, misinformed agents.

Ideal Rational Agents

According to Russell and Norvig, for each possible percept sequence, an ideal rational agent does whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Intelligent agents should also be autonomous. In other words, an agent’s behavior should not be based entirely on built-in knowledge, but also on its own experience.

Provide the agent with enough built-in knowledge to get started, and a learning mechanism to allow it to derive knowledge from percepts (and other knowledge).

The Turing Test

This is actually the main problem of AI: how would we know that we’ve succeeded? If we had a precise-enough definition of intelligence, AI would not be a problem, or at least not as intriguing a one as it is.

Turing proposed a way out: replacing the question “Can machines think?” with a less controversial one.

Introducing The Imitation Game

The Imitation Game is played with three people: a man A, a woman B, and an interrogator C who may be of either sex.

The interrogator is separated from the man and woman. The interrogator’s goal is to guess correctly which of the other two is the man and which is the woman. The interrogator deals with them by labeling one X and the other Y.

The interrogator reaches their goal by asking both of them questions such as, “X, how much do you weigh?” If X is A, then A must answer, but retains the right to lie and give misleading answers. In fact, A tries to deceive the interrogator into believing that he is the woman.

As such, A’s goal is to make the interrogator mislabel them, whereas B’s goal is to help the interrogator guess correctly. B has the option to lie or tell the truth, but there is no point in lying because she is on the interrogator’s side. B can tell the interrogator things like “Believe me, I’m the woman” to help them, but A can say so, too.

In order to eliminate the voice tones that might give decisive clues, answers are typed. At the end of the game, the interrogator says “X is A & Y is B” or vice versa.

Now, suppose a machine takes the part of A in this game. Will the interrogator decide wrongly as often when the game is played like this as when it is played between a man and a woman? These questions replace our original one: “Can machines think?”

A machine/program is said to have passed the Turing test if it can fool the interrogator as often as the man does. By defining the test as such, Turing has set an aggressive agenda for AI research.

Passing the Turing test requires solving most of the major AI problems (natural language competence, reasoning, planning: the so-called “AI-complete” problems). Though AI researchers rarely target the Turing test explicitly, it remains a landmark measure of success in AI.

Structure of Intelligent Agents

To design an agent, we first have to specify four things:

  1. Performance measures
  2. Environment
  3. Actuators
  4. Sensors

Agent = Agent program + Architecture.

  • The agent program maps percept sequences to actions.
  • Architecture is whatever the agent program runs on.

The architecture has three functions:

  1. Maps sensory input to data structures that are made available to the program
  2. Runs the program
  3. Maps program outputs onto signals to the effectors
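
The agent-program/architecture split above can be sketched as two functions. Everything here, from the trivial policy to the effector signal codes, is an assumption made for illustration.

```python
# A minimal sketch of the agent program and the three architecture
# functions described above. All names are illustrative.
def agent_program(percept_sequence):
    # Maps the percept sequence to an action (here: a trivial policy
    # that stops once the most recent percept is "goal").
    return "stop" if percept_sequence[-1] == "goal" else "move_forward"

def architecture(raw_sensor_values, program):
    # 1. Map sensory input to a data structure for the program.
    percepts = [v.strip().lower() for v in raw_sensor_values]
    # 2. Run the program.
    action = program(percepts)
    # 3. Map the program's output onto a signal for the effectors.
    signals = {"move_forward": 0x01, "stop": 0x00}
    return signals[action]

print(architecture(["Corridor", "Corridor", "Goal"], agent_program))  # 0
```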

Ex: Coffee-Delivery Agent

  • Performance measure: Delivering coffee to the correct room.
  • Environment: Suite of offices.
  • Actions: Translate, rotate.
  • Sensations: Location (where am I now?), orientation (35° North-East?).


Assume the suite of offices is a grid of rooms, where each room is a cell. Each cell/room has a door to the neighboring cell/room, if there is one.


Suppose the agent is in R1 and needs to deliver coffee to R8. What does it mean to deliver coffee to the right room? The goal is to put the world in a state in which the agent is in R8. We can therefore think of the goal as a set of states.


There are an infinite number of states in the world. We use abstraction to ignore unneeded information—for example, neither the exact location nor the exact orientation is important to us in the coffee delivery problem.

What is relevant to us is the grid cell the agent occupies and whether the agent is facing North, South, East, or West. A suitable state representation for this problem is a tuple (cell, direction) such that the cell belongs to {1, 2, 3, …, 16} and the direction belongs to {N, S, E, W}.


The actions we choose should get our agent wherever we want it to go while keeping the set of actions small, for abstraction and simplicity’s sake. For example, our actions could be:

  • Move forward
  • Rotate Right 90 degrees
  • Rotate Left 90 degrees

These 3 actions are sufficient to get the agent anywhere on the grid.
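
These three actions can be sketched as a transition function over (cell, direction) states. The 4×4 grid numbered 1 to 16 row by row is an assumption, and interior walls/doors are ignored for simplicity.

```python
# A toy model of the coffee-delivery state space: states are
# (cell, direction) tuples, with cells 1..16 on a 4x4 grid.
DIRECTIONS = ["N", "E", "S", "W"]
MOVES = {"N": -4, "E": 1, "S": 4, "W": -1}  # cell offsets on the grid

def step(state, action):
    cell, d = state
    i = DIRECTIONS.index(d)
    if action == "rotate_right":
        return (cell, DIRECTIONS[(i + 1) % 4])
    if action == "rotate_left":
        return (cell, DIRECTIONS[(i - 1) % 4])
    if action == "move_forward":
        nxt = cell + MOVES[d]
        col = (cell - 1) % 4
        # Stay put when the move would leave the grid.
        if (d == "E" and col == 3) or (d == "W" and col == 0):
            return state
        if not 1 <= nxt <= 16:
            return state
        return (nxt, d)
    raise ValueError(action)

print(step((1, "E"), "move_forward"))  # (2, 'E')
print(step((1, "N"), "move_forward"))  # (1, 'N') -- blocked at the edge
```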


Why didn’t we choose the location to be the room number rather than the cell number? Because if we did, we wouldn’t be able to tell which cell within a room the agent is in.

For example: if the agent is in R2, and R2 consists of 4 cells, then with this one piece of information we can’t be 100% sure which cell it’s in. Consequently, we can’t be sure of the result of an action performed in that state.

The Search-Execute Model

In order to get to R8, what should the agent do first? Well, it’s often convenient for the agent to plan ahead. Given its knowledge of the map of the suite and of the effects of its actions, the agent may formulate a plan to get from R1 to R8 before even moving. The plan is in the form of a sequence of actions. The problem of formulating a correct plan is known as search.

Problem Types

Depending on the agent’s knowledge and the environment, search problems may be divided into four types:

  1. Single-state
  2. Multiple-State
  3. Contingency
  4. Exploration

Contingency and exploration problems require interleaving search and execution. We’ll discuss these under the topics of planning and learning in a later article.

Single-State Problems
The agent’s sensors provide enough information for the agent to tell which state it’s in, and it knows the effects of all its actions.

Multiple-State Problem
At any time, the agent isn’t certain about which state in a given set of states it’s in.

For example:

  1. Perfect knowledge of the effects of actions, but insufficient state-sensing capabilities. Coffee-agent with only cell information or only direction information (or neither).
  2. Alternatively, the agent may have perfect sensors but uncertainty about the effects of its actions. Sometimes translation moves the agent backward.
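
The first case can be sketched as a belief state: the set of states the agent might be in, which actions can shrink over time. The one-dimensional corridor of four cells below is a toy assumption, not the coffee suite itself.

```python
# A sketch of the multiple-state view: with no sensors, the agent
# tracks a *set* of possible states and updates it per action.
def step(cell, action):
    # Hypothetical corridor of cells 1..4: "right" moves one cell
    # right when possible; at the wall the agent stays put.
    return min(cell + 1, 4) if action == "right" else max(cell - 1, 1)

def update_belief(belief, action):
    # The new belief is the set of all possible successor states.
    return {step(s, action) for s in belief}

belief = {1, 2, 3, 4}                    # no sensing: could be anywhere
belief = update_belief(belief, "right")  # {2, 3, 4}
belief = update_belief(belief, "right")  # {3, 4}
belief = update_belief(belief, "right")  # {4}
print(belief)  # {4}
```

Note how acting alone (bumping into the right-hand wall) localizes the agent even without sensors.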

A Search Problem

A search problem is defined as a 5-tuple:

  1. A set of operators, or actions, available to the agent.
  2. An initial state.
  3. A state-space: the set of states reachable from the initial state by any sequence of actions.
  4. A goal test, which the agent applies to a state to determine if it’s a goal state.
  5. A path cost function: a function that assigns a cost to a sequence of actions. Typically, it’s the sum of the costs of individual actions in the sequence.

A solution

A search algorithm takes a problem as input and returns a solution as output in the form of a sequence of actions from the initial state to a state satisfying the goal test. A solution with a smaller path cost is preferred.
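
As a concrete sketch, breadth-first search over the coffee grid returns a minimal-cost plan under unit action costs. The 4×4 grid with no interior walls is an assumption made for illustration, not the article’s actual suite layout.

```python
from collections import deque

# States are (cell, direction) on a 4x4 grid of cells 1..16.
DIRECTIONS = ["N", "E", "S", "W"]
MOVES = {"N": -4, "E": 1, "S": 4, "W": -1}
ACTIONS = ["move_forward", "rotate_left", "rotate_right"]

def step(state, action):
    cell, d = state
    i = DIRECTIONS.index(d)
    if action == "rotate_right":
        return (cell, DIRECTIONS[(i + 1) % 4])
    if action == "rotate_left":
        return (cell, DIRECTIONS[(i - 1) % 4])
    nxt = cell + MOVES[d]
    col = (cell - 1) % 4
    # Moving off the grid leaves the state unchanged.
    if (d == "E" and col == 3) or (d == "W" and col == 0) or not 1 <= nxt <= 16:
        return state
    return (nxt, d)

def bfs(initial, goal_test):
    # Breadth-first search: returns the shortest action sequence,
    # i.e. the minimal path cost under unit action costs.
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for a in ACTIONS:
            s2 = step(state, a)
            if s2 not in visited:
                visited.add(s2)
                frontier.append((s2, plan + [a]))
    return None  # no solution

# Deliver coffee: reach cell 8 from cell 1, any final orientation.
plan = bfs((1, "E"), lambda s: s[0] == 8)
print(len(plan))  # 5
```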



A lot of what I wrote here I thankfully learned from my AI professor, Haythem Ismail. All credit goes to him and his lectures.
