Agents In Artificial Intelligence - Types And Examples
Artificial intelligence is the study of rational agents that make decisions on behalf of a person, firm, machine or software. Given an agent's past and present perceptual inputs at a particular instant, AI aims to carry out a task with the best possible outcome. An AI system comprises an agent and its environment, and one environment may contain several agents. In this article, we discuss agents in detail.
What is an AI Agent?
Agents in AI are software programs that act autonomously to perform tasks or make decisions on behalf of users. These intelligent agents can gather information, analyze data, and interact with their environment to achieve specific goals. They are designed to mimic human behavior and are used in various applications such as virtual assistants, chatbots, and recommendation systems.
An agent is the part of an AI system that takes actions or decisions based on the information it perceives from the environment. For example, an automated vacuum cleaner uses sensors to detect dirt and obstacles; it builds a model of its environment and decides how to move and clean based on that model.
Structure of Agents in Artificial Intelligence
An AI agent comprises an architecture and an agent program. The architecture is the machinery on which the agent executes its tasks: a device equipped with sensors and effectors (actuators). The agent program is the implementation of the agent function, which maps the percept sequence (the perceptual history of the agent) to a particular action.
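As a rough sketch of this structure (Python, with hypothetical class and method names), the architecture below owns the sensors and actuators, while the agent program maps the percept history to an action:

```python
from typing import Callable, List

# Agent program: a function from the percept sequence (history) to an action.
AgentProgram = Callable[[List[str]], str]

class Architecture:
    """The machinery the agent runs on: sensors to perceive, actuators to act."""
    def sense(self) -> str:
        # Placeholder sensor reading; a real device would query hardware here.
        return "dirt_detected"

    def actuate(self, action: str) -> None:
        # Placeholder actuator; a real device would drive motors, etc.
        print(f"Executing action: {action}")

class Agent:
    """Architecture + agent program, run as a perceive-decide-act loop."""
    def __init__(self, architecture: Architecture, program: AgentProgram):
        self.architecture = architecture
        self.program = program
        self.percept_history: List[str] = []

    def step(self) -> None:
        percept = self.architecture.sense()          # perceive
        self.percept_history.append(percept)         # record perceptual history
        action = self.program(self.percept_history)  # agent function: history -> action
        self.architecture.actuate(action)            # act

# Example wiring: a trivial program that reacts only to the latest percept.
agent = Agent(Architecture(), lambda history: "clean" if history[-1] == "dirt_detected" else "move")
agent.step()  # prints: Executing action: clean
```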
Interaction of Agents with Environment
The agent interacts with the environment using sensors and effectors: sensors perceive the environment, while actuators (effectors) act upon that environment.
This interaction can occur in two different ways:
- Perception: Perception is a passive interaction between the agent and the environment where the environment remains unchanged when the agent takes up information from the environment. This involves gaining information using 'Sensors' from the surroundings without any change to the surroundings.
- Action: Action is an active interaction between the agent and the environment where the environment changes when the action is performed. This involves utilization of an 'Effector' or an 'Actuator' which completes an action but leads to changes in the surroundings while doing so.
For example, in the case of a virtual agent, reading and interpreting the information provided by the user is 'Perception', while replying to the user based on that interpretation is 'Action'.
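A minimal sketch of this distinction, assuming a toy virtual agent in Python (all names and replies are illustrative):

```python
class VirtualAgent:
    def __init__(self):
        self.conversation = []  # environment state that the agent's actions modify

    def perceive(self, user_message: str) -> str:
        # Perception: passive -- the agent only takes in information.
        return user_message.strip().lower()

    def act(self, percept: str) -> str:
        # Action: active -- the reply changes the conversation (the environment).
        reply = "Hello! How can I help?" if "hello" in percept else "Could you rephrase that?"
        self.conversation.append((percept, reply))
        return reply

agent = VirtualAgent()
print(agent.act(agent.perceive("Hello there")))  # -> Hello! How can I help?
```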
Action of Agents In Artificial Intelligence
Agents in Artificial Intelligence act by:
- Mapping of the Percept sequences or Perceptual history to the Actions: Mapping refers to a lookup that pairs a particular percept sequence with an action. An ideal agent can be designed by specifying the action corresponding to each percept sequence or perceptual history (a toy sketch of this mapping follows this list).
- Autonomy: The agent designer shapes the agent's behavior through its built-in knowledge and its experience. Autonomy refers to taking actions based on the agent's own experience. If the system comprises an autonomous intelligent agent, it is able to operate and adapt successfully in a wide range of environments.
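As a toy illustration of the mapping idea, the sketch below uses a hypothetical lookup table for a two-location vacuum world; the table entries and action names are assumptions, not part of the original example:

```python
# Lookup table: each percept sequence (as a tuple) is mapped to one action.
# This is only a toy table for a two-location vacuum world.
ACTION_TABLE = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "move_right",
    (("A", "clean"), ("B", "dirty")): "suck",
    (("A", "clean"), ("B", "clean")): "no_op",
}

percept_history = []

def table_driven_agent(percept):
    """Append the new percept and look up the action for the whole history."""
    percept_history.append(percept)
    return ACTION_TABLE.get(tuple(percept_history), "no_op")

print(table_driven_agent(("A", "clean")))  # -> move_right
print(table_driven_agent(("B", "dirty")))  # -> suck
```

Such a table grows impractically large as the percept history lengthens, which is why the agent types below use more compact decision rules.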
Types of Agents in Artificial Intelligence With Examples
Based on their degree of perceived intelligence and capability, types of agents in artificial intelligence can be divided into:
- Simple Reflex Agents
- Model-Based Agents
- Goal-Based Agents
- Utility-Based Agents
- Learning Agents
Each of these types of agents in AI can have its performance improved so that it generates better actions.
Simple Reflex Agents
- This is a simple type of agent which works on the basis of the current percept only and not on the rest of the percept history.
- The agent function, in this case, is based on condition-action rules, where a condition (or state) is mapped to an action so that the action is taken only when the condition is true.
- The agent function succeeds only if the environment is fully observable. If the environment is partially observable, the agent can enter infinite loops, which it can escape only by randomizing its actions.
- The problems associated with this type include very limited intelligence, no knowledge of the non-perceptual parts of the state, the huge size of the rule table to generate and store, and the inability to adapt to changes in the environment.
- Example: A thermostat in a heating system.
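A minimal sketch of a simple reflex agent, using the thermostat example; the temperature thresholds and action names are illustrative assumptions:

```python
def simple_reflex_thermostat(current_temp_c: float, target_temp_c: float = 21.0) -> str:
    """Condition-action rules over the current percept only; no memory of past readings."""
    if current_temp_c < target_temp_c - 0.5:   # condition: too cold
        return "heater_on"                     # action
    if current_temp_c > target_temp_c + 0.5:   # condition: too warm
        return "heater_off"
    return "no_op"                             # otherwise do nothing

print(simple_reflex_thermostat(18.0))  # -> heater_on
print(simple_reflex_thermostat(23.0))  # -> heater_off
```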
Model-Based Agents
- A model-based agent also utilizes condition-action rules: it works by finding a rule whose condition matches the current situation.
- Unlike the simple reflex agent, it can handle partially observable environments by tracking the situation and using a model of the world.
- It consists of two important components, which are the Model and the Internal State.
- The Model provides knowledge and understanding of how things happen in the surroundings, so that the current situation can be assessed and a condition can be matched. The agent performs its actions based on this model.
- The Internal State uses the perceptual history to represent the parts of the current situation that cannot be observed directly. The agent keeps track of this internal state, adjusting it with each percept, and stores it internally so that it maintains a structure describing the unseen world.
- The state of the agent can be updated by gaining information about how the world evolves and how the agent's action affects the world.
- Example: A vacuum cleaner that uses sensors to detect dirt and obstacles and moves and cleans based on a model.
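A minimal sketch of a model-based agent, using the vacuum cleaner example; the internal state, rules and action names are illustrative assumptions:

```python
class ModelBasedVacuum:
    """Tracks an internal state of which rooms are dirty, even when it cannot see them."""
    def __init__(self, rooms):
        # Internal state: believed cleanliness of every room (the "unseen world").
        self.believed_dirty = {room: True for room in rooms}
        self.location = rooms[0]

    def update_state(self, percept):
        # Model: how new percepts (and the agent's actions) change the believed world.
        location, status = percept
        self.location = location
        self.believed_dirty[location] = (status == "dirty")

    def choose_action(self, percept):
        self.update_state(percept)
        if self.believed_dirty[self.location]:
            self.believed_dirty[self.location] = False  # cleaning makes the room clean
            return "suck"
        # Head to a room still believed dirty, if any remain.
        for room, dirty in self.believed_dirty.items():
            if dirty:
                return f"move_to_{room}"
        return "no_op"

agent = ModelBasedVacuum(["A", "B"])
print(agent.choose_action(("A", "dirty")))  # -> suck
print(agent.choose_action(("A", "clean")))  # -> move_to_B
```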
Goal-Based Agents
- This type of agent takes decisions on the basis of its goal or desirable situations, so that it can choose actions that achieve the required goal.
- It is an improvement over the model-based agent, because information about the goal is also included. Knowing only the current state is not always sufficient; knowledge of the goal is a more beneficial approach.
- The aim is to reduce the distance between the current situation and the goal, so that the best possible course of action can be chosen from multiple possibilities. Once the best way is found, the decision is represented explicitly, which makes the agent more flexible.
- It considers different situations, through searching and planning, by examining long sequences of possible actions to confirm that it can achieve the goal. This makes the agent proactive.
- It can easily change its behavior if required.
- Example: A chess-playing AI whose goal is winning the game.
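A minimal sketch of the goal-based idea: a toy breadth-first search over a hypothetical state space until a sequence of moves reaches the goal (the graph and state names are assumptions):

```python
from collections import deque

# Toy state space: which states can be reached from each state.
TRANSITIONS = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["goal"],
    "c": ["goal"],
}

def goal_based_plan(start: str, goal: str):
    """Search (BFS) over sequences of actions until one reaches the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # explicit plan from start to goal
        for nxt in TRANSITIONS.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(goal_based_plan("start", "goal"))  # -> ['start', 'b', 'goal']
```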
Utility-Based Agents
- Utility-based agents are built around a utility function and are used when the best action or decision needs to be chosen from multiple alternatives.
- This type is an improvement over the goal-based agent, as it considers not only the goal but also the way the goal can be achieved, so that it can be reached in a quicker, safer or cheaper way.
- The extra component, the utility, provides a measure of success at a particular state, and this is what makes the utility-based agent different.
- It takes the agent's "happiness" into account: the utility expresses how desirable a state is for the agent, and the action with maximum utility is chosen. This degree of happiness is calculated by mapping a state onto a real number.
- Mapping a state onto a real number with the help of the utility function measures how efficiently an action achieves the goal.
- Example: A delivery drone that delivers packages to customers efficiently while optimizing factors like delivery time, energy consumption, and customer satisfaction.
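A minimal sketch of a utility-based choice, using the delivery drone example; the candidate routes, weights and utility function are illustrative assumptions:

```python
# Candidate routes with estimated delivery time (minutes) and energy use (Wh).
routes = {
    "coastal":  {"time_min": 30, "energy_wh": 120},
    "direct":   {"time_min": 22, "energy_wh": 150},
    "low_wind": {"time_min": 26, "energy_wh": 100},
}

def utility(route: dict, w_time: float = 1.0, w_energy: float = 0.2) -> float:
    """Map a state (the outcome of taking a route) to a real number; higher is better."""
    return -(w_time * route["time_min"] + w_energy * route["energy_wh"])

best = max(routes, key=lambda name: utility(routes[name]))
print(best)  # -> the route with the highest utility under these weights
```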
Learning Agents
- A learning agent, as the name suggests, has the capability to learn from past experience and takes actions or decisions based on what it has learned. Example: a spam filter that learns from user feedback.
- It gains basic knowledge from the past and uses that learning to act and adapt automatically.
- It comprises four conceptual components, which are given as follows:
- Learning element: It makes improvements by learning from the environment.
- Critic: Critic provides feedback to the learning agent giving the performance measure of the agent with respect to the fixed performance standard.
- Performance element: It selects the external action.
- Problem generator: This suggests actions that lead to new and informative experiences.
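A minimal sketch of a learning agent, using the spam-filter example and the four components above; the keyword scoring and method names are illustrative assumptions:

```python
class LearningSpamFilter:
    """Toy learning agent: keyword weights are adjusted from user feedback."""
    def __init__(self):
        self.weights = {}  # learned knowledge: word -> spamminess score

    def performance_element(self, message: str) -> str:
        # Selects the external action (classify the message).
        score = sum(self.weights.get(w, 0.0) for w in message.lower().split())
        return "spam" if score > 0 else "not_spam"

    def critic(self, predicted: str, user_label: str) -> float:
        # Compares behaviour against the performance standard (the user's label).
        return 0.0 if predicted == user_label else 1.0

    def learning_element(self, message: str, user_label: str, error: float) -> None:
        # Improves the weights when the critic reports an error.
        if error > 0:
            delta = 1.0 if user_label == "spam" else -1.0
            for w in message.lower().split():
                self.weights[w] = self.weights.get(w, 0.0) + delta

    def problem_generator(self) -> str:
        # Suggests an exploratory experience, e.g. ask the user to label an uncertain message.
        return "ask_user_to_label_a_borderline_message"

    def step(self, message: str, user_label: str) -> str:
        predicted = self.performance_element(message)
        error = self.critic(predicted, user_label)
        self.learning_element(message, user_label, error)
        return predicted

agent = LearningSpamFilter()
agent.step("win a free prize now", "spam")       # wrong at first, weights get updated
print(agent.performance_element("free prize"))   # -> spam (after learning)
```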
Summing Up
This article brings the following points to the attention of the readers:
- Artificial intelligence refers to the study of rational agents that make decisions related to a person, firm, machine or software. An AI system comprises an agent and its environment.
- An agent is the part of an AI system that takes actions or decisions based on the information it perceives from the environment.
- Agents interact with the environment using sensors and actuators (effectors) in two different ways: Perception and Action.
- Perception is a passive interaction between the agent and the environment where the environment remains unchanged when the agent takes up information from the environment while Action is an active interaction between them where the environment changes when the action is performed.
- Agents in AI act by mapping percept sequences (the perceptual history) to actions and through autonomy.
- Based on their degree of perceived intelligence and capability, agents can be divided into five types: Simple Reflex Agents, Model-Based Agents, Goal-Based Agents, Utility-Based Agents and Learning Agents.