Norvig's Agent Definition
There’s no consensus today on what counts as an AI agent. The term is used to describe everything from chatbots to for loops.
In 1995, Stuart J. Russell and Peter Norvig gave an academic definition and a taxonomy in Artificial Intelligence: A Modern Approach.
“Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.”
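That definition reduces to a percept-to-action loop. A minimal sketch, using a thermostat as the agent and names that are illustrative rather than from the book:

```python
class Thermostat:
    """An agent: perceives a temperature, acts on its environment."""
    def act(self, percept):
        # Sensors deliver the percept; the agent returns an action
        # for its actuators.
        return "heat_on" if percept < 20 else "heat_off"

def run(agent, percepts):
    """The sense-act loop: map each percept to an action."""
    return [agent.act(p) for p in percepts]

actions = run(Thermostat(), [18, 22, 19])
# actions == ["heat_on", "heat_off", "heat_on"]
```

Everything in the taxonomy below is a refinement of how `act` maps percepts to actions.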
They classify agents into five categories.
Simple Reflex Agents: These agents operate on condition-action rules, selecting an action based only on the current percept. They ignore the history of their interactions with the environment and have no concept of the future; their decision-making is entirely present-focused.
Model-Based Reflex Agents: These agents improve upon simple reflex agents by maintaining internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. They use a model of the world to choose actions in a way that accounts for the state of the environment and how the environment changes in response to their actions.
Goal-Based Agents: These agents further expand upon the capabilities of model-based agents by having the ability to set and strive for goals. They consider the future consequences of their actions and choose the ones that align with their goals. This often involves searching and planning, as they need to predict the outcomes of their actions to achieve their goals.
Utility-Based Agents: Unlike goal-based agents, which have a binary view of success and failure, utility-based agents can measure the success of their actions on a continuum using a utility function. This lets them rank states by preference (utility) and strive not only to achieve goals but to maximize expected utility.
Learning Agents: These are the most advanced type of agent in Russell and Norvig's taxonomy. Learning agents improve their performance over time based on experience. They have a learning element that adapts by observing what happens in the environment and a performance element that makes decisions based on learned and innate knowledge; they can also contain components that improve the learning element itself.
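The difference between the first two categories shows up clearly in a tiny two-square "vacuum world" (the setting appears in the book; this particular code is only a sketch). The simple reflex agent reacts to the current percept alone, so it wanders forever; the model-based agent remembers what it has seen and can stop:

```python
# Percept is (location, status): location is "A" or "B",
# status is "Dirty" or "Clean".

def simple_reflex_agent(percept):
    """Condition-action rules on the current percept only."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

class ModelBasedReflexAgent:
    """Adds internal state: tracks which squares are known clean
    and stops (NoOp) once both are."""
    def __init__(self):
        self.known_clean = set()

    def act(self, percept):
        location, status = percept
        if status == "Dirty":
            self.known_clean.discard(location)  # update the world model
            return "Suck"
        self.known_clean.add(location)
        if self.known_clean == {"A", "B"}:
            return "NoOp"  # state the reflex agent can never reach
        return "Right" if location == "A" else "Left"
```

A goal-based agent would plan a route to a clean-everything goal, and a utility-based agent would weigh, say, moves taken against dirt removed via a utility function; both build on the same internal model.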