Rational agent
From Wikipedia, the free encyclopedia
A rational agent is an agent that takes actions based on information from, and knowledge about, its environment. It acts so as to maximize its chances of success, using commonly accepted rules of logical inference.
The action a rational agent takes depends on:
- the agent's past experiences
- the agent's knowledge of its environment
- the actions, duties and obligations available to the agent
- the estimated benefits and the chances of success of the actions.
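The factors above can be combined in the standard decision-theoretic way: weigh each action's possible outcomes by their probability and pick the action with the highest expected benefit. The following is a minimal sketch of that idea; the umbrella scenario, the outcome names, and the utility numbers are invented for illustration.

```python
def choose_action(actions, outcomes, utility):
    """Pick the action with the highest expected utility.

    actions:  list of available actions
    outcomes: maps an action to a list of (probability, result) pairs
    utility:  maps a result to a numeric benefit
    """
    def expected_utility(action):
        return sum(p * utility(result) for p, result in outcomes[action])
    return max(actions, key=expected_utility)

# Hypothetical example: deciding whether to carry an umbrella
# when there is a 30% chance of rain.
actions = ["take umbrella", "leave umbrella"]
outcomes = {
    "take umbrella":  [(0.3, "dry but encumbered"), (0.7, "dry")],
    "leave umbrella": [(0.3, "wet"), (0.7, "dry and unencumbered")],
}
utility = {"dry": 5, "dry but encumbered": 4,
           "wet": -10, "dry and unencumbered": 6}.get

print(choose_action(actions, outcomes, utility))  # take umbrella
```

Here the expected utility of taking the umbrella (0.3·4 + 0.7·5 = 4.7) exceeds that of leaving it (0.3·(−10) + 0.7·6 = 1.2), so the rational choice is to take it.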
Rational agents are studied in the fields of cognitive science, ethics, the philosophy of practical reason, and in particular, in artificial intelligence.
A cognitive agent with sophisticated logical reasoning capacities and self-consciousness, but without emotional preferences, can be considered a rational intelligent agent. Many software agents are therefore designed as rational agents.
In game theory and classical economics, the actors (human agents) are assumed to be rational.
An example of a rational agent in the field of software is the BDI (belief–desire–intention) software agent; in socio-cognitive engineering, it is the abstract IPK (information, preference, knowledge) agent called a personoid.
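A BDI agent separates what it holds true (beliefs), what it wants (desires), and what it has committed to doing (intentions). The following is a minimal sketch of that control loop, not any particular BDI framework; the class, the `_possible` belief convention, and the toy goals are all invented for illustration.

```python
class BDIAgent:
    """Toy belief-desire-intention loop (illustrative, not a real framework)."""

    def __init__(self):
        self.beliefs = {}      # what the agent holds true about the world
        self.desires = []      # goals it would like to achieve
        self.intentions = []   # goals it has committed to pursuing

    def perceive(self, percept):
        # Belief revision: fold new information into the belief base.
        self.beliefs.update(percept)

    def deliberate(self):
        # Commit only to desires the agent believes are achievable.
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(d + "_possible", False)]

    def act(self):
        # Execute (and drop) the first committed intention, if any.
        return self.intentions.pop(0) if self.intentions else None

agent = BDIAgent()
agent.desires = ["recharge", "explore"]
agent.perceive({"recharge_possible": True, "explore_possible": False})
agent.deliberate()
print(agent.act())  # recharge
```

The separation matters because commitment is what distinguishes BDI agents from plain utility maximizers: an intention persists until achieved or believed impossible, rather than being recomputed at every step.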
See also

Software
- software agent
- intelligent agent
- belief revision
Further reading
- Russell, Stuart J. & Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-790395-2, <http://aima.cs.berkeley.edu/>

