Agent Intention
An Agent Intention is an Agent Deliberative Mental State that represents what a Rational Agent has chosen to do.
- AKA: Purpose, Desired Result.
- Context:
- It can (typically) be based on an Agent Belief System.
- It can (typically) involve an Agent Planning Process.
- It can (typically) be associated with an Agent Desire.
- It can range from being a Strong Intention to being a Weak Intention, based on Agent Commitment.
- It can be associated with an Agent Intention Category, such as a moral intention.
- It can range from being a Human Intention to being a Software Agent Intention.
- It can be expressed in an Intention Statement.
- …
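The context bullets above can be illustrated with a small sketch that models an intention as a desire the agent has committed to, with a commitment level distinguishing strong from weak intentions. All names and the 0.5 threshold are illustrative assumptions, not part of any standard formalism.

```python
from dataclasses import dataclass

@dataclass
class Intention:
    """A toy Agent Intention: a desired outcome plus a degree of commitment."""
    desire: str        # the outcome the agent wants (an Agent Desire)
    commitment: float  # degree of Agent Commitment, in [0.0, 1.0]

    @property
    def strength(self) -> str:
        # Threshold of 0.5 is an arbitrary illustrative choice.
        return "strong" if self.commitment >= 0.5 else "weak"
```

For example, `Intention("finish the report", 0.9).strength` yields `"strong"`, while a barely held intention with commitment `0.2` is classified as `"weak"`.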
- Example(s):
- BDI Agent Intention, held by a Belief-Desire-Intention (BDI) Agent System.
- a Moral Intention, such as a good intention or a bad intention (e.g. criminal intent).
- a Personal Project (with project goals).
- …
- Counter-Example(s):
- an Agent Goal;
- an Agent Belief;
- an Unconscious Behavior;
- an Organizational Goal.
- See: Agent Preference, Agent Desire, Mental State, Rational Argument, Planning, Mind, Future.
References
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Intention Retrieved:2020-2-7.
2019
- (Wikipedia, 2019) ⇒ https://www.wikiwand.com/en/Belief%E2%80%93desire%E2%80%93intention_software_model#/BDI_agents Retrieved:2019-8-10.
- A BDI agent is a particular type of bounded rational software agent, imbued with particular mental attitudes, viz: Beliefs, Desires and Intentions (BDI) (...)
This section defines the idealized architectural components of a BDI system.
- Beliefs: Beliefs represent the informational state of the agent, in other words its beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Using the term belief rather than knowledge recognizes that what an agent believes may not necessarily be true (and in fact may change in the future).
- Beliefset: Beliefs are stored in a database (sometimes called a belief base or a belief set), although that is an implementation decision.
- Desires: Desires represent the motivational state of the agent. They represent objectives or situations that the agent would like to accomplish or bring about. Examples of desires might be: find the best price, go to the party or become rich.
- Goals: A goal is a desire that has been adopted for active pursuit by the agent. Usage of the term goals adds the further restriction that the set of active desires must be consistent. For example, one should not have concurrent goals to go to a party and to stay at home – even though they could both be desirable.
- Intentions: Intentions represent the deliberative state of the agent – what the agent has chosen to do. Intentions are desires to which the agent has to some extent committed. In implemented systems, this means the agent has begun executing a plan.
- Plans: Plans are sequences of actions (recipes or knowledge areas) that an agent can perform to achieve one or more of its intentions. Plans may include other plans: my plan to go for a drive may include a plan to find my car keys. This reflects that in Bratman's model, plans are initially only partially conceived, with details being filled in as they progress.
- Events: These are triggers for reactive activity by the agent. An event may update beliefs, trigger plans or modify goals. Events may be generated externally and received by sensors or integrated systems. Additionally, events may be generated internally to trigger decoupled updates or plans of activity.
- BDI was also extended with an obligations component, giving rise to the BOID agent architecture[1] to incorporate obligations, norms and commitments of agents that act within a social environment.
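The idealized BDI components quoted above can be sketched as a minimal deliberation loop. This is an illustrative toy under assumed names (`Plan`, `BDIAgent`, `deliberate`, `execute`), not the API of any actual BDI framework: the agent adopts as intentions those desires for which its plan library holds a plan, then commits by executing plan steps.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Plan:
    """A recipe: a sequence of actions intended to achieve one goal."""
    goal: str
    steps: List[Callable[[], None]]

@dataclass
class BDIAgent:
    beliefs: set = field(default_factory=set)              # informational state
    desires: set = field(default_factory=set)              # motivational state
    intentions: List[Plan] = field(default_factory=list)   # deliberative state

    def deliberate(self, plan_library: Dict[str, Plan]) -> None:
        """Adopt as an intention each desire for which a plan is available."""
        already_intended = {p.goal for p in self.intentions}
        for desire in sorted(self.desires):
            plan = plan_library.get(desire)
            if plan is not None and desire not in already_intended:
                self.intentions.append(plan)

    def execute(self) -> List[str]:
        """Commit: run the next step of each intended plan, dropping
        intentions whose plans have completed."""
        executed = []
        for plan in self.intentions:
            if plan.steps:
                plan.steps.pop(0)()
                executed.append(plan.goal)
        self.intentions = [p for p in self.intentions if p.steps]
        return executed
```

A usage sketch: an agent desiring `"find_keys"` with a one-step plan in its library adopts that plan as an intention after `deliberate()`, and the intention is discharged once `execute()` runs the final step.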
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/goal Retrieved:2015-12-24.
- A goal is a desired result that a person or a system envisions, plans and commits to achieve: a personal or organizational desired end-point in some sort of assumed development. Many people endeavor to reach goals within a finite time by setting deadlines.
It is roughly similar to purpose or aim, the anticipated result which guides reaction, or an end, which is an object, either a physical object or an abstract object, that has intrinsic value.
2006
- (Gollwitzer & Sheeran, 2006) ⇒ Peter M. Gollwitzer, and Paschal Sheeran. (2006). “Implementation Intentions and Goal Achievement: A Meta-analysis of Effects and Processes.” In: Advances in Experimental Social Psychology, 38.
- QUOTE: Holding a strong goal intention (“I intend to reach Z!”) does not guarantee goal achievement, because people may fail to deal effectively with self-regulatory problems during goal striving. This review analyzes whether realization of goal intentions is facilitated by forming an implementation intention that spells out the when, where, and how of goal striving in advance (“If situation Y is encountered, then I will initiate goal‐directed behavior X!”).
2001
- (Frith, 2001) ⇒ Uta Frith. (2001). “Mind blindness and the brain in autism.” Neuron 32, no. 6. http://dx.doi.org/10.1016/S0896-6273(01)00552-9
- ABSTRACT: Experimental evidence shows that the inability to attribute mental states, such as desires and beliefs, to self and others (mentalizing) explains the social and communication impairments of individuals with autism. Brain imaging studies in normal volunteers highlight a circumscribed network that is active during mentalizing and links medial prefrontal regions with posterior superior temporal sulcus and temporal poles. The brain abnormality that results in mentalizing failure in autism may involve weak connections between components of this system.
1993
- (Bratman, 1993) ⇒ Michael E. Bratman. (1993). “Shared Intention.” Ethics, 104(1). http://www.jstor.org/stable/2381695
1987
- (Cohen & Levesque, 1987) ⇒ Philip R. Cohen, and Hector J. Levesque. (1987). “Intention = Choice + Commitment.” In: Proceedings of the Sixth National Conference on Artificial Intelligence. ISBN:0-934613-42-7
- QUOTE: This paper provides a logical analysis of the concept of intention as composed of two more basic concepts, choice (or goal) and commitment. By making explicit the conditions under which an agent can drop her goals, i.e., by specifying how the agent is committed to her goals, the formalism provides analyses for Bratman's three characteristic functional roles played by intentions (Bratman, 1986), and shows how agents can avoid intending all the foreseen side-effects of what they actually intend.
1987b
- (Bratman, 1987) ⇒ Michael E. Bratman. (1987). “Intention, Plans, and Practical Reason.” Harvard University Press
- BOOK OVERVIEW: What happens to our conception of mind and rational agency when we take seriously future-directed intentions and plans and their roles as inputs into further practical reasoning? The author's initial efforts in responding to this question resulted in a series of papers that he wrote during the early 1980s. In this book, Bratman develops further some of the main themes of these essays and also explores a variety of related ideas and issues. He develops a planning theory of intention. Intentions are treated as elements of partial plans of action. These plans play basic roles in practical reasoning, roles that support the organization of our activities over time and socially. Bratman explores the impact of this approach on a wide range of issues, including the relation between intention and intentional action, and the distinction between intended and expected effects of what one intends.
1957
- (Anscombe, 1957) ⇒ G. E. M. Anscombe. (1957). “Intention.” ISBN:978-0-674-00399-6
- http://en.wikipedia.org/wiki/Intention_%28book%29. Anscombe argues that the concept of intention is central to our understanding of ourselves as rational agents. The intentions with which we act are identified by the reasons we give in answer to questions concerning why we perform actions.
1957b
- (Grice, 1957) ⇒ H. Paul Grice. (1957). “Meaning.” The Philosophical Review.
- QUOTE: … Similarly in nonlinguistic cases: if we are asking about an agent's intention, a previous expression counts heavily; nevertheless, a man might plan to throw a letter in the dustbin and yet take it to the post; when lifting his hand he might "come to" and say either "I didn't intend to do …
- ↑ J. Broersen, M. Dastani, J. Hulstijn, Z. Huang, and L. van der Torre. “The BOID Architecture: Conflicts Between Beliefs, Obligations, Intentions and Desires.” In: Proceedings of the Fifth International Conference on Autonomous Agents, pages 9-16. ACM, New York, NY, USA.