Belief-Desire-Intention (BDI) Agent System
A Belief-Desire-Intention (BDI) Agent System is an Agent-Oriented Programming System that is centered on Belief-Desire-Intention (BDI) Agents.
- AKA: BDI Agent System.
- Context:
- It can solve a Belief-Desire-Intention (BDI) Abstract Task by implementing Belief-Desire-Intention (BDI) Agent Algorithms.
- It can range from being a Goal-Oriented Agent System to being a Plan-Oriented Agent System.
- It can range from being a Single BDI Agent System to being a BDI Multi-Agent System.
- It usually requires the use of an Agent Programming Language.
- It usually requires a BDI Interpreter and a BDI Reasoning Engine.
- …
- Example(s):
- a Procedural Reasoning System (PRS),
- a UM-PRS Agent System,
- an OpenPRS,
- a Distributed Multi-Agent Reasoning System (dMARS),
- an Agent Real-Time System (ARTS),
- a JAM,
- a JACK Intelligent Agents System,
- a JADEX Agent System,
- a Jason Multi-Agent System,
- a GORITE,
- a SPARK,
- a 3APL,
- a 2APL,
- a CogniTAO (Think-As-One).
- …
- Counter-Example(s):
- See: Belief-Desire-Intention Software Model, Multi-Agent System, Intelligent Agent, GOAL Agent Programming Language, AgentSpeak, Intention Logic, BDI Logic, Agent Communication Language, Abstract Task, Multi-Agent Communication System, Agent-based Model (ABM).
References
2019a
- (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Belief–desire–intention_software_model Retrieved:2019-8-10.
- The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
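The separation described above, between selecting a plan from a library and executing the currently active plan, can be sketched in a few lines of Python. This is a minimal illustration only; the class, field, and method names are hypothetical and do not come from any particular BDI platform.

```python
# Minimal sketch of the BDI separation between selecting a plan
# (deliberation) and executing the active plan (action).
# All names here are illustrative, not a real BDI framework's API.

class Agent:
    def __init__(self, plan_library):
        self.plan_library = plan_library  # goal -> list of candidate plans
        self.intentions = []              # plans the agent has committed to

    def deliberate(self, goal, beliefs):
        """Select an applicable plan from the library (choosing what to do)."""
        for plan in self.plan_library.get(goal, []):
            if plan["context"](beliefs):        # context condition holds?
                intention = {"context": plan["context"],
                             "actions": list(plan["actions"])}  # fresh copy
                self.intentions.append(intention)
                return intention
        return None

    def execute_step(self, beliefs):
        """Execute one action of the current intention (doing it)."""
        if not self.intentions:
            return None
        plan = self.intentions[0]
        if plan["actions"]:
            action = plan["actions"].pop(0)
            return action(beliefs)
        self.intentions.pop(0)                  # plan finished
        return None


# Usage: a toy plan library for a single goal "greet".
library = {
    "greet": [{
        "context": lambda b: b.get("awake", False),
        "actions": [lambda b: "hello"],
    }]
}
agent = Agent(library)
agent.deliberate("greet", {"awake": True})
print(agent.execute_step({"awake": True}))  # -> hello
```

Because deliberation and execution are separate methods, an interpreter can interleave them, which is how a BDI agent balances time spent choosing what to do against time spent doing it.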
2019b
- (Wikipedia, 2019) ⇒ https://www.wikiwand.com/en/Belief%E2%80%93desire%E2%80%93intention_software_model#/BDI_agents Retrieved:2019-8-10.
- A BDI agent is a particular type of bounded rational software agent, imbued with particular mental attitudes, viz: Beliefs, Desires and Intentions (BDI) (...)
This section defines the idealized architectural components of a BDI system.
- Beliefs: Beliefs represent the informational state of the agent, in other words its beliefs about the world (including itself and other agents). Beliefs can also include inference rules, allowing forward chaining to lead to new beliefs. Using the term belief rather than knowledge recognizes that what an agent believes may not necessarily be true (and in fact may change in the future).
- Beliefset: Beliefs are stored in a database (sometimes called a belief base or a belief set), although that is an implementation decision.
- Desires: Desires represent the motivational state of the agent. They represent objectives or situations that the agent would like to accomplish or bring about. Examples of desires might be: find the best price, go to the party or become rich.
- Goals: A goal is a desire that has been adopted for active pursuit by the agent. Usage of the term goals adds the further restriction that the set of active desires must be consistent. For example, one should not have concurrent goals to go to a party and to stay at home – even though they could both be desirable.
- Intentions: Intentions represent the deliberative state of the agent – what the agent has chosen to do. Intentions are desires to which the agent has to some extent committed. In implemented systems, this means the agent has begun executing a plan.
- Plans: Plans are sequences of actions (recipes or knowledge areas) that an agent can perform to achieve one or more of its intentions. Plans may include other plans: my plan to go for a drive may include a plan to find my car keys. This reflects that in Bratman's model, plans are initially only partially conceived, with details being filled in as they progress.
- Events: These are triggers for reactive activity by the agent. An event may update beliefs, trigger plans or modify goals. Events may be generated externally and received by sensors or integrated systems. Additionally, events may be generated internally to trigger decoupled updates or plans of activity.
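The idealized components above can be summarized as a simple data model. This is a hedged sketch for illustration; the class and field names are assumptions, not a standard BDI API.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative data model for the BDI components described above.
# Names and fields are assumptions, not a standard API.

@dataclass
class Plan:
    goal: str                                   # the goal this plan can achieve
    steps: List[str]                            # recipe: sequence of action names
    subplans: List["Plan"] = field(default_factory=list)  # plans may nest

@dataclass
class Agent:
    beliefs: dict = field(default_factory=dict)   # belief base: informational state
    desires: set = field(default_factory=set)     # motivational state
    goals: set = field(default_factory=set)       # adopted, consistent desires
    intentions: List[Plan] = field(default_factory=list)  # committed plans

    def handle_event(self, event: dict) -> None:
        """Events may update beliefs or raise new goals (reactive trigger)."""
        if "belief" in event:
            self.beliefs.update(event["belief"])
        if "goal" in event:
            self.goals.add(event["goal"])


# Usage: an external event both updates a belief and adopts a goal.
agent = Agent()
agent.handle_event({"belief": {"door_open": True}, "goal": "close_door"})
```

Note how the event handler touches only beliefs and goals; choosing a `Plan` and committing it to `intentions` is left to a separate deliberation step, in keeping with the model's separation of concerns.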
- The BDI model was also extended with an obligations component, giving rise to the BOID agent architecture,[1] which incorporates the obligations, norms and commitments of agents acting within a social environment.
2019c
- (Cranefield & Dignum, 2019) ⇒ Stephen Cranefield, and Frank Dignum. (2019). “Incorporating Social Practices in BDI Agent Systems.” In: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. ISBN:978-1-4503-6309-9
2018
- (Mensio et al., 2018) ⇒ Martino Mensio, Giuseppe Rizzo, and Maurizio Morisio. (2018). “Multi-turn QA: A RNN Contextual Approach to Intent Classification for Goal-oriented Systems.” In: Companion Proceedings of the The Web Conference 2018. ISBN:978-1-4503-5640-4 doi:10.1145/3184558.3191539
2009
- (Mateus et al., 2009) ⇒ Gustavo Pereira Mateus, Beatriz Wilges, Luiz Claudio Duarte Dalmolin, Silvia Nassar, and Ricardo Silveira. (2009). “A Belief Desire Intention Multi Agent System in a Virtual Learning Environment.” In: Proceedings of the 2009 Ninth IEEE International Conference on Advanced Learning Technologies. ISBN:978-0-7695-3711-5 doi:10.1109/ICALT.2009.213
2006
- (Sardina et al., 2006) ⇒ Sebastian Sardina, Lavindra de Silva, and Lin Padgham. (2006). “Hierarchical Planning in BDI Agent Programming Languages: A Formal Approach.” In: Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems. ISBN:1-59593-303-4 doi:10.1145/1160633.1160813
2005a
- (Yan et al., 2005) ⇒ Shung-Bin Yan, Zu-Nien Lin, Hsun-Jen Hsu, and Feng-Jian Wang. (2005). “Intention Scheduling for BDI Agent Systems.” In: Proceedings of the 29th Annual International Computer Software and Applications Conference - Volume 01. ISBN:0-7695-2413-3 doi:10.1109/COMPSAC.2005.92
2005b
- (Pokahr et al., 2005) ⇒ Alexander Pokahr, Lars Braubach, and Winfried Lamersdorf. (2005). “A BDI Architecture for Goal Deliberation.” In: Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems. ISBN:1-59593-093-0 doi:10.1145/1082473.1082740
2005c
- (Pokahr et al., 2005) ⇒ Alexander Pokahr, Lars Braubach, and Winfried Lamersdorf. (2005). “Jadex: A BDI Reasoning Engine.” In: Multi-Agent Programming (Multiagent Systems, Artificial Societies, and Simulated Organizations International Book Series, Vol. 15). ISBN:978-0-387-24568-3 doi:10.1007/0-387-26350-0_6
2004a
- (Meneguzzi et al., 2004) ⇒ Felipe Rech Meneguzzi, Avelino Francisco Zorzo, and Michael da Costa Móra. (2004). “Propositional Planning in BDI Agents.” In: Proceedings of the 2004 ACM symposium on Applied computing. ISBN:1-58113-812-1 doi:10.1145/967900.967916
2004b
- (Braubach et al., 2004) ⇒ Lars Braubach, Alexander Pokahr, Daniel Moldt, and Winfried Lamersdorf. (2004). “Goal Representation for BDI Agent Systems.” In: Proceedings of the Second International Conference on Programming Multi-Agent Systems. ISBN:3-540-24559-6, 978-3-540-24559-9 doi:10.1007/978-3-540-32260-3_3
1995
- (Rao & Georgeff, 1995) ⇒ Anand S. Rao, and Michael P. Georgeff. (1995). “BDI Agents: From Theory to Practice.” In: Proceedings of the First International Conference on Multiagent Systems (ICMAS 1995).
- QUOTE: A number of different approaches have emerged as candidates for the study of agent-oriented systems (Bratman et al. 1988; Doyle 1992; Rao and Georgeff 1991c; Rosenschein and Kaelbling 1986; Shoham 1993). One such architecture views the system as a rational agent having certain mental attitudes of Belief, Desire, and Intention (BDI), representing, respectively, the information, motivational, and deliberative states of the agent. These mental attitudes determine the system's behavior and are critical for achieving adequate or optimal performance when deliberation is subject to resource bounds (Bratman 1987; Kinny and Georgeff 1991).
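Rao and Georgeff's paper also presents an abstract BDI interpreter cycle (generate options from events, deliberate, update intentions, execute, perceive, drop achieved or impossible attitudes). The following Python sketch renders that cycle; the `StubAgent` and its trivially simple method bodies are placeholders of my own, not the authors' specification.

```python
# Sketch of the abstract BDI interpreter cycle from Rao & Georgeff (1995).
# StubAgent is illustrative only; a real system supplies this logic.

class StubAgent:
    def __init__(self):
        self.executed = []
        self.intentions = []

    def option_generator(self, events):
        return list(events)            # every pending event yields an option

    def deliberate(self, options):
        return options[:1]             # naively commit to the first option

    def update_intentions(self, selected):
        self.intentions = selected

    def execute(self):
        self.executed.extend(self.intentions)

    def get_new_external_events(self):
        return []                      # no perception in this stub

    def drop_successful_attitudes(self):
        self.intentions = []           # treat executed intentions as achieved

    def drop_impossible_attitudes(self):
        pass                           # nothing is impossible for the stub


def bdi_interpreter(agent, event_queue, max_cycles=3):
    """Main cycle: generate options, deliberate, commit, act, perceive, clean up."""
    for _ in range(max_cycles):
        options = agent.option_generator(event_queue)
        selected = agent.deliberate(options)
        agent.update_intentions(selected)
        agent.execute()
        event_queue.extend(agent.get_new_external_events())
        agent.drop_successful_attitudes()
        agent.drop_impossible_attitudes()


agent = StubAgent()
bdi_interpreter(agent, ["see_obstacle"])
```

The resource-bounded deliberation the quote emphasizes shows up here as a choice of how much work `deliberate` may do per cycle before the agent must act again.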
1987
- (Georgeff & Lansky, 1987) ⇒ Michael P. Georgeff, and Amy L. Lansky. (1987). “Reactive Reasoning and Planning.” In: Proceedings of the sixth National conference on Artificial intelligence - Volume 2. ISBN:0-934613-42-7
- ↑ J. Broersen, M. Dastani, J. Hulstijn, Z. Huang, and L. van der Torre. (2001). “The BOID Architecture: Conflicts Between Beliefs, Obligations, Intentions and Desires.” In: Proceedings of the Fifth International Conference on Autonomous Agents, pp. 9-16. ACM, New York, NY, USA.