Partially Observable Markov Decision Process (POMDP)


A Partially Observable Markov Decision Process (POMDP) is a Markov decision process in which the agent cannot directly observe the underlying state of its environment (i.e., an MDP set in a partially observable environment).



References

2017a

  • (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process Retrieved:2017-6-14.
    • A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP.

      The POMDP framework is general enough to model a variety of real-world sequential decision processes. Applications include robot navigation problems, machine maintenance, and planning under uncertainty in general. The framework originated in the operations research community, and was later adapted by the artificial intelligence and automated planning communities.

      An exact solution to a POMDP yields the optimal action for each possible belief over the world states. The optimal action maximizes (or minimizes) the expected reward (or cost) of the agent over a possibly infinite horizon. The sequence of optimal actions is known as the optimal policy of the agent for interacting with its environment.
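
The belief maintenance described in this quote is a Bayes filter over states. Below is a minimal sketch in Python/NumPy, assuming a hypothetical two-state problem with a single "listen" action; the matrices T and Omega are made-up numbers chosen only to make the update concrete, not part of the quoted article.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical two-state POMDP with a single "listen" action (illustrative
# numbers only).  T[a][s, s'] = P(s' | s, a); Omega[a][s', o] = P(o | s', a).
T = {"listen": np.array([[1.0, 0.0],
                         [0.0, 1.0]])}
Omega = {"listen": np.array([[0.85, 0.15],
                             [0.15, 0.85]])}

def belief_update(b, a, o):
    """Bayes-filter update: b'(s') ~ Omega(o | s', a) * sum_s T(s' | s, a) b(s)."""
    predicted = b @ T[a]               # sum_s b(s) T(s' | s, a)
    unnormalized = Omega[a][:, o] * predicted
    return unnormalized / unnormalized.sum()

b = np.array([0.5, 0.5])               # uniform prior over the two states
b = belief_update(b, "listen", o=0)    # after observing o = 0
print(b)                               # belief shifts toward state 0: [0.85 0.15]
</syntaxhighlight>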


2012a

  • (Wikipedia, 2012) ⇒ http://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process
    • A Partially Observable Markov Decision Process (POMDP) is a generalization of a Markov Decision Process. A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP.

      The POMDP framework is general enough to model a variety of real-world sequential decision processes. Applications include robot navigation problems, machine maintenance, and planning under uncertainty in general. The framework originated in the Operations Research community, and was later adopted by the Artificial Intelligence and Automated Planning communities.

      An exact solution to a POMDP yields the optimal action for each possible belief over the world states. The optimal action maximizes (or minimizes) the expected reward (or cost) of the agent over a possibly infinite horizon. The sequence of optimal actions is known as the optimal policy of the agent for interacting with its environment.
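
For context on what an exact solution looks like concretely: the optimal value function of a finite-horizon POMDP is piecewise-linear and convex, so it can be represented by a finite set of "alpha-vectors", each tagged with an action; the optimal action at a belief is the action on the vector with the largest dot product against that belief. A minimal sketch, with hypothetical tiger-problem-style numbers:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical alpha-vectors for a two-state problem (illustrative only).
# Each vector gives the value of one conditional plan, started from each state,
# and is tagged with that plan's first action.
alpha_vectors = [
    ("open-left",  np.array([-100.0,  10.0])),
    ("open-right", np.array([  10.0, -100.0])),
    ("listen",     np.array([   1.0,    1.0])),
]

def optimal_action(belief):
    """The value at a belief is the max over alpha-vectors of belief . alpha;
    the optimal action is the one tagged on the maximizing vector."""
    action, _ = max(alpha_vectors, key=lambda av: belief @ av[1])
    return action

print(optimal_action(np.array([0.5, 0.5])))    # 'listen' (too uncertain to act)
print(optimal_action(np.array([0.05, 0.95])))  # 'open-left'
</syntaxhighlight>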

2012b

  • (Wikipedia, 2012) ⇒ http://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process#Formal_definition
    • A discrete-time POMDP models the relationship between an agent and its environment. Formally, a POMDP is a tuple [math]\displaystyle{ (S,A,O,T,\Omega,R) }[/math], where
      • [math]\displaystyle{ S }[/math] is a set of states,
      • [math]\displaystyle{ A }[/math] is a set of actions,
      • [math]\displaystyle{ O }[/math] is a set of observations,
      • [math]\displaystyle{ T }[/math] is a set of conditional transition probabilities,
      • [math]\displaystyle{ \Omega }[/math] is a set of conditional observation probabilities,
      • [math]\displaystyle{ R: S \times A \to \mathbb{R} }[/math] is the reward function.
    • At each time period, the environment is in some state [math]\displaystyle{ s \in S }[/math]. The agent takes an action [math]\displaystyle{ a \in A }[/math], which causes the environment to transition to state [math]\displaystyle{ s' }[/math] with probability [math]\displaystyle{ T(s'\mid s,a) }[/math]. At the same time, the agent receives an observation [math]\displaystyle{ o \in O }[/math] that depends on the new state of the environment with probability [math]\displaystyle{ \Omega(o\mid s',a) }[/math]. Finally, the agent receives a reward with expected value [math]\displaystyle{ R(s,a) }[/math], and the process repeats.
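
A minimal sketch of this tuple and the one-step dynamics as a data structure. The two-state machine-maintenance-style instance below is hypothetical, with illustrative numbers only:

<syntaxhighlight lang="python">
import numpy as np
from dataclasses import dataclass

@dataclass
class POMDP:
    # The tuple (S, A, O, T, Omega, R) from the definition above.
    states: list
    actions: list
    observations: list
    T: dict      # T[a] is an |S| x |S| matrix with rows P(. | s, a)
    Omega: dict  # Omega[a] is an |S| x |O| matrix with rows P(. | s', a)
    R: dict      # R[a] is a length-|S| vector of rewards R(s, a)

def step(m, s, a, rng):
    """One time period: sample s' ~ T(. | s, a), then o ~ Omega(. | s', a),
    and collect the reward for (s, a); the process then repeats from s'."""
    i = m.states.index(s)
    j = rng.choice(len(m.states), p=m.T[a][i])
    o = m.observations[rng.choice(len(m.observations), p=m.Omega[a][j])]
    return m.states[j], o, m.R[a][i]

# Hypothetical two-state machine-maintenance instance (illustrative only).
m = POMDP(states=["working", "broken"], actions=["run"],
          observations=["ok", "alarm"],
          T={"run": np.array([[0.9, 0.1],
                              [0.0, 1.0]])},
          Omega={"run": np.array([[0.95, 0.05],
                                  [0.20, 0.80]])},
          R={"run": np.array([1.0, -1.0])})

rng = np.random.default_rng(0)
print(step(m, "working", "run", rng))   # e.g. ('working', 'ok', 1.0)
</syntaxhighlight>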


2004

  • A taxonomy of Markov models, organized by whether the agent has control over the state transitions and whether the states are completely observable:

                                        No control over transitions    Control over transitions
        States completely observable    Markov Chain                   Markov Decision Process (MDP)
        States partially observable     Hidden Markov Model (HMM)      Partially Observable Markov Decision Process (POMDP)
