Transition Probability
A Transition Probability is a Probability Function that gives the probability of transitioning from one state to another.
- Context:
- It ranges from being a One-Step Transition Probability to being an m-Step Transition Probability.
- Example(s):
- Counter-Example(s):
- See: Transition Event, Markov Decision Process, Transition Probability Matrix.
References
2020a
- (utdallas, 2020) ⇒ https://personal.utdallas.edu/~jjue/cs6352/markov/node3.html
- QUOTE: The one-step transition probability is the probability of transitioning from one state to another in a single step. The Markov chain is said to be time homogeneous if the transition probabilities from one state to another are independent of time index $n$.
$p_{ij} = Pr\{X_{n}=j \vert X_{n-1}=i \}$
The transition probability matrix, $P$, is the matrix consisting of the one-step transition probabilities, $p_{ij}$.
The $m$-step transition probability is the probability of transitioning from state $i$ to state $j$ in $m$ steps.
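The relationship quoted above can be sketched in code: the $m$-step transition probabilities are the entries of the $m$-th power of the one-step matrix $P$. This is a minimal illustration, assuming a hypothetical two-state chain whose probabilities are invented for the example, not taken from the source.

```python
import numpy as np

# Hypothetical 2-state time-homogeneous chain (e.g. 0 = sunny, 1 = rainy);
# the entries are illustrative assumptions, not from the source.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Each row of a transition probability matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

def m_step(P, m):
    """m-step transition probabilities: the m-th matrix power of P."""
    return np.linalg.matrix_power(P, m)

# P2[i, j] = Pr{X_{n+2} = j | X_n = i}, obtained by summing over the
# intermediate state k: sum_k p_ik * p_kj (Chapman-Kolmogorov).
P2 = m_step(P, 2)
```

Note that each row of `P2` is again a probability distribution, so the $m$-step matrix is itself a valid transition probability matrix.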
2020b
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Markov_chain#Transitions Retrieved:2020-8-9.
- QUOTE: The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.
Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.
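The point that individual transitions are unpredictable while statistical properties are not can be illustrated by simulation. The sketch below assumes a hypothetical two-state chain; the long-run fraction of time spent in each state converges to the stationary distribution even though each step is random.

```python
import random

# Hypothetical transition probabilities as a mapping from each state to a
# list of (next_state, probability) pairs; values are illustrative only.
P = {0: [(0, 0.9), (1, 0.1)],
     1: [(0, 0.5), (1, 0.5)]}

def step(state, rng):
    """Sample the next state from the transition distribution of `state`."""
    r = rng.random()
    cum = 0.0
    for nxt, p in P[state]:
        cum += p
        if r < cum:
            return nxt
    return P[state][-1][0]  # guard against floating-point rounding

rng = random.Random(0)
state = 0
counts = [0, 0]
for _ in range(100_000):
    state = step(state, rng)
    counts[state] += 1

# No single step is predictable, but the empirical state frequencies
# approach the stationary distribution pi = (5/6, 1/6) for this chain.
freq = [c / sum(counts) for c in counts]
```

Solving $\pi P = \pi$ for this chain gives $\pi = (5/6, 1/6)$, which the simulated frequencies approximate.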
2017
- (Sammut & Webb, 2017) ⇒ Claude Sammut, and Geoffrey I. Webb. (2017). “Transition Probabilities”. In: “Encyclopedia of Machine Learning and Data Mining”.
- QUOTE: In a Markov decision process, the transition probabilities represent the probability of being in state $s'$ at time $t + 1$, given you take action $a$ from state $s$ at time $t$ for all $s$, $a$ and $t$.
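The MDP case above differs from a plain Markov chain in that each (state, action) pair has its own distribution over successor states. A minimal sketch, using a hypothetical nested mapping `P[s][a] = {s': Pr(s' | s, a)}` whose states, actions, and values are invented for illustration:

```python
# Hypothetical MDP transition probabilities: for each state s and action a,
# an inner dict mapping successor states s' to Pr(s' | s, a).
P = {
    "low": {
        "wait":     {"low": 1.0},
        "recharge": {"high": 1.0},
    },
    "high": {
        "wait":   {"high": 0.8, "low": 0.2},
        "search": {"high": 0.6, "low": 0.4},
    },
}

def transition_prob(s, a, s_next):
    """Pr{S_{t+1} = s' | S_t = s, A_t = a}; 0.0 if (s, a, s') is undefined."""
    return P.get(s, {}).get(a, {}).get(s_next, 0.0)

# Every (state, action) pair must define a full probability distribution
# over successor states, so each inner dict sums to 1.
for s, actions in P.items():
    for a, dist in actions.items():
        assert abs(sum(dist.values()) - 1.0) < 1e-9
```

Because the probabilities here do not depend on $t$, this sketch corresponds to the stationary (time-homogeneous) case; a time-dependent MDP would need an extra index on `P`.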