No-Regret Online Learning Algorithm
A No-Regret Online Learning Algorithm is an online learning algorithm whose average regret per round, measured against the best fixed policy chosen in hindsight with all the data in advance, tends to zero as the number of rounds grows, even when the input sequence is chosen by an adversary.
- Context:
- It can perform nearly as well as the best fixed policy chosen with all the data in advance, even against an adversary (see the sketch below).
- 1) It picks good predictors that minimize loss (asymptotic consistency).
- 2) It generates a stable sequence of predictors (the addition of new data does not change the hypothesis much in the limit).
- See: Reinforcement Learning Algorithm, Repeated Interaction, Iterative Learning, Imitation Learning, Gradient Descent.
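A minimal sketch of one such algorithm, assuming losses in [0, 1] supplied by a (possibly adversarial) environment: the Hedge / exponentially weighted average forecaster over N experts, whose external regret grows as O(√(T ln N)), so its average regret per round vanishes as T → ∞. The loss data below is a hypothetical illustration.

```python
import math
import random

def hedge(loss_streams, eta):
    """Hedge (exponentially weighted average forecaster) over N experts.

    loss_streams: T rounds, each a list of N losses in [0, 1].
    eta: learning rate; eta = sqrt(8 * ln(N) / T) gives regret O(sqrt(T ln N)).
    Returns (forecaster_loss, best_fixed_expert_loss).
    """
    n = len(loss_streams[0])
    weights = [1.0] * n
    alg_loss = 0.0
    cum_expert_loss = [0.0] * n

    for losses in loss_streams:
        total = sum(weights)
        probs = [w / total for w in weights]
        # Expected loss of the randomized forecaster this round.
        alg_loss += sum(p * l for p, l in zip(probs, losses))
        # Multiplicative update: down-weight experts that suffered high loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        cum_expert_loss = [c + l for c, l in zip(cum_expert_loss, losses)]

    return alg_loss, min(cum_expert_loss)

if __name__ == "__main__":
    random.seed(0)
    T, N = 5000, 10
    # Hypothetical data: expert k suffers Bernoulli(0.3 + 0.05 * k) losses.
    streams = [[float(random.random() < 0.3 + 0.05 * k) for k in range(N)]
               for _ in range(T)]
    alg, best = hedge(streams, eta=math.sqrt(8.0 * math.log(N) / T))
    print("average regret per round:", (alg - best) / T)  # shrinks as T grows
```

The same guarantee holds against adversarially chosen loss sequences, which is what lets reductions such as the one in Ross et al. (2011) reuse any no-regret online learner as a subroutine.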
References
2011
- (Ross et al., 2011) ⇒ Stéphane Ross, Geoffrey J. Gordon, and J. Andrew Bagnell. (2011). “A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning.” In: Proceedings of AISTATS-2011.
2007
- (Blum & Monsour, 2007) ⇒ Avrim Blum, and Yishay Monsour. (2007). “Learning, Regret Minimization, and Equilibria.” xx xxx.
- ABSTRACT: Many situations involve repeatedly making decisions in an uncertain environment: for instance, deciding what route to drive to work each day, or repeated play of a game against an opponent with an unknown strategy. In this chapter we describe learning algorithms with strong guarantees for settings of this type, along with connections to game-theoretic equilibria when all players in a system are simultaneously adapting in such a manner.
We begin by presenting algorithms for repeated play of a matrix game with the guarantee that against any opponent, they will perform nearly as well as the best fixed action in hindsight (also called the problem of combining expert advice or minimizing external regret). In a zero-sum game, such algorithms are guaranteed to approach or exceed the minimax value of the game, and even provide a simple proof of the minimax theorem. We then turn to algorithms that minimize an even stronger form of regret, known as internal or swap regret. We present a general reduction showing how to convert any algorithm for minimizing external regret to one that minimizes this stronger form of regret as well. Internal regret is important because when all players in a game minimize this stronger type of regret, the empirical distribution of play is known to converge to correlated equilibrium.
The third part of this chapter explains a different reduction: how to convert from the full information setting in which the action chosen by the opponent is revealed after each time step, to the partial information (bandit) setting, where at each time step only the payoff of the selected action is observed (such as in routing), and still maintain a small external regret.
Finally, we end by discussing routing games in the Wardrop model, where one can show that if all participants minimize their own external regret, then overall traffic is guaranteed to converge to an approximate Nash Equilibrium. This further motivates price-of-anarchy results.
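For the partial-information (bandit) setting described in the abstract above, a minimal Python sketch (not from the chapter itself) of the Exp3 algorithm, which keeps the expected external regret at O(√(T K ln K)) when only the payoff of the chosen action is observed; the Bernoulli reward function is a hypothetical stand-in for the unknown environment.

```python
import math
import random

def exp3(reward_fn, n_arms, horizon, gamma):
    """Exp3: external-regret minimization under bandit (partial) feedback.

    reward_fn(t, arm) -> reward in [0, 1]; only the pulled arm's payoff is seen.
    gamma: exploration rate, e.g. min(1, sqrt(K * ln(K) / ((e - 1) * T))).
    """
    weights = [1.0] * n_arms
    total_reward = 0.0
    for t in range(horizon):
        total = sum(weights)
        probs = [(1.0 - gamma) * w / total + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        reward = reward_fn(t, arm)          # bandit feedback: one payoff only
        total_reward += reward
        estimate = reward / probs[arm]      # importance-weighted reward estimate
        weights[arm] *= math.exp(gamma * estimate / n_arms)
    return total_reward

if __name__ == "__main__":
    random.seed(1)
    K, T = 5, 20000
    means = [0.2, 0.4, 0.5, 0.6, 0.8]       # hypothetical Bernoulli arms
    gamma = min(1.0, math.sqrt(K * math.log(K) / ((math.e - 1) * T)))
    got = exp3(lambda t, a: float(random.random() < means[a]), K, T, gamma)
    print("average reward:", got / T, "vs. best arm mean:", max(means))
```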
2005
- (Vermorel et al., 2005) ⇒ Joannès Vermorel, and Mehryar Mohri. (2005). “Multi-armed Bandit Algorithms and Empirical Evaluation.” In: Proceedings of the 16th European conference on Machine Learning. doi:10.1007/11564096_42
- QUOTE: A different version of the bandit problem has been studied by [10, 23, 9, 8] where the reward distributions are assumed to be known to the player. This problem is not about balancing exploration and exploitation, it admits an optimal solution based on the so-called Gittins indices. This paper deals with bandit problems found in practice where the assumption about the prior knowledge of the payoffs typically does not hold (see for example section 4).
The regret [math]\displaystyle{ \rho }[/math] after T rounds is defined as the difference between the reward sum associated to an optimal strategy and the sum of the collected rewards: [math]\displaystyle{ \rho = T\mu^* - \sum_{t=1}^T \hat{r}_t }[/math], where [math]\displaystyle{ \mu^* }[/math] is the maximal reward mean, [math]\displaystyle{ \mu^* = \operatorname{max}_k{\mu_k} }[/math], and [math]\displaystyle{ \hat{r}_t }[/math] the reward at time [math]\displaystyle{ t }[/math]. A strategy whose average regret per round tends to zero with probability 1 for any bandit problem when the horizon tends to infinity is a zero-regret strategy. Intuitively, zero-regret strategies are guaranteed to converge to an optimal strategy, not necessarily unique, if enough rounds are played.
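The regret definition above translates directly into code. A minimal sketch, assuming hypothetical Bernoulli arms and an ε-greedy player, that estimates [math]\displaystyle{ \rho = T\mu^* - \sum_{t=1}^T \hat{r}_t }[/math] empirically:

```python
import random

def empirical_regret(means, horizon, epsilon=0.1, seed=0):
    """rho = T * mu_star - (sum of collected rewards), for an epsilon-greedy
    player facing Bernoulli arms with the given (hypothetical) means."""
    rng = random.Random(seed)
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    collected = 0.0
    for t in range(horizon):
        if t < k:
            arm = t                                   # pull each arm once
        elif rng.random() < epsilon:
            arm = rng.randrange(k)                    # explore
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a])  # exploit
        reward = float(rng.random() < means[arm])
        counts[arm] += 1
        sums[arm] += reward
        collected += reward
    return horizon * max(means) - collected

if __name__ == "__main__":
    for T in (1000, 10000, 100000):
        print(T, "rounds -> average regret per round:",
              empirical_regret([0.3, 0.5, 0.7], T) / T)
```

With a fixed ε the average regret per round plateaus at a constant proportional to ε, so this player is not a zero-regret strategy in the sense above; a decaying exploration schedule (for example ε_t ∝ 1/t) would be needed for the average regret to tend to zero.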