2007 AWESOME: A General Multiagent Learning Algorithm That Converges in Self-play and Learns a Best Response Against Stationary Opponents
- (Conitzer & Sandholm, 2007) ⇒ Vincent Conitzer, and Tuomas Sandholm. (2007). “AWESOME: A General Multiagent Learning Algorithm That Converges in Self-play and Learns a Best Response Against Stationary Opponents.” In: Machine Learning Journal, 67(1-2). doi:10.1007/s10994-006-0143-1
Subject Headings: AWESOME, Multi-Agent Learning Algorithm, Multi-Agent Learning Task, Multi-Agent System.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222007%22+AWESOME%3A+A+General+Multiagent+Learning+Algorithm+That+Converges+in+Self-play+and+Learns+a+Best+Response+Against+Stationary+Opponents
- http://dl.acm.org/citation.cfm?id=1236647.1236650&preflayout=flat#citedby
Quotes
Abstract
Two minimal requirements for a satisfactory multiagent learning algorithm are that it (1) learns to play optimally against stationary opponents, and (2) converges to a Nash equilibrium in self-play. The previous algorithm that has come closest, WoLF-IGA, has been proven to have these two properties in 2-player 2-action (repeated) games -- assuming that the opponent's mixed strategy is observable. Another algorithm, ReDVaLeR (which was introduced after the algorithm described in this paper), achieves the two properties in games with arbitrary numbers of actions and players, but still requires that the opponents' mixed strategies are observable. In this paper we present AWESOME, the first algorithm that is guaranteed to have the two properties in games with arbitrary numbers of actions and players. It is still the only algorithm that does so while only relying on observing the other players' actual actions (not their mixed strategies). It also learns to play optimally against opponents that eventually become stationary. The basic idea behind AWESOME (Adapt When Everybody is Stationary, Otherwise Move to Equilibrium) is to try to adapt to the others' strategies when they appear stationary, but otherwise to retreat to a precomputed equilibrium strategy. We provide experimental results that suggest that AWESOME converges fast in practice. The techniques used to prove the properties of AWESOME are fundamentally different from those used for previous algorithms, and may help in analyzing future multiagent learning algorithms as well.
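The abstract's expansion of the name, Adapt When Everybody is Stationary, Otherwise Move to Equilibrium, summarizes the control loop. The sketch below illustrates that loop for a two-player repeated game under simplifying assumptions: the equilibrium strategy is supplied rather than computed, the epoch length and tolerance thresholds are fixed instead of following the paper's schedule, and all names (awesome_sketch, epsilon_e, epsilon_s, opp_policy) are hypothetical. It is a minimal illustration of the adapt-or-retreat idea, not a faithful implementation of the published algorithm.

```python
# Hypothetical sketch of the adapt-or-retreat loop behind AWESOME for a
# 2-player repeated game.  Names and parameters (awesome_sketch, epsilon_e,
# epsilon_s, epoch_len) are illustrative and not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def best_response(payoff, opp_dist):
    """Pure best response of the row player to a mixed column strategy."""
    return int(np.argmax(payoff @ opp_dist))

def empirical(actions, n_actions):
    """Empirical distribution over the opponent's observed actions."""
    counts = np.bincount(actions, minlength=n_actions)
    return counts / max(len(actions), 1)

def awesome_sketch(payoff, equilibrium, opp_policy, epochs=20, epoch_len=50,
                   epsilon_e=0.15, epsilon_s=0.15):
    """Play the precomputed equilibrium while the opponent appears to be at
    equilibrium; otherwise best-respond to an opponent that looks stationary;
    retreat to the equilibrium when stationarity is also rejected.  (The real
    algorithm tightens the tolerances and lengthens the epochs over time.)"""
    n_actions = payoff.shape[0]
    at_equilibrium = True           # hypothesis: everyone plays the equilibrium
    prev_dist = None                # opponent's empirical play in the last epoch
    my_strategy = equilibrium.copy()
    for _ in range(epochs):
        opp_actions = []
        for _ in range(epoch_len):
            my_action = rng.choice(n_actions, p=my_strategy)
            opp_actions.append(opp_policy(my_action))
        dist = empirical(np.array(opp_actions), n_actions)
        if at_equilibrium:
            # Reject the equilibrium hypothesis if the opponent drifted too far.
            if np.abs(dist - equilibrium).sum() > epsilon_e:
                at_equilibrium = False
                prev_dist = dist
        else:
            if prev_dist is not None and np.abs(dist - prev_dist).sum() > epsilon_s:
                # Not stationary either: retreat to the precomputed equilibrium.
                at_equilibrium, my_strategy, prev_dist = True, equilibrium.copy(), None
                continue
            # Opponent looks stationary: best-respond to its empirical play.
            my_strategy = np.eye(n_actions)[best_response(payoff, dist)]
            prev_dist = dist
    return my_strategy

# Example: in matching pennies against an opponent who always plays action 0,
# the sketch abandons the mixed equilibrium and concentrates on the best response.
pennies = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(awesome_sketch(pennies, equilibrium=np.array([0.5, 0.5]),
                     opp_policy=lambda my_action: 0))   # -> [1. 0.]
```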
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Vincent Conitzer, Tuomas Sandholm | 67(1-2) | 2007 | AWESOME: A General Multiagent Learning Algorithm That Converges in Self-play and Learns a Best Response Against Stationary Opponents | | Machine Learning Journal | | 10.1007/s10994-006-0143-1 | | 2007