AlphaGo Go Playing System
An AlphaGo Go Playing System is a deep reinforcement learning-based Go-playing system that was developed by Alphabet Inc.'s Google DeepMind in 2015.
- Context:
- It can have Superhuman Capability.
- …
- Example(s):
- the version of 2016-12-22.
- the version of 2017-05-23.
- AlphaGo Lee.
- AlphaGo Fan.
- AlphaGo Master.
- AlphaGo Zero.
- …
- Counter-Example(s):
- See: Google DeepMind, Narrow AI System, AlphaZero, Computer Go, Go Software, Self-Play (Reinforcement Learning Technique).
References
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/AlphaGo Retrieved:2023-8-7.
- AlphaGo is a computer program that plays the board game Go. It was developed by the London-based DeepMind Technologies, an acquired subsidiary of Google (now Alphabet Inc.). Subsequent versions of AlphaGo became increasingly powerful, including a version that competed under the name Master. After retiring from competitive play, AlphaGo Master was succeeded by an even more powerful version known as AlphaGo Zero, which was completely self-taught without learning from human games. AlphaGo Zero was then generalized into a program known as AlphaZero, which played additional games, including chess and shogi. AlphaZero has in turn been succeeded by a program known as MuZero which learns without being taught the rules.
AlphaGo and its successors use a Monte Carlo tree search algorithm to find their moves based on knowledge previously acquired by machine learning, specifically by an artificial neural network (a deep learning method) trained extensively on both human and computer play.[1] A neural network is trained to identify the best moves and the winning percentages of these moves. This neural network improves the strength of the tree search, resulting in stronger move selection in the next iteration.
In October 2015, in a match against Fan Hui, the original AlphaGo became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board.[2] [3] In March 2016, it beat Lee Sedol in a five-game match, the first time a computer Go program had beaten a 9-dan professional without handicap.[4] Although it lost to Lee Sedol in the fourth game, Lee resigned in the final game, giving a final score of 4 games to 1 in favour of AlphaGo. In recognition of the victory, AlphaGo was awarded an honorary 9-dan by the Korea Baduk Association. The lead-up and the challenge match with Lee Sedol were documented in a documentary film also titled AlphaGo,[5] directed by Greg Kohs. The win by AlphaGo was chosen by Science as one of the Breakthrough of the Year runners-up on 22 December 2016.
At the 2017 Future of Go Summit, the Master version of AlphaGo beat Ke Jie, the number one ranked player in the world at the time, in a three-game match, after which AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association.
After the match between AlphaGo and Ke Jie, DeepMind retired AlphaGo, while continuing AI research in other areas.[6] The self-taught AlphaGo Zero achieved a 100–0 victory against the early competitive version of AlphaGo, and its successor AlphaZero is currently perceived as the world's top player in Go.
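The search procedure sketched in the excerpt above, where a learned policy/value network guides a Monte Carlo tree search, can be illustrated with a minimal PUCT-style selection loop. This is a toy sketch, not DeepMind's implementation: the `policy_value_stub` function, the `Node` class, and all parameter values are assumptions standing in for the trained network and full game logic.

```python
import math
import random

class Node:
    """One tree-search node for a single candidate move (illustrative only)."""
    def __init__(self, prior):
        self.prior = prior           # P(s, a): move probability from the policy network
        self.visit_count = 0         # N(s, a): number of simulations through this node
        self.value_sum = 0.0         # W(s, a): accumulated value estimates
        self.children = {}           # action -> Node

    def value(self):
        # Q(s, a): mean value over the simulations that visited this node.
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def policy_value_stub(state, actions):
    """Stand-in for the trained network: uniform priors, random value in [-1, 1]."""
    priors = {a: 1.0 / len(actions) for a in actions}
    return priors, random.uniform(-1.0, 1.0)

def select_child(node, c_puct=1.5):
    """PUCT rule: pick the child maximising Q(s, a) + U(s, a)."""
    total = sum(ch.visit_count for ch in node.children.values())
    def score(item):
        _, ch = item
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visit_count)
        return ch.value() + u
    return max(node.children.items(), key=score)

def mcts(root_state, legal_actions, num_simulations=100):
    """Run simulations from the root and return the most-visited move."""
    root = Node(prior=1.0)
    priors, _ = policy_value_stub(root_state, legal_actions)
    for a, p in priors.items():
        root.children[a] = Node(prior=p)
    for _ in range(num_simulations):
        node, path = root, [root]
        while node.children:                 # descend to a leaf
            _, node = select_child(node)
            path.append(node)
        _, value = policy_value_stub(root_state, legal_actions)  # leaf evaluation
        for n in path:                       # back the value up the path
            n.visit_count += 1
            n.value_sum += value
    return max(root.children.items(), key=lambda kv: kv[1].visit_count)[0]
```

The improvement loop the excerpt describes corresponds to retraining the network on the visit counts this search produces, so that the next network yields sharper priors and, in turn, a stronger search.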
- ↑ Silver, David; Huang, Aja; Maddison, Chris J.; Guez, Arthur; Sifre, Laurent; Driessche, George van den; Schrittwieser, Julian; Antonoglou, Ioannis; Panneershelvam, Veda; Lanctot, Marc; Dieleman, Sander; Grewe, Dominik; Nham, John; Kalchbrenner, Nal; Sutskever, Ilya; Lillicrap, Timothy; Leach, Madeleine; Kavukcuoglu, Koray; Graepel, Thore; Hassabis, Demis (28 January 2016). "Mastering the game of Go with deep neural networks and tree search". Nature. 529 (7587): 484–489. Bibcode:2016Natur.529..484S. doi:10.1038/nature16961. ISSN 0028-0836. PMID 26819042. S2CID 515925.
- ↑ "Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning". Google Research Blog. 27 January 2016. Archived from the original on 30 January 2016. Retrieved 28 January 2016.
- ↑ "Google achieves AI 'breakthrough' by beating Go champion". BBC News. 27 January 2016. Archived from the original on 2 December 2021. Retrieved 20 July 2018.
- ↑ "Match 1 – Google DeepMind Challenge Match: Lee Sedol vs AlphaGo". YouTube. 8 March 2016. Archived from the original on 29 March 2017. Retrieved 9 March 2016.
- ↑ "AlphaGo Movie". AlphaGo Movie. Archived from the original on 3 January 2018. Retrieved 14 October 2017.
- ↑ Metz, Cade (27 May 2017). "After Win in China, AlphaGo's Designers Explore New AI". Wired.
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/AlphaGo Retrieved:2017-9-5.
- AlphaGo is a narrow AI computer program that plays the board game Go. It was developed by Alphabet Inc.'s Google DeepMind in London. In October 2015, it became the first Computer Go program to beat a human professional Go player without handicaps on a full-sized 19×19 board. ...