Arcade Learning Environment (ALE) Framework
An Arcade Learning Environment (ALE) Framework is an object-oriented AI agent framework for developing and evaluating agents on Atari 2600 games, emulated via Stella.
- Example(s):
- See: Atari 2600, OpenAI Gym, Stella Emulator.
References
2017
- https://github.com/mgbellemare/Arcade-Learning-Environment
- QUOTE: The Arcade Learning Environment (ALE) is a simple object-oriented framework that allows researchers and hobbyists to develop AI agents for Atari 2600 games. It is built on top of the Atari 2600 emulator Stella and separates the details of emulation from agent design. This video depicts over 50 games currently supported in the ALE. …
Features
- Object-oriented framework with support to add agents and games.
- Emulation core uncoupled from rendering and sound generation modules for fast emulation with minimal library dependencies.
- Automatic extraction of game score and end-of-game signal for more than 50 Atari 2600 games.
- Multi-platform code (compiled and tested under OS X and several Linux distributions, with Cygwin support).
- Communication between agents and emulation core can be accomplished through pipes, allowing for cross-language development (sample Java code included).
- Python development is supported through ctypes (a minimal usage sketch follows this list).
- Agents programmed in C++ have access to all features in the ALE.
- Visualization tools.
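The feature list above mentions Python support; the following is a minimal sketch of driving the ALE from Python, assuming the ale-py bindings (a later pip-installable packaging of the Python interface, not named in the quoted text) and a user-supplied `breakout.bin` ROM file. It runs a random agent for a few episodes and reads the game score that the ALE extracts automatically.

```python
# Minimal sketch (assumptions: ale-py is installed and breakout.bin is a
# locally available Atari 2600 ROM; neither is part of the quoted text).
import random
from ale_py import ALEInterface

ale = ALEInterface()
ale.setInt("random_seed", 123)       # deterministic emulator seeding
ale.loadROM("breakout.bin")          # path to the user-supplied ROM

actions = ale.getMinimalActionSet()  # actions meaningful for this game

for episode in range(3):
    total_reward = 0.0
    while not ale.game_over():
        a = random.choice(actions)   # random agent, for illustration only
        total_reward += ale.act(a)   # act() returns the change in game score
    print(f"Episode {episode}: score {total_reward}")
    ale.reset_game()
```

Agents written in C++ or communicating over pipes (as listed above) follow the same loop: select an action, apply it, and observe the automatically extracted reward and end-of-game signal.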
2015
- (Bellemare et al., 2015) ⇒ Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. (2015). “The Arcade Learning Environment: An Evaluation Platform for General Agents.” In: Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015). ISBN:978-1-57735-738-4
- ABSTRACT: In this extended abstract we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by presenting a benchmark set of domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. We conclude with a brief update on the latest ALE developments. All of the software, including the benchmark agents, is publicly available.