Ray Framework
A Ray Framework is a distributed execution framework.
- Context:
- It can be targeted at large-scale machine learning and reinforcement learning applications.
- See: Michael I. Jordan.
References
2020
- https://github.com/ray-project/ray
- QUOTE: ... Ray is a fast and simple framework for building and running distributed applications.
Ray is packaged with the following libraries for accelerating machine learning workloads:
- Tune: Scalable Hyperparameter Tuning
- RLlib: Scalable Reinforcement Learning
- RaySGD: Distributed Training Wrappers
- Install Ray with: pip install ray. For nightly wheels, see the Installation page.
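Beyond the bundled libraries quoted above, Ray's core is a small Python API for turning ordinary functions into remote tasks. The following is a minimal sketch, assuming only a local pip install ray: ray.init() starts Ray on the local machine, @ray.remote marks a function as a task, .remote() schedules it asynchronously and returns a future, and ray.get() blocks until results are available.

    import ray

    # Start Ray on the local machine (connects to a cluster if one is configured).
    ray.init()

    # Mark an ordinary Python function as a remote task.
    @ray.remote
    def square(x):
        return x * x

    # Each .remote() call schedules a task asynchronously and returns a future (an ObjectRef).
    futures = [square.remote(i) for i in range(4)]

    # ray.get() blocks until the results are available and returns them.
    print(ray.get(futures))  # [0, 1, 4, 9]

    ray.shutdown()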
2020
- https://rise.cs.berkeley.edu/projects/ray/
- QUOTE: Ray is a high-performance distributed execution framework targeted at large-scale machine learning and reinforcement learning applications. It achieves scalability and fault tolerance by abstracting the control state of the system in a global control store and keeping all other components stateless. It uses a shared-memory distributed object store to efficiently handle large data through shared memory, and it uses a bottom-up hierarchical scheduling architecture to achieve low-latency and high-throughput scheduling. It uses a lightweight API based on dynamic task graphs and actors to express a wide range of applications in a flexible manner.
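To illustrate the "dynamic task graphs and actors" API described in the quote above, here is a hedged sketch of a Ray actor: decorating a class with @ray.remote turns it into a stateful worker process whose methods are invoked asynchronously with .remote() and whose results are retrieved with ray.get(). The Counter class and its methods are illustrative names, not part of Ray's API.

    import ray

    ray.init()

    # An actor: a stateful worker process created from a class decorated with @ray.remote.
    @ray.remote
    class Counter:
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1
            return self.value

    # Instantiate the actor; Counter.remote() starts a dedicated worker process.
    counter = Counter.remote()

    # Method calls are asynchronous and return futures; the actor processes them in order.
    results = ray.get([counter.increment.remote() for _ in range(3)])
    print(results)  # [1, 2, 3]

    ray.shutdown()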