Overview#
Ray is an open-source unified framework for scaling AI and Python applications like machine learning. It provides the compute layer for parallel processing so that you don't need to be a distributed systems expert. Ray minimizes the complexity of running individual and end-to-end machine learning workflows in a distributed setting with these components:
Scalable libraries for common machine learning tasks such as data preprocessing, distributed training, hyperparameter tuning, reinforcement learning, and model serving.
Pythonic distributed computing primitives for parallelizing and scaling Python applications.
Integrations and utilities for deploying a Ray cluster alongside existing tools and infrastructure such as Kubernetes, AWS, GCP, and Azure.
For data scientists and machine learning practitioners, Ray lets you scale jobs without needing infrastructure expertise:
Easily parallelize and distribute ML workloads across multiple nodes and GPUs.
Leverage the ML ecosystem with native and extensible integrations.
For ML platform builders and ML engineers, Ray:
Provides compute abstractions for creating a scalable and robust ML platform.
Provides a unified ML API that simplifies onboarding and integration with the broader ML ecosystem.
Reduces friction between development and production by enabling the same Python code to scale seamlessly from a laptop to a large cluster.
For distributed systems engineers, Ray automatically handles key processes:
Orchestration–Managing the various components of a distributed system.
Scheduling–Coordinating when and where tasks are executed.
Fault tolerance–Ensuring tasks complete regardless of inevitable points of failure.
Auto-scaling–Adjusting the resources allocated in response to dynamic demand.
What you can do with Ray#
These are some common ML workloads that individuals, organizations, and companies use Ray to scale when building their AI applications:
Batch inference on CPUs and GPUs
Model serving
Distributed training of large models
Parallel hyperparameter tuning experiments
Reinforcement learning
ML platform
Ray framework#
Stack of Ray libraries - unified toolkit for ML workloads.
Ray’s unified compute framework consists of three layers:
Ray AI Libraries–An open-source, Python, domain-specific set of libraries that equip ML engineers, data scientists, and researchers with a scalable and unified toolkit for ML applications.
Ray Core–An open-source, Python, general purpose, distributed computing library that enables ML engineers and Python developers to scale Python applications and accelerate machine learning workloads.
Ray Clusters–A set of worker nodes connected to a common Ray head node. Ray clusters can be fixed-size, or they can autoscale up and down according to the resources requested by applications running on the cluster.
Scale machine learning workloads
Build ML applications with a toolkit of libraries for distributed data processing, model training, tuning, reinforcement learning, model serving, and more.
Ray AI Libraries
Build distributed applications
Build and run distributed applications with a simple and flexible API. Parallelize single-machine code with little to zero code changes.
Ray Core
Deploy large-scale workloads
Deploy workloads on AWS, GCP, Azure, or on premises. Use Ray cluster managers to run Ray on existing Kubernetes, YARN, or Slurm clusters.
Ray Clusters
Each of Ray’s five native libraries distributes a specific ML task:
Data: Scalable, framework-agnostic data loading and transformation across training, tuning, and prediction.
Train: Distributed multi-node and multi-core model training with fault tolerance that integrates with popular training libraries.
Tune: Scalable hyperparameter tuning to optimize model performance.
Serve: Scalable and programmable serving to deploy models for online inference, with optional microbatching to improve performance.
RLlib: Scalable distributed reinforcement learning workloads.
Ray’s libraries are for data scientists and ML engineers alike. For data scientists, these libraries can be used to scale individual workloads as well as end-to-end ML applications. For ML engineers, these libraries provide scalable platform abstractions that make it easy to onboard and integrate tooling from the broader ML ecosystem.
For custom applications, the Ray Core library enables Python developers to easily build scalable, distributed systems that can run on a laptop, cluster, cloud, or Kubernetes. It’s the foundation that Ray AI libraries and third-party integrations (Ray ecosystem) are built on.
Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing
ecosystem of community integrations.
Ray for ML Infrastructure#
Tip
We’d love to hear from you if you are using Ray to build an ML platform! Fill out this short form to get involved.
Ray and its AI libraries provide a unified compute runtime for teams looking to simplify their ML platform.
Ray’s libraries such as Ray Train, Ray Data, and Ray Serve can be used to compose end-to-end ML workflows, providing features and APIs for
data preprocessing as part of training, and transitioning from training to serving.
Why Ray for ML Infrastructure?#
Ray’s AI libraries simplify the ecosystem of machine learning frameworks, platforms, and tools, by providing a seamless, unified, and open experience for scalable ML:
1. Seamless Dev to Prod: Ray’s AI libraries reduce friction going from development to production. With Ray and its libraries, the same Python code scales seamlessly from a laptop to a large cluster.
2. Unified ML API and Runtime: Ray’s APIs enable swapping between popular frameworks, such as XGBoost, PyTorch, and Hugging Face, with minimal code changes. Everything from training to serving runs on a single runtime (Ray + KubeRay).
3. Open and Extensible: Ray is fully open-source and can run on any cluster, cloud, or Kubernetes. Build custom components and integrations on top of scalable developer APIs.
Example ML Platforms built on Ray#
Merlin is Shopify’s ML platform built on Ray. It enables fast-iteration and scaling of distributed applications such as product categorization and recommendations.
Shopify’s Merlin architecture built on Ray.#
Spotify uses Ray for advanced applications that include personalizing content recommendations for home podcasts, and personalizing Spotify Radio track sequencing.
How the Ray ecosystem empowers ML scientists and engineers at Spotify.#
The following highlights feature companies leveraging Ray’s unified API to build simpler, more flexible ML platforms.
[Blog] The Magic of Merlin - Shopify’s New ML Platform
[Slides] Large Scale Deep Learning Training and Tuning with Ray
[Blog] Griffin: How Instacart’s ML Platform Tripled in a year
[Talk] Predibase - A low-code deep learning platform built for scale
[Blog] Building a ML Platform with Kubeflow and Ray on GKE
[Talk] Ray Summit Panel - ML Platform on Ray
Deploying Ray for ML platforms#
Here, we describe how you might use or deploy Ray in your infrastructure. There are two main deployment patterns: pick and choose, and within existing platforms.
The core idea is that Ray can be complementary to your existing infrastructure and integration tools.
Design Principles#
Ray and its libraries handle the heavyweight compute aspects of AI apps and services.
Ray relies on external integrations (e.g., Tecton, MLFlow, W&B) for Storage and Tracking.
Workflow Orchestrators (e.g., AirFlow) are an optional component that can be used for scheduling recurring jobs, launching new Ray clusters for jobs, and running non-Ray compute steps.
Lightweight orchestration of task graphs within a single Ray app can be handled using Ray tasks.
Ray libraries can be used independently, within an existing ML platform, or to build a Ray-native ML platform.
Pick and choose your own libraries#
You can pick and choose which Ray AI libraries you want to use.
This is applicable if you are an ML engineer who wants to independently use a Ray library for a specific AI app or service use case and do not need to integrate with existing ML platforms.
For example, Alice wants to use RLlib to train models for her work project. Bob wants to use Ray Serve to deploy his model pipeline. In both cases, Alice and Bob can leverage these libraries independently without any coordination.
This scenario describes most usages of Ray libraries today.
In the above diagram:
Only one library is used – showing that you can pick and choose and do not need to replace all of your ML infrastructure to use Ray.
You can use one of Ray’s many deployment modes to launch and manage Ray clusters and Ray applications.
Ray AI libraries can read data from external storage systems such as Amazon S3 / Google Cloud Storage, as well as store results there.
Existing ML Platform integration#
You may already have an existing machine learning platform but want to use some subset of Ray’s ML libraries. For example, an ML engineer wants to use Ray within the ML Platform their organization has purchased (e.g., SageMaker, Vertex).
Ray can complement existing machine learning platforms by integrating with existing pipeline/workflow orchestrators, storage, and tracking services, without requiring a replacement of your entire ML platform.
In the above diagram:
A workflow orchestrator such as AirFlow, Oozie, SageMaker Pipelines, etc. is responsible for scheduling and creating Ray clusters and running Ray apps and services. The Ray application may be part of a larger orchestrated workflow (e.g., Spark ETL, then Training on Ray).
Lightweight orchestration of task graphs can be handled entirely within Ray. External workflow orchestrators will integrate nicely but are only needed if running non-Ray steps.
Ray clusters can also be created for interactive use (e.g., Jupyter notebooks, Google Colab, Databricks Notebooks, etc.).
Ray Train, Data, and Serve provide integration with Feature Stores like Feast for Training and Serving.
Ray Train and Tune provide integration with tracking services such as MLFlow and Weights & Biases.
Installing Ray#
Ray currently officially supports x86_64 and aarch64 (ARM) on Linux, as well as Apple silicon (M1) hardware.
Ray on Windows is currently in beta.
Official Releases#
From Wheels#
You can install the latest official version of Ray from PyPI on Linux, Windows,
and macOS by choosing the option that best matches your use case.
Recommended
For machine learning applications
pip install -U "ray[data,train,tune,serve]"
# For reinforcement learning support, install RLlib instead.
# pip install -U "ray[rllib]"
For general Python applications
pip install -U "ray[default]"
# If you don't want Ray Dashboard or Cluster Launcher, install Ray with minimal dependencies instead.
# pip install -U "ray"
Advanced
Command                            Installed components
pip install -U "ray"               Core
pip install -U "ray[default]"      Core, Dashboard, Cluster Launcher
pip install -U "ray[data]"         Core, Data
pip install -U "ray[train]"        Core, Train
pip install -U "ray[tune]"         Core, Tune
pip install -U "ray[serve]"        Core, Dashboard, Cluster Launcher, Serve
pip install -U "ray[serve-grpc]"   Core, Dashboard, Cluster Launcher, Serve with gRPC support
pip install -U "ray[rllib]"        Core, Tune, RLlib
pip install -U "ray[all]"          Core, Dashboard, Cluster Launcher, Data, Train, Tune, Serve, RLlib. This option isn’t recommended; specify only the extras you need instead.
Tip
You can combine installation extras.
For example, to install Ray with Dashboard, Cluster Launcher, and Train support, you can run:
pip install -U "ray[default,train]"
Daily Releases (Nightlies)#
You can install the nightly Ray wheels via the following links. These daily releases are tested via automated tests but do not go through the full release process. To install these wheels, use the following pip command and wheels:
# Clean removal of previous install
pip uninstall -y ray
# Install Ray with support for the dashboard + cluster launcher
pip install -U "ray[default] @ LINK_TO_WHEEL.whl"
# Install Ray with minimal dependencies
# pip install -U LINK_TO_WHEEL.whl
Linux
Linux Python 3.9 (x86_64)
Linux Python 3.9 (aarch64)
Linux Python 3.10 (x86_64)
Linux Python 3.10 (aarch64)
Linux Python 3.11 (x86_64)
Linux Python 3.11 (aarch64)
Linux Python 3.12 (x86_64)
Linux Python 3.12 (aarch64)
Linux Python 3.13 (x86_64) (beta)
Linux Python 3.13 (aarch64) (beta)
MacOS
MacOS Python 3.9 (x86_64)
MacOS Python 3.9 (arm64)
MacOS Python 3.10 (x86_64)
MacOS Python 3.10 (arm64)
MacOS Python 3.11 (x86_64)
MacOS Python 3.11 (arm64)
MacOS Python 3.12 (x86_64)
MacOS Python 3.12 (arm64)
MacOS Python 3.13 (x86_64) (beta)
MacOS Python 3.13 (arm64) (beta)
Windows (beta)
Windows Python 3.9
Windows Python 3.10
Windows Python 3.11
Windows Python 3.12
Note
On Windows, support for multi-node Ray clusters is currently experimental and untested.
If you run into issues please file a report at ray-project/ray#issues.
Note
Usage stats collection is enabled by default (it can be disabled) for nightly wheels, including both local clusters started via ray.init() and remote clusters started via the CLI.
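For example, collection can be turned off with the RAY_USAGE_STATS_ENABLED environment variable before starting Ray (a minimal sketch; other opt-out mechanisms exist in the usage stats documentation):

```shell
# Disable usage stats collection for clusters started in this shell session.
export RAY_USAGE_STATS_ENABLED=0
```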
Installing from a specific commit#
You can install the Ray wheels of any particular commit on master with the following template. You need to specify the commit hash, Ray version, Operating System, and Python version:
pip install https://s3-us-west-2.amazonaws.com/ray-wheels/master/{COMMIT_HASH}/ray-{RAY_VERSION}-{PYTHON_VERSION}-{PYTHON_VERSION}-{OS_VERSION}.whl
For example, here are the Ray 3.0.0.dev0 wheels for Python 3.9, MacOS for commit 4f2ec46c3adb6ba9f412f09a9732f436c4a5d0c9:
pip install https://s3-us-west-2.amazonaws.com/ray-wheels/master/4f2ec46c3adb6ba9f412f09a9732f436c4a5d0c9/ray-3.0.0.dev0-cp39-cp39-macosx_10_15_x86_64.whl
There are minor variations to the format of the wheel filename; it’s best to match against the format in the URLs listed in the Nightlies section.
Here’s a summary of the variations:
For MacOS, commits predating August 7, 2021 will have macosx_10_13 in the filename instead of macosx_10_15.
M1 Mac (Apple Silicon) Support#
Ray supports machines running Apple silicon (such as M1 Macs).
Multi-node clusters are untested. To get started with local Ray development:
Install miniforge.
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
bash Miniforge3-MacOSX-arm64.sh
rm Miniforge3-MacOSX-arm64.sh # Cleanup.
Ensure you’re using the miniforge environment (you should see (base) in your terminal).
source ~/.bash_profile
conda activate
Install Ray as you normally would.
pip install ray
Windows Support#
Windows support is in Beta. Ray supports running on Windows with the following caveats (only the first is
Ray-specific, the rest are true anywhere Windows is used):
Multi-node Ray clusters are untested.
Filenames are tricky on Windows and there still may be a few places where Ray
assumes UNIX filenames rather than Windows ones. This can be true in downstream
packages as well.
Performance on Windows is known to be slower since opening files on Windows
is considerably slower than on other operating systems. This can affect logging.
Windows does not have a copy-on-write forking model, so spinning up new
processes can require more memory.
Submit any issues you encounter to
GitHub.
Installing Ray on Arch Linux#
Note: Installing Ray on Arch Linux is not tested by the Ray project developers.
Ray is available on Arch Linux via the Arch User Repository (AUR) as
python-ray.
You can manually install the package by following the instructions on the
Arch Wiki or use an AUR helper like yay (recommended for ease of install)
as follows:
yay -S python-ray
To discuss any issues related to this package refer to the comments section
on the AUR page of python-ray here.
Installing From conda-forge#
Ray can also be installed as a conda package on Linux and Windows.
# also works with mamba
conda create -c conda-forge python=3.9 -n ray
conda activate ray
# Install Ray with support for the dashboard + cluster launcher
conda install -c conda-forge "ray-default"
# Install Ray with minimal dependencies
# conda install -c conda-forge ray
To install Ray libraries, use pip as above or conda/mamba.
conda install -c conda-forge "ray-data" # installs Ray + dependencies for Ray Data
conda install -c conda-forge "ray-train" # installs Ray + dependencies for Ray Train
conda install -c conda-forge "ray-tune" # installs Ray + dependencies for Ray Tune
conda install -c conda-forge "ray-serve" # installs Ray + dependencies for Ray Serve
conda install -c conda-forge "ray-rllib" # installs Ray + dependencies for Ray RLlib
For a complete list of available ray libraries on Conda-forge, have a look
at https://anaconda.org/conda-forge/ray-default
Note
Ray conda packages are maintained by the community, not the Ray team. While using a conda environment, it is recommended to install Ray from PyPI using pip install ray in the newly created environment.
Building Ray from Source#
Installing from pip should be sufficient for most Ray users.
However, should you need to build from source, follow these instructions for building Ray.
Docker Source Images#
Users can pull a Docker image from the rayproject/ray Docker Hub repository.
The images include Ray and all required dependencies. They come with Anaconda and various versions of Python.
Images are tagged with the format {Ray version}[-{Python version}][-{Platform}]. Ray version tag can be one of the following:
Ray version tag   Description
latest            The most recent Ray release.
x.y.z             A specific Ray release, e.g. 2.31.0.
nightly           The most recent Ray development build (a recent commit from GitHub master).
The optional Python version tag specifies the Python version in the image. All Python versions supported by Ray are available, e.g. py39, py310 and py311. If unspecified, the tag points to an image of the lowest Python version that the Ray version supports.
The optional Platform tag specifies the platform the image is intended for:
Platform tag   Description
-cpu           Based on an Ubuntu image.
-cuXX          Based on an NVIDIA CUDA image with the specified CUDA version. Requires the NVIDIA Docker Runtime.
-gpu           Alias for a specific -cuXX tagged image.
&lt;no tag&gt;       Alias for the -cpu tagged images.
Example: for the nightly image based on Python 3.9 and without GPU support, the tag is nightly-py39-cpu.
If you want to tweak some aspects of these images and build them locally, refer to the following script:
cd ray
./build-docker.sh
Review images by listing them:
docker images
Output should look something like the following:
REPOSITORY TAG IMAGE ID CREATED SIZE
rayproject/ray dev 7243a11ac068 2 days ago 1.11 GB
rayproject/base-deps latest 5606591eeab9 8 days ago 512 MB
ubuntu 22.04 1e4467b07108 3 weeks ago 73.9 MB
Launch Ray in Docker#
Start out by launching the deployment container.
docker run --shm-size=<shm-size> -t -i rayproject/ray
Replace <shm-size> with a limit appropriate for your system, for example
512M or 2G. A good estimate for this is to use roughly 30% of your available memory (this is
what Ray uses internally for its Object Store). The -t and -i options here are required to support
interactive use of the container.
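The 30% estimate above can be computed on the fly; this is a rough Linux-only sketch, and the awk expression is illustrative rather than an official helper:

```shell
# Compute ~30% of total memory in megabytes from /proc/meminfo (Linux).
SHM_MB=$(awk '/MemTotal/ {printf "%d", $2 * 0.3 / 1024}' /proc/meminfo)
echo "--shm-size=${SHM_MB}m"
# Then launch the container with that limit:
# docker run --shm-size="${SHM_MB}m" -t -i rayproject/ray
```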
If you use a GPU version of the Docker image, remember to add the --gpus all option. Replace &lt;ray-version&gt; with your target Ray version in the following command:
docker run --shm-size=<shm-size> -t -i --gpus all rayproject/ray:<ray-version>-gpu
Note: Ray requires a large amount of shared memory because each object
store keeps all of its objects in shared memory, so the amount of shared memory
will limit the size of the object store.
You should now see a prompt that looks something like:
root@ebc78f68d100:/ray#
Test if the installation succeeded#
To test if the installation was successful, try running some tests. This assumes
that you’ve cloned the git repository.
python -m pytest -v python/ray/tests/test_mini.py
Installed Python dependencies#
Our docker images are shipped with pre-installed Python dependencies
required for Ray and its libraries.
We publish the dependencies that are installed in our ray Docker images for Python 3.9.
Install Ray Java with Maven#
Note
All Ray Java APIs are experimental and only supported by the community.
Before installing Ray Java with Maven, you should install Ray Python with pip install -U ray. Note that the versions of Ray Java and Ray Python must match.
Note that nightly Ray Python wheels are also required if you want to install the Ray Java snapshot version.
Find the latest Ray Java release in the central repository. To use the latest Ray Java release in your application, add the following entries in your pom.xml:
<dependency>
  <groupId>io.ray</groupId>
  <artifactId>ray-api</artifactId>
  <version>${ray.version}</version>
</dependency>
<dependency>
  <groupId>io.ray</groupId>
  <artifactId>ray-runtime</artifactId>
  <version>${ray.version}</version>
</dependency>
The latest Ray Java snapshot can be found in sonatype repository. To use the latest Ray Java snapshot in your application, add the following entries in your pom.xml:
<!-- only needed for snapshot version of ray -->
<repositories>
  <repository>
    <id>sonatype</id>
    <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <groupId>io.ray</groupId>
    <artifactId>ray-api</artifactId>
    <version>${ray.version}</version>
  </dependency>
  <dependency>
    <groupId>io.ray</groupId>
    <artifactId>ray-runtime</artifactId>
    <version>${ray.version}</version>
  </dependency>
</dependencies>
Note
When you run pip install to install Ray, Java jars are installed as well. The above dependencies are only used to build your Java code and to run your code in local mode.
If you want to run your Java code in a multi-node Ray cluster, it’s better to exclude Ray jars when packaging your code to avoid jar conflicts if the versions (the Ray installed with pip install and the Maven dependencies) don’t match.
Install Ray C++#
Note
All Ray C++ APIs are experimental and only supported by the community.
You can install and use Ray C++ API as follows.
pip install -U "ray[cpp]"
# Create a Ray C++ project template to start with.
ray cpp --generate-bazel-project-template-to ray-template
Note
If you build Ray from source, remove the build option build --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" from the file cpp/example/.bazelrc before running your application. The related issue is this.
Ray Use Cases#
This page indexes common Ray use cases for scaling ML.
It contains highlighted references to blogs, examples, and tutorials also located
elsewhere in the Ray documentation.
LLMs and Gen AI#
Large language models (LLMs) and generative AI are rapidly changing industries, and demand compute at an astonishing pace. Ray provides a distributed compute framework for scaling these models, allowing developers to train and deploy models faster and more efficiently. With specialized libraries for data streaming, training, fine-tuning, hyperparameter tuning, and serving, Ray simplifies the process of developing and deploying large-scale AI models.
Explore LLMs and Gen AI examples
Batch Inference#
Batch inference is the process of generating model predictions on a large “batch” of input data.
Ray for batch inference works with any cloud provider and ML framework,
and is fast and cheap for modern deep learning applications.
It scales from single machines to large clusters with minimal code changes.
As a Python-first framework, you can easily express and interactively develop your inference workloads in Ray.
To learn more about running batch inference with Ray, see the batch inference guide.
Explore batch inference examples
Model Serving#
Ray Serve is well suited for model composition, enabling you to build a complex inference service consisting of multiple ML models and business logic all in Python code.
It supports complex model deployment patterns requiring the orchestration of multiple Ray actors, where different actors provide inference for different models. Serve handles both batch and online inference and can scale to thousands of models in production.
Deployment patterns with Ray Serve. (Click image to enlarge.)#
Learn more about model serving with the following resources.
[Talk] Productionizing ML at Scale with Ray Serve
[Blog] Simplify your MLOps with Ray & Ray Serve
[Guide] Getting Started with Ray Serve
[Guide] Model Composition in Serve
[Gallery] Serve Examples Gallery
[Gallery] More Serve Use Cases on the Blog
Hyperparameter Tuning#
The Ray Tune library enables any parallel Ray workload to be run under a hyperparameter tuning algorithm.
Running multiple hyperparameter tuning experiments is a pattern apt for distributed computing because each experiment is independent of one another. Ray Tune handles the hard bit of distributing hyperparameter optimization and makes available key features such as checkpointing the best result, optimizing scheduling, and specifying search patterns.
Distributed tuning with distributed training per trial.#
Learn more about the Tune library with the following talks and user guides.
[Guide] Getting Started with Ray Tune
[Blog] How to distribute hyperparameter tuning with Ray Tune
[Talk] Simple Distributed Hyperparameter Optimization
[Blog] Hyperparameter Search with 🤗 Transformers
[Gallery] Ray Tune Examples Gallery
[Gallery] More Tune use cases on the Blog
Distributed Training#
The Ray Train library integrates many distributed training frameworks under a simple Trainer API,
providing distributed orchestration and management capabilities out of the box.
In contrast to training many models, model parallelism partitions a large model across many machines for training. Ray Train has built-in abstractions for distributing shards of models and running training in parallel.
Model parallelism pattern for distributed large model training.#
Learn more about the Train library with the following talks and user guides.
[Talk] Ray Train, PyTorch, TorchX, and distributed deep learning
[Blog] Elastic Distributed Training with XGBoost on Ray
[Guide] Getting Started with Ray Train
[Example] Fine-tune a 🤗 Transformers model
[Gallery] Ray Train Examples Gallery
[Gallery] More Train Use Cases on the Blog
Reinforcement Learning#
RLlib is an open-source library for reinforcement learning (RL), offering support for production-level, highly distributed RL workloads while maintaining unified and simple APIs for a large variety of industry applications. RLlib is used by industry leaders in many different verticals, such as climate control, industrial control, manufacturing and logistics, finance, gaming, automobile, robotics, boat design, and many others.
Decentralized distributed proximal policy optimization (DD-PPO) architecture.#
Learn more about reinforcement learning with the following resources.
[Course] Applied Reinforcement Learning with RLlib
[Blog] Intro to RLlib: Example Environments
[Guide] Getting Started with RLlib
[Talk] Deep reinforcement learning at Riot Games
[Gallery] RLlib Examples Gallery
[Gallery] More RL Use Cases on the Blog
ML Platform#
Ray and its AI libraries provide a unified compute runtime for teams looking to simplify their ML platform.
Ray’s libraries such as Ray Train, Ray Data, and Ray Serve can be used to compose end-to-end ML workflows, providing features and APIs for
data preprocessing as part of training, and transitioning from training to serving.
Read more about building ML platforms with Ray in this section.
End-to-End ML Workflows#
The following highlights examples utilizing Ray AI libraries to implement end-to-end ML workflows.
[Example] Text classification with Ray
[Example] Object detection with Ray
[Example] Machine learning on tabular data
[Example] AutoML for Time Series with Ray
Large Scale Workload Orchestration#
The following highlights feature projects leveraging Ray Core’s distributed APIs to simplify the orchestration of large scale workloads.
[Blog] Highly Available and Scalable Online Applications on Ray at Ant Group
[Blog] Ray Forward 2022 Conference: Hyper-scale Ray Application Use Cases
[Blog] A new world record on the CloudSort benchmark using Ray
[Example] Speed up your web crawler by parallelizing it with Ray
Getting Started#
Ray is an open source unified framework for scaling AI and Python applications. It provides a simple, universal API for building distributed applications that can scale from a laptop to a cluster.
What’s Ray?#
Ray simplifies distributed computing by providing:
Scalable compute primitives: Tasks and actors for painless parallel programming
Specialized AI libraries: Tools for common ML workloads like data processing, model training, hyperparameter tuning, and model serving
Unified resource management: Seamless scaling from laptop to cloud with automatic resource handling
Choose Your Path#
Select the guide that matches your needs:
Scale ML workloads: Ray Libraries Quickstart
Scale general Python applications: Ray Core Quickstart
Deploy to the cloud: Ray Clusters Quickstart
Debug and monitor applications: Debugging and Monitoring Quickstart
Ray AI Libraries Quickstart#
Use individual libraries for ML workloads. Each library specializes in a specific part of the ML workflow, from data processing to model serving. Click on the dropdowns for your workload below.
Data: Scalable Datasets for ML
Ray Data provides distributed data processing optimized for machine learning and AI workloads. It efficiently streams data through data pipelines.
Here’s an example of how to scale offline inference and training ingest with Ray Data.
Note
To run this example, install Ray Data:
pip install -U "ray[data]"
from typing import Dict
import numpy as np
import ray
# Create datasets from on-disk files, Python objects, and cloud storage like S3.
ds = ray.data.read_csv("s3://anonymous@ray-example-data/iris.csv")
# Apply functions to transform data. Ray Data executes transformations in parallel.
def compute_area(batch: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
length = batch["petal length (cm)"]
width = batch["petal width (cm)"]
batch["petal area (cm^2)"] = length * width
return batch
transformed_ds = ds.map_batches(compute_area)
# Iterate over batches of data.
for batch in transformed_ds.iter_batches(batch_size=4):
print(batch)
# Save dataset contents to on-disk files or cloud storage.
transformed_ds.write_parquet("local:///tmp/iris/")
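The batch format that Ray Data passes to map_batches is just a dict of NumPy arrays, so you can unit-test the transformation above without a Ray cluster at all. A minimal sketch (the sample values are made up; the column names match the iris CSV used above):

```python
from typing import Dict

import numpy as np


def compute_area(batch: Dict[str, np.ndarray]) -> Dict[str, np.ndarray]:
    length = batch["petal length (cm)"]
    width = batch["petal width (cm)"]
    batch["petal area (cm^2)"] = length * width
    return batch


# A batch is a plain dict mapping column names to NumPy arrays, so the
# transformation can be exercised on a hand-built batch, no cluster needed.
batch = {
    "petal length (cm)": np.array([1.4, 1.3, 4.7]),
    "petal width (cm)": np.array([0.2, 0.2, 1.4]),
}
out = compute_area(batch)
print(out["petal area (cm^2)"])  # [0.28 0.26 6.58]
```

Because the function is pure Python, any batch-level bug shows up here before you scale the pipeline out.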
Learn more about Ray Data
Train: Distributed Model Training
Ray Train makes distributed model training simple. It abstracts away the complexity of setting up distributed training across popular frameworks like PyTorch and TensorFlow.
PyTorch
This example shows how you can use Ray Train with PyTorch.
Note
To run this example, install the Ray Train and PyTorch packages:
pip install -U "ray[train]" torch torchvision
Set up your dataset and model.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
def get_dataset():
return datasets.FashionMNIST(
root="/tmp/data",
train=True,
download=True,
transform=ToTensor(),
)
class NeuralNetwork(nn.Module):
def __init__(self):
super().__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28 * 28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10),
)
def forward(self, inputs):
inputs = self.flatten(inputs)
logits = self.linear_relu_stack(inputs)
return logits
Now define your single-worker PyTorch training function.
def train_func():
num_epochs = 3
batch_size = 64
dataset = get_dataset()
dataloader = DataLoader(dataset, batch_size=batch_size)
model = NeuralNetwork()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for epoch in range(num_epochs):
for inputs, labels in dataloader:
optimizer.zero_grad()
pred = model(inputs)
loss = criterion(pred, labels)
loss.backward()
optimizer.step()
print(f"epoch: {epoch}, loss: {loss.item()}")
This training function can be executed with:
train_func()
Convert this to a distributed multi-worker training function.
Use the ray.train.torch.prepare_model and
ray.train.torch.prepare_data_loader utility functions to
set up your model and data for distributed training.
This automatically wraps the model with DistributedDataParallel
and places it on the right device, and adds DistributedSampler to the DataLoaders.
import ray.train.torch
def train_func_distributed():
num_epochs = 3
batch_size = 64
dataset = get_dataset()
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
dataloader = ray.train.torch.prepare_data_loader(dataloader)
model = NeuralNetwork()
model = ray.train.torch.prepare_model(model)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
for epoch in range(num_epochs):
if ray.train.get_context().get_world_size() > 1:
dataloader.sampler.set_epoch(epoch)
for inputs, labels in dataloader:
optimizer.zero_grad()
pred = model(inputs)
loss = criterion(pred, labels)
loss.backward()
optimizer.step()
print(f"epoch: {epoch}, loss: {loss.item()}")
Instantiate a TorchTrainer
with 4 workers, and use it to run the new training function.
from ray.train.torch import TorchTrainer
from ray.train import ScalingConfig
# For GPU Training, set `use_gpu` to True.
use_gpu = False
trainer = TorchTrainer(
train_func_distributed,
scaling_config=ScalingConfig(num_workers=4, use_gpu=use_gpu)
)
results = trainer.fit()
To accelerate the training job with GPUs, make sure you have a GPU environment configured, then set use_gpu to True. If you don’t have a GPU environment, Anyscale provides a development workspace integrated with an autoscaling GPU cluster for this purpose.
TensorFlow
This example shows how you can use Ray Train to set up multi-worker training
with Keras.
Note
To run this example, install the Ray Train and TensorFlow packages:
pip install -U "ray[train]" tensorflow
Set up your dataset and model.
import sys
import numpy as np
if sys.version_info >= (3, 12):
# Tensorflow is not installed for Python 3.12 because of keras compatibility.
sys.exit(0)
else:
import tensorflow as tf
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the [0, 255] range.
# You need to convert them to float32 with values in the [0, 1] range.
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
Now define your single-worker TensorFlow training function.
def train_func():
batch_size = 64
single_worker_dataset = mnist_dataset(batch_size)
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
This training function can be executed with:
train_func()
Now convert this to a distributed multi-worker training function.
Set the global batch size: each worker processes the same batch size
as in the single-worker code.
Choose your TensorFlow distributed training strategy. This example
uses the MultiWorkerMirroredStrategy.
import json
import os
def train_func_distributed():
per_worker_batch_size = 64
# This environment variable will be set by Ray Train.
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
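The TF_CONFIG bookkeeping above is plain JSON parsing, so the batch-size arithmetic can be checked without TensorFlow or a cluster. A standalone sketch with a hypothetical two-worker TF_CONFIG (the host addresses are placeholders, not real endpoints):

```python
import json

# A hypothetical TF_CONFIG, shaped like the one Ray Train sets for a
# 2-worker job. The host addresses here are made up for illustration.
tf_config = json.loads(
    '{"cluster": {"worker": ["host1:12345", "host2:12345"]}, '
    '"task": {"type": "worker", "index": 0}}'
)

per_worker_batch_size = 64
num_workers = len(tf_config["cluster"]["worker"])
global_batch_size = per_worker_batch_size * num_workers
print(num_workers, global_batch_size)  # 2 128
```

Each worker still sees batches of 64; the strategy splits every global batch of 128 across the two workers.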
Instantiate a TensorflowTrainer
with 4 workers, and use it to run the new training function.
from ray.train.tensorflow import TensorflowTrainer
from ray.train import ScalingConfig
# For GPU Training, set `use_gpu` to True.
use_gpu = False
trainer = TensorflowTrainer(train_func_distributed, scaling_config=ScalingConfig(num_workers=4, use_gpu=use_gpu))
trainer.fit()
To accelerate the training job with GPUs, make sure you have a GPU environment configured, then set use_gpu to True. If you don’t have a GPU environment, Anyscale provides a development workspace integrated with an autoscaling GPU cluster for this purpose.
Learn more about Ray Train
Tune: Hyperparameter Tuning at Scale
Ray Tune is a library for hyperparameter tuning at any scale.
It automatically finds the best hyperparameters for your models with efficient distributed search algorithms.
With Tune, you can launch a multi-node distributed hyperparameter sweep in less than 10 lines of code, supporting any deep learning framework including PyTorch, TensorFlow, and Keras.
Note
To run this example, install Ray Tune:
pip install -U "ray[tune]"
This example runs a small grid search with an iterative training function.
from ray import tune
def objective(config):  # Define an objective function.
score = config["a"] ** 2 + config["b"]
return {"score": score}
search_space = {  # Define a search space.
"a": tune.grid_search([0.001, 0.01, 0.1, 1.0]),
"b": tune.choice([1, 2, 3]),
}
tuner = tune.Tuner(objective, param_space=search_space)  # Start a Tune run.
results = tuner.fit()
print(results.get_best_result(metric="score", mode="min").config)
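Conceptually, the grid search above evaluates the objective over every combination in the grid and keeps the best result. A dependency-free sketch of that selection logic using the same scoring function (tune.choice samples b at random, so b is fixed to 1 here for determinism; Tune's real value is running these trials in parallel with smarter search algorithms):

```python
import itertools


def objective(config):
    return {"score": config["a"] ** 2 + config["b"]}


# Cross-product of the grid values, mirroring tune.grid_search over "a".
grid = {"a": [0.001, 0.01, 0.1, 1.0], "b": [1]}  # b fixed for illustration
trials = [
    dict(zip(grid, values)) for values in itertools.product(*grid.values())
]
best = min(trials, key=lambda cfg: objective(cfg)["score"])
print(best)  # {'a': 0.001, 'b': 1}
```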
If TensorBoard is installed (pip install tensorboard), you can automatically visualize all trial results:
tensorboard --logdir ~/ray_results
Learn more about Ray Tune
Serve: Scalable Model Serving
Ray Serve provides scalable and programmable serving for ML models and business logic. Deploy models from any framework with production-ready performance.
Note
To run this example, install Ray Serve and scikit-learn:
pip install -U "ray[serve]" scikit-learn
This example serves a scikit-learn gradient boosting classifier.
import requests
from starlette.requests import Request
from typing import Dict
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from ray import serve
# Train model.
iris_dataset = load_iris()
model = GradientBoostingClassifier()
model.fit(iris_dataset["data"], iris_dataset["target"])
@serve.deployment
class BoostingModel:
def __init__(self, model):
self.model = model
self.label_list = iris_dataset["target_names"].tolist()
async def __call__(self, request: Request) -> Dict:
payload = (await request.json())["vector"]
print(f"Received http request with data {payload}")
prediction = self.model.predict([payload])[0]
human_name = self.label_list[prediction]
return {"result": human_name}
# Deploy model.
serve.run(BoostingModel.bind(model), route_prefix="/iris")
# Query it!
sample_request_input = {"vector": [1.2, 1.0, 1.1, 0.9]}
response = requests.get(
"http://localhost:8000/iris", json=sample_request_input)
print(response.text)
The response shows {"result": "versicolor"}.
Learn more about Ray Serve
RLlib: Industry-Grade Reinforcement Learning
RLlib is a reinforcement learning (RL) library that offers high-performance implementations of popular RL algorithms and supports various training environments. RLlib offers high scalability and unified APIs for a variety of industry and research applications.
Note
To run this example, install RLlib and either TensorFlow or PyTorch:
pip install -U "ray[rllib]" tensorflow # or torch
You may also need CMake installed on your system.
import gymnasium as gym
import numpy as np
import torch
from typing import Dict, Tuple, Any, Optional
from ray.rllib.algorithms.ppo import PPOConfig
# Define your problem using python and Farama-Foundation's gymnasium API:
class SimpleCorridor(gym.Env):
"""Corridor environment where an agent must learn to move right to reach the exit.
---------------------
| S | 1 | 2 | 3 | G | S=start; G=goal; corridor_length=5
---------------------
Actions:
0: Move left
1: Move right
Observations:
A single float representing the agent's current position (index)
starting at 0.0 and ending at corridor_length
Rewards:
-0.1 for each step
+1.0 when reaching the goal
Episode termination:
When the agent reaches the goal (position >= corridor_length)
"""
def __init__(self, config):
self.end_pos = config["corridor_length"]
self.cur_pos = 0.0
self.action_space = gym.spaces.Discrete(2) # 0=left, 1=right
self.observation_space = gym.spaces.Box(0.0, self.end_pos, (1,), np.float32)
def reset(
self, *, seed: Optional[int] = None, options: Optional[Dict] = None
) -> Tuple[np.ndarray, Dict]:
"""Reset the environment for a new episode.
Args:
seed: Random seed for reproducibility
options: Additional options (not used in this environment)
Returns:
Initial observation of the new episode and an info dict.
"""
super().reset(seed=seed) # Initialize RNG if seed is provided
self.cur_pos = 0.0
# Return initial observation.
return np.array([self.cur_pos], np.float32), {}
def step(self, action: int) -> Tuple[np.ndarray, float, bool, bool, Dict]:
"""Take a single step in the environment based on the provided action.
Args:
action: 0 for left, 1 for right
Returns:
A tuple of (observation, reward, terminated, truncated, info):
observation: Agent's new position
reward: Reward from taking the action (-0.1 or +1.0)
terminated: Whether episode is done (reached goal)
truncated: Whether episode was truncated (always False here)
info: Additional information (empty dict)
"""
# Walk left if action is 0 and we're not at the leftmost position
if action == 0 and self.cur_pos > 0:
self.cur_pos -= 1
# Walk right if action is 1
elif action == 1:
self.cur_pos += 1
# Set `terminated` flag when end of corridor (goal) reached.
terminated = self.cur_pos >= self.end_pos
truncated = False
# +1 when goal reached, otherwise -0.1.
reward = 1.0 if terminated else -0.1
return np.array([self.cur_pos], np.float32), reward, terminated, truncated, {}
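Before training, it can help to sanity-check the environment's reward arithmetic by replaying the optimal always-move-right policy by hand. A plain-Python sketch that mirrors the step logic above on a corridor of length 10 (the length used by the test environment later), with no gymnasium or RLlib required:

```python
def walk_right(corridor_length: int):
    """Replay the always-move-right policy and tally the reward."""
    pos, total_reward, steps = 0.0, 0.0, 0
    terminated = False
    while not terminated:
        pos += 1  # action 1: move right
        terminated = pos >= corridor_length  # same check as SimpleCorridor.step
        total_reward += 1.0 if terminated else -0.1  # same reward scheme
        steps += 1
    return steps, total_reward


steps, total = walk_right(10)
print(steps, round(total, 2))  # 10 0.1
```

For corridor_length=10 the optimal episode takes 10 steps: nine at -0.1 plus the final +1.0, for a total of 0.1, which is consistent with the success check (total_reward > -0.5) in the inference code below.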
# Create an RLlib Algorithm instance from a PPOConfig object.
print("Setting up the PPO configuration...")
config = (
PPOConfig().environment(
# Env class to use (our custom gymnasium environment).
SimpleCorridor,
# Config dict passed to our custom env's constructor.
# Use corridor with 20 fields (including start and goal).
env_config={"corridor_length": 20},
)
# Parallelize environment rollouts for faster training.
.env_runners(num_env_runners=3)
# Use a smaller network for this simple task
.training(model={"fcnet_hiddens": [64, 64]})
)
# Construct the actual PPO algorithm object from the config.
algo = config.build_algo()
rl_module = algo.get_module()
# Train for n iterations and report results (mean episode rewards).
# Optimal reward calculation:
# - The episode terminates when the position reaches corridor_length (20),
#   so the optimal policy takes 20 right moves.
# - Each step except the last gets -0.1 reward: 19 * (-0.1) = -1.9
# - The final step gets +1.0 reward
# - Total optimal reward: -1.9 + 1.0 = -0.9
print("\nStarting training loop...")
for i in range(5):
results = algo.train()
# Log the metrics from training results
print(f"Iteration {i+1}")
print(f" Training metrics: {results['env_runners']}")
# Save the trained algorithm (optional)
checkpoint_dir = algo.save()
print(f"\nSaved model checkpoint to: {checkpoint_dir}")
print("\nRunning inference with the trained policy...")
# Create a test environment with a shorter corridor to verify the agent's behavior
env = SimpleCorridor({"corridor_length": 10})
# Get the initial observation (should be: [0.0] for the starting position).
obs, info = env.reset()
terminated = truncated = False
total_reward = 0.0
step_count = 0
# Play one episode and track the agent's trajectory
print("\nAgent trajectory:")
positions = [float(obs[0])] # Track positions for visualization
while not terminated and not truncated:
# Compute an action given the current observation
action_logits = rl_module.forward_inference(
{"obs": torch.from_numpy(obs).unsqueeze(0)}
)["action_dist_inputs"].numpy()[
0
] # [0]: Batch dimension=1
# Get the action with highest probability
action = np.argmax(action_logits)
# Log the agent's decision
action_name = "LEFT" if action == 0 else "RIGHT"
print(f" Step {step_count}: Position {obs[0]:.1f}, Action: {action_name}")
# Apply the computed action in the environment
obs, reward, terminated, truncated, info = env.step(action)
positions.append(float(obs[0]))
# Sum up rewards
total_reward += reward
step_count += 1
# Report final results
print(f"\nEpisode complete:")
print(f" Steps taken: {step_count}")
print(f" Total reward: {total_reward:.2f}")
print(f" Final position: {obs[0]:.1f}")
# Verify the agent has learned the optimal policy
if total_reward > -0.5 and obs[0] >= 9.0:
print(" Success! The agent has learned the optimal policy (always move right).")
Learn more about Ray RLlib
Ray Core Quickstart#
Ray Core provides simple primitives for building and running distributed applications. It enables you to turn regular Python or Java functions and classes into distributed stateless tasks and stateful actors with just a few lines of code.
The examples below show you how to:
Convert Python functions to Ray tasks for parallel execution
Convert Python classes to Ray actors for distributed stateful computation
Core: Parallelizing Functions with Ray Tasks
Python
Note
To run this example, install Ray Core:
pip install -U "ray"
Import Ray and initialize it with ray.init().
Then decorate the function with @ray.remote to declare that you want to run this function remotely.
Lastly, call the function with .remote() instead of calling it normally.
This remote call yields a future, a Ray object reference, that you can then fetch with ray.get.
import ray
ray.init()
@ray.remote
def f(x):
return x * x
futures = [f.remote(i) for i in range(4)]
print(ray.get(futures)) # [0, 1, 4, 9]
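For comparison, the same task/future pattern exists in Python's built-in concurrent.futures, but it is limited to a single machine; Ray generalizes the identical model across a cluster. A single-machine sketch of the equivalent code (submit plays the role of f.remote, and result the role of ray.get):

```python
from concurrent.futures import ThreadPoolExecutor


def f(x):
    return x * x


# submit() returns a future immediately; result() blocks until it resolves,
# mirroring the f.remote() / ray.get() pair above.
with ThreadPoolExecutor() as executor:
    futures = [executor.submit(f, i) for i in range(4)]
    print([fut.result() for fut in futures])  # [0, 1, 4, 9]
```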
Java
Note
To run this example, add the ray-api and ray-runtime dependencies in your project.
Use Ray.init to initialize Ray runtime.
Then use Ray.task(...).remote() to convert any Java static method into a Ray task.
The task runs asynchronously in a remote worker process. The remote method returns an ObjectRef,
and you can fetch the actual result with get.
import io.ray.api.ObjectRef;
import io.ray.api.Ray;
import java.util.ArrayList;
import java.util.List;
public class RayDemo {
public static int square(int x) {
return x * x;
}
public static void main(String[] args) {
// Initialize Ray runtime.
Ray.init();
List<ObjectRef<Integer>> objectRefList = new ArrayList<>();
// Invoke the `square` method 4 times remotely as Ray tasks.
// The tasks run in parallel in the background.
for (int i = 0; i < 4; i++) {
objectRefList.add(Ray.task(RayDemo::square, i).remote());
}
// Get the actual results of the tasks.
System.out.println(Ray.get(objectRefList)); // [0, 1, 4, 9]
}
}
In the code block above, we defined some Ray tasks. While tasks are great for stateless operations, sometimes you
need to maintain application state. You can do that with Ray actors.
Learn more about Ray Core
Core: Parallelizing Classes with Ray Actors
Ray provides actors to allow you to parallelize an instance of a class in Python or Java.
When you instantiate a class that is a Ray actor, Ray starts a remote instance
of that class in the cluster. This actor can then execute remote method calls and
maintain its own internal state.
Python
Note
To run this example, install Ray Core:
pip install -U "ray"
import ray
ray.init() # Only call this once.
@ray.remote
class Counter(object):
def __init__(self):
self.n = 0
def increment(self):
self.n += 1
def read(self):
return self.n
counters = [Counter.remote() for i in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures)) # [1, 1, 1, 1]
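For intuition about what happens here: each actor is a class instance that lives in its own worker process and executes method calls one at a time from a message queue. A toy single-machine analogue of that pattern using a thread and a queue (this illustrates the pattern only, not Ray's actual implementation):

```python
import queue
import threading


class ActorThread:
    """Runs a counter in its own thread; calls arrive as queued messages."""

    def __init__(self):
        self._inbox = queue.Queue()
        self._n = 0
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # Messages are processed strictly one at a time, like actor tasks.
        while True:
            method, reply = self._inbox.get()
            if method == "increment":
                self._n += 1
            elif method == "read":
                reply.put(self._n)

    def increment(self):
        self._inbox.put(("increment", None))

    def read(self):
        reply = queue.Queue()
        self._inbox.put(("read", reply))
        return reply.get()


counters = [ActorThread() for _ in range(4)]
for c in counters:
    c.increment()
print([c.read() for c in counters])  # [1, 1, 1, 1]
```

Because the inbox is processed serially, state updates never race, which is the same guarantee a Ray actor gives you per actor instance.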
Java
Note
To run this example, add the ray-api and ray-runtime dependencies in your project.
import io.ray.api.ActorHandle;
import io.ray.api.ObjectRef;
import io.ray.api.Ray;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
public class RayDemo {
public static class Counter {
private int value = 0;
public void increment() {
this.value += 1;
}
public int read() {
return this.value;
}
}
public static void main(String[] args) {
// Initialize Ray runtime.
Ray.init();
List<ActorHandle<Counter>> counters = new ArrayList<>();
// Create 4 actors from the `Counter` class.
// These run in remote worker processes.
for (int i = 0; i < 4; i++) {
counters.add(Ray.actor(Counter::new).remote());
}
// Invoke the `increment` method on each actor.
// This sends an actor task to each remote actor.
for (ActorHandle<Counter> counter : counters) {
counter.task(Counter::increment).remote();
}
// Invoke the `read` method on each actor, and print the results.
List<ObjectRef<Integer>> objectRefList = counters.stream()
.map(counter -> counter.task(Counter::read).remote())
.collect(Collectors.toList());
System.out.println(Ray.get(objectRefList)); // [1, 1, 1, 1]
}
}
Learn more about Ray Core
Ray Cluster Quickstart#
Deploy your applications on Ray clusters on AWS, GCP, Azure, and more, often with minimal changes to your existing code.
Clusters: Launching a Ray Cluster on AWS
Ray programs can run on a single machine, or seamlessly scale to large clusters.
Note
To run this example, install the following:
pip install -U "ray[default]" boto3
If you haven’t already, configure your credentials as described in the documentation for boto3.
Take this simple example that waits for individual nodes to join the cluster.
example.py
import sys
import time
from collections import Counter
import ray
@ray.remote
def get_host_name(x):
import platform
import time
time.sleep(0.01)
return x + (platform.node(),)
def wait_for_nodes(expected):
# Wait for all nodes to join the cluster.
while True:
num_nodes = len(ray.nodes())
if num_nodes < expected:
print(
"{} nodes have joined so far, waiting for {} more.".format(
num_nodes, expected - num_nodes
)
)
sys.stdout.flush()
time.sleep(1)
else:
break
def main():
wait_for_nodes(4)
# Check that objects can be transferred from each node to each other node.
for i in range(10):
print("Iteration {}".format(i))
results = [get_host_name.remote(get_host_name.remote(())) for _ in range(100)]
print(Counter(ray.get(results)))
sys.stdout.flush()
print("Success!")
sys.stdout.flush()
time.sleep(20)
if __name__ == "__main__":
ray.init(address="localhost:6379")
main()
You can also download this example from the GitHub repository.
Store it locally in a file called example.py.
To execute this script in the cloud, download this configuration file,
or copy it here:
cluster.yaml
# A unique identifier for the head node and workers of this cluster.
cluster_name: aws-example-minimal
# Cloud-provider specific configuration.
provider:
type: aws
region: us-west-2
# The maximum number of worker nodes to launch in addition to the head
# node.
max_workers: 3
# Tell the autoscaler the allowed node types and the resources they provide.
# The key is the name of the node type, which is for debugging purposes.
# The node config specifies the launch config and physical instance type.
available_node_types:
ray.head.default:
# The node type's CPU and GPU resources are auto-detected based on AWS instance type.
# If desired, you can override the autodetected CPU and GPU resources advertised to the autoscaler.
# You can also set custom resources.
# For example, to mark a node type as having 1 CPU, 1 GPU, and 5 units of a resource called "custom", set
# resources: {"CPU": 1, "GPU": 1, "custom": 5}
resources: {}
# Provider-specific config for this node type, e.g., instance type. By default
# Ray auto-configures unspecified fields such as SubnetId and KeyName.
# For more documentation on available fields, see
# http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances
node_config:
InstanceType: m5.large
ray.worker.default:
# The minimum number of worker nodes of this type to launch.
# This number should be >= 0.
min_workers: 3
# The maximum number of worker nodes of this type to launch.
# This parameter takes precedence over min_workers.
max_workers: 3
# The node type's CPU and GPU resources are auto-detected based on AWS instance type.
# If desired, you can override the autodetected CPU and GPU resources advertised to the autoscaler.
# You can also set custom resources.
# For example, to mark a node type as having 1 CPU, 1 GPU, and 5 units of a resource called "custom", set
# resources: {"CPU": 1, "GPU": 1, "custom": 5}
resources: {}
# Provider-specific config for this node type, e.g., instance type. By default
# Ray auto-configures unspecified fields such as SubnetId and KeyName.
# For more documentation on available fields, see
# http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances
node_config:
InstanceType: m5.large
Assuming you have stored this configuration in a file called cluster.yaml, you can now launch an AWS cluster as follows:
ray submit cluster.yaml example.py --start
Learn more about launching Ray Clusters on AWS, GCP, Azure, and more
Clusters: Launching a Ray Cluster on Kubernetes
Ray programs can run on a single-node Kubernetes cluster, or seamlessly scale to larger clusters.
Learn more about launching Ray Clusters on Kubernetes
Clusters: Launching a Ray Cluster on Anyscale
Anyscale is the company behind Ray. The Anyscale platform provides an enterprise-grade Ray deployment on top of your AWS, GCP, Azure, or on-prem Kubernetes clusters.
Try Ray on Anyscale
Debugging and Monitoring Quickstart#
Use built-in observability tools to monitor and debug Ray applications and clusters. These tools help you understand your application’s performance and identify bottlenecks.
Ray Dashboard: Web GUI to monitor and debug Ray
Ray dashboard provides a visual interface that displays real-time system metrics, node-level resource monitoring, job profiling, and task visualizations. The dashboard is designed to help users understand the performance of their Ray applications and identify potential issues.
Note
To get started with the dashboard, install Ray with the default extras as follows:
pip install -U "ray[default]"
The dashboard automatically becomes available when running Ray scripts. Access the dashboard through the default URL, http://localhost:8265.
Learn more about Ray Dashboard
Ray State APIs: CLI to access cluster states
Ray state APIs let users conveniently access the current state (snapshot) of Ray through the CLI or Python SDK.
Note
To get started with the state API, install Ray with the default extras as follows:
pip install -U "ray[default]"
Run the following code.
import ray
import time
ray.init(num_cpus=4)
@ray.remote
def task_running_300_seconds():
print("Start!")
time.sleep(300)
@ray.remote
class Actor:
def __init__(self):
print("Actor created")
# Create 2 tasks
tasks = [task_running_300_seconds.remote() for _ in range(2)]
# Create 2 actors
actors = [Actor.remote() for _ in range(2)]
ray.get(tasks)
See the summarized statistics of Ray tasks using ray summary tasks in a terminal.
ray summary tasks
======== Tasks Summary: 2022-07-22 08:54:38.332537 ========
Stats:
------------------------------------
total_actor_scheduled: 2
total_actor_tasks: 0
total_tasks: 2
Table (group by func_name):
------------------------------------
FUNC_OR_CLASS_NAME STATE_COUNTS TYPE
0 task_running_300_seconds RUNNING: 2 NORMAL_TASK
1 Actor.__init__ FINISHED: 2 ACTOR_CREATION_TASK
Learn more about Ray State APIs
Learn More#
Ray has a rich ecosystem of resources to help you learn more about distributed computing and AI scaling.
Blog and Press#
Modern Parallel and Distributed Python: A Quick Tutorial on Ray
Why Every Python Developer Will Love Ray
Ray: A Distributed System for AI (Berkeley Artificial Intelligence Research, BAIR)
10x Faster Parallel Python Without Python Multiprocessing
Implementing A Parameter Server in 15 Lines of Python with Ray
Ray Distributed AI Framework Curriculum
RayOnSpark: Running Emerging AI Applications on Big Data Clusters with Ray and Analytics Zoo
First user tips for Ray
Tune: a Python library for fast hyperparameter tuning at any scale
Cutting edge hyperparameter tuning with Ray Tune
New Library Targets High Speed Reinforcement Learning
Scaling Multi Agent Reinforcement Learning
Functional RL with Keras and Tensorflow Eager
How to Speed up Pandas by 4x with one line of code
Quick Tip—Speed up Pandas using Modin
Ray Blog
Videos#
Unifying Large Scale Data Preprocessing and Machine Learning Pipelines with Ray Data | PyData 2021 (slides)
Programming at any Scale with Ray | SF Python Meetup Sept 2019
Ray for Reinforcement Learning | Data Council 2019
Scaling Interactive Pandas Workflows with Modin
Ray: A Distributed Execution Framework for AI | SciPy 2018
Ray: A Cluster Computing Engine for Reinforcement Learning Applications | Spark Summit
RLlib: Ray Reinforcement Learning Library | RISECamp 2018
Enabling Composition in Distributed Reinforcement Learning | Spark Summit 2018
Tune: Distributed Hyperparameter Search | RISECamp 2018
Slides#
Talk given at UC Berkeley DS100
Talk given in October 2019
Talk given at RISECamp 2019
Papers#
Ray 2.0 Architecture white paper
Ray 1.0 Architecture white paper (old)
Exoshuffle: large-scale data shuffle in Ray
RLlib paper
RLlib flow paper
Tune paper
Ray paper (old)
Ray HotOS paper (old)
If you encounter technical issues, post on the Ray discussion forum. For general questions, announcements, and community discussions, join the Ray community on Slack.
Image Classification Batch Inference with PyTorch ResNet152 - Beginner. Data. PyTorch. Computer Vision. ray-team
Object Detection Batch Inference with PyTorch FasterRCNN_ResNet50 - Beginner. Data. PyTorch. Computer Vision. ray-team
Image Classification Batch Inference with Hugging Face Vision Transformer - Beginner. Data. Transformers. Computer Vision. ray-team
Batch Inference with LoRA Adapter - Beginner. Data. vLLM. Large Language Models, Generative AI. ray-team
Batch Inference with Structural Output - Beginner. Data. vLLM. Large Language Models, Generative AI. ray-team
Tabular Data Training and Batch Inference with XGBoost - Beginner. Data. XGBoost. ray-team
Serve ML Models - Beginner. Serve. PyTorch. Computer Vision. ray-team
Serve a Stable Diffusion Model - Beginner. Serve. Computer Vision, Generative AI. ray-team
Serve a Text Classification Model - Beginner. Serve. Natural Language Processing. ray-team
Serve an Object Detection Model - Beginner. Serve. Computer Vision. ray-team
Serve an Inference Model on AWS NeuronCores Using FastAPI - Intermediate. Serve. Natural Language Processing. ray-team
Serve an Inference with Stable Diffusion Model on AWS NeuronCores Using FastAPI - Intermediate. Serve. Computer Vision, Generative AI. ray-team
Serve a model on Intel Gaudi Accelerator - Intermediate. Serve. PyTorch. Generative AI, Large Language Models. ray-team
Scale a Gradio App with Ray Serve - Intermediate. Serve. Generative AI, Large Language Models, Natural Language Processing. ray-team
Serve a Text Generator with Request Batching - Intermediate. Serve. Generative AI, Large Language Models, Natural Language Processing. ray-team
Serve DeepSeek - Beginner. Serve. Generative AI, Large Language Models, Natural Language Processing. ray-team
Serve a Chatbot with Request and Response Streaming - Intermediate. Serve. Generative AI, Large Language Models, Natural Language Processing. ray-team
Serving models with Triton Server in Ray Serve - Intermediate. Serve. Computer Vision, Generative AI. ray-team
Serve a Java App - Advanced. Serve. ray-team
Train an image classifier with PyTorch - Beginner. Train. PyTorch. Computer Vision. ray-team
Train an image classifier with Lightning - Beginner. Train. Lightning. Computer Vision. ray-team
Train a text classifier with Hugging Face Accelerate - Beginner. Train. Accelerate. PyTorch. Hugging Face. Large Language Models, Natural Language Processing. ray-team
Train an image classifier with TensorFlow - Beginner. Train. TensorFlow. Computer Vision. ray-team
Train with Horovod and PyTorch - Beginner. Train. Horovod. ray-team
Train ResNet model with Intel Gaudi - Beginner. Train. PyTorch. Computer Vision. community (contributed by the Ray Community)
Train BERT model with Intel Gaudi - Beginner. Train. Transformers. Natural Language Processing. community (contributed by the Ray Community)
Train a text classifier with DeepSpeed - Intermediate. Train. DeepSpeed. PyTorch. Large Language Models, Natural Language Processing. ray-team
Fine-tune a personalized Stable Diffusion model - Intermediate. Train. PyTorch. Computer Vision, Generative AI. ray-team
Finetune Stable Diffusion and generate images with Intel Gaudi - Intermediate. Train. Accelerate. Transformers. Computer Vision, Generative AI. community (contributed by the Ray Community)
Train a text classifier with PyTorch Lightning and Ray Data - Intermediate. Train. Lightning. Natural Language Processing. ray-team
Train a text classifier with Hugging Face Transformers - Intermediate. Train. Transformers. Natural Language Processing. ray-team
Fine-tune Llama-2-7b and Llama-2-70b with Intel Gaudi - Intermediate. Train. Accelerate. Transformers. Natural Language Processing, Large Language Models. community (contributed by the Ray Community)
Pre-train Llama-2 with Intel Gaudi - Intermediate. Train. Accelerate. Transformers. DeepSpeed. Natural Language Processing, Large Language Models. community (contributed by the Ray Community)
Fine-tune Llama3.1 with AWS Trainium - Advanced. Train. PyTorch. AWS Neuron. Natural Language Processing, Large Language Models. community (contributed by the Ray Community)
Fine-tune a Llama-2 text generation model with DeepSpeed and Hugging Face Accelerate - Advanced. Train. Accelerate. DeepSpeed. Hugging Face. Natural Language Processing, Large Language Models. ray-team
Fine-tune a GPT-J-6B text generation model with DeepSpeed and Hugging Face Transformers - Advanced. Train. Hugging Face. DeepSpeed. Natural Language Processing, Large Language Models, Generative AI. ray-team
Fine-tune a vicuna-13b text generation model with PyTorch Lightning and DeepSpeed - Advanced. Train. Lightning. DeepSpeed. Large Language Models, Generative AI. ray-team
Fine-tune a dolly-v2-7b text generation model with PyTorch Lightning and FSDP - Advanced. Train. Lightning. Large Language Models, Generative AI, Natural Language Processing. ray-team
Train a tabular model with XGBoost - Beginner. Train. XGBoost. ray-team
The Ray Ecosystem#
This page lists, in alphabetical order, libraries that have integrations with Ray for distributed execution.
It’s easy to add your own integration to this list.
Simply open a pull request with a few lines of text; see the dropdown below for
more information.
Adding Your Integration
To add an integration, add an entry to this file, using the same
grid-item-card directive that the other examples use.
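For illustration, a new entry might look roughly like the following sketch. The grid-item-card directive comes from sphinx-design, but the library name, description, and link target here are placeholders; copy the exact structure and options from a neighboring card in the source file.

```rst
.. grid-item-card::

   **MyLibrary** is a one-paragraph description of what the library
   does and how it uses Ray for distributed execution.

   +++
   .. button-link:: https://example.com/mylibrary-ray-integration

      MyLibrary Integration
```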
Apache Airflow® is an open-source platform that enables users to programmatically author, schedule, and monitor workflows using directed acyclic graphs (DAGs). With the Ray provider, users can seamlessly orchestrate Ray jobs within Airflow DAGs.
Apache Airflow Integration
BuildFlow is a backend framework that allows you to build and manage complex cloud infrastructure using pure python. With BuildFlow’s decorator pattern you can turn any function into a component of your backend system.
BuildFlow Integration
Classy Vision is a new end-to-end, PyTorch-based framework for large-scale training of state-of-the-art image and video classification models. The library features a modular, flexible design that allows anyone to train machine learning models on top of PyTorch using very simple abstractions.
Classy Vision Integration
Daft is a data engine that supports SQL and Python DataFrames for data processing and analytics natively on your Ray clusters.
Daft Integration
Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love. Dask uses existing Python APIs and data structures to make it easy to switch from NumPy, Pandas, and Scikit-learn to their Dask-powered equivalents.
Dask Integration
Data-Juicer is a one-stop multimodal data processing system to make data higher-quality, juicier, and more digestible for foundation models. It integrates with Ray for distributed data processing on large-scale datasets with over 100 multimodal operators and supports TB-size dataset deduplication.
Data-Juicer Integration
Flambé is a machine learning experimentation framework built to accelerate the entire research life cycle. Flambé’s main objective is to provide a unified interface for prototyping models, running experiments containing complex pipelines, monitoring those experiments in real-time, reporting results, and deploying a final model for inference.
Flambé Integration
Flowdapt is a platform designed to help developers configure, debug, schedule, trigger, deploy and serve adaptive and reactive Artificial Intelligence workflows at large-scale.
Flowdapt Integration
Flyte is a Kubernetes-native workflow automation platform for complex, mission-critical data and ML processes at scale. It has been battle-tested at Lyft, Spotify, Freenome, and others and is truly open-source.
Flyte Integration
Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use.
Horovod Integration
State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. It integrates with Ray for distributed hyperparameter tuning of transformer models.
Hugging Face Transformers Integration
Analytics Zoo seamlessly scales TensorFlow, Keras and PyTorch to distributed big data (using Spark, Flink & Ray).
Intel Analytics Zoo Integration
The power of 350+ pre-trained NLP models, 100+ Word Embeddings, 50+ Sentence Embeddings, and 50+ Classifiers in 46 languages with 1 line of Python code.
NLU Integration
Ludwig is a toolbox that allows users to train and test deep learning models without the need to write code. With Ludwig, you can train a deep learning model on Ray in zero lines of code, automatically leveraging Dask on Ray for data preprocessing, Horovod on Ray for distributed training, and Ray Tune for hyperparameter optimization.
Ludwig Integration
Mars is a tensor-based unified framework for large-scale data computation which scales Numpy, Pandas and Scikit-learn. Mars can scale in to a single machine, and scale out to a cluster with thousands of machines.
MARS Integration
Scale your pandas workflows by changing one line of code. Modin transparently distributes the data and computation so that all you need to do is continue using the pandas API as you were before installing Modin.
Modin Integration
Prefect is an open source workflow orchestration platform in Python. It allows you to easily define, track and schedule workflows in Python. This integration makes it easy to run a Prefect workflow on a Ray cluster in a distributed way.
Prefect Integration
PyCaret is an open source low-code machine learning library in Python that aims to reduce the hypothesis to insights cycle time in a ML experiment. It enables data scientists to perform end-to-end experiments quickly and efficiently.
PyCaret Integration
RayDP (“Spark on Ray”) enables you to easily use Spark inside a Ray program. You can use Spark to read the input data, process the data using SQL, Spark DataFrame, or Pandas (via Koalas) API, extract and transform features using Spark MLLib, and use RayDP Estimator API for distributed training on the preprocessed dataset.
RayDP Integration
Scikit-learn is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
Scikit Learn Integration
Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
Seldon Alibi Integration
Sematic is an open-source ML pipelining tool written in Python. It enables users to write end-to-end pipelines that can seamlessly transition between your laptop and the cloud, with rich visualizations, traceability, reproducibility, and usability as first-class citizens. This integration enables dynamic allocation of Ray clusters within Sematic pipelines.
Sematic Integration
spaCy is a library for advanced Natural Language Processing in Python and Cython. It’s built on the very latest research, and was designed from day one to be used in real products.
spaCy Integration
XGBoost is a popular gradient boosting library for classification and regression. It is one of the most popular tools in data science and workhorse of many top-performing Kaggle kernels.
XGBoost Integration
LightGBM is a high-performance gradient boosting library for classification and regression. It is designed to be distributed and efficient.
LightGBM Integration
Volcano is a system for running high-performance workloads on Kubernetes. It features the powerful batch scheduling capabilities required by ML and other data-intensive workloads.
Volcano Integration
What’s Ray Core?#
Ray Core is a powerful distributed computing framework that provides a small set of essential primitives (tasks, actors, and objects) for building and scaling distributed applications.
This walk-through introduces you to these core concepts with simple examples that demonstrate how to transform your Python functions and classes into distributed Ray tasks and actors, and how to work effectively with Ray objects.
Note
Ray has introduced an experimental API for high-performance workloads that’s
especially well suited for applications using multiple GPUs.
See Ray Compiled Graph for more details.
Getting Started#
To get started, install Ray using pip install -U ray. For additional installation options, see Installing Ray.
The first step is to import and initialize Ray:
import ray
ray.init()
Note
In recent versions of Ray (>=1.5), ray.init() is automatically called on the first use of a Ray remote API.
Running a Task#
Tasks are the simplest way to parallelize your Python functions across a Ray cluster. To create a task:
Decorate your function with @ray.remote to indicate it should run remotely
Call the function with .remote() instead of a normal function call
Use ray.get() to retrieve the result from the returned future (Ray object reference)
Here’s a simple example:
# Define the square task.
@ray.remote
def square(x):
    return x * x

# Launch four parallel square tasks.
futures = [square.remote(i) for i in range(4)]

# Retrieve results.
print(ray.get(futures))
# -> [0, 1, 4, 9]
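Conceptually, .remote() returns a future immediately and ray.get() blocks until the result is ready. The same submit-then-gather pattern can be sketched with only the standard library's concurrent.futures module. This is an analogy for building intuition, not how Ray works internally: Ray schedules tasks on worker processes across a whole cluster, while this sketch uses local threads.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    # submit() plays the role of .remote(): it returns a future immediately.
    futures = [pool.submit(square, i) for i in range(4)]
    # result() plays the role of ray.get(): it blocks until the value is ready.
    print([f.result() for f in futures])
    # -> [0, 1, 4, 9]
```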
Calling an Actor#
While tasks are stateless, Ray actors allow you to create stateful workers that maintain their internal state between method calls.
When you instantiate a Ray actor:
Ray starts a dedicated worker process somewhere in your cluster
The actor’s methods run on that specific worker and can access and modify its state
The actor executes method calls serially in the order it receives them, preserving consistency
Here’s a simple Counter example:
# Define the Counter actor.
@ray.remote
class Counter:
    def __init__(self):
        self.i = 0

    def get(self):
        return self.i

    def incr(self, value):
        self.i += value

# Create a Counter actor.
c = Counter.remote()

# Submit calls to the actor. These calls run asynchronously but in
# submission order on the remote actor process.
for _ in range(10):
    c.incr.remote(1)

# Retrieve final actor state.
print(ray.get(c.get.remote()))
# -> 10
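The key property above is that the actor runs calls one at a time, in submission order, against state it alone owns. That idea can be sketched in plain Python with a worker thread draining a FIFO mailbox. This is a toy analogy, not Ray's implementation; Ray actors live in separate worker processes, and ToyActorHandle, its mailbox, and its submit method are invented for illustration.

```python
import queue
import threading

class ToyActorHandle:
    """Toy actor analogy: one worker thread owns the state and executes
    submitted calls serially, in the order they were enqueued."""

    def __init__(self):
        self.i = 0
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            method, args, reply = self._mailbox.get()
            reply.put(method(*args))  # execute exactly one call at a time

    def submit(self, method, *args):
        # Like .remote(): enqueue the call and return immediately.
        reply = queue.Queue(maxsize=1)
        self._mailbox.put((method, args, reply))
        return reply  # reply.get() then plays the role of ray.get()

    def incr(self, value):
        self.i += value

    def get(self):
        return self.i

c = ToyActorHandle()
for _ in range(10):
    c.submit(c.incr, 1)        # like c.incr.remote(1)
print(c.submit(c.get).get())   # like ray.get(c.get.remote())
# -> 10
```

Because a single thread drains a single FIFO queue, the final read is guaranteed to observe all ten increments, mirroring the consistency guarantee Ray actors provide.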