# Retrieve final actor state.
print(ray.get(c.get.remote()))
# -> 10
The preceding example demonstrates basic actor usage. For a more comprehensive example that combines both tasks and actors, see the Monte Carlo Pi estimation example.
Passing Objects#
Ray’s distributed object store efficiently manages data across your cluster. There are three main ways to work with objects in Ray:
Implicit creation: When tasks and actors return values, they are automatically stored in Ray’s distributed object store, returning object references that can be later retrieved.
Explicit creation: Use ray.put() to directly place objects in the store.
Passing references: You can pass object references to other tasks and actors, avoiding unnecessary data copying and enabling lazy execution.
Here’s an example showing these techniques:
import numpy as np
# Define a task that sums the values in a matrix.
@ray.remote
def sum_matrix(matrix):
return np.sum(matrix)
# Call the task with a literal argument value.
print(ray.get(sum_matrix.remote(np.ones((100, 100)))))
# -> 10000.0
# Put a large array into the object store.
matrix_ref = ray.put(np.ones((1000, 1000)))
# Call the task with the object reference as an argument.
print(ray.get(sum_matrix.remote(matrix_ref)))
# -> 1000000.0
Next Steps#
Tip
To monitor your application’s performance and resource usage, check out the Ray dashboard.
You can combine Ray’s simple primitives in powerful ways to express virtually any distributed computation pattern. To dive deeper into Ray’s key concepts,
explore these user guides:
Using remote functions (Tasks)
Using remote classes (Actors)
Working with Ray Objects
Key Concepts#
This section provides an overview of Ray’s key concepts. These primitives work together to enable Ray to flexibly support a broad range of distributed applications.
Tasks#
Ray enables arbitrary functions to execute asynchronously on separate Python workers. These asynchronous Ray functions are called tasks. Ray enables tasks to specify their resource requirements in terms of CPUs, GPUs, and custom resources. The cluster scheduler uses these resource requests to distribute tasks across the cluster for parallelized execution.
See the User Guide for Tasks.
Actors#
Actors extend the Ray API from functions (tasks) to classes. An actor is essentially a stateful worker (or a service). When you instantiate a new actor, Ray creates a new worker and schedules methods of the actor on that specific worker. The methods can access and mutate the state of that worker. Like tasks, actors support CPU, GPU, and custom resource requirements.
See the User Guide for Actors.
Objects#
Tasks and actors create objects and compute on objects. You can refer to these objects as remote objects because Ray stores them anywhere in a Ray cluster, and you use object refs to refer to them. Ray caches remote objects in its distributed shared-memory object store and creates one object store per node in the cluster. In the cluster setting, a remote object can live on one or many nodes, independent of who holds the object ref.
See the User Guide for Objects.
Placement Groups#
Placement groups allow users to atomically reserve groups of resources across multiple nodes. You can use them to schedule Ray tasks and actors packed as close as possible for locality (PACK), or spread apart (SPREAD). A common use case is gang-scheduling actors or tasks.
See the User Guide for Placement Groups.
Environment Dependencies#
When Ray executes tasks and actors on remote machines, their environment dependencies, such as Python packages, local files, and environment variables, must be available on the remote machines. To address this problem, you can
1. Prepare your dependencies on the cluster in advance using the Ray Cluster Launcher
2. Use Ray’s runtime environments to install them on the fly.
See the User Guide for Environment Dependencies.
User Guides#
This section explains how to use Ray’s key concepts to build distributed applications.
If you’re brand new to Ray, we recommend starting with the walkthrough.
Tasks
Specifying required resources
Passing object refs to Ray tasks
Waiting for Partial Results
Generators
Multiple returns
Cancelling tasks
Scheduling
Fault Tolerance
Task Events
More about Ray Tasks
Nested Remote Functions
Yielding Resources While Blocked
Dynamic generators
num_returns set by the task caller
num_returns set by the task executor
Exception handling
Limitations
Actors
Specifying required resources
Calling the actor
Passing around actor handles
Generators
Cancelling actor tasks
Scheduling
Fault Tolerance
FAQ: Actors, Workers and Resources
Task Events
More about Ray Actors
Named Actors
Get-Or-Create a Named Actor
Actor Lifetimes
Terminating Actors
Manual termination via an actor handle
Manual termination within the actor
AsyncIO / Concurrency for Actors
AsyncIO for Actors
Threaded Actors
AsyncIO for Remote Tasks
Limiting Concurrency Per-Method with Concurrency Groups
Defining Concurrency Groups
Default Concurrency Group
Setting the Concurrency Group at Runtime
Utility Classes
Actor Pool
Message passing using Ray Queue
Out-of-band Communication
Wrapping Library Processes
Ray Collective
HTTP Server
Limitations
Actor Task Execution Order
Synchronous, Single-Threaded Actor
Asynchronous or Threaded Actor
Objects
Fetching Object Data
Passing Object Arguments
Closure Capture of Objects
Nested Objects
Fault Tolerance
More about Ray Objects
Serialization
Overview
Serialization notes
Customized Serialization
Troubleshooting
Known Issues
Object Spilling
Spilling to a custom directory
Stats
Environment Dependencies
Concepts
Preparing an environment using the Ray Cluster launcher
Runtime environments
Specifying a Runtime Environment Per-Job
Specifying a Runtime Environment Per-Task or Per-Actor
Common Workflows
Using Local Files
Using conda or pip packages
Using uv for package management
Library Development
API Reference
Caching and Garbage Collection
Runtime Environment Specified by Both Job and Driver
Inheritance
Frequently Asked Questions
Are environments installed on every node?
When is the environment installed?
Where are the environments cached?
How long does it take to install or to load from cache?
What is the relationship between runtime environments and Docker?
My runtime_env was installed, but when I log into the node I can’t import the packages.
Remote URIs
Hosting a Dependency on a Remote Git Provider: Step-by-Step Guide
Option 1: Download Zip (quicker to implement, but not recommended for production environments)
Option 2: Manually Create URL (slower to implement, but recommended for production environments)
Debugging
Scheduling
Resources
Scheduling Strategies
“DEFAULT”
“SPREAD”
PlacementGroupSchedulingStrategy
NodeAffinitySchedulingStrategy
Locality-Aware Scheduling
More about Ray Scheduling
Resources
Physical Resources and Logical Resources
Custom Resources
Specifying Node Resources
Specifying Task or Actor Resource Requirements
Accelerator Support
Starting Ray nodes with accelerators
Using accelerators in Tasks and Actors
Fractional Accelerators
Workers not Releasing GPU Resources
Accelerator Types
Placement Groups
Key Concepts
Create a Placement Group (Reserve Resources)
Schedule Tasks and Actors to Placement Groups (Use Reserved Resources)
Placement Strategy
Remove Placement Groups (Free Reserved Resources)
Observe and Debug Placement Groups
[Advanced] Child Tasks and Actors
[Advanced] Named Placement Group
[Advanced] Detached Placement Group
[Advanced] Fault Tolerance
API Reference
Memory Management
Concepts
Debugging using ‘ray memory’
Memory Aware Scheduling
Out-Of-Memory Prevention
What is the memory monitor?
How do I disable the memory monitor?
How do I configure the memory monitor?
Using the Memory Monitor
Addressing memory issues
Questions or Issues?
Fault tolerance
How to write fault tolerant Ray applications
More about Ray fault tolerance
Task Fault Tolerance
Catching application-level failures
Retrying failed tasks
Cancelling misbehaving tasks
Actor Fault Tolerance
Actor process failure
Actor creator failure
Force-killing a misbehaving actor
Unavailable actors
Actor method exceptions
Object Fault Tolerance
Recovering from data loss
Recovering from owner failure
Understanding ObjectLostErrors
Node Fault Tolerance
Worker node failure
Head node failure
Raylet failure
GCS Fault Tolerance
Setting up Redis
Design Patterns & Anti-patterns
Pattern: Using nested tasks to achieve nested parallelism
Example use case
Code example
Pattern: Using generators to reduce heap memory usage
Example use case
Code example
Pattern: Using ray.wait to limit the number of pending tasks
Example use case
Code example
Pattern: Using resources to limit the number of concurrently running tasks
Example use case
Code example
Pattern: Using asyncio to run actor methods concurrently
Example use case
Pattern: Using an actor to synchronize other tasks and actors
Example use case
Code example
Pattern: Using a supervisor actor to manage a tree of actors
Example use case
Code example
Pattern: Using pipelining to increase throughput
Example use case
Code example
Anti-pattern: Returning ray.put() ObjectRefs from a task harms performance and fault tolerance
Code example
Anti-pattern: Calling ray.get in a loop harms parallelism
Code example
Anti-pattern: Calling ray.get unnecessarily harms performance
Code example
Anti-pattern: Processing results in submission order using ray.get increases runtime
Code example
Anti-pattern: Fetching too many objects at once with ray.get causes failure
Code example
Anti-pattern: Over-parallelizing with too fine-grained tasks harms speedup
Code example
Anti-pattern: Redefining the same remote function or class harms performance
Code example
Anti-pattern: Passing the same large argument by value repeatedly harms performance
Code example
Anti-pattern: Closure capturing large objects harms performance
Code example
Anti-pattern: Using global variables to share state between tasks and actors
Code example
Anti-pattern: Serialize ray.ObjectRef out of band
Code example
Anti-pattern: Forking new processes in application code
Code example
Ray Compiled Graph (beta)
Use Cases
More Resources
Table of Contents
Quickstart
Hello World
Specifying data dependencies
asyncio support
Execution and failure semantics
Execution Timeouts
CPU to GPU communication
GPU to GPU communication
Profiling
PyTorch profiler
Nsight system profiler
Visualization
Experimental: Overlapping communication and computation
Troubleshooting
Limitations
Returning NumPy arrays
Explicitly teardown before reusing the same actors
Compiled Graph API
Input and Output Nodes
DAG Construction
Compiled Graph Operations
Configurations
Advanced topics
Tips for first-time users
Tip 1: Delay ray.get()
Tip 2: Avoid tiny tasks
Tip 3: Avoid passing same object repeatedly to remote tasks
Tip 4: Pipeline data processing
Starting Ray
What is the Ray runtime?
Starting Ray on a single machine
Starting Ray via the CLI (ray start)
Launching a Ray cluster (ray up)
What’s next?
Ray Generators
Getting started
Error handling
Generator from Actor Tasks
Using the Ray generator with asyncio
Garbage collection of object references
Fault tolerance
Cancellation
How to wait for generator without blocking a thread (compatibility to ray.wait and ray.get)
Thread safety
Limitation
Using Namespaces
Specifying namespace for named actors
Anonymous namespaces
Getting the current namespace
Cross-language programming
Setup the driver
Python calling Java
Java calling Python
Cross-language data serialization
Cross-language exception stacks
Working with Jupyter Notebooks & JupyterLab
Setting Up Notebook
Lazy Computation Graphs with the Ray DAG API
Ray DAG with functions
Ray DAG with classes and class methods
Ray DAG with custom InputNode
Ray DAG with multiple MultiOutputNode
Reuse Ray Actors in DAGs
More resources
Miscellaneous Topics
Dynamic Remote Parameters
Overloaded Functions
Inspecting Cluster State
Node Information
Resource Information
Running Large Ray Clusters
Tuning Operating System Settings
Benchmark
Authenticating Remote URIs in runtime_env
Authenticating Remote URIs
Running on VMs: the netrc File
Running on KubeRay: Secrets with netrc
Lifetimes of a User-Spawn Process
User-Spawned Process Killed on Worker Exit
Enabling the feature
⚠️ Caution: Core worker now reaps zombies, toggle back if you want to waitpid
Under the hood
Tasks#
Ray enables arbitrary functions to be executed asynchronously on separate Python workers. Such functions are called Ray remote functions and their asynchronous invocations are called Ray tasks. Here is an example.
Python
import ray
import time
# A regular Python function.
def normal_function():
return 1
# By adding the `@ray.remote` decorator, a regular Python function
# becomes a Ray remote function.
@ray.remote
def my_function():
return 1
# To invoke this remote function, use the `remote` method.
# This will immediately return an object ref (a future) and then create
# a task that will be executed on a worker process.
obj_ref = my_function.remote()
# The result can be retrieved with ``ray.get``.
assert ray.get(obj_ref) == 1
@ray.remote
def slow_function():
time.sleep(10)
return 1
# Ray tasks are executed in parallel.
# All computation is performed in the background, driven by Ray's internal event loop.
for _ in range(4):
# This doesn't block.
slow_function.remote()
See the ray.remote API for more details.
Java
public class MyRayApp {
// A regular Java static method.
public static int myFunction() {
return 1;
}
}
// Invoke the above method as a Ray task.
// This will immediately return an object ref (a future) and then create
// a task that will be executed on a worker process.
ObjectRef<Integer> res = Ray.task(MyRayApp::myFunction).remote();
// The result can be retrieved with ``ObjectRef::get``.
Assert.assertTrue(res.get() == 1);
public class MyRayApp {
public static int slowFunction() throws InterruptedException {
TimeUnit.SECONDS.sleep(10);
return 1;
}
}
// Ray tasks are executed in parallel.
// All computation is performed in the background, driven by Ray's internal event loop.
for(int i = 0; i < 4; i++) {
// This doesn't block.
Ray.task(MyRayApp::slowFunction).remote();
}
C++
// A regular C++ function.
int MyFunction() {
return 1;
}
// Register as a remote function by `RAY_REMOTE`.
RAY_REMOTE(MyFunction);
// Invoke the above method as a Ray task.
// This will immediately return an object ref (a future) and then create
// a task that will be executed on a worker process.
auto res = ray::Task(MyFunction).Remote();
// The result can be retrieved with ``ray::ObjectRef::Get``.
assert(*res.Get() == 1);
int SlowFunction() {
std::this_thread::sleep_for(std::chrono::seconds(10));
return 1;
}
RAY_REMOTE(SlowFunction);
// Ray tasks are executed in parallel.
// All computation is performed in the background, driven by Ray's internal event loop.
for(int i = 0; i < 4; i++) {
// This doesn't block.
ray::Task(SlowFunction).Remote();
}
Use ray summary tasks from the State API to see running and finished tasks and counts:
# This API is only available when you download Ray via `pip install "ray[default]"`
ray summary tasks
======== Tasks Summary: 2023-05-26 11:09:32.092546 ========
Stats:
------------------------------------
total_actor_scheduled: 0
total_actor_tasks: 0
total_tasks: 5
Table (group by func_name):
------------------------------------
FUNC_OR_CLASS_NAME STATE_COUNTS TYPE
0 slow_function RUNNING: 4 NORMAL_TASK
1 my_function FINISHED: 1 NORMAL_TASK
Specifying required resources#
You can specify resource requirements in tasks (see Specifying Task or Actor Resource Requirements for more details.)
Python
# Specify required resources.
@ray.remote(num_cpus=4, num_gpus=2)
def my_function():
return 1
# Override the default resource requirements.
my_function.options(num_cpus=3).remote()
Java
// Specify required resources.
Ray.task(MyRayApp::myFunction).setResource("CPU", 4.0).setResource("GPU", 2.0).remote();
C++
// Specify required resources.
ray::Task(MyFunction).SetResource("CPU", 4.0).SetResource("GPU", 2.0).Remote();
Passing object refs to Ray tasks#
In addition to values, object refs can also be passed into remote functions. When the task executes, the argument inside the function body is the underlying value. For example, take this function:
Python
@ray.remote
def function_with_an_argument(value):
return value + 1
obj_ref1 = my_function.remote()
assert ray.get(obj_ref1) == 1
# You can pass an object ref as an argument to another Ray task.
obj_ref2 = function_with_an_argument.remote(obj_ref1)
assert ray.get(obj_ref2) == 2
Java
public class MyRayApp {
public static int functionWithAnArgument(int value) {
return value + 1;
}
}
ObjectRef<Integer> objRef1 = Ray.task(MyRayApp::myFunction).remote();
Assert.assertTrue(objRef1.get() == 1);
// You can pass an object ref as an argument to another Ray task.
ObjectRef<Integer> objRef2 = Ray.task(MyRayApp::functionWithAnArgument, objRef1).remote();
Assert.assertTrue(objRef2.get() == 2);
C++
static int FunctionWithAnArgument(int value) {
return value + 1;
}
RAY_REMOTE(FunctionWithAnArgument);
auto obj_ref1 = ray::Task(MyFunction).Remote();
assert(*obj_ref1.Get() == 1);
// You can pass an object ref as an argument to another Ray task.
auto obj_ref2 = ray::Task(FunctionWithAnArgument).Remote(obj_ref1);
assert(*obj_ref2.Get() == 2);
Note the following behaviors:
As the second task depends on the output of the first task, Ray will not execute the second task until the first task has finished.
If the two tasks are scheduled on different machines, the output of the
first task (the value corresponding to obj_ref1/objRef1) will be sent over the
network to the machine where the second task is scheduled.
Waiting for Partial Results#
Calling ray.get on Ray task results blocks until the task finishes execution. After launching a number of tasks, you may want to know which ones have
finished executing without blocking on all of them. You can achieve this with ray.wait(), which works as follows.
Python
object_refs = [slow_function.remote() for _ in range(2)]
# Return as soon as one of the tasks finished execution.
ready_refs, remaining_refs = ray.wait(object_refs, num_returns=1, timeout=None)
Java
WaitResult<Integer> waitResult = Ray.wait(objectRefs, /*num_returns=*/0, /*timeoutMs=*/1000);
System.out.println(waitResult.getReady()); // List of ready objects.
System.out.println(waitResult.getUnready()); // list of unready objects.
C++
ray::WaitResult<int> wait_result = ray::Wait(object_refs, /*num_objects=*/0, /*timeout_ms=*/1000);
Generators#
Ray is compatible with Python generator syntax. See Ray Generators for more details.
Multiple returns#
By default, a Ray task only returns a single Object Ref. However, you can configure Ray tasks to return multiple Object Refs by setting the num_returns option.
Python
# By default, a Ray task only returns a single Object Ref.
@ray.remote
def return_single():
return 0, 1, 2
object_ref = return_single.remote()
assert ray.get(object_ref) == (0, 1, 2)
# However, you can configure Ray tasks to return multiple Object Refs.
@ray.remote(num_returns=3)
def return_multiple():
return 0, 1, 2
object_ref0, object_ref1, object_ref2 = return_multiple.remote()
assert ray.get(object_ref0) == 0
assert ray.get(object_ref1) == 1
assert ray.get(object_ref2) == 2
For tasks that return multiple objects, Ray also supports remote generators that allow a task to return one object at a time to reduce memory usage at the worker. Ray also supports an option to set the number of return values dynamically, which can be useful when the task caller does not know how many return values to expect. See the user guide for more details on use cases.
Python
@ray.remote(num_returns=3)
def return_multiple_as_generator():
for i in range(3):
yield i
# NOTE: Similar to normal functions, these objects will not be available
# until the full task is complete and all returns have been generated.
a, b, c = return_multiple_as_generator.remote()
Cancelling tasks#
Ray tasks can be canceled by calling ray.cancel() on the returned object ref.
Python
@ray.remote
def blocking_operation():
time.sleep(10e6)
obj_ref = blocking_operation.remote()
ray.cancel(obj_ref)
try:
ray.get(obj_ref)
except ray.exceptions.TaskCancelledError:
print("Object reference was cancelled.")
Scheduling#
For each task, Ray chooses a node to run it on. The scheduling decision is based on factors such as the task’s resource requirements, the specified scheduling strategy, and the locations of task arguments.
See Ray scheduling for more details.
Fault Tolerance#
By default, Ray retries tasks that fail due to system failures and specified application-level failures.
You can change this behavior by setting the
max_retries and retry_exceptions options
in ray.remote() and .options().
See Ray fault tolerance for more details.
Task Events#
By default, Ray traces the execution of tasks, reporting task status events and profiling events
that the Ray Dashboard and State API use.
You can change this behavior by setting the enable_task_events option in ray.remote() and .options()
to disable task events, which reduces the overhead of task execution and the amount of data the task sends to the Ray Dashboard.
Nested tasks don’t inherit the task events settings from the parent task. You need to set the task events settings for each task separately.
More about Ray Tasks#
Nested Remote Functions
Dynamic generators
Nested Remote Functions#
Remote functions can call other remote functions, resulting in nested tasks.
For example, consider the following.
import ray
@ray.remote
def f():
return 1
@ray.remote
def g():
# Call f 4 times and return the resulting object refs.
return [f.remote() for _ in range(4)]
@ray.remote
def h():
# Call f 4 times, block until those 4 tasks finish,
# retrieve the results, and return the values.
return ray.get([f.remote() for _ in range(4)])
Then calling g and h produces the following behavior.
>>> ray.get(g.remote())
[ObjectRef(b1457ba0911ae84989aae86f89409e953dd9a80e),
ObjectRef(7c14a1d13a56d8dc01e800761a66f09201104275),
ObjectRef(99763728ffc1a2c0766a2000ebabded52514e9a6),
ObjectRef(9c2f372e1933b04b2936bb6f58161285829b9914)]
>>> ray.get(h.remote())
[1, 1, 1, 1]
One limitation is that the definition of f must come before the
definitions of g and h because as soon as g is defined, it
will be pickled and shipped to the workers, and so if f hasn’t been
defined yet, the definition will be incomplete.
Yielding Resources While Blocked#
Ray will release CPU resources when being blocked. This prevents
deadlock cases where the nested tasks are waiting for the CPU
resources held by the parent task.
Consider the following remote function.
@ray.remote(num_cpus=1, num_gpus=1)
def g():
return ray.get(f.remote())
When a g task is executing, it will release its CPU resources when it gets
blocked in the call to ray.get. It will reacquire the CPU resources when
ray.get returns. It will retain its GPU resources throughout the lifetime of
the task because the task will most likely continue to use GPU memory.
Dynamic generators#
Python generators are functions that behave like iterators, yielding one
value per iteration. Ray supports remote generators for two use cases:
To reduce max heap memory usage when returning multiple values from a remote
function. See the design pattern guide for an
example.
When the number of return values is set dynamically by the remote function
instead of by the caller.
Remote generators can be used in both actor and non-actor tasks.
num_returns set by the task caller#
Where possible, the caller should set the remote function’s number of return values using @ray.remote(num_returns=x) or foo.options(num_returns=x).remote().
Ray will return this many ObjectRefs to the caller.
The remote task should then return the same number of values, usually as a tuple or list.
Compared to setting the number of return values dynamically, this adds less complexity to user code and less performance overhead, as Ray will know exactly how many ObjectRefs to return to the caller ahead of time.
Without changing the caller’s syntax, we can also use a remote generator function to yield the values iteratively.
The generator should yield the same number of return values specified by the caller, and these will be stored one at a time in Ray’s object store.
An error will be raised for generators that yield a different number of values from the one specified by the caller.
For example, we can swap the following code:
@ray.remote
def large_values(num_returns):
return [
np.random.randint(np.iinfo(np.int8).max, size=(100_000_000, 1), dtype=np.int8)
for _ in range(num_returns)
]
for this code, which uses a generator function:
@ray.remote
def large_values_generator(num_returns):
for i in range(num_returns):
yield np.random.randint(
np.iinfo(np.int8).max, size=(100_000_000, 1), dtype=np.int8
)
print(f"yielded return value {i}")
The advantage of doing so is that the generator function does not need to hold all of its return values in memory at once.
It can yield the arrays one at a time to reduce memory pressure.
num_returns set by the task executor#
In some cases, the caller may not know the number of return values to expect from a remote function.
For example, suppose we want to write a task that breaks up its argument into equal-size chunks and returns these.
We may not know the size of the argument until we execute the task, so we don’t know the number of return values to expect.
In these cases, we can use a remote generator function that returns a dynamic number of values.
To use this feature, set num_returns="dynamic" in the @ray.remote decorator or the remote function’s .options().
Then, when invoking the remote function, Ray will return a single ObjectRef that will be populated with a DynamicObjectRefGenerator when the task completes.
The DynamicObjectRefGenerator can be used to iterate over a list of ObjectRefs containing the actual values returned by the task.
import numpy as np
# Returns an ObjectRef[DynamicObjectRefGenerator].
dynamic_ref = split.remote(array_ref, block_size)
print(dynamic_ref)
# ObjectRef(c8ef45ccd0112571ffffffffffffffffffffffff0100000001000000)
i = -1
ref_generator = ray.get(dynamic_ref)
print(ref_generator)
# <ray._raylet.DynamicObjectRefGenerator object at 0x7f7e2116b290>
for i, ref in enumerate(ref_generator):
# Each DynamicObjectRefGenerator iteration returns an ObjectRef.
assert len(ray.get(ref)) <= block_size
num_blocks_generated = i + 1
array_size = len(ray.get(array_ref))
assert array_size <= num_blocks_generated * block_size
print(f"Split array of size {array_size} into {num_blocks_generated} blocks of "
f"size {block_size} each.")
# Split array of size 63153 into 64 blocks of size 1000 each.
# NOTE: The dynamic_ref points to the generated ObjectRefs. Make sure that this
# ObjectRef goes out of scope so that Ray can garbage-collect the internal
# ObjectRefs.
del dynamic_ref
We can also pass the ObjectRef returned by a task with num_returns="dynamic" to another task. The task will receive the DynamicObjectRefGenerator, which it can use to iterate over the task’s return values. Similarly, you can also pass an ObjectRefGenerator as a task argument.
@ray.remote
def get_size(ref_generator: DynamicObjectRefGenerator):
print(ref_generator)
num_elements = 0
for ref in ref_generator:
array = ray.get(ref)
assert len(array) <= block_size
num_elements += len(array)
return num_elements
# Returns an ObjectRef[DynamicObjectRefGenerator].
dynamic_ref = split.remote(array_ref, block_size)
assert array_size == ray.get(get_size.remote(dynamic_ref))
# (get_size pid=1504184)
# <ray._raylet.DynamicObjectRefGenerator object at 0x7f81c4250ad0>
# This also works, but should be avoided because you have to call an additional
# `ray.get`, which blocks the driver.
ref_generator = ray.get(dynamic_ref)
assert array_size == ray.get(get_size.remote(ref_generator))
# (get_size pid=1504184)
# <ray._raylet.DynamicObjectRefGenerator object at 0x7f81c4251b50>
Exception handling#
If a generator function raises an exception before yielding all its values, the values that it already stored will still be accessible through their ObjectRefs.
The remaining ObjectRefs will contain the raised exception.
This is true for both static and dynamic num_returns.
If the task was called with num_returns="dynamic", the exception will be stored as an additional final ObjectRef in the DynamicObjectRefGenerator.
@ray.remote
def generator():
for i in range(2):
yield i
raise Exception("error")
ref1, ref2, ref3, ref4 = generator.options(num_returns=4).remote()
assert ray.get([ref1, ref2]) == [0, 1]
# All remaining ObjectRefs will contain the error.
try:
ray.get([ref3, ref4])
except Exception as error:
print(error)
dynamic_ref = generator.options(num_returns="dynamic").remote()
ref_generator = ray.get(dynamic_ref)
ref1, ref2, ref3 = ref_generator
assert ray.get([ref1, ref2]) == [0, 1]
# Generators with num_returns="dynamic" will store the exception in the final
# ObjectRef.
try:
ray.get(ref3)
except Exception as error:
print(error)
Note that there is currently a known bug where exceptions will not be propagated for generators that yield more values than expected. This can occur in two cases:
When num_returns is set by the caller, but the generator task returns more than this value.
When a generator task with num_returns="dynamic" is re-executed, and the re-executed task yields more values than the original execution. Note that in general, Ray does not guarantee correctness for task re-execution if the task is nondeterministic, and it is recommended to set @ray.remote(max_retries=0) for such tasks.
# Generators that yield more values than expected currently do not throw an
# exception (the error is only logged).
# See https://github.com/ray-project/ray/issues/28689.
ref1, ref2 = generator.options(num_returns=2).remote()
assert ray.get([ref1, ref2]) == [0, 1]
"""
(generator pid=2375938) 2022-09-28 11:08:51,386 ERROR worker.py:755 --
Unhandled error: Task threw exception, but all return values already
created. This should only occur when using generator tasks.
...
"""
Limitations#
Although a generator function creates ObjectRefs one at a time, currently Ray will not schedule dependent tasks until the entire task is complete and all values have been created. This is similar to the semantics used by tasks that return multiple values as a list.
Actors#
Actors extend the Ray API from functions (tasks) to classes.
An actor is essentially a stateful worker (or a service).
When you instantiate a new actor, Ray creates a new worker and schedules methods of the actor on
that specific worker. The methods can access and mutate the state of that worker.
Python
The ray.remote decorator indicates that instances of the Counter class are actors. Each actor runs in its own Python process.
import ray
@ray.remote
class Counter:
def __init__(self):
self.value = 0
def increment(self):
self.value += 1
return self.value
def get_counter(self):
return self.value
# Create an actor from this class.
counter = Counter.remote()
Java
Ray.actor is used to create actors from regular Java classes.
// A regular Java class.
public class Counter {
private int value = 0;
public int increment() {
this.value += 1;
return this.value;
}
}
// Create an actor from this class.
// `Ray.actor` takes a factory method that can produce
// a `Counter` object. Here, we pass `Counter`'s constructor
// as the argument.
ActorHandle<Counter> counter = Ray.actor(Counter::new).remote();
C++
ray::Actor is used to create actors from regular C++ classes.
// A regular C++ class.
class Counter {
private:
int value = 0;
public:
int Increment() {
value += 1;
return value;
}
};
// Factory function of Counter class.
static Counter *CreateCounter() {
return new Counter();
};
RAY_REMOTE(&Counter::Increment, CreateCounter);
// Create an actor from this class.
// `ray::Actor` takes a factory method that can produce
// a `Counter` object. Here, we pass `Counter`'s factory function
// as the argument.
auto counter = ray::Actor(CreateCounter).Remote();
Use ray list actors from the State API to see actor states:
# This API is only available when you install Ray with `pip install "ray[default]"`.
ray list actors
======== List: 2023-05-25 10:10:50.095099 ========
Stats:
------------------------------
Total: 1
Table:
------------------------------
ACTOR_ID CLASS_NAME STATE JOB_ID NAME NODE_ID PID RAY_NAMESPACE
0 9e783840250840f87328c9f201000000 Counter ALIVE 01000000 13a475571662b784b4522847692893a823c78f1d3fd8fd32a2624923 38906 ef9de910-64fb-4575-8eb5-50573faa3ddf
Specifying required resources#
Specify resource requirements in actors. See Specifying Task or Actor Resource Requirements for more details.
Python
# Specify required resources for an actor.
@ray.remote(num_cpus=2, num_gpus=0.5)
class Actor:
pass
Java
// Specify required resources for an actor.
Ray.actor(Counter::new).setResource("CPU", 2.0).setResource("GPU", 0.5).remote();
C++
// Specify required resources for an actor.
ray::Actor(CreateCounter).SetResource("CPU", 2.0).SetResource("GPU", 0.5).Remote();
Calling the actor#
You can interact with the actor by calling its methods with the remote
operator. You can then call get on the object ref to retrieve the actual
value.
Python
# Call the actor.
obj_ref = counter.increment.remote()
print(ray.get(obj_ref))
1
Java
// Call the actor.
ObjectRef<Integer> objectRef = counter.task(Counter::increment).remote();
Assert.assertTrue(objectRef.get() == 1);
C++
// Call the actor.
auto object_ref = counter.Task(&Counter::Increment).Remote();
assert(*object_ref.Get() == 1);
Methods called on different actors execute in parallel, and methods called on the same actor execute serially in the order you call them. Methods on the same actor share state with one another, as shown below.
Python
# Create ten Counter actors.
counters = [Counter.remote() for _ in range(10)]
# Increment each Counter once and get the results. These tasks all happen in
# parallel.
results = ray.get([c.increment.remote() for c in counters])
print(results)
# Increment the first Counter five times. These tasks are executed serially
# and share state.
results = ray.get([counters[0].increment.remote() for _ in range(5)])
print(results)
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
[2, 3, 4, 5, 6]
Java
// Create ten Counter actors.
List<ActorHandle<Counter>> counters = new ArrayList<>();
for (int i = 0; i < 10; i++) {
counters.add(Ray.actor(Counter::new).remote());
}
// Increment each Counter once and get the results. These tasks all happen in
// parallel.
List<ObjectRef<Integer>> objectRefs = new ArrayList<>();
for (ActorHandle<Counter> counterActor : counters) {
objectRefs.add(counterActor.task(Counter::increment).remote());
}
// prints [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
System.out.println(Ray.get(objectRefs));
// Increment the first Counter five times. These tasks are executed serially
// and share state.
objectRefs = new ArrayList<>();
for (int i = 0; i < 5; i++) {
objectRefs.add(counters.get(0).task(Counter::increment).remote());
}
// prints [2, 3, 4, 5, 6]
System.out.println(Ray.get(objectRefs));
C++
// Create ten Counter actors.
std::vector<ray::ActorHandle<Counter>> counters;
for (int i = 0; i < 10; i++) {
counters.emplace_back(ray::Actor(CreateCounter).Remote());
}
// Increment each Counter once and get the results. These tasks all happen in
// parallel.
std::vector<ray::ObjectRef<int>> object_refs;
for (ray::ActorHandle<Counter> counter_actor : counters) {
object_refs.emplace_back(counter_actor.Task(&Counter::Increment).Remote());
}
// prints 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
auto results = ray::Get(object_refs);
for (const auto &result : results) {
std::cout << *result;
}
// Increment the first Counter five times. These tasks are executed serially
// and share state.
object_refs.clear();
for (int i = 0; i < 5; i++) {
object_refs.emplace_back(counters[0].Task(&Counter::Increment).Remote());
}
// prints 2, 3, 4, 5, 6
results = ray::Get(object_refs);
for (const auto &result : results) {
std::cout << *result;
}
Passing around actor handles#
You can pass actor handles into other tasks. You can also define remote functions or actor methods that use actor handles.
Python
import time
@ray.remote
def f(counter):
for _ in range(10):
time.sleep(0.1)
counter.increment.remote()
Java
public static class MyRayApp {
public static void foo(ActorHandle<Counter> counter) throws InterruptedException {
for (int i = 0; i < 1000; i++) {
TimeUnit.MILLISECONDS.sleep(100);
counter.task(Counter::increment).remote();
}
}
}
C++
void Foo(ray::ActorHandle<Counter> counter) {
for (int i = 0; i < 1000; i++) {
std::this_thread::sleep_for(std::chrono::milliseconds(100));
counter.Task(&Counter::Increment).Remote();
}
}
If you instantiate an actor, you can pass the handle around to various tasks.
Python
counter = Counter.remote()
# Start some tasks that use the actor.
[f.remote(counter) for _ in range(3)]
# Print the counter value.
for _ in range(10):
time.sleep(0.1)
print(ray.get(counter.get_counter.remote()))
0
3
8
10
15
18
20
25
30
30
Java
ActorHandle<Counter> counter = Ray.actor(Counter::new).remote();
// Start some tasks that use the actor.
for (int i = 0; i < 3; i++) {
Ray.task(MyRayApp::foo, counter).remote();
}
// Print the counter value.
for (int i = 0; i < 10; i++) {
TimeUnit.SECONDS.sleep(1);
System.out.println(counter.task(Counter::getCounter).remote().get());
}
C++
auto counter = ray::Actor(CreateCounter).Remote();
// Start some tasks that use the actor.
for (int i = 0; i < 3; i++) {
ray::Task(Foo).Remote(counter);
}
// Print the counter value.
for (int i = 0; i < 10; i++) {
std::this_thread::sleep_for(std::chrono::seconds(1));
std::cout << *counter.Task(&Counter::GetCounter).Remote().Get() << std::endl;
}
Generators#
Ray is compatible with Python generator syntax. See Ray Generators for more details.
Cancelling actor tasks#
Cancel Actor Tasks by calling ray.cancel() on the returned ObjectRef.
Python
import ray
import asyncio
import time
@ray.remote
class Actor:
async def f(self):
try:
await asyncio.sleep(5)
except asyncio.CancelledError:
print("Actor task canceled.")
actor = Actor.remote()
ref = actor.f.remote()
# Wait until task is scheduled.
time.sleep(1)
ray.cancel(ref)
In Ray, Task cancellation behavior is contingent on the Task’s current state:
Unscheduled tasks:
If Ray hasn’t scheduled an Actor Task yet, Ray attempts to cancel the scheduling.
When Ray successfully cancels at this stage, invoking ray.get(actor_task_ref)
produces a TaskCancelledError.
Running actor tasks (regular actor, threaded actor):
For tasks classified as a single-threaded Actor or a multi-threaded Actor,
Ray offers no mechanism for interruption.
Running async actor tasks:
For Tasks classified as async Actors, Ray seeks to cancel the associated asyncio.Task.
This cancellation approach aligns with the standards presented in
asyncio task cancellation.
Note that asyncio.Task won’t be interrupted in the middle of execution if you don’t await within the async function.
Cancellation guarantee:
Ray attempts to cancel Tasks on a best-effort basis, meaning cancellation isn’t always guaranteed.
For example, if the cancellation request doesn’t get through to the executor,
the Task might not be cancelled.
You can check if a Task was successfully cancelled using ray.get(actor_task_ref).
Recursive cancellation:
Ray tracks all child and Actor Tasks. When the recursive=True argument is given,
it cancels all child and Actor Tasks.
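The async cancellation semantics described above can be sketched without Ray using only the standard library: an asyncio.Task is interruptible only at an await point, and cancellation surfaces as asyncio.CancelledError inside the task.

```python
import asyncio

# Stdlib-only sketch (no Ray) of async-actor task cancellation semantics:
# cancellation lands at the task's `await` point as asyncio.CancelledError.
events = []

async def cancellable():
    try:
        await asyncio.sleep(10)  # suspension point: cancellation lands here
    except asyncio.CancelledError:
        events.append("task saw CancelledError")
        raise  # re-raise so the task is marked as cancelled

async def main():
    task = asyncio.create_task(cancellable())
    await asyncio.sleep(0)  # let the task start and reach its await
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        events.append("caller saw CancelledError")

asyncio.run(main())
print(events)
# -> ['task saw CancelledError', 'caller saw CancelledError']
```

If the task never awaited, the cancellation would not interrupt it mid-execution, which mirrors the caveat for async actor tasks above.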
Scheduling#
For each actor, Ray chooses a node to run it on,
and bases the scheduling decision on a few factors like
the actor’s resource requirements
and the specified scheduling strategy.
See Ray scheduling for more details.
Fault Tolerance#
By default, Ray actors won’t be restarted and
actor tasks won’t be retried when actors crash unexpectedly.
You can change this behavior by setting
max_restarts and max_task_retries options
in ray.remote() and .options().
See Ray fault tolerance for more details.
FAQ: Actors, Workers and Resources#
What’s the difference between a worker and an actor?
Each “Ray worker” is a Python process.
Ray treats workers differently for tasks and actors. For tasks, Ray uses a “Ray worker” to execute multiple Ray tasks. For actors, Ray starts a “Ray worker” as a dedicated Ray actor.
Tasks: When Ray starts on a machine, a number of Ray workers start automatically (1 per CPU by default). Ray uses them to execute tasks (like a process pool). If you execute 8 tasks with num_cpus=2, and the total number of CPUs is 16 (ray.cluster_resources()["CPU"] == 16), you end up with 8 of your 16 workers idling.
Actor: A Ray Actor is also a “Ray worker” but you instantiate it at runtime with actor_cls.remote(). All of its methods run on the same process, using the same resources Ray designates when you define the Actor. Note that unlike tasks, Ray doesn’t reuse the Python processes that run Ray Actors. Ray terminates them when you delete the Actor.
To maximally utilize your resources, you want to maximize the time that
your workers work. You also want to allocate enough cluster resources
so Ray can run all of your needed actors and any other tasks you
define. This also implies that Ray schedules tasks more flexibly,
and that if you don’t need the stateful part of an actor, it’s better
to use tasks.
Task Events#
By default, Ray traces the execution of actor tasks, reporting task status events and profiling events
that Ray Dashboard and State API use.
You can disable task event reporting for the actor by setting the enable_task_events option to False in ray.remote() and .options(). This setting reduces the overhead of task execution by reducing the amount of data Ray sends to the Ray Dashboard.
You can also disable task event reporting for some actor methods by setting the enable_task_events option to False in ray.remote() and .options() on the actor method.
Method settings override the actor setting:
@ray.remote
class FooActor:
# Disable task events reporting for this method.
@ray.method(enable_task_events=False)
def foo(self):
pass
foo_actor = FooActor.remote()
ray.get(foo_actor.foo.remote())
More about Ray Actors#
Named Actors#
An actor can be given a unique name within its namespace.
This allows you to retrieve the actor from any job in the Ray cluster.
This can be useful if you cannot directly
pass the actor handle to the task that needs it, or if you are trying to
access an actor launched by another driver.
Note that the actor will still be garbage-collected if no handles to it
exist. See Actor Lifetimes for more details.
Python
import ray
@ray.remote
class Counter:
pass
# Create an actor with a name
counter = Counter.options(name="some_name").remote()
# Retrieve the actor later somewhere
counter = ray.get_actor("some_name")
Java
// Create an actor with a name.
ActorHandle<Counter> counter = Ray.actor(Counter::new).setName("some_name").remote();
...
// Retrieve the actor later somewhere
Optional<ActorHandle<Counter>> counter = Ray.getActor("some_name");
Assert.assertTrue(counter.isPresent());
C++
// Create an actor with a globally unique name
ActorHandle<Counter> counter = ray::Actor(CreateCounter).SetGlobalName("some_name").Remote();
...
// Retrieve the actor later somewhere
boost::optional<ray::ActorHandle<Counter>> counter = ray::GetGlobalActor("some_name");
We also support non-global named actors in C++, which means that the actor name is only valid within the job and the actor cannot be accessed from another job
// Create an actor with a job-scope-unique name
ActorHandle<Counter> counter = ray::Actor(CreateCounter).SetName("some_name").Remote();
...
// Retrieve the actor later somewhere in the same job
boost::optional<ray::ActorHandle<Counter>> counter = ray::GetActor("some_name");
Note
Named actors are scoped by namespace. If no namespace is assigned, they will
be placed in an anonymous namespace by default.
Python
import ray
@ray.remote
class Actor:
pass
# driver_1.py
# Job 1 creates an actor, "orange" in the "colors" namespace.
ray.init(address="auto", namespace="colors")
Actor.options(name="orange", lifetime="detached").remote()
# driver_2.py
# Job 2 is now connecting to a different namespace.
ray.init(address="auto", namespace="fruit")
# This fails because "orange" was defined in the "colors" namespace.
ray.get_actor("orange")
# You can also specify the namespace explicitly.
ray.get_actor("orange", namespace="colors")
# driver_3.py
# Job 3 connects to the original "colors" namespace
ray.init(address="auto", namespace="colors")
# This returns the "orange" actor we created in the first job.
ray.get_actor("orange")
Java
// A regular Java class.
public class Actor {
}
// Driver1.java
// Job 1 creates an actor, "orange" in the "colors" namespace.
System.setProperty("ray.job.namespace", "colors");
Ray.init();
Ray.actor(Actor::new).setName("orange").remote();
// Driver2.java
// Job 2 is now connecting to a different namespace.
System.setProperty("ray.job.namespace", "fruits");
Ray.init();
// This fails because "orange" was defined in the "colors" namespace.
Optional<ActorHandle<Actor>> actor = Ray.getActor("orange");
Assert.assertFalse(actor.isPresent()); // actor.isPresent() is false.
// Driver3.java
System.setProperty("ray.job.namespace", "colors");
Ray.init();
// This returns the "orange" actor we created in the first job.
Optional<ActorHandle<Actor>> actor = Ray.getActor("orange");
Assert.assertTrue(actor.isPresent()); // actor.isPresent() is true.
Get-Or-Create a Named Actor#
A common use case is to create an actor only if it doesn’t exist.
Ray provides a get_if_exists option for actor creation that does this out of the box.
This method is available after you set a name for the actor via .options().
If the actor already exists, a handle to the actor will be returned
and the arguments will be ignored. Otherwise, a new actor will be
created with the specified arguments.
Python
import ray
@ray.remote
class Greeter:
def __init__(self, value):
self.value = value
def say_hello(self):
return self.value
# Actor `g1` doesn't yet exist, so it is created with the given args.
a = Greeter.options(name="g1", get_if_exists=True).remote("Old Greeting")
assert ray.get(a.say_hello.remote()) == "Old Greeting"
# Actor `g1` already exists, so it is returned (new args are ignored).
b = Greeter.options(name="g1", get_if_exists=True).remote("New Greeting")
assert ray.get(b.say_hello.remote()) == "Old Greeting"
Java
// This feature is not yet available in Java.
C++
// This feature is not yet available in C++.
Actor Lifetimes#
Separately, actor lifetimes can be decoupled from the job, allowing an actor to persist even after the driver process of the job exits. We call these actors detached.
Python
counter = Counter.options(name="CounterActor", lifetime="detached").remote()
The CounterActor will be kept alive even after the driver running the above
script exits. Therefore it is possible to run the following script in a different
driver:
counter = ray.get_actor("CounterActor")
Note that an actor can be named but not detached. If we only specified the
name without specifying lifetime="detached", then the CounterActor can
only be retrieved as long as the original driver is still running.
Java
System.setProperty("ray.job.namespace", "lifetime");
Ray.init();
ActorHandle<Counter> counter = Ray.actor(Counter::new).setName("some_name").setLifetime(ActorLifetime.DETACHED).remote();
The CounterActor will be kept alive even after the driver process running the
above code exits. Therefore it is possible to run the following code in a
different driver:
System.setProperty("ray.job.namespace", "lifetime");
Ray.init();
Optional<ActorHandle<Counter>> counter = Ray.getActor("some_name");
Assert.assertTrue(counter.isPresent());
C++
Customizing lifetime of an actor hasn’t been implemented in C++ yet.
Unlike normal actors, detached actors are not automatically garbage-collected by Ray.
Detached actors must be manually destroyed once you are sure that they are no
longer needed. To do this, use ray.kill to manually terminate the actor.
After this call, the actor’s name may be reused.
Terminating Actors#
Actor processes will be terminated automatically when all copies of the
actor handle have gone out of scope in Python, or if the original creator
process dies.
Note that automatic termination of actors is not yet supported in Java or C++.
Manual termination via an actor handle#
In most cases, Ray will automatically terminate actors that have gone out of
scope, but you may sometimes need to terminate an actor forcefully. This should
be reserved for cases where an actor is unexpectedly hanging or leaking
resources, and for detached actors, which must be
manually destroyed.
Python
import ray
@ray.remote
class Actor:
pass
actor_handle = Actor.remote()
ray.kill(actor_handle)
# This will not go through the normal Python sys.exit
# teardown logic, so any exit handlers installed in
# the actor using ``atexit`` will not be called.
Java
actorHandle.kill();
// This will not go through the normal Java System.exit teardown logic, so any
// shutdown hooks installed in the actor using ``Runtime.addShutdownHook(...)`` will
// not be called.
C++
actor_handle.Kill();
// This will not go through the normal C++ std::exit
// teardown logic, so any exit handlers installed in
// the actor using ``std::atexit`` will not be called.
This will cause the actor to immediately exit its process, causing any current,
pending, and future tasks to fail with a RayActorError. If you would like
Ray to automatically restart the actor, make sure to set a nonzero
max_restarts in the @ray.remote options for the actor, then pass the
flag no_restart=False to ray.kill.
For named and detached actors, calling ray.kill on
an actor handle destroys the actor and allows the name to be reused.
Use ray list actors --detail from State API to see the death cause of dead actors:
# This API is only available when you download Ray via `pip install "ray[default]"`
ray list actors --detail
---
- actor_id: e8702085880657b355bf7ef001000000
class_name: Actor
state: DEAD
job_id: '01000000'
name: ''
node_id: null
pid: 0
ray_namespace: dbab546b-7ce5-4cbb-96f1-d0f64588ae60
serialized_runtime_env: '{}'
required_resources: {}
death_cause:
actor_died_error_context: # <---- The error message explains why the actor exited.
error_message: The actor is dead because `ray.kill` killed it.
owner_id: 01000000ffffffffffffffffffffffffffffffffffffffffffffffff
owner_ip_address: 127.0.0.1
ray_namespace: dbab546b-7ce5-4cbb-96f1-d0f64588ae60
class_name: Actor
actor_id: e8702085880657b355bf7ef001000000
never_started: true
node_ip_address: ''
pid: 0
name: ''
is_detached: false
placement_group_id: null
repr_name: ''
Manual termination within the actor#
If necessary, you can manually terminate an actor from within one of the actor methods.
This will kill the actor process and release the resources associated with the actor.
Python
@ray.remote
class Actor:
def exit(self):
ray.actor.exit_actor()
actor = Actor.remote()
actor.exit.remote()
This approach should generally not be necessary as actors are automatically garbage
collected. The ObjectRef resulting from the task can be waited on to wait
for the actor to exit (calling ray.get() on it will raise a RayActorError).
Java
Ray.exitActor();
Garbage collection for actors hasn’t been implemented yet, so this is currently the
only way to terminate an actor gracefully. The ObjectRef resulting from the task
can be waited on to wait for the actor to exit (calling ObjectRef::get on it will
throw a RayActorException).
C++
ray::ExitActor();
Garbage collection for actors hasn’t been implemented yet, so this is currently the
only way to terminate an actor gracefully. The ObjectRef resulting from the task
can be waited on to wait for the actor to exit (calling ObjectRef::Get on it will
throw a RayActorException).
Note that this method of termination waits until any previously submitted
tasks finish executing and then exits the process gracefully with sys.exit.
You can see that the actor is dead as a result of the user’s exit_actor() call:
# This API is only available when you download Ray via `pip install "ray[default]"`
ray list actors --detail
---
- actor_id: 070eb5f0c9194b851bb1cf1602000000
class_name: Actor
state: DEAD
job_id: '02000000'
name: ''
node_id: 47ccba54e3ea71bac244c015d680e202f187fbbd2f60066174a11ced
pid: 47978
ray_namespace: 18898403-dda0-485a-9c11-e9f94dffcbed
serialized_runtime_env: '{}'
required_resources: {}
death_cause:
actor_died_error_context:
error_message: 'The actor is dead because its worker process has died.
Worker exit type: INTENDED_USER_EXIT Worker exit detail: Worker exits
by an user request. exit_actor() is called.'
owner_id: 02000000ffffffffffffffffffffffffffffffffffffffffffffffff
owner_ip_address: 127.0.0.1
node_ip_address: 127.0.0.1
pid: 47978
ray_namespace: 18898403-dda0-485a-9c11-e9f94dffcbed
class_name: Actor
actor_id: 070eb5f0c9194b851bb1cf1602000000
name: ''
never_started: false
AsyncIO / Concurrency for Actors#
Within a single actor process, it is possible to execute concurrent threads.
Ray offers two types of concurrency within an actor:
async execution
threading
Keep in mind that Python’s Global Interpreter Lock (GIL) only allows one thread of Python code to run at a time.
This means that if you are just parallelizing Python code, you won’t get true parallelism. If you call Numpy, Cython, Tensorflow, or PyTorch code, these libraries release the GIL when calling into C/C++ functions.
Neither the Threaded Actors nor the AsyncIO for Actors model allows you to bypass the GIL.
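The GIL point above can be grounded with a stdlib-only sketch: threads cannot speed up pure-Python bytecode, but they do overlap blocking calls (here time.sleep, standing in for I/O or GIL-releasing native code).

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stdlib sketch of the GIL discussion above: four 0.3 s blocking waits on
# four threads finish well under the 1.2 s a serial run would need, because
# time.sleep (like I/O or GIL-releasing C extensions) doesn't hold the GIL.
def blocking_call(_):
    time.sleep(0.3)  # stand-in for I/O or GIL-releasing native code
    return True

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(blocking_call, range(4)))
elapsed = time.perf_counter() - start
print(results, elapsed < 1.0)
```

Replacing the sleep with a pure-Python loop would show no such speedup, since the GIL serializes the bytecode across threads.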
AsyncIO for Actors#
Since Python 3.5, it is possible to write concurrent code using the
async/await syntax.
Ray natively integrates with asyncio. You can use Ray alongside popular
async frameworks like aiohttp, aioredis, etc.
import ray
import asyncio
@ray.remote
class AsyncActor:
# Multiple invocations of this method can be running in
# the event loop at the same time.
async def run_concurrent(self):
print("started")
await asyncio.sleep(2) # concurrent workload here
print("finished")
actor = AsyncActor.remote()
# regular ray.get
ray.get([actor.run_concurrent.remote() for _ in range(4)])
# async ray.get
async def async_get():
await actor.run_concurrent.remote()
asyncio.run(async_get())
(AsyncActor pid=40293) started
(AsyncActor pid=40293) started
(AsyncActor pid=40293) started
(AsyncActor pid=40293) started
(AsyncActor pid=40293) finished
(AsyncActor pid=40293) finished
(AsyncActor pid=40293) finished
(AsyncActor pid=40293) finished
...
ObjectRefs as asyncio.Futures#
ObjectRefs can be translated to asyncio.Futures. This feature
makes it possible to await on Ray futures in existing concurrent
applications.
Instead of:
import ray
@ray.remote
def some_task():
return 1
ray.get(some_task.remote())
ray.wait([some_task.remote()])
you can do:
import ray
import asyncio
@ray.remote
def some_task():
return 1
async def await_obj_ref():
await some_task.remote()
await asyncio.wait([some_task.remote()])
asyncio.run(await_obj_ref())
Please refer to the asyncio docs
for more asyncio patterns, including timeouts and asyncio.gather.
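As a stdlib-only illustration of those two patterns (with Ray, ObjectRefs can be awaited the same way inside these helpers):

```python
import asyncio

# Stdlib sketch of asyncio.gather and timeouts, the patterns mentioned above.
async def work(x):
    await asyncio.sleep(0)
    return x * 2

async def main():
    # Run several awaitables concurrently; results come back in call order.
    doubled = await asyncio.gather(work(1), work(2), work(3))
    # Bound an awaitable with a timeout.
    try:
        await asyncio.wait_for(asyncio.sleep(5), timeout=0.01)
        timed_out = False
    except asyncio.TimeoutError:
        timed_out = True
    return doubled, timed_out

doubled, timed_out = asyncio.run(main())
print(doubled, timed_out)  # -> [2, 4, 6] True
```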
If you need to directly access the future object, you can call:
import asyncio
async def convert_to_asyncio_future():
ref = some_task.remote()
fut: asyncio.Future = asyncio.wrap_future(ref.future())
print(await fut)
asyncio.run(convert_to_asyncio_future())
1
ObjectRefs as concurrent.futures.Futures#
ObjectRefs can also be wrapped into concurrent.futures.Future objects. This
is useful for interfacing with existing concurrent.futures APIs:
import concurrent.futures
refs = [some_task.remote() for _ in range(4)]
futs = [ref.future() for ref in refs]
for fut in concurrent.futures.as_completed(futs):
assert fut.done()
print(fut.result())
1
1
1
1
Defining an Async Actor#
By using async method definitions, Ray automatically detects whether an actor supports async calls.
import asyncio
@ray.remote
class AsyncActor:
async def run_task(self):
print("started")
await asyncio.sleep(2) # Network, I/O task here
print("ended")
actor = AsyncActor.remote()
# All 5 tasks should start at once. After 2 seconds they should all finish
# at about the same time.
ray.get([actor.run_task.remote() for _ in range(5)])
(AsyncActor pid=3456) started
(AsyncActor pid=3456) started
(AsyncActor pid=3456) started
(AsyncActor pid=3456) started
(AsyncActor pid=3456) started
(AsyncActor pid=3456) ended
(AsyncActor pid=3456) ended
(AsyncActor pid=3456) ended
(AsyncActor pid=3456) ended
(AsyncActor pid=3456) ended
Under the hood, Ray runs all of the methods inside a single Python event loop.
Please note that running blocking ray.get or ray.wait inside an async
actor method is not allowed, because ray.get blocks the execution
of the event loop.
In async actors, only one task can be running at any point in time (though tasks can be multiplexed). There is only one thread in an AsyncActor! See Threaded Actors if you want a threadpool.
Setting concurrency in Async Actors#
You can set the number of “concurrent” tasks running at once using the
max_concurrency flag. By default, 1000 tasks can be running concurrently.
import asyncio
@ray.remote
class AsyncActor:
async def run_task(self):
print("started")
await asyncio.sleep(1) # Network, I/O task here
print("ended")
actor = AsyncActor.options(max_concurrency=2).remote()
# Only 2 tasks will be running concurrently. Once 2 finish, the next 2 should run.
ray.get([actor.run_task.remote() for _ in range(8)])
(AsyncActor pid=5859) started
(AsyncActor pid=5859) started
(AsyncActor pid=5859) ended
(AsyncActor pid=5859) ended
(AsyncActor pid=5859) started
(AsyncActor pid=5859) started
(AsyncActor pid=5859) ended
(AsyncActor pid=5859) ended
(AsyncActor pid=5859) started
(AsyncActor pid=5859) started
(AsyncActor pid=5859) ended
(AsyncActor pid=5859) ended
(AsyncActor pid=5859) started
(AsyncActor pid=5859) started
(AsyncActor pid=5859) ended
(AsyncActor pid=5859) ended
Threaded Actors#
Sometimes, asyncio is not an ideal solution for your actor. For example, you may
have one method that performs some computation heavy task while blocking the event loop, not giving up control via await. This would hurt the performance of an Async Actor because Async Actors can only execute 1 task at a time and rely on await to context switch.
Instead, you can use the max_concurrency Actor options without any async methods, allowing you to achieve threaded concurrency (like a thread pool).
Warning
When there is at least one async def method in actor definition, Ray
will recognize the actor as AsyncActor instead of ThreadedActor.
@ray.remote
class ThreadedActor:
def task_1(self): print("I'm running in a thread!")
def task_2(self): print("I'm running in another thread!")
a = ThreadedActor.options(max_concurrency=2).remote()
ray.get([a.task_1.remote(), a.task_2.remote()])
(ThreadedActor pid=4822) I'm running in a thread!
(ThreadedActor pid=4822) I'm running in another thread!
Each invocation of the threaded actor runs in a thread pool. The size of the thread pool is limited by the max_concurrency value.
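Because threaded-actor methods run concurrently, any mutable state they share must be protected explicitly. Below is a plain-Python sketch of the equivalent behavior using concurrent.futures, where the pool size mirrors max_concurrency=2 (no Ray required; the class and counts are illustrative):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class ThreadedCounter:
    """Stands in for a threaded actor whose methods run in a pool."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()  # Protects self.value across threads.

    def add(self, n: int) -> int:
        with self._lock:
            self.value += n
            return self.value

counter = ThreadedCounter()
# The executor plays the role of max_concurrency=2.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(counter.add, 1) for _ in range(100)]
    results = [f.result() for f in futures]

print(counter.value)  # 100: the lock keeps the increments consistent.
```

Without the lock, concurrent `self.value += n` updates could interleave and lose increments; the same caution applies inside a threaded Ray actor.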
AsyncIO for Remote Tasks#
We don’t support asyncio for remote tasks. The following snippet will fail:
@ray.remote
async def f():
pass
Instead, you can wrap the async function with a wrapper to run the task synchronously:
async def f():
pass
@ray.remote
def wrapper():
import asyncio
asyncio.run(f())
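This pattern works because asyncio.run spins up a private event loop inside the otherwise synchronous task, so the task can still fan out over many coroutines internally. A standalone sketch of the same idea, with illustrative coroutine names and delays (no Ray required):

```python
import asyncio

async def fetch(i: int) -> int:
    await asyncio.sleep(0.01)  # Stand-in for real async I/O.
    return i * 2

async def gather_all() -> list:
    return await asyncio.gather(fetch(0), fetch(1), fetch(2))

def wrapper() -> list:
    # Runs like a plain sync function, exactly as a sync Ray task would:
    # the event loop exists only for the duration of this call.
    return asyncio.run(gather_all())

print(wrapper())  # [0, 2, 4]
```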
Limiting Concurrency Per-Method with Concurrency Groups#
Besides setting the max concurrency overall for an actor, Ray allows methods to be separated into concurrency groups, each with its own thread(s). This allows you to limit the concurrency per method, e.g., giving a health-check method its own concurrency quota separate from request-serving methods.
Tip
Concurrency groups work with both asyncio and threaded actors. The syntax is the same.
Defining Concurrency Groups#
This defines two concurrency groups, “io” with max concurrency = 2 and
“compute” with max concurrency = 4. The methods f1 and f2 are
placed in the “io” group, and the methods f3 and f4 are placed
into the “compute” group. Note that there is always a default
concurrency group for actors, which has a default concurrency of 1000
for AsyncIO actors and 1 otherwise.
Python
You can define concurrency groups for actors using the concurrency_group decorator argument:
import ray
@ray.remote(concurrency_groups={"io": 2, "compute": 4})
class AsyncIOActor:
def __init__(self):
pass
@ray.method(concurrency_group="io")
async def f1(self):
pass
@ray.method(concurrency_group="io")
async def f2(self):
pass
@ray.method(concurrency_group="compute")
async def f3(self):
pass
@ray.method(concurrency_group="compute")
async def f4(self):
pass
async def f5(self):
pass
a = AsyncIOActor.remote()
a.f1.remote() # executed in the "io" group.
a.f2.remote() # executed in the "io" group.
a.f3.remote() # executed in the "compute" group.
a.f4.remote() # executed in the "compute" group.
a.f5.remote() # executed in the default group.
Java
You can define concurrency groups for concurrent actors using the API setConcurrencyGroups() argument:
class ConcurrentActor {
public long f1() {
return Thread.currentThread().getId();
}
public long f2() {
return Thread.currentThread().getId();
}
public long f3(int a, int b) {
return Thread.currentThread().getId();
}
public long f4() {
return Thread.currentThread().getId();
}
public long f5() {
return Thread.currentThread().getId();
}
}
ConcurrencyGroup group1 =
new ConcurrencyGroupBuilder<ConcurrentActor>()
.setName("io")
.setMaxConcurrency(1)
.addMethod(ConcurrentActor::f1)
.addMethod(ConcurrentActor::f2)
.build();
ConcurrencyGroup group2 =
new ConcurrencyGroupBuilder<ConcurrentActor>()
.setName("compute")
.setMaxConcurrency(1)
.addMethod(ConcurrentActor::f3)
.addMethod(ConcurrentActor::f4)
.build();
ActorHandle<ConcurrentActor> myActor = Ray.actor(ConcurrentActor::new)
.setConcurrencyGroups(group1, group2)
.remote();
myActor.task(ConcurrentActor::f1).remote(); // executed in the "io" group.
myActor.task(ConcurrentActor::f2).remote(); // executed in the "io" group.
myActor.task(ConcurrentActor::f3, 3, 5).remote(); // executed in the "compute" group.
myActor.task(ConcurrentActor::f4).remote(); // executed in the "compute" group.
myActor.task(ConcurrentActor::f5).remote(); // executed in the "default" group.
Default Concurrency Group#
By default, methods are placed in a default concurrency group which has a concurrency limit of 1000 for AsyncIO actors and 1 otherwise.
The concurrency of the default group can be changed by setting the max_concurrency actor option.
Python
The following actor has 2 concurrency groups: “io” and “default”.
The max concurrency of “io” is 2, and the max concurrency of “default” is 10.
@ray.remote(concurrency_groups={"io": 2})
class AsyncIOActor:
async def f1(self):
pass
actor = AsyncIOActor.options(max_concurrency=10).remote()
Java
The following concurrent actor has 2 concurrency groups: “io” and “default”.
The max concurrency of “io” is 2, and the max concurrency of “default” is 10.
class ConcurrentActor {
  public long f1() {
    return Thread.currentThread().getId();
  }
}
ConcurrencyGroup group =
new ConcurrencyGroupBuilder<ConcurrentActor>()
.setName("io")
.setMaxConcurrency(2)
.addMethod(ConcurrentActor::f1)
.build();
ActorHandle<ConcurrentActor> myActor = Ray.actor(ConcurrentActor::new)
.setConcurrencyGroups(group)
.setMaxConcurrency(10)
.remote();
Setting the Concurrency Group at Runtime#
You can also dispatch actor methods into a specific concurrency group at runtime.
The following snippet demonstrates setting the concurrency group of the
f2 method dynamically at runtime.
Python
You can use the .options method.
# Executed in the "io" group (as defined in the actor class).
a.f2.options().remote()
# Executed in the "compute" group.
a.f2.options(concurrency_group="compute").remote()
Java
You can use setConcurrencyGroup method.
// Executed in the "io" group (as defined in the actor creation).
myActor.task(ConcurrentActor::f2).remote();
// Executed in the "compute" group.
myActor.task(ConcurrentActor::f2).setConcurrencyGroup("compute").remote();
Utility Classes#
Actor Pool#
Python
The ray.util module contains a utility class, ActorPool.
This class is similar to multiprocessing.Pool and lets you schedule Ray tasks over a fixed pool of actors.
import ray
from ray.util import ActorPool
@ray.remote
class Actor:
def double(self, n):
return n * 2
a1, a2 = Actor.remote(), Actor.remote()
pool = ActorPool([a1, a2])
# pool.map(..) returns a Python generator; see ActorPool.map.
gen = pool.map(lambda a, v: a.double.remote(v), [1, 2, 3, 4])
print(list(gen))
# [2, 4, 6, 8]
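For comparison, the multiprocessing.Pool analogy can be made concrete with the standard library alone. A ThreadPool is used here so the sketch runs anywhere without spawning processes; the pool size of 2 mirrors the two-actor pool above:

```python
from multiprocessing.pool import ThreadPool

def double(n: int) -> int:
    return n * 2

# Two workers stand in for the two actors; map preserves input order.
with ThreadPool(processes=2) as pool:
    print(pool.map(double, [1, 2, 3, 4]))  # [2, 4, 6, 8]
```

The key difference is that ActorPool dispatches over long-lived remote actors that can hold state, whereas a local pool dispatches over stateless worker threads or processes.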
See the package reference for more information.
Java
Actor pool hasn’t been implemented in Java yet.
C++
Actor pool hasn’t been implemented in C++ yet.
Message passing using Ray Queue#
Sometimes just using one signal to synchronize is not enough. If you need to send data among many tasks or
actors, you can use ray.util.queue.Queue.
import ray
from ray.util.queue import Queue, Empty
ray.init()
# You can pass this object around to different tasks/actors
queue = Queue(maxsize=100)
@ray.remote
def consumer(id, queue):
try:
while True:
next_item = queue.get(block=True, timeout=1)
print(f"consumer {id} got work {next_item}")
except Empty:
pass
[queue.put(i) for i in range(10)]
print("Put work 1 - 10 to queue...")
consumers = [consumer.remote(id, queue) for id in range(2)]
ray.get(consumers)
Ray’s Queue offers an API similar to Python’s asyncio.Queue and queue.Queue.
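Because the API mirrors queue.Queue, the consumer pattern above can be prototyped locally with the standard library before moving to Ray. In this sketch, threads stand in for the remote consumers, and the 0.2-second timeout is an illustrative choice:

```python
import queue
import threading

q = queue.Queue(maxsize=100)
received = []  # list.append is atomic under the GIL, so this is safe here.

def consumer(cid: int):
    try:
        while True:
            item = q.get(block=True, timeout=0.2)
            received.append((cid, item))
    except queue.Empty:
        pass  # Timed out: no more work.

for i in range(10):
    q.put(i)

threads = [threading.Thread(target=consumer, args=(cid,)) for cid in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(item for _, item in received))  # [0, 1, ..., 9]
```

Each work item is consumed exactly once, by whichever consumer gets to it first, just as with the Ray version.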
Out-of-band Communication#
Typically, Ray actor communication is done through actor method calls and data is shared through the distributed object store.
However, in some use cases out-of-band communication can be useful.
Wrapping Library Processes#
Many libraries already have mature, high-performance internal communication stacks and
they leverage Ray as a language-integrated actor scheduler.
The actual communication between actors is mostly done out-of-band using existing communication stacks.
For example, Horovod-on-Ray uses NCCL or MPI-based collective communications, and RayDP uses Spark’s internal RPC and object manager.
See Ray Distributed Library Patterns for more details.
Ray Collective#
Ray’s collective communication library (ray.util.collective) allows efficient out-of-band collective and point-to-point communication between distributed CPUs or GPUs.
See Ray Collective for more details.
HTTP Server#
You can start an HTTP server inside the actor and expose HTTP endpoints to clients,
so that users outside of the Ray cluster can communicate with the actor.
Python
import ray
import asyncio
import requests
from aiohttp import web
@ray.remote
class Counter:
async def __init__(self):
self.counter = 0
asyncio.get_running_loop().create_task(self.run_http_server())
async def run_http_server(self):
app = web.Application()
app.add_routes([web.get("/", self.get)])
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, "127.0.0.1", 25001)
await site.start()
async def get(self, request):
return web.Response(text=str(self.counter))
async def increment(self):
self.counter = self.counter + 1
ray.init()
counter = Counter.remote()
[ray.get(counter.increment.remote()) for i in range(5)]
r = requests.get("http://127.0.0.1:25001/")
assert r.text == "5"
Similarly, you can expose other types of servers as well (e.g., gRPC servers).
Limitations#
When using out-of-band communication with Ray actors, keep in mind that Ray does not manage the calls between actors. This means that functionality like distributed reference counting will not work with out-of-band communication, so you should take care not to pass object references in this way.
Actor Task Execution Order#
Synchronous, Single-Threaded Actor#
In Ray, an actor receives tasks from multiple submitters (including the driver and workers).
For tasks received from the same submitter, a synchronous, single-threaded actor executes
them in the order they were submitted, if the actor tasks never retry.
In other words, a given task will not be executed until previously submitted tasks from
the same submitter have finished execution.
For actors where max_task_retries is set to a non-zero number, the task
execution order is not guaranteed when task retries occur.
Python
import ray
@ray.remote
class Counter:
def __init__(self):
self.value = 0
def add(self, addition):
self.value += addition
return self.value
counter = Counter.remote()
# For tasks from the same submitter,
# they are executed according to submission order.
value0 = counter.add.remote(1)
value1 = counter.add.remote(2)
# Output: 1. The first submitted task is executed first.
print(ray.get(value0))
# Output: 3. The later submitted task is executed later.
print(ray.get(value1))
1
3
However, the actor does not guarantee the execution order of the tasks from different
submitters. For example, suppose an unfulfilled argument blocks a previously submitted
task. In this case, the actor can still execute tasks submitted by a different worker.
Python
import time
import ray
@ray.remote
class Counter:
def __init__(self):
self.value = 0
def add(self, addition):
self.value += addition
return self.value
counter = Counter.remote()
# Submit task from a worker
@ray.remote
def submitter(value):
return ray.get(counter.add.remote(value))
# Simulate delayed result resolution.
@ray.remote
def delayed_resolution(value):
time.sleep(5)
return value
# Submit tasks from different workers, with
# the first submitted task waiting for
# dependency resolution.
value0 = submitter.remote(delayed_resolution.remote(1))
value1 = submitter.remote(2)
# Output: 3. The first submitted task is executed later.
print(ray.get(value0))
# Output: 2. The later submitted task is executed first.
print(ray.get(value1))
3
2
Asynchronous or Threaded Actor#
Asynchronous or threaded actors do not guarantee the
task execution order. This means the system might execute a task
even though previously submitted tasks are pending execution.
Python
import time
import ray
@ray.remote
class AsyncCounter:
def __init__(self):
self.value = 0
async def add(self, addition):
self.value += addition
return self.value
counter = AsyncCounter.remote()
# Simulate delayed result resolution.
@ray.remote
def delayed_resolution(value):
time.sleep(5)
return value
# Submit tasks from the driver, with
# the first submitted task waiting for
# dependency resolution.
value0 = counter.add.remote(delayed_resolution.remote(1))
value1 = counter.add.remote(2)
# Output: 3. The first submitted task is executed later.
print(ray.get(value0))
# Output: 2. The later submitted task is executed first.
print(ray.get(value1))
3
2
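The reordering shown above is ordinary event-loop behavior: a Ray-free asyncio sketch in which the first submitted coroutine waits on a slow dependency and therefore finishes last (the names and delays are illustrative):

```python
import asyncio

completed = []

async def add(name: str, delay: float):
    await asyncio.sleep(delay)  # Stand-in for waiting on an argument.
    completed.append(name)

async def main():
    # "value0" is submitted first, but is blocked longest -> finishes last.
    await asyncio.gather(add("value0", 0.05), add("value1", 0.01))

asyncio.run(main())
print(completed)  # ['value1', 'value0']
```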
Objects#
In Ray, tasks and actors create and compute on objects. We refer to these objects as remote objects because they can be stored anywhere in a Ray cluster, and we use object refs to refer to them. Remote objects are cached in Ray’s distributed shared-memory object store, and there is one object store per node in the cluster. In the cluster setting, a remote object can live on one or many nodes, independent of who holds the object ref(s).
An object ref is essentially a pointer or a unique ID that can be used to refer to a
remote object without seeing its value. If you’re familiar with futures, Ray object refs are conceptually
similar.
Object refs can be created in two ways.
They are returned by remote function calls.
They are returned by ray.put().
Python
import ray
# Put an object in Ray's object store.
y = 1
object_ref = ray.put(y)
Java
// Put an object in Ray's object store.
int y = 1;
ObjectRef<Integer> objectRef = Ray.put(y);
C++
// Put an object in Ray's object store.
int y = 1;
ray::ObjectRef<int> object_ref = ray::Put(y);
Note
Remote objects are immutable. That is, their values cannot be changed after
creation. This allows remote objects to be replicated in multiple object
stores without needing to synchronize the copies.
Fetching Object Data#
You can use the ray.get() method to fetch the result of a remote object from an object ref.
If the current node’s object store does not contain the object, the object is downloaded.
Python
If the object is a numpy array
or a collection of numpy arrays, the get call is zero-copy and returns arrays backed by shared object store memory.
Otherwise, we deserialize the object data into a Python object.
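The zero-copy arrays mentioned here come back read-only because they are backed by shared object-store memory. Plain NumPy can sketch the resulting behavior and the usual workaround of copying before mutating (no Ray required; the array values are illustrative):

```python
import numpy as np

arr = np.ones(4)
arr.setflags(write=False)  # Simulate a shared-memory array from the store.

try:
    arr[0] = 5.0  # Writing to a read-only array raises ValueError.
except ValueError as err:
    print("write failed:", err)

writable = arr.copy()  # Copying detaches the data from shared memory.
writable[0] = 5.0
print(writable)  # [5. 1. 1. 1.]
```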
import ray
import time
# Get the value of one object ref.
obj_ref = ray.put(1)
assert ray.get(obj_ref) == 1
# Get the values of multiple object refs in parallel.
assert ray.get([ray.put(i) for i in range(3)]) == [0, 1, 2]
# You can also set a timeout to return early from a ``get``
# that's blocking for too long.
from ray.exceptions import GetTimeoutError
# ``GetTimeoutError`` is a subclass of ``TimeoutError``.
@ray.remote
def long_running_function():
time.sleep(8)
obj_ref = long_running_function.remote()
try:
ray.get(obj_ref, timeout=4)
except GetTimeoutError: # You can capture the standard "TimeoutError" instead
print("`get` timed out.")
`get` timed out.
Java
// Get the value of one object ref.
ObjectRef<Integer> objRef = Ray.put(1);
Assert.assertTrue(objRef.get() == 1);
// You can also set a timeout(ms) to return early from a ``get`` that's blocking for too long.
Assert.assertTrue(objRef.get(1000) == 1);
// Get the values of multiple object refs in parallel.
List<ObjectRef<Integer>> objectRefs = new ArrayList<>();
for (int i = 0; i < 3; i++) {
objectRefs.add(Ray.put(i));
}
List<Integer> results = Ray.get(objectRefs);
Assert.assertEquals(results, ImmutableList.of(0, 1, 2));
// Ray.get timeout example: Ray.get will throw a RayTimeoutException if it times out.
public class MyRayApp {
public static int slowFunction() throws InterruptedException {
TimeUnit.SECONDS.sleep(10);
return 1;
}
}
Assert.assertThrows(RayTimeoutException.class,
() -> Ray.get(Ray.task(MyRayApp::slowFunction).remote(), 3000));
C++
// Get the value of one object ref.
ray::ObjectRef<int> obj_ref = ray::Put(1);
assert(*obj_ref.Get() == 1);
// Get the values of multiple object refs in parallel.
std::vector<ray::ObjectRef<int>> obj_refs;
for (int i = 0; i < 3; i++) {
obj_refs.emplace_back(ray::Put(i));
}
auto results = ray::Get(obj_refs);
assert(results.size() == 3);
assert(*results[0] == 0);
assert(*results[1] == 1);
assert(*results[2] == 2);
Passing Object Arguments#
Ray object references can be freely passed around a Ray application. This means that they can be passed as arguments to tasks and actor methods, and even stored in other objects. Objects are tracked via distributed reference counting, and their data is automatically freed once all references to the object are deleted.
There are two different ways one can pass an object to a Ray task or method. Depending on the way an object is passed, Ray will decide whether to de-reference the object prior to task execution.
Passing an object as a top-level argument: When an object is passed directly as a top-level argument to a task, Ray will de-reference the object. This means that Ray will fetch the underlying data for all top-level object reference arguments, not executing the task until the object data becomes fully available.
import ray
@ray.remote
def echo(a: int, b: int, c: int):
"""This function prints its input values to stdout."""
print(a, b, c)