import ray

@ray.remote
def echo(a: int, b: int, c: int):
    """This function prints its input values to stdout."""
    print(a, b, c)

# Passing the literal values (1, 2, 3) to `echo`.
echo.remote(1, 2, 3)
# -> prints "1 2 3"

# Put the values (1, 2, 3) into Ray's object store.
a, b, c = ray.put(1), ray.put(2), ray.put(3)

# Passing an object as a top-level argument to `echo`. Ray will de-reference top-level
# arguments, so `echo` will see the literal values (1, 2, 3) in this case as well.
echo.remote(a, b, c)
# -> prints "1 2 3"
Passing an object as a nested argument: When an object is passed within a nested object, for example, within a Python list, Ray will not de-reference it. This means that the task will need to call ray.get() on the reference to fetch the concrete value. However, if the task never calls ray.get(), then the object value never needs to be transferred to the machine the task is running on. We recommend passing objects as top-level arguments where possible, but nested arguments can be useful for passing objects on to other tasks without needing to see the data.
import ray

@ray.remote
def echo_and_get(x_list):  # List[ObjectRef]
    """This function prints its input values to stdout."""
    print("args:", x_list)
    print("values:", ray.get(x_list))

# Put the values (1, 2, 3) into Ray's object store.
a, b, c = ray.put(1), ray.put(2), ray.put(3)
# Passing an object as a nested argument to `echo_and_get`. Ray does not
# de-reference nested args, so `echo_and_get` sees the references.
echo_and_get.remote([a, b, c])
# -> prints args: [ObjectRef(...), ObjectRef(...), ObjectRef(...)]
# values: [1, 2, 3]
The top-level vs not top-level passing convention also applies to actor constructors and actor method calls:
@ray.remote
class Actor:
    def __init__(self, arg):
        pass

    def method(self, arg):
        pass

obj = ray.put(2)

# Examples of passing objects to actor constructors.
actor_handle = Actor.remote(obj)  # by-value
actor_handle = Actor.remote([obj])  # by-reference

# Examples of passing objects to actor method calls.
actor_handle.method.remote(obj)  # by-value
actor_handle.method.remote([obj])  # by-reference
Closure Capture of Objects#
You can also pass objects to tasks via closure-capture. This can be convenient when you have a large object that you want to share verbatim between many tasks or actors, and don’t want to pass it repeatedly as an argument. Be aware however that defining a task that closes over an object ref will pin the object via reference-counting, so the object will not be evicted until the job completes.
import ray

# Put the values (1, 2, 3) into Ray's object store.
a, b, c = ray.put(1), ray.put(2), ray.put(3)

@ray.remote
def print_via_capture():
    """This function prints the values of (a, b, c) to stdout."""
    print(ray.get([a, b, c]))
# Passing object references via closure-capture. Inside the `print_via_capture`
# function, the global object refs (a, b, c) can be retrieved and printed.
print_via_capture.remote()
# -> prints [1, 2, 3]
Nested Objects#
Ray also supports nested object references. This allows you to build composite objects that themselves hold references to further sub-objects.
import ray

# Create an inner object reference.
object_ref = ray.put(1)

# Objects can be nested within each other. Ray will keep the inner object
# alive via reference counting until all outer object references are deleted.
object_ref_2 = ray.put([object_ref])
Fault Tolerance#
Ray can automatically recover from object data loss via lineage reconstruction, but not from owner failure.
See Ray fault tolerance for more details.
More about Ray Objects#
Serialization
Object Spilling
Serialization#
Since Ray processes do not share memory space, data transferred between workers and nodes needs to be serialized and deserialized. Ray uses the Plasma object store to efficiently transfer objects across different processes and different nodes. Numpy arrays in the object store are shared between workers on the same node (zero-copy deserialization).
Overview#
Ray has decided to use a customized Pickle protocol version 5 backport to replace the original PyArrow serializer. This gets rid of several previous limitations (for example, the inability to serialize recursive objects).
Ray is compatible with Pickle protocol version 5, and it supports serialization of a wider range of objects (e.g. lambda and nested functions, dynamic classes) with the help of cloudpickle.
Plasma Object Store#
Plasma is an in-memory object store. It was originally developed as part of Apache Arrow. Prior to Ray's version 1.0.0 release, Ray forked Arrow's Plasma code into Ray's code base in order to disentangle and continue development with respect to Ray's architecture and performance needs.
Plasma is used to efficiently transfer objects across different processes and different nodes. All objects in the Plasma object store are immutable and held in shared memory, so that they can be accessed efficiently by many workers on the same node.
Each node has its own object store. When data is put into the object store, it does not get automatically broadcasted to other nodes. Data remains local to the writer until requested by another task or actor on another node.
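As a minimal sketch of this on-demand transfer behavior (assuming a running multi-node cluster; the payload is arbitrary), an object stays on the node that created it until a task elsewhere asks for it:
import ray

# The object is written to this node's local Plasma store only.
local_ref = ray.put({"payload": list(range(1000))})

@ray.remote
def consume(d):
    # If this task lands on another node, Ray transfers the object
    # to that node's Plasma store on demand.
    return len(d["payload"])

print(ray.get(consume.remote(local_ref)))  # -> 1000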
Serializing ObjectRefs#
Explicitly serializing ObjectRefs using ray.cloudpickle should be used as a last resort. Passing ObjectRefs through Ray task arguments and return values is the recommended approach.
Ray ObjectRefs can be serialized using ray.cloudpickle. The ObjectRef can then be deserialized and accessed with ray.get(). Note that ray.cloudpickle must be used; other pickle tools are not guaranteed to work. Additionally, the process that deserializes the ObjectRef must be part of the same Ray cluster that serialized it.
When serialized, the ObjectRef’s value will remain pinned in Ray’s shared memory object store. The object must be explicitly freed by calling ray._private.internal_api.free(obj_ref).
Warning
ray._private.internal_api.free(obj_ref) is a private API and may be changed in future Ray versions.
This code example demonstrates how to serialize an ObjectRef, store it in external storage, deserialize and use it, and lastly free its object.
import ray
from ray import cloudpickle

FILE = "external_store.pickle"

ray.init()

my_dict = {"hello": "world"}

obj_ref = ray.put(my_dict)
with open(FILE, "wb+") as f:
    cloudpickle.dump(obj_ref, f)

# ObjectRef remains pinned in memory because
# it was serialized with ray.cloudpickle.
del obj_ref

with open(FILE, "rb") as f:
    new_obj_ref = cloudpickle.load(f)

# The deserialized ObjectRef works as expected.
assert ray.get(new_obj_ref) == my_dict

# Explicitly free the object.
ray._private.internal_api.free(new_obj_ref)
Numpy Arrays#
Ray optimizes for numpy arrays by using Pickle protocol 5 with out-of-band data.
The numpy array is stored as a read-only object, and all Ray workers on the same node can read the numpy array in the object store without copying (zero-copy reads). Each numpy array object in the worker process holds a pointer to the relevant array held in shared memory. Any writes to the read-only object will require the user to first copy it into the local process memory.
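A minimal sketch of this behavior: the array a worker receives is a read-only view backed by shared memory (the array size here is arbitrary):
import ray
import numpy as np

arr_ref = ray.put(np.ones(1_000_000))

@ray.remote
def is_writeable(arr):
    # Zero-copy deserialization: `arr` is backed by the shared-memory object store.
    return arr.flags.writeable

print(ray.get(is_writeable.remote(arr_ref)))  # -> False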
Tip
You can often avoid serialization issues by using only native types (e.g., numpy arrays or lists/dicts of numpy arrays and other primitive types), or by using Actors to hold objects that cannot be serialized.
Fixing “assignment destination is read-only”#
Because Ray puts numpy arrays in the object store, when deserialized as arguments in remote functions they will become read-only. For example, the following code snippet will crash:
import ray
import numpy as np

@ray.remote
def f(arr):
    # arr = arr.copy()  # Adding a copy will fix the error.
    arr[0] = 1

try:
    ray.get(f.remote(np.zeros(100)))
except ray.exceptions.RayTaskError as e:
    print(e)
# ray.exceptions.RayTaskError(ValueError): ray::f()
#   File "test.py", line 6, in f
#     arr[0] = 1
# ValueError: assignment destination is read-only
To avoid this issue, you can manually copy the array at the destination if you need to mutate it (arr = arr.copy()). Note that this is effectively like disabling the zero-copy deserialization feature provided by Ray.
Serialization notes#
Ray is currently using Pickle protocol version 5. The default pickle protocol used by most Python distributions is protocol 3. Protocols 4 and 5 are more efficient than protocol 3 for larger objects.
For non-native objects, Ray will always keep a single copy even if it is referred to multiple times within an object:
import ray
import numpy as np
obj = [np.zeros(42)] * 99
l = ray.get(ray.put(obj))
assert l[0] is l[1] # no problem!
Whenever possible, use numpy arrays or Python collections of numpy arrays for maximum performance.
Lock objects are mostly unserializable, because copying a lock is meaningless and could cause serious concurrency problems. You may have to come up with a workaround if your object contains a lock.
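One possible workaround, sketched below under the assumption that the lock can simply be recreated on deserialization, is to drop it from the pickled state via __getstate__/__setstate__ (the Counter class is hypothetical):
import threading
import ray

class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def __getstate__(self):
        state = self.__dict__.copy()
        del state["lock"]  # locks cannot be pickled
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.lock = threading.Lock()  # recreate a fresh lock after deserialization

# Serialization now succeeds because the lock is excluded from the pickled state.
copied = ray.get(ray.put(Counter()))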
Customized Serialization#
Sometimes you may want to customize your serialization process because
the default serializer used by Ray (pickle5 + cloudpickle) does
not work for you (it fails to serialize some objects, is too slow for certain objects, etc.).
There are at least three ways to define your custom serialization process:
If you want to customize the serialization of a type of object
and you have access to the code, you can define a __reduce__
function inside the corresponding class. This is commonly done
by most Python libraries. Example code:
import ray
import sqlite3

class DBConnection:
    def __init__(self, path):
        self.path = path
        self.conn = sqlite3.connect(path)

    # without '__reduce__', the instance is unserializable.
    def __reduce__(self):
        deserializer = DBConnection
        serialized_data = (self.path,)
        return deserializer, serialized_data

original = DBConnection("/tmp/db")
print(original.conn)
copied = ray.get(ray.put(original))
print(copied.conn)
<sqlite3.Connection object at ...>
<sqlite3.Connection object at ...>
If you want to customize the serialization of a type of object,
but you cannot access or modify the corresponding class, you can
register the class with the serializer you use:
import ray
import threading

class A:
    def __init__(self, x):
        self.x = x
        self.lock = threading.Lock()  # could not be serialized!

try:
    ray.get(ray.put(A(1)))  # fail!
except TypeError:
    pass

def custom_serializer(a):
    return a.x

def custom_deserializer(b):
    return A(b)

# Register serializer and deserializer for class A:
ray.util.register_serializer(
    A, serializer=custom_serializer, deserializer=custom_deserializer)

ray.get(ray.put(A(1)))  # success!
# You can deregister the serializer at any time.
ray.util.deregister_serializer(A)
try:
    ray.get(ray.put(A(1)))  # fail!
except TypeError:
    pass

# Nothing happens when deregistering an unavailable serializer.
ray.util.deregister_serializer(A)
NOTE: Serializers are managed locally for each Ray worker, so you need to register the serializer on every Ray worker where you want to use it. Deregistering a serializer also only applies locally.
If you register a new serializer for a class, the new serializer replaces the old one immediately in that worker. This API is also idempotent; re-registering the same serializer has no side effects.
Here is an example of customizing the serialization of a specific object:
import threading
class A:
    def __init__(self, x):
        self.x = x
        self.lock = threading.Lock()  # could not serialize!

try:
    ray.get(ray.put(A(1)))  # fail!
except TypeError:
    pass

class SerializationHelperForA:
    """A helper class for serialization."""

    def __init__(self, a):
        self.a = a

    def __reduce__(self):
        return A, (self.a.x,)

ray.get(ray.put(SerializationHelperForA(A(1))))  # success!
# the serializer only works for a specific object, not all A
# instances, so we still expect failure here.
try:
    ray.get(ray.put(A(1)))  # still fail!
except TypeError:
    pass
Troubleshooting#
Use ray.util.inspect_serializability to identify tricky pickling issues. This function can be used to trace a potential non-serializable object within any Python object – whether it be a function, class, or object instance.
Below, we demonstrate this behavior on a function with a non-serializable object (threading lock):
from ray.util import inspect_serializability
import threading

lock = threading.Lock()

def test():
    print(lock)

inspect_serializability(test, name="test")
The resulting output is:
=============================================================
Checking Serializability of <function test at 0x7ff130697e50>
=============================================================
!!! FAIL serialization: cannot pickle '_thread.lock' object
Detected 1 global variables. Checking serializability...
Serializing 'lock' <unlocked _thread.lock object at 0x7ff1306a9f30>...
!!! FAIL serialization: cannot pickle '_thread.lock' object
WARNING: Did not find non-serializable object in <unlocked _thread.lock object at 0x7ff1306a9f30>. This may be an oversight.
=============================================================
Variable:
FailTuple(lock [obj=<unlocked _thread.lock object at 0x7ff1306a9f30>, parent=<function test at 0x7ff130697e50>])
was found to be non-serializable. There may be multiple other undetected variables that were non-serializable.
Consider either removing the instantiation/imports of these variables or moving the instantiation into the scope of the function/class.
=============================================================
Check https://docs.ray.io/en/master/ray-core/objects/serialization.html#troubleshooting for more information.
If you have any suggestions on how to improve this error message, please reach out to the Ray developers on github.com/ray-project/ray/issues/
=============================================================
For even more detailed information, set the environment variable RAY_PICKLE_VERBOSE_DEBUG='2' before importing Ray. This enables
serialization with a Python-based backend instead of C-Pickle, so you can debug into Python code in the middle of serialization.
However, this makes serialization much slower.
Known Issues#
Users could experience a memory leak when using certain Python 3.8 and 3.9 versions. This is due to a bug in Python's pickle module.
The issue is fixed in Python 3.8.2rc1, Python 3.9.0 alpha 4, and later versions.
Object Spilling#
Ray spills objects to a directory in the local filesystem once the object store is full. By default, Ray
spills objects to the temporary directory (for example, /tmp/ray/session_2025-03-28_00-05-20_204810_2814690).
Spilling to a custom directory#
You can specify a custom directory for spilling objects by setting the
object_spilling_directory parameter in the ray.init function or the
--object-spilling-directory command line option in the ray start command.
Python
ray.init(object_spilling_directory="/path/to/spill/dir")
CLI
ray start --object-spilling-directory=/path/to/spill/dir
For advanced usage and customizations, reach out to the Ray team.
Stats#
When spilling is happening, the following INFO level messages are printed to the Raylet logs (for example, /tmp/ray/session_latest/logs/raylet.out):
local_object_manager.cc:166: Spilled 50 MiB, 1 objects, write throughput 230 MiB/s
local_object_manager.cc:334: Restored 50 MiB, 1 objects, read throughput 505 MiB/s
You can also view cluster-wide spill stats by using the ray memory command:
--- Aggregate object store stats across all nodes ---
Plasma memory usage 50 MiB, 1 objects, 50.0% full
Spilled 200 MiB, 4 objects, avg write throughput 570 MiB/s
Restored 150 MiB, 3 objects, avg read throughput 1361 MiB/s
If you only want to display cluster-wide spill stats, use ray memory --stats-only.
Scheduling#
For each task or actor, Ray chooses a node to run it on. The scheduling decision is based on the following factors.
Resources#
Each task or actor has the specified resource requirements.
Given that, a node can be in one of the following states:
Feasible: the node has the required resources to run the task or actor.
Depending on the current availability of these resources, there are two sub-states:
Available: the node has the required resources and they are free now.
Unavailable: the node has the required resources but they are currently being used by other tasks or actors.
Infeasible: the node doesn’t have the required resources. For example a CPU-only node is infeasible for a GPU task.
Resource requirements are hard requirements, meaning that only feasible nodes are eligible to run the task or actor.
If there are feasible nodes, Ray will either choose an available node or wait for an unavailable node to become available,
depending on other factors discussed below.
If all nodes are infeasible, the task or actor cannot be scheduled until feasible nodes are added to the cluster.
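A minimal sketch of feasible vs. infeasible requirements (the resource numbers below are illustrative only):
import ray

ray.init(num_cpus=2, num_gpus=0)  # a CPU-only cluster for illustration

@ray.remote(num_cpus=1)
def cpu_task():
    return "feasible: runs once a CPU is free"

@ray.remote(num_gpus=1)
def gpu_task():
    return "infeasible on this cluster"

print(ray.get(cpu_task.remote()))  # feasible and available -> runs
pending_ref = gpu_task.remote()    # infeasible: stays pending until a GPU node joins the cluster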
Scheduling Strategies#
Tasks or actors support a scheduling_strategy option to specify the strategy used to decide the best node among feasible nodes.
Currently, the supported strategies are the following.
“DEFAULT”#
"DEFAULT" is the default strategy used by Ray.
Ray schedules tasks or actors onto a group of the top k nodes.
Specifically, the nodes are sorted to first favor those that already have tasks or actors scheduled (for locality),
then to favor those that have low resource utilization (for load balancing).
Within the top k group, nodes are chosen randomly to further improve load-balancing and mitigate delays from cold-start in large clusters.
Implementation-wise, Ray calculates a score for each node in a cluster based on the utilization of its logical resources.
If the utilization is below a threshold (controlled by the OS environment variable RAY_scheduler_spread_threshold, default is 0.5), the score is 0,
otherwise it is the resource utilization itself (score 1 means the node is fully utilized).
Ray selects the best node for scheduling by randomly picking from the top k nodes with the lowest scores.
The value of k is the max of (number of nodes in the cluster * RAY_scheduler_top_k_fraction environment variable) and RAY_scheduler_top_k_absolute environment variable.
By default, it’s 20% of the total number of nodes.
Currently Ray handles actors that don’t require any resources (i.e., num_cpus=0 with no other resources) specially by randomly choosing a node in the cluster without considering resource utilization.
Since nodes are randomly chosen, actors that don’t require any resources are effectively SPREAD across the cluster.
@ray.remote
def func():
    return 1
@ray.remote(num_cpus=1)
class Actor:
    pass

# If unspecified, "DEFAULT" scheduling strategy is used.
func.remote()
actor = Actor.remote()

# Explicitly set scheduling strategy to "DEFAULT".
func.options(scheduling_strategy="DEFAULT").remote()
actor = Actor.options(scheduling_strategy="DEFAULT").remote()

# Zero-CPU (and no other resources) actors are randomly assigned to nodes.
actor = Actor.options(num_cpus=0).remote()
“SPREAD”#
"SPREAD" strategy will try to spread the tasks or actors among available nodes.
@ray.remote(scheduling_strategy="SPREAD")
def spread_func():
    return 2

@ray.remote(num_cpus=1)
class SpreadActor:
    pass

# Spread tasks across the cluster.
[spread_func.remote() for _ in range(10)]
# Spread actors across the cluster.
actors = [SpreadActor.options(scheduling_strategy="SPREAD").remote() for _ in range(10)]
NodeAffinitySchedulingStrategy#
NodeAffinitySchedulingStrategy is a low-level strategy that allows a task or actor to be scheduled onto a particular node specified by its node id.
The soft flag specifies whether the task or actor is allowed to run somewhere else if the specified node doesn’t exist (e.g. if the node dies)
or is infeasible because it does not have the resources required to run the task or actor.
In these cases, if soft is True, the task or actor will be scheduled onto a different feasible node.
Otherwise, the task or actor will fail with TaskUnschedulableError or ActorUnschedulableError.
As long as the specified node is alive and feasible, the task or actor will only run there
regardless of the soft flag. This means if the node currently has no available resources, the task or actor will wait until resources
become available.
This strategy should only be used if other high level scheduling strategies (e.g. placement group) cannot give the desired task or actor placements. It has the following known limitations:
It’s a low-level strategy which prevents optimizations by a smart scheduler.
It cannot fully utilize an autoscaling cluster since node ids must be known when the tasks or actors are created.
It can be difficult to make the best static placement decision
especially in a multi-tenant cluster: for example, an application won’t know what else is being scheduled onto the same nodes.
@ray.remote
def node_affinity_func():
    return ray.get_runtime_context().get_node_id()

@ray.remote(num_cpus=1)
class NodeAffinityActor:
    pass

# Only run the task on the local node.
node_affinity_func.options(
    scheduling_strategy=ray.util.scheduling_strategies.NodeAffinitySchedulingStrategy(
        node_id=ray.get_runtime_context().get_node_id(),
        soft=False,
    )
).remote()
Locality-Aware Scheduling#
By default, Ray prefers available nodes that have large task arguments stored locally,
to avoid transferring data over the network. If there are multiple large task arguments,
the node with the most object bytes local is preferred.
This takes precedence over the "DEFAULT" scheduling strategy,
which means Ray will try to run the task on the locality preferred node regardless of the node resource utilization.
However, if the locality preferred node is not available, Ray may run the task somewhere else.
When other scheduling strategies are specified,
they have higher precedence and data locality is no longer considered.
Note
Locality-aware scheduling only applies to tasks, not actors.
@ray.remote
def large_object_func():
    # Large object is stored in the local object store
    # and available in the distributed memory,
    # instead of returning inline directly to the caller.
    return [1] * (1024 * 1024)
@ray.remote
def small_object_func():
    # Small object is returned inline directly to the caller,
    # instead of storing in the distributed memory.
    return [1]

@ray.remote
def consume_func(data):
    return len(data)

large_object = large_object_func.remote()
small_object = small_object_func.remote()

# Ray will try to run consume_func on the same node
# where large_object_func runs.
consume_func.remote(large_object)

# Ray will try to spread consume_func across the entire cluster
# instead of only running on the node where large_object_func runs.
[
    consume_func.options(scheduling_strategy="SPREAD").remote(large_object)
    for i in range(10)
]

# Ray won't consider locality for scheduling consume_func
# since the argument is small and will be sent to the worker node inline directly.
consume_func.remote(small_object)
More about Ray Scheduling#
Resources
Accelerator Support
Placement Groups
Memory Management
Out-Of-Memory Prevention
Environment Dependencies#
Your Ray application may have dependencies that exist outside of your Ray script. For example:
Your Ray script may import/depend on some Python packages.
Your Ray script may be looking for some specific environment variables to be available.
Your Ray script may import some files outside of the script.
One frequent problem when running on a cluster is that Ray expects these “dependencies” to exist on each Ray node. If these are not present, you may run into issues such as ModuleNotFoundError, FileNotFoundError and so on.
To address this problem, you can (1) prepare your dependencies on the cluster in advance (e.g. using a container image) using the Ray Cluster Launcher, or (2) use Ray’s runtime environments to install them on the fly.
For production usage or non-changing environments, we recommend installing your dependencies into a container image and specifying the image using the Cluster Launcher.
For dynamic environments (e.g. for development and experimentation), we recommend using runtime environments.
Concepts#
Ray Application. A program including a Ray script that calls ray.init() and uses Ray tasks or actors.
Dependencies, or Environment. Anything outside of the Ray script that your application needs to run, including files, packages, and environment variables.
Files. Code files, data files or other files that your Ray application needs to run.
Packages. External libraries or executables required by your Ray application, often installed via pip or conda.
Local machine and Cluster. Usually, you may want to separate the Ray cluster compute machines/pods from the machine/pod that handles and submits the application. You can submit a Ray Job via the Ray Job Submission mechanism, or use ray attach to connect to a cluster interactively. We call the machine submitting the job your local machine.
Job. A Ray job is a single application: it is the collection of Ray tasks, objects, and actors that originate from the same script.
Preparing an environment using the Ray Cluster launcher#
The first way to set up dependencies is to prepare a single environment across the cluster before starting the Ray runtime.
You can build all your files and dependencies into a container image and specify this in your Cluster YAML Configuration.
You can also install packages using setup_commands in the Ray Cluster configuration file (reference); these commands will be run as each node joins the cluster.
Note that for production settings, it is recommended to build any necessary packages into a container image instead.
You can push local files to the cluster using ray rsync_up (reference).
Runtime environments#
Note
This feature requires a full installation of Ray using pip install "ray[default]". This feature is available starting with Ray 1.4.0 and is currently supported on macOS and Linux, with beta support on Windows.
The second way to set up dependencies is to install them dynamically while Ray is running.
A runtime environment describes the dependencies your Ray application needs to run, including files, packages, environment variables, and more.
It is installed dynamically on the cluster at runtime and cached for future use (see Caching and Garbage Collection for details about the lifecycle).
Runtime environments can be used on top of the prepared environment from the Ray Cluster launcher if it was used.
For example, you can use the Cluster launcher to install a base set of packages, and then use runtime environments to install additional packages.
In contrast with the base cluster environment, a runtime environment will only be active for Ray processes. (For example, if using a runtime environment specifying a pip package my_pkg, the statement import my_pkg will fail if called outside of a Ray task, actor, or job.)
runtime_env = {"pip": ["emoji"]}
ray.init(runtime_env=runtime_env)
@ray.remote
def f():
import emoji
return emoji.emojize('Python is :thumbs_up:')
print(ray.get(f.remote()))
Python is 👍
A runtime environment can be described by a Python dict:
runtime_env = {
    "pip": ["emoji"],
    "env_vars": {"TF_WARNINGS": "none"}
}
Alternatively, you can use ray.runtime_env.RuntimeEnv:
from ray.runtime_env import RuntimeEnv

runtime_env = RuntimeEnv(
    pip=["emoji"],
    env_vars={"TF_WARNINGS": "none"}
)
For more examples, jump to the API Reference.
There are two primary scopes for which you can specify a runtime environment:
Per-Job, and
Per-Task/Actor, within a job.
Specifying a Runtime Environment Per-Job#
You can specify a runtime environment for your whole job, whether running a script directly on the cluster, using the Ray Jobs API, or submitting a KubeRay RayJob:
# Option 1: Starting a single-node local Ray cluster or connecting to existing local cluster
ray.init(runtime_env=runtime_env)
# Option 2: Using Ray Jobs API (Python SDK)
from ray.job_submission import JobSubmissionClient
client = JobSubmissionClient("http://<head-node-ip>:8265")
job_id = client.submit_job(
    entrypoint="python my_ray_script.py",
    runtime_env=runtime_env,
)
# Option 3: Using Ray Jobs API (CLI). (Note: can use --runtime-env to pass a YAML file instead of an inline JSON string.)
$ ray job submit --address="http://<head-node-ip>:8265" --runtime-env-json='{"working_dir": "/data/my_files", "pip": ["emoji"]}' -- python my_ray_script.py
# Option 4: Using KubeRay RayJob. You can specify the runtime environment in the RayJob YAML manifest.
# [...]
spec:
  runtimeEnvYAML: |
    pip:
      - requests==2.26.0
      - pendulum==2.1.2
    env_vars:
      KEY: "VALUE"
Warning
Specifying the runtime_env argument in the submit_job or ray job submit call ensures the runtime environment is installed on the cluster before the entrypoint script is run.
If runtime_env is specified from ray.init(runtime_env=...), the runtime env is only applied to all children Tasks and Actors, not the entrypoint script (Driver) itself.
If runtime_env is specified by both ray job submit and ray.init, the runtime environments are merged. See Runtime Environment Specified by Both Job and Driver for more details.
Note
There are two options for when to install the runtime environment:
(1) As soon as the job starts (i.e., as soon as ray.init() is called), the dependencies are eagerly downloaded and installed.
(2) The dependencies are installed only when a task is invoked or an actor is created.
The default is option 1. To change the behavior to option 2, add "eager_install": False to the config of runtime_env.
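For illustration, a minimal sketch of switching to option 2 (the package name is just an example):
import ray

# Install the environment lazily, when the first task or actor needs it,
# instead of eagerly at ray.init() time.
ray.init(runtime_env={
    "pip": ["emoji"],
    "config": {"eager_install": False},
})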
Specifying a Runtime Environment Per-Task or Per-Actor#
You can specify different runtime environments per-actor or per-task using .options() or the @ray.remote decorator:
# Invoke a remote task that will run in a specified runtime environment.
f.options(runtime_env=runtime_env).remote()
# Instantiate an actor that will run in a specified runtime environment.
actor = SomeClass.options(runtime_env=runtime_env).remote()
# Specify a runtime environment in the task definition. Future invocations via
# `g.remote()` will use this runtime environment unless overridden by using
# `.options()` as above.
@ray.remote(runtime_env=runtime_env)
def g():
    pass
# Specify a runtime environment in the actor definition. Future instantiations
# via `MyClass.remote()` will use this runtime environment unless overridden by
# using `.options()` as above.
@ray.remote(runtime_env=runtime_env)
class MyClass:
    pass
This allows you to have actors and tasks running in their own environments, independent of the surrounding environment. (The surrounding environment could be the job’s runtime environment, or the system environment of the cluster.)
Warning
Ray does not guarantee compatibility between tasks and actors with conflicting runtime environments.
For example, if an actor whose runtime environment contains a pip package tries to communicate with an actor with a different version of that package, it can lead to unexpected behavior such as unpickling errors.
Common Workflows#
This section describes some common use cases for runtime environments. These use cases are not mutually exclusive; all of the options described below can be combined in a single runtime environment.
Using Local Files#
Your Ray application might depend on source files or data files.
For a development workflow, these might live on your local machine, but when it comes time to run things at scale, you will need to get them to your remote cluster.
The following simple example explains how to get your local files on the cluster.
import os
import ray

os.makedirs("/tmp/runtime_env_working_dir", exist_ok=True)

with open("/tmp/runtime_env_working_dir/hello.txt", "w") as hello_file:
    hello_file.write("Hello World!")

# Specify a runtime environment for the entire Ray job
ray.init(runtime_env={"working_dir": "/tmp/runtime_env_working_dir"})
# Create a Ray task, which inherits the above runtime env.
@ray.remote
def f():
    # The function will have its working directory changed to its node's
    # local copy of /tmp/runtime_env_working_dir.
    return open("hello.txt").read()

print(ray.get(f.remote()))
Hello World!
Note
The example above is written to run on a local machine, but as for all of these examples, it also works when specifying a Ray cluster to connect to
(e.g., using ray.init("ray://123.456.7.89:10001", runtime_env=...) or ray.init(address="auto", runtime_env=...)).
The specified local directory will automatically be pushed to the cluster nodes when ray.init() is called.
You can also specify files via a remote cloud storage URI; see Remote URIs for details.
If you specify a working_dir, Ray always prepares it first, and it’s present in the creation of other runtime environments in the ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR} environment variable. This sequencing allows pip and conda to reference local files in the working_dir like requirements.txt or environment.yml. See pip and conda sections in API Reference for more details.
Using conda or pip packages#
Your Ray application might depend on Python packages (for example, pendulum or requests) via import statements.
Ray ordinarily expects all imported packages to be preinstalled on every node of the cluster; in particular, these packages are not automatically shipped from your local machine to the cluster or downloaded from any repository.
However, using runtime environments you can dynamically specify packages to be automatically downloaded and installed in a virtual environment for your Ray job, or for specific Ray tasks or actors.
import ray
import requests

# This example runs on a local machine, but you can also do
# ray.init(address=..., runtime_env=...) to connect to a cluster.
ray.init(runtime_env={"pip": ["requests"]})

@ray.remote
def reqs():
    return requests.get("https://www.ray.io/").status_code

print(ray.get(reqs.remote()))
200
You may also specify your pip dependencies either via a Python list or a local requirements.txt file.
Consider specifying a requirements.txt file when your pip install command requires options such as --extra-index-url or --find-links; see https://pip.pypa.io/en/stable/reference/requirements-file-format/# for details.
Alternatively, you can specify a conda environment, either as a Python dictionary or via a local environment.yml file. This conda environment can include pip packages.
For details, head to the API Reference.
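For example, either form below works (the package pins are arbitrary):
import ray

# (1) Inline list of pip requirement specifiers:
ray.init(runtime_env={"pip": ["requests==2.26.0", "pendulum"]})

# (2) Or a path to a requirements.txt on the submitting machine:
# ray.init(runtime_env={"pip": "./requirements.txt"})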
Warning
Since the packages in the runtime_env are installed at runtime, be cautious when specifying conda or pip packages whose installations involve building from source, as this can be slow.
Note
When using the "pip" field, the specified packages will be installed “on top of” the base environment using virtualenv, so existing packages on your cluster will still be importable. By contrast, when using the conda field, your Ray tasks and actors will run in an isolated environment. The conda and pip fields cannot both be used in a single runtime_env.
Note
The ray[default] package itself will automatically be installed in the environment. For the conda field only, if you are using any other Ray libraries (for example, Ray Serve), then you will need to specify the library in the runtime environment (e.g. runtime_env = {"conda": {"dependencies": ["pytorch", "pip", {"pip": ["requests", "ray[serve]"]}]}}.)
Note
conda environments must have the same Python version as the Ray cluster. Do not list ray in the conda dependencies, as it will be automatically installed.
Using uv for package management#
The recommended approach for package management with uv in runtime environments is through uv run.
This method offers several key advantages:
First, it keeps dependencies synchronized between your driver and Ray workers.
Additionally, it provides full support for pyproject.toml including editable
packages. It also allows you to lock package versions using uv lock.
For more details, see the UV scripts documentation as
well as our blog post.
Note
Because this is a new feature, you currently need to set a feature flag:
export RAY_RUNTIME_ENV_HOOK=ray._private.runtime_env.uv_runtime_env_hook.hook
We plan to make it the default after collecting more feedback, and adapting the behavior if necessary.
Create a file pyproject.toml in your working directory like the following:
[project]
name = "test"
version = "0.1"
dependencies = [
"emoji",
"ray",
]
And then a test.py like the following:
import emoji
import ray

@ray.remote
def f():
    return emoji.emojize('Python is :thumbs_up:')

# Execute 1000 copies of f across a cluster.
print(ray.get([f.remote() for _ in range(1000)]))
and run the driver script with uv run test.py. This runs 1000 copies of
the f function across a number of Python worker processes in a Ray cluster.
The emoji dependency, in addition to being available for the main script, is
also available for all worker processes. Also, the source code in the current
working directory is available to all the workers.
This workflow also supports editable packages; for example, you can use
uv add --editable ./path/to/package, where ./path/to/package
must be inside your current working directory so it's available to all
workers.
See here
for an end-to-end example of how to use uv run to run a batch inference workload
with Ray Data.
Using uv in a Ray Job: With the same pyproject.toml and test.py files as above,
you can submit a Ray Job via
ray job submit --working-dir . -- uv run test.py
This command makes sure both the driver and workers of the job run in the uv environment as specified by your pyproject.toml.
Using uv with Ray Serve: With appropriate pyproject.toml and app.py files, you can
run a Ray Serve application with uv run serve run app:main.
Best Practices and Tips:
Use uv lock to generate a lockfile and make sure all your dependencies are frozen, so things won’t change in uncontrolled ways if a new version of a package gets released.
If you have a requirements.txt file, you can use uv add -r requirements.txt to add the dependencies to your pyproject.toml and then use that with uv run.
If your pyproject.toml is in some subdirectory, you can use uv run --project to use it from there.
If you use uv run and want to reset the working directory to something that isn’t the current working directory, use the --directory flag. The Ray uv integration makes sure your working_dir is set accordingly.
Advanced use cases: Under the hood, the uv run support is implemented using a low level runtime environment
plugin called py_executable. It allows you to specify the Python executable (including arguments) that Ray workers will
be started in. In the case of uv, the py_executable is set to uv run with the same parameters that were used to run the
driver. Also, the working_dir runtime environment is used to propagate the working directory of the driver
(including the pyproject.toml) to the workers. This allows uv to set up the right dependencies and environment for the
workers to run in. There are some advanced use cases where you might want to use the py_executable mechanism directly in
your programs:
Applications with heterogeneous dependencies: Ray supports using a different runtime environment for different
tasks or actors. This is useful for deploying different inference engines, models, or microservices in different
Ray Serve deployments
and also for heterogeneous data pipelines in Ray Data. To implement this, you can specify a
different py_executable for each of the runtime environments and use uv run with a different
--project parameter for each. Alternatively, you can use a different working_dir for each environment.
Customizing the command the worker runs in: On the workers, you might want to customize uv with some special
arguments that aren’t used for the driver. Or, you might want to run processes using poetry run, a build system
like bazel, a profiler, or a debugger. In these cases, you can explicitly specify the executable the worker should
run in via py_executable. It could even be a shell script stored in working_dir if you are trying to wrap the worker command in a custom way.
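As a hedged sketch (the executable strings below are illustrative, not a prescribed setup), a py_executable can be set directly in a runtime environment:
import ray

ray.init(runtime_env={
    "working_dir": ".",
    # Run each worker under uv (or swap in a profiler/debugger command instead).
    "py_executable": "uv run --project ./subproject",
})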
Library Development#
Suppose you are developing a library my_module on Ray.
A typical iteration cycle involves:
Making some changes to the source code of my_module
Running a Ray script to test the changes, perhaps on a distributed cluster.
To ensure your local changes show up across all Ray workers and can be imported properly, use the py_modules field.
import ray
import my_module

ray.init("ray://123.456.7.89:10001", runtime_env={"py_modules": [my_module]})

@ray.remote
def test_my_module():
    # No need to import my_module inside this function.
    my_module.test()

ray.get(test_my_module.remote())
API Reference#
The runtime_env is a Python dictionary or a Python class ray.runtime_env.RuntimeEnv including one or more of the following fields:
working_dir (str): Specifies the working directory for the Ray workers. This must either be (1) a local existing directory with total size at most 500 MiB, (2) a local existing zipped file with total unzipped size at most 500 MiB (Note: excludes has no effect), or (3) a URI to a remotely-stored zip file containing the working directory for your job (no file size limit is enforced by Ray). See Remote URIs for details.
The specified directory will be downloaded to each node on the cluster, and Ray workers will be started in their node’s copy of this directory.
Examples
"." # cwd
"/src/my_project"
"/src/my_project.zip"
"s3://path/to/my_dir.zip"
Note: Setting a local directory per-task or per-actor is currently unsupported; it can only be set per-job (i.e., in ray.init()).
Note: If the local directory contains a .gitignore file, the files and paths specified there are not uploaded to the cluster. You can disable this by setting the environment variable RAY_RUNTIME_ENV_IGNORE_GITIGNORE=1 on the machine doing the uploading.
Note: If the local directory contains symbolic links, Ray follows the links and the files they point to are uploaded to the cluster.
py_modules (List[str|module]): Specifies Python modules to be available for import in the Ray workers. (For more ways to specify packages, see also the pip and conda fields below.)
Each entry must be either (1) a path to a local file or directory, (2) a URI to a remote zip or wheel file (see Remote URIs for details), (3) a Python module object, or (4) a path to a local .whl file.
Examples of entries in the list:
"."
"/local_dependency/my_dir_module"
"/local_dependency/my_file_module.py"
"s3://bucket/my_module.zip"
my_module # Assumes my_module has already been imported, e.g. via 'import my_module'
my_module.whl
"s3://bucket/my_module.whl"
The modules will be downloaded to each node on the cluster.
Note: Setting options (1), (3) and (4) per-task or per-actor is currently unsupported; it can only be set per-job (i.e., in ray.init()).
Note: For option (1), if the local directory contains a .gitignore file, the files and paths specified there are not uploaded to the cluster. You can disable this by setting the environment variable RAY_RUNTIME_ENV_IGNORE_GITIGNORE=1 on the machine doing the uploading.
py_executable (str): Specifies the executable used for running the Ray workers. It can include arguments as well. The executable can be
located in the working_dir. This runtime environment is useful to run workers in a custom debugger or profiler as well as to run workers
in an environment set up by a package manager like UV (see here).
Note: py_executable is new functionality and currently experimental. If you have some requirements or run into any problems, raise issues on GitHub.
excludes (List[str]): When used with working_dir or py_modules, specifies a list of files or paths to exclude from being uploaded to the cluster.
This field uses the pattern-matching syntax used by .gitignore files: see https://git-scm.com/docs/gitignore for details.
Note: In accordance with .gitignore syntax, if there is a separator (/) at the beginning or middle (or both) of the pattern, then the pattern is interpreted relative to the level of the working_dir.
In particular, you shouldn’t use absolute paths (e.g. /Users/my_working_dir/subdir/) with excludes; rather, you should use the relative path /subdir/ (written here with a leading / to match only the top-level subdir directory, rather than all directories named subdir at all levels.)
Example: {"working_dir": "/Users/my_working_dir/", "excludes": ["my_file.txt", "/subdir/", "path/to/dir", "*.log"]}
pip (dict | List[str] | str): Either (1) a list of pip requirements specifiers, (2) a string containing the path to a local pip
“requirements.txt” file, or (3) a python dictionary that has three fields: (a) packages (required, List[str]): a list of pip packages,
(b) pip_check (optional, bool): whether to enable pip check at the end of pip install, defaults to False.
(c) pip_version (optional, str): the version of pip; Ray prepends the package name "pip" to the pip_version to form the final requirement string.
The syntax of a requirement specifier is defined in full in PEP 508.
This will be installed in the Ray workers at runtime. Packages in the preinstalled cluster environment will still be available.
To use a library like Ray Serve or Ray Tune, you will need to include "ray[serve]" or "ray[tune]" here.
The Ray version must match that of the cluster.
Example: ["requests==1.0.0", "aiohttp", "ray[serve]"]
Example: "./requirements.txt"
Example: {"packages":["tensorflow", "requests"], "pip_check": False, "pip_version": "==22.0.2;python_version=='3.8.11'"}
When specifying a path to a requirements.txt file, the file must be present on your local machine and it must be a valid absolute path or relative filepath relative to your local current working directory, not relative to the working_dir specified in the runtime_env.
Furthermore, referencing local files within a requirements.txt file isn’t directly supported (e.g., -r ./my-laptop/more-requirements.txt, ./my-pkg.whl). Instead, use the ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR} environment variable in the creation process. For example, use -r ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR}/my-laptop/more-requirements.txt or ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR}/my-pkg.whl to reference local files, while ensuring they’re in the working_dir.
uv (dict | List[str] | str): Alpha version feature. This plugin is the uv pip version of the pip plugin above. If you
are looking for uv run support with pyproject.toml and uv.lock support, use
the uv run runtime environment plugin instead.
Either (1) a list of uv requirements specifiers, (2) a string containing
the path to a local uv "requirements.txt" file, or (3) a python dictionary with the following fields: (a) packages (required, List[str]): a list of uv packages,
(b) uv_version (optional, str): the version of uv; Ray prepends the package name "uv" to the uv_version to form the final requirement string.
(c) uv_check (optional, bool): whether to enable pip check at the end of uv install, defaults to False.
(d) uv_pip_install_options (optional, List[str]): user-provided options for the uv pip install command, defaults to ["--no-cache"].
To override the default options and install without any options, use an empty list [] as the install option value.
The syntax of a requirement specifier is the same as pip requirements.
This will be installed in the Ray workers at runtime. Packages in the preinstalled cluster environment will still be available.
To use a library like Ray Serve or Ray Tune, you will need to include "ray[serve]" or "ray[tune]" here.
The Ray version must match that of the cluster.
Example: ["requests==1.0.0", "aiohttp", "ray[serve]"]
Example: "./requirements.txt"
Example: {"packages":["tensorflow", "requests"], "uv_version": "==0.4.0;python_version=='3.8.11'"}
When specifying a path to a requirements.txt file, the file must be present on your local machine and it must be a valid absolute path or relative filepath relative to your local current working directory, not relative to the working_dir specified in the runtime_env.
Furthermore, referencing local files within a requirements.txt file isn’t directly supported (e.g., -r ./my-laptop/more-requirements.txt, ./my-pkg.whl). Instead, use the ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR} environment variable in the creation process. For example, use -r ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR}/my-laptop/more-requirements.txt or ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR}/my-pkg.whl to reference local files, while ensuring they’re in the working_dir.
conda (dict | str): Either (1) a dict representing the conda environment YAML, (2) a string containing the path to a local
conda “environment.yml” file,
or (3) the name of a local conda environment already installed on each node in your cluster (e.g., "pytorch_p36") or its absolute path (e.g. "/home/youruser/anaconda3/envs/pytorch_p36").
In the first two cases, the Ray and Python dependencies will be automatically injected into the environment to ensure compatibility, so there is no need to manually include them.
The Python and Ray version must match that of the cluster, so you likely should not specify them manually.
Note that the conda and pip keys of runtime_env cannot both be specified at the same time—to use them together, please use conda and add your pip dependencies in the "pip" field in your conda environment.yaml.
Example: {"dependencies": ["pytorch", "torchvision", "pip", {"pip": ["pendulum"]}]}
Example: "./environment.yml"
Example: "pytorch_p36"
Example: "/home/youruser/anaconda3/envs/pytorch_p36"
When specifying a path to an environment.yml file, the file must be present on your local machine and it must be a valid absolute path or a relative filepath relative to your local current working directory, not relative to the working_dir specified in the runtime_env.
Furthermore, referencing local files within an environment.yml file isn't directly supported (e.g., -r ./my-laptop/more-requirements.txt, ./my-pkg.whl). Instead, use the ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR} environment variable in the creation process. For example, use -r ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR}/my-laptop/more-requirements.txt or ${RAY_RUNTIME_ENV_CREATE_WORKING_DIR}/my-pkg.whl to reference local files, while ensuring they're in the working_dir.
env_vars (Dict[str, str]): Environment variables to set. Environment variables already set on the cluster will still be visible to the Ray workers; so there is
no need to include os.environ or similar in the env_vars field.
By default, these environment variables override environment variables of the same name on the cluster.
You can also reference existing environment variables using ${ENV_VAR} to achieve the appending behavior.
If the environment variable doesn’t exist, it becomes an empty string "".
Example: {"OMP_NUM_THREADS": "32", "TF_WARNINGS": "none"}
Example: {"LD_LIBRARY_PATH": "${LD_LIBRARY_PATH}:/home/admin/my_lib"}
Non-existent variable example: {"ENV_VAR_NOT_EXIST": "${ENV_VAR_NOT_EXIST}:/home/admin/my_lib"} -> ENV_VAR_NOT_EXIST=":/home/admin/my_lib".
nsight (Union[str, Dict[str, str]]): specifies the config for the Nsight System Profiler. The value is either (1) “default”, which refers to the default config, or (2) a dict of Nsight System Profiler options and their values.
See here for more details on setup and usage.
Example: "default"
Example: {"stop-on-exit": "true", "t": "cuda,cublas,cudnn", "ftrace": ""}
image_uri (dict): Require a given Docker image. The worker process runs in a container with this image.
- Example: {"image_uri": "anyscale/ray:2.31.0-py39-cpu"}
Note: image_uri is experimental. If you have some requirements or run into any problems, raise issues in github.
config (dict | ray.runtime_env.RuntimeEnvConfig): config for runtime environment. Either a dict or a RuntimeEnvConfig.
Fields:
(1) setup_timeout_seconds: the timeout for runtime environment creation, in seconds.
Example: {"setup_timeout_seconds": 10}
Example: RuntimeEnvConfig(setup_timeout_seconds=10)
(2) eager_install (bool): Indicates whether to install the runtime environment on the cluster at ray.init() time, before the workers are leased. This flag is set to True by default.
If set to False, the runtime environment will be only installed when the first task is invoked or when the first actor is created.
Currently, specifying this option per-actor or per-task is not supported.
Example: {"eager_install": False}
Example: RuntimeEnvConfig(eager_install=False)
Caching and Garbage Collection#
Runtime environment resources on each node (such as conda environments, pip packages, or downloaded working_dir or py_modules directories) will be cached on the cluster to enable quick reuse across different runtime environments within a job. Each field (working_dir, py_modules, etc.) has its own cache whose size defaults to 10 GB. To change this default, you may set the environment variable RAY_RUNTIME_ENV_<field>_CACHE_SIZE_GB on each node in your cluster before starting Ray e.g. export RAY_RUNTIME_ENV_WORKING_DIR_CACHE_SIZE_GB=1.5.
When the cache size limit is exceeded, resources not currently used by any Actor, Task or Job are deleted.
Runtime Environment Specified by Both Job and Driver#
When running an entrypoint script (Driver), the runtime environment can be specified via ray.init(runtime_env=...) or ray job submit --runtime-env (See Specifying a Runtime Environment Per-Job for more details).
If the runtime environment is specified by ray job submit --runtime-env=..., the runtime environments are applied to the entrypoint script (Driver) and all the tasks and actors created from it.
If the runtime environment is specified by ray.init(runtime_env=...), the runtime environments are applied to all the tasks and actors, but not the entrypoint script (Driver) itself.
Since ray job submit submits a Driver (that calls ray.init), sometimes runtime environments are specified by both of them. When both the Ray Job and Driver specify runtime environments, their runtime environments are merged if there’s no conflict.
This means the driver script uses the runtime environment specified by ray job submit, and all the tasks and actors use the merged runtime environment.
Ray raises an exception if the runtime environments conflict.
The runtime_env["env_vars"] of ray job submit --runtime-env=... is merged with the runtime_env["env_vars"] of ray.init(runtime_env=...).
Note that each individual env_vars key is merged.
If the environment variables conflict, Ray raises an exception.
Every other field in the runtime_env will be merged. If any key conflicts, it raises an exception.
Example:
# `ray job submit --runtime_env=...`
{"pip": ["requests", "chess"],
"env_vars": {"A": "a", "B": "b"}}
# ray.init(runtime_env=...)
{"env_vars": {"C": "c"}}
# Driver's actual `runtime_env` (merged with Job's)
{"pip": ["requests", "chess"],
"env_vars": {"A": "a", "B": "b", "C": "c"}}
Conflict Example:
# Example 1, env_vars conflicts
# `ray job submit --runtime_env=...`
{"pip": ["requests", "chess"],
"env_vars": {"C": "a", "B": "b"}}
# ray.init(runtime_env=...)
{"env_vars": {"C": "c"}}
# Ray raises an exception because the "C" env var conflicts.
# Example 2, other field (e.g., pip) conflicts
# `ray job submit --runtime_env=...`
{"pip": ["requests", "chess"]}
# ray.init(runtime_env=...)
{"pip": ["torch"]}
# Ray raises an exception because "pip" conflicts.
You can set the environment variable RAY_OVERRIDE_JOB_RUNTIME_ENV=1
to avoid raising an exception upon a conflict. In this case, the runtime environments
are inherited in the same way as when both a Driver and a Task or Actor specify
runtime environments, with ray job submit acting as the parent and ray.init as the child.
Inheritance#
The runtime environment is inheritable, so it applies to all Tasks and Actors within a Job and all child Tasks and Actors of a Task or Actor once set, unless it is overridden.
If an Actor or Task specifies a new runtime_env, it overrides the parent’s runtime_env (i.e., the parent Actor’s or Task’s runtime_env, or the Job’s runtime_env if the Actor or Task doesn’t have a parent) as follows:
The runtime_env["env_vars"] field will be merged with the runtime_env["env_vars"] field of the parent.
This allows for environment variables set in the parent’s runtime environment to be automatically propagated to the child, even if new environment variables are set in the child’s runtime environment.
Every other field in the runtime_env will be overridden by the child, not merged. For example, if runtime_env["py_modules"] is specified, it will replace the runtime_env["py_modules"] field of the parent.
Example:
# Parent's `runtime_env`
{"pip": ["requests", "chess"],
"env_vars": {"A": "a", "B": "b"}}
# Child's specified `runtime_env`
{"pip": ["torch", "ray[serve]"],
"env_vars": {"B": "new", "C": "c"}}
# Child's actual `runtime_env` (merged with parent's)
{"pip": ["torch", "ray[serve]"],
"env_vars": {"A": "a", "B": "new", "C": "c"}}
Frequently Asked Questions#
Are environments installed on every node?#
If a runtime environment is specified in ray.init(runtime_env=...), then the environment will be installed on every node. See Per-Job for more details.
(Note that by default the runtime environment is installed eagerly on every node in the cluster. If you want to lazily install the runtime environment on demand, set the eager_install option to false: ray.init(runtime_env={..., "config": {"eager_install": False}}).)
When is the environment installed?#
When specified per-job, the environment is installed when you call ray.init() (unless "eager_install": False is set).
When specified per-task or per-actor, the environment is installed when the task is invoked or the actor is instantiated (i.e., when you call my_task.remote() or MyActor.remote()).
See Per-Job and Per-Task/Actor, within a job for more details.
Where are the environments cached?#
Any local files downloaded by the environments are cached at /tmp/ray/session_latest/runtime_resources.
How long does it take to install or to load from cache?#
The installation time mostly consists of the time it takes to run pip install or conda create / conda activate, or to upload/download a working_dir, depending on which runtime_env options you’re using.
This could take seconds or minutes.
On the other hand, loading a runtime environment from the cache should be nearly as fast as the ordinary Ray worker startup time, which is on the order of a few seconds. A new Ray worker is started for every Ray actor or task that requires a new runtime environment.
(Note that loading a cached conda environment could still be slow, since the conda activate command sometimes takes a few seconds.)
You can set setup_timeout_seconds config to avoid the installation hanging for a long time. If the installation is not finished within this time, your tasks or actors will fail to start.
What is the relationship between runtime environments and Docker?#
They can be used independently or together.
A container image can be specified in the Cluster Launcher for large or static dependencies, and runtime environments can be specified per-job or per-task/actor for more dynamic use cases.
The runtime environment will inherit packages, files, and environment variables from the container image.
My runtime_env was installed, but when I log into the node I can’t import the packages.#
The runtime environment is only active for the Ray worker processes; it does not install any packages “globally” on the node.
Remote URIs#
The working_dir and py_modules arguments in the runtime_env dictionary can specify either local path(s) or remote URI(s).
A local path must be a directory path. The directory’s contents will be directly accessed as the working_dir or a py_module.
A remote URI must be a link directly to a zip file or a wheel file (the latter only for py_modules). The zip file must contain only a single top-level directory.
The contents of this directory will be directly accessed as the working_dir or a py_module.
For example, suppose you want to use the contents in your local /some_path/example_dir directory as your working_dir.
If you want to specify this directory as a local path, your runtime_env dictionary should contain:
runtime_env = {..., "working_dir": "/some_path/example_dir", ...}
Suppose instead you want to host your files in your /some_path/example_dir directory remotely and provide a remote URI.
You would need to first compress the example_dir directory into a zip file.
There should be no other files or directories at the top level of the zip file, other than example_dir.
You can use the following command in the Terminal to do this:
cd /some_path
zip -r zip_file_name.zip example_dir
Note that this command must be run from the parent directory of the desired working_dir to ensure that the resulting zip file contains a single top-level directory.
In general, the zip file’s name and the top-level directory’s name can be anything.
The top-level directory’s contents will be used as the working_dir (or py_module).
You can check that the zip file contains a single top-level directory by running the following command in the Terminal:
zipinfo -1 zip_file_name.zip
# example_dir/
# example_dir/my_file_1.txt
# example_dir/subdir/my_file_2.txt
Suppose you upload the compressed example_dir directory to AWS S3 at the S3 URI s3://example_bucket/example.zip.
Your runtime_env dictionary should contain:
runtime_env = {..., "working_dir": "s3://example_bucket/example.zip", ...}
Warning
Check for hidden files and metadata directories in zipped dependencies.
You can inspect a zip file’s contents by running the zipinfo -1 zip_file_name.zip command in the Terminal.
Some zipping methods can cause hidden files or metadata directories to appear in the zip file at the top level.
To avoid this, use the zip -r command directly on the directory you want to compress from its parent directory. For example, if you have a directory structure such as a/b and you want to compress b, issue a command like zip -r b.zip b from the directory a.
If Ray detects more than a single directory at the top level, it will use the entire zip file instead of the top-level directory, which may lead to unexpected behavior.
Currently, three types of remote URIs are supported for hosting working_dir and py_modules packages:
HTTPS: HTTPS refers to URLs that start with https.
These are particularly useful because remote Git providers (e.g. GitHub, Bitbucket, GitLab, etc.) use https URLs as download links for repository archives.
This allows you to host your dependencies on remote Git providers, push updates to them, and specify which dependency versions (i.e. commits) your jobs should use.
To use packages via HTTPS URIs, you must have the smart_open library (you can install it using pip install smart_open).
Example:
runtime_env = {"working_dir": "https://github.com/example_username/example_respository/archive/HEAD.zip"}
S3: S3 refers to URIs starting with s3:// that point to compressed packages stored in AWS S3.
To use packages via S3 URIs, you must have the smart_open and boto3 libraries (you can install them using pip install smart_open and pip install boto3).
Ray does not explicitly pass in any credentials to boto3 for authentication.
boto3 will use your environment variables, shared credentials file, and/or AWS config file to authenticate access.
See the AWS boto3 documentation to learn how to configure these.
Example:
runtime_env = {"working_dir": "s3://example_bucket/example_file.zip"}
GS: GS refers to URIs starting with gs:// that point to compressed packages stored in Google Cloud Storage.
To use packages via GS URIs, you must have the smart_open and google-cloud-storage libraries (you can install them using pip install smart_open and pip install google-cloud-storage).
Ray does not explicitly pass in any credentials to the google-cloud-storage’s Client object.
google-cloud-storage will use your local service account key(s) and environment variables by default.
Follow the steps on Google Cloud Storage’s Getting started with authentication guide to set up your credentials, which allow Ray to access your remote package.
Example:
runtime_env = {"working_dir": "gs://example_bucket/example_file.zip"}
Note that the smart_open, boto3, and google-cloud-storage packages are not installed by default, and it is not sufficient to specify them in the pip section of your runtime_env.
The relevant packages must already be installed on all nodes of the cluster when Ray starts.
Hosting a Dependency on a Remote Git Provider: Step-by-Step Guide#
You can store your dependencies in repositories on a remote Git provider (e.g. GitHub, Bitbucket, GitLab, etc.), and you can periodically push changes to keep them updated.
In this section, you will learn how to store a dependency on GitHub and use it in your runtime environment.
Note
These steps will also be useful if you use another remote Git provider (e.g., Bitbucket, GitLab, etc.).
For simplicity, this section refers to GitHub alone, but you can follow along on your provider.
First, create a repository on GitHub to store your working_dir contents or your py_module dependency.
By default, when you download a zip file of your repository, the zip file will already contain a single top-level directory that holds the repository contents,
so you can directly upload your working_dir contents or your py_module dependency to the GitHub repository.
Once you have uploaded your working_dir contents or your py_module dependency, you need the HTTPS URL of the repository zip file, so you can specify it in your runtime_env dictionary.
You have two options to get the HTTPS URL.
Option 1: Download Zip (quicker to implement, but not recommended for production environments)#
The first option is to use the remote Git provider’s “Download Zip” feature, which provides an HTTPS link that zips and downloads your repository.
This is quick, but it is not recommended because it only allows you to download a zip file of a repository branch’s latest commit.
To find a GitHub URL, navigate to your repository on GitHub, choose a branch, and click the green “Code” drop-down button.
This drops down a menu with three options: “Clone”, which provides HTTPS/SSH links to clone the repository,
“Open with GitHub Desktop”, and “Download ZIP.”
Right-click on “Download ZIP.”
This opens a pop-up near your cursor. Select “Copy Link Address.”
Now your HTTPS link is copied to your clipboard. You can paste it into your runtime_env dictionary.
Warning
Using the HTTPS URL from your Git provider’s “Download as Zip” feature is not recommended if the URL always points to the latest commit.
For instance, using this method on GitHub generates a link that always points to the latest commit on the chosen branch.
By specifying this link in the runtime_env dictionary, your Ray Cluster always uses the chosen branch’s latest commit.
This creates a consistency risk: if you push an update to your remote Git repository while your cluster’s nodes are pulling the repository’s contents,
some nodes may pull the version of your package just before you pushed, and some nodes may pull the version just after.
For consistency, it is better to specify a particular commit, so all the nodes use the same package.
See “Option 2: Manually Create URL” to create a URL pointing to a specific commit.
Option 2: Manually Create URL (slower to implement, but recommended for production environments)#
The second option is to manually create this URL by pattern-matching your specific use case with one of the following examples.
This is recommended because it provides finer-grained control over which repository branch and commit to use when generating your dependency zip file.
These options prevent consistency issues on Ray Clusters (see the warning above for more info).
To create the URL, pick a URL template below that fits your use case, and fill in all parameters in brackets (e.g. [username], [repository], etc.) with the specific values from your repository.
For instance, suppose your GitHub username is example_user, the repository’s name is example_repository, and the desired commit hash is abcdefg.
If example_repository is public and you want to retrieve the abcdefg commit (which matches the first example use case), the URL would be:
runtime_env = {"working_dir": ("https://github.com"
                               "/example_user/example_repository/archive/abcdefg.zip")}
Here is a list of different use cases and corresponding URLs:
Example: Retrieve package from a specific commit hash on a public GitHub repository
runtime_env = {"working_dir": ("https://github.com"
"/[username]/[repository]/archive/[commit hash].zip")}
Example: Retrieve package from a private GitHub repository using a Personal Access Token during development. For production, see this document to learn how to authenticate private dependencies safely.
runtime_env = {"working_dir": ("https://[username]:[personal access token]@github.com"
"/[username]/[private repository]/archive/[commit hash].zip")}
Example: Retrieve package from a public GitHub repository’s latest commit
runtime_env = {"working_dir": ("https://github.com"
"/[username]/[repository]/archive/HEAD.zip")}
Example: Retrieve package from a specific commit hash on a public Bitbucket repository
runtime_env = {"working_dir": ("https://bitbucket.org"
"/[owner]/[repository]/get/[commit hash].tar.gz")}
Tip
It is recommended to specify a particular commit instead of always using the latest commit.
This prevents consistency issues on a multi-node Ray Cluster.
See the warning under “Option 1: Download Zip” for more info.
Once you have specified the URL in your runtime_env dictionary, you can pass the dictionary
into a ray.init() or .options() call. Congratulations! You have now hosted a runtime_env dependency
remotely on GitHub!
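As a hedged sketch, the same dictionary works for a whole job or for a single task (the URL below is a placeholder built from the templates above; the function name is illustrative):
import ray

runtime_env = {"working_dir": "https://github.com/example_user/example_repository/archive/abcdefg.zip"}

# Per-job: every task and actor in the job uses the hosted working_dir.
ray.init(runtime_env=runtime_env)

# Or per-task / per-actor:
@ray.remote
def read_repo_file():
    ...

read_repo_file.options(runtime_env=runtime_env).remote()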
Debugging#
If runtime_env cannot be set up (e.g., network issues, download failures, etc.), Ray fails to schedule tasks/actors
that require the runtime_env. If you call ray.get on them, Ray raises a RuntimeEnvSetupError with
a detailed error message.
import ray
import time
@ray.remote
def f():
pass
@ray.remote
class A:
def f(self):
pass
start = time.time()
bad_env = {"conda": {"dependencies": ["this_doesnt_exist"]}}
# [Tasks] will raise `RuntimeEnvSetupError`.
try:
ray.get(f.options(runtime_env=bad_env).remote())
except ray.exceptions.RuntimeEnvSetupError:
print("Task fails with RuntimeEnvSetupError")
# [Actors] will raise `RuntimeEnvSetupError`.
a = A.options(runtime_env=bad_env).remote()
try:
ray.get(a.f.remote())
except ray.exceptions.RuntimeEnvSetupError:
print("Actor fails with RuntimeEnvSetupError")
Task fails with RuntimeEnvSetupError
Actor fails with RuntimeEnvSetupError
Full logs can always be found in the file runtime_env_setup-[job_id].log for per-actor, per-task and per-job environments, or in
runtime_env_setup-ray_client_server_[port].log for per-job environments when using Ray Client.
You can also enable runtime_env debugging log streaming by setting an environment variable RAY_RUNTIME_ENV_LOG_TO_DRIVER_ENABLED=1 on each node before starting Ray, for example using setup_commands in the Ray Cluster configuration file (reference).
This will print the full runtime_env setup log messages to the driver (the script that calls ray.init()).
Example log output:
ray.init(runtime_env={"pip": ["requests"]})
Resources#
Ray allows you to seamlessly scale your applications from a laptop to a cluster without code changes.
Ray resources are key to this capability.
They abstract away physical machines and let you express your computation in terms of resources,
while the system manages scheduling and autoscaling based on resource requests.
A resource in Ray is a key-value pair where the key denotes a resource name, and the value is a float quantity.
For convenience, Ray has native support for CPU, GPU, and memory resource types; CPU, GPU and memory are called pre-defined resources.
Besides those, Ray also supports custom resources.
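For example, a hedged sketch of requesting pre-defined resources for a task (the quantities and the function name are illustrative):
import ray

# Request 2 logical CPUs, half a GPU, and 1 GiB of memory for each invocation.
@ray.remote(num_cpus=2, num_gpus=0.5, memory=1 * 1024**3)
def train_step():
    ...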
Physical Resources and Logical Resources#
Physical resources are resources that a machine physically has, such as physical CPUs and GPUs,
while logical resources are virtual resources defined by the system.
Ray resources are logical and don’t need to have a 1-to-1 mapping with physical resources.
For example, you can start a Ray head node with 0 logical CPUs via ray start --head --num-cpus=0
even if it physically has eight.
(This signals the Ray scheduler not to schedule any tasks or actors that require logical CPU resources
on the head node, mainly to reserve the head node for running Ray system processes.)
Logical resources are mainly used for admission control during scheduling.
The fact that resources are logical has several implications:
Resource requirements of tasks or actors do NOT impose limits on actual physical resource usage.
For example, Ray doesn’t prevent a num_cpus=1 task from launching multiple threads and using multiple physical CPUs.
It’s your responsibility to make sure tasks or actors use no more resources than specified via resource requirements.
Ray doesn’t provide CPU isolation for tasks or actors.
For example, Ray won’t reserve a physical CPU exclusively and pin a num_cpus=1 task to it.
Ray will let the operating system schedule and run the task instead.
If needed, you can use operating system APIs like sched_setaffinity to pin a task to a physical CPU, as in the sketch after this paragraph.
Ray does provide GPU isolation in the form of visible devices by automatically setting the CUDA_VISIBLE_DEVICES environment variable,
which most ML frameworks will respect for purposes of GPU assignment.
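The following is a hedged, Linux-only sketch of that pinning suggestion (the core IDs are illustrative, and the worker process keeps the affinity for any tasks it runs afterwards):
import os
import ray

@ray.remote(num_cpus=1)
def pinned_task():
    # Restrict this worker process to physical cores 0 and 1.
    os.sched_setaffinity(0, {0, 1})
    # ... do CPU-bound work here ...
    return sorted(os.sched_getaffinity(0))

print(ray.get(pinned_task.remote()))  # -> [0, 1]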
Note
Ray sets the environment variable OMP_NUM_THREADS=<num_cpus> if num_cpus is set on
the task/actor via ray.remote() or task.options()/actor.options().
Ray sets OMP_NUM_THREADS=1 if num_cpus is not specified; this
is done to avoid performance degradation with many workers (issue #6998). You can
also explicitly set OMP_NUM_THREADS to override the value Ray sets by default.
OMP_NUM_THREADS is commonly used in numpy, PyTorch, and Tensorflow to perform multi-threaded
linear algebra. In a multi-worker setting, we want one thread per worker instead of many threads
per worker to avoid contention. Some other libraries may have their own way to configure
parallelism. For example, if you’re using OpenCV, you should manually set the number of
threads using cv2.setNumThreads(num_threads) (set to 0 to disable multi-threading).
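A hedged sketch of explicitly overriding the default (the thread count and function name are illustrative):
import ray

# Ray would set OMP_NUM_THREADS=4 here by default (num_cpus=4); the env_vars
# entry overrides it explicitly.
@ray.remote(num_cpus=4, runtime_env={"env_vars": {"OMP_NUM_THREADS": "2"}})
def linear_algebra():
    import numpy as np
    return np.linalg.norm(np.random.rand(1000, 1000))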
Physical resources vs logical resources#
Custom Resources#
Besides pre-defined resources, you can also specify a Ray node’s custom resources and request them in your tasks or actors.
Some use cases for custom resources:
Your node has special hardware and you can represent it as a custom resource.
Then your tasks or actors can request the custom resource via @ray.remote(resources={"special_hardware": 1})
and Ray will schedule the tasks or actors to the node that has the custom resource.
You can use custom resources as labels to tag nodes, which lets you achieve label-based affinity scheduling.
For example, you can do ray.remote(resources={"custom_label": 0.001}) to schedule tasks or actors to nodes with custom_label custom resource.
For this use case, the actual quantity doesn’t matter, and the convention is to specify a tiny number so that the label resource is
not the limiting factor for parallelism.
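A hedged sketch of both use cases on a single-node cluster (the resource names and quantities are illustrative):
import ray

# Start a local cluster that advertises two custom resources.
ray.init(resources={"special_hardware": 1, "custom_label": 1})

@ray.remote(resources={"special_hardware": 1})
def uses_special_hardware():
    return "ran on a node with special_hardware"

@ray.remote(resources={"custom_label": 0.001})
def label_affinity_task():
    return "ran on a node tagged with custom_label"

print(ray.get([uses_special_hardware.remote(), label_affinity_task.remote()]))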