Retrying failed tasks#

When a worker is executing a task, if the worker dies unexpectedly, either because the process crashed or because the machine failed, Ray will rerun the task until either the task succeeds or the maximum number of retries is exceeded. The default number of retries is 3 and can be overridden by specifying max_retries in the @ray.remote decorator. Specifying -1 allows infinite retries, and 0 disables retries. To override the default number of retries for all tasks submitted, set the OS environment variable RAY_TASK_MAX_RETRIES, e.g., by passing it to your driver script or by using runtime environments.

You can experiment with this behavior by running the following code.

```python
import numpy as np
import os
import ray
import time

ray.init(ignore_reinit_error=True)

@ray.remote(max_retries=1)
def potentially_fail(failure_probability):
    time.sleep(0.2)
    if np.random.random() < failure_probability:
        os._exit(0)
    return 0

for _ in range(3):
    try:
        # If this task crashes, Ray will retry it up to one additional
        # time. If either of the attempts succeeds, the call to ray.get
        # below will return normally. Otherwise, it will raise an
        # exception.
        ray.get(potentially_fail.remote(0.5))
        print('SUCCESS')
    except ray.exceptions.WorkerCrashedError:
        print('FAILURE')
```
When a task returns a result in the Ray object store, it is possible for the resulting object to be lost after the original task has already finished. In these cases, Ray will also try to automatically recover the object by re-executing the task that created it. This can be configured through the same max_retries option described here. See object fault tolerance for more information.

By default, Ray will not retry tasks upon exceptions thrown by application code. However, you may control whether application-level errors are retried, and even which application-level errors are retried, via the retry_exceptions argument. This is False by default. To enable retries upon application-level errors, set retry_exceptions=True to retry upon any exception, or pass a list of retryable exceptions. An example is shown below.

```python
import numpy as np
import os
import ray
import time

ray.init(ignore_reinit_error=True)

class RandomError(Exception):
    pass

@ray.remote(max_retries=1, retry_exceptions=True)
def potentially_fail(failure_probability):
    if failure_probability < 0 or failure_probability > 1:
        raise ValueError(
            "failure_probability must be between 0 and 1, but got: "
            f"{failure_probability}"
        )
    time.sleep(0.2)
    if np.random.random() < failure_probability:
        raise RandomError("Failed!")
    return 0

for _ in range(3):
    try:
        # If this task crashes, Ray will retry it up to one additional
        # time. If either of the attempts succeeds, the call to ray.get
        # below will return normally. Otherwise, it will raise an
        # exception.
        ray.get(potentially_fail.remote(0.5))
        print('SUCCESS')
    except RandomError:
        print('FAILURE')
```
```python
# Provide the exceptions that we want to retry as an allowlist.
retry_on_exception = potentially_fail.options(retry_exceptions=[RandomError])
try:
    # This will fail since we're passing in -1 for the failure_probability,
    # which will raise a ValueError in the task and does not match the
    # RandomError exception that we provided.
    ray.get(retry_on_exception.remote(-1))
except ValueError:
    print("FAILED AS EXPECTED")
else:
    raise RuntimeError("An exception should be raised so this shouldn't be reached.")

# These will retry on the RandomError exception.
for _ in range(3):
    try:
        # If this task crashes, Ray will retry it up to one additional
        # time. If either of the attempts succeeds, the call to ray.get
        # below will return normally. Otherwise, it will raise an
        # exception.
        ray.get(retry_on_exception.remote(0.5))
        print('SUCCESS')
    except RandomError:
        print('FAILURE AFTER RETRIES')
```
Cancelling misbehaving tasks#

If a task is hanging, you may want to cancel the task to continue to make progress. You can do this by calling ray.cancel on an ObjectRef returned by the task. By default, this will send a KeyboardInterrupt to the task’s worker if it is mid-execution. Passing force=True to ray.cancel will force-exit the worker. See the API reference for ray.cancel for more details. Note that currently, Ray will not automatically retry tasks that have been cancelled.

Sometimes, application-level code may cause memory leaks on a worker after repeated task executions, e.g., due to bugs in third-party libraries. To make progress in these cases, you can set the max_calls option in a task’s @ray.remote decorator. Once a worker has executed this many invocations of the given remote function, it will automatically exit. By default, max_calls is set to infinity.
Actor Fault Tolerance#

Actors can fail if the actor process dies, or if the owner of the actor dies. The owner of an actor is the worker that originally created the actor by calling ActorClass.remote(). Detached actors do not have an owner process and are cleaned up when the Ray cluster is destroyed.
Actor process failure#

Ray can automatically restart actors that crash unexpectedly. This behavior is controlled using max_restarts, which sets the maximum number of times that an actor will be restarted. The default value of max_restarts is 0, meaning that the actor won’t be restarted. If set to -1, the actor will be restarted infinitely many times. When an actor is restarted, its state will be recreated by rerunning its constructor. After the specified number of restarts, subsequent actor methods will raise a RayActorError.

By default, actor tasks execute with at-most-once semantics (max_task_retries=0 in the @ray.remote decorator). This means that if an actor task is submitted to an actor that is unreachable, Ray will report the error with RayActorError, a Python-level exception that is thrown when ray.get is called on the future returned by the task. Note that this exception may be thrown even though the task did indeed execute successfully.
For example, this can happen if the actor dies immediately after executing the task. Ray also offers at-least-once execution semantics for actor tasks (max_task_retries=-1 or max_task_retries > 0). This means that if an actor task is submitted to an actor that is unreachable, the system will automatically retry the task. With this option, the system will only throw a RayActorError to the application if one of the following occurs: (1) the actor’s max_restarts limit has been exceeded and the actor cannot be restarted anymore, or (2) the max_task_retries limit has been exceeded for this particular task. Note that if the actor is restarting when a task is submitted, this counts as one retry. The retry limit can be set to infinity with max_task_retries = -1. You can experiment with this behavior by running the following code.
```python
import os
import ray

ray.init()

# This actor kills itself after executing 10 tasks.
@ray.remote(max_restarts=4, max_task_retries=-1)
class Actor:
    def __init__(self):
        self.counter = 0

    def increment_and_possibly_fail(self):
        # Exit after every 10 tasks.
        if self.counter == 10:
            os._exit(0)
        self.counter += 1
        return self.counter

actor = Actor.remote()

# The actor will be reconstructed up to 4 times, so we can execute up to 50
# tasks successfully. The actor is reconstructed by rerunning its constructor.
# Methods that were executing when the actor died will be retried and will not
# raise a `RayActorError`. Retried methods may execute twice, once on the
# failed actor and a second time on the restarted actor.
for _ in range(50):
    counter = ray.get(actor.increment_and_possibly_fail.remote())
    print(counter)  # Prints the sequence 1-10 5 times.
```
```python
# After the actor has been restarted 4 times, all subsequent methods will
# raise a `RayActorError`.
for _ in range(10):
    try:
        counter = ray.get(actor.increment_and_possibly_fail.remote())
        print(counter)  # Unreachable.
    except ray.exceptions.RayActorError:
        print("FAILURE")  # Prints 10 times.
```

For at-least-once actors, the system will still guarantee execution ordering according to the initial submission order. For example, any tasks submitted after a failed actor task will not execute on the actor until the failed actor task has been successfully retried. The system will not attempt to re-execute any tasks that executed successfully before the failure (unless max_task_retries is nonzero and the task is needed for object reconstruction).

Note: For async or threaded actors, tasks might be executed out of order. Upon actor restart, the system will only retry incomplete tasks. Previously completed tasks will not be re-executed.
At-least-once execution is best suited for read-only actors or actors with ephemeral state that does not need to be rebuilt after a failure. For actors that have critical state, the application is responsible for recovering the state, e.g., by taking periodic checkpoints and recovering from the checkpoint upon actor restart.
Actor checkpointing#

max_restarts automatically restarts the crashed actor, but it doesn’t automatically restore application-level state in your actor. Instead, you should manually checkpoint your actor’s state and recover upon actor restart.

For actors that are restarted manually, the actor’s creator should manage the checkpoint and manually restart and recover the actor upon failure. This is recommended if you want the creator to decide when the actor should be restarted and/or if the creator is coordinating actor checkpoints with other execution:

```python
import os
import sys
import ray
import json
import tempfile
import shutil

@ray.remote(num_cpus=1)
class Worker:
    def __init__(self):
        self.state = {"num_tasks_executed": 0}

    def execute_task(self, crash=False):
        if crash:
            sys.exit(1)

        # Execute the task
        # ...

        # Update the internal state
        self.state["num_tasks_executed"] = self.state["num_tasks_executed"] + 1

    # checkpoint and restore, used by the Controller below to save and
    # reload the worker's state across restarts.
    def checkpoint(self):
        return self.state

    def restore(self, state):
        self.state = state
```
```python
class Controller:
    def __init__(self):
        self.worker = Worker.remote()
        self.worker_state = ray.get(self.worker.checkpoint.remote())

    def execute_task_with_fault_tolerance(self):
        i = 0
        while True:
            i = i + 1
            try:
                ray.get(self.worker.execute_task.remote(crash=(i % 2 == 1)))
                # Checkpoint the latest worker state
                self.worker_state = ray.get(self.worker.checkpoint.remote())
                return
            except ray.exceptions.RayActorError:
                print("Actor crashes, restarting...")
                # Restart the actor and restore the state
                self.worker = Worker.remote()
                ray.get(self.worker.restore.remote(self.worker_state))

controller = Controller()
controller.execute_task_with_fault_tolerance()
controller.execute_task_with_fault_tolerance()
assert ray.get(controller.worker.checkpoint.remote())["num_tasks_executed"] == 2
```
Alternatively, if you are using Ray’s automatic actor restart, the actor can checkpoint itself manually and restore from a checkpoint in the constructor:

```python
@ray.remote(max_restarts=-1, max_task_retries=-1)
class ImmortalActor:
    def __init__(self, checkpoint_file):
        self.checkpoint_file = checkpoint_file
        if os.path.exists(self.checkpoint_file):
            # Restore from a checkpoint
            with open(self.checkpoint_file, "r") as f:
                self.state = json.load(f)
        else:
            self.state = {}

    def update(self, key, value):
        import random

        if random.randrange(10) < 5:
            sys.exit(1)

        self.state[key] = value

        # Checkpoint the latest state
        with open(self.checkpoint_file, "w") as f:
            json.dump(self.state, f)

    def get(self, key):
        return self.state[key]
```
```python
checkpoint_dir = tempfile.mkdtemp()
actor = ImmortalActor.remote(os.path.join(checkpoint_dir, "checkpoint.json"))
ray.get(actor.update.remote("1", 1))
ray.get(actor.update.remote("2", 2))
assert ray.get(actor.get.remote("1")) == 1
shutil.rmtree(checkpoint_dir)
```

Note: If the checkpoint is saved to external storage, make sure it’s accessible to the entire cluster since the actor can be restarted on a different node. For example, save the checkpoint to cloud storage (e.g., S3) or a shared directory (e.g., via NFS).
Actor creator failure#

For non-detached actors, the owner of an actor is the worker that created it, i.e. the worker that called ActorClass.remote(). Similar to objects, if the owner of an actor dies, then the actor will also fate-share with the owner. Ray will not automatically recover an actor whose owner is dead, even if it has a nonzero max_restarts.

Since detached actors do not have an owner, they will still be restarted by Ray even if their original creator dies. Detached actors will continue to be automatically restarted until the maximum number of restarts is exceeded, the actor is destroyed, or the Ray cluster is destroyed.

You can try out this behavior in the following code.

```python
import ray
import os
import signal

ray.init()

@ray.remote(max_restarts=-1)
class Actor:
    def ping(self):
        return "hello"
```
```python
@ray.remote
class Parent:
    def generate_actors(self):
        self.child = Actor.remote()
        self.detached_actor = Actor.options(name="actor", lifetime="detached").remote()
        return self.child, self.detached_actor, os.getpid()

parent = Parent.remote()
actor, detached_actor, pid = ray.get(parent.generate_actors.remote())

os.kill(pid, signal.SIGKILL)

try:
    print("actor.ping:", ray.get(actor.ping.remote()))
except ray.exceptions.RayActorError as e:
    print("Failed to submit actor call", e)
# Failed to submit actor call The actor died unexpectedly before finishing this task.
#     class_name: Actor
#     actor_id: 56f541b178ff78470f79c3b601000000
#     namespace: ea8b3596-7426-4aa8-98cc-9f77161c4d5f
# The actor is dead because all references to the actor were removed.
```
```python
try:
    print("detached_actor.ping:", ray.get(detached_actor.ping.remote()))
except ray.exceptions.RayActorError as e:
    print("Failed to submit detached actor call", e)
# detached_actor.ping: hello
```

Force-killing a misbehaving actor#

Sometimes application-level code can cause an actor to hang or leak resources. In these cases, Ray allows you to recover from the failure by manually terminating the actor. You can do this by calling ray.kill on any handle to the actor. Note that it does not need to be the original handle to the actor. If max_restarts is set, you can also allow Ray to automatically restart the actor by passing no_restart=False to ray.kill.

Unavailable actors#

When an actor can’t accept method calls, a ray.get on the method’s returned object reference may raise ActorUnavailableError. This exception indicates the actor isn’t accessible at the moment but may recover after waiting and retrying, for example while the actor is restarting and rerunning its constructor.
Actor method calls are executed at-most-once. When a ray.get() call raises the ActorUnavailableError exception, there’s no guarantee on whether the actor executed the task or not. If the method has side effects, they may or may not be observable. Ray does guarantee that the method won’t be executed twice, unless the actor or the method is configured with retries, as described in the next section.

The actor may or may not recover in subsequent calls. Those calls may raise ActorDiedError if the actor is confirmed dead, ActorUnavailableError if it’s still unreachable, or return values normally if the actor recovered. As a best practice, if the caller gets the ActorUnavailableError error, it should “quarantine” the actor and stop sending traffic to it. It can then periodically ping the actor until it raises ActorDiedError or returns OK.
If a task has max_task_retries > 0 and it received ActorUnavailableError, Ray will retry the task up to max_task_retries times. If the actor is restarting in its constructor, the task retry will fail, consuming one retry count. If there are still retries remaining, Ray will retry again after RAY_task_retry_delay_ms, until all retries are consumed or the actor is ready to accept tasks. If the constructor takes a long time to run, consider increasing max_task_retries or increasing RAY_task_retry_delay_ms.
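The quarantine-and-ping practice above can be sketched as a small helper. This is illustrative code, not a Ray API: `ping` is any zero-argument callable, e.g. `lambda: ray.get(actor.ping.remote())` for an actor with a hypothetical `ping` method, and the exception classes would be `ray.exceptions.ActorUnavailableError` and `ray.exceptions.ActorDiedError`.

```python
import time

def ping_until_resolved(ping, unavailable_exc, died_exc,
                        retry_interval_s=1.0, max_attempts=10):
    """Ping a quarantined actor until it recovers or is confirmed dead.

    Returns True if the actor recovered, False if it died or never
    answered within max_attempts pings.
    """
    for _ in range(max_attempts):
        try:
            ping()
            return True  # the actor answered: lift the quarantine
        except unavailable_exc:
            time.sleep(retry_interval_s)  # still unreachable: keep waiting
        except died_exc:
            return False  # confirmed dead: give up and rebuild the actor
    return False
```

While an actor is quarantined, route requests to other replicas rather than blocking on it.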
Actor method exceptions#

Sometimes you want to retry when an actor method raises exceptions. Use max_task_retries with retry_exceptions to retry. Note that by default, retrying on user-raised exceptions is disabled. To enable it, make sure the method is idempotent, that is, invoking it multiple times should be equivalent to invoking it only once.

You can set retry_exceptions in the @ray.method(retry_exceptions=...) decorator, or via .options(retry_exceptions=...) on the method call. Retry behavior depends on the value you set retry_exceptions to:

- retry_exceptions == False (default): No retries for user exceptions.
- retry_exceptions == True: Ray retries a method on user exception up to max_task_retries times.
- retry_exceptions is a list of exceptions: Ray retries a method on user exception up to max_task_retries times, only if the method raises an exception from these specific classes.
max_task_retries applies to both exceptions and actor crashes. A Ray actor can set this option to apply to all of its methods. A method can also set an overriding option for itself. Ray searches for the first non-default value of max_task_retries in this order:
1. The method call’s value, for example, actor.method.options(max_task_retries=2). Ray ignores this value if you don’t set it.
2. The method definition’s value, for example, @ray.method(max_task_retries=2). Ray ignores this value if you don’t set it.
3. The actor creation call’s value, for example, Actor.options(max_task_retries=2). Ray ignores this value if you didn’t set it.
4. The Actor class definition’s value, for example, the @ray.remote(max_task_retries=2) decorator. Ray ignores this value if you didn’t set it.
5. The default value, 0.

For example, if a method sets max_task_retries=5 and retry_exceptions=True, and the actor sets max_restarts=2, Ray executes the method up to 6 times: once for the initial invocation, and 5 additional retries. The 6 invocations may include 2 actor crashes. After the 6th invocation, a ray.get call to the result Ray ObjectRef raises the exception raised in the last invocation, or ray.exceptions.RayActorError if the actor crashed in the last invocation.
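To make the search order concrete, here is a tiny model of the lookup (illustrative code, not Ray's implementation; "not set" is modeled as None):

```python
def resolve_max_task_retries(method_call=None, method_def=None,
                             actor_call=None, actor_def=None):
    """Return the first explicitly set value, in Ray's search order."""
    for value in (method_call, method_def, actor_call, actor_def):
        if value is not None:
            return value
    return 0  # the default when nothing is set anywhere

# A method-call override wins over everything else.
print(resolve_max_task_retries(method_call=2, actor_def=5))  # → 2
# With nothing set anywhere, the default of 0 applies.
print(resolve_max_task_retries())  # → 0
```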
Object Fault Tolerance#

A Ray object has both data (the value returned when calling ray.get) and metadata (e.g., the location of the value). Data is stored in the Ray object store while the metadata is stored at the object’s owner. The owner of an object is the worker process that creates the original ObjectRef, e.g., by calling f.remote() or ray.put(). Note that this worker is usually a distinct process from the worker that creates the value of the object, except in the case of ray.put.

```python
import ray
import numpy as np

@ray.remote
def large_array():
    return np.zeros(int(1e5))

x = ray.put(1)
# The driver owns x and also creates the value of x.

y = large_array.remote()
# The driver is the owner of y, even though the value may be stored somewhere else.
# If the node that stores the value of y dies, Ray will automatically recover
# it by re-executing the large_array task.
# If the driver dies, anyone still using y will receive an OwnerDiedError.
```
Ray can automatically recover from data loss but not owner failure.

Recovering from data loss#

When an object value is lost from the object store, such as during node failures, Ray will use lineage reconstruction to recover the object. Ray will first automatically attempt to recover the value by looking for copies of the same object on other nodes. If none are found, then Ray will automatically recover the value by re-executing the task that previously created the value. Arguments to the task are recursively reconstructed through the same mechanism.

Lineage reconstruction currently has the following limitations:
- The object, and any of its transitive dependencies, must have been generated by a task (actor or non-actor). This means that objects created by ray.put are not recoverable.
- Tasks are assumed to be deterministic and idempotent. Thus, by default, objects created by actor tasks are not reconstructable. To allow reconstruction of actor task results, set the max_task_retries parameter to a non-zero value (see actor fault tolerance for more details).
- Tasks will only be re-executed up to their maximum number of retries. By default, a non-actor task can be retried up to 3 times and an actor task cannot be retried. This can be overridden with the max_retries parameter for remote functions and the max_task_retries parameter for actors.
- The owner of the object must still be alive (see below).
Lineage reconstruction can cause higher than usual driver memory usage because the driver keeps the descriptions of any tasks that may be re-executed in case of failure. To limit the amount of memory used by lineage, set the environment variable RAY_max_lineage_bytes (default 1GB) to evict lineage if the threshold is exceeded.

To disable lineage reconstruction entirely, set the environment variable RAY_TASK_MAX_RETRIES=0 during ray start or ray.init. With this setting, if there are no copies of an object left, an ObjectLostError will be raised.

Recovering from owner failure#

The owner of an object can die because of node or worker process failure. Currently, Ray does not support recovery from owner failure. In this case, Ray will clean up any remaining copies of the object’s value to prevent a memory leak. Any workers that subsequently try to get the object’s value will receive an OwnerDiedError exception, which can be handled manually.
Understanding ObjectLostErrors#

Ray throws an ObjectLostError to the application when an object cannot be retrieved due to application or system error. This can occur during a ray.get() call or when fetching a task’s arguments, and can happen for a number of reasons. Here is a guide to understanding the root cause for different error types:
- OwnerDiedError: The owner of an object, i.e., the Python worker that first created the ObjectRef via .remote() or ray.put(), has died. The owner stores critical object metadata and an object cannot be retrieved if this process is lost.
- ObjectReconstructionFailedError: This error is thrown if an object, or another object that this object depends on, cannot be reconstructed due to one of the limitations described above.
- ReferenceCountingAssertionError: The object has already been deleted, so it cannot be retrieved. Ray implements automatic memory management through distributed reference counting, so this error should not happen in general. However, there is a known edge case that can produce this error.
- ObjectFetchTimedOutError: A node timed out while trying to retrieve a copy of the object from a remote node. This error usually indicates a system-level bug. The timeout period can be configured using the RAY_fetch_fail_timeout_milliseconds environment variable (default 10 minutes).
Node Fault Tolerance#

A Ray cluster consists of one or more worker nodes, each of which consists of worker processes and system processes (e.g. raylet). One of the worker nodes is designated as the head node and has extra processes like the GCS. Here, we describe node failures and their impact on tasks, actors, and objects.

Worker node failure#

When a worker node fails, all the running tasks and actors will fail and all the objects owned by worker processes of this node will be lost. In this case, the task, actor, and object fault tolerance mechanisms will kick in and try to recover the failures using other worker nodes.

Head node failure#

When a head node fails, the entire Ray cluster fails. To tolerate head node failures, we need to make the GCS fault tolerant so that when we start a new head node we still have all the cluster-level data.
Raylet failure#

When a raylet process fails, the corresponding node will be marked as dead and is treated the same as a node failure. Each raylet is associated with a unique ID, so even if the raylet restarts on the same physical machine, it’ll be treated as a new raylet/node by the Ray cluster.
Pattern: Using nested tasks to achieve nested parallelism
Pattern: Using generators to reduce heap memory usage
Pattern: Using ray.wait to limit the number of pending tasks
Pattern: Using resources to limit the number of concurrently running tasks
Pattern: Using asyncio to run actor methods concurrently
Pattern: Using an actor to synchronize other tasks and actors
Pattern: Using a supervisor actor to manage a tree of actors
Pattern: Using pipelining to increase throughput
Anti-pattern: Returning ray.put() ObjectRefs from a task harms performance and fault tolerance
Anti-pattern: Calling ray.get in a loop harms parallelism
Anti-pattern: Calling ray.get unnecessarily harms performance
Anti-pattern: Processing results in submission order using ray.get increases runtime
Anti-pattern: Fetching too many objects at once with ray.get causes failure
Anti-pattern: Over-parallelizing with too fine-grained tasks harms speedup
Anti-pattern: Redefining the same remote function or class harms performance
Anti-pattern: Passing the same large argument by value repeatedly harms performance
Anti-pattern: Closure capturing large objects harms performance
Anti-pattern: Using global variables to share state between tasks and actors
Anti-pattern: Serialize ray.ObjectRef out of band
Anti-pattern: Forking new processes in application code
GCS Fault Tolerance#

The Global Control Service (GCS) is a server that manages cluster-level metadata. It also provides a handful of cluster-level operations including actor, placement group, and node management. By default, the GCS is not fault tolerant since all the data is stored in-memory, and its failure means that the entire Ray cluster fails. To make the GCS fault tolerant, HA Redis is required. Then, when the GCS restarts, it loads all the data from the Redis instance and resumes regular functions. During the recovery period, the following functions are not available:

- Actor creation, deletion and reconstruction.
- Placement group creation, deletion and reconstruction.
- Resource management.
- Worker node registration.
- Worker process creation.

However, running Ray tasks and actors remain alive, and any existing objects will continue to be available.
Setting up Redis#

KubeRay (officially supported): If you are using KubeRay, refer to the KubeRay docs on GCS Fault Tolerance.

ray start: If you are using ray start to start the Ray head node, set the OS environment variable RAY_REDIS_ADDRESS to the Redis address, and supply the --redis-password flag with the password when calling ray start:

```shell
RAY_REDIS_ADDRESS=redis_ip:port ray start --head --redis-password PASSWORD --redis-username default
```

ray up: If you are using ray up to start the Ray cluster, change the head_start_ray_commands field to add RAY_REDIS_ADDRESS and --redis-password to the ray start command:

```yaml
head_start_ray_commands:
  - ray stop
  - ulimit -n 65536; RAY_REDIS_ADDRESS=redis_ip:port ray start --head --redis-password PASSWORD --redis-username default --port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml --dashboard-host=0.0.0.0
```

Kubernetes: If you are using Kubernetes but not KubeRay, please refer to this doc.
Once the GCS is backed by Redis, when it restarts, it’ll recover the state by reading from Redis. When the GCS is recovering from its failed state, the raylet will try to reconnect to the GCS. If the raylet fails to reconnect to the GCS for more than 60 seconds, the raylet will exit and the corresponding node fails. This timeout threshold can be tuned by the OS environment variable RAY_gcs_rpc_server_reconnect_timeout_s.

If the IP address of the GCS will change after restarts, it’s better to use a qualified domain name and pass it to all raylets at start time. The raylet will resolve the domain name and connect to the correct GCS. You need to ensure that at any time, only one GCS is alive.
Note: GCS fault tolerance with external Redis is officially supported ONLY if you are using KubeRay for Ray Serve fault tolerance. For other cases, you can use it at your own risk, and you need to implement additional mechanisms to detect the failure of the GCS or the head node and restart it.
Pattern: Using nested tasks to achieve nested parallelism#

In this pattern, a remote task can dynamically call other remote tasks (including itself) for nested parallelism. This is useful when sub-tasks can be parallelized. Keep in mind, though, that nested tasks come with their own cost: extra worker processes, scheduling overhead, bookkeeping overhead, etc. To achieve speedup with nested parallelism, make sure each of your nested tasks does significant work. See Anti-pattern: Over-parallelizing with too fine-grained tasks harms speedup for more details.

Example use case#

You want to quick-sort a large list of numbers. By using nested tasks, we can sort the list in a distributed and parallel fashion.
Tree of tasks#

Code example#

```python
import ray
import time
from numpy import random

def partition(collection):
    # Use the last element as the pivot
    pivot = collection.pop()
    greater, lesser = [], []
    for element in collection:
        if element > pivot:
            greater.append(element)
        else:
            lesser.append(element)
    return lesser, pivot, greater

def quick_sort(collection):
    if len(collection) <= 200000:  # magic number
        return sorted(collection)
    else:
        lesser, pivot, greater = partition(collection)
        lesser = quick_sort(lesser)
        greater = quick_sort(greater)
        return lesser + [pivot] + greater
```
```python
@ray.remote
def quick_sort_distributed(collection):
    # Tiny tasks are an antipattern.
    # Thus, in our example we have a "magic number" to
    # toggle when distributed recursion should be used vs
    # when the sorting should be done in place. The rule
    # of thumb is that the duration of an individual task
    # should be at least 1 second.
    if len(collection) <= 200000:  # magic number
        return sorted(collection)
    else:
        lesser, pivot, greater = partition(collection)
        lesser = quick_sort_distributed.remote(lesser)
        greater = quick_sort_distributed.remote(greater)
        return ray.get(lesser) + [pivot] + ray.get(greater)
```
```python
for size in [200000, 4000000, 8000000]:
    print(f"Array size: {size}")
    unsorted = random.randint(1000000, size=(size)).tolist()
    s = time.time()
    quick_sort(unsorted)
    print(f"Sequential execution: {(time.time() - s):.3f}")
    s = time.time()
    ray.get(quick_sort_distributed.remote(unsorted))
    print(f"Distributed execution: {(time.time() - s):.3f}")
    print("--" * 10)

# Outputs:
# Array size: 200000
# Sequential execution: 0.040
# Distributed execution: 0.152
# --------------------
# Array size: 4000000
# Sequential execution: 6.161
# Distributed execution: 5.779
# --------------------
# Array size: 8000000
# Sequential execution: 15.459
# Distributed execution: 11.282
# --------------------
```
We call ray.get() only after both quick_sort_distributed invocations have been submitted. This allows you to maximize parallelism in the workload. See Anti-pattern: Calling ray.get in a loop harms parallelism for more details. Notice in the execution times above that with smaller tasks, the non-distributed version is faster. However, as the task execution time increases, i.e., as the lists to sort grow larger, the distributed version is faster.
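The same threshold intuition can be sketched without Ray, using only the standard library. This is a hypothetical illustration, not Ray code: the names `sort_parallel` and `THRESHOLD`, and the explicit depth cap, are our own. The idea is identical to the "magic number" above — fan out to a pool only while a slice is large enough to justify the overhead, and sort small slices inline.

```python
from concurrent.futures import ThreadPoolExecutor

THRESHOLD = 8  # "magic number": below this, just sort inline


def merge(left, right):
    # Standard two-way merge of sorted lists.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out


def sort_parallel(data, executor, depth=2):
    # Only the top `depth` levels of the recursion fan out to the pool;
    # deeper (smaller) calls run inline to avoid over-parallelizing.
    if depth == 0 or len(data) <= THRESHOLD:
        return sorted(data)
    mid = len(data) // 2
    left = executor.submit(sort_parallel, data[:mid], executor, depth - 1)
    right = sort_parallel(data[mid:], executor, depth - 1)
    return merge(left.result(), right)


with ThreadPoolExecutor(max_workers=4) as pool:
    print(sort_parallel([5, 3, 8, 1, 9, 2, 7, 4, 6, 0], pool))
```

The depth cap plays the same role as the size threshold in the Ray example: it keeps the smallest recursive calls from becoming tasks of their own.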
Pattern: Using generators to reduce heap memory usage#

In this pattern, we use generators in Python to reduce the total heap memory usage during a task. The key idea is that for tasks that return multiple objects, we can return them one at a time instead of all at once. This allows a worker to free the heap memory used by a previous return value before returning the next one.

Example use case#

You have a task that returns multiple large values. Another possibility is a task that returns a single large value, but you want to stream this value through Ray’s object store by breaking it up into smaller chunks.

Using normal Python functions, we can write such a task like this. Here’s an example that returns numpy arrays of size 100MB each:

import numpy as np

@ray.remote
def large_values(num_returns):
    return [
        np.random.randint(np.iinfo(np.int8).max, size=(100_000_000, 1), dtype=np.int8)
        for _ in range(num_returns)
    ]
However, this will require the task to hold all num_returns arrays in heap memory at the same time at the end of the task. If there are many return values, this can lead to high heap memory usage and potentially an out-of-memory error.

We can fix the above example by rewriting large_values as a generator. Instead of returning all values at once as a tuple or list, we can yield one value at a time.

@ray.remote
def large_values_generator(num_returns):
    for i in range(num_returns):
        yield np.random.randint(
            np.iinfo(np.int8).max, size=(100_000_000, 1), dtype=np.int8
        )
        print(f"yielded return value {i}")
Code example#

import sys
import ray
import numpy as np

@ray.remote
def large_values(num_returns):
    return [
        np.random.randint(np.iinfo(np.int8).max, size=(100_000_000, 1), dtype=np.int8)
        for _ in range(num_returns)
    ]

@ray.remote
def large_values_generator(num_returns):
    for i in range(num_returns):
        yield np.random.randint(
            np.iinfo(np.int8).max, size=(100_000_000, 1), dtype=np.int8
        )
        print(f"yielded return value {i}")
# A large enough value (e.g. 100).
num_returns = int(sys.argv[1])

# Worker will likely OOM using normal returns.
print("Using normal functions...")
try:
    ray.get(
        large_values.options(num_returns=num_returns, max_retries=0).remote(
            num_returns
        )[0]
    )
except ray.exceptions.WorkerCrashedError:
    print("Worker failed with normal function")

# Using a generator will allow the worker to finish.
# Note that this will block until the full task is complete, i.e. the
# last yield finishes.
print("Using generators...")
ray.get(
    large_values_generator.options(num_returns=num_returns, max_retries=0).remote(
        num_returns
    )[0]
)
print("Success!")
$ RAY_IGNORE_UNHANDLED_ERRORS=1 python test.py 100
Using normal functions...
... -- A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker...
Worker failed
Using generators...
(large_values_generator pid=373609) yielded return value 0
(large_values_generator pid=373609) yielded return value 1
(large_values_generator pid=373609) yielded return value 2
...
Success!
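The heap-memory effect behind this pattern is plain Python, so it can be reproduced without Ray. The sketch below is our own illustration (the helper names and the 1 MB chunk size are arbitrary): it compares the peak traced memory of materializing every chunk up front in a list against yielding chunks one at a time, where each chunk can be freed before the next is allocated.

```python
import tracemalloc


def make_list(n, size):
    # Builds all chunks up front: peak memory holds every chunk at once.
    return [bytearray(size) for _ in range(n)]


def make_gen(n, size):
    # Yields one chunk at a time: the previous chunk can be freed
    # before the next one is allocated.
    for _ in range(n):
        yield bytearray(size)


def peak_bytes(make_chunks):
    # Measure peak heap usage while consuming and discarding each chunk.
    tracemalloc.start()
    for chunk in make_chunks():
        pass
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak


list_peak = peak_bytes(lambda: make_list(10, 1_000_000))
gen_peak = peak_bytes(lambda: make_gen(10, 1_000_000))
# The list version peaks at roughly all 10 chunks at once; the generator
# version peaks at only a chunk or two.
print(list_peak, gen_peak)
```

Ray's generator tasks add object-store streaming on top of this, but the memory saving itself is exactly this generator behavior.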
Pattern: Using ray.wait to limit the number of pending tasks#

In this pattern, we use ray.wait() to limit the number of pending tasks. If we continuously submit tasks faster than they can be processed, tasks will accumulate in the pending task queue, which can eventually cause OOM. With ray.wait(), we can apply backpressure and limit the number of pending tasks so that the pending task queue won’t grow indefinitely and cause OOM.

Note

If we submit a finite number of tasks, it’s unlikely that we will hit the issue mentioned above since each task only uses a small amount of memory for bookkeeping in the queue. It’s more likely to happen when we have an infinite stream of tasks to run.
Note

This method is meant primarily to limit how many tasks should be in flight at the same time. It can also be used to limit how many tasks can run concurrently, but it is not recommended, as it can hurt scheduling performance. Ray automatically decides task parallelism based on resource availability, so the recommended method for adjusting how many tasks can run concurrently is to modify each task’s resource requirements instead.

Example use case#

You have a worker actor that processes tasks at a rate of X tasks per second and you want to submit tasks to it at a rate lower than X to avoid OOM. For example, Ray Serve uses this pattern to limit the number of pending queries for each worker.

Limit number of pending tasks#

Code example#

Without backpressure:

import ray

ray.init()

@ray.remote
class Actor:
    async def heavy_compute(self):
        # taking a long time...
        # await asyncio.sleep(5)
        return

actor = Actor.remote()
NUM_TASKS = 1000
result_refs = []
# When NUM_TASKS is large enough, this will eventually OOM.
for _ in range(NUM_TASKS):
    result_refs.append(actor.heavy_compute.remote())
ray.get(result_refs)

With backpressure:

MAX_NUM_PENDING_TASKS = 100
result_refs = []
for _ in range(NUM_TASKS):
    if len(result_refs) > MAX_NUM_PENDING_TASKS:
        # update result_refs to only
        # track the remaining tasks.
        ready_refs, result_refs = ray.wait(result_refs, num_returns=1)
        ray.get(ready_refs)
    result_refs.append(actor.heavy_compute.remote())
ray.get(result_refs)
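The same backpressure loop can be sketched with the standard library alone. This is our illustrative analogy, not Ray API: `heavy_compute`, the pool size, and the limits below are made up. `concurrent.futures.wait` with `FIRST_COMPLETED` plays the role of `ray.wait()` — block until at least one in-flight item finishes before submitting the next.

```python
import time
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait


def heavy_compute(i):
    time.sleep(0.01)  # simulate work
    return i


NUM_TASKS = 50
MAX_NUM_PENDING = 8

results = []
pending = set()
with ThreadPoolExecutor(max_workers=4) as pool:
    for i in range(NUM_TASKS):
        if len(pending) >= MAX_NUM_PENDING:
            # Backpressure: block until at least one in-flight task
            # finishes before submitting the next one.
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            results.extend(f.result() for f in done)
        pending.add(pool.submit(heavy_compute, i))
    # Drain whatever is still in flight.
    done, _ = wait(pending)
    results.extend(f.result() for f in done)

print(sorted(results))
```

The shape of the loop is identical to the Ray version: the submission loop only advances when the pending set is below the cap.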
Pattern: Using asyncio to run actor methods concurrently#

By default, a Ray actor runs in a single thread and actor method calls are executed sequentially. This means that a long-running method call blocks all the following ones. In this pattern, we use await to yield control from the long-running method call so other method calls can run concurrently. Normally, control is yielded when the method is doing IO operations, but you can also use await asyncio.sleep(0) to yield control explicitly.

Note

You can also use threaded actors to achieve concurrency.

Example use case#

You have an actor with a long polling method that continuously fetches tasks from the remote store and executes them. You also want to query the number of tasks executed while the long polling method is running. With the default actor, the code will look like this:

import ray

@ray.remote
class TaskStore:
    def get_next_task(self):
        return "task"
@ray.remote
class TaskExecutor:
    def __init__(self, task_store):
        self.task_store = task_store
        self.num_executed_tasks = 0

    def run(self):
        while True:
            task = ray.get(self.task_store.get_next_task.remote())
            self._execute_task(task)

    def _execute_task(self, task):
        # Executing the task
        self.num_executed_tasks = self.num_executed_tasks + 1

    def get_num_executed_tasks(self):
        return self.num_executed_tasks

task_store = TaskStore.remote()
task_executor = TaskExecutor.remote(task_store)
task_executor.run.remote()

try:
    # This will timeout since task_executor.run occupies the entire actor thread
    # and get_num_executed_tasks cannot run.
    ray.get(task_executor.get_num_executed_tasks.remote(), timeout=5)
except ray.exceptions.GetTimeoutError:
    print("get_num_executed_tasks didn't finish in 5 seconds")
This is problematic because the TaskExecutor.run method runs forever and never yields control to run other methods. We can solve this problem by using async actors and using await to yield control:

@ray.remote
class AsyncTaskExecutor:
    def __init__(self, task_store):
        self.task_store = task_store
        self.num_executed_tasks = 0

    async def run(self):
        while True:
            # Here we use await instead of ray.get() to
            # wait for the next task and it will yield
            # the control while waiting.
            task = await self.task_store.get_next_task.remote()
            self._execute_task(task)

    def _execute_task(self, task):
        # Executing the task
        self.num_executed_tasks = self.num_executed_tasks + 1

    def get_num_executed_tasks(self):
        return self.num_executed_tasks
async_task_executor = AsyncTaskExecutor.remote(task_store)
async_task_executor.run.remote()

# We are able to run get_num_executed_tasks while the run method is running.
num_executed_tasks = ray.get(async_task_executor.get_num_executed_tasks.remote())
print(f"num of executed tasks so far: {num_executed_tasks}")

Here, instead of using the blocking ray.get() to get the value of an ObjectRef, we use await so it can yield control while we are waiting for the object to be fetched.
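The control-yielding behavior itself is plain asyncio, so the idea can be demonstrated in a single process without Ray. In this hypothetical sketch (the `Executor` class and `query` coroutine are our stand-ins for the async actor and the status call), the `await` inside the loop hands control back to the event loop, which lets the query run while the loop is still going.

```python
import asyncio


class Executor:
    def __init__(self):
        self.num_executed = 0

    async def run(self, n):
        for _ in range(n):
            self.num_executed += 1
            # `await` yields control to the event loop, so other
            # coroutines (like `query` below) can run between iterations.
            await asyncio.sleep(0)


async def query(executor):
    # Yield once so the running loop can make some progress first.
    await asyncio.sleep(0)
    return executor.num_executed


async def main():
    ex = Executor()
    run_task = asyncio.create_task(ex.run(100))
    # The query is answered while `run` is still looping.
    seen = await query(ex)
    await run_task
    return seen, ex.num_executed


seen, total = asyncio.run(main())
print(f"observed {seen} executed out of {total}")
```

A non-async loop with no `await` would behave like the blocking `TaskExecutor.run` above: `query` could not run until the loop finished.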
Pattern: Using resources to limit the number of concurrently running tasks# In this pattern, we use resources to limit the number of concurrently running tasks. By default, Ray tasks require 1 CPU each and Ray actors require 0 CPU each, so the scheduler limits task concurrency to the available CPUs and actor concurrency to infinite. Tasks that use more than 1 CPU (e.g., via multithreading) may experience slowdown due to interference from concurrent ones, but otherwise are safe to run. However, tasks or actors that use more than their proportionate share of memory may overload a node and cause issues like OOM. If that is the case, we can reduce the number of concurrently running tasks or actors on each node by increasing the amount of resources requested by them. This works because Ray makes sure that the sum of the resource requirements of all of the concurrently running tasks and actors on a given node does not exceed the node’s total resources.
Note

For actor tasks, the number of running actors limits the number of concurrently running actor tasks we can have.

Example use case#

You have a data processing workload that processes each input file independently using Ray remote functions. Since each task needs to load the input data into heap memory and do the processing, running too many of them can cause OOM. In this case, you can use the memory resource to limit the number of concurrently running tasks (usage of other resources like num_cpus can achieve the same goal as well). Note that similar to num_cpus, the memory resource requirement is logical, meaning that Ray will not enforce the physical memory usage of each task if it exceeds this amount.

Code example#

Without limit:

import ray

# Assume this Ray node has 16 CPUs and 16G memory.
ray.init()

@ray.remote
def process(file):
    # Actual work is reading the file and processing the data.
    # Assume it needs to use 2G memory.
    pass
NUM_FILES = 1000
result_refs = []
for i in range(NUM_FILES):
    # By default, the process task will use 1 CPU resource and no other resources.
    # This means 16 tasks can run concurrently
    # and will OOM since 32G memory is needed while the node only has 16G.
    result_refs.append(process.remote(f"{i}.csv"))
ray.get(result_refs)

With limit:

result_refs = []
for i in range(NUM_FILES):
    # Now each task will use 2G memory resource
    # and the number of concurrently running tasks is limited to 8.
    # In this case, setting num_cpus to 2 has the same effect.
    result_refs.append(
        process.options(memory=2 * 1024 * 1024 * 1024).remote(f"{i}.csv")
    )
ray.get(result_refs)
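The concurrency arithmetic in those comments can be checked directly. Ray admits tasks on a node only while the sum of their logical resource requests fits within the node's total, so per-node concurrency for a single bottleneck resource is just an integer division (the numbers below are the illustrative ones from the example, not anything Ray computes for you):

```python
GiB = 1024**3

node_memory = 16 * GiB  # node's logical memory resource
task_memory = 2 * GiB   # per-task `memory` request

# Ray keeps sum(task requests) <= node total, so at most:
max_concurrent_tasks = node_memory // task_memory
print(max_concurrent_tasks)  # → 8

# With the default of 1 CPU per task on a 16-CPU node, 16 tasks would run
# at once, needing 16 * 2 GiB of physical memory on a 16 GiB node (OOM).
peak_demand_gib = 16 * 2
print(peak_demand_gib)  # → 32
```

Setting num_cpus=2 instead of memory=2 GiB gives the same limit here only because 16 CPUs / 2 = 16 GiB / 2 GiB = 8.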
Pattern: Using an actor to synchronize other tasks and actors#

When you have multiple tasks that need to wait on some condition or otherwise need to synchronize across tasks & actors on a cluster, you can use a central actor to coordinate among them.

Example use case#

You can use an actor to implement a distributed asyncio.Event that multiple tasks can wait on.

Code example#

import asyncio

import ray

# We set num_cpus to zero because this actor will mostly just block on I/O.
@ray.remote(num_cpus=0)
class SignalActor:
    def __init__(self):
        self.ready_event = asyncio.Event()

    def send(self, clear=False):
        self.ready_event.set()
        if clear:
            self.ready_event.clear()

    async def wait(self, should_wait=True):
        if should_wait:
            await self.ready_event.wait()
@ray.remote
def wait_and_go(signal):
    ray.get(signal.wait.remote())
    print("go!")

signal = SignalActor.remote()
tasks = [wait_and_go.remote(signal) for _ in range(4)]
print("ready...")
# Tasks will all be waiting for the signals.
print("set..")
ray.get(signal.send.remote())

# Tasks are unblocked.
ray.get(tasks)

# Output is:
# ready...
# set..
# (wait_and_go pid=77366) go!
# (wait_and_go pid=77372) go!
# (wait_and_go pid=77367) go!
# (wait_and_go pid=77358) go!
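SignalActor is essentially a cluster-wide counterpart of a local event. For intuition, here is the same rendezvous in a single process using threading.Event — an illustrative analogy of ours, not a replacement for the actor (threads on one machine rather than tasks across a cluster):

```python
import threading

ready = threading.Event()
results = []
lock = threading.Lock()


def wait_and_go(i):
    ready.wait()  # blocks until the event is set, like signal.wait.remote()
    with lock:
        results.append(i)


threads = [threading.Thread(target=wait_and_go, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
print("ready...")
ready.set()  # unblock all waiters at once, like signal.send.remote()
for t in threads:
    t.join()
print(sorted(results))
```

The actor version replaces the shared in-process `Event` with a single actor that every task can reach by handle, which is what makes the pattern work across machines.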
Pattern: Using a supervisor actor to manage a tree of actors#

Actor supervision is a pattern in which a supervising actor manages a collection of worker actors. The supervisor delegates tasks to subordinates and handles their failures. This pattern simplifies the driver since it manages only a few supervisors and does not deal with failures from worker actors directly. Furthermore, multiple supervisors can act in parallel to parallelize more work.

Tree of actors#

Note

If the supervisor (or the driver) dies, the worker actors are automatically terminated thanks to actor reference counting.

Actors can be nested to multiple levels to form a tree.

Example use case#

You want to do data parallel training and train the same model with different hyperparameters in parallel. For each hyperparameter, you can launch a supervisor actor to do the orchestration and it will create worker actors to do the actual training per data shard.
Note

For data parallel training and hyperparameter tuning, it’s recommended to use Ray Train (DataParallelTrainer and Ray Tune’s Tuner), which applies this pattern under the hood.

Code example#

import ray

@ray.remote(num_cpus=1)
class Trainer:
    def __init__(self, hyperparameter, data):
        self.hyperparameter = hyperparameter
        self.data = data

    # Train the model on the given training data shard.
    def fit(self):
        return self.data * self.hyperparameter

@ray.remote(num_cpus=1)
class Supervisor:
    def __init__(self, hyperparameter, data):
        self.trainers = [Trainer.remote(hyperparameter, d) for d in data]

    def fit(self):
        # Train with different data shards in parallel.
        return ray.get([trainer.fit.remote() for trainer in self.trainers])
data = [1, 2, 3]
supervisor1 = Supervisor.remote(1, data)
supervisor2 = Supervisor.remote(2, data)

# Train with different hyperparameters in parallel.
model1 = supervisor1.fit.remote()
model2 = supervisor2.fit.remote()
assert ray.get(model1) == [1, 2, 3]
assert ray.get(model2) == [2, 4, 6]
Pattern: Using pipelining to increase throughput#

If you have multiple work items and each requires several steps to complete, you can use the pipelining technique to improve the cluster utilization and increase the throughput of your system.

Note

Pipelining is an important technique to improve performance and is heavily used by Ray libraries. See Ray Data as an example.

Example use case#

A component of your application needs to do both compute-intensive work and communicate with other processes. Ideally, you want to overlap computation and communication to saturate the CPU and increase the overall throughput.

Code example#

import ray

@ray.remote
class WorkQueue:
    def __init__(self):
        self.queue = list(range(10))

    def get_work_item(self):
        if self.queue:
            return self.queue.pop(0)
        else:
            return None
@ray.remote
class WorkerWithoutPipelining:
    def __init__(self, work_queue):
        self.work_queue = work_queue

    def process(self, work_item):
        print(work_item)

    def run(self):
        while True:
            # Get work from the remote queue.
            work_item = ray.get(self.work_queue.get_work_item.remote())

            if work_item is None:
                break

            # Do work.
            self.process(work_item)

@ray.remote
class WorkerWithPipelining:
    def __init__(self, work_queue):
        self.work_queue = work_queue

    def process(self, work_item):
        print(work_item)

    def run(self):
        self.work_item_ref = self.work_queue.get_work_item.remote()

        while True:
            # Get work from the remote queue.
            work_item = ray.get(self.work_item_ref)

            if work_item is None:
                break

            self.work_item_ref = self.work_queue.get_work_item.remote()
            # Do work while we are fetching the next work item.
            self.process(work_item)

work_queue = WorkQueue.remote()
worker_without_pipelining = WorkerWithoutPipelining.remote(work_queue)
ray.get(worker_without_pipelining.run.remote())

work_queue = WorkQueue.remote()
worker_with_pipelining = WorkerWithPipelining.remote(work_queue)
ray.get(worker_with_pipelining.run.remote())

In the example above, a worker actor pulls work off of a queue and then does some computation on it. Without pipelining, we call ray.get() immediately after requesting a work item, so we block while that RPC is in flight, causing idle CPU time. With pipelining, we instead preemptively request the next work item before processing the current one, so we can use the CPU while the RPC is in flight, which increases CPU utilization.
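The prefetch trick is independent of Ray: any future-returning call can stand in for the queue RPC. In this hypothetical sketch (`fetch`, the sentinel `None`, and the single-worker pool are our own choices), a thread-pool future for the *next* item is requested before the *current* item is processed, so the simulated RPC overlaps with the processing step:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def fetch(i):
    # Simulate the RPC to the remote work queue.
    time.sleep(0.01)
    return i if i < 5 else None  # None marks the end of the queue


def run_pipelined():
    processed = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        next_item = pool.submit(fetch, 0)  # kick off the first fetch
        i = 0
        while True:
            item = next_item.result()
            if item is None:
                break
            i += 1
            # Prefetch the next item before processing the current one...
            next_item = pool.submit(fetch, i)
            # ...so this "work" overlaps with the fetch in flight.
            processed.append(item)
    return processed


print(run_pipelined())
```

The structure mirrors WorkerWithPipelining.run exactly: request, then resolve the previous request, then process while the new request is outstanding.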
Anti-pattern: Returning ray.put() ObjectRefs from a task harms performance and fault tolerance#

It disallows inlining small return values: Ray has a performance optimization that returns small (<= 100KB) values inline directly to the caller, avoiding going through the distributed object store. On the other hand, ray.put() will unconditionally store the value to the object store, which makes this optimization for small return values impossible. Returning ObjectRefs also involves an extra distributed reference counting protocol, which is slower than returning the values directly.

It’s less fault tolerant: the worker process that calls ray.put() is the “owner” of the returned ObjectRef, and the return value fate-shares with the owner. If the worker process dies, the return value is lost. In contrast, the caller process (often the driver) is the owner of the return value if it’s returned directly.

Code example#

If you want to return a single value, regardless of whether it’s small or large, you should return it directly.

import ray
import numpy as np
@ray.remote
def task_with_single_small_return_value_bad():
    small_return_value = 1
    # The value will be stored in the object store
    # and the reference is returned to the caller.
    small_return_value_ref = ray.put(small_return_value)
    return small_return_value_ref

@ray.remote
def task_with_single_small_return_value_good():
    small_return_value = 1
    # Ray will return the value inline to the caller
    # which is faster than the previous approach.
    return small_return_value

assert ray.get(ray.get(task_with_single_small_return_value_bad.remote())) == ray.get(
    task_with_single_small_return_value_good.remote()
)

@ray.remote
def task_with_single_large_return_value_bad():
    large_return_value = np.zeros(10 * 1024 * 1024)
    large_return_value_ref = ray.put(large_return_value)
    return large_return_value_ref
@ray.remote
def task_with_single_large_return_value_good():
    # Both approaches will store the large array to the object store
    # but this is better since it's faster and more fault tolerant.
    large_return_value = np.zeros(10 * 1024 * 1024)
    return large_return_value

assert np.array_equal(
    ray.get(ray.get(task_with_single_large_return_value_bad.remote())),
    ray.get(task_with_single_large_return_value_good.remote()),
)

# Same thing applies for actor tasks as well.
@ray.remote
class Actor:
    def task_with_single_return_value_bad(self):
        single_return_value = np.zeros(9 * 1024 * 1024)
        return ray.put(single_return_value)

    def task_with_single_return_value_good(self):
        return np.zeros(9 * 1024 * 1024)

actor = Actor.remote()
assert np.array_equal(
    ray.get(ray.get(actor.task_with_single_return_value_bad.remote())),
    ray.get(actor.task_with_single_return_value_good.remote()),
)
If you want to return multiple values and you know the number of returns before calling the task, you should use the num_returns option.

# This will return a single object
# which is a tuple of two ObjectRefs to the actual values.
@ray.remote(num_returns=1)
def task_with_static_multiple_returns_bad1():
    return_value_1_ref = ray.put(1)
    return_value_2_ref = ray.put(2)
    return (return_value_1_ref, return_value_2_ref)

# This will return two objects each of which is an ObjectRef to the actual value.
@ray.remote(num_returns=2)
def task_with_static_multiple_returns_bad2():
    return_value_1_ref = ray.put(1)
    return_value_2_ref = ray.put(2)
    return (return_value_1_ref, return_value_2_ref)
# This will return two objects each of which is the actual value.
@ray.remote(num_returns=2)
def task_with_static_multiple_returns_good():
    return_value_1 = 1
    return_value_2 = 2
    return (return_value_1, return_value_2)

assert (
    ray.get(ray.get(task_with_static_multiple_returns_bad1.remote())[0])
    == ray.get(ray.get(task_with_static_multiple_returns_bad2.remote()[0]))
    == ray.get(task_with_static_multiple_returns_good.remote()[0])
)

@ray.remote
class Actor:
    @ray.method(num_returns=1)
    def task_with_static_multiple_returns_bad1(self):
        return_value_1_ref = ray.put(1)
        return_value_2_ref = ray.put(2)
        return (return_value_1_ref, return_value_2_ref)

    @ray.method(num_returns=2)
    def task_with_static_multiple_returns_bad2(self):
        return_value_1_ref = ray.put(1)
        return_value_2_ref = ray.put(2)
        return (return_value_1_ref, return_value_2_ref)
    @ray.method(num_returns=2)
    def task_with_static_multiple_returns_good(self):
        # This is faster and more fault tolerant.
        return_value_1 = 1
        return_value_2 = 2
        return (return_value_1, return_value_2)

actor = Actor.remote()
assert (
    ray.get(ray.get(actor.task_with_static_multiple_returns_bad1.remote())[0])
    == ray.get(ray.get(actor.task_with_static_multiple_returns_bad2.remote()[0]))
    == ray.get(actor.task_with_static_multiple_returns_good.remote()[0])
)

If you don’t know the number of returns before calling the task, you should use the dynamic generator pattern if possible.

@ray.remote(num_returns=1)
def task_with_dynamic_returns_bad(n):
    return_value_refs = []
    for i in range(n):
        return_value_refs.append(ray.put(np.zeros(i * 1024 * 1024)))
    return return_value_refs

@ray.remote(num_returns="dynamic")
def task_with_dynamic_returns_good(n):
    for i in range(n):
        yield np.zeros(i * 1024 * 1024)
Anti-pattern: Calling ray.get in a loop harms parallelism# TLDR: Avoid calling ray.get() in a loop since it’s a blocking call; use ray.get() only for the final result. A call to ray.get() fetches the results of remotely executed functions. However, it is a blocking call, which means that it always waits until the requested result is available. If you call ray.get() in a loop, the loop will not continue to run until the call to ray.get() is resolved. If you also spawn the remote function calls in the same loop, you end up with no parallelism at all, as you wait for the previous function call to finish (because of ray.get()) and only spawn the next call in the next iteration of the loop.
The solution here is to separate the call to ray.get() from the call to the remote functions. That way all remote functions are spawned before we wait for the results and can run in parallel in the background. Additionally, you can pass a list of object references to ray.get() instead of calling it one by one to wait for all of the tasks to finish.
Code example#

import ray

ray.init()

@ray.remote
def f(i):
    return i

# Anti-pattern: no parallelism due to calling ray.get inside of the loop.
sequential_returns = []
for i in range(100):
    sequential_returns.append(ray.get(f.remote(i)))

# Better approach: parallelism because the tasks are executed in parallel.
refs = []
for i in range(100):
    refs.append(f.remote(i))
parallel_returns = ray.get(refs)

Calling ray.get() in a loop#

When calling ray.get() right after scheduling the remote work, the loop blocks until the result is received. We thus end up with sequential processing. Instead, we should first schedule all remote calls, which are then processed in parallel. After scheduling the work, we can then request all the results at once.

Other ray.get() related anti-patterns are:

Anti-pattern: Calling ray.get unnecessarily harms performance
Anti-pattern: Processing results in submission order using ray.get increases runtime
Anti-pattern: Calling ray.get unnecessarily harms performance#

TLDR: Avoid calling ray.get() unnecessarily for intermediate steps. Work with object references directly, and only call ray.get() at the end to get the final result.

When ray.get() is called, objects must be transferred to the worker/node that calls ray.get(). If you don’t need to manipulate the object, you probably don’t need to call ray.get() on it!

Typically, it’s best practice to wait as long as possible before calling ray.get(), or even design your program to avoid having to call ray.get() at all.

Code example#

Anti-pattern:

import ray
import numpy as np

ray.init()

@ray.remote
def generate_rollout():
    return np.ones((10000, 10000))

@ray.remote
def reduce(rollout):
    return np.sum(rollout)

# `ray.get()` downloads the result here.
rollout = ray.get(generate_rollout.remote())
# Now we have to reupload `rollout`
reduced = ray.get(reduce.remote(rollout))
Better approach:

# Don't need ray.get here.
rollout_obj_ref = generate_rollout.remote()
# Rollout object is passed by reference.
reduced = ray.get(reduce.remote(rollout_obj_ref))

Notice that in the anti-pattern example, we call ray.get(), which forces us to transfer the large rollout to the driver, then again to the reduce worker. In the fixed version, we only pass the reference to the object to the reduce task. The reduce worker will implicitly call ray.get() to fetch the actual rollout data directly from the generate_rollout worker, avoiding the extra copy to the driver.

Other ray.get() related anti-patterns are:

Anti-pattern: Calling ray.get in a loop harms parallelism
Anti-pattern: Processing results in submission order using ray.get increases runtime
Anti-pattern: Processing results in submission order using ray.get increases runtime#

TLDR: Avoid processing independent results in submission order using ray.get() since results may be ready in a different order than the submission order.

A batch of tasks is submitted, and we need to process their results individually once they’re done. If each task takes a different amount of time to finish and we process results in submission order, we may waste time waiting for all of the slower (straggler) tasks that were submitted earlier to finish while later faster tasks have already finished. Instead, we want to process the tasks in the order that they finish using ray.wait() to speed up the total time to completion.
Processing results in submission order vs completion order#

Code example#

import random
import time
import ray

ray.init()

@ray.remote
def f(i):
    time.sleep(random.random())
    return i

# Anti-pattern: process results in the submission order.
sum_in_submission_order = 0
refs = [f.remote(i) for i in range(100)]
for ref in refs:
    # Blocks until this ObjectRef is ready.
    result = ray.get(ref)
    # process result
    sum_in_submission_order = sum_in_submission_order + result

# Better approach: process results in the completion order.
sum_in_completion_order = 0
refs = [f.remote(i) for i in range(100)]
unfinished = refs
while unfinished:
    # Returns the first ObjectRef that is ready.
    finished, unfinished = ray.wait(unfinished, num_returns=1)
    result = ray.get(finished[0])
    # process result
    sum_in_completion_order = sum_in_completion_order + result

Other ray.get() related anti-patterns are:

Anti-pattern: Calling ray.get in a loop harms parallelism
Anti-pattern: Calling ray.get unnecessarily harms performance
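The completion-order loop has a direct standard-library counterpart, which may make the contrast easier to see outside Ray. This is our illustrative analogy (the task function, pool size, and sleep durations are made up): `concurrent.futures.as_completed` plays the role of repeated `ray.wait()` calls, handing back each future as soon as it finishes rather than in submission order.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed


def f(i):
    time.sleep(random.random() / 50)  # tasks finish in unpredictable order
    return i


with ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(f, i) for i in range(20)]
    total = 0
    completion_order = []
    for fut in as_completed(futures):  # yields futures as they finish
        result = fut.result()
        completion_order.append(result)
        total += result  # process each result as soon as it is ready

print(total)
```

Iterating over `futures` directly and calling `result()` on each would reproduce the anti-pattern: a slow early task would stall processing of faster later ones.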