Specifying Node Resources#
By default, Ray nodes start with pre-defined CPU, GPU, and memory resources. The quantities of these logical resources on each node are set to the physical quantities auto-detected by Ray.
By default, logical resources are configured by the following rules.
Warning
Ray does not permit dynamic updates of resource capacities after Ray has been started on a node.
Number of logical CPUs (``num_cpus``): Set to the number of CPUs of the machine/container.
Number of logical GPUs (``num_gpus``): Set to the number of GPUs of the machine/container.
Memory (``memory``): Set to 70% of "available memory" when the Ray runtime starts.
Object Store Memory (``object_store_memory``): Set to 30% of "available memory" when the Ray runtime starts. Note that object store memory is not a logical resource, and users cannot use it for scheduling.
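The 70/30 split described above can be sketched in plain Python. This is an illustration of the documented rule only, not Ray's actual startup code, and `default_memory_split` is a hypothetical helper name:

```python
# Sketch of the documented default memory split: 70% of available memory
# becomes the logical "memory" resource and 30% becomes the object store.
# Illustrative only; not Ray's implementation.

def default_memory_split(available_bytes: int) -> dict:
    return {
        "memory": int(round(available_bytes * 0.7)),
        "object_store_memory": int(round(available_bytes * 0.3)),
    }

# A node with 10 GB of available memory at startup:
split = default_memory_split(10_000_000_000)
print(split)
```

Remember that only `memory` is schedulable; `object_store_memory` is accounted for but cannot be requested by tasks or actors.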
However, you can always override that by manually specifying the quantities of pre-defined resources and adding custom resources.
There are several ways to do that depending on how you start the Ray cluster:
ray.init()
If you are using ray.init() to start a single node Ray cluster, you can do the following to manually specify node resources:
# This will start a Ray node with 3 logical cpus, 4 logical gpus,
# 1 special_hardware resource and 1 custom_label resource.
ray.init(num_cpus=3, num_gpus=4, resources={"special_hardware": 1, "custom_label": 1})
ray start
If you are using ray start to start a Ray node, you can run:
ray start --head --num-cpus=3 --num-gpus=4 --resources='{"special_hardware": 1, "custom_label": 1}'
ray up
If you are using ray up to start a Ray cluster, you can set the resources field in the YAML file:
available_node_types:
  head:
    ...
    resources:
      CPU: 3
      GPU: 4
      special_hardware: 1
      custom_label: 1
Specifying Task or Actor Resource Requirements#
Ray allows specifying a task or actor’s logical resource requirements (e.g., CPU, GPU, and custom resources).
The task or actor will only run on a node if there are enough required logical resources
available to execute the task or actor.
By default, Ray tasks use 1 logical CPU resource, and Ray actors use 1 logical CPU for scheduling and 0 logical CPUs for running.
(This means that, by default, actors cannot be scheduled on a zero-CPU node, but an infinite number of them can run on any non-zero-CPU node.
The default resource requirements for actors were chosen for historical reasons.
It's recommended to always explicitly set num_cpus for actors to avoid surprises.
If resources are specified explicitly, they are required both at schedule time and at execution time.)
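The default actor rule above (1 CPU to schedule, 0 CPUs to run) can be sketched as two separate checks. These are hypothetical helper functions illustrating the rule, not Ray scheduler code:

```python
# Sketch of the default actor resource rule: scheduling needs 1 CPU on the
# node, but running actors hold 0 CPUs. Illustrative only.

def can_schedule_default_actor(node_cpus: float) -> bool:
    # Schedule-time requirement: the node must have at least 1 logical CPU.
    return node_cpus >= 1

def cpus_held_while_running(num_actors: int) -> float:
    # Run-time requirement: 0 CPUs, so any number of actors can run.
    return num_actors * 0.0

print(can_schedule_default_actor(0))    # a zero-CPU node cannot host the actor
print(can_schedule_default_actor(2))
print(cpus_held_while_running(1000))    # running actors hold no CPUs
```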
You can also explicitly specify a task's or actor's logical resource requirements (for example, one task may require a GPU) instead of using the defaults via ray.remote().
Python
# Specify the default resource requirements for this remote function.
@ray.remote(num_cpus=2, num_gpus=2, resources={"special_hardware": 1})
def func():
    return 1

# You can override the default resource requirements.
func.options(num_cpus=3, num_gpus=1, resources={"special_hardware": 0}).remote()

@ray.remote(num_cpus=0, num_gpus=1)
class Actor:
    pass

# You can override the default resource requirements for actors as well.
actor = Actor.options(num_cpus=1, num_gpus=0).remote()
Java
// Specify required resources.
Ray.task(MyRayApp::myFunction).setResource("CPU", 1.0).setResource("GPU", 1.0).setResource("special_hardware", 1.0).remote();
Ray.actor(Counter::new).setResource("CPU", 2.0).setResource("GPU", 1.0).remote();
C++
// Specify required resources.
ray::Task(MyFunction).SetResource("CPU", 1.0).SetResource("GPU", 1.0).SetResource("special_hardware", 1.0).Remote();
ray::Actor(CreateCounter).SetResource("CPU", 2.0).SetResource("GPU", 1.0).Remote();
Task and actor resource requirements have implications for Ray's scheduling concurrency.
In particular, the sum of the logical resource requirements of all of the
concurrently executing tasks and actors on a given node cannot exceed the node’s total logical resources.
This property can be used to limit the number of concurrently running tasks or actors to avoid issues like OOM.
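The concurrency bound described above is simple division: the scarcest resource limits how many copies of a task can run on a node at once. A minimal sketch (hypothetical `max_concurrency` helper, not a Ray API):

```python
import math

# Sketch: concurrent copies of a task on one node are bounded by
# node resources / per-task requirement, taken over all requested resources.
# Illustrative only; not Ray's scheduler.

def max_concurrency(node_resources: dict, task_request: dict) -> float:
    limits = [
        math.floor(node_resources.get(resource, 0.0) / amount)
        for resource, amount in task_request.items()
        if amount > 0
    ]
    return min(limits) if limits else float("inf")

# GPUs are the bottleneck here: 8/2 = 4 by CPU, but only 2/1 = 2 by GPU.
print(max_concurrency({"CPU": 8, "GPU": 2}, {"CPU": 2, "GPU": 1}))
```

This is exactly why raising `num_cpus` on an OOM-prone task is a common way to cap its concurrency.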
Fractional Resource Requirements#
Ray supports fractional resource requirements.
For example, if your task or actor is IO bound and has low CPU usage, you can specify fractional CPU num_cpus=0.5 or even zero CPU num_cpus=0.
The precision of fractional resource requirements is 0.0001, so avoid specifying values beyond that precision.
@ray.remote(num_cpus=0.5)
def io_bound_task():
    import time
    time.sleep(1)
    return 2

io_bound_task.remote()
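A minimal sketch of keeping a requested fraction within the 0.0001 precision mentioned above, using ordinary rounding (`normalize_fraction` is a hypothetical helper, not a Ray API):

```python
# Sketch: clamp a fractional resource request to 4 decimal places, matching
# the documented 0.0001 precision. Illustrative only.

def normalize_fraction(value: float) -> float:
    return round(value, 4)

print(normalize_fraction(0.33333333))  # a third of a CPU, at valid precision
print(normalize_fraction(0.5))
```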
@ray.remote(num_gpus=0.5)
class IOActor:
    def ping(self):
        import os
        print(f"CUDA_VISIBLE_DEVICES: {os.environ['CUDA_VISIBLE_DEVICES']}")

# Two actors can share the same GPU.
io_actor1 = IOActor.remote()
io_actor2 = IOActor.remote()
ray.get(io_actor1.ping.remote())
ray.get(io_actor2.ping.remote())
# Output:
# (IOActor pid=96328) CUDA_VISIBLE_DEVICES: 1
# (IOActor pid=96329) CUDA_VISIBLE_DEVICES: 1
Note
GPU, TPU, and neuron_cores resource requirements that are greater than 1 need to be whole numbers. For example, num_gpus=1.5 is invalid.
Tip
Besides resource requirements, you can also specify an environment for a task or actor to run in,
which can include Python packages, local files, environment variables, and more. See Runtime Environments for details.
Accelerator Support#
Accelerators like GPUs are critical for many machine learning apps.
Ray Core natively supports many accelerators as pre-defined resource types and allows tasks and actors to specify their accelerator resource requirements.
The accelerators natively supported by Ray Core are:
Accelerator
Ray Resource Name
Support Level
Nvidia GPU
GPU
Fully tested, supported by the Ray team
AMD GPU
GPU
Experimental, supported by the community
Intel GPU
GPU
Experimental, supported by the community
AWS Neuron Core
neuron_cores
Experimental, supported by the community
Google TPU
TPU
Experimental, supported by the community
Intel Gaudi
HPU
Experimental, supported by the community
Huawei Ascend
NPU
Experimental, supported by the community
Starting Ray nodes with accelerators#
By default, Ray sets the quantity of accelerator resources of a node to the physical quantities of accelerators auto-detected by Ray.
If you need to, you can override this.
Nvidia GPU
Tip
You can set the CUDA_VISIBLE_DEVICES environment variable before starting a Ray node
to limit the Nvidia GPUs that are visible to Ray.
For example, CUDA_VISIBLE_DEVICES=1,3 ray start --head --num-gpus=2
lets Ray only see devices 1 and 3.
AMD GPU
Tip
You can set the ROCR_VISIBLE_DEVICES environment variable before starting a Ray node
to limit the AMD GPUs that are visible to Ray.
For example, ROCR_VISIBLE_DEVICES=1,3 ray start --head --num-gpus=2
lets Ray only see devices 1 and 3.
Intel GPU
Tip
You can set the ONEAPI_DEVICE_SELECTOR environment variable before starting a Ray node
to limit the Intel GPUs that are visible to Ray.
For example, ONEAPI_DEVICE_SELECTOR=1,3 ray start --head --num-gpus=2
lets Ray only see devices 1 and 3.
AWS Neuron Core
Tip
You can set the NEURON_RT_VISIBLE_CORES environment variable before starting a Ray node
to limit the AWS Neuron Cores that are visible to Ray.
For example, NEURON_RT_VISIBLE_CORES=1,3 ray start --head --resources='{"neuron_cores": 2}'
lets Ray only see devices 1 and 3.
See the Amazon documentation (https://awslabs.github.io/data-on-eks/docs/category/inference-on-eks) for more examples of Ray on Neuron with EKS as an orchestration substrate.
Google TPU
Tip
You can set the TPU_VISIBLE_CHIPS environment variable before starting a Ray node
to limit the Google TPUs that are visible to Ray.
For example, TPU_VISIBLE_CHIPS=1,3 ray start --head --resources='{"TPU": 2}'
lets Ray only see devices 1 and 3.
Intel Gaudi
Tip
You can set the HABANA_VISIBLE_MODULES environment variable before starting a Ray node
to limit the Intel Gaudi HPUs that are visible to Ray.
For example, HABANA_VISIBLE_MODULES=1,3 ray start --head --resources='{"HPU": 2}'
lets Ray only see devices 1 and 3.
Huawei Ascend
Tip
You can set the ASCEND_RT_VISIBLE_DEVICES environment variable before starting a Ray node
to limit the Huawei Ascend NPUs that are visible to Ray.
For example, ASCEND_RT_VISIBLE_DEVICES=1,3 ray start --head --resources='{"NPU": 2}'
lets Ray only see devices 1 and 3.
Note
There’s nothing preventing you from specifying a larger number of
accelerator resources (e.g., num_gpus) than the true number of accelerators on the machine given Ray resources are logical.
In this case, Ray acts as if the machine has the number of accelerators you specified
for the purposes of scheduling tasks and actors that require accelerators.
Trouble only occurs if those tasks and actors
attempt to actually use accelerators that don’t exist.
Using accelerators in Tasks and Actors#
If a task or actor requires accelerators, you can specify the corresponding resource requirements (e.g. @ray.remote(num_gpus=1)).
Ray then schedules the task or actor to a node that has enough free accelerator resources
and assigns accelerators to the task or actor by setting the corresponding environment variable (e.g. CUDA_VISIBLE_DEVICES) before running the task or actor code.
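Conceptually, the assignment step above amounts to exporting the reserved device IDs through an environment variable that ML frameworks read. A minimal sketch (hypothetical `export_visible_devices` helper, not Ray's code):

```python
import os

# Sketch: export the accelerator IDs reserved for a worker through the
# framework-visible environment variable (e.g. CUDA_VISIBLE_DEVICES).
# Illustrative only; Ray does this internally before running your code.

def export_visible_devices(env_var: str, assigned_ids: list) -> str:
    value = ",".join(str(device_id) for device_id in assigned_ids)
    os.environ[env_var] = value
    return value

print(export_visible_devices("CUDA_VISIBLE_DEVICES", [1]))
```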
Nvidia GPU
import os
import ray
ray.init(num_gpus=2)
@ray.remote(num_gpus=1)
class GPUActor:
    def ping(self):
        print("GPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["GPU"]))
        print("CUDA_VISIBLE_DEVICES: {}".format(os.environ["CUDA_VISIBLE_DEVICES"]))

@ray.remote(num_gpus=1)
def gpu_task():
    print("GPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["GPU"]))
    print("CUDA_VISIBLE_DEVICES: {}".format(os.environ["CUDA_VISIBLE_DEVICES"]))

gpu_actor = GPUActor.remote()
ray.get(gpu_actor.ping.remote())
# The actor uses the first GPU so the task uses the second one.
ray.get(gpu_task.remote())
(GPUActor pid=52420) GPU IDs: [0]
(GPUActor pid=52420) CUDA_VISIBLE_DEVICES: 0
(gpu_task pid=51830) GPU IDs: [1]
(gpu_task pid=51830) CUDA_VISIBLE_DEVICES: 1
AMD GPU
import os
import ray
ray.init(num_gpus=2)
@ray.remote(num_gpus=1)
class GPUActor:
    def ping(self):
        print("GPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["GPU"]))
        print("ROCR_VISIBLE_DEVICES: {}".format(os.environ["ROCR_VISIBLE_DEVICES"]))

@ray.remote(num_gpus=1)
def gpu_task():
    print("GPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["GPU"]))
    print("ROCR_VISIBLE_DEVICES: {}".format(os.environ["ROCR_VISIBLE_DEVICES"]))

gpu_actor = GPUActor.remote()
ray.get(gpu_actor.ping.remote())
# The actor uses the first GPU so the task uses the second one.
ray.get(gpu_task.remote())
(GPUActor pid=52420) GPU IDs: [0]
(GPUActor pid=52420) ROCR_VISIBLE_DEVICES: 0
(gpu_task pid=51830) GPU IDs: [1]
(gpu_task pid=51830) ROCR_VISIBLE_DEVICES: 1
Intel GPU
import os
import ray
ray.init(num_gpus=2)
@ray.remote(num_gpus=1)
class GPUActor:
    def ping(self):
        print("GPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["GPU"]))
        print("ONEAPI_DEVICE_SELECTOR: {}".format(os.environ["ONEAPI_DEVICE_SELECTOR"]))

@ray.remote(num_gpus=1)
def gpu_task():
    print("GPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["GPU"]))
    print("ONEAPI_DEVICE_SELECTOR: {}".format(os.environ["ONEAPI_DEVICE_SELECTOR"]))

gpu_actor = GPUActor.remote()
ray.get(gpu_actor.ping.remote())
# The actor uses the first GPU so the task uses the second one.
ray.get(gpu_task.remote())
(GPUActor pid=52420) GPU IDs: [0]
(GPUActor pid=52420) ONEAPI_DEVICE_SELECTOR: 0
(gpu_task pid=51830) GPU IDs: [1]
(gpu_task pid=51830) ONEAPI_DEVICE_SELECTOR: 1
AWS Neuron Core
import os
import ray
ray.init(resources={"neuron_cores": 2})
@ray.remote(resources={"neuron_cores": 1})
class NeuronCoreActor:
    def ping(self):
        print("Neuron Core IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["neuron_cores"]))
        print("NEURON_RT_VISIBLE_CORES: {}".format(os.environ["NEURON_RT_VISIBLE_CORES"]))

@ray.remote(resources={"neuron_cores": 1})
def neuron_core_task():
    print("Neuron Core IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["neuron_cores"]))
    print("NEURON_RT_VISIBLE_CORES: {}".format(os.environ["NEURON_RT_VISIBLE_CORES"]))

neuron_core_actor = NeuronCoreActor.remote()
ray.get(neuron_core_actor.ping.remote())
# The actor uses the first Neuron Core so the task uses the second one.
ray.get(neuron_core_task.remote())
(NeuronCoreActor pid=52420) Neuron Core IDs: [0]
(NeuronCoreActor pid=52420) NEURON_RT_VISIBLE_CORES: 0
(neuron_core_task pid=51830) Neuron Core IDs: [1]
(neuron_core_task pid=51830) NEURON_RT_VISIBLE_CORES: 1
Google TPU
import os
import ray
ray.init(resources={"TPU": 2})
@ray.remote(resources={"TPU": 1})
class TPUActor:
    def ping(self):
        print("TPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["TPU"]))
        print("TPU_VISIBLE_CHIPS: {}".format(os.environ["TPU_VISIBLE_CHIPS"]))

@ray.remote(resources={"TPU": 1})
def tpu_task():
    print("TPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["TPU"]))
    print("TPU_VISIBLE_CHIPS: {}".format(os.environ["TPU_VISIBLE_CHIPS"]))
tpu_actor = TPUActor.remote()
ray.get(tpu_actor.ping.remote())
# The actor uses the first TPU so the task uses the second one.
ray.get(tpu_task.remote())
(TPUActor pid=52420) TPU IDs: [0]
(TPUActor pid=52420) TPU_VISIBLE_CHIPS: 0
(tpu_task pid=51830) TPU IDs: [1]
(tpu_task pid=51830) TPU_VISIBLE_CHIPS: 1
Intel Gaudi
import os
import ray
ray.init(resources={"HPU": 2})
@ray.remote(resources={"HPU": 1})
class HPUActor:
    def ping(self):
        print("HPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["HPU"]))
        print("HABANA_VISIBLE_MODULES: {}".format(os.environ["HABANA_VISIBLE_MODULES"]))

@ray.remote(resources={"HPU": 1})
def hpu_task():
    print("HPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["HPU"]))
    print("HABANA_VISIBLE_MODULES: {}".format(os.environ["HABANA_VISIBLE_MODULES"]))
hpu_actor = HPUActor.remote()
ray.get(hpu_actor.ping.remote())
# The actor uses the first HPU so the task uses the second one.
ray.get(hpu_task.remote())
(HPUActor pid=52420) HPU IDs: [0]
(HPUActor pid=52420) HABANA_VISIBLE_MODULES: 0
(hpu_task pid=51830) HPU IDs: [1]
(hpu_task pid=51830) HABANA_VISIBLE_MODULES: 1
Huawei Ascend
import os
import ray
ray.init(resources={"NPU": 2})
@ray.remote(resources={"NPU": 1})
class NPUActor:
    def ping(self):
        print("NPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["NPU"]))
        print("ASCEND_RT_VISIBLE_DEVICES: {}".format(os.environ["ASCEND_RT_VISIBLE_DEVICES"]))

@ray.remote(resources={"NPU": 1})
def npu_task():
    print("NPU IDs: {}".format(ray.get_runtime_context().get_accelerator_ids()["NPU"]))
    print("ASCEND_RT_VISIBLE_DEVICES: {}".format(os.environ["ASCEND_RT_VISIBLE_DEVICES"]))
Inside a task or actor, ray.get_runtime_context().get_accelerator_ids() returns a
list of accelerator IDs that are available to the task or actor.
Typically, it is not necessary to call get_accelerator_ids() because Ray
automatically sets the corresponding environment variable (e.g. CUDA_VISIBLE_DEVICES),
which most ML frameworks respect for purposes of accelerator assignment.
Note: The remote function or actor defined above doesn't actually use any
accelerators. Ray schedules it on a node which has at least one accelerator, and
reserves one accelerator for it while it is being executed; however, it is up to the
function to actually make use of the accelerator. This is typically done through an
external library like TensorFlow. Here is an example that actually uses accelerators.
In order for this example to work, you need to install the GPU version of
TensorFlow.
@ray.remote(num_gpus=1)
def gpu_task():
    import tensorflow as tf
    # Create a TensorFlow session. TensorFlow restricts itself to use the
    # GPUs specified by the CUDA_VISIBLE_DEVICES environment variable.
    tf.Session()
Note: It is certainly possible for a task or actor to
ignore its assigned accelerators and use all of the accelerators on the machine. Ray does
not prevent this from happening, and it can lead to too many tasks or actors using the
same accelerator at the same time. However, Ray does automatically set the
environment variable (e.g. CUDA_VISIBLE_DEVICES), which restricts the accelerators used
by most deep learning frameworks, assuming the user doesn't override it.
Fractional Accelerators#
Ray supports fractional resource requirements
so multiple tasks and actors can share the same accelerator.
Nvidia GPU
ray.init(num_cpus=4, num_gpus=1)

@ray.remote(num_gpus=0.25)
def f():
    import time
    time.sleep(1)
# The four tasks created here can execute concurrently
# and share the same GPU.
ray.get([f.remote() for _ in range(4)])
AMD GPU
ray.init(num_cpus=4, num_gpus=1)

@ray.remote(num_gpus=0.25)
def f():
    import time
    time.sleep(1)

# The four tasks created here can execute concurrently
# and share the same GPU.
ray.get([f.remote() for _ in range(4)])
Intel GPU
ray.init(num_cpus=4, num_gpus=1)

@ray.remote(num_gpus=0.25)
def f():
    import time
    time.sleep(1)

# The four tasks created here can execute concurrently
# and share the same GPU.
ray.get([f.remote() for _ in range(4)])
AWS Neuron Core
AWS Neuron Core doesn’t support fractional resources.
Google TPU
Google TPU doesn’t support fractional resources.
Intel Gaudi
Intel Gaudi doesn’t support fractional resources.
Huawei Ascend
ray.init(num_cpus=4, resources={"NPU": 1})
@ray.remote(resources={"NPU": 0.25})
def f():
    import time
    time.sleep(1)

# The four tasks created here can execute concurrently
# and share the same NPU.
ray.get([f.remote() for _ in range(4)])
Note: It is the user’s responsibility to make sure that individual tasks
don’t use more than their share of the accelerator memory.
PyTorch and TensorFlow can be configured to limit their memory usage.
When Ray assigns accelerators of a node to tasks or actors with fractional resource requirements,
it packs one accelerator before moving on to the next one to avoid fragmentation.
ray.init(num_gpus=3)

@ray.remote(num_gpus=0.5)
class FractionalGPUActor:
    def ping(self):
        print("GPU id: {}".format(ray.get_runtime_context().get_accelerator_ids()["GPU"]))
(FractionalGPUActor pid=57417) GPU id: [0]
(FractionalGPUActor pid=57416) GPU id: [0]
(FractionalGPUActor pid=57418) GPU id: [1]
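The packing behavior shown in the output above can be sketched as a first-fit allocator: fill one accelerator before moving to the next, so three 0.5-GPU actors land on GPU 0, 0, then 1. This is an illustration of the documented rule, not Ray's scheduler, and `assign_fractional` is a hypothetical helper:

```python
# Sketch: first-fit packing of fractional GPU requests, filling one GPU
# before spilling to the next to avoid fragmentation. Illustrative only.

def assign_fractional(num_gpus: int, request: float, used: list) -> int:
    for gpu_id in range(num_gpus):
        if used[gpu_id] + request <= 1.0 + 1e-9:
            used[gpu_id] += request
            return gpu_id
    raise RuntimeError("no GPU has enough free capacity")

used = [0.0] * 3          # a 3-GPU node
ids = [assign_fractional(3, 0.5, used) for _ in range(3)]
print(ids)  # [0, 0, 1]
```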
Workers not Releasing GPU Resources#
Currently, when a worker executes a task that uses a GPU (e.g.,
through TensorFlow), the task may allocate memory on the GPU and may not release
it when the task finishes executing. This can lead to problems the next time a
task tries to use the same GPU. To address the problem, Ray disables worker
process reuse between GPU tasks by default, and GPU resources are released after
the task process exits. Since this adds overhead to GPU task scheduling,
you can re-enable worker reuse by setting max_calls=0
in the ray.remote decorator.
# By default, Ray does not reuse workers for GPU tasks to prevent
# GPU resource leakage.
@ray.remote(num_gpus=1)
def leak_gpus():
    import tensorflow as tf

    # This task allocates memory on the GPU and then never releases it.
    tf.Session()
Accelerator Types#
Ray supports resource-specific accelerator types. The accelerator_type option can be used to force a task or actor to run on a node with a specific type of accelerator.
Under the hood, the accelerator type option is implemented as a custom resource requirement of "accelerator_type:<type>": 0.001.
This forces the task or actor to be placed on a node with that particular accelerator type available.
This also lets the multi-node-type autoscaler know that there is demand for that type of resource, potentially triggering the launch of new nodes providing that accelerator.
from ray.util.accelerators import NVIDIA_TESLA_V100

@ray.remote(num_gpus=1, accelerator_type=NVIDIA_TESLA_V100)
def train(data):
    return "This function was run on a node with a Tesla V100 GPU"

ray.get(train.remote(1))
See ray.util.accelerators for available accelerator types.
Placement Groups#
Placement groups allow users to atomically reserve groups of resources across multiple nodes (i.e., gang scheduling).
They can then be used to schedule Ray tasks and actors packed together for locality (PACK), or spread apart
(SPREAD). Placement groups are generally used for gang-scheduling actors, but also support tasks.
Here are some real-world use cases:
Distributed Machine Learning Training: Distributed Training (e.g., Ray Train and Ray Tune) uses the placement group APIs to enable gang scheduling. In these settings, all resources for a trial must be available at the same time. Gang scheduling is a critical technique to enable all-or-nothing scheduling for deep learning training.
Fault tolerance in distributed training: Placement groups can be used to configure fault tolerance. In Ray Tune, it can be beneficial to pack related resources from a single trial together, so that a node failure impacts a low number of trials. In libraries that support elastic training (e.g., XGBoost-Ray), spreading the resources across multiple nodes can help to ensure that training continues even when a node dies.
Key Concepts#
Bundles#
A bundle is a collection of “resources”. It could be a single resource, {"CPU": 1}, or a group of resources, {"CPU": 1, "GPU": 4}.
A bundle is a unit of reservation for placement groups. “Scheduling a bundle” means we find a node that fits the bundle and reserve the resources specified by the bundle.
A bundle must be able to fit on a single node on the Ray cluster. For example, if you only have an 8 CPU node, and if you have a bundle that requires {"CPU": 9}, this bundle cannot be scheduled.
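The "must fit on a single node" rule above is a per-resource comparison against one node's capacity. A minimal sketch (hypothetical `bundle_fits` helper, not a Ray API):

```python
# Sketch: a bundle is schedulable only if a single node can satisfy every
# resource in it. Illustrative only; not Ray's scheduler.

def bundle_fits(node: dict, bundle: dict) -> bool:
    return all(node.get(resource, 0.0) >= amount
               for resource, amount in bundle.items())

print(bundle_fits({"CPU": 8}, {"CPU": 9}))                      # too big for the node
print(bundle_fits({"CPU": 8, "GPU": 4}, {"CPU": 1, "GPU": 4}))  # fits
```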
Placement Group#
A placement group reserves the resources from the cluster. The reserved resources can only be used by tasks or actors that use the PlacementGroupSchedulingStrategy.
Placement groups are represented by a list of bundles. For example, {"CPU": 1} * 4 means you’d like to reserve 4 bundles of 1 CPU (i.e., it reserves 4 CPUs).
Bundles are then placed according to the placement strategies across nodes on the cluster.
After the placement group is created, tasks or actors can be then scheduled according to the placement group and even on individual bundles.
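The `{"CPU": 1} * 4` notation above is just a list of identical bundles; summing them gives the total reservation. A small sketch of that bookkeeping (plain Python, not a Ray API):

```python
from collections import Counter

# Sketch: a placement group is a list of bundles; {"CPU": 1} * 4 expands to
# four one-CPU bundles, reserving 4 CPUs in total. Illustrative only.

bundles = [{"CPU": 1} for _ in range(4)]

total = Counter()
for bundle in bundles:
    total.update(bundle)
print(dict(total))  # {'CPU': 4}
```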
Create a Placement Group (Reserve Resources)#
You can create a placement group using ray.util.placement_group().
Placement groups take in a list of bundles and a placement strategy.
Note that each bundle must be able to fit on a single node on the Ray cluster.
For example, if you only have a 8 CPU node, and if you have a bundle that requires {"CPU": 9},
this bundle cannot be scheduled.
Bundles are specified by a list of dictionaries, e.g., [{"CPU": 1}, {"CPU": 1, "GPU": 1}]).
CPU corresponds to num_cpus as used in ray.remote.
GPU corresponds to num_gpus as used in ray.remote.
memory corresponds to memory as used in ray.remote.
Other resources correspond to resources as used in ray.remote (e.g., ray.init(resources={"disk": 1}) can have a bundle of {"disk": 1}).
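The bundle-key correspondence above can be sketched as a translation table. `bundle_to_remote_kwargs` is a hypothetical helper for illustration, not a Ray API:

```python
# Sketch of the documented mapping from bundle keys to ray.remote() arguments:
# CPU -> num_cpus, GPU -> num_gpus, memory -> memory, everything else ->
# the custom resources dict. Illustrative only.

def bundle_to_remote_kwargs(bundle: dict) -> dict:
    kwargs = {"resources": {}}
    for key, amount in bundle.items():
        if key == "CPU":
            kwargs["num_cpus"] = amount
        elif key == "GPU":
            kwargs["num_gpus"] = amount
        elif key == "memory":
            kwargs["memory"] = amount
        else:
            kwargs["resources"][key] = amount
    return kwargs

print(bundle_to_remote_kwargs({"CPU": 1, "GPU": 1, "disk": 1}))
```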
Placement group scheduling is asynchronous. The ray.util.placement_group returns immediately.
Python
from pprint import pprint
import time
# Import placement group APIs.
from ray.util.placement_group import (
    placement_group,
    placement_group_table,
    remove_placement_group,
)
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy
# Initialize Ray.
import ray
# Create a single node Ray cluster with 2 CPUs and 2 GPUs.
ray.init(num_cpus=2, num_gpus=2)
# Reserve a placement group of 1 bundle that reserves 1 CPU and 1 GPU.
pg = placement_group([{"CPU": 1, "GPU": 1}])
Java
// Initialize Ray.
Ray.init();
// Construct a list of bundles.
Map<String, Double> bundle = ImmutableMap.of("CPU", 1.0);
List<Map<String, Double>> bundles = ImmutableList.of(bundle);
// Make a creation option with bundles and strategy.
PlacementGroupCreationOptions options =
    new PlacementGroupCreationOptions.Builder()
        .setBundles(bundles)
        .setStrategy(PlacementStrategy.STRICT_SPREAD)
        .build();
PlacementGroup pg = PlacementGroups.createPlacementGroup(options);
C++
// Initialize Ray.
ray::Init();
// Construct a list of bundles.
std::vector<std::unordered_map<std::string, double>> bundles{{{"CPU", 1.0}}};
// Make a creation option with bundles and strategy.
ray::internal::PlacementGroupCreationOptions options{
false, "my_pg", bundles, ray::internal::PlacementStrategy::PACK};
ray::PlacementGroup pg = ray::CreatePlacementGroup(options);
You can block your program until the placement group is ready using one of two APIs:
ready, which is compatible with ray.get
wait, which blocks the program until the placement group is ready
Python
# Wait until placement group is created.
ray.get(pg.ready(), timeout=10)
# You can also use ray.wait.
ready, unready = ray.wait([pg.ready()], timeout=10)
# You can look at placement group states using this API.
print(placement_group_table(pg))
Java
// Wait for the placement group to be ready within the specified time (unit: seconds).
boolean ready = pg.wait(60);
Assert.assertTrue(ready);

// You can look at placement group states using this API.
List<PlacementGroup> allPlacementGroup = PlacementGroups.getAllPlacementGroups();
for (PlacementGroup group : allPlacementGroup) {
  System.out.println(group);
}
C++
// Wait for the placement group to be ready within the specified time (unit: seconds).
bool ready = pg.Wait(60);
assert(ready);

// You can look at placement group states using this API.
std::vector<ray::PlacementGroup> all_placement_group = ray::GetAllPlacementGroups();
for (const ray::PlacementGroup &group : all_placement_group) {
  std::cout << group.GetName() << std::endl;
}
Let’s verify the placement group is successfully created.
# This API is only available when you download Ray via `pip install "ray[default]"`
ray list placement-groups
======== List: 2023-04-07 01:15:05.682519 ========
Stats:
------------------------------
Total: 1
Table:
------------------------------
PLACEMENT_GROUP_ID NAME CREATOR_JOB_ID STATE
0 3cd6174711f47c14132155039c0501000000 01000000 CREATED
The placement group is successfully created. Out of the {"CPU": 2, "GPU": 2} resources, the placement group reserves {"CPU": 1, "GPU": 1}.
The reserved resources can only be used when you schedule tasks or actors with a placement group.
The diagram below demonstrates the “1 CPU and 1 GPU” bundle that the placement group reserved.
Placement groups are atomically created; if a bundle cannot fit in any of the current nodes,
the entire placement group is not ready and no resources are reserved.
To illustrate, let’s create another placement group that requires {"CPU":1}, {"GPU": 2} (2 bundles).
Python
# Cannot create this placement group because we
# cannot create a {"GPU": 2} bundle.
pending_pg = placement_group([{"CPU": 1}, {"GPU": 2}])
# This raises the timeout exception!
try:
    ray.get(pending_pg.ready(), timeout=5)
except Exception as e:
    print(
        "Cannot create a placement group because "
        "{'GPU': 2} bundle cannot be created."
    )
    print(e)
You can verify the new placement group is pending creation.
# This API is only available when you download Ray via `pip install "ray[default]"`
ray list placement-groups
======== List: 2023-04-07 01:16:23.733410 ========
Stats:
------------------------------
Total: 2
Table:
------------------------------
PLACEMENT_GROUP_ID NAME CREATOR_JOB_ID STATE
0 3cd6174711f47c14132155039c0501000000 01000000 CREATED
1 e1b043bebc751c3081bddc24834d01000000 01000000 PENDING <---- the new placement group.
You can also verify that the {"CPU": 1, "GPU": 2} bundles cannot be allocated, using the ray status CLI command.
ray status
Resources
---------------------------------------------------------------
Usage:
0.0/2.0 CPU (0.0 used of 1.0 reserved in placement groups)
0.0/2.0 GPU (0.0 used of 1.0 reserved in placement groups)
0B/3.46GiB memory
0B/1.73GiB object_store_memory
Demands:
{'CPU': 1.0} * 1, {'GPU': 2.0} * 1 (PACK): 1+ pending placement groups <--- 1 placement group is pending creation.
The current cluster has {"CPU": 2, "GPU": 2}. We already created a {"CPU": 1, "GPU": 1} bundle, so only {"CPU": 1, "GPU": 1} is left in the cluster.
If we create 2 bundles {"CPU": 1} and {"GPU": 2}, we can create the first bundle successfully, but can’t schedule the second bundle.
Since we cannot create every bundle on the cluster, the placement group is not created, including the {"CPU": 1} bundle.
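The all-or-nothing behavior described above can be sketched as staged reservation with rollback: try every bundle against a working copy of the free resources, and commit only if all of them fit. `try_reserve` is a hypothetical helper, not Ray's scheduler:

```python
# Sketch of atomic (all-or-nothing) bundle reservation: either every bundle
# fits on some node, or nothing is reserved at all. Illustrative only.

def try_reserve(nodes: list, bundles: list):
    staged = [dict(node) for node in nodes]  # work on a copy; commit on success
    for bundle in bundles:
        for node in staged:
            if all(node.get(r, 0.0) >= amt for r, amt in bundle.items()):
                for r, amt in bundle.items():
                    node[r] -= amt
                break
        else:
            return None  # some bundle doesn't fit: reserve nothing
    return staged

# What's left in the cluster after the first placement group: 1 CPU, 1 GPU.
free = [{"CPU": 1.0, "GPU": 1.0}]
print(try_reserve(free, [{"CPU": 1.0}, {"GPU": 2.0}]))  # None: atomic failure
print(free)  # unchanged, so not even the {"CPU": 1} bundle was reserved
```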
When the placement group cannot be scheduled in any way, it is called “infeasible”.
Imagine you schedule {"CPU": 4} bundle, but you only have a single node with 2 CPUs. There’s no way to create this bundle in your cluster.
The Ray Autoscaler is aware of placement groups, and auto-scales the cluster to ensure pending groups can be placed as needed.
If the Ray Autoscaler cannot provide resources to schedule a placement group, Ray does not print a warning about infeasible groups or the tasks and actors that use them.
You can observe the scheduling state of the placement group from the dashboard or state APIs.
Schedule Tasks and Actors to Placement Groups (Use Reserved Resources)#
In the previous section, we created a placement group that reserved {"CPU": 1, "GPU": 1} from a 2 CPU and 2 GPU node.
Now let’s schedule an actor to the placement group.
You can schedule actors or tasks to a placement group using
options(scheduling_strategy=PlacementGroupSchedulingStrategy(...)).
Python
@ray.remote(num_cpus=1)
class Actor:
    def __init__(self):
        pass

    def ready(self):
        pass

# Schedule an actor to the placement group.
actor = Actor.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(
        placement_group=pg,
    )
).remote()

# Verify the actor is scheduled.
ray.get(actor.ready.remote(), timeout=10)
Java
public static class Counter {
  private int value;

  public Counter(int initValue) {
    this.value = initValue;
  }

  public int getValue() {
    return value;
  }

  public static String ping() {
    return "pong";
  }
}

// Create GPU actors on a gpu bundle.
for (int index = 0; index < 1; index++) {
  Ray.actor(Counter::new, 1)
      .setPlacementGroup(pg, 0)
      .remote();
}
C++
class Counter {
 public:
  Counter(int init_value) : value(init_value) {}
  int GetValue() { return value; }
  std::string Ping() {
    return "pong";
  }

 private:
  int value;
};
// Factory function of Counter class.
static Counter *CreateCounter(int init_value) {
  return new Counter(init_value);
}
RAY_REMOTE(&Counter::Ping, &Counter::GetValue, CreateCounter);

// Create GPU actors on a gpu bundle.
for (int index = 0; index < 1; index++) {
  ray::Actor(CreateCounter)
      .SetPlacementGroup(pg, 0)
      .Remote(1);
}
Note
By default, Ray actors require 1 logical CPU at schedule time, but after being scheduled, they do not acquire any CPU resources.
In other words, by default, actors cannot be scheduled on a zero-CPU node, but an infinite number of them can run on any non-zero-CPU node.
Thus, when scheduling an actor with the default resource requirements and a placement group, the placement group has to be created with a bundle containing at least 1 CPU
(since the actor requires 1 CPU for scheduling). However, after the actor is created, it doesn’t consume any placement group resources.
To avoid any surprises, always specify resource requirements explicitly for actors. If resources are specified explicitly, they are required both at schedule time and at execution time.
The actor is scheduled now! One bundle can be used by multiple tasks and actors (i.e., the bundle to task (or actor) is a one-to-many relationship).
In this case, since the actor uses 1 CPU, 1 GPU remains from the bundle.
You can verify this with the CLI command ray status. You can see that 1 CPU is reserved by the placement group and that 1.0 is used (by the actor we created).
ray status
Resources
---------------------------------------------------------------
Usage:
1.0/2.0 CPU (1.0 used of 1.0 reserved in placement groups) <---
0.0/2.0 GPU (0.0 used of 1.0 reserved in placement groups)
0B/4.29GiB memory
0B/2.00GiB object_store_memory
Demands:
(no resource demands)
You can also verify the actor is created using ray list actors.
# This API is only available when you download Ray via `pip install "ray[default]"`
ray list actors --detail
- actor_id: b5c990f135a7b32bfbb05e1701000000
class_name: Actor
death_cause: null
is_detached: false
job_id: '01000000'
name: ''
node_id: b552ca3009081c9de857a31e529d248ba051a4d3aeece7135dde8427
pid: 8795
placement_group_id: d2e660ac256db230dbe516127c4a01000000 <------
ray_namespace: e5b19111-306c-4cd8-9e4f-4b13d42dff86
repr_name: ''
required_resources:
CPU_group_d2e660ac256db230dbe516127c4a01000000: 1.0
serialized_runtime_env: '{}'
state: ALIVE
Since 1 GPU remains, let’s create a new actor that requires 1 GPU.
This time, we also specify the placement_group_bundle_index. Each bundle is given an “index” within the placement group.
For example, a placement group of 2 bundles [{"CPU": 1}, {"GPU": 1}] has index 0 bundle {"CPU": 1}
and index 1 bundle {"GPU": 1}. Since we only have 1 bundle, we only have index 0. If you don’t specify a bundle, the actor (or task)
is scheduled on a random bundle that has unallocated reserved resources.
Python
@ray.remote(num_cpus=0, num_gpus=1)
class Actor:
    def __init__(self):
        pass

    def ready(self):
        pass

# Create a GPU actor on the first bundle (index 0).
actor2 = Actor.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(
        placement_group=pg,
        placement_group_bundle_index=0,
    )
).remote()

# Verify that the GPU actor is scheduled.
ray.get(actor2.ready.remote(), timeout=10)
We succeeded in scheduling the GPU actor. The image below shows the 2 actors scheduled into the placement group.
You can also verify that the reserved resources are all used, with the ray status command.
ray status
Resources
---------------------------------------------------------------
Usage:
1.0/2.0 CPU (1.0 used of 1.0 reserved in placement groups)
1.0/2.0 GPU (1.0 used of 1.0 reserved in placement groups) <----
0B/4.29GiB memory
0B/2.00GiB object_store_memory
Placement Strategy#
Placement groups also let you add placement constraints among bundles. For example, you may want to pack your bundles onto the same node or spread them across multiple nodes as much as possible. You can specify the strategy via the strategy argument. This way, you can make sure your actors and tasks are scheduled with certain placement constraints.
The example below creates a placement group with 2 bundles with a PACK strategy; both bundles have to be created on the same node. Note that PACK is a soft policy: if the bundles cannot be packed into a single node, they are spread to other nodes. If you'd like to avoid this, use the STRICT_PACK strategy instead, which fails to create the placement group if the placement requirements cannot be satisfied.
# Reserve a placement group of 2 bundles
# that have to be packed on the same node.
pg = placement_group([{"CPU": 1}, {"GPU": 1}], strategy="PACK")
The image below demonstrates the PACK policy: all three of the {"CPU": 2} bundles are located on the same node.
The image below demonstrates the SPREAD policy: each of the three {"CPU": 2} bundles is located on a different node.
Ray supports four placement group strategies. The default scheduling policy is PACK.
STRICT_PACK
All bundles must be placed on a single node in the cluster. Use this strategy when you want to maximize locality.
PACK
All provided bundles are packed onto a single node on a best-effort basis.
If strict packing is not feasible (i.e., some bundles do not fit on the node), bundles can be placed onto other nodes.
STRICT_SPREAD
Each bundle must be scheduled on a separate node.
SPREAD
Each bundle is spread onto separate nodes on a best-effort basis.
If strict spreading is not feasible, bundles can be placed on overlapping nodes.
Remove Placement Groups (Free Reserved Resources)#
By default, a placement group's lifetime is scoped to the driver that creates it (unless you make it a detached placement group). When the placement group is created from a detached actor, its lifetime is scoped to the detached actor.
In Ray, the driver is the Python script that calls ray.init.
Reserved resources (bundles) from the placement group are automatically freed when the driver or detached actor that created the placement group exits. To free the reserved resources manually, remove the placement group using the remove_placement_group API (which is also asynchronous).
Note
When you remove the placement group, actors or tasks that still use the reserved resources are
forcefully killed.
Python
# This API is asynchronous.
remove_placement_group(pg)
# Wait until placement group is killed.
time.sleep(1)
# Check that the placement group has died.
pprint(placement_group_table(pg))
"""
{'bundles': {0: {'GPU': 1.0}, 1: {'CPU': 1.0}},
'name': 'unnamed_group',
'placement_group_id': '40816b6ad474a6942b0edb45809b39c3',
'state': 'REMOVED',
'strategy': 'PACK'}
"""
Java
PlacementGroups.removePlacementGroup(placementGroup.getId());
PlacementGroup removedPlacementGroup = PlacementGroups.getPlacementGroup(placementGroup.getId());
Assert.assertEquals(removedPlacementGroup.getState(), PlacementGroupState.REMOVED);
C++
ray::RemovePlacementGroup(placement_group.GetID());
ray::PlacementGroup removed_placement_group = ray::GetPlacementGroup(placement_group.GetID());
assert(removed_placement_group.GetState() == ray::PlacementGroupState::REMOVED);
Observe and Debug Placement Groups#
Ray provides several useful tools to inspect the placement group states and resource usage.
Ray Status is a CLI tool for viewing the resource usage and scheduling resource requirements of placement groups.
Ray Dashboard is a UI tool for inspecting placement group states.
Ray State API is a CLI for inspecting placement group states.
ray status (CLI)
The CLI command ray status provides the autoscaling status of the cluster.
It provides the “resource demands” from unscheduled placement groups as well as the resource reservation status.
Resources
---------------------------------------------------------------
Usage:
1.0/2.0 CPU (1.0 used of 1.0 reserved in placement groups)
0.0/2.0 GPU (0.0 used of 1.0 reserved in placement groups)
0B/4.29GiB memory
0B/2.00GiB object_store_memory
Dashboard
The dashboard job view provides the placement group table that displays the scheduling state and metadata of the placement group.
Note
The Ray dashboard is only available when you install Ray with pip install "ray[default]".
Ray State API
Ray state API is a CLI tool for inspecting the state of Ray resources (tasks, actors, placement groups, etc.).
ray list placement-groups provides the metadata and the scheduling state of the placement group.
ray list placement-groups --detail provides statistics and scheduling state in a greater detail.
Note
The State API is only available when you install Ray with pip install "ray[default]".
Inspect Placement Group Scheduling State#
With the above tools, you can see the state of the placement group. The definitions of the states are specified in the following files:
High level state
Details
[Advanced] Child Tasks and Actors#
By default, child actors and tasks don’t share the same placement group that the parent uses.
To automatically schedule child actors or tasks to the same placement group,
set placement_group_capture_child_tasks to True.
Python
import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy
ray.init(num_cpus=2)
# Create a placement group.
pg = placement_group([{"CPU": 2}])
ray.get(pg.ready())
@ray.remote(num_cpus=1)
def child():
    import time
    time.sleep(5)

@ray.remote(num_cpus=1)
def parent():
    # The child task is scheduled to the same placement group as its parent,
    # although it didn't specify the PlacementGroupSchedulingStrategy.
    ray.get(child.remote())

# Since the child and parent use 1 CPU each, the placement group
# bundle {"CPU": 2} is fully occupied.
ray.get(
    parent.options(
        scheduling_strategy=PlacementGroupSchedulingStrategy(
            placement_group=pg, placement_group_capture_child_tasks=True
        )
    ).remote()
)
Java
It’s not implemented for Java APIs yet.
When placement_group_capture_child_tasks is True, but you don’t want to schedule
child tasks and actors to the same placement group, specify PlacementGroupSchedulingStrategy(placement_group=None).
@ray.remote
def parent():
    # In this case, the child task isn't
    # scheduled with the parent's placement group.
    ray.get(
        child.options(
            scheduling_strategy=PlacementGroupSchedulingStrategy(placement_group=None)
        ).remote()
    )
# This times out because we cannot schedule the child task.
# The cluster has {"CPU": 2}, and both CPUs are reserved by
# the placement group with a bundle {"CPU": 2}. Since the child shouldn't
# be scheduled within this placement group, it cannot be scheduled because
# there are no available CPU resources.
try:
    ray.get(
        parent.options(
            scheduling_strategy=PlacementGroupSchedulingStrategy(
                placement_group=pg, placement_group_capture_child_tasks=True
            )
        ).remote(),
        timeout=5,
    )
except Exception as e:
    print("Couldn't create a child task!")
    print(e)
Warning
The value of placement_group_capture_child_tasks for a given actor isn't inherited from its parent. If you're creating nested actors deeper than one level and want them all to use the same placement group, set placement_group_capture_child_tasks explicitly for each actor.
[Advanced] Named Placement Group#
Within a namespace, you can name a placement group.
You can use the name of a placement group to retrieve the placement group from any job
in the Ray cluster, as long as the job is within the same namespace.
This is useful if you can’t directly pass the placement group handle to
the actor or task that needs it, or if you are trying to
access a placement group launched by another driver.
The placement group is destroyed when the original creation job completes, unless its lifetime is detached. You can avoid this by using a detached placement group.
Note that this feature requires that you specify a namespace; otherwise, you can't retrieve the placement group across jobs.
Python
# first_driver.py
# Create a placement group with a unique name within a namespace.
# Start Ray or connect to a Ray cluster using: ray.init(namespace="pg_namespace")
pg = placement_group([{"CPU": 1}], name="pg_name")
ray.get(pg.ready())
# second_driver.py
# Retrieve a placement group with a unique name within a namespace.
# Start Ray or connect to a Ray cluster using: ray.init(namespace="pg_namespace")
pg = ray.util.get_placement_group("pg_name")
Java
// Create a placement group with a unique name.
Map<String, Double> bundle = ImmutableMap.of("CPU", 1.0);
List<Map<String, Double>> bundles = ImmutableList.of(bundle);
PlacementGroupCreationOptions options =
    new PlacementGroupCreationOptions.Builder()
        .setBundles(bundles)
        .setStrategy(PlacementStrategy.STRICT_SPREAD)
        .setName("global_name")
        .build();
PlacementGroup pg = PlacementGroups.createPlacementGroup(options);
pg.wait(60);
...
// Retrieve the placement group later somewhere.
PlacementGroup group = PlacementGroups.getPlacementGroup("global_name");
Assert.assertNotNull(group);
C++
// Create a placement group with a globally unique name.
std::vector<std::unordered_map<std::string, double>> bundles{{{"CPU", 1.0}}};
ray::PlacementGroupCreationOptions options{
true/*global*/, "global_name", bundles, ray::PlacementStrategy::STRICT_SPREAD};
ray::PlacementGroup pg = ray::CreatePlacementGroup(options);
pg.Wait(60);
...
// Retrieve the placement group later somewhere.
ray::PlacementGroup group = ray::GetGlobalPlacementGroup("global_name");
assert(!group.Empty());
C++ also supports non-global named placement groups, meaning the placement group name is only valid within the job and cannot be accessed from another job.
// Create a placement group with a job-scope-unique name.
std::vector<std::unordered_map<std::string, double>> bundles{{{"CPU", 1.0}}};
ray::PlacementGroupCreationOptions options{
false/*non-global*/, "non_global_name", bundles, ray::PlacementStrategy::STRICT_SPREAD};
ray::PlacementGroup pg = ray::CreatePlacementGroup(options);
pg.Wait(60);
...
// Retrieve the placement group later somewhere in the same job.
ray::PlacementGroup group = ray::GetPlacementGroup("non_global_name");
assert(!group.Empty());
[Advanced] Detached Placement Group#
By default, the lifetimes of placement groups belong to the driver and actor.
If the placement group is created from a driver, it is destroyed when the driver is terminated.
If it is created from a detached actor, it is killed when the detached actor is killed.
To keep the placement group alive regardless of its job or detached actor, specify
lifetime="detached". For example:
Python
# driver_1.py
# Create a detached placement group that survives even after
# the job terminates.
pg = placement_group([{"CPU": 1}], lifetime="detached", name="global_name")
ray.get(pg.ready())
Java
The lifetime argument is not implemented for Java APIs yet.
Let’s terminate the current script and start a new Python script. Call ray list placement-groups, and you can see the placement group is not removed.
Note that the lifetime option is decoupled from the name. If we only specified
the name without specifying lifetime="detached", then the placement group can
only be retrieved as long as the original driver is still running.
It is recommended to always specify the name when creating the detached placement group.
[Advanced] Fault Tolerance#
Rescheduling Bundles on a Dead Node#
If nodes that contain some bundles of a placement group die, the GCS reschedules all the bundles on different nodes (i.e., Ray tries to reserve the resources again). This means that the initial creation of a placement group is "atomic", but after creation, a placement group can become partial.
Rescheduled bundles have higher scheduling priority than other pending placement groups.
Provide Resources for Partially Lost Bundles#
If there are not enough resources to schedule the partially lost bundles,
the placement group waits, assuming Ray Autoscaler will start a new node to satisfy the resource requirements.
If the additional resources cannot be provided (e.g., you don’t use the Autoscaler or the Autoscaler hits the resource limit),
the placement group remains in the partially created state indefinitely.
Fault tolerance#
Ray is a distributed system, and that means failures can happen. Generally, Ray classifies
failures into two classes:
1. application-level failures
2. system-level failures
Bugs in user-level code or external system failures trigger application-level failures.
Node failures, network failures, or just bugs in Ray trigger system-level failures.
The following section contains the mechanisms that Ray provides to allow applications to recover from failures.
To handle application-level failures, Ray provides mechanisms to catch errors,
retry failed code, and handle misbehaving code. See the pages for task and actor fault
tolerance for more information on these mechanisms.
Ray also provides several mechanisms to automatically recover from internal system-level failures like node failures.
In particular, Ray can automatically recover from some failures in the distributed object store.
How to write fault tolerant Ray applications#
There are several recommendations to make Ray applications fault tolerant:
First, if the fault tolerance mechanisms provided by Ray don’t work for you,
you can always catch exceptions caused by failures and recover manually.
import ray

@ray.remote
class Actor:
    def read_only(self):
        import sys
        import random

        rand = random.random()
        if rand < 0.2:
            return 2 / 0
        elif rand < 0.3:
            sys.exit(1)
        return 2

actor = Actor.remote()
# Manually retry the actor task.
while True:
    try:
        print(ray.get(actor.read_only.remote()))
        break
    except ZeroDivisionError:
        pass
    except ray.exceptions.RayActorError:
        # Manually restart the actor
        actor = Actor.remote()
Second, avoid letting an ObjectRef outlive its owner task or actor
(the task or actor that creates the initial ObjectRef by calling ray.put() or foo.remote()).
As long as there are still references to an object,
the owner worker of the object keeps running even after the corresponding task or actor finishes.
If the owner worker fails, Ray cannot recover the object automatically for those who try to access the object.
One example of creating such outlived objects is returning ObjectRef created by ray.put() from a task:
import ray

# Non-fault tolerant version:
@ray.remote
def a():
    x_ref = ray.put(1)
    return x_ref

x_ref = ray.get(a.remote())

# Object x outlives its owner task A.
try:
    # If owner of x (i.e. the worker process running task A) dies,
    # the application can no longer get value of x.
    print(ray.get(x_ref))
except ray.exceptions.OwnerDiedError:
    pass
In the preceding example, object x outlives its owner task a.
If the worker process running task a fails, calling ray.get on x_ref afterwards results in an OwnerDiedError exception.
The following example is a fault tolerant version which returns x directly. In this example, the driver owns x and you only access it within the lifetime of the driver.
If x is lost, Ray can automatically recover it via lineage reconstruction.
See Anti-pattern: Returning ray.put() ObjectRefs from a task harms performance and fault tolerance for more details.
# Fault tolerant version:
@ray.remote
def a():
    # Here we return the value directly instead of calling ray.put() first.
    return 1

# The owner of x is the driver
# so x is accessible and can be auto recovered
# during the entire lifetime of the driver.
x_ref = a.remote()
print(ray.get(x_ref))
Third, avoid using custom resource requirements that only particular nodes can satisfy.
If that particular node fails, Ray won’t retry the running tasks or actors.
@ray.remote
def b():
    return 1

# If the node with ip 127.0.0.3 fails while task b is running,
# Ray cannot retry the task on other nodes.
b.options(resources={"node:127.0.0.3": 1}).remote()
If you prefer running a task on a particular node, you can use the NodeAffinitySchedulingStrategy.
It allows you to specify the affinity as a soft constraint so even if the target node fails, the task can still be retried on other nodes.
# Prefer running on the particular node specified by node id
# but can also run on other nodes if the target node fails.
b.options(
    scheduling_strategy=ray.util.scheduling_strategies.NodeAffinitySchedulingStrategy(
        node_id=ray.get_runtime_context().get_node_id(), soft=True
    )
).remote()
More about Ray fault tolerance#
Task Fault Tolerance
Actor Fault Tolerance
Object Fault Tolerance
Node Fault Tolerance
GCS Fault Tolerance
Memory Management#
This page describes how memory management works in Ray.
Also view Debugging Out of Memory to learn how to troubleshoot out-of-memory issues.
Concepts#
There are several ways that Ray applications use memory:
Ray system memory: this is memory used internally by Ray
GCS: memory used for storing the list of nodes and actors present in the cluster. The amount of memory used for these purposes is typically quite small.
Raylet: memory used by the C++ raylet process running on each node. This cannot be controlled, but is typically quite small.
Application memory: this is memory used by your application
Worker heap: memory used by your application (e.g., in Python code or TensorFlow), best measured as the resident set size (RSS) of your application minus its shared memory usage (SHR) in commands such as top. The reason you need to subtract SHR is that object store shared memory is reported by the OS as shared with each worker. Not subtracting SHR will result in double counting memory usage.
Object store memory: memory used when your application creates objects in the object store via ray.put and when it returns values from remote functions. Objects are reference counted and evicted when they fall out of scope. An object store server runs on each node. By default, when starting an instance, Ray reserves 30% of available memory. The size of the object store can be controlled by --object-store-memory. The memory is by default allocated to /dev/shm (shared memory) for Linux. For MacOS, Ray uses /tmp (disk), which can impact the performance compared to Linux. In Ray 1.3+, objects are spilled to disk if the object store fills up.
Object store shared memory: memory used when your application reads objects via ray.get. Note that if an object is already present on the node, this does not cause additional allocations. This allows large objects to be efficiently shared among many actors and tasks.
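The worker-heap arithmetic described above (RSS minus SHR) can be computed directly. A Linux-only sketch that reads /proc (the helper name is illustrative):

```python
import os

def worker_heap_bytes(pid="self"):
    # /proc/<pid>/statm fields: size resident shared ... (in pages).
    # Heap ~= resident set size minus shared pages, which excludes
    # object-store shared memory mapped into the worker.
    with open(f"/proc/{pid}/statm") as f:
        _, resident, shared = f.read().split()[:3]
    page_size = os.sysconf("SC_PAGE_SIZE")
    return (int(resident) - int(shared)) * page_size

heap = worker_heap_bytes()
print(f"approximate heap: {heap / 1024**2:.1f} MiB")
```

For a Ray worker, pass the worker's PID instead of "self".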
ObjectRef Reference Counting#
Ray implements distributed reference counting so that any ObjectRef in scope in the cluster is pinned in the object store. This includes local Python references, arguments to pending tasks, and IDs serialized inside of other objects.
Debugging using ‘ray memory’#
The ray memory command can be used to help track down what ObjectRef references are in scope and may be causing an ObjectStoreFullError.
Running ray memory from the command line while a Ray application is running will give you a dump of all of the ObjectRef references that are currently held by the driver, actors, and tasks in the cluster.
======== Object references status: 2021-02-23 22:02:22.072221 ========
Grouping by node address... Sorting by object size...
--- Summary for node address: 192.168.0.15 ---
Mem Used by Objects Local References Pinned Count Pending Tasks Captured in Objects Actor Handles
287 MiB 4 0 0 1 0
--- Object references for node address: 192.168.0.15 ---
IP Address PID Type Object Ref Size Reference Type Call Site
192.168.0.15  6465  Driver  ffffffffffffffffffffffffffffffffffffffff0100000001000000  15 MiB  LOCAL_REFERENCE  (put object) | test.py:<module>:17
--- Aggregate object store stats across all nodes ---
Plasma memory usage 0 MiB, 4 objects, 0.0% full
Each entry in this output corresponds to an ObjectRef that’s currently pinning an object in the object store along with where the reference is (in the driver, in a worker, etc.), what type of reference it is (see below for details on the types of references), the size of the object in bytes, the process ID and IP address where the object was instantiated, and where in the application the reference was created.
ray memory comes with features to make the memory debugging experience more effective. For example, you can add the arguments --sort-by=OBJECT_SIZE and --group-by=STACK_TRACE, which may be particularly helpful for tracking down the line of code where a memory leak occurs. You can see the full suite of options by running ray memory --help.
There are five types of references that can keep an object pinned:
1. Local ObjectRef references
import ray
@ray.remote
def f(arg):
    return arg
a = ray.put(None)
b = f.remote(None)
In this example, we create references to two objects: one that is ray.put() in the object store and another that’s the return value from f.remote().
--- Summary for node address: 192.168.0.15 ---
Mem Used by Objects Local References Pinned Count Pending Tasks Captured in Objects Actor Handles
30 MiB 2 0 0 0 0
In the output from ray memory, we can see that each of these is marked as a LOCAL_REFERENCE in the driver process, but the annotation in the “Reference Creation Site” indicates that the first was created as a “put object” and the second from a “task call.”
2. Objects pinned in memory
import numpy as np
a = ray.put(np.zeros(1))
b = ray.get(a)
del a
In this example, we create a numpy array and then store it in the object store. Then, we fetch the same numpy array from the object store and delete its ObjectRef. In this case, the object is still pinned in the object store because the deserialized copy (stored in b) points directly to the memory in the object store.
--- Summary for node address: 192.168.0.15 ---
Mem Used by Objects Local References Pinned Count Pending Tasks Captured in Objects Actor Handles
243 MiB 0 1 0 0 0
--- Object references for node address: 192.168.0.15 ---
IP Address PID Type Object Ref Size Reference Type Call Site
192.168.0.15  7066  Driver  ffffffffffffffffffffffffffffffffffffffff0100000001000000  243 MiB  PINNED_IN_MEMORY  test.py:<module>:19
The output from ray memory displays this as the object being PINNED_IN_MEMORY. If we del b, the reference can be freed.
3. Pending task references
@ray.remote
def f(arg):
    while True:
        pass

a = ray.put(None)
b = f.remote(a)
In this example, we first create an object via ray.put() and then submit a task that depends on the object.
--- Summary for node address: 192.168.0.15 ---
Mem Used by Objects Local References Pinned Count Pending Tasks Captured in Objects Actor Handles
25 MiB 1 1 1 0 0
--- Object references for node address: 192.168.0.15 ---
IP Address PID Type Object Ref Size Reference Type Call Site
192.168.0.15  7207  Driver  a67dc375e60ddd1affffffffffffffffffffffff0100000001000000  ?  LOCAL_REFERENCE  (task call) | test.py:<module>:29
While the task is running, we see that ray memory shows both a LOCAL_REFERENCE and a USED_BY_PENDING_TASK reference for the object in the driver process. The worker process also holds a reference to the object because the Python arg is directly referencing the memory in the plasma, so it can’t be evicted; therefore it is PINNED_IN_MEMORY.
4. Serialized ObjectRef references
@ray.remote
def f(arg):
    while True:
        pass

a = ray.put(None)
b = f.remote([a])
In this example, we again create an object via ray.put(), but then pass it to a task wrapped in another object (in this case, a list).
--- Summary for node address: 192.168.0.15 ---
Mem Used by Objects Local References Pinned Count Pending Tasks Captured in Objects Actor Handles
15 MiB 2 0 1 0 0
192.168.0.15  7373  Driver  ffffffffffffffffffffffffffffffffffffffff0100000001000000  15 MiB  USED_BY_PENDING_TASK  (put object) | test.py:<module>:37
Now, both the driver and the worker process running the task hold a LOCAL_REFERENCE to the object in addition to it being USED_BY_PENDING_TASK on the driver. If this was an actor task, the actor could even hold a LOCAL_REFERENCE after the task completes by storing the ObjectRef in a member variable.
5. Captured ObjectRef references
a = ray.put(None)
b = ray.put([a])
del a
In this example, we first create an object via ray.put(), then capture its ObjectRef inside of another ray.put() object, and delete the first ObjectRef. In this case, both objects are still pinned.
--- Summary for node address: 192.168.0.15 ---
Mem Used by Objects Local References Pinned Count Pending Tasks Captured in Objects Actor Handles
233 MiB 1 0 0 1 0
In the output of ray memory, we see that the second object displays as a normal LOCAL_REFERENCE, but the first object is listed as CAPTURED_IN_OBJECT.
Memory Aware Scheduling#
By default, Ray does not take into account the potential memory usage of a task or actor when scheduling. This is simply because it cannot estimate ahead of time how much memory is required. However, if you know how much memory a task or actor requires, you can specify it in the resource requirements of its ray.remote decorator to enable memory-aware scheduling:
Important
Specifying a memory requirement does NOT impose any limits on memory usage. The requirements are used for admission control during scheduling only (similar to how CPU scheduling works in Ray). It is up to the task itself to not use more memory than it requested.
To tell the Ray scheduler a task or actor requires a certain amount of available memory to run, set the memory argument. The Ray scheduler will then reserve the specified amount of available memory during scheduling, similar to how it handles CPU and GPU resources:
# Reserve 500MiB of available memory to place this task.
@ray.remote(memory=500 * 1024 * 1024)
def some_function(x):
    pass

# Reserve 2.5GiB of available memory to place this actor.
@ray.remote(memory=2500 * 1024 * 1024)
class SomeActor:
    def __init__(self, a, b):
        pass
In the above example, the memory quota is specified statically by the decorator, but you can also set them dynamically at runtime using .options() as follows:
# override the memory quota to 100MiB when submitting the task
some_function.options(memory=100 * 1024 * 1024).remote(x=1)
# override the memory quota to 1GiB when creating the actor
SomeActor.options(memory=1000 * 1024 * 1024).remote(a=1, b=2)
Questions or Issues?#
You can post questions, issues, or feedback through the following channels:
Discussion Board: For questions about Ray usage or feature requests.
GitHub Issues: For bug reports.
Ray Slack: For getting in touch with Ray maintainers.
StackOverflow: Use the [ray] tag for questions about Ray.
Out-Of-Memory Prevention#
If application tasks or actors consume a large amount of heap space, it can cause the node to run out of memory (OOM). When that happens, the operating system will start killing worker or raylet processes, disrupting the application. OOM may also stall metrics, and if this happens on the head node, it may stall the dashboard or other control processes and cause the cluster to become unusable.
In this section we will go over:
What is the memory monitor and how it works
How to enable and configure it
How to use the memory monitor to detect and resolve memory issues
Also view Debugging Out of Memory to learn how to troubleshoot out-of-memory issues.
What is the memory monitor?#
The memory monitor is a component that runs within the raylet process on each node. It periodically checks the memory usage, which includes the worker heap, the object store, and the raylet as described in memory management. If the combined usage exceeds a configurable threshold, the raylet will kill a task or actor process to free up memory and prevent Ray from failing.
It’s available on Linux and is tested with Ray running inside a container that is using cgroup v1/v2. If you encounter issues when running the memory monitor outside of a container, file an issue or post a question.
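As a rough mental model, the periodic check boils down to a fraction-of-capacity test. The sketch below is illustrative only, not Ray's actual implementation:

```python
# Hedged sketch of the monitor's threshold test, not Ray's actual code.
# `used_bytes` stands in for the combined worker heap, object store, and
# raylet usage; `threshold` mirrors RAY_memory_usage_threshold (default 0.95).
def exceeds_threshold(used_bytes: int, total_bytes: int, threshold: float = 0.95) -> bool:
    """Return True when the node is considered under memory pressure."""
    return used_bytes / total_bytes > threshold

print(exceeds_threshold(96, 100))  # 0.96 > 0.95 -> True
print(exceeds_threshold(90, 100))  # 0.90 -> False
```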
How do I disable the memory monitor?#
The memory monitor is enabled by default and can be disabled by setting the environment variable RAY_memory_monitor_refresh_ms to zero when Ray starts (e.g., RAY_memory_monitor_refresh_ms=0 ray start …).
How do I configure the memory monitor?#
The memory monitor is controlled by the following environment variables:
RAY_memory_monitor_refresh_ms (int, defaults to 250) is the interval to check memory usage and kill tasks or actors if needed. Task killing is disabled when this value is 0. The memory monitor selects and kills one task at a time and waits for it to be killed before choosing another one, regardless of how frequently the memory monitor runs.
RAY_memory_usage_threshold (float, defaults to 0.95) is the fraction of memory capacity beyond which the node is considered under memory pressure. If memory usage exceeds this fraction, the raylet starts killing processes to free up memory. Valid range is [0, 1].
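For example, a node could be started with a tighter threshold and a slower check interval (the values here are purely illustrative, not recommendations):

```shell
# Illustrative values only: check memory usage every 500 ms and start
# killing workers once usage exceeds 90% of the node's memory capacity.
RAY_memory_monitor_refresh_ms=500 \
RAY_memory_usage_threshold=0.9 \
ray start --head
```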
Using the Memory Monitor#
Retry policy#
When a task or actor is killed by the memory monitor, it is retried with exponential backoff. There is a cap on the retry delay of 60 seconds. Tasks killed by the memory monitor are retried infinitely (not respecting max_retries), while actors are not recreated infinitely (the monitor respects max_restarts, which is 0 by default).
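The delay schedule can be sketched as exponential backoff with a cap. The base delay and exact formula below are illustrative assumptions; Ray's internal constants may differ:

```python
# Hedged sketch of the retry-delay schedule described above: exponential
# backoff capped at 60 seconds. Base delay and formula are assumptions
# for illustration only.
def retry_delay_seconds(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay before retry number `attempt` (0-indexed): base * 2**attempt, capped."""
    return min(base * (2 ** attempt), cap)

print([retry_delay_seconds(i) for i in range(8)])
# Delays grow 1, 2, 4, ... and flatten once they reach the 60-second cap.
```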
Worker killing policy#
The memory monitor avoids infinite loops of task retries by ensuring at least one task is able to run for each caller on each node. If it is unable to ensure this, the workload will fail with an OOM error. Note that this is only an issue for tasks, since the memory monitor will not indefinitely retry actors. If the workload fails, refer to how to address memory issues on how to adjust the workload to make it pass. For a code example, see the last task example below.
When a worker needs to be killed, the policy first prioritizes tasks that are retriable, i.e. when max_retries or max_restarts is > 0. This is done to minimize workload failure. Actors by default are not retriable since max_restarts defaults to 0. Therefore, by default, tasks are preferred to actors when it comes to what gets killed first.
When there are multiple callers that have created tasks, the policy picks a task from the caller with the greatest number of running tasks. If two callers have the same number of tasks, it picks the caller whose earliest task has a later start time. This is done to ensure fairness and allow each caller to make progress.
Amongst the tasks that share the same caller, the latest started task will be killed first.
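The selection rules above can be sketched as follows. This is an illustrative model, not Ray's actual implementation; tasks are invented (caller_id, start_time) pairs:

```python
# Hedged sketch of the worker-killing policy described above: pick the
# caller with the most running tasks (ties broken by the later earliest
# start time), then kill that caller's latest-started task.
from collections import defaultdict

def pick_task_to_kill(tasks):
    """tasks: list of (caller_id, start_time) pairs; returns the pair to kill."""
    starts_by_caller = defaultdict(list)
    for caller, start in tasks:
        starts_by_caller[caller].append(start)
    # Most tasks wins; equal counts fall back to the later earliest start.
    caller = max(
        starts_by_caller,
        key=lambda c: (len(starts_by_caller[c]), min(starts_by_caller[c])),
    )
    return (caller, max(starts_by_caller[caller]))

print(pick_task_to_kill([("A", 1), ("A", 5), ("A", 9), ("B", 2), ("B", 7)]))
# Caller "A" has the most tasks, so its latest-started task ("A", 9) is chosen.
```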
Below is an example to demonstrate the policy. In the example we have a script that creates two tasks, which in turn creates four more tasks each. The tasks are colored such that each color forms a “group” of tasks where they belong to the same caller.
If, at this point, the node runs out of memory, it will pick a task from the caller with the most tasks and kill the task of that caller that started most recently:
If, at this point, the node still runs out of memory, the process will repeat:
Example: Workloads fails if the last task of the caller is killed
Let’s create an application oom.py that runs a single task that requires more memory than what is available. It is set to infinite retry by setting max_retries to -1.
The worker killing policy sees that it is the last task of the caller and fails the workload when it kills the task, even though the task is set to retry forever.
import ray
@ray.remote(max_retries=-1)
def leaks_memory():
chunks = []
bits_to_allocate = 8 * 100 * 1024 * 1024 # ~100 MiB
while True:
chunks.append([0] * bits_to_allocate)
try:
ray.get(leaks_memory.remote())
except ray.exceptions.OutOfMemoryError as ex:
print("task failed with OutOfMemoryError, which is expected")
Set RAY_event_stats_print_interval_ms=1000 so it prints the worker kill summary every second, since by default it prints every minute.
RAY_event_stats_print_interval_ms=1000 python oom.py
(raylet) node_manager.cc:3040: 1 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 2c82620270df6b9dd7ae2791ef51ee4b5a9d5df9f795986c10dd219c, IP: 172.31.183.172) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip 172.31.183.172`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
task failed with OutOfMemoryError, which is expected
Verify the task was indeed executed twice via ``task_oom_retry``.
Example: memory monitor prefers to kill a retriable task
Let’s first start Ray and specify the memory threshold.
RAY_memory_usage_threshold=0.4 ray start --head
Let’s create an application two_actors.py that submits two actors, where the first one is retriable and the second one is non-retriable.
from math import ceil
import ray
from ray._private.utils import (
get_system_memory,
) # do not use outside of this example as these are private methods.
from ray._private.utils import (
get_used_memory,
) # do not use outside of this example as these are private methods.
# estimates the number of bytes to allocate to reach the desired memory usage percentage.
def get_additional_bytes_to_reach_memory_usage_pct(pct: float) -> int:
used = get_used_memory()
total = get_system_memory()
bytes_needed = int(total * pct) - used
assert (
bytes_needed > 0
), "memory usage is already above the target. Increase the target percentage."
return bytes_needed
@ray.remote
class MemoryHogger:
def __init__(self):
self.allocations = []
def allocate(self, bytes_to_allocate: float) -> None:
# divide by 8 as each element in the array occupies 8 bytes
new_list = [0] * ceil(bytes_to_allocate / 8)
self.allocations.append(new_list)
first_actor = MemoryHogger.options(
max_restarts=1, max_task_retries=1, name="first_actor"
).remote()
second_actor = MemoryHogger.options(
max_restarts=0, max_task_retries=0, name="second_actor"
).remote()
# each task requests 0.3 of the system memory when the memory threshold is 0.4.
allocate_bytes = get_additional_bytes_to_reach_memory_usage_pct(0.3)
first_actor_task = first_actor.allocate.remote(allocate_bytes)
second_actor_task = second_actor.allocate.remote(allocate_bytes)
error_thrown = False
try:
ray.get(first_actor_task)
except ray.exceptions.OutOfMemoryError as ex:
error_thrown = True
print("First started actor, which is retriable, was killed by the memory monitor.")
assert error_thrown
ray.get(second_actor_task)
print("Second started actor, which is not-retriable, finished.")
Run the application to see that only the first actor was killed.
$ python two_actors.py
First started actor, which is retriable, was killed by the memory monitor.
Second started actor, which is not-retriable, finished.
Addressing memory issues#
When the application fails due to OOM, consider reducing the memory usage of the tasks and actors, increasing the memory capacity of the node, or limiting the number of concurrently running tasks.
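One way to reason about the last option: if each task reserves memory via the memory resource, the reservation itself bounds concurrency. A back-of-the-envelope sketch, where the node and task sizes are made-up numbers for illustration:

```python
# Made-up numbers for illustration: on a 16 GiB node, tasks that each
# reserve 2 GiB via @ray.remote(memory=...) can run at most 8 at a time,
# regardless of how many CPUs are free.
def max_concurrent_tasks(node_memory_bytes: int, task_memory_bytes: int) -> int:
    """How many tasks fit at once when each reserves `task_memory_bytes`."""
    return node_memory_bytes // task_memory_bytes

print(max_concurrent_tasks(16 * 1024**3, 2 * 1024**3))  # -> 8
```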
Questions or Issues?#
You can post questions, issues, or feedback through the following channels:
Discussion Board: For questions about Ray usage or feature requests.
GitHub Issues: For bug reports.
Ray Slack: For getting in touch with Ray maintainers.
StackOverflow: Use the [ray] tag for questions about Ray.
Catching application-level failures#
Ray surfaces application-level failures as Python-level exceptions. When a task
on a remote worker or actor fails due to a Python-level exception, Ray wraps
the original exception in a RayTaskError and stores this as the task’s
return value. This wrapped exception will be thrown to any worker that tries
to get the result, either by calling ray.get or if the worker is executing
another task that depends on the object. If the user’s exception type can be subclassed,
the raised exception is an instance of both RayTaskError and the user’s exception type
so the user can try-catch either of them. Otherwise, the wrapped exception is just
RayTaskError and the actual user’s exception type can be accessed via the cause
field of the RayTaskError.
import ray
@ray.remote
def f():
raise Exception("the real error")
@ray.remote
def g(x):
return
try:
ray.get(f.remote())
except ray.exceptions.RayTaskError as e:
print(e)
# ray::f() (pid=71867, ip=XXX.XX.XXX.XX)
# File "errors.py", line 5, in f
# raise Exception("the real error")
# Exception: the real error
try:
ray.get(g.remote(f.remote()))
except ray.exceptions.RayTaskError as e:
print(e)
# ray::g() (pid=73085, ip=128.32.132.47)
# At least one of the input arguments for this task could not be computed:
# ray.exceptions.RayTaskError: ray::f() (pid=73085, ip=XXX.XX.XXX.XX)
# File "errors.py", line 5, in f
# raise Exception("the real error")
# Exception: the real error
Example code of catching the user exception type when the exception type can be subclassed:
class MyException(Exception):
...
@ray.remote
def raises_my_exc():
raise MyException("a user exception")
try:
ray.get(raises_my_exc.remote())
except MyException as e:
print(e)
# ray::raises_my_exc() (pid=15329, ip=127.0.0.1)
# File "<$PWD>/task_exceptions.py", line 45, in raises_my_exc
# raise MyException("a user exception")
# MyException: a user exception
Example code of accessing the user exception type when the exception type can not be subclassed:
class MyFinalException(Exception):
def __init_subclass__(cls, /, *args, **kwargs):
raise TypeError("Cannot subclass this little exception class.")
@ray.remote
def raises_my_final_exc():
raise MyFinalException("a *final* user exception")
try:
ray.get(raises_my_final_exc.remote())
except ray.exceptions.RayTaskError as e:
assert isinstance(e.cause, MyFinalException)
print(e)
# 2024-04-08 21:11:47,417 WARNING exceptions.py:177 -- User exception type <class '__main__.MyFinalException'> in RayTaskError can not be subclassed! This exception will be raised as RayTaskError only. You can use `ray_task_error.cause` to access the user exception. Failure in subclassing: Cannot subclass this little exception class.
# ray::raises_my_final_exc() (pid=88226, ip=127.0.0.1)
# File "<$PWD>/task_exceptions.py", line 66, in raises_my_final_exc
# raise MyFinalException("a *final* user exception")
# MyFinalException: a *final* user exception
print(type(e.cause))
# <class '__main__.MyFinalException'>
print(e.cause)
# a *final* user exception
If Ray can’t serialize the user’s exception, it converts the exception to a RayError.
import threading
class UnserializableException(Exception):
def __init__(self):
self.lock = threading.Lock()
@ray.remote
def raise_unserializable_error():
raise UnserializableException
try:
ray.get(raise_unserializable_error.remote())
except ray.exceptions.RayTaskError as e:
print(e)
# ray::raise_unserializable_error() (pid=328577, ip=172.31.5.154)
# File "/home/ubuntu/ray/tmp~/main.py", line 25, in raise_unserializable_error
# raise UnserializableException
# UnserializableException
print(type(e.cause))
# <class 'ray.exceptions.RayError'>
print(e.cause)
# The original cause of the RayTaskError (<class '__main__.UnserializableException'>) isn't serializable: cannot pickle '_thread.lock' object. Overwriting the cause to a RayError.