---
dataset_info:
- config_name: cheetah_run
features:
- name: observation
dtype: image
- name: state
list: float32
length: 17
- name: mask
dtype: image
- name: action
list: float32
length: 6
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 53270273758
num_examples: 9000000
- name: test
num_bytes: 5918594227
num_examples: 1000000
download_size: 65071625266
dataset_size: 59188867985
- config_name: cheetah_run_distractor_hard
features:
- name: observation
dtype: image
- name: state
list: float32
length: 17
- name: mask
dtype: image
- name: action
list: float32
length: 6
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 74053289802
num_examples: 9000000
- name: test
num_bytes: 8082638681
num_examples: 1000000
download_size: 82135334541
dataset_size: 82135928483
- config_name: cheetah_run_distractor_low
features:
- name: observation
dtype: image
- name: state
list: float32
length: 17
- name: mask
dtype: image
- name: action
list: float32
length: 6
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 70535426992
num_examples: 9000000
- name: test
num_bytes: 8021400416
num_examples: 1000000
download_size: 86532692286
dataset_size: 78556827408
- config_name: hopper_hop
features:
- name: observation
dtype: image
- name: state
list: float32
length: 15
- name: mask
dtype: image
- name: action
list: float32
length: 4
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 51471971969
num_examples: 9000000
- name: test
num_bytes: 5718832813
num_examples: 1000000
download_size: 62655370451
dataset_size: 57190804782
- config_name: hopper_hop_distractor_hard
features:
- name: observation
dtype: image
- name: state
list: float32
length: 15
- name: mask
dtype: image
- name: action
list: float32
length: 4
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: test
num_bytes: 8249548536
num_examples: 1000000
- name: train
num_bytes: 72097453824
num_examples: 9000000
download_size: 160329248812
dataset_size: 80347002360
- config_name: hopper_hop_distractor_low
features:
- name: observation
dtype: image
- name: state
list: float32
length: 15
- name: mask
dtype: image
- name: action
list: float32
length: 4
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: test
num_bytes: 7839665057
num_examples: 1000000
- name: train
num_bytes: 68870596536
num_examples: 9000000
download_size: 152982979619
dataset_size: 76710261593
- config_name: humanoid_walk
features:
- name: observation
dtype: image
- name: state
list: float32
length: 67
- name: mask
dtype: image
- name: action
list: float32
length: 21
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: test
num_bytes: 5070235970
num_examples: 1000000
- name: train
num_bytes: 45625807845
num_examples: 9000000
download_size: 111746246866
dataset_size: 50696043815
- config_name: humanoid_walk_distractor_hard
features:
- name: observation
dtype: image
- name: state
list: float32
length: 67
- name: mask
dtype: image
- name: action
list: float32
length: 21
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: test
num_bytes: 7400537242
num_examples: 1000000
- name: train
num_bytes: 65863139376
num_examples: 9000000
download_size: 79307163440
dataset_size: 73263676618
- config_name: humanoid_walk_distractor_low
features:
- name: observation
dtype: image
- name: state
list: float32
length: 67
- name: mask
dtype: image
- name: action
list: float32
length: 21
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: test
num_bytes: 7296954770
num_examples: 1000000
- name: train
num_bytes: 65122732432
num_examples: 9000000
download_size: 134273060565
dataset_size: 72419687202
- config_name: walker_run
features:
- name: observation
dtype: image
- name: state
list: float32
length: 24
- name: mask
dtype: image
- name: action
list: float32
length: 6
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: test
num_bytes: 5733131603
num_examples: 1000000
- name: train
num_bytes: 51588531732
num_examples: 9000000
download_size: 56856633613
dataset_size: 57321663335
- config_name: walker_run_distractor_hard
features:
- name: observation
dtype: image
- name: state
list: float32
length: 24
- name: mask
dtype: image
- name: action
list: float32
length: 6
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: test
num_bytes: 7628712875
num_examples: 1000000
- name: train
num_bytes: 65388586082
num_examples: 9000000
download_size: 73094420747
dataset_size: 73017298957
- config_name: walker_run_distractor_low
features:
- name: observation
dtype: image
- name: state
list: float32
length: 24
- name: mask
dtype: image
- name: action
list: float32
length: 6
- name: reward
dtype: float32
- name: terminated
dtype: bool
- name: truncated
dtype: bool
splits:
- name: train
num_bytes: 62166053160
num_examples: 9000000
- name: test
num_bytes: 6939287765
num_examples: 1000000
download_size: 75665029057
dataset_size: 69105340925
configs:
- config_name: cheetah_run
data_files:
- split: train
path: cheetah_run/train-*
- split: test
path: cheetah_run/test-*
- config_name: cheetah_run_distractor_hard
data_files:
- split: test
path: cheetah_run_distractor_hard/test-*
- split: train
path: cheetah_run_distractor_hard/train-*
- config_name: cheetah_run_distractor_low
data_files:
- split: test
path: cheetah_run_distractor_low/test-*
- split: train
path: cheetah_run_distractor_low/train-*
- config_name: hopper_hop
data_files:
- split: train
path: hopper_hop/train-*
- split: test
path: hopper_hop/test-*
- config_name: hopper_hop_distractor_hard
data_files:
- split: train
path: hopper_hop_distractor_hard/train-*
- split: test
path: hopper_hop_distractor_hard/test-*
- config_name: hopper_hop_distractor_low
data_files:
- split: test
path: hopper_hop_distractor_low/test-*
- split: train
path: hopper_hop_distractor_low/train-*
- config_name: humanoid_walk
data_files:
- split: test
path: humanoid_walk/test-*
- split: train
path: humanoid_walk/train-*
- config_name: humanoid_walk_distractor_hard
data_files:
- split: train
path: humanoid_walk_distractor_hard/train-*
- split: test
path: humanoid_walk_distractor_hard/test-*
- config_name: humanoid_walk_distractor_low
data_files:
- split: test
path: humanoid_walk_distractor_low/test-*
- split: train
path: humanoid_walk_distractor_low/train-*
- config_name: walker_run
data_files:
- split: test
path: walker_run/test-*
- split: train
path: walker_run/train-*
- config_name: walker_run_distractor_hard
data_files:
- split: test
path: walker_run_distractor_hard/test-*
- split: train
path: walker_run_distractor_hard/train-*
- config_name: walker_run_distractor_low
data_files:
- split: test
path: walker_run_distractor_low/test-*
- split: train
path: walker_run_distractor_low/train-*
---
## Visual Distracting Control Suite Benchmark
This dataset contains expert trajectories generated by a Proximal Policy Optimization (PPO) reinforcement learning agent trained on 4 environments of the [Distracting Control Suite](https://github.com/google-research/google-research/tree/master/distracting_control). For each environment we collect data at several levels of distraction (defined below), along with segmentation masks of the agent.
Levels of distraction:
- None: ...
- Low: ...
- Hard: ...
## Dataset Usage
Regular usage (for the domain cheetah with task run and hard distractions):
```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/visual_distracting_control_suite", name="cheetah_run_distractor_hard", split="train")
test_dataset = load_dataset("EpicPinkPenguin/visual_distracting_control_suite", name="cheetah_run_distractor_hard", split="test")
```
## Agent Performance
The PPO agent was trained for 2M steps on each environment and obtained the following final performance metrics on the evaluation environment.
| Environment | Steps (Train) | Steps (Test) | Return | Observation |
|:------------------------------|:----------------|:---------------|:---------|:------------|
| cheetah_run | 9,000,000 | 1,000,000 | 837.67 | |
| cheetah_run_distractor_low | 9,000,000 | 1,000,000 | 837.67 | |
| cheetah_run_distractor_hard | 9,000,000 | 1,000,000 | 837.67 | |
| hopper_hop | 9,000,000 | 1,000,000 | 307.33 | |
| hopper_hop_distractor_low | 9,000,000 | 1,000,000 | 307.33 | |
| hopper_hop_distractor_hard | 9,000,000 | 1,000,000 | 307.33 | |
| humanoid_walk | 9,000,000 | 1,000,000 | 616.52 | |
| humanoid_walk_distractor_low | 9,000,000 | 1,000,000 | 616.52 | |
| humanoid_walk_distractor_hard | 9,000,000 | 1,000,000 | 616.52 | |
| walker_run | 9,000,000 | 1,000,000 | 738.37 | |
| walker_run_distractor_low | 9,000,000 | 1,000,000 | 738.37 | |
| walker_run_distractor_hard | 9,000,000 | 1,000,000 | 738.37 | |
## Dataset Structure
### Data Instances
Each data instance represents a single step consisting of tuples of the form (observation, state, mask, action, reward, terminated, truncated) = (o_t, s_t, m_t, a_t, r_t, terminated_t, truncated_t).
```json
{'action': [1],
'observation': [[[0, 166, 253],
[0, 174, 255],
[0, 170, 251],
[0, 191, 255],
[0, 191, 255],
[0, 221, 255],
[0, 243, 255],
[0, 248, 255],
[0, 243, 255],
[10, 239, 255],
[25, 255, 255],
[0, 241, 255],
[0, 235, 255],
[17, 240, 255],
[10, 243, 255],
[27, 253, 255],
[39, 255, 255],
[58, 255, 255],
[85, 255, 255],
[111, 255, 255],
[135, 255, 255],
[151, 255, 255],
[173, 255, 255],
...
[0, 0, 37],
[0, 0, 39]]],
 'state': [-0.09255199134349823, 0.028468089178204536, -0.05743644759058952, ..., -0.013366516679525375, -0.08739502727985382, 0.007727491203695536],
 'mask': [
[0, 0, 0, 0, ..., 0, 0, 0, 0],
[0, 0, 0, 0, ..., 0, 0, 0, 0],
[0, 0, 255, 255, ..., 255, 255, 0, 0],
[0, 0, 255, 255, ..., 255, 255, 0, 0],
...
[0, 0, 255, 255, ..., 255, 255, 0, 0],
[0, 0, 255, 255, ..., 255, 255, 0, 0],
[0, 0, 0, 0, ..., 0, 0, 0, 0],
[0, 0, 0, 0, ..., 0, 0, 0, 0],
]
 'reward': 0.0,
 'terminated': False,
 'truncated': False}
```
### Data Fields
- `observation`: The current RGB observation from the environment.
- `state`: The current state of the environment.
- `mask`: A segmentation mask of the agent, with everything zero, except the agent, which is 255.
- `action`: The action predicted by the agent for the current observation.
- `reward`: The reward received for the current step.
- `terminated`: Whether the episode terminated at the current step.
- `truncated`: Whether the episode was truncated at the current step.
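The mask can be used to isolate the agent's pixels in an observation, e.g. for object-centric representation learning. A minimal NumPy sketch, using randomly generated stand-ins for a real sample's `observation` and `mask` fields (real images come from the dataset):

```python
import numpy as np

# Dummy stand-ins for a real sample's fields; shapes are illustrative only.
observation = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 255  # agent pixels are 255, background is 0

# Keep only the agent's pixels; background pixels become black.
agent_only = observation * (mask[..., None] // 255)
```

Since the mask values are exactly 0 or 255, integer division by 255 turns it into a 0/1 array that can be broadcast against the RGB channels.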
### Data Splits
The dataset is divided into a `train` (90%) and `test` (10%) split. Each environment dataset contains 10M steps (data points) in total.
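Because each row is a single step, full episodes can be reconstructed by cutting the flat stream wherever `terminated` or `truncated` is set. A minimal sketch over a toy list of step dicts (field names as in the dataset, values made up):

```python
def split_into_episodes(steps):
    """Group a flat iterable of step dicts into episodes.

    An episode ends at any step whose `terminated` or `truncated` flag is True.
    """
    episodes, current = [], []
    for step in steps:
        current.append(step)
        if step["terminated"] or step["truncated"]:
            episodes.append(current)
            current = []
    if current:  # trailing partial episode, if any
        episodes.append(current)
    return episodes

# Toy stream: two episodes of lengths 3 and 2.
steps = [
    {"reward": 0.0, "terminated": False, "truncated": False},
    {"reward": 0.5, "terminated": False, "truncated": False},
    {"reward": 1.0, "terminated": False, "truncated": True},
    {"reward": 0.0, "terminated": False, "truncated": False},
    {"reward": 0.2, "terminated": True, "truncated": False},
]
episodes = split_into_episodes(steps)
```

The same function works on rows loaded from either split, as long as they are iterated in order.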
## Dataset Creation
The dataset was created by training a PPO RL agent on the environment state for 2M steps in each environment. The trajectories were then generated by taking the greedy action (the mean of the predicted action distribution) at each step. Each environment was created with the same random seed, making the trajectories identical across distraction levels. Concretely, episode 0 of cheetah_run is identical to episode 0 of cheetah_run_distractor_low and cheetah_run_distractor_hard in everything except the observation, which differs due to the visual distractors; the same holds for all remaining episodes.
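The greedy-action scheme above can be sketched as follows, assuming a diagonal Gaussian policy head (typical for PPO on continuous control; the exact policy architecture used here is not specified, and `policy` below is a hypothetical stand-in): during collection the distribution mean is returned instead of a sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(state):
    """Hypothetical stand-in for a trained PPO policy head: returns the
    mean and std of a diagonal Gaussian over a 6-dim action space."""
    mean = np.tanh(state[:6])  # placeholder computation, not the real network
    std = np.full(6, 0.1)
    return mean, std

def act(state, greedy=True):
    mean, std = policy(state)
    if greedy:
        return mean                 # data collection: take the mean action
    return rng.normal(mean, std)    # training: sample from the Gaussian

state = np.zeros(17)  # e.g. cheetah_run has a 17-dim state vector
action = act(state, greedy=True)
```

Taking the mean removes exploration noise, so the recorded trajectories reflect the deterministic expert behavior of the trained agent.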
## Distracting Control Suite
The [Distracting Control Suite](https://arxiv.org/abs/2101.02722) is an extension of the DeepMind Control Suite that augments standard continuous control tasks with visual distractions to evaluate the robustness of reinforcement learning (RL) algorithms. While preserving the underlying MuJoCo-based physics and task dynamics, it introduces changes in the visual observations—such as background videos, colors, textures, and camera variations—that are unrelated to the control objective. These distractions are designed to challenge agents’ ability to learn representations that generalize beyond spurious visual correlations. By decoupling task-relevant dynamics from high-dimensional, non-stationary visual noise, the Distracting Control Suite provides a controlled benchmark for studying generalization, representation learning, and robustness in vision-based RL. It is commonly used to assess how well algorithms trained in one visual setting transfer to others, and to compare methods that aim to improve invariance, stability, and sample efficiency under perceptual perturbations.