---
viewer: false
tags:
- reinforcement-learning
- in-context
- imitation-learning
- generalist-agent
license: apache-2.0
task_categories:
- reinforcement-learning
---
# Vintix II Cross-Domain ICRL Dataset

## Dataset Summary

This dataset is a large-scale cross-domain benchmark for **in-context reinforcement learning** and **continuous control**. It was introduced with **Vintix II** and covers a diverse set of tasks spanning robotic manipulation, dexterous control, locomotion, energy management, industrial process control, autonomous driving, and other control settings.

The training set contains **209 tasks across 10 domains**, totaling **3.8M episodes** and **709.7M timesteps**. In addition, the benchmark defines **46 held-out tasks** for evaluation on unseen tasks and environment variations.

| Domain | Tasks | Episodes | Timesteps | Sample Weight |
|---|---:|---:|---:|---:|
| Industrial-Benchmark | 16 | 288k | 72M | 10.1% |
| Bi-DexHands | 15 | 216.2k | 31.7M | 4.5% |
| Meta-World | 45 | 670k | 67M | 9.4% |
| Kinetix | 42 | 1.1M | 62.8M | 8.9% |
| CityLearn | 20 | 146.4k | 106.7M | 15.0% |
| ControlGym | 9 | 230k | 100M | 14.1% |
| HumEnv | 12 | 120k | 36M | 5.1% |
| MuJoCo | 11 | 665.1k | 100M | 14.1% |
| SinerGym | 22 | 42.3k | 30.9M | 4.4% |
| Meta-Drive | 17 | 271.9k | 102.6M | 14.4% |
| **Overall** | **209** | **3.8M** | **709.7M** | **100%** |

## Dataset Structure

The dataset is stored as a collection of **`.h5` files**, where each file corresponds to a single trajectory from a specific environment.

Each trajectory file is split into groups of **10,000 steps**; the final group may contain fewer.

Every group contains the following fields:

- **`proprio_observation`**: sequence of observations (`np.float32`)
- **`action`**: sequence of actions executed in the environment (`np.float32`)
- **`reward`**: sequence of rewards received after each action (`np.float32`)
- **`step_num`**: step indices within the episode (`np.int32`)
- **`demonstrator_action`**: sequence of demonstrator actions corresponding to the observations

This layout is designed for efficient storage and loading of long trajectories while preserving both collected behavior and demonstrator supervision.
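As a rough illustration of this layout, the sketch below builds a tiny synthetic trajectory file with the same per-group fields and then concatenates the groups back into full arrays. The group-naming scheme (`"0"`, `"1"`, …), the observation/action dimensions, and the chunk sizes here are assumptions for the example, not part of the dataset specification.

```python
# Minimal sketch, assuming groups are named "0", "1", ... in chunk order
# (the actual group-naming scheme is not specified in this card).
import numpy as np
import h5py

def read_trajectory(path):
    """Concatenate the per-group arrays of one trajectory file."""
    fields = ["proprio_observation", "action", "reward", "step_num"]
    with h5py.File(path, "r") as f:
        groups = sorted(f.keys(), key=int)  # restore chunk order
        return {name: np.concatenate([f[g][name][:] for g in groups])
                for name in fields}

# Build a tiny synthetic file with the same layout for illustration;
# shapes (obs dim 4, action dim 2) are placeholders, not real dims.
with h5py.File("demo_traj.h5", "w") as f:
    for i, n in enumerate([10_000, 2_500]):  # final group is shorter
        g = f.create_group(str(i))
        g.create_dataset("proprio_observation",
                         data=np.zeros((n, 4), dtype=np.float32))
        g.create_dataset("action", data=np.zeros((n, 2), dtype=np.float32))
        g.create_dataset("reward", data=np.zeros(n, dtype=np.float32))
        g.create_dataset("step_num", data=np.arange(n, dtype=np.int32))

traj = read_trajectory("demo_traj.h5")
print(traj["action"].shape)  # all steps from both groups, stitched together
```

Concatenating per group keeps memory bounded when streaming long trajectories; a training loader would typically iterate over groups lazily instead of materializing the full arrays.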