---
license: mit
viewer: false
task_categories:
- reinforcement-learning
tags:
- inverse-constrained-reinforcement-learning
- safe-rl
- q-learning
- offline-rl
- demonstrations
size_categories:
- 100K<n<1M
---

# Human-Generated Demonstrations for Safe Reinforcement Learning

**Paper:** [Learning to maintain safety through expert demonstrations in settings with unknown constraints: A Q-learning perspective](https://arxiv.org/abs/2602.23816)

**Code:** [AILabDsUnipi/SafeQIL](https://github.com/AILabDsUnipi/SafeQIL)


## Dataset Description
This dataset consists of human-generated demonstrations collected across four challenging constrained environments from the Safety-Gymnasium benchmark (`SafetyPointGoal1-v0`, `SafetyCarPush2-v0`, `SafetyPointCircle2-v0`, and `SafetyCarButton1-v0`). It is designed to train agents with **SafeQIL** (Safe Q Inverse Constrained Reinforcement Learning) to maximize the likelihood of safe trajectories in Constrained Markov Decision Processes (CMDPs) where the constraints are unknown and costs are unobservable.

For every step in a demonstrated trajectory, we record the full transition dynamics. Each transition is captured as a tuple containing:
* `vector_obs`: The proprioceptive/kinematic state of the agent.
* `vision_obs`: The pixel-based visual observation.
* `actions`: The continuous control action taken by the human demonstrator.
* `reward`: The standard task reward received.
* `done`: The boolean flag indicating episode termination.

To ensure efficient data loading and facilitate qualitative analysis, the data is distributed across three file types:
* **`.h5` (HDF5):** Stores the core transition tuples.
* **`.mp4`:** Provides rendered video rollouts of the expert's behavior for visual inspection.
* **`.txt`:** Contains summary statistics and metadata for each dataset split.
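For intuition, the write side of this schema can be sketched with `h5py`. The group and field names below match the dataset, but the array shapes and dtypes are illustrative assumptions, not the actual dimensions used in the files:

```python
import h5py
import numpy as np

# Illustrative only: the group and field names below match the dataset,
# but the shapes and dtypes are assumptions made for this sketch.
T = 5  # number of transitions in the episode

with h5py.File('episode_example.h5', 'w') as f:
    grp = f.create_group('episode_0')
    grp.create_dataset('vector_obs', data=np.zeros((T, 60), dtype=np.float32))
    grp.create_dataset('vision_obs', data=np.zeros((T, 64, 64, 3), dtype=np.uint8))
    grp.create_dataset('actions', data=np.zeros((T, 2), dtype=np.float32))
    grp.create_dataset('reward', data=np.zeros(T, dtype=np.float32))
    grp.create_dataset('done', data=np.zeros(T, dtype=bool))
```

Storing each episode as one group keeps all five arrays for a trajectory addressable from a single file handle.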


## Dataset Structure
The dataset is organized hierarchically by environment and dataset size.

```text
/
├── README.md                 <- This dataset card
├── SafetyPointGoal1-v0/
│   ├── x1/
│   │   ├── stats.txt         <- Dataset statistics
│   │   ├── 0.h5              <- Human generated trajectory data
│   │   ├── 0.mp4             <- Rendered trajectory
│   │   ├── 1.h5
│   │   ├── 1.mp4
│   │   ├── 2.h5
│   │   ├── 2.mp4
│   │   ...
│   │   ├── 39.h5
│   │   └── 39.mp4
│   ├── x2/
│   │   ├── stats.txt
│   │   ├── 0.h5
│   │   ...
│   │   └── 79.h5
│   ├── x4/
│   │   ├── stats.txt
│   │   ├── 0.h5
│   │   ...
│   │   └── 159.h5
│   ├── x8/
│   │   ├── stats.txt
│   │   ├── 0.h5
│   │   ...
│   │   └── 319.h5
├── SafetyCarPush2-v0/
│   ├── x1/
│   │   ...
│   └── x8/
├── ...
```

Note that `SafetyCarButton1-v0` has only an `x1` dataset, and that only the `x1` datasets contain video examples.
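Once a local copy is in place, a layout like this is easy to walk programmatically. The sketch below builds a miniature stand-in for the tree (no real data needed) and counts the episode files per environment and size multiplier:

```python
import tempfile
from pathlib import Path

# Build a miniature stand-in for the directory layout above
root = Path(tempfile.mkdtemp())
for env in ['SafetyPointGoal1-v0', 'SafetyCarPush2-v0']:
    for mult in ['x1', 'x2']:
        split = root / env / mult
        split.mkdir(parents=True)
        (split / 'stats.txt').touch()
        for i in range(3):
            (split / f'{i}.h5').touch()

# Count episode files per environment and size multiplier
for split in sorted(root.glob('*/x*')):
    episodes = sorted(split.glob('*.h5'), key=lambda p: int(p.stem))
    print(split.relative_to(root), '->', len(episodes), 'episodes')
```

Sorting by `int(p.stem)` keeps episodes in numeric order (`0.h5`, `1.h5`, ..., `39.h5`) rather than lexicographic order.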

## How to Use This Dataset

The full dataset is roughly 50 GB, so we recommend using the `huggingface_hub` Python library to selectively download subsets of the data (e.g., a specific environment or size multiplier) and save bandwidth.

```python
from huggingface_hub import snapshot_download

# Example: Download only the 'x1' dataset for SafetyPointGoal1-v0
snapshot_download(
    repo_id="george22294/SafeQIL-dataset",  # Replace with your actual repo ID
    repo_type="dataset",
    allow_patterns="SafetyPointGoal1-v0/x1/*",
    local_dir="./demonstrations",  # files keep their repo paths under this directory
)
```
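`allow_patterns` is matched against repo-relative file paths with shell-style wildcards (the same semantics as Python's `fnmatch`), so you can sanity-check a pattern locally before spending bandwidth on a download. The example paths below are taken from the layout in this card:

```python
from fnmatch import fnmatch

pattern = 'SafetyPointGoal1-v0/x1/*'
candidate_paths = [
    'SafetyPointGoal1-v0/x1/stats.txt',
    'SafetyPointGoal1-v0/x1/0.h5',
    'SafetyPointGoal1-v0/x1/0.mp4',
    'SafetyPointGoal1-v0/x2/0.h5',  # different multiplier: excluded
    'SafetyCarPush2-v0/x1/0.h5',    # different environment: excluded
]

selected = [p for p in candidate_paths if fnmatch(p, pattern)]
print(selected)
# → ['SafetyPointGoal1-v0/x1/stats.txt', 'SafetyPointGoal1-v0/x1/0.h5', 'SafetyPointGoal1-v0/x1/0.mp4']
```

Note that `*` also matches across `/`, so a broader pattern like `SafetyPointGoal1-v0/*` would select every size multiplier under that environment.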

### Loading HDF5 Files

You can load the human-generated tuples directly using `h5py`. Note that the data inside each file is nested under a group named after the episode (e.g., for the file `0.h5` the group name is `episode_0`, for the file `1.h5` it is `episode_1`, etc.).

You can dynamically grab this group name in Python to load the data:

```python
import h5py

file_path = './local_data/SafetyPointGoal1-v0/x1/0.h5'

with h5py.File(file_path, 'r') as f:
    # Grab the episode group name dynamically (e.g. 'episode_0')
    episode = f[list(f.keys())[0]]

    # Load the arrays
    vector_obs = episode['vector_obs'][:]
    vision_obs = episode['vision_obs'][:]
    actions = episode['actions'][:]
    reward = episode['reward'][:]
    done = episode['done'][:]
```
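The same pattern scales to a whole split. The sketch below first writes two tiny synthetic episode files using the dataset's field names (the values and shapes are made up for illustration), then iterates over them to compute per-episode returns:

```python
import glob

import h5py
import numpy as np

# Write two tiny synthetic episodes with the dataset's field names
# (values and shapes here are made up for illustration)
for i, rewards in enumerate([[1.0, 2.0, 3.0], [0.5, 0.5]]):
    with h5py.File(f'demo_{i}.h5', 'w') as f:
        grp = f.create_group(f'episode_{i}')
        T = len(rewards)
        grp.create_dataset('vector_obs', data=np.zeros((T, 4), dtype=np.float32))
        grp.create_dataset('actions', data=np.zeros((T, 2), dtype=np.float32))
        grp.create_dataset('reward', data=np.asarray(rewards, dtype=np.float32))
        grp.create_dataset('done', data=np.zeros(T, dtype=bool))

# Iterate over every episode file and accumulate per-episode returns
returns = {}
for path in sorted(glob.glob('demo_*.h5')):
    with h5py.File(path, 'r') as f:
        key = list(f.keys())[0]  # e.g. 'episode_0'
        returns[key] = float(f[key]['reward'][:].sum())

print(returns)  # {'episode_0': 6.0, 'episode_1': 1.0}
```

Reading only the `reward` dataset per file avoids loading the (much larger) `vision_obs` arrays when you just need summary statistics.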

## Citation

```bibtex
@misc{papadopoulos2026learningmaintainsafetyexpert,
      title={Learning to maintain safety through expert demonstrations in settings with unknown constraints: A Q-learning perspective}, 
      author={George Papadopoulos and George A. Vouros},
      year={2026},
      eprint={2602.23816},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.23816}, 
}
```