Human-Generated Demonstrations for Safe Reinforcement Learning
Code: AILabDsUnipi/SafeQIL
Dataset Description
This dataset consists of human-generated demonstrations collected across four challenging constrained environments from the Safety-Gymnasium benchmark (SafetyPointGoal1-v0, SafetyCarPush2-v0, SafetyPointCircle2-v0, and SafetyCarButton1-v0). It is designed to train agents with SafeQIL (Safe Q Inverse Constrained Reinforcement Learning) to maximize the likelihood of safe trajectories in Constrained Markov Decision Processes (CMDPs) where the constraints are unknown and costs are unobservable.
For every step in a demonstrated trajectory, we record the full transition dynamics. Each transition is captured as a tuple containing:
- `vector_obs`: The proprioceptive/kinematic state of the agent.
- `vision_obs`: The pixel-based visual observation.
- `action`: The continuous control action taken by the human demonstrator.
- `reward`: The standard task reward received.
- `done`: The boolean flag indicating episode termination.
To ensure efficient data loading and facilitate qualitative analysis, the data is distributed across three file types:
- `.h5` (HDF5): Stores the core transition tuples.
- `.mp4`: Provides rendered video rollouts of the expert's behavior for visual inspection.
- `.txt`: Contains summary statistics and metadata for each dataset split.
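The per-step record described above can be modeled as a small typed container. A minimal sketch follows; the field names come from this card, while the array shapes are purely illustrative assumptions (the real dimensions depend on the environment):

```python
from typing import NamedTuple
import numpy as np

class Transition(NamedTuple):
    """One step of a demonstrated trajectory (field names from this card)."""
    vector_obs: np.ndarray  # proprioceptive/kinematic state
    vision_obs: np.ndarray  # pixel-based visual observation
    action: np.ndarray      # continuous control action
    reward: float           # standard task reward
    done: bool              # episode-termination flag

# Illustrative shapes only -- not the actual dimensions of this dataset.
step = Transition(
    vector_obs=np.zeros(60, dtype=np.float32),
    vision_obs=np.zeros((64, 64, 3), dtype=np.uint8),
    action=np.zeros(2, dtype=np.float32),
    reward=0.0,
    done=False,
)
```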
Dataset Structure
The dataset is organized hierarchically by environment and dataset size.
```
/
├── README.md                <- This dataset card
├── SafetyPointGoal1-v0/
│   ├── x1/
│   │   ├── stats.txt        <- Dataset statistics
│   │   ├── 0.h5             <- Human-generated trajectory data
│   │   ├── 0.mp4            <- Rendered trajectory
│   │   ├── 1.h5
│   │   ├── 1.mp4
│   │   ├── 2.h5
│   │   ├── 2.mp4
│   │   ├── ...
│   │   ├── 39.h5
│   │   └── 39.mp4
│   ├── x2/
│   │   ├── stats.txt
│   │   ├── 0.h5
│   │   ├── ...
│   │   └── 79.h5
│   ├── x4/
│   │   ├── stats.txt
│   │   ├── 0.h5
│   │   ├── ...
│   │   └── 159.h5
│   └── x8/
│       ├── stats.txt
│       ├── 0.h5
│       ├── ...
│       └── 319.h5
├── SafetyCarPush2-v0/
│   ├── x1/
│   │   ...
│   └── x8/
└── ...
```
Note that SafetyCarButton1-v0 has only an x1 split, and that only the x1 splits include video examples (`.mp4` files).
How to Use This Dataset
While the dataset is a manageable ~50GB, we recommend using the huggingface_hub Python library to selectively download subsets of the data (e.g., a specific environment or size multiplier) to save bandwidth.
```python
from huggingface_hub import snapshot_download

# Example: download only the 'x1' split for SafetyPointGoal1-v0
snapshot_download(
    repo_id="george22294/SafeQIL-dataset",  # Replace with your actual repo ID
    repo_type="dataset",
    allow_patterns="SafetyPointGoal1-v0/x1/*",
    local_dir="./demonstrations/SafetyPointGoal1-v0/x1/",
)
```
Loading HDF5 Files
You can load the human-generated tuples directly using `h5py`. Note that the data inside each file is nested under a group named after the episode (e.g., for the file `0.h5` the group name is `episode_0`, for `1.h5` it is `episode_1`, and so on).
You can dynamically grab this group name in Python to load the data:
```python
import h5py

file_path = './local_data/SafetyPointGoal1-v0/x1/0.h5'

with h5py.File(file_path, 'r') as f:
    # Each file holds a single top-level group named after its episode,
    # so we can resolve the name dynamically instead of hardcoding it.
    episode = next(iter(f.keys()))

    # Load the arrays
    vector_obs = f[episode]['vector_obs'][:]
    vision_obs = f[episode]['vision_obs'][:]
    actions = f[episode]['actions'][:]
    reward = f[episode]['reward'][:]
    done = f[episode]['done'][:]
```
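The same pattern generalizes to any file in the dataset. A minimal sketch of a loader that resolves the episode group dynamically and returns every array in the file (the `load_episode` helper is ours, not part of the released code):

```python
import h5py

def load_episode(file_path):
    """Load all arrays from one .h5 file, resolving the episode group dynamically."""
    with h5py.File(file_path, 'r') as f:
        # Each file contains exactly one top-level group, e.g. 'episode_0'.
        episode = next(iter(f.keys()))
        return {name: f[episode][name][:] for name in f[episode].keys()}

# Example usage (path assumes the download layout shown earlier):
# data = load_episode('./local_data/SafetyPointGoal1-v0/x1/0.h5')
```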
Citation
```bibtex
@misc{papadopoulos2026learningmaintainsafetyexpert,
  title={Learning to maintain safety through expert demonstrations in settings with unknown constraints: A Q-learning perspective},
  author={George Papadopoulos and George A. Vouros},
  year={2026},
  eprint={2602.23816},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2602.23816},
}
```