---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license: mit
task_categories:
- robotics
- reinforcement-learning
- video-to-video
task_ids:
- grasping
- task-planning
tags:
- world-model
- simulator
- friction
- contact-dynamics
- physics-simulation
- dynamics-prediction
pretty_name: DreamerBench
size_categories:
- n<1K
---

# Dataset Card for DreamerBench

## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information

## Dataset Description
- Homepage: [TODO: Link to project website or repository]
- Repository: https://github.com/uwsbel/ChronoDreamer
- Paper: [TODO: Link to arXiv paper if available]
- Point of Contact: Json Zhou, zzhou292@wisc.edu

### Dataset Summary
DreamerBench is a large-scale dataset designed for training and evaluating World Models in robotics applications. Unlike standard visual-only datasets, DreamerBench explicitly focuses on physical interaction dynamics, specifically friction and contact data.

The dataset is generated using Project Chrono (https://projectchrono.org/), simulating diverse robotic interaction scenarios where precise modeling of physical forces is critical. It includes pre-computed encodings to facilitate efficient training of latent dynamics models.

Key features:
- Physical Fidelity: Detailed ground-truth annotations for the coefficient of friction, contact forces, and slip.
- Multi-Modal: Contains visual observations (RGB/depth), proprioceptive states, and explicit physics parameters.
- World Model Ready: Structured to support next-step prediction and imagined-rollout training (Dreamer-style architectures); a minimal loading sketch follows this list.
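
For quick orientation, below is a minimal loading sketch using the Hugging Face `datasets` library. The repository id, split name, and field names are assumptions based on the example instance documented under Data Instances, not confirmed identifiers.

```python
# Minimal loading sketch. The repo id "uwsbel/DreamerBench" and the "train"
# split are assumptions; check the dataset page for the actual identifiers.
from datasets import load_dataset

ds = load_dataset("uwsbel/DreamerBench", split="train")  # hypothetical repo id
episode = ds[0]

# Field names follow the example instance shown in the Data Instances section.
print(episode["episode_id"], episode["steps"])
```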

### Supported Tasks and Leaderboards
- World Modeling / Dynamics Learning: Training models to predict the next state ($s_{t+1}$) given the current state ($s_t$) and action ($a_t$); see the sketch after this list.
- Offline Reinforcement Learning: Learning policies from the provided simulated trajectories without active environment interaction.
- Sim-to-Real Adaptation: Using the varied friction/contact parameters to train robust policies that generalize to real-world physics.
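
To make the world-modeling objective concrete, here is a minimal one-step dynamics-learning sketch in PyTorch. It is illustrative only: the network architecture, dimensions, and batch data are assumptions, not the dataset's reference training code.

```python
# Illustrative one-step dynamics model: predict s_{t+1} from (s_t, a_t).
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s_t: torch.Tensor, a_t: torch.Tensor) -> torch.Tensor:
        # Concatenate state and action, regress the next state.
        return self.net(torch.cat([s_t, a_t], dim=-1))

model = DynamicsModel(state_dim=24, action_dim=7)  # dims are placeholders
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a batch of (s_t, a_t, s_{t+1}) transitions
# (random tensors stand in for real proprioceptive states and actions).
s_t = torch.randn(32, 24)
a_t = torch.randn(32, 7)
s_next = torch.randn(32, 24)

loss = nn.functional.mse_loss(model(s_t, a_t), s_next)
opt.zero_grad()
loss.backward()
opt.step()
```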

## Dataset Structure

### Data Instances
Each instance in the dataset represents a trajectory or episode of a robot interacting with the environment.

Example structure (JSON/Parquet format):

```json
{
  "episode_id": "traj_001",
  "steps": 1000,
  "observations": {
    "rgb": [Array of (1000, 64, 64, 3) images],
    "depth": [Array of (1000, 64, 64, 1) images],
    "proprioception": [Array of joint angles/velocities]
  },
  "actions": [Array of control inputs],
  "rewards": [Array of float scalars],
  "physics_data": {
    "contact_forces": [Array of 3D force vectors],
    "friction_coefficient": 0.8,
    "contact_detected": [Binary array]
  },
  "encoding": [Pre-computed latent vectors, e.g., VAE or RSSM states]
}
```
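
The physics annotations can serve directly as prediction targets or as conditioning inputs. The sketch below assumes an episode dictionary shaped like the example above and summarizes contact events and force magnitudes; the helper `contact_summary` is hypothetical, not part of any released tooling.

```python
# Sketch of consuming the physics annotations from one episode. The dict
# layout mirrors the example instance above; the loading step is assumed.
import numpy as np

def contact_summary(episode: dict) -> dict:
    """Summarize contact events and force magnitudes for one trajectory."""
    physics = episode["physics_data"]
    forces = np.asarray(physics["contact_forces"])           # (T, 3) vectors
    contact = np.asarray(physics["contact_detected"], bool)  # (T,) mask
    return {
        "friction_coefficient": physics["friction_coefficient"],
        "contact_steps": int(contact.sum()),
        "mean_force_norm": float(np.linalg.norm(forces, axis=-1)[contact].mean())
        if contact.any() else 0.0,
    }

# Toy stand-in for a loaded instance, matching the documented schema.
episode = {
    "physics_data": {
        "contact_forces": np.random.randn(1000, 3),
        "friction_coefficient": 0.8,
        "contact_detected": np.random.rand(1000) > 0.5,
    }
}
print(contact_summary(episode))
```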

### Visual Data Samples

Examples of 3 scenarios across 4 different camera angles (256x256).

| Scenario | Ego | Side 1 | Side 2 | Contact Splat |
|---|---|---|---|---|
| flashlight-box | | | | |
| flashlight-coca | | | | |
| waterbottle-coca | | | | |