---
license: cc-by-4.0
task_categories:
- robotics
language:
- en
tags:
- robotics
- manipulation
- lerobot
- humanoid
- reachy2
- pick-and-place
- simulation
- mujoco
- gr00t
- nvidia
- physical-ai
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/**/*.parquet"
---

# NOVA Dataset - Reachy 2 Pick-and-Place Demonstrations

**LeRobot v2.1** · **100 Episodes** · **Reachy 2**

An expert-demonstration dataset for training vision-language-action models on [Pollen Robotics' Reachy 2](https://www.pollen-robotics.com/reachy/) humanoid robot, collected in MuJoCo simulation with domain randomization.

## Dataset Description

This dataset contains **100 episodes** of pick-and-place manipulation tasks, designed for fine-tuning NVIDIA GR00T N1.6 or other imitation learning policies.

### Key Features

| Feature | Description |
|---------|-------------|
| **Format** | LeRobot v2.1 (parquet + H264 video) |
| **Episodes** | 100 |
| **Task Variations** | 32 (4 objects × 8 colors) |
| **Camera Views** | 2 (front_cam, workspace_cam) |
| **Resolution** | 640×480 @ 15 FPS |
| **Domain Randomization** | Position, lighting, camera jitter |

### Task Description

The robot performs pick-and-place tasks with natural language instructions:

- "Pick up the red cube and place it in the box"
- "Pick up the blue cylinder and place it in the box"
- "Pick up the green capsule and place it in the box"

## Dataset Structure

```
NOVA/
├── meta/
│   ├── info.json        # Dataset metadata
│   ├── stats.json       # Normalization statistics
│   ├── tasks.jsonl      # Task descriptions
│   └── episodes.jsonl   # Episode information
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
└── videos/
    └── chunk-000/
        ├── observation.images.front_cam/
        │   ├── episode_000000.mp4
        │   └── ...
        └── observation.images.workspace_cam/
            ├── episode_000000.mp4
            └── ...
```

## Data Fields

### State (Proprioception)

| Field | Dimension | Description |
|-------|-----------|-------------|
| `observation.state` | 7 | Joint positions (arm) |

**Joint Names:**

1. `shoulder_pitch` (-180° to 90°)
2. `shoulder_roll` (-180° to 10°)
3. `elbow_yaw` (-90° to 90°)
4. `elbow_pitch` (-125° to 0°)
5. `wrist_roll` (-100° to 100°)
6. `wrist_pitch` (-45° to 45°)
7.
`wrist_yaw` (-30° to 30°)

### Action

| Field | Dimension | Description |
|-------|-----------|-------------|
| `action` | 8 | Joint positions (7 arm + 1 gripper) |

**Gripper:** 0 = closed, 1 = open

### Video

| Camera | Resolution | FOV | Format |
|--------|------------|-----|--------|
| `front_cam` | 640×480 | 108° | H264 MP4 |
| `workspace_cam` | 640×480 | 70° | H264 MP4 |

### Language

| Field | Description |
|-------|-------------|
| `annotation.human.task_description` | Natural language task instruction |

## Objects and Colors

### Objects (4 types)

- **Cube** (4 cm)
- **Rectangular box**
- **Cylinder**
- **Capsule**

### Colors (8 variations)

- Red, Green, Blue, Yellow
- Cyan, Magenta, Orange, Purple

## Domain Randomization

| Parameter | Range |
|-----------|-------|
| Object position | Workspace-aware random |
| Lighting intensity | 0.5 - 1.0 |
| Camera jitter | ±2° |
| Object type | Random selection |
| Object color | Random selection |

## Usage

### Loading with LeRobot

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("ganatrask/NOVA")

# Access a single frame
frame = dataset[0]
print(frame.keys())
# ['observation.state', 'observation.images.front_cam', 'action', ...]
```

### Loading with HuggingFace Datasets

```python
from datasets import load_dataset

dataset = load_dataset("ganatrask/NOVA")
```

### Training GR00T

```bash
python -m gr00t.train \
    --dataset_repo_id ganatrask/NOVA \
    --embodiment_tag reachy2 \
    --video_backend decord \
    --num_gpus 2 \
    --batch_size 64 \
    --max_steps 30000
```

## Collection Details

### Environment

- **Simulator**: MuJoCo via [reachy2_mujoco](https://github.com/pollen-robotics/reachy2_mujoco)
- **Robot**: Reachy 2 humanoid (two 7-DOF arms, 14 DOF total; only the right arm is used)
- **Control Frequency**: 15 Hz

### Collection Process

1. Random object and color selection
2. Random placement within workspace
3. Scripted expert policy execution
4. Recording of observations, states, and actions
5.
Automatic episode segmentation

### Collection Command

```bash
python scripts/data_collector.py \
    --episodes 100 \
    --output reachy2_dataset \
    --arm right \
    --randomize-object \
    --randomize-color \
    --cameras front_cam workspace_cam
```

## Statistics

| Statistic | Value |
|-----------|-------|
| Total episodes | 100 |
| Avg. episode length | ~150 steps |
| Collection rate | ~2 episodes/min |
| Total size | ~2 GB |

## Limitations

- **Simulation only**: Data collected in MuJoCo, not on a real robot
- **Single arm**: Right-arm manipulation only
- **Fixed task type**: Pick-and-place only
- **Limited objects**: 4 primitive shapes

## License

This dataset is released under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/).

## Citation

```bibtex
@misc{nova_dataset_2025,
  title={NOVA Dataset: Reachy 2 Pick-and-Place Demonstrations},
  author={ganatrask},
  year={2025},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/ganatrask/NOVA}
}
```

## Acknowledgments

- **[Pollen Robotics](https://www.pollen-robotics.com/)** - Reachy 2 robot and MuJoCo simulation
- **[HuggingFace](https://huggingface.co/)** - LeRobot framework and dataset hosting
- **[DeepMind](https://mujoco.org/)** - MuJoCo physics engine

## Related Resources

- **Model**: [ganatrask/NOVA](https://huggingface.co/ganatrask/NOVA) - Fine-tuned GR00T model
- **Code**: [ganatrask/NOVA](https://github.com/ganatrask/NOVA) - Training and inference code
- **Base Model**: [nvidia/GR00T-N1.6-3B](https://huggingface.co/nvidia/GR00T-N1.6-3B)
- **LeRobot**: [huggingface/lerobot](https://github.com/huggingface/lerobot)
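## Appendix: Worked Examples

As a cross-check on the 32 task variations listed in the Key Features table (4 object types × 8 colors), a short sketch can enumerate the instruction strings. The template follows the example instructions in the Task Description section; the exact wording used during collection may differ.

```python
from itertools import product

# Object types and colors as documented in this card
OBJECTS = ["cube", "box", "cylinder", "capsule"]
COLORS = ["red", "green", "blue", "yellow",
          "cyan", "magenta", "orange", "purple"]

def task_instructions() -> list[str]:
    """Enumerate every (color, object) instruction following the card's template."""
    return [
        f"Pick up the {color} {obj} and place it in the box"
        for obj, color in product(OBJECTS, COLORS)
    ]

instructions = task_instructions()
print(len(instructions))  # 32, matching the Key Features table
```

Note that each episode stores exactly one such string in `annotation.human.task_description`.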
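The 8-dimensional action layout documented under Data Fields (7 arm joint positions plus a gripper value, 0 = closed, 1 = open) can be sketched as below. The joint limits come from the Joint Names list; the helper function and its name are illustrative, not part of the dataset or LeRobot API.

```python
# Joint limits in degrees, as listed under "Joint Names" in this card
JOINT_LIMITS = {
    "shoulder_pitch": (-180.0, 90.0),
    "shoulder_roll": (-180.0, 10.0),
    "elbow_yaw": (-90.0, 90.0),
    "elbow_pitch": (-125.0, 0.0),
    "wrist_roll": (-100.0, 100.0),
    "wrist_pitch": (-45.0, 45.0),
    "wrist_yaw": (-30.0, 30.0),
}

GRIPPER_CLOSED, GRIPPER_OPEN = 0.0, 1.0

def make_action(joint_targets: dict, gripper: float) -> list:
    """Build an 8-dim action vector: 7 arm joints (clamped to limits) + gripper."""
    action = []
    for name, (lo, hi) in JOINT_LIMITS.items():
        # Unspecified joints default to 0°, which lies inside every limit range here
        target = joint_targets.get(name, 0.0)
        action.append(min(max(target, lo), hi))
    action.append(gripper)
    return action

# Example: command the elbow down and open the gripper
action = make_action({"elbow_pitch": -60.0}, GRIPPER_OPEN)
print(len(action))  # 8, matching the documented action dimension
```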