G1 Locomanipulation Dataset v1
Dataset Description:
The G1 Locomanipulation Dataset v1 provides synthetic demonstrations of the Unitree G1 humanoid robot performing loco-manipulation tasks — picking up an object at one location, navigating around obstacles to a second location, and placing it there. Generated using NVIDIA Isaac Lab's Synthetic Data Generation (SDG) pipeline, the dataset extends teleoperated static manipulation recordings with automatically computed navigation segments using occupancy-map-based path planning and PI velocity control. This dataset is an example artifact for the Isaac Lab locomanipulation SDG pipeline — users who have run the data generation pipeline can reproduce it, but it is provided here so that step can be skipped when working through the pipeline examples. This dataset is for demonstration purposes and not for production usage.
Dataset Owner(s):
NVIDIA Corporation
Dataset Creation Date:
04/2026
License/Terms of Use:
This dataset is governed by the Creative Commons Attribution 4.0 International License (CC-BY-4.0).
Intended Usage:
This dataset is an example artifact for users working through the Isaac Lab locomanipulation SDG pipeline. It is provided so users can skip the data generation step and proceed directly to GR00T N1.5 finetuning or rollout examples. Users who wish to generate their own dataset can do so by running the locomanipulation SDG pipeline with the G1LocomanipulationSDGDataConfig data config. This dataset is not intended for training production models or for deployment on physical robots.
Dataset Characterization:
Data Collection Method:
- Synthetic — Generated via NVIDIA Isaac Lab Synthetic Data Generation (SDG) pipeline. The pipeline takes existing teleoperated static manipulation recordings (where the object does not move during collection) and automatically extends them with navigation by replaying manipulation motions while the base locomotes between fixtures. Navigation trajectories are computed via occupancy-map-based path planning with PI velocity control. Only successful episodes are exported.
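The navigation segments described above are driven by PI velocity control toward planned waypoints. As a rough illustration of that idea (not the actual Isaac Lab implementation; the gains, limits, and 1-D setup are assumptions), a minimal PI controller stepping at the dataset's 200 Hz rate might look like:

```python
class PIController:
    """Minimal PI controller sketch: command = kp * error + ki * integral."""

    def __init__(self, kp: float, ki: float, dt: float, limit: float):
        self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
        self.integral = 0.0

    def step(self, error: float) -> float:
        # Accumulate the integral term and clip the command to the velocity limit.
        self.integral += error * self.dt
        cmd = self.kp * error + self.ki * self.integral
        return max(-self.limit, min(self.limit, cmd))


# Drive a 1-D base position toward a waypoint at 200 Hz (dt = 5 ms).
# Gains and the 1 m/s velocity limit are illustrative, not from the dataset.
ctrl = PIController(kp=2.0, ki=0.1, dt=1 / 200, limit=1.0)
pos, target = 0.0, 1.0
for _ in range(2000):  # 10 s of simulated time
    vel = ctrl.step(target - pos)
    pos += vel * ctrl.dt
```

In the real pipeline this command would be a 3-D base velocity (vx, vy, yaw_rate) tracking an occupancy-map path rather than a scalar position error.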
Labeling Method:
- Automatic/Sensors — Action and state labels are derived directly from simulation state at each timestep (200 Hz physics simulation). No human annotation was performed.
Dataset Format:
Video and numerical state/action arrays. Stored in HDF5 format, compatible with the LeRobot data format for GR00T N1.5 training.
| Key | Description | Shape |
|---|---|---|
| `video.ego_view` | RGB from torso-mounted Intel D435 camera | 160×256 per frame |
| `state.left_hand_pose` | Left wrist pose (xyz + xyzw quat) | 7D |
| `state.right_hand_pose` | Right wrist pose (xyz + xyzw quat) | 7D |
| `state.left_hand_joint_positions` | Left finger joint angles | 7D |
| `state.right_hand_joint_positions` | Right finger joint angles | 7D |
| `state.object_pose` | Manipulated object pose | 7D |
| `state.goal_pose` | Target placement pose | 7D |
| `state.end_fixture_pose` | Drop-off table pose | 7D |
| `action.left_hand_pose` | Left end-effector target pose | 7D |
| `action.right_hand_pose` | Right end-effector target pose | 7D |
| `action.left_hand_joint_positions` | Left finger joint targets | 7D |
| `action.right_hand_joint_positions` | Right finger joint targets | 7D |
| `action.base_velocity` | Base navigation command (vx, vy, yaw_rate) | 3D |
| `action.base_height` | Base height target | 1D |
Total action dimension: 32D (28D manipulation + 4D locomotion)
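The 32D total follows directly from the per-key dimensions in the table. A small sketch verifying the arithmetic (key names are taken from the table; the split into manipulation vs. locomotion keys is based on the `base_*` prefix):

```python
# Per-key action dimensions as listed in the table above.
action_dims = {
    "action.left_hand_pose": 7,
    "action.right_hand_pose": 7,
    "action.left_hand_joint_positions": 7,
    "action.right_hand_joint_positions": 7,
    "action.base_velocity": 3,   # vx, vy, yaw_rate
    "action.base_height": 1,
}

# Locomotion keys are those targeting the base; the rest are manipulation.
locomotion = sum(v for k, v in action_dims.items() if ".base_" in k)
manipulation = sum(v for k, v in action_dims.items() if ".base_" not in k)

print(manipulation, locomotion, manipulation + locomotion)  # 28 4 32
```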
Dataset Quantification:
- Record Count: 100K–1M data points (timesteps at 200 Hz across all episodes; ~10,000 timesteps per ~50 s episode)
- Feature Count: 14 modalities per timestep (1 video stream + 7 state inputs + 6 action outputs)
- Total Data Storage: ~478 MB (compressed)
Reference(s):
Key Considerations:
This dataset is provided solely as a demonstration artifact for the Isaac Lab locomanipulation SDG pipeline examples and is not intended for training production models or for use in physical robot deployment. It contains only synthetic simulation data and does not include personal data, biometric information, or copyrighted content.
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer teams to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report quality, risk, security vulnerabilities or NVIDIA AI Concerns here.