
AgiBotWorld-Alpha-CtrlWorld-327 Dataset

⚠️ Important: This dataset is processed from task_327 in AgiBotWorld-Alpha. For more details, please refer to the Acknowledgements section.

🚀 Get Started

Download the Dataset

To download the full dataset, you can use the following commands. If you encounter any issues, please refer to the official Hugging Face documentation.

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# If prompted for a password, use a Hugging Face access token
# (read access is sufficient for cloning a public dataset).
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/pyromind/AgiBotWorld-Alpha-CtrlWorld-327

# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/pyromind/AgiBotWorld-Alpha-CtrlWorld-327

The data has already been pre-processed and can be used directly with our inference code.

📁 Data Structure

The dataset is organized as follows:

task_327/
├── annotation/
│   ├── train/          # Training episode annotations (207 episodes)
│   │   ├── 0.json
│   │   ├── 1.json
│   │   └── ...
│   └── val/            # Validation episode annotations (2 episodes)
│       ├── 99.json
│       └── 199.json
├── latent_videos/
│   ├── train/          # Pre-encoded latent video representations
│   │   ├── 0/
│   │   │   ├── 0.pt
│   │   │   ├── 1.pt
│   │   │   └── 2.pt
│   │   └── ...
│   └── val/
│       └── ...
├── videos/
│   ├── train/          # Original video files
│   └── val/
└── stat.json           # Dataset statistics
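Given this layout, the expected file locations for a single episode can be built programmatically. The helper below is a hypothetical sketch (not part of the dataset's tooling), and the number of video segments per episode varies, so treat the segment count as an input:

```python
from pathlib import Path

def episode_paths(root: Path, split: str, episode_id: int, num_segments: int):
    """Build the expected file locations for one episode, following the
    directory layout above. Illustrative only; verify against the actual
    files on disk."""
    return {
        "annotation": root / "annotation" / split / f"{episode_id}.json",
        "videos": [root / "videos" / split / str(episode_id) / f"{i}.mp4"
                   for i in range(num_segments)],
        "latents": [root / "latent_videos" / split / str(episode_id) / f"{i}.pt"
                    for i in range(num_segments)],
    }

paths = episode_paths(Path("task_327"), "train", 0, 3)
print(paths["annotation"])  # e.g. task_327/annotation/train/0.json
```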

📊 Explanation of Proprioceptive State

State and Action

State

The state represents the proprioceptive observations of the robot at each timestep. For this dual-arm robot, it includes, per arm:

  • End-effector orientation: 4D quaternion (w, x, y, z) describing the orientation of that arm's end-effector
  • End-effector position: 3D Cartesian coordinates (x, y, z) of that arm's end-effector
  • Effector position: the effector (gripper) position of that arm

Concatenating both arms' components (2 × 4 orientation values, 2 × 3 position values, and 2 effector values) yields a 16-dimensional state vector.
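The 16-dimensional state can be split back into its components following the order given by state_columns (orientation, then position, then effector). The dual-arm slice boundaries below are an assumption inferred from the dimensions, not documented by the dataset itself; verify them against your episodes before relying on them:

```python
# Hedged sketch: split one 16-D state vector by state_columns order.
# The dual-arm layout (two 4-D quaternions, two 3-D positions, two
# effector values) is an assumption inferred from the dimensions.
def split_state(state):
    assert len(state) == 16
    return {
        "end_orientation": state[0:8],      # two (w, x, y, z) quaternions
        "end_position": state[8:14],        # two (x, y, z) positions
        "effector_position": state[14:16],  # two effector (gripper) values
    }

parts = split_state(list(range(16)))
print([len(v) for v in parts.values()])  # [8, 6, 2]
```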

Action

The action represents the control commands sent to the robot. In this dataset:

  • Actions are 2-dimensional vectors
  • Actions control the two effector (gripper) positions

Common Fields

Each annotation JSON file ({episode_id}.json) contains the following fields:

  • texts (List[str]): Task description and initial scene description
  • episode_id (int): Unique identifier for the episode
  • success (int): Binary indicator (0 or 1) of whether the episode was successful
  • video_length (int): Number of frames in the processed video
  • raw_length (int): Number of frames in the original raw video
  • state_columns (List[str]): Column names for state components: ['observation.states.end.orientation', 'observation.states.end.position', 'observation.states.effector.position']
  • action_columns (List[str]): Column names for action components: ['actions.effector.position']
  • states (List[List[float]]): One 16-dimensional state vector per timestep
  • actions (List[List[float]]): One 2-dimensional action vector per timestep
  • videos (List[Dict]): Video file paths, e.g., [{'video_path': 'videos/train/{episode_id}/{segment_id}.mp4'}]
  • latent_videos (List[str]): Paths to pre-encoded latent video representations
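A quick structural check of one annotation file against the fields listed above can catch malformed episodes early. check_episode below is a hypothetical helper, not part of the dataset's own tooling:

```python
import json  # used when loading a real annotation file, see the comment below

# Minimal structural check for one annotation dict, based on the fields
# documented above (16-D states, 2-D actions, one entry per timestep).
def check_episode(ann: dict) -> None:
    T = ann["video_length"]
    assert len(ann["states"]) == T and all(len(s) == 16 for s in ann["states"])
    assert len(ann["actions"]) == T and all(len(a) == 2 for a in ann["actions"])
    assert ann["success"] in (0, 1)

# In practice: ann = json.load(open("task_327/annotation/train/0.json"))
ann = {
    "episode_id": 0, "success": 1, "video_length": 2, "raw_length": 2,
    "states": [[0.0] * 16] * 2, "actions": [[0.5, 0.5]] * 2,
}
check_episode(ann)
print("ok")
```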

Value Shapes and Ranges

State Values

  • Shape: [T, 16] where T is the number of timesteps (video_length)
  • Components (in state_columns order; the dual-arm split is inferred from the 16-dimensional state):
    • End-effector orientations: 8 values (two 4D quaternions), typically in [-1.0, 1.0]
    • End-effector positions: 6 values (two 3D positions), typically in [-1.0, 1.0] for normalized coordinates
    • Effector positions: 2 values in absolute units, with larger ranges (e.g., up to ~97.0)
  • Overall range: approximately [-0.86, 97.29] (values may vary depending on normalization)

Action Values

  • Shape: [T, 2] where T is the number of timesteps
  • Range: [0.0, 1.0] (normalized control values)

Latent Videos

  • Format: PyTorch tensor files (.pt)
  • Shape: [T, 4, 24, 40] where:
    • T: Number of frames (matches video_length)
    • 4: Number of channels
    • 24: Height dimension
    • 40: Width dimension
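To budget memory when batching latents, it helps to know the per-frame size implied by the [T, 4, 24, 40] layout. The float32 storage assumed below is an assumption; check tensor.dtype after loading a file:

```python
# Per-frame size of a latent under the [T, 4, 24, 40] layout described
# above, assuming float32 storage (an assumption; verify with
# tensor.dtype after e.g. torch.load("task_327/latent_videos/train/0/0.pt")).
channels, height, width = 4, 24, 40
values_per_frame = channels * height * width  # 3840 values per frame
bytes_per_frame = values_per_frame * 4        # 15360 bytes at 4 bytes/value
print(values_per_frame, bytes_per_frame)
```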

Dataset Statistics

  • Training episodes: 207
  • Validation episodes: 2
  • Total episodes: 209

The stat.json file contains the 1st and 99th percentile values of the state and action dimensions, which can be used for normalization or data analysis.
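One common use of these percentiles is robust min-max normalization: map each dimension into roughly [0, 1] using its 1st/99th percentile bounds and clip outliers. The sketch below assumes you have already read the two bounds for a dimension out of stat.json (its exact key names are not documented here, so inspect the file first):

```python
# Hedged sketch: percentile-based normalization with clipping, using the
# 1st (p1) and 99th (p99) percentile values read from stat.json.
def normalize(value: float, p1: float, p99: float) -> float:
    """Map a raw value into roughly [0, 1]; values outside the
    percentile bounds are clipped."""
    if p99 == p1:
        return 0.0
    x = (value - p1) / (p99 - p1)
    return min(max(x, 0.0), 1.0)

# Example with the overall range quoted above as illustrative bounds.
print(normalize(48.6, -0.86, 97.29))
```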

🤗 Acknowledgements

We would like to express our gratitude to the following projects and teams:

  • AgiBot-World: This dataset is processed from task_327 in AgiBotWorld-Alpha. We acknowledge the OpenDriveLab team for their excellent work on the large-scale manipulation platform for scalable and intelligent embodied systems.

📄 License

All the data and code within this repository are licensed under CC BY-NC-SA 4.0.

💬 Contact

For questions or suggestions, please contact us through the project Issues.
