Community Dataset v1 (v3.0)
A large-scale community-contributed robotics dataset for vision-language-action learning, featuring 119 datasets from 52 contributors worldwide. This is a converted and curated version of the original HuggingFaceVLA/community_dataset_v1, upgraded to LeRobot v3.0 format.
This dataset was used to pretrain SmolVLA. It was filtered with the FilterLeRobotData tool using criteria including fps, a minimum number of episodes, and a qualitative assessment of video quality.
Overview
This dataset represents a collaborative effort from the robotics and AI community to build comprehensive training data for embodied AI systems. Each contribution contains demonstrations of robotic manipulation tasks with the SO100 arm, recorded using LeRobot tools, primarily focused on tabletop scenarios and everyday object interactions.
Dataset Statistics
| Metric | Value |
|---|---|
| Total Datasets | 119 |
| Total Episodes | 9,528 |
| Total Frames | 4,489,949 |
| Contributors | 52 |
| Average FPS | 30 |
| Average Episodes per Dataset | 80 |
| Primary Tasks | Manipulation, Pick & Place, Sorting |
| Robot Types | SO-100 (various colors) |
| Data Format | LeRobot v3.0 dataset format |
| Total Size | ~107 GB |
Structure
The dataset maintains a clear hierarchical structure:
community_dataset_v1/
├── contributor1/
│   ├── dataset_name_1/
│   │   ├── data/    # Parquet files with observations
│   │   ├── videos/  # MP4 recordings
│   │   └── meta/    # Metadata and info
│   └── dataset_name_2/
├── contributor2/
│   └── dataset_name_3/
└── ...
Each dataset follows the LeRobot v3.0 format standard, ensuring compatibility with existing frameworks and easy integration.
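The contributor/dataset layout above can be browsed programmatically. A minimal stdlib-only sketch, assuming directory names match the tree shown:

```python
from pathlib import Path

EXPECTED_SUBDIRS = {"data", "videos", "meta"}

def list_datasets(root):
    """Yield (contributor, dataset_name) pairs for every dataset
    directory that contains the expected data/videos/meta layout."""
    root = Path(root)
    for contributor in sorted(p for p in root.iterdir() if p.is_dir()):
        for dataset in sorted(p for p in contributor.iterdir() if p.is_dir()):
            present = {p.name for p in dataset.iterdir() if p.is_dir()}
            if EXPECTED_SUBDIRS <= present:
                yield contributor.name, dataset.name
```

Datasets missing any of the three expected subdirectories are skipped, which doubles as a cheap integrity check after a partial download.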
Usage
1. Authenticate with Hugging Face
You need to be logged in to access the dataset:
# Login to Hugging Face
huggingface-cli login
# Or alternatively, set your token as an environment variable
# export HF_TOKEN=your_token_here
Get your token from https://huggingface.co/settings/tokens
2. Download the Dataset
hf download username/community_dataset_v1 \
--repo-type=dataset \
--local-dir /path/local_dir/community_dataset_v1
3. Load Individual Datasets
from lerobot.datasets.lerobot_dataset import LeRobotDataset
import os

# Browse available datasets
for contributor in os.listdir("./community_dataset_v1"):
    contributor_path = f"./community_dataset_v1/{contributor}"
    if os.path.isdir(contributor_path):
        for dataset in os.listdir(contributor_path):
            print(f"{contributor}/{dataset}")

# Load a specific dataset (requires authentication)
dataset = LeRobotDataset(
    repo_id="local",
    root="./community_dataset_v1/contributor_name/dataset_name",
)

# Access episodes and observations
print(f"Episodes: {dataset.num_episodes}")
print(f"Total frames: {len(dataset)}")
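When you only need basic statistics, you can read a dataset's metadata directly instead of instantiating a LeRobotDataset. A sketch assuming the metadata lives in meta/info.json with fps, total_episodes, and total_frames keys (an assumption about the v3.0 metadata layout):

```python
import json
from pathlib import Path

def read_dataset_info(dataset_dir):
    """Return (fps, total_episodes, total_frames) from meta/info.json.
    Key names are assumptions about the LeRobot metadata layout."""
    info = json.loads((Path(dataset_dir) / "meta" / "info.json").read_text())
    return info.get("fps"), info.get("total_episodes"), info.get("total_frames")
```

This is handy for quickly surveying all 119 datasets without loading any video data.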
Integration with SmolVLA pretraining framework
This dataset is designed for training VLA models. You can download it and train with VLAb, the Vision-Language-Action model training framework:
- Visit the VLAb repository.
- Follow the training instructions in the repo
- Point the training script to this dataset
accelerate launch --config_file accelerate_configs/multi_gpu.yaml \
src/lerobot/scripts/train.py \
--policy.type=smolvla2 \
--policy.repo_id=HuggingFaceTB/SmolVLM2-500M-Video-Instruct \
--dataset.repo_id="username/community_dataset_v1/AndrejOrsula/lerobot_double_ball_stacking_random,username/community_dataset_v1/aimihat/so100_tape" \
--dataset.root="local/path/to/datasets" \
--dataset.video_backend=pyav \
--dataset.features_version=2 \
--output_dir="./outputs/training" \
--batch_size=8 \
--steps=200000 \
--wandb.enable=true \
--wandb.project="smolvla2-training"
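The --dataset.repo_id flag above takes a comma-separated list of per-contributor dataset paths. A small helper to assemble it from (contributor, dataset) pairs; the username prefix is the placeholder from the command above, not a real repo:

```python
def build_repo_id_arg(pairs, prefix="username/community_dataset_v1"):
    """Join (contributor, dataset) pairs into the comma-separated
    value expected by --dataset.repo_id."""
    return ",".join(f"{prefix}/{c}/{d}" for c, d in pairs)

arg = build_repo_id_arg([
    ("AndrejOrsula", "lerobot_double_ball_stacking_random"),
    ("aimihat", "so100_tape"),
])
```

Generating the list this way avoids typos when training on dozens of the community datasets at once.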
Dataset Format (v3.0)
Each dataset contains:
data/: Parquet files with timestamped observations
- Robot states (joint positions, velocities)
- Action sequences
- Camera observations (multiple views)
- Language instructions

videos/: Synchronized video recordings
- Multiple camera angles
- High-resolution capture
- Timestamp alignment

meta/: Metadata and configuration
- Dataset info (fps, episode count)
- Robot configuration
- Task descriptions
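A loaded frame is a mapping from feature names to values. The sketch below checks that a sample carries the fields listed above; the exact key names (e.g. "observation.state") are assumptions based on common LeRobot feature-naming conventions:

```python
# Assumed feature names; real datasets may use additional keys
# such as per-camera "observation.images.<camera>" entries.
REQUIRED_KEYS = {"observation.state", "action", "timestamp"}

def missing_frame_keys(sample):
    """Return the set of required keys absent from a frame sample."""
    return REQUIRED_KEYS - set(sample)
```

Running this over the first frame of each dataset is a quick sanity check before launching a long training run.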
Key Differences from v2.1
- Unified data files: Episodes are concatenated into fewer parquet files (improved I/O)
- Restructured metadata: Episodes and stats stored in Parquet format instead of JSONL
- Improved video organization: Videos reorganized by camera key for better streaming
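With episodes concatenated into shared Parquet files, per-episode frame ranges must be recovered from an episode-index column rather than from file boundaries. A stdlib sketch of that bookkeeping (the column name and contiguous-storage assumption are illustrative):

```python
def episode_slices(episode_indices):
    """Map each episode index to its (start, stop) frame range,
    assuming frames are stored contiguously per episode."""
    slices = {}
    for frame, ep in enumerate(episode_indices):
        start, _ = slices.get(ep, (frame, frame))
        slices[ep] = (start, frame + 1)
    return slices
```

This is the kind of index a v3.0 reader builds once from the restructured metadata, trading a little bookkeeping for far fewer file opens than v2.1's one-file-per-episode layout.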
Intended Use
This dataset is designed for:
- Vision-Language-Action (VLA) model training
- Robotic manipulation research
- Imitation learning experiments
- Multi-task policy development
- Embodied AI research
Community Contributions
This dataset exists thanks to the generous contributions from researchers, hobbyists, and institutions worldwide. Each dataset represents hours of careful data collection and curation.
Contributing Guidelines
Future contributions should follow:
- LeRobot v3.0 dataset format
- Consistent naming conventions for features, camera views, etc.
- Quality validation checks
- Proper task descriptions that describe the actions precisely
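The filtering criteria mentioned earlier (fps, minimum episode count) can be expressed as a simple predicate over a dataset's metadata. The thresholds below are illustrative defaults, not the actual values used for SmolVLA pretraining:

```python
def passes_filter(info, min_fps=30, min_episodes=10):
    """Return True if a dataset's metadata meets the fps and
    episode-count thresholds (illustrative values, not the
    actual SmolVLA filtering configuration)."""
    return (info.get("fps", 0) >= min_fps
            and info.get("total_episodes", 0) >= min_episodes)
```

The qualitative video-quality check is manual and is not captured by a predicate like this.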
Check the blog post for more information.
Related Work
- VLAb Framework
- SmolVLA model
- SmolVLA Blogpost
- SmolVLA Paper
- Docs
- How to Build a successful Robotics dataset with Lerobot?
- Original Community Dataset v1 (v2.1)
Converted and curated with ❤️ by the LeRobot Community