# Retail-VLA-10K

A real-world egocentric dataset of retail manipulation skills collected by DreamVu, formatted for LeRobot (v2.1).

The dataset contains 10,123 episodes across 11 manipulation skills, recorded from a first-person (ego-view) perspective in retail environments. Latent actions are encoded with LAPA (Latent Action Pretraining from Videos), a codebook-based action quantization model trained on large-scale egocentric video.
## Dataset Summary

| Dataset | Skill | Episodes | Frames |
|---|---|---|---|
| gt001.manipulation_cart_pushing | Cart Pushing | 1,180 | 354,508 |
| gt001.manipulation_grasping | Grasping | 1,590 | 484,619 |
| gt001.manipulation_holding | Holding | 1,558 | 488,087 |
| gt001.manipulation_holding_item | Holding Item | 176 | 53,760 |
| gt001.manipulation_lifting | Lifting | 445 | 137,354 |
| gt001.manipulation_object_manipulation_ | Object Manipulation | 188 | 55,303 |
| gt001.manipulation_picking_up_item | Picking Up Item | 1,550 | 473,327 |
| gt001.manipulation_placing_item_in_basket | Placing Item in Basket | 153 | 47,938 |
| gt001.manipulation_placing_item_in_cart | Placing Item in Cart | 423 | 137,514 |
| gt001.manipulation_placing_item_on_shelf | Placing Item on Shelf | 1,267 | 395,428 |
| gt001.manipulation_reaching | Reaching | 1,593 | 467,868 |
| Total | | 10,123 | 3,095,706 |
## Format

- Codebase version: LeRobot v2.1
- FPS: 30
- Video: observation.images.ego_view, 640×480, H.264, egocentric (first-person)
- Action: 4 latent action indices per frame, encoded by LAPA (codebook size 8, sequence length 4)
- Annotations: per-episode natural language task descriptions and task titles
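Since each frame's action is 4 codebook indices with 8 possible values each, the action space covers 8^4 = 4096 discrete combinations. As a sketch, a hypothetical helper (not part of LAPA or this dataset's tooling) that packs such a sequence into a single token and back:

```python
# Hypothetical helper: flatten a LAPA-style action (4 indices, codebook size 8)
# into one integer token and unpack it again. Illustrative only.
CODEBOOK_SIZE = 8
SEQ_LEN = 4

def encode_action(indices):
    """Pack 4 codebook indices into a single integer in [0, 8**4)."""
    assert len(indices) == SEQ_LEN
    assert all(0 <= i < CODEBOOK_SIZE for i in indices)
    token = 0
    for i in indices:
        token = token * CODEBOOK_SIZE + i
    return token

def decode_action(token):
    """Unpack the integer token back into 4 codebook indices."""
    indices = []
    for _ in range(SEQ_LEN):
        indices.append(token % CODEBOOK_SIZE)
        token //= CODEBOOK_SIZE
    return indices[::-1]

action = [3, 0, 7, 2]
token = encode_action(action)
print(token)                            # 3*512 + 0*64 + 7*8 + 2 = 1594
print(decode_action(token) == action)   # True
```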
## Parquet Schema

Each episode parquet file contains the following columns:

| Column | Type | Description |
|---|---|---|
| timestamp | float32 | Time in seconds from episode start (frame_index / fps) |
| frame_index | int64 | Frame index within the episode |
| episode_index | int64 | Episode index within the dataset |
| index | int64 | Global frame index across all episodes |
| task_index | int64 | Index into meta/tasks.jsonl |
| action | int64[4] | LAPA latent action codes for this frame |
| annotation.human.action.task_description | string | Full natural language description of the task |
| annotation.human.action.task_title | string | Short task title |
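To illustrate the schema, a minimal sketch that rebuilds the derived timestamp column from frame_index and the 30 fps rate (the rows below are synthetic stand-ins, not actual dataset contents):

```python
import pandas as pd

FPS = 30  # matches the dataset's frame rate

# Synthetic rows mimicking the per-episode parquet schema; real files live
# under data/chunk-000/ and also carry the annotation.* string columns.
df = pd.DataFrame({
    "frame_index": [0, 1, 2],
    "episode_index": [0, 0, 0],
    "index": [0, 1, 2],
    "task_index": [0, 0, 0],
    "action": [[1, 5, 0, 7], [1, 5, 0, 7], [2, 3, 3, 0]],
})

# timestamp is defined as frame_index / fps
df["timestamp"] = df["frame_index"] / FPS
print(df[["frame_index", "timestamp"]])
```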
## Dataset Structure

Each skill is a self-contained sub-directory following the LeRobot v2.1 layout:

```
gt001.manipulation_<skill>/
├── meta/
│   ├── info.json             # Dataset metadata (fps, features, total episodes)
│   ├── episodes.jsonl        # Per-episode metadata (length, task)
│   ├── episodes_stats.jsonl  # Per-episode statistics (mean/std/min/max per feature)
│   └── tasks.jsonl           # Task index → task description mapping
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
└── videos/
    └── chunk-000/
        └── observation.images.ego_view/
            ├── episode_000000.mp4
            ├── episode_000001.mp4
            └── ...
```
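The tasks.jsonl file is newline-delimited JSON, so the task_index → description mapping can be rebuilt with the standard json module. A sketch, using illustrative stand-in entries rather than actual task strings from the dataset:

```python
import json

# Illustrative stand-in for the contents of meta/tasks.jsonl:
# one JSON object per line, mapping task_index to the task description.
sample_jsonl = """\
{"task_index": 0, "task": "Grasp the item from the shelf."}
{"task_index": 1, "task": "Place the bottle in the basket."}
"""

tasks = {}
for line in sample_jsonl.splitlines():
    entry = json.loads(line)
    tasks[entry["task_index"]] = entry["task"]

print(tasks[0])  # Grasp the item from the shelf.
```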
## Usage

### With LeRobot

Install LeRobot:

```shell
pip install lerobot
```

Load a single skill dataset:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load one skill (downloads automatically)
dataset = LeRobotDataset(
    repo_id="DreamVu/Retail-VLA-10K",
    root="gt001.manipulation_grasping",  # sub-directory for the skill
)

print(f"Episodes: {dataset.num_episodes}")
print(f"Frames: {len(dataset)}")

# Access a frame
frame = dataset[0]
print(frame.keys())
# dict_keys(['observation.images.ego_view', 'action', 'timestamp', ...])
```
Load multiple skills together:

```python
from torch.utils.data import ConcatDataset

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

skills = [
    "gt001.manipulation_grasping",
    "gt001.manipulation_reaching",
    "gt001.manipulation_picking_up_item",
]
datasets = [
    LeRobotDataset("DreamVu/Retail-VLA-10K", root=skill)
    for skill in skills
]
combined = ConcatDataset(datasets)
print(f"Total frames: {len(combined)}")
```
### With HuggingFace datasets

```python
from datasets import load_dataset

# Load parquet-only (no videos) for a specific skill
ds = load_dataset("DreamVu/Retail-VLA-10K", name="grasping", split="train")
print(ds)
# Dataset with columns: timestamp, frame_index, episode_index, index,
# task_index, action, annotation.human.action.task_description,
# annotation.human.action.task_title
```
### Download a single skill manually

```shell
huggingface-cli download DreamVu/Retail-VLA-10K \
  --repo-type dataset \
  --include "gt001.manipulation_grasping/**" \
  --local-dir ./retail_vla
```
## License

This dataset is released under CC BY-NC 4.0. It is intended for non-commercial research use only.