---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- language_table
- openx
- xarm
configs:
- config_name: default
  data_files: data/*.parquet
---

# Kuka (LeRobot) — Embedding-Only Release

DINOv3 + SigLIP2 image features; EmbeddingGemma task-text features.
This repository packages a re-encoded variant of [IPEC-COMMUNITY/kuka_lerobot](https://huggingface.co/datasets/IPEC-COMMUNITY/kuka_lerobot) in which raw videos are replaced by fixed-length image embeddings and task strings are augmented with text embeddings. All indices, splits, and semantics remain consistent with the source dataset, while storage and I/O are substantially lighter.

To make the dataset practical to upload, download, and stream from the Hub, the tiny per-episode Parquet files were also consolidated into 64 large Parquet shards under a single `data/` folder. The file `meta/sharded_index.json` preserves a precise mapping from each original episode (referenced by a normalized identifier of the form `data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet`) to its shard path and row range, so original addressing is kept without paying the small-file tax.
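
That mapping can be used to recover an episode's rows from its shard. A minimal sketch, assuming the index maps each normalized episode identifier to an entry with `shard_path`, `row_offset`, and `num_rows` keys (the exact field names may differ in the actual `meta/sharded_index.json`):

```python
import json

def locate_episode(index: dict, episode_chunk: int, episode_index: int):
    """Return (shard_path, row_offset, num_rows) for one original episode."""
    # Rebuild the normalized identifier used as the index key.
    key = f"data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
    entry = index[key]
    return entry["shard_path"], entry["row_offset"], entry["num_rows"]

if __name__ == "__main__":
    # Stand-in for json.load(open("meta/sharded_index.json")); values are made up.
    index = {
        "data/chunk-000/episode_000000.parquet": {
            "shard_path": "data/shard-00000-of-00064.parquet",
            "row_offset": 0,
            "num_rows": 12,
        }
    }
    print(locate_episode(index, 0, 0))
```

Once located, the episode is the half-open row range `[row_offset, row_offset + num_rows)` inside that shard.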

- Robot: KUKA iiwa
- Modalities kept: states, actions, timestamps, frame/episode indices, image embeddings, task-text embeddings
- Removed: raw video tensors (column `observation.images.image`)
- License: apache-2.0 (inherits from the source)

---

## Quick Stats

From `meta/info.json` and `meta/task_text_embeddings_info.json`:
- Episodes: 209,880
- Frames: 2,455,879
- Tasks (unique): 1
- Chunks (original layout): 210 (`chunks_size=1000`)
- Shards (this release): 64 Parquet files under `data/` (see `meta/sharded_index.json`)
- FPS: 10
- Image embeddings (per frame):
  - `observation.images.image_dinov3` → float32 [1024] (DINOv3 ViT-L/16 CLS)
  - `observation.images.image_siglip2` → float32 [768] (SigLIP2-base)
- Task-text embeddings (per unique task):
  - `embedding` → float32 [768] from google/embeddinggemma-300m
  - Count: 1 row (one per task)

Note: This is an embedding-only package. `video_path` is omitted and the original `observation.images.image` pixels are dropped.
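
For a rough sense of scale, the per-frame embedding payload implied by the numbers above is simple arithmetic (no dataset access needed):

```python
# DINOv3 CLS (1024 floats) + SigLIP2 (768 floats), each stored as float32 (4 bytes).
bytes_per_frame = (1024 + 768) * 4              # 7168 bytes per frame
total_frames = 2_455_879                        # from meta/info.json
total_gb = bytes_per_frame * total_frames / 1e9
print(f"{bytes_per_frame} B/frame -> ~{total_gb:.1f} GB of raw embedding floats")  # ~17.6 GB
```

This is before Parquet encoding and compression, so on-disk size will differ; it is the payload that replaces the raw video tensors.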

---
<details>
<summary><b>Contents</b></summary>

<pre>
.
|-- meta/
|   |-- info.json
|   |-- sharded_index.json
|   |-- tasks.jsonl
|   |-- episodes.jsonl
|   `-- task_text_embeddings_info.json
|-- data/
|   |-- shard-00000-of-000NN.parquet
|   |-- shard-00001-of-000NN.parquet
|   |-- ...
|   `-- task_text_embeddings.parquet
`-- README.md
</pre>
</details>

---

## How This Was Generated (Reproducible Pipeline)

1) Episode → image embeddings (drop pixels)
   `convert_lerobot_to_embeddings_mono.py` (GPU-accelerated preprocessing).
   Adds:
   - `observation.images.image_dinov3` (float32[1024])
   - `observation.images.image_siglip2` (float32[768])
   Removes:
   - `observation.images.image` (raw frames)

2) Task-text embeddings (one row per unique task)
   `build_task_text_embeddings.py` with `SentenceTransformer("google/embeddinggemma-300m")` → `data/task_text_embeddings.parquet` + `meta/task_text_embeddings_info.json`.

3) Data consolidation (this release)
   All per-episode Parquet files were consolidated into 64 large Parquet shards in one `data/` folder.
   - The index `meta/sharded_index.json` records, for each episode, its normalized source identifier `data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet`, the destination shard path, and the (row_offset, num_rows) range inside that shard.
   - This preserves original addressing while making Hub sync/clone/stream far faster and more reliable.
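
The bookkeeping in step 3 can be sketched in plain Python: append episodes into a growing shard, roll over when the shard is full, and record each episode's offset. The function name and the `rows_per_shard` policy here are illustrative, not the actual script:

```python
def consolidate(episodes, rows_per_shard):
    """episodes: list of (episode_key, rows) pairs. Returns (shards, index)."""
    shards, index = [[]], {}
    for key, rows in episodes:
        # Start a new shard when the current one cannot take this episode whole.
        if shards[-1] and len(shards[-1]) + len(rows) > rows_per_shard:
            shards.append([])
        shard_id = len(shards) - 1
        index[key] = {
            "shard_id": shard_id,
            "row_offset": len(shards[-1]),  # episode starts here inside the shard
            "num_rows": len(rows),
        }
        shards[-1].extend(rows)
    return shards, index

shards, index = consolidate(
    [("ep0", ["r0", "r1"]), ("ep1", ["r2"]), ("ep2", ["r3", "r4"])],
    rows_per_shard=3,
)
print(index["ep2"])  # {'shard_id': 1, 'row_offset': 0, 'num_rows': 2}
```

The real pipeline concatenates Parquet row groups rather than Python lists, but the index entries it writes have the same shape: a shard reference plus a (row_offset, num_rows) range.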

---

## Metadata (Excerpts)

`meta/task_text_embeddings_info.json`:
```json
{
  "model": "google/embeddinggemma-300m",
  "dimension": 768,
  "normalized": false,
  "count": 1,
  "file": "task_text_embeddings.parquet"
}
```

`meta/info.json` (embedding-only + shards):
```json
{
  "codebase_version": "v2.0-embeddings-renamed-renamed-sharded",
  "robot_type": "kuka_iiwa",
  "total_episodes": 209880,
  "total_frames": 2455879,
  "total_tasks": 1,
  "total_videos": 209880,
  "total_chunks": 210,
  "chunks_size": 1000,
  "fps": 10,
  "splits": { "train": "0:209880" },
  "data_path": "data/shard-{shard_id:05d}-of-{num_shards:05d}.parquet",
  "features": {
    "observation.state": {
      "dtype": "float32",
      "shape": [8],
      "names": { "motors": ["x", "y", "z", "rx", "ry", "rz", "rw", "gripper"] }
    },
    "action": {
      "dtype": "float32",
      "shape": [7],
      "names": { "motors": ["x", "y", "z", "roll", "pitch", "yaw", "gripper"] }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null },
    "observation.images.image_dinov3": { "dtype": "float32", "shape": [1024], "names": null },
    "observation.images.image_siglip2": { "dtype": "float32", "shape": [768], "names": null }
  },
  "num_shards": 64,
  "index_path": "meta/sharded_index.json"
}
```
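
The `features` block above is enough to drive a generic loader. A small stdlib sketch that flattens it into per-column dtype/shape pairs (using a trimmed copy of the excerpt):

```python
import json

# Trimmed copy of the meta/info.json excerpt above.
info = json.loads("""
{
  "fps": 10,
  "features": {
    "observation.state": {"dtype": "float32", "shape": [8]},
    "observation.images.image_dinov3": {"dtype": "float32", "shape": [1024]},
    "observation.images.image_siglip2": {"dtype": "float32", "shape": [768]}
  }
}
""")

# Flatten to {column_name: (dtype, shape)} for downstream loaders.
columns = {name: (f["dtype"], tuple(f["shape"])) for name, f in info["features"].items()}
print(columns["observation.images.image_dinov3"])  # ('float32', (1024,))
```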

---

## Environment & Dependencies

Python ≥ 3.9 • PyTorch ≥ 2.1 • transformers • sentence-transformers • pyarrow • tqdm • decord (and optionally av)

---

## Provenance, License, and Citation

- Source dataset: [IPEC-COMMUNITY/kuka_lerobot](https://huggingface.co/datasets/IPEC-COMMUNITY/kuka_lerobot)
- License: apache-2.0 (inherits from the source)
- Encoders to cite:
  - facebook/dinov3-vitl16-pretrain-lvd1689m
  - google/siglip2-base-patch16-384
  - google/embeddinggemma-300m

---

## Changelog

- v2.0-embeddings-sharded — Replaced video tensors with DINOv3 + SigLIP2 features; added EmbeddingGemma task-text embeddings; consolidated per-episode Parquet files into 64 shards with a repo-local index; preserved original indexing/splits via normalized episode identifiers.