PIZZA-DOUGH-BALLFORMATION-sample

Overview

This dataset captures the complex, non-linear dynamics of dough manipulation, a frontier problem in soft-body robotics. It features a professional pizzaiolo performing the ball formation process (boulage), recorded through a synchronized multi-modal array. By focusing on deformable materials, the dataset provides the physical grounding World Models need to predict material resistance, elasticity, and tactile transitions, properties absent from rigid-object datasets. It is intended as a resource for training Vision-Language-Action (VLA) models on high-dexterity, force-sensitive tasks.

Key Technical Features

  • Tri-Source Synchronization: Seamless alignment between the ego-centric FPV camera (visual intent), the top-right global view (spatial context), and dual-arm IMU telemetry (proprioceptive ground truth).
  • Soft-Body Physics: High-resolution capture of dough deformation, providing unique data for predicting material flow and surface tension.
  • Precision Temporal Protocol (T1-T4): Micro-action segmentation designed for dense-action learning:

  • T1 (Contact): Initial tactile engagement and surface adhesion detection.
  • T2 (Lift): Overcoming material stiction and gravitational transition.
  • T3 (Manipulate): Complex bimanual deformation, folding, and shaping phase (The 'Tacit Knowledge' core).
  • T4 (Release): Final detachment, capturing the elastic snap-back of the material.
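
As a sketch of how the T1-T4 protocol can be consumed, the snippet below converts a clip's ordered phase labels into contiguous (phase, start_ms, end_ms) intervals. The label values and timestamps here are illustrative assumptions, not the dataset's actual annotations; the usage example further down shows the real {'time_ms', 'label'} record shape.

```python
# Sketch: turn a clip's ordered phase labels into (phase, start_ms, end_ms)
# intervals. Label names and timestamps below are invented for illustration.

def labels_to_intervals(labels, clip_end_ms):
    """labels: list of {"time_ms": int, "label": str} records."""
    events = sorted(labels, key=lambda l: l["time_ms"])
    intervals = []
    for i, ev in enumerate(events):
        # Each phase runs until the next label, or until the clip ends.
        end = events[i + 1]["time_ms"] if i + 1 < len(events) else clip_end_ms
        intervals.append((ev["label"], ev["time_ms"], end))
    return intervals

labels = [
    {"time_ms": 0, "label": "T1"},
    {"time_ms": 1800, "label": "T2"},
    {"time_ms": 4200, "label": "T3"},
    {"time_ms": 21500, "label": "T4"},
]
print(labels_to_intervals(labels, clip_end_ms=26367))
```

This makes the T3 (Manipulate) span directly available for the phase-level analyses described under Use Cases for Research.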

Use Cases for Research

  • Deformable Object Manipulation: Training foundation models (such as OmniVLA) to understand and predict the behavior of non-rigid, viscoelastic materials.
  • Cross-View Spatial Mapping: Benchmarking FPV-to-Top-Right view translation to improve robot spatial awareness in cluttered professional environments.
  • Proprioceptive-Visual Fusion: Leveraging IMU data to correlate visual pixel flow with real-world acceleration and force-vector proxies during high-dexterity tasks.
  • World Model Error Recovery: Analyzing the T3 (Manipulate) phase to train agents on corner cases such as sticky textures or uneven dough consistency.

About Us

We are newcomers to the field, coming from the staffing industry. This gives us direct access to 100+ professional environments (industrial kitchens, bakeries, workshops) where we capture real-world manual tasks. Our approach is iterative and driven by field testing:

  • Capture Stack: We test different camera positions (chest, head, multi-view, 360°) and select the most relevant setup based on the specific task to ensure hands and tools stay in frame.
  • Multimodal: We integrate synchronized IMU sensors and Audio to capture the motion dynamics and contact sounds missing from standard datasets.
  • Annotation: We provide task segmentation and can adapt our labeling depth to specific model requirements.

Because we own the access to the field, we are highly flexible: we can adapt our sensors, mounting, and annotation protocols to build custom collections that fit your specific needs. We'd be happy to chat about your current research and see how we can help ground your models with real-world professional data. Contact us at orgn3ai@gmail.com.

Commercial Licensing and Contact

  • The complete dataset and our custom collection services are available for commercial licensing and large-scale R&D. Whether you need existing data or a custom setup in a specific professional environment, do not hesitate to reach out for more information.
  • Contact: orgn3ai@gmail.com

License

  • This dataset is licensed under cc-by-nc-nd-4.0.

Dataset Statistics

This section provides detailed statistics extracted from dataset_metadata.json:

Overall Statistics

  • Dataset Name: PIZZA-DOUGH-BALLFORMATION-sample
  • Batch ID: pizza
  • Total Clips: 26
  • Number of Sequences: 39
  • Number of Streams: 3
  • Stream Types: ego, imu_left_wrist, third

Duration Statistics

  • Total Duration: 12.62 minutes (757.07 seconds)
  • Average Clip Duration: 29.12 seconds (29118.0 ms)
  • Min Clip Duration: 26.37 seconds (26367 ms)
  • Max Clip Duration: 32.83 seconds (32833 ms)
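
As a quick sanity check, the reported average follows directly from the totals above (values copied verbatim from this section):

```python
# Cross-check of the reported duration statistics.
total_s = 757.07   # Total Duration in seconds
num_clips = 26     # Total Clips
avg_s = total_s / num_clips
print(f"Average clip duration: {avg_s:.2f} s")  # ≈ 29.12 s
```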

Clip Configuration

  • Padding: 1500 ms

Statistics by Stream Type

Ego

  • Number of clips: 13
  • Total duration: 6.31 minutes (378.53 seconds)
  • Average clip duration: 29.12 seconds (29118.0 ms)
  • Min clip duration: 26.37 seconds (26367 ms)
  • Max clip duration: 32.83 seconds (32833 ms)

Third

  • Number of clips: 13
  • Total duration: 6.31 minutes (378.53 seconds)
  • Average clip duration: 29.12 seconds (29118.0 ms)
  • Min clip duration: 26.37 seconds (26367 ms)
  • Max clip duration: 32.83 seconds (32833 ms)

Note: Complete metadata is available in dataset_metadata.json in the dataset root directory.

Dataset Structure

The dataset uses a unified structure where each example contains all synchronized video streams:

dataset/
├── data-*.arrow           # Dataset files (Arrow format)
├── dataset_info.json      # Dataset metadata
├── dataset_metadata.json  # Complete dataset statistics
├── state.json             # Dataset state
├── README.md              # This file
├── medias/                # Media files (mosaics, previews, etc.)
│   └── mosaic.mp4         # Mosaic preview video
└── videos/                # All video clips
    ├── ego/               # Ego video clips
    ├── imu_left_wrist/    # Imu_left_wrist video clips
    └── third/             # Third video clips
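
As an illustration of working with this layout, the helper below (our own sketch, not part of the dataset tooling) groups the files under videos/ by their stream directory:

```python
from collections import defaultdict
from pathlib import Path

def videos_by_stream(base_dir):
    """Map each stream directory under videos/ to its sorted list of file names."""
    grouped = defaultdict(list)
    for path in sorted(Path(base_dir).glob("videos/*/*")):
        if path.is_file():
            grouped[path.parent.name].append(path.name)
    return dict(grouped)
```

For this dataset it should return one key per stream folder (ego, imu_left_wrist, third) once the snapshot has been downloaded.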

Dataset Format

The dataset contains 26 synchronized scenes in a single train split. Each example includes:

  • Synchronized video columns: One column per flux type (e.g., ego, imu_left_wrist, third)
  • Scene metadata: scene_id, sync_id, duration_ms, padding_ms, fps
  • Rich metadata dictionary: Task, environment, audio info, and synchronization details

All videos in a single example are synchronized and correspond to the same moment in time.

Usage

Load and Access Dataset

import json
import random
from pathlib import Path
import cv2
from huggingface_hub import snapshot_download
from datasets import load_from_disk

repo = "orgn3ai/PIZZA-DOUGH-BALLFORMATION-sample"

# 1) Download snapshot locally
local_path = snapshot_download(repo_id=repo, repo_type="dataset")
base_dir = Path(local_path)
print("Snapshot path:", base_dir)

# 2) Load dataset saved with save_to_disk()
ds = load_from_disk(str(base_dir))
train = ds["train"] if isinstance(ds, dict) and "train" in ds else ds
print("Train rows:", len(train))
print("Train columns:", train.column_names)

# 3) Read dataset_metadata.json at the repo root and extract "flux"
metadata_path = base_dir / "dataset_metadata.json"
if not metadata_path.exists():
    raise FileNotFoundError(
        f"dataset_metadata.json not found at repo root: {metadata_path}\n"
        "Check the repo tree to confirm the metadata filename."
    )

with metadata_path.open("r", encoding="utf-8") as f:
    root_meta = json.load(f)

flux = root_meta.get("flux")
if not isinstance(flux, list) or not flux:
    raise ValueError(f'Expected dataset_metadata.json["flux"] to be a non-empty list, got: {flux}')

print("Flux entries:", flux)

# 4) Pick a random dataset entry
idx = random.randrange(len(train))
ex = train[idx]

print("\nRandom example index:", idx)
print("Example keys:", list(ex.keys()))

def resolve_video_path(video_value) -> Path:
    """
    video_value can be:
    - string path (most common case)
    - dict like {"path": "...", "bytes": ...} (for backward compatibility)
    """
    if isinstance(video_value, dict) and "path" in video_value:
        rel = video_value["path"]
    elif isinstance(video_value, str):
        rel = video_value
    else:
        raise TypeError(f"Unsupported video value type: {type(video_value)}; value={video_value}")

    # Normalize to avoid leading "./"
    rel = str(rel).lstrip("/")

    # The dataset stores relative paths like "videos/ego/xxx.mp4";
    # resolve them inside the snapshot folder.
    return base_dir / rel

def inspect_video(path: Path):
    print(f"  Local path: {path}")
    print(f"  Exists: {path.exists()}")
    if not path.exists():
        return {"ok": False, "reason": "file_not_found"}

    cap = cv2.VideoCapture(str(path))
    if not cap.isOpened():
        return {"ok": False, "reason": "cannot_open"}

    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    fps = float(cap.get(cv2.CAP_PROP_FPS))
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    # Some codecs report fps=0; guard it
    duration = (frame_count / fps) if fps and fps > 0 else None

    # Try read first frame
    ret, frame0 = cap.read()
    cap.release()

    info = {
        "ok": True,
        "width": width,
        "height": height,
        "fps": fps,
        "frame_count": frame_count,
        "duration_sec": duration,
        "first_frame_ok": bool(ret),
        "first_frame_shape": tuple(frame0.shape) if ret and frame0 is not None else None,
        "first_frame_dtype": str(frame0.dtype) if ret and frame0 is not None else None,
    }
    return info

# 5) For each flux key, inspect the associated video
print("\n=== VIDEO CHECK ===")
for key in flux:
    print(f"\nFlux key: {key}")
    if key not in ex:
        print(f"  ERROR: key '{key}' not in example. Available keys: {list(ex.keys())}")
        continue

    try:
        video_path = resolve_video_path(ex[key])
    except Exception as e:
        print(f"  ERROR resolving path: {e}")
        continue

    info = inspect_video(video_path)
    if not info["ok"]:
        print(f"  ERROR: {info['reason']}")
        continue

    print("  Video properties:")
    print(f"    - Resolution: {info['width']}x{info['height']}")
    print(f"    - FPS: {info['fps']:.3f}")
    print(f"    - Frames: {info['frame_count']}")
    if info["duration_sec"] is not None:
        print(f"    - Duration: {info['duration_sec']:.3f}s")
    else:
        print("    - Duration: (fps unavailable)")
    print(f"    - First frame decoded: {info['first_frame_ok']}")
    if info["first_frame_ok"]:
        print(f"    - Frame0 shape: {info['first_frame_shape']}")
        print(f"    - Frame0 dtype: {info['first_frame_dtype']}")

print("\n=== LABELS ===")
print(f"Number of labels: {len(ex['labels'])}")
for label in ex["labels"]:
    print(f"    - {label['time_ms']} ms (without padding): {label['label']}")

print("\nDONE.")

Dataset Features

Each example contains:

  • scene_id: Unique scene identifier (e.g., "01_0000")
  • sync_id: Synchronization ID linking synchronized clips
  • duration_ms: Duration of the synchronized clip in milliseconds (includes padding)
  • padding_ms: Padding applied to clips (added at beginning and end, total padding = padding_ms × 2)
  • fps: Frames per second (extracted from video)
  • batch_id: Batch identifier
  • dataset_name: Dataset name from config
  • One column per flux: Each flux name from metadata['flux_names'] has its own column (e.g., ego, imu_left_wrist, third) containing the string path to the video file (relative to the dataset root)
  • metadata: Dictionary containing:
    • task: Task identifier
    • environment: Environment description
    • has_audio: Whether videos contain audio
    • num_fluxes: Number of synchronized flux types
    • flux_names: List of flux names present
    • sequence_ids: List of original sequence IDs
    • sync_offsets_ms: List of synchronization offsets
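
Since duration_ms includes padding_ms at both the beginning and the end, the unpadded content duration can be recovered by subtraction. A minimal sketch (the helper names below are ours, not part of the dataset API):

```python
def content_duration_ms(duration_ms: int, padding_ms: int) -> int:
    """Clip duration with the leading and trailing padding removed."""
    return duration_ms - 2 * padding_ms

def to_content_time_ms(t_clip_ms: int, padding_ms: int) -> int:
    """Convert a timestamp measured from the padded clip start to content time."""
    return t_clip_ms - padding_ms

# Example with the dataset's average clip (29118 ms) and 1500 ms padding:
print(content_duration_ms(29118, 1500))  # 26118
```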

Additional Notes

Important: This dataset uses a unified structure where each example contains all synchronized video streams in separate columns. All examples are in the train split.

Synchronization: Videos in the same example (same index in the train split) are automatically synchronized. They share the same sync_id and correspond to the same moment in time.

Flux Keys: The available flux keys are listed in dataset_metadata.json under the "flux" key. Use these keys to programmatically access video columns in each example.

Video Paths: Video paths are stored as strings (relative to the dataset root directory). Paths can be resolved using the resolve_video_path function shown in the usage example above.

