---
license: cc-by-nc-nd-4.0
task_categories:
- video-classification
language:
- en
tags:
- egocentric
- embodied-ai
- robotics
- real-world
- computer-vision
- dataset
- sample-dataset
size_categories:
- n<1K
---

# SAUSAGE-CRAFTING-sample: Fine Manipulation of Deformable Sausage Casings

## Overview

This dataset provides high-quality, multi-view synchronized captures of expert procedural tasks in a professional butchery environment. It focuses on the manipulation of non-rigid, deformable objects such as sausage casings and stuffing, addressing open challenges in robotics and computer vision around physical interaction with elastic and organic materials.

<video controls loop width="100%">
  <source src="https://huggingface.co/datasets/orgn3ai/SAUSAGE-CRAFTING-sample/resolve/main/medias/mosaic.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>
## Key Technical Features

* Synchronized Dual-View: Temporally aligned ego-centric (first-person) and third-person perspectives.
* Non-Rigid Physics: Captures complex material behaviors such as plasticity and elasticity during the sausage-making process.
* High-Quality Synchronization: All views are precisely time-aligned through a unified `sync_id` to ensure seamless cross-modal understanding.
* Expert Craftsmanship: Focused on the specific task of rolling and measuring sausage casings with professional dexterity.

## Use Cases for Research

* Embodied AI and World Models: Training agents to predict the physical consequences of interacting with deformable organic matter.
* Procedural Task Learning: Modeling long-form sequential actions where expert intent is critical.
* Tactile-Visual Inference: Learning to estimate force and material resistance through visual observation of fine manipulation.

## Custom Data Collection Services

Our team specializes in high-fidelity data acquisition in real-world professional settings. We provide on-demand data collection services tailored to specific AI and robotics requirements:

* Professional Network: Direct access to 100+ professional environments, including commercial kitchens, bakeries, mechanical workshops, craft studios, and industrial facilities.
* Multi-Modal Capture: Expertise in collecting synchronized streams, including third-person views, ego-centric (FPV) video, IMU motion tracking, and expert audio narration.
* Domain Expertise: We bridge the gap between technical AI needs and authentic professional "tacit knowledge."

## Full Dataset Specifications

* Expert Audio Narration: Live commentary explaining intent, tactile feedback, and professional heuristics.
* Total Duration: 50+ hours of continuous professional expert operations.
* Extended Tasks: Includes stuffing preparation, casing filling, and specialized tool maintenance.
* Data Quality: Comprehensive temporal action annotations.

## Commercial Licensing and Contact

* The complete dataset and our custom collection services are available for commercial licensing and large-scale R&D. Whether you need existing data or a custom setup in a specific professional environment, do not hesitate to reach out for more information.
* Contact: orgn3ai@gmail.com

## License

* This dataset is licensed under **CC BY-NC-ND 4.0** (`cc-by-nc-nd-4.0`).

## Dataset Statistics

This section summarizes the statistics recorded in `dataset_metadata.json`:

### Overall Statistics

- **Dataset Name**: SAUSAGE-CRAFTING-sample: Fine Manipulation of Deformable Sausage Casings
- **Batch ID**: 01
- **Total Clips**: 120
- **Number of Sequences**: 6
- **Number of Streams**: 2
- **Stream Types**: ego, third

### Duration Statistics

- **Total Duration**: 8.00 minutes (480.00 seconds)
- **Average Clip Duration**: 4.00 seconds
- **Min Clip Duration**: 4.00 seconds
- **Max Clip Duration**: 4.00 seconds

### Clip Configuration

- **Base Clip Duration**: 3.00 seconds
- **Clip Duration with Padding**: 4.00 seconds
- **Padding**: 500 ms
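
The numbers above are consistent if the 500 ms padding is applied at both ends of each clip (an assumption inferred from the arithmetic, not stated explicitly in the metadata):

```python
base_sec = 3.0     # base clip duration
padding_ms = 500   # padding, assumed applied at each end of the clip

padded_sec = base_sec + 2 * padding_ms / 1000.0
print(padded_sec)  # 4.0, matching "Clip Duration with Padding"
```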

### Statistics by Stream Type

#### Ego

- **Number of clips**: 60
- **Total duration**: 4.00 minutes (240.00 seconds)
- **Average clip duration**: 4.00 seconds
- **Min clip duration**: 4.00 seconds
- **Max clip duration**: 4.00 seconds

#### Third

- **Number of clips**: 60
- **Total duration**: 4.00 minutes (240.00 seconds)
- **Average clip duration**: 4.00 seconds
- **Min clip duration**: 4.00 seconds
- **Max clip duration**: 4.00 seconds

> **Note**: Complete metadata is available in `dataset_metadata.json` in the dataset root directory.
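
As a quick sanity check, the per-stream figures reconcile with the overall totals:

```python
clips_per_stream = 60
clip_sec = 4.0
num_streams = 2

per_stream_sec = clips_per_stream * clip_sec   # 240.0 s for each stream
total_sec = per_stream_sec * num_streams       # 480.0 s across both streams
total_clips = clips_per_stream * num_streams   # 120 clips in total

print(per_stream_sec, total_sec, total_clips)  # 240.0 480.0 120
```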

## Dataset Structure

The dataset uses a **unified structure** where each example contains all synchronized video streams:

```
dataset/
├── data-*.arrow            # Dataset files (Arrow format)
├── dataset_info.json       # Dataset metadata
├── dataset_metadata.json   # Complete dataset statistics
├── state.json              # Dataset state
├── README.md               # This file
├── medias/                 # Media files (mosaics, previews, etc.)
│   └── mosaic.mp4          # Mosaic preview video
└── videos/                 # All video clips
    ├── ego/                # Ego video clips
    └── third/              # Third video clips
```

### Dataset Format

The dataset contains **120 synchronized scenes** in a single `train` split. Each example includes:

- **Synchronized video columns**: One column per flux type (e.g., `ego_video`, `third_video`)
- **Scene metadata**: `scene_id`, `sync_id`, `duration_sec`, `fps`
- **Rich metadata dictionary**: Task, environment, audio info, and synchronization details

All videos in a single example are synchronized and correspond to the same moment in time.

## Usage

### Load Dataset

```python
from datasets import load_dataset

# Load from the Hugging Face Hub
dataset = load_dataset('orgn3ai/SAUSAGE-CRAFTING-sample')

# All examples live in a single 'train' split
print(f"Available splits: {list(dataset.keys())}")  # ['train']

# Or load from a local directory:
# from datasets import load_from_disk
# dataset = load_from_disk('outputs/01/dataset')

# Access the 'train' split and its first synchronized scene
train_data = dataset['train']
example = train_data[0]

# Access the synchronized videos
ego_video = example['ego_video']      # Ego-centric view
third_video = example['third_video']  # Third-person view

# Access metadata
print(f"Scene ID: {example['scene_id']}")
print(f"Duration: {example['duration_sec']}s")
print(f"FPS: {example['fps']}")
print(f"Metadata: {example['metadata']}")

# Dataset size
print(f"Number of examples in train split: {len(train_data)}")
```
### Access Synchronized Videos

Each example contains all synchronized video streams. Access them directly:

```python
from pathlib import Path

# Get a synchronized scene from the 'train' split
example = dataset['train'][0]

# With decode=False, each video value is either a dict like
# {'path': 'videos/ego/0000.mp4', 'bytes': ...}, a plain path string,
# or an object exposing a .path attribute
def get_video_path(video_obj):
    if isinstance(video_obj, dict) and 'path' in video_obj:
        return video_obj['path']
    elif isinstance(video_obj, str):
        return video_obj
    else:
        return getattr(video_obj, 'path', str(video_obj))

ego_video_path = get_video_path(example['ego_video'])
third_video_path = get_video_path(example['third_video'])

# Resolve full paths from the dataset cache (when loading from the Hub)
cache_dir = Path(dataset['train'].cache_files[0]['filename']).parent.parent
ego_video_full_path = cache_dir / ego_video_path
third_video_full_path = cache_dir / third_video_path

# Process all synchronized videos together by iterating the 'train' split
for example in dataset['train']:
    scene_id = example['scene_id']
    sync_id = example['sync_id']
    metadata = example['metadata']

    print(f"Scene {scene_id}: {metadata['num_fluxes']} synchronized fluxes")
    print(f"Flux names: {metadata['flux_names']}")

    # Resolve the full video paths for this scene
    ego_video_full = cache_dir / get_video_path(example['ego_video'])
    third_video_full = cache_dir / get_video_path(example['third_video'])

    # Process synchronized videos...
```
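
The `get_video_path` helper only inspects the shape of the value it is given, so it can be checked without downloading anything (the helper is repeated here to keep the snippet self-contained; the example paths are illustrative):

```python
def get_video_path(video_obj):
    # Dict-style value produced by the Video feature with decode=False
    if isinstance(video_obj, dict) and 'path' in video_obj:
        return video_obj['path']
    # Already a plain path string
    elif isinstance(video_obj, str):
        return video_obj
    # Decoded object exposing a .path attribute
    else:
        return getattr(video_obj, 'path', str(video_obj))

print(get_video_path({'path': 'videos/ego/0000.mp4', 'bytes': None}))  # videos/ego/0000.mp4
print(get_video_path('videos/third/0000.mp4'))                         # videos/third/0000.mp4
```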

### Filter and Process

```python
# Filter by sync_id (always work with the 'train' split)
scene = dataset['train'].filter(lambda x: x['sync_id'] == 0)[0]

# Filter by metadata
scenes_with_audio = dataset['train'].filter(lambda x: x['metadata']['has_audio'])

# Access metadata fields while iterating the 'train' split
for example in dataset['train']:
    task = example['metadata']['task']
    environment = example['metadata']['environment']
    has_audio = example['metadata']['has_audio']
    flux_names = example['metadata']['flux_names']
    sync_offsets = example['metadata']['sync_offsets_ms']
```
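
Simple summaries can be built from these metadata fields with the standard library alone; a sketch using stand-in records that mirror the schema (the task names here are hypothetical):

```python
from collections import Counter

# Stand-ins for examples from dataset['train']; only 'metadata' is shown
examples = [
    {'metadata': {'task': 'casing_rolling', 'has_audio': True}},
    {'metadata': {'task': 'casing_rolling', 'has_audio': False}},
    {'metadata': {'task': 'casing_measuring', 'has_audio': True}},
]

task_counts = Counter(ex['metadata']['task'] for ex in examples)
audio_count = sum(ex['metadata']['has_audio'] for ex in examples)

print(task_counts)  # Counter({'casing_rolling': 2, 'casing_measuring': 1})
print(audio_count)  # 2
```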

### Dataset Features

Each example contains:

- **`scene_id`**: Unique scene identifier (e.g., "01_0000")
- **`sync_id`**: Synchronization ID linking synchronized clips
- **`duration_sec`**: Duration of the synchronized clip in seconds
- **`fps`**: Frames per second (default: 30.0)
- **`batch_id`**: Batch identifier
- **`dataset_name`**: Dataset name from config
- **`ego_video`**: Ego-centric view (Hugging Face `Video` type with `decode=False`; stores the path)
- **`third_video`**: Third-person view (Hugging Face `Video` type with `decode=False`; stores the path)
- **`metadata`**: Dictionary containing:
  - `task`: Task identifier
  - `environment`: Environment description
  - `has_audio`: Whether videos contain audio
  - `num_fluxes`: Number of synchronized flux types
  - `flux_names`: List of flux names present
  - `sequence_ids`: List of original sequence IDs
  - `sync_offsets_ms`: List of synchronization offsets (in milliseconds)
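
A lightweight schema check against the fields above (a sketch; `example` stands in for one record from `dataset['train']`, and its values are illustrative):

```python
EXPECTED_KEYS = {'scene_id', 'sync_id', 'duration_sec', 'fps',
                 'batch_id', 'dataset_name', 'ego_video', 'third_video', 'metadata'}
EXPECTED_METADATA_KEYS = {'task', 'environment', 'has_audio', 'num_fluxes',
                          'flux_names', 'sequence_ids', 'sync_offsets_ms'}

def check_example(example):
    """Return the sets of expected keys missing from a record and its metadata."""
    missing = EXPECTED_KEYS - set(example)
    missing_meta = EXPECTED_METADATA_KEYS - set(example.get('metadata', {}))
    return missing, missing_meta

# Stand-in record with the documented fields
example = {
    'scene_id': '01_0000', 'sync_id': 0, 'duration_sec': 4.0, 'fps': 30.0,
    'batch_id': '01', 'dataset_name': 'SAUSAGE-CRAFTING-sample',
    'ego_video': {'path': 'videos/ego/0000.mp4'},
    'third_video': {'path': 'videos/third/0000.mp4'},
    'metadata': {'task': '', 'environment': '', 'has_audio': True,
                 'num_fluxes': 2, 'flux_names': ['ego', 'third'],
                 'sequence_ids': [], 'sync_offsets_ms': []},
}
print(check_example(example))  # (set(), set())
```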

## Additional Notes

**Important**: This dataset uses a unified structure where each example contains all synchronized video streams in separate columns. All examples are in the `train` split.

**Synchronization**: Videos in the same example (same index in the `train` split) are automatically synchronized: they share the same `sync_id` and correspond to the same moment in time.

**Video Paths**: Video paths are stored using Hugging Face's `Video` type with `decode=False`. To access the actual file path, extract the `path` attribute from the Video object (see the examples above).

**Per-Clip Metadata**: Each clip record in `dataset_metadata.json` additionally includes:

- `clip_index`: Clip index within the flux folder
- `duration_sec`: Clip duration in seconds
- `start_time_sec`: Start time in the source video
- `batch_id`, `dataset_name`, `source_video`, `sync_offset_ms`: Additional metadata
|