---
dataset_info:
  features:
  - name: video_path
    dtype: string
  - name: participant
    dtype: string
  - name: camera
    dtype: string
  - name: video
    dtype: string
  - name: labels
    list:
    - name: start
      dtype: int64
    - name: end
      dtype: int64
    - name: label
      dtype: string
  splits:
  - name: s1
    num_bytes: 80320
    num_examples: 284
  - name: s2
    num_bytes: 137069
    num_examples: 506
  - name: s3
    num_bytes: 130705
    num_examples: 532
  - name: s4
    num_bytes: 166758
    num_examples: 667
  download_size: 107741
  dataset_size: 514852
configs:
- config_name: default
  data_files:
  - split: s1
    path: data/s1-*
  - split: s2
    path: data/s2-*
  - split: s3
    path: data/s3-*
  - split: s4
    path: data/s4-*
---
|
|
|
|
|
# 🍳 Breakfast Actions Dataset (HF + WebDataset Ready)

This repository hosts the **Breakfast Actions** dataset metadata and videos, organized for modern deep learning workflows.
It provides:

- 4 evaluation splits (`s1`, `s2`, `s3`, `s4`)
- JSONL metadata describing each video, participant, camera, and frame-level action segments
- Raw AVI videos stored directly on HuggingFace
- Optional WebDataset shards for streaming training

---
|
|
|
|
|
## 📁 Folder Layout

```
Breakfast-Actions/
│
├── Converted_Data/
│   ├── metadata_s1.jsonl
│   ├── metadata_s2.jsonl
│   ├── metadata_s3.jsonl
│   └── metadata_s4.jsonl
│
├── Videos/
│   ├── P03/cam01/*.avi
│   ├── P03/cam02/*.avi
│   ├── P04/cam01/*.avi
│   └── ... (participants P03–P54, multiple cameras)
│
└── WebDataset_Shards/ (optional)
    ├── 000000.tar
    ├── 000001.tar
    └── ...
```

---
## 📄 JSONL Record Format

Each metadata line looks like:

```json
{
  "video_path": "Videos/P03/cam01/P03_coffee.avi",
  "participant": "P03",
  "camera": "cam01",
  "video": "P03_coffee",
  "labels": [
    {"start": 1, "end": 385, "label": "SIL"},
    {"start": 385, "end": 599, "label": "pour_oil"},
    ...
  ]
}
```

All video paths match the directory structure inside the HF repo.
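Note that consecutive segments share a boundary index (the first ends at 385 and the next starts at 385), which suggests 1-indexed, end-exclusive spans. Under that assumption, a small helper (hypothetical, not part of the dataset tooling) can expand the segment list into one label per frame:

```python
def segments_to_frame_labels(labels, num_frames):
    """Expand [{"start", "end", "label"}, ...] into one label per frame.

    Assumes 1-indexed frames where a segment covers start..end-1,
    i.e. boundaries are end-exclusive — an assumption inferred from
    consecutive segments sharing a frame index.
    """
    out = ["SIL"] * num_frames  # background label used by the dataset
    for seg in labels:
        for f in range(seg["start"] - 1, min(seg["end"] - 1, num_frames)):
            out[f] = seg["label"]
    return out
```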
|
|
|
|
|
---

## 🔹 Load Metadata Using HuggingFace Datasets

```python
from datasets import load_dataset

# Each split ships as its own JSONL file, so this already contains
# exactly the s2 videos — no further filtering is needed.
ds = load_dataset("json", data_files="Converted_Data/metadata_s2.jsonl")["train"]
```
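The same files can also be read without any third-party dependency, since JSONL is just line-delimited JSON (a minimal sketch; the path assumes the layout shown above):

```python
import json

def read_jsonl(path):
    """Read line-delimited JSON; one record per non-blank line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# records = read_jsonl("Converted_Data/metadata_s2.jsonl")
```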
|
|
|
|
|
---

## 🔹 Load and Decode a Video

### Using Decord

```python
from decord import VideoReader

item = ds[0]

vr = VideoReader(item["video_path"])
frame0 = vr[0]  # first frame, as a decord NDArray (H, W, C)
```

### Using TorchVision

```python
from torchvision.io import read_video

# video: (T, H, W, C) uint8 tensor; audio: waveform; info: metadata such as fps
video, audio, info = read_video(item["video_path"])
```
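Both decoders index frames from 0, while the `labels` above use 1-indexed frame numbers, so cutting out a labeled clip needs an offset. A sketch using NumPy as a stand-in for the decoded frame array (the end-exclusive convention is an assumption inferred from consecutive segments sharing a boundary index):

```python
import numpy as np

def slice_segment(frames, seg):
    # frames: (T, H, W, C) array; seg["start"]/seg["end"] are 1-indexed
    # and assumed end-exclusive, so shift both bounds down by one.
    return frames[seg["start"] - 1 : seg["end"] - 1]
```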
|
|
|
|
|
---

## 🔹 WebDataset Version (Optional)

If the dataset includes `.tar` shards:

```python
import glob

import jsonlines
import webdataset as wds

# Video paths belonging to split s2, used to filter the shards.
ids = {rec["video_path"] for rec in jsonlines.open("Converted_Data/metadata_s2.jsonl")}

# WebDataset does not expand globs itself, so resolve the shard list first.
shards = sorted(glob.glob("WebDataset_Shards/*.tar"))

dset = (
    wds.WebDataset(shards)
    .decode()  # parses each .json entry into a dict; .avi stays raw bytes
    .select(lambda sample: sample["json"]["video_path"] in ids)
)
```

Each shard contains:

- `xxx.avi` → video bytes
- `xxx.json` → metadata JSON
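WebDataset pairs these two files into one sample by their shared basename: everything up to the first dot after the last slash is the sample key, and the remainder names the field. A small illustration of that grouping rule (a hypothetical helper, not the `webdataset` library's own code):

```python
import re

def base_plus_ext(path):
    """Split a tar member name into (sample key, field name).

    Members whose names differ only after the first dot following the
    last slash — e.g. "P03/P03_coffee.avi" and "P03/P03_coffee.json" —
    become two fields of the same sample.
    """
    match = re.match(r"^((?:.*/)?[^.]+)\.(.*)$", path)
    if match is None:
        return None, None
    return match.group(1), match.group(2)
```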
|
|
|
|
|
---

## 🔹 PyTorch Example

```python
import torch
from torch.utils.data import Dataset, DataLoader
from decord import VideoReader


class BreakfastDataset(Dataset):
    def __init__(self, subset):
        self.subset = subset

    def __len__(self):
        return len(self.subset)

    def __getitem__(self, idx):
        item = self.subset[idx]
        vr = VideoReader(item["video_path"])
        # .asnumpy() converts decord's NDArray so torch can wrap it.
        frames = torch.from_numpy(vr.get_batch([0, 8, 16]).asnumpy())
        return frames, item["labels"]


def collate(batch):
    # labels are variable-length segment lists, which the default
    # collate cannot batch; keep them as a plain Python list.
    frames, labels = zip(*batch)
    return torch.stack(frames), list(labels)


loader = DataLoader(BreakfastDataset(ds), batch_size=4, collate_fn=collate)
```
|
|
|
|
|
---

## 🔢 Splits Description

The dataset is partitioned by participant ID:

| Split | Participants |
|-------|--------------|
| **s1** | P03–P15 |
| **s2** | P16–P28 |
| **s3** | P29–P41 |
| **s4** | P42–P54 |

Each split has its own metadata JSONL file.
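Because the partition is purely by participant number, the split for any record can be recovered from its `participant` field (a hypothetical helper based on the table above):

```python
def split_for_participant(participant):
    """Map a participant ID like "P17" to its evaluation split."""
    n = int(participant.lstrip("P"))
    for split, lo, hi in [("s1", 3, 15), ("s2", 16, 28), ("s3", 29, 41), ("s4", 42, 54)]:
        if lo <= n <= hi:
            return split
    raise ValueError(f"unknown participant: {participant}")
```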
|
|
|
|
|
---

## 📖 Citation

If you use the Breakfast Actions dataset, please cite:

```bibtex
@inproceedings{kuehne2014language,
  title={The language of actions: Recovering the syntax and semantics of goal-directed human activities},
  author={Kuehne, Hildegard and Arslan, Ali and Serre, Thomas},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={780--787},
  year={2014}
}
```

---
|
|
|