---
dataset_info:
  features:
    - name: video_path
      dtype: string
    - name: participant
      dtype: string
    - name: camera
      dtype: string
    - name: video
      dtype: string
    - name: labels
      list:
        - name: start
          dtype: int64
        - name: end
          dtype: int64
        - name: label
          dtype: string
  splits:
    - name: s1
      num_bytes: 80320
      num_examples: 284
    - name: s2
      num_bytes: 137069
      num_examples: 506
    - name: s3
      num_bytes: 130705
      num_examples: 532
    - name: s4
      num_bytes: 166758
      num_examples: 667
  download_size: 107741
  dataset_size: 514852
configs:
  - config_name: default
    data_files:
      - split: s1
        path: data/s1-*
      - split: s2
        path: data/s2-*
      - split: s3
        path: data/s3-*
      - split: s4
        path: data/s4-*
---

# 🍳 Breakfast Actions Dataset (HF + WebDataset Ready)

This repository hosts the Breakfast Actions dataset metadata and videos, organized for modern deep learning workflows.
It provides:

  • 4 evaluation splits (s1, s2, s3, s4)
  • JSONL metadata describing each video, participant, camera, and frame-level action segments
  • Raw AVI videos stored directly on HuggingFace
  • Optional WebDataset shards for streaming training

## 📁 Folder Layout

```
Breakfast-Actions/
│
├── Converted_Data/
│     ├── metadata_s1.jsonl
│     ├── metadata_s2.jsonl
│     ├── metadata_s3.jsonl
│     └── metadata_s4.jsonl
│
├── Videos/
│     ├── P03/cam01/*.avi
│     ├── P03/cam02/*.avi
│     ├── P04/cam01/*.avi
│     └── ... (participants P03–P54, multiple cameras)
│
└── WebDataset_Shards/   (optional)
       ├── 000000.tar
       ├── 000001.tar
       └── ...
```

## 📝 JSONL Record Format

Each metadata line looks like:

```json
{
  "video_path": "Videos/P03/cam01/P03_coffee.avi",
  "participant": "P03",
  "camera": "cam01",
  "video": "P03_coffee",
  "labels": [
      {"start": 1, "end": 385, "label": "SIL"},
      {"start": 385, "end": 599, "label": "pour_oil"},
      ...
  ]
}
```

All video paths match the directory structure inside the HF repo.
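Since each segment's `start` and `end` are frame indices, per-action durations fall out directly. A quick stdlib sketch using the sample record above (the record is inlined here purely for illustration):

```python
import json

# The example record from above, inlined for illustration
record = json.loads("""
{"video_path": "Videos/P03/cam01/P03_coffee.avi",
 "participant": "P03", "camera": "cam01", "video": "P03_coffee",
 "labels": [{"start": 1, "end": 385, "label": "SIL"},
            {"start": 385, "end": 599, "label": "pour_oil"}]}
""")

# Duration of each labelled segment in frames
durations = {seg["label"]: seg["end"] - seg["start"] for seg in record["labels"]}
print(durations)  # {'SIL': 384, 'pour_oil': 214}
```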


## 🔹 Load Metadata Using HuggingFace Datasets

```python
from datasets import load_dataset

# Each metadata file corresponds to one evaluation split, so loading
# metadata_s2.jsonl already yields exactly the s2 videos
ds = load_dataset("json", data_files="Converted_Data/metadata_s2.jsonl")["train"]
```

## 🔹 Load and Decode a Video

### Using Decord

```python
from decord import VideoReader

item = ds[0]
vr = VideoReader(item["video_path"])  # path is relative to the repo root
frame0 = vr[0]                        # first frame as a decord NDArray
```

### Using TorchVision

```python
from torchvision.io import read_video

# pts_unit="sec" avoids the deprecation warning about pts-based timestamps
video, audio, info = read_video(item["video_path"], pts_unit="sec")
```
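Because `start`/`end` in `labels` are frame indices, the frames for one action segment can be gathered by building an index list and passing it to `vr.get_batch`. The helper below is a sketch; `segment_indices` and its sampling scheme are illustrative, not part of the dataset:

```python
def segment_indices(start, end, num_samples=8):
    """Roughly evenly spaced frame indices covering [start, end)."""
    span = max(end - start, 1)
    step = max(span // num_samples, 1)
    return list(range(start, end, step))[:num_samples]

# Indices for the "pour_oil" segment from the example record,
# ready to be passed to decord via vr.get_batch(idx)
idx = segment_indices(385, 599)
print(idx)  # [385, 411, 437, 463, 489, 515, 541, 567]
```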

## 🔹 WebDataset Version (Optional)

If the dataset includes .tar shards:

```python
import webdataset as wds
import jsonlines

# Use a set for O(1) membership tests
ids = {rec["video_path"] for rec in jsonlines.open("Converted_Data/metadata_s2.jsonl")}

dset = (
    wds.WebDataset("WebDataset_Shards/*.tar")
    .decode()  # parses each sample's .json entry into a dict
    .select(lambda s: s["json"]["video_path"] in ids)
)
```

Each shard contains:

  • xxx.avi β†’ video bytes
  • xxx.json β†’ metadata JSON

## 🔹 PyTorch Example

```python
import torch
from torch.utils.data import Dataset, DataLoader
from decord import VideoReader

class BreakfastDataset(Dataset):
    def __init__(self, subset):
        self.subset = subset

    def __len__(self):
        return len(self.subset)

    def __getitem__(self, idx):
        item = self.subset[idx]
        vr = VideoReader(item["video_path"])
        # Clamp the sampled indices so short clips do not go out of range
        indices = [min(i, len(vr) - 1) for i in (0, 8, 16)]
        frames = torch.from_numpy(vr.get_batch(indices).asnumpy())  # (T, H, W, C)
        return frames, item["labels"]

# Videos have different numbers of action segments, so the default
# collate function would fail; keep each sample as-is instead
loader = DataLoader(BreakfastDataset(ds), batch_size=4, collate_fn=lambda batch: batch)
```

## 🔒 Splits Description

The dataset is partitioned by participant ID:

| Split | Participants |
|-------|--------------|
| s1    | P03–P15      |
| s2    | P16–P28      |
| s3    | P29–P41      |
| s4    | P42–P54      |

Each split has its own metadata JSONL file.
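The table above implies a simple participant-to-split mapping. A small sketch (the `split_for` helper is illustrative, not part of the dataset's tooling):

```python
def split_for(participant):
    """Map a participant ID such as 'P17' to its evaluation split."""
    n = int(participant.lstrip("P"))
    if 3 <= n <= 15:
        return "s1"
    if 16 <= n <= 28:
        return "s2"
    if 29 <= n <= 41:
        return "s3"
    if 42 <= n <= 54:
        return "s4"
    raise ValueError(f"unknown participant: {participant}")

print(split_for("P17"))  # s2
```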


## 📚 Citation

If you use the Breakfast Actions dataset, please cite:

```bibtex
@inproceedings{kuehne2014language,
  title={The language of actions: Recovering the syntax and semantics of goal-directed human activities},
  author={Kuehne, Hildegard and Arslan, Ali and Serre, Thomas},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={780--787},
  year={2014}
}
```