---
dataset_info:
  features:
  - name: video_path
    dtype: string
  - name: participant
    dtype: string
  - name: camera
    dtype: string
  - name: video
    dtype: string
  - name: labels
    list:
    - name: start
      dtype: int64
    - name: end
      dtype: int64
    - name: label
      dtype: string
  splits:
  - name: s1
    num_bytes: 80320
    num_examples: 284
  - name: s2
    num_bytes: 137069
    num_examples: 506
  - name: s3
    num_bytes: 130705
    num_examples: 532
  - name: s4
    num_bytes: 166758
    num_examples: 667
  download_size: 107741
  dataset_size: 514852
configs:
- config_name: default
  data_files:
  - split: s1
    path: data/s1-*
  - split: s2
    path: data/s2-*
  - split: s3
    path: data/s3-*
  - split: s4
    path: data/s4-*
---

# 🍳 Breakfast Actions Dataset (HF + WebDataset Ready)

This repository hosts the **Breakfast Actions** dataset metadata and videos, organized for modern deep learning workflows.

It provides:

- 4 evaluation splits (`s1`, `s2`, `s3`, `s4`)
- JSONL metadata describing each video, participant, camera, and frame-level action segments
- Raw AVI videos stored directly on HuggingFace
- Optional WebDataset shards for streaming training

---

## 📁 Folder Layout

```
Breakfast-Actions/
│
├── Converted_Data/
│   ├── metadata_s1.jsonl
│   ├── metadata_s2.jsonl
│   ├── metadata_s3.jsonl
│   └── metadata_s4.jsonl
│
├── Videos/
│   ├── P03/cam01/*.avi
│   ├── P03/cam02/*.avi
│   ├── P04/cam01/*.avi
│   └── ... (participants P03–P54, multiple cameras)
│
└── WebDataset_Shards/   (optional)
    ├── 000000.tar
    ├── 000001.tar
    └── ...
```

---

## 📝 JSONL Record Format

Each metadata line looks like:

```json
{
  "video_path": "Videos/P03/cam01/P03_coffee.avi",
  "participant": "P03",
  "camera": "cam01",
  "video": "P03_coffee",
  "labels": [
    {"start": 1, "end": 385, "label": "SIL"},
    {"start": 385, "end": 599, "label": "pour_oil"},
    ...
  ]
}
```

All video paths match the directory structure inside the HF repo.
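As a quick sanity check on the record format, you can parse one metadata line with the standard library and measure each segment's length in frames. This is just a sketch using the example record above, not part of the repo's tooling:

```python
import json

# One metadata line in the format described above (copied from the example;
# the trailing "..." segments are omitted so the string is valid JSON).
record = json.loads("""
{
  "video_path": "Videos/P03/cam01/P03_coffee.avi",
  "participant": "P03",
  "camera": "cam01",
  "video": "P03_coffee",
  "labels": [
    {"start": 1, "end": 385, "label": "SIL"},
    {"start": 385, "end": 599, "label": "pour_oil"}
  ]
}
""")

# Turn the segment list into (label, frame_count) pairs.
durations = [(seg["label"], seg["end"] - seg["start"]) for seg in record["labels"]]
print(durations)  # [('SIL', 384), ('pour_oil', 214)]
```

Note that consecutive segments share a boundary frame index (`385` ends one segment and starts the next), so treating `end` as exclusive keeps the segments non-overlapping.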
---

## 🔹 Load Metadata Using HuggingFace Datasets

```python
from datasets import load_dataset

# Each split has its own metadata file, so loading metadata_s2.jsonl
# already yields exactly the videos of split s2 -- no filtering needed.
ds = load_dataset("json", data_files="Converted_Data/metadata_s2.jsonl")["train"]
```

---

## 🔹 Load and Decode a Video

### Using Decord

```python
from decord import VideoReader

item = ds[0]
vr = VideoReader(item["video_path"])
frame0 = vr[0]  # first frame as an (H, W, C) NDArray
```

### Using TorchVision

```python
from torchvision.io import read_video

video, audio, info = read_video(item["video_path"], pts_unit="sec")
```

---

## 🔹 WebDataset Version (Optional)

If the dataset includes `.tar` shards:

```python
import glob
import jsonlines
import webdataset as wds

# Video paths of the split we want to keep
ids = {rec["video_path"] for rec in jsonlines.open("Converted_Data/metadata_s2.jsonl")}

shards = sorted(glob.glob("WebDataset_Shards/*.tar"))
dset = (
    wds.WebDataset(shards)
    .decode()  # parses each sample's .json entry into a dict
    .select(lambda s: s["json"]["video_path"] in ids)
)
```

Each shard contains:

- `xxx.avi` → video bytes
- `xxx.json` → metadata JSON

---

## 🔹 PyTorch Example

```python
from torch.utils.data import Dataset, DataLoader
from decord import VideoReader

class BreakfastDataset(Dataset):
    def __init__(self, subset):
        self.subset = subset

    def __len__(self):
        return len(self.subset)

    def __getitem__(self, idx):
        item = self.subset[idx]
        vr = VideoReader(item["video_path"])
        frames = vr.get_batch([0, 8, 16]).asnumpy()  # sample three frames
        return frames, item["labels"]

# Each video has a different number of label segments, so the default
# collate function cannot stack them; return each batch as a plain list.
loader = DataLoader(BreakfastDataset(ds), batch_size=4, collate_fn=lambda b: b)
```

---

## 🔢 Splits Description

The dataset is partitioned by participant ID:

| Split | Participants |
|-------|--------------|
| **s1** | P03–P15 |
| **s2** | P16–P28 |
| **s3** | P29–P41 |
| **s4** | P42–P54 |

Each split has its own metadata JSONL file.
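For frame-wise training objectives (e.g. temporal action segmentation), the segment list can be expanded into one label per frame. Below is a minimal sketch; the helper name is hypothetical, frame indices are assumed 1-based, and `end` is assumed exclusive (adjacent segments in the metadata share a boundary index):

```python
def segments_to_frame_labels(labels, num_frames, background="SIL"):
    """Expand [{start, end, label}, ...] segments into one label per frame.

    Assumes 1-based, end-exclusive frame indices; frames not covered by
    any segment fall back to `background`.
    """
    frame_labels = [background] * num_frames
    for seg in labels:
        # Shift to 0-based indices and clip to the video length.
        for f in range(seg["start"] - 1, min(seg["end"] - 1, num_frames)):
            frame_labels[f] = seg["label"]
    return frame_labels

segs = [
    {"start": 1, "end": 385, "label": "SIL"},
    {"start": 385, "end": 599, "label": "pour_oil"},
]
dense = segments_to_frame_labels(segs, 598)
print(dense[383], dense[384])  # SIL pour_oil
```

The resulting list aligns index-for-index with the decoded frames, so it can be sliced with the same frame indices passed to `vr.get_batch(...)`.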
---

## 📚 Citation

If you use the Breakfast Actions dataset, please cite:

```bibtex
@inproceedings{kuehne2014language,
  title={The language of actions: Recovering the syntax and semantics of goal-directed human activities},
  author={Kuehne, Hildegard and Arslan, Ali and Serre, Thomas},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={780--787},
  year={2014}
}
```

---