---
dataset_info:
  features:
  - name: video_path
    dtype: string
  - name: participant
    dtype: string
  - name: camera
    dtype: string
  - name: video
    dtype: string
  - name: labels
    list:
    - name: start
      dtype: int64
    - name: end
      dtype: int64
    - name: label
      dtype: string
  splits:
  - name: s1
    num_bytes: 80320
    num_examples: 284
  - name: s2
    num_bytes: 137069
    num_examples: 506
  - name: s3
    num_bytes: 130705
    num_examples: 532
  - name: s4
    num_bytes: 166758
    num_examples: 667
  download_size: 107741
  dataset_size: 514852
configs:
- config_name: default
  data_files:
  - split: s1
    path: data/s1-*
  - split: s2
    path: data/s2-*
  - split: s3
    path: data/s3-*
  - split: s4
    path: data/s4-*
---

# 🍳 Breakfast Actions Dataset (HF + WebDataset Ready)

This repository hosts the **Breakfast Actions** dataset metadata and videos, organized for modern deep learning workflows.  
It provides:

- 4 evaluation splits (`s1`, `s2`, `s3`, `s4`)  
- JSONL metadata describing each video, participant, camera, and frame-level action segments  
- Raw AVI videos stored directly on Hugging Face  
- Optional WebDataset shards for streaming training

---

## πŸ“ Folder Layout

```
Breakfast-Actions/
│
├── Converted_Data/
│     ├── metadata_s1.jsonl
│     ├── metadata_s2.jsonl
│     ├── metadata_s3.jsonl
│     └── metadata_s4.jsonl
│
├── Videos/
│     ├── P03/cam01/*.avi
│     ├── P03/cam02/*.avi
│     ├── P04/cam01/*.avi
│     └── ... (participants P03–P54, multiple cameras)
│
└── WebDataset_Shards/   (optional)
       ├── 000000.tar
       ├── 000001.tar
       └── ...
```

---

## πŸ“ JSONL Record Format

Each metadata line looks like:

```json
{
  "video_path": "Videos/P03/cam01/P03_coffee.avi",
  "participant": "P03",
  "camera": "cam01",
  "video": "P03_coffee",
  "labels": [
      {"start": 1, "end": 385, "label": "SIL"},
      {"start": 385, "end": 599, "label": "pour_oil"},
      ...
  ]
}
```

All video paths match the directory structure inside the HF repo.
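
For per-frame supervision you usually want to expand the segments into one label per frame. A minimal sketch (`frame_labels` is a hypothetical helper; it treats `start` as 1-based and `end` as exclusive, since adjacent segments in the records share a boundary value):

```python
def frame_labels(segments, num_frames, background="SIL"):
    """Expand [start, end) segments (1-based start, exclusive end) into a per-frame label list."""
    labels = [background] * num_frames
    for seg in segments:
        for f in range(seg["start"] - 1, min(seg["end"] - 1, num_frames)):
            labels[f] = seg["label"]
    return labels

record_labels = [
    {"start": 1, "end": 385, "label": "SIL"},
    {"start": 385, "end": 599, "label": "pour_oil"},
]
per_frame = frame_labels(record_labels, 598)
```

Under this convention frame 385 (index 384) carries `pour_oil`; if the boundary convention turns out to be inclusive instead, shift the range end by one.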

---

## 🔹 Load Metadata Using Hugging Face Datasets

```python
from datasets import load_dataset

# Each JSONL file corresponds to one evaluation split; this loads s2.
ds = load_dataset("json", data_files="Converted_Data/metadata_s2.jsonl")["train"]
```
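
The metadata can also be read without the `datasets` library; `load_jsonl` and `by_participant` below are hypothetical helpers for plain-Python filtering:

```python
import json

def load_jsonl(path):
    """Read one record per line from a JSONL metadata file."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def by_participant(records, participant):
    """Keep only the records filmed with the given participant ID."""
    return [r for r in records if r["participant"] == participant]

# e.g. records = load_jsonl("Converted_Data/metadata_s2.jsonl")
#      p16_records = by_participant(records, "P16")
```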

---

## 🔹 Load and Decode a Video

### Using Decord

```python
from decord import VideoReader

item = ds[0]

vr = VideoReader(item["video_path"])  # path is relative to the repository root
frame0 = vr[0]                        # first frame as an HWC uint8 array
```

### Using TorchVision

```python
from torchvision.io import read_video

# pts_unit="sec" avoids the deprecation warning for the default "pts" unit.
video, audio, info = read_video(item["video_path"], pts_unit="sec")
```
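
To decode only one labeled segment rather than the whole clip, the frame indices can be converted to seconds and passed to `read_video` as `start_pts`/`end_pts` with `pts_unit="sec"`. `segment_to_seconds` is a hypothetical helper; take the real frame rate from `info["video_fps"]`:

```python
def segment_to_seconds(segment, fps):
    """Convert a 1-based [start, end] frame segment to (start, end) in seconds."""
    return (segment["start"] - 1) / fps, (segment["end"] - 1) / fps

# Frames 16-31 at 15 fps span seconds 1.0-2.0.
start_s, end_s = segment_to_seconds({"start": 16, "end": 31, "label": "pour_oil"}, fps=15.0)
# video, _, info = read_video(item["video_path"], start_pts=start_s, end_pts=end_s, pts_unit="sec")
```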

---

## 🔹 WebDataset Version (Optional)

If the dataset includes `.tar` shards:

```python
import jsonlines
import webdataset as wds

ids = {rec["video_path"] for rec in jsonlines.open("Converted_Data/metadata_s2.jsonl")}

# WebDataset expects brace notation rather than a shell glob; adjust the
# range to the number of shards actually present.
dset = (
    wds.WebDataset("WebDataset_Shards/{000000..000001}.tar")
    .decode()  # parses .json entries into dicts
    .select(lambda s: s["json"]["video_path"] in ids)
)
```

Each shard contains:

- `xxx.avi` β†’ video bytes  
- `xxx.json` β†’ metadata JSON  

---

## 🔹 PyTorch Example

```python
import torch
from torch.utils.data import Dataset, DataLoader
from decord import VideoReader

class BreakfastDataset(Dataset):
    def __init__(self, subset):
        self.subset = subset

    def __len__(self):
        return len(self.subset)

    def __getitem__(self, idx):
        item = self.subset[idx]
        vr = VideoReader(item["video_path"])
        # Sample three frames; clamp the indices for very short clips.
        idxs = [min(i, len(vr) - 1) for i in (0, 8, 16)]
        frames = torch.from_numpy(vr.get_batch(idxs).asnumpy())
        return frames, item["labels"]

# labels are variable-length lists of segments, so the default collate
# function cannot stack them; return each batch as a plain list instead.
loader = DataLoader(BreakfastDataset(ds), batch_size=4, collate_fn=list)
```

---

## 🔒 Splits Description

The dataset is partitioned by participant ID:

| Split | Participants |
|-------|--------------|
| **s1** | P03–P15 |
| **s2** | P16–P28 |
| **s3** | P29–P41 |
| **s4** | P42–P54 |

Each split has its own metadata JSONL file.
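
The Breakfast benchmark is conventionally evaluated with leave-one-split-out cross-validation: train on three splits, test on the held-out one, and average results over the four folds. A small sketch:

```python
def cross_val_folds(splits=("s1", "s2", "s3", "s4")):
    """Return (train_splits, test_split) pairs for leave-one-split-out evaluation."""
    return [([s for s in splits if s != test], test) for test in splits]

for train_splits, test_split in cross_val_folds():
    # load metadata_{name}.jsonl for each name in train_splits, evaluate on test_split
    pass
```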

---

## 📚 Citation

If you use the Breakfast Actions dataset, please cite:

```bibtex
@inproceedings{kuehne2014language,
  title={The language of actions: Recovering the syntax and semantics of goal-directed human activities},
  author={Kuehne, Hildegard and Arslan, Ali and Serre, Thomas},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={780--787},
  year={2014}
}
```

---