---
dataset_info:
features:
- name: video_path
dtype: string
- name: label
dtype: string
- name: subset
dtype: int64
splits:
- name: split1
num_bytes: 636609
num_examples: 6766
- name: split2
num_bytes: 636609
num_examples: 6766
- name: split3
num_bytes: 636609
num_examples: 6766
download_size: 351201
dataset_size: 1909827
configs:
- config_name: default
data_files:
- split: split1
path: data/split1-*
- split: split2
path: data/split2-*
- split: split3
path: data/split3-*
---
# HMDB51 Dataset (with Protocol Splits + Video Streaming Support)
This repository hosts the **HMDB51** human action recognition dataset in a format optimized for modern deep learning research.
It provides:
- Three official evaluation protocols (`split1`, `split2`, `split3`)
- JSONL metadata files containing action labels and train/test assignments
- Raw video files stored directly on the Hugging Face Hub
- Optional **WebDataset** tar shards for high-performance streaming
---
## Folder Layout
```
HMDB51/
│
├── metadata_split1.jsonl
├── metadata_split2.jsonl
├── metadata_split3.jsonl
│
├── Videos/
│   ├── brush_hair/
│   ├── climb/
│   └── ... (all 51 classes)
│
└── webdataset/
    ├── 000000.tar
    ├── 000001.tar
    └── ...
```
Each JSONL record:
```json
{
"video_path": "Videos/brush_hair/example.avi",
"label": "brush_hair",
"subset": 1
}
```
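Because the metadata is plain JSONL (one JSON object per line), it can also be parsed with the standard library alone, without the `datasets` dependency. A minimal sketch; the two records below are made up for illustration:

```python
import json
import os
import tempfile

# Hypothetical records standing in for one of the metadata_split*.jsonl files.
records = [
    {"video_path": "Videos/brush_hair/a.avi", "label": "brush_hair", "subset": 1},
    {"video_path": "Videos/climb/b.avi", "label": "climb", "subset": 2},
]

# Write them in JSONL form: one JSON object per line.
path = os.path.join(tempfile.mkdtemp(), "metadata.jsonl")
with open(path, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read back line by line and filter to the training subset (subset == 1).
with open(path) as f:
    loaded = [json.loads(line) for line in f]
train = [r for r in loaded if r["subset"] == 1]
```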
---
## 1. Load Metadata (HF-native)
```python
from datasets import load_dataset

# Each metadata file is plain JSONL, so the generic "json" loader works.
ds = load_dataset("json", data_files="metadata_split2.jsonl")["train"]

# Official HMDB51 protocol convention: subset 1 = train, 2 = test (0 = unused).
train = ds.filter(lambda x: x["subset"] == 1)
test = ds.filter(lambda x: x["subset"] == 2)
```
---
## 2. Load a Video File
### Decord
```python
from decord import VideoReader

vr = VideoReader(train[0]["video_path"])  # path is relative to the repo root
frame0 = vr[0]  # first frame as a decord NDArray of shape (H, W, C)
```
### TorchVision
```python
from torchvision.io import read_video

# Returns (video [T, H, W, C] uint8, audio, info dict); pts_unit="sec"
# avoids the deprecation warning for the default "pts" unit.
video, audio, info = read_video(train[0]["video_path"], pts_unit="sec")
```
---
## 3. WebDataset Version (Optional)
```python
import os

import jsonlines
import webdataset as wds

# WebDataset keys drop the file extension, so strip it when building the
# lookup; use a set for O(1) membership tests.
ids = {
    os.path.splitext(rec["video_path"])[0]
    for rec in jsonlines.open("metadata_split2.jsonl")
    if rec["subset"] == 1
}

# WebDataset expands brace patterns, not shell globs; adjust the upper
# bound to match the number of shards in webdataset/.
train_wds = (
    wds.WebDataset("webdataset/{000000..000001}.tar")
    .select(lambda s: s["__key__"] in ids)
)
```
---
## 4. PyTorch DataLoader Example
```python
import decord
from decord import VideoReader
from torch.utils.data import Dataset, DataLoader

# Have decord return torch tensors so the default collate function works.
decord.bridge.set_bridge("torch")

class VideoDataset(Dataset):
    def __init__(self, subset):
        self.subset = subset

    def __len__(self):
        return len(self.subset)

    def __getitem__(self, i):
        item = self.subset[i]
        vr = VideoReader(item["video_path"])
        # Sample three fixed frames; a real pipeline should also handle
        # clips shorter than 17 frames.
        return vr.get_batch([0, 8, 16]), item["label"]

loader = DataLoader(VideoDataset(train), batch_size=4)
```
---
## 5. Protocol Files
```
metadata_split1.jsonl
metadata_split2.jsonl
metadata_split3.jsonl
```
Each matches the official HMDB51 evaluation protocol.
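In the official split files, each clip is marked 0 (unused), 1 (train), or 2 (test), and the `subset` field preserves that convention. A small sketch tallying membership per subset; the records below are illustrative, not the real counts:

```python
from collections import Counter

def subset_counts(records):
    """Tally how many clips fall into each protocol subset (0/1/2)."""
    return Counter(r["subset"] for r in records)

# Hypothetical records standing in for one metadata_split*.jsonl file.
records = [
    {"video_path": "Videos/climb/x.avi", "label": "climb", "subset": 1},
    {"video_path": "Videos/climb/y.avi", "label": "climb", "subset": 2},
    {"video_path": "Videos/climb/z.avi", "label": "climb", "subset": 1},
]
counts = subset_counts(records)  # e.g. Counter({1: 2, 2: 1})
```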
---
## Citation
```bibtex
@inproceedings{kuehne2011hmdb,
  title={HMDB: a large video database for human motion recognition},
  author={Kuehne, Hildegard and Jhuang, Hueihan and Garrote, Est{\'i}baliz and Poggio, Tomaso and Serre, Thomas},
  booktitle={2011 International Conference on Computer Vision},
pages={2556--2563},
year={2011},
organization={IEEE}
}
```
---