---
dataset_info:
  features:
  - name: video_path
    dtype: string
  - name: label
    dtype: string
  - name: subset
    dtype: int64
  splits:
  - name: split1
    num_bytes: 636609
    num_examples: 6766
  - name: split2
    num_bytes: 636609
    num_examples: 6766
  - name: split3
    num_bytes: 636609
    num_examples: 6766
  download_size: 351201
  dataset_size: 1909827
configs:
- config_name: default
  data_files:
  - split: split1
    path: data/split1-*
  - split: split2
    path: data/split2-*
  - split: split3
    path: data/split3-*
---

# HMDB51 Dataset (with Protocol Splits + Video Streaming Support)

This repository hosts the **HMDB51** human action recognition dataset in a format optimized for modern deep learning research.
It provides:

- Three official evaluation protocols (`split1`, `split2`, `split3`)
- JSONL metadata files containing action labels and train/test assignments
- Raw video files stored directly on the Hugging Face Hub
- Optional **WebDataset** tar shards for high-performance streaming

---

## Folder Layout

```
HMDB51/
│
├── metadata_split1.jsonl
├── metadata_split2.jsonl
├── metadata_split3.jsonl
│
├── Videos/
│   ├── brush_hair/
│   ├── climb/
│   └── ... (all 51 classes)
│
└── webdataset/
    ├── 000000.tar
    ├── 000001.tar
    └── ...
```

Each JSONL record:

```json
{
  "video_path": "Videos/brush_hair/example.avi",
  "label": "brush_hair",
  "subset": 1
}
```
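
Because the metadata is plain JSONL, it can also be read with nothing but the standard library. A minimal sketch (the inline records below are hypothetical examples mirroring the format above; `subset == 1` is train and `subset == 2` is test, per the convention used throughout this card):

```python
import json

# Each line of a metadata_split*.jsonl file is one JSON object.
# Hypothetical sample lines, for illustration only:
lines = [
    '{"video_path": "Videos/brush_hair/example.avi", "label": "brush_hair", "subset": 1}',
    '{"video_path": "Videos/climb/example.avi", "label": "climb", "subset": 2}',
]

records = [json.loads(line) for line in lines]

# subset == 1 -> training, subset == 2 -> testing
train = [r for r in records if r["subset"] == 1]
test = [r for r in records if r["subset"] == 2]
```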

---

## 1. Load Metadata (HF-native)

```python
from datasets import load_dataset

ds = load_dataset("json", data_files="metadata_split2.jsonl")["train"]

train = ds.filter(lambda x: x["subset"] == 1)
test = ds.filter(lambda x: x["subset"] == 2)
```
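
Training code usually needs integer class ids rather than label strings. One simple, reproducible way to build the mapping is to sort the distinct labels first (a sketch in pure Python; the three-label list is a hypothetical stand-in for the full 51 classes, which in practice you would collect with something like `sorted(set(ds["label"]))`):

```python
# Hypothetical label list standing in for the 51 HMDB51 classes.
labels = ["wave", "brush_hair", "climb", "brush_hair"]

# Sorting the unique labels makes the id assignment deterministic
# across runs and machines.
label_to_id = {name: i for i, name in enumerate(sorted(set(labels)))}
```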

---

## 2. Load a Video File

### Decord

```python
from decord import VideoReader

vr = VideoReader(train[0]["video_path"])
frame0 = vr[0]
```

### TorchVision

```python
from torchvision.io import read_video

video, audio, info = read_video(train[0]["video_path"])
```
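
Clip lengths vary across HMDB51, so hard-coded frame indices can fall out of range on short videos. A common remedy (a sketch, not part of this repository) is to spread a fixed number of indices uniformly over the clip and pass them to a reader, e.g. `vr.get_batch(uniform_indices(len(vr), 16))` with Decord:

```python
def uniform_indices(num_frames: int, num_samples: int) -> list[int]:
    """Pick `num_samples` frame indices spread evenly over a clip of
    `num_frames` frames, including both endpoints when possible."""
    if num_samples >= num_frames:
        # Short clip: just take every frame.
        return list(range(num_frames))
    if num_samples == 1:
        # Single sample: take the middle frame.
        return [num_frames // 2]
    step = (num_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]
```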

---

## 3. WebDataset Version (Optional)

```python
import webdataset as wds
import jsonlines

# Collect training video paths for this protocol split. WebDataset keys
# omit file extensions, so the shard keys are assumed to match
# `video_path` minus its extension. A set gives O(1) membership tests.
ids = {
    rec["video_path"].rsplit(".", 1)[0]
    for rec in jsonlines.open("metadata_split2.jsonl")
    if rec["subset"] == 1
}

train_wds = wds.WebDataset("webdataset/*.tar").select(lambda s: s["__key__"] in ids)
```

---

## 4. PyTorch DataLoader Example

```python
from torch.utils.data import Dataset, DataLoader
from decord import VideoReader


class VideoDataset(Dataset):
    def __init__(self, subset):
        self.subset = subset

    def __len__(self):
        return len(self.subset)

    def __getitem__(self, i):
        item = self.subset[i]
        vr = VideoReader(item["video_path"])
        # Decode three frames; clips shorter than 17 frames need index
        # clamping or uniform sampling instead of these fixed indices.
        frames = vr.get_batch([0, 8, 16])
        return frames, item["label"]


loader = DataLoader(VideoDataset(train), batch_size=4)
```
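
If you sample a variable number of frames per clip, the default batching cannot stack clips of different lengths into one tensor. One simple remedy is a custom `collate_fn` for the `DataLoader` that trims every clip in the batch to the shortest length before stacking (a framework-agnostic sketch using plain lists for illustration; `trim_to_shortest` is a hypothetical helper, not part of this repository):

```python
def trim_to_shortest(clips):
    """Trim each clip (a sequence of frames) to the batch's minimum
    length so all clips can be stacked into a single batch tensor."""
    min_len = min(len(clip) for clip in clips)
    return [clip[:min_len] for clip in clips]
```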

---

## 5. Protocol Files

```
metadata_split1.jsonl
metadata_split2.jsonl
metadata_split3.jsonl
```

Each matches the official HMDB51 evaluation protocol.

---

## Citation

```bibtex
@inproceedings{kuehne2011hmdb,
  title={HMDB: a large video database for human motion recognition},
  author={Kuehne, Hildegard and Jhuang, Hueihan and Garrote, Est{\'\i}baliz and Poggio, Tomaso and Serre, Thomas},
  booktitle={2011 International Conference on Computer Vision},
  pages={2556--2563},
  year={2011},
  organization={IEEE}
}
```

---