dataset_info:
  features:
    - name: video_path
      dtype: string
    - name: label
      dtype: string
    - name: subset
      dtype: int64
  splits:
    - name: split1
      num_bytes: 636609
      num_examples: 6766
    - name: split2
      num_bytes: 636609
      num_examples: 6766
    - name: split3
      num_bytes: 636609
      num_examples: 6766
  download_size: 351201
  dataset_size: 1909827
configs:
  - config_name: default
    data_files:
      - split: split1
        path: data/split1-*
      - split: split2
        path: data/split2-*
      - split: split3
        path: data/split3-*

πŸ“˜ HMDB51 Dataset (with Protocol Splits + Video Streaming Support)

This repository hosts the HMDB51 human action recognition dataset in a format optimized for modern deep learning research.
It provides:

  • Three official evaluation protocols (split1, split2, split3)
  • JSONL metadata files containing action labels and train/test assignments
  • Raw video files stored directly on the Hugging Face Hub
  • Optional WebDataset tar shards for high-performance streaming

πŸ“ Folder Layout

HMDB51/
β”‚
β”œβ”€β”€ metadata_split1.jsonl
β”œβ”€β”€ metadata_split2.jsonl
β”œβ”€β”€ metadata_split3.jsonl
β”‚
β”œβ”€β”€ Videos/
β”‚    β”œβ”€β”€ brush_hair/
β”‚    β”œβ”€β”€ climb/
β”‚    └── ... (all 51 classes)
β”‚
└── webdataset/
     β”œβ”€β”€ 000000.tar
     β”œβ”€β”€ 000001.tar
     └── ...

Each JSONL record has the following form, where subset encodes the protocol assignment (1 = train, 2 = test):

{
  "video_path": "Videos/brush_hair/example.avi",
  "label": "brush_hair",
  "subset": 1
}
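Each line is a standalone JSON object, so the metadata files can be inspected with nothing but the standard library:

```python
import json

# One record in the format shown above, parsed in isolation
line = '{"video_path": "Videos/brush_hair/example.avi", "label": "brush_hair", "subset": 1}'
rec = json.loads(line)
print(rec["label"], rec["subset"])  # brush_hair 1
```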

πŸ”Ή 1. Load Metadata (HF-native)

from datasets import load_dataset
ds = load_dataset("json", data_files="metadata_split2.jsonl")["train"]
train = ds.filter(lambda x: x["subset"] == 1)
test  = ds.filter(lambda x: x["subset"] == 2)

πŸ”Ή 2. Load a Video File

Decord

from decord import VideoReader
vr = VideoReader(train[0]["video_path"])
frame0 = vr[0]

TorchVision

from torchvision.io import read_video
# pts_unit="sec" avoids the deprecation warning for the default unit
video, audio, info = read_video(train[0]["video_path"], pts_unit="sec")
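The snippets above decode whole videos; clip-based models usually sample a fixed number of frames instead. A minimal sketch of uniform index sampling (the function name and clip_len are illustrative choices, not part of this repository):

```python
def sample_frame_indices(num_frames, clip_len=16):
    """Return clip_len evenly spaced frame indices covering the video.

    A common sampling scheme for clip-based models; pass the result to
    e.g. decord's vr.get_batch(...).
    """
    if num_frames <= 0:
        raise ValueError("video has no frames")
    step = num_frames / clip_len
    return [min(int(i * step), num_frames - 1) for i in range(clip_len)]

print(sample_frame_indices(100, 8))  # [0, 12, 25, 37, 50, 62, 75, 87]
```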

πŸ”Ή 3. WebDataset Version (Optional)

import glob, os
import webdataset as wds, jsonlines

# WebDataset keys have no file extension, so strip ".avi" when
# matching against __key__ (assumes the shards preserve the
# Videos/<class>/ prefix; check against your actual tars).
train_keys = {
    os.path.splitext(rec["video_path"])[0]
    for rec in jsonlines.open("metadata_split2.jsonl")
    if rec["subset"] == 1
}
shards = sorted(glob.glob("webdataset/*.tar"))
train_wds = wds.WebDataset(shards).select(lambda s: s["__key__"] in train_keys)

πŸ”Ή 4. PyTorch DataLoader Example

import torch
from torch.utils.data import Dataset, DataLoader
from decord import VideoReader

class VideoDataset(Dataset):
    def __init__(self, subset):
        self.subset = subset

    def __getitem__(self, i):
        item = self.subset[i]
        vr = VideoReader(item["video_path"])
        # Clamp indices so short clips do not go out of range
        idx = [min(j, len(vr) - 1) for j in (0, 8, 16)]
        # Convert decord's NDArray to a torch tensor, shape (T, H, W, C)
        frames = torch.from_numpy(vr.get_batch(idx).asnumpy())
        return frames, item["label"]

    def __len__(self):
        return len(self.subset)

# Note: HMDB51 clips vary in resolution, so a real pipeline needs
# resizing (or a custom collate_fn) before batching with batch_size > 1.
loader = DataLoader(VideoDataset(train), batch_size=4)
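The dataset stores labels as strings, while most loss functions expect integer class ids. A small, hypothetical helper (not part of this repository) to build a reproducible mapping:

```python
def build_label_index(records):
    """Assign each action label a stable integer id, sorted
    alphabetically so the mapping is reproducible across runs."""
    return {name: i for i, name in enumerate(sorted({r["label"] for r in records}))}

# Illustrative records; in practice pass the full metadata list
sample = [{"label": "climb"}, {"label": "brush_hair"}, {"label": "climb"}]
label2id = build_label_index(sample)
print(label2id)  # {'brush_hair': 0, 'climb': 1}
```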

πŸ”Ή 5. Protocol Files

metadata_split1.jsonl
metadata_split2.jsonl
metadata_split3.jsonl

Each file corresponds to one of the three official HMDB51 evaluation splits.
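A quick sanity check is to tally clips per subset value. Under the original HMDB51 protocol each split assigns 70 training and 30 test clips per class (3570 / 1530 overall), with the remaining clips marked unused; a sketch of the tally, shown on inline records:

```python
import collections

def subset_counts(records):
    """Count how many clips fall in each protocol subset
    (1 = train, 2 = test, 0 = unused)."""
    return collections.Counter(r["subset"] for r in records)

# Illustrative records; in practice load a metadata_split*.jsonl file
sample = [{"subset": 1}, {"subset": 1}, {"subset": 2}, {"subset": 0}]
counts = subset_counts(sample)
print(counts[1], counts[2], counts[0])  # 2 1 1
```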


πŸ“š Citation

@inproceedings{kuehne2011hmdb,
  title={HMDB: a large video database for human motion recognition},
  author={Kuehne, Hildegard and Jhuang, Hueihan and Garrote, Est{\'i}baliz and Poggio, Tomaso and Serre, Thomas},
  booktitle={2011 International Conference on Computer Vision},
  pages={2556--2563},
  year={2011},
  organization={IEEE}
}