---
license: cc-by-nc-nd-4.0
---

# 🏠 Dataset Card for SmartHome-Bench

Hugging Face Paper Page

## 📘 Dataset Summary

SmartHome-Bench is a comprehensive benchmark for video anomaly detection and reasoning in smart-home environments.
The dataset contains 1,203 smart-home video clips spanning seven scene categories; 1,023 of these are open-sourced and can be downloaded via the provided video URL list.
Each video is accompanied by human-annotated descriptions, reasoning chains, and anomaly labels (normal, abnormal, vague abnormal).
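Conceptually, each annotated video can be pictured as a record like the one below. The field names here are illustrative assumptions, not the dataset's actual column names; consult `Video_Annotation.csv` for the real schema.

```python
# Illustrative shape of one annotation record. Field names and values are
# assumptions for exposition; see Video_Annotation.csv for the real columns.
example_record = {
    "title": "pet_cat_001",    # hypothetical video identifier
    "category": "Pet",         # one of the seven scene categories
    "description": "A cat knocks a vase off the kitchen counter.",
    "reasoning": "Knocking objects off a counter can damage property, "
                 "so this event is abnormal.",
    "anomaly_label": "abnormal",  # normal | abnormal | vague abnormal
}
```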

SmartHome-Bench is designed to evaluate Multimodal Large Language Models (MLLMs) on both video understanding and anomaly reasoning, covering everyday household contexts such as pet activity, senior care, baby care, and security monitoring.

All 1,023 videos were collected from public platforms (e.g., YouTube) and carefully curated to ensure they represent authentic smart-home camera footage.

For more details and analyses, please refer to our:

- 📄 CVPR 2025 Paper
- 💻 GitHub Repository


## 🎯 Supported Tasks and Applications

The SmartHome-Bench dataset enables research across several key directions:

- Video Anomaly Detection (VAD): Detecting and classifying normal versus abnormal events in smart-home videos.
- Multimodal Reasoning: Generating coherent explanations and causal reasoning chains for detected anomalies.
- Vision-Language Model Evaluation: Assessing how well models understand and interpret video content in real-world household contexts.
- Instruction-Following Fine-Tuning: Training LLMs to describe and reason about video observations using structured, instruction-based prompts.

## 🌐 Languages

All annotations in SmartHome-Bench are provided in English.


## 📹 Video Collection

1. Videos were collected from public resources such as YouTube, organized under seven taxonomy categories.
2. Each category was queried with specific keywords to capture diverse normal and abnormal scenarios.
   - Example: "cat play home cam" for normal pet activities; "pet vomit home cam" for abnormal events.
3. All collected videos were screened manually to ensure they were recorded by smart-home cameras only.

*Fig. 1 – Video category distribution.*


## 💾 Download Instructions

### Step 1. Download the Video URLs

- All public video links are provided in `Video_url.csv`.
  - The first 1,023 videos can be downloaded directly from YouTube.
  - The remaining 180 videos, collected internally, are private and not publicly available.
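One way to fetch the public clips is with the yt-dlp command-line tool. The sketch below builds a yt-dlp invocation per row of `Video_url.csv`; the "Title" column name comes from this card, while the "URL" column name, the mp4 format choice, and the use of yt-dlp itself are assumptions on our part.

```python
import csv
import subprocess

def build_download_command(url: str, title: str) -> list[str]:
    """Build a yt-dlp command that saves a clip under its listed title.

    Assumes yt-dlp is installed and on PATH; the output file is named
    after the "Title" column so later steps can locate it.
    """
    return ["yt-dlp", "-f", "mp4", "-o", f"{title}.mp4", url]

def download_all(csv_path: str) -> None:
    # The column names "Title" and "URL" are assumptions; check the
    # actual header of Video_url.csv before running.
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            subprocess.run(build_download_command(row["URL"], row["Title"]),
                           check=True)
```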

### Step 2. Organize the Downloaded Files

After downloading, make sure each video file is named exactly as listed in the "Title" column of `Video_url.csv`.
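A quick way to verify this naming is to cross-check the directory against the CSV. In the sketch below, the "Title" column name is from this card, but the `.mp4` extension is an assumption about how you saved the files.

```python
import csv
from pathlib import Path

def find_missing(csv_path: str, video_dir: str, ext: str = ".mp4") -> list[str]:
    """Return titles from Video_url.csv with no matching file in video_dir.

    Assumes each clip was saved as "<Title><ext>"; the extension default
    is an assumption and may need adjusting.
    """
    missing = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if not (Path(video_dir) / f"{row['Title']}{ext}").exists():
                missing.append(row["Title"])
    return missing
```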

### Step 3. Trim the Videos

To remove irrelevant frames (e.g., camera brand splash screens) from the raw videos, use the trimming script provided in our GitHub repository:

```shell
python Videos/Trim_Videos/Video_trim.py
```

You can find the script here: Video_trim.py
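For intuition, the trimming step can be sketched with a plain ffmpeg command. This is only a hedged stand-in, not the actual logic of `Video_trim.py`, and it assumes ffmpeg is installed.

```python
def build_trim_command(src: str, dst: str, start: float, end: float) -> list[str]:
    """Build an ffmpeg command keeping only seconds [start, end] of src.

    A minimal stand-in for the repository's Video_trim.py (an assumption,
    not its actual implementation). Re-encoding with libx264/aac keeps
    the cut frame-accurate at the cost of some speed.
    """
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-ss", str(start),   # trim start, in seconds
        "-to", str(end),     # trim end, in seconds
        "-c:v", "libx264",   # re-encode video for a frame-accurate cut
        "-c:a", "aac",
        dst,
    ]
```

Pass the resulting list to `subprocess.run(..., check=True)` to execute the trim.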

### Step 4. Access Annotations

Complete annotation details for all 1,203 videos are available in `Video_Annotation.csv`.
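As a starting point for analysis, you can tally the anomaly labels across the dataset. The column name "Anomaly_Label" below is an assumption; check the actual header of `Video_Annotation.csv` before use.

```python
import csv
from collections import Counter

def label_counts(annotation_csv: str) -> Counter:
    """Tally anomaly labels across the annotated videos.

    The column name "Anomaly_Label" is an assumption about the CSV
    schema, not confirmed by the dataset card.
    """
    with open(annotation_csv, newline="", encoding="utf-8") as f:
        return Counter(row["Anomaly_Label"] for row in csv.DictReader(f))
```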

## Citation

If you use SmartHome-Bench in a scientific publication, please cite the following:

```bibtex
@InProceedings{Zhao_2025_CVPR,
    author    = {Zhao, Xinyi and Zhang, Congjing and Guo, Pei and Li, Wei and Chen, Lin and Zhao, Chaoyue and Huang, Shuai},
    title     = {SmartHome-Bench: A Comprehensive Benchmark for Video Anomaly Detection in Smart Homes Using Multi-Modal Large Language Models},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {3975-3985}
}
```

Acknowledgment: We sincerely thank Kevin Beussman for donating the videos. We also appreciate the efforts of Pengfei Gao, Xiaoya Hu, Liting Jia, Lina Liu, Vincent Nguyen, and Yunyun Xi for their assistance with video annotation. Work done during the authors’ internship at Wyze.