Dataset Summary
The original MR.HiSum (Most-replayed Highlight Detection and Summarization) was designed as a unimodal dataset and only provides pre-extracted features, which limits its use for multimodal research. To support the multimodal video summarization approach proposed in TripleSumm, we reconstructed the dataset by independently crawling the original videos using the provided metadata and extracting features across three distinct modalities: Visual, Audio, and Text.
⚠️ Note on Reproduction: Please note that some videos may have been removed or set to private on YouTube since the original study. Therefore, the total video count and specific statistics may slightly differ from those reported in the original MR.HiSum paper.
- Paper: TripleSumm: Adaptive Triple-Modality Fusion for Video Summarization
- GitHub Repository: smkim37/TripleSumm
Dataset Structure
The dataset consists of 6 core files providing metadata, multimodal features, ground truth annotations, and evaluation splits.
1. Metadata (mrhisum_metadata.csv)
Contains foundational information for all 30,452 videos.
- video_id: The unique identifier for the video. This serves as the universal key to access data in all .h5 files and the split JSON.
- youtube_id: The original YouTube video ID. The video can be accessed via https://www.youtube.com/watch?v={youtube_id}.
- duration: The length of the video in seconds.
- views: The total view count of the video.
- labels: Original multi-label annotations provided by the YouTube-8M dataset.
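As a minimal sketch of working with this schema, the snippet below parses a metadata row and reconstructs the watch URL from `youtube_id`. The two-line CSV is synthetic stand-in data (the IDs and counts are made up); real usage would read `mrhisum_metadata.csv` directly.

```python
import csv
import io

# Synthetic stand-in for mrhisum_metadata.csv; column names follow the card.
sample = io.StringIO(
    "video_id,youtube_id,duration,views,labels\n"
    "00D7,abc123XYZ_0,212,1000000,Music\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    # video_id is the key into the .h5 files; youtube_id locates the source video.
    url = f"https://www.youtube.com/watch?v={row['youtube_id']}"
    print(row["video_id"], url, int(row["duration"]))
```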
2. Multimodal Features (.h5 files)
Pre-extracted features for all three modalities. Each file is provided in HDF5 format and is approximately 20GB in size. All features have a shape of (N, D), where N corresponds to the video duration (in seconds) and D is 1024 for visual features and 768 for audio and text features.
- mrhisum_feat_visual_inceptionv3.h5: Visual features extracted using InceptionV3.
- mrhisum_feat_audio_ast.h5: Audio features extracted using Audio Spectrogram Transformer (AST).
- mrhisum_feat_text_roberta.h5: Text features extracted using RoBERTa.
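A minimal sketch of reading per-video features with `h5py`, assuming each file maps a `video_id` directly to an (N, D) array as described above (the exact internal key layout may differ; inspect it with `f.keys()`). A tiny synthetic file stands in for `mrhisum_feat_visual_inceptionv3.h5`.

```python
import h5py
import numpy as np

# Build a small synthetic stand-in: one 120-second video with D = 1024 (visual).
with h5py.File("demo_visual.h5", "w") as f:
    f.create_dataset("00D7", data=np.random.rand(120, 1024).astype(np.float32))

# Read features back by video_id; shape is (N, D) with N = duration in seconds.
with h5py.File("demo_visual.h5", "r") as f:
    feats = f["00D7"][:]
print(feats.shape)
```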
3. Ground Truth (mrhisum_gt.h5)
An HDF5 file containing the summarization labels for all 30,452 videos. Each video_id (e.g., '00D7') maps to an HDF5 Group containing the following keys:
- change_points: Temporal boundaries for video shots.
- gt_score: Frame-level ground-truth importance scores.
- gt_summary: Binary labels indicating whether a frame is included in the final summary.
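The group layout above can be sketched as follows, using a small synthetic file in place of `mrhisum_gt.h5` (the array values here are random placeholders, not real annotations):

```python
import h5py
import numpy as np

# Synthetic stand-in for mrhisum_gt.h5: one video group with the keys listed above.
with h5py.File("demo_gt.h5", "w") as f:
    g = f.create_group("00D7")
    g.create_dataset("change_points", data=np.array([[0, 40], [41, 119]]))  # shot boundaries
    g.create_dataset("gt_score", data=np.random.rand(120))                  # frame importance
    g.create_dataset("gt_summary", data=(np.random.rand(120) > 0.85).astype(np.int64))

# Access a video's annotations by video_id.
with h5py.File("demo_gt.h5", "r") as f:
    grp = f["00D7"]
    keys = sorted(grp.keys())
    scores = grp["gt_score"][:]
print(keys, scores.shape)
```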
4. Dataset Splits (mrhisum_split.json)
Contains standardized splits for training, validation, and testing.
- train_keys: List of video IDs for 26,639 training videos.
- val_keys: List of video IDs for 1,904 validation videos.
- test_keys: List of video IDs for 1,909 testing videos.
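Loading the split file is straightforward with the standard `json` module; the sketch below writes a tiny synthetic stand-in for `mrhisum_split.json` (the IDs are placeholders) and reads the three key lists back:

```python
import json

# Synthetic stand-in for mrhisum_split.json with the three documented keys.
split = {"train_keys": ["00D7", "00A1"], "val_keys": ["01B2"], "test_keys": ["02C3"]}
with open("demo_split.json", "w") as f:
    json.dump(split, f)

# Load the splits; each value is a list of video_id strings.
with open("demo_split.json") as f:
    loaded = json.load(f)
train_ids = loaded["train_keys"]
print(len(train_ids), len(loaded["val_keys"]), len(loaded["test_keys"]))
```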
Citation
If you use this reproduced MR.HiSum dataset or the TripleSumm model in your research, please cite our paper:
@inproceedings{triplesumm2026,
title={TripleSumm: Adaptive Triple-Modality Fusion for Video Summarization},
author={Kim, Sumin and Jeong, Hyemin and Kang, Mingu and Kim, Yejin and Oh, Yoori and Lee, Joonseok},
booktitle={Proceedings of the International Conference on Learning Representations (ICLR)},
year={2026}
}