---
license: mit
task_categories:
- video-text-retrieval
- text-to-video
language:
- en
tags:
- video-retrieval
- generative-retrieval
- semantic-ids
- text-to-video
size_categories:
- 10K<n<100K
---
# GRDR-TVR: Generative Recall, Dense Reranking for Text-to-Video Retrieval

This dataset contains the pre-extracted video features and trained model checkpoints for the GRDR (Generative Recall, Dense Reranking) framework for efficient Text-to-Video Retrieval (TVR).
## Paper

**Generative Recall, Dense Reranking: Learning Multi-View Semantic IDs for Efficient Text-to-Video Retrieval**

Conference: SIGIR 2026
## Dataset Overview
This dataset includes three main components:
### 1. InternVideo2 Features (~3.4GB)
Pre-extracted video features using the InternVideo2 encoder for four benchmark datasets:
- MSR-VTT: 10,000 videos (932MB)
- ActivityNet: 20,000 videos (1.1GB)
- DiDeMo: 10,464 videos (916MB)
- LSMDC: 1,000 movies, 118,081 clips (424MB)
**Feature Details:**
- Dimension: 512-d embeddings
- Format: Pickle files (`.pkl`) with `{video_id: embedding}` mappings
- Extraction: InternVideo2 (InternVL-2B) with temporal pooling
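The `{video_id: embedding}` layout can be illustrated with a toy file; the IDs and random vectors below are placeholders, not real dataset entries:

```python
import pickle
import tempfile

import numpy as np

# Build a toy file mimicking the {video_id: embedding} layout described above.
toy = {f"video{i}": np.random.rand(512).astype(np.float32) for i in range(3)}

with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(toy, f)
    path = f.name

# Reload it the same way the real feature files are loaded.
with open(path, "rb") as f:
    features = pickle.load(f)

assert sorted(features) == ["video0", "video1", "video2"]
assert all(v.shape == (512,) for v in features.values())
```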
### 2. GRDR Model Checkpoints (~2GB)
Trained GRDR models (T5-small based) for all four datasets:
- MSR-VTT: 494MB
- ActivityNet: 498MB
- DiDeMo: 504MB
- LSMDC: 478MB
**Checkpoint Components:**
- `best_model.pt` - Complete model checkpoint
- `best_model.pt.model` - T5 encoder-decoder weights
- `best_model.pt.videorqvae` - Video RQ-VAE quantizer
- `best_model.pt.code` - Pre-computed semantic IDs
- `best_model.pt.centroids` - Codebook centroids
- `best_model.pt.embedding` - Learned embeddings
- `best_model.pt.start_token` - Start token embeddings
**Model Architecture:**
- Base: T5-small (60M parameters)
- Codebook size: 128/96/200 (dataset-dependent)
- Max code length: 3
- Training: 3-phase progressive training
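Conceptually, an RQ-VAE-style quantizer maps each video embedding to a short semantic ID by snapping the residual to its nearest centroid at each level and carrying the remainder forward. A minimal NumPy sketch with random centroids (not the trained codebooks; sizes taken from this card):

```python
import numpy as np

rng = np.random.default_rng(0)
codebook_size, code_len, dim = 128, 3, 512  # codebook size / max code length / feature dim

codebooks = rng.normal(size=(code_len, codebook_size, dim))  # one codebook per level
video = rng.normal(size=dim)                                 # stand-in video embedding

residual, code = video.copy(), []
for level in range(code_len):
    # Pick the centroid closest to the current residual...
    idx = int(np.argmin(np.linalg.norm(codebooks[level] - residual, axis=1)))
    code.append(idx)
    # ...and quantize what remains at the next level.
    residual = residual - codebooks[level][idx]

print(code)  # a length-3 semantic ID for this video
```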
### 3. Xpool Reranker Checkpoints (~7.2GB)
Pre-trained reranker models for the dense reranking stage:
- MSR-VTT: msrvtt9k_model_best.pth (1.8GB)
- ActivityNet: actnet_model_best.pth (1.8GB)
- DiDeMo: didemo_model_best.pth (1.8GB)
- LSMDC: lsmdc_model_best.pth (1.8GB)
**Reranker Details:**
- Architecture: CLIP-based (ViT-B/32)
- Purpose: Fine-grained reranking of recalled candidates
- Format: PyTorch checkpoint files (`.pth`)
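As a stand-in for this stage, the sketch below scores each recalled candidate against the query and keeps the best. The real X-POOL model pools video frames conditioned on the text; here plain cosine similarity over placeholder embeddings illustrates the recall-then-rerank flow:

```python
import numpy as np

rng = np.random.default_rng(1)
query = rng.normal(size=512)                                        # stand-in query embedding
candidates = {f"video{i}": rng.normal(size=512) for i in range(100)}  # recalled top-100

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rerank the recalled candidates by similarity to the query; keep the top 10.
scores = {vid: cosine(query, emb) for vid, emb in candidates.items()}
top10 = sorted(scores, key=scores.get, reverse=True)[:10]
```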
## Quick Start

### Download Specific Components

#### Using the Python Script
```bash
# Download everything
python download_features.py --all

# Download only InternVideo2 features for specific datasets
python download_features.py --features --datasets msrvtt actnet

# Download GRDR checkpoints only
python download_features.py --grdr

# Download Xpool reranker only
python download_features.py --xpool --datasets msrvtt
```
#### Using the Hugging Face CLI
```bash
# Download the entire dataset
huggingface-cli download JasonCoderMaker/GRDR-TVR --repo-type dataset --local-dir ./GRDR-TVR

# Download a specific component
huggingface-cli download JasonCoderMaker/GRDR-TVR InternVideo2/msrvtt --repo-type dataset --local-dir ./features

# Download the GRDR checkpoint for MSR-VTT
huggingface-cli download JasonCoderMaker/GRDR-TVR GRDR/msrvtt --repo-type dataset --local-dir ./checkpoints
```
### Load Features in Python
```python
import pickle
from huggingface_hub import hf_hub_download

# Download and load InternVideo2 features
feature_file = hf_hub_download(
    repo_id="JasonCoderMaker/GRDR-TVR",
    filename="InternVideo2/msrvtt/msrvtt_internvideo2.pkl",
    repo_type="dataset",
)

with open(feature_file, "rb") as f:
    video_features = pickle.load(f)

# Access features
video_id = "video7015"
embedding = video_features[video_id]  # Shape: (512,)
print(f"Feature shape: {embedding.shape}")
```
### Load the GRDR Model
```python
import torch
from huggingface_hub import hf_hub_download

# Download checkpoint
checkpoint_path = hf_hub_download(
    repo_id="JasonCoderMaker/GRDR-TVR",
    filename="GRDR/msrvtt/best_model/best_model.pt",
    repo_type="dataset",
)

# Load checkpoint
checkpoint = torch.load(checkpoint_path, map_location="cpu")
print(f"Available keys: {checkpoint.keys()}")

# Use with your GRDR model (from the GitHub codebase)
from models.grdr import GRDR

model = GRDR(...)
model.load_state_dict(checkpoint["model"], strict=False)
```
### Load the Xpool Reranker
```python
import torch
from huggingface_hub import hf_hub_download

# Download reranker checkpoint
reranker_path = hf_hub_download(
    repo_id="JasonCoderMaker/GRDR-TVR",
    filename="Xpool/msrvtt9k_model_best.pth",
    repo_type="dataset",
)

# Load reranker, then use it with your Xpool model
checkpoint = torch.load(reranker_path, map_location="cpu")
```
## Repository Structure
```
GRDR-TVR/
├── README.md                  # This file
├── download_features.py       # Python download utility
├── download_checkpoints.sh    # Bash download script
│
├── InternVideo2/              # Video features (3.4GB)
│   ├── actnet/
│   │   └── actnet_internvideo2.pkl
│   ├── didemo/
│   │   └── didemo_internvideo2.pkl
│   ├── lsmdc/
│   │   └── lsmdc_internvideo2.pkl
│   └── msrvtt/
│       └── msrvtt_internvideo2.pkl
│
├── GRDR/                      # GRDR checkpoints (2GB)
│   ├── actnet/best_model/
│   ├── didemo/best_model/
│   ├── lsmdc/best_model/
│   └── msrvtt/best_model/
│
└── Xpool/                     # Reranker checkpoints (7.2GB)
    ├── actnet_model_best.pth
    ├── didemo_model_best.pth
    ├── lsmdc_model_best.pth
    └── msrvtt9k_model_best.pth
```
## Dataset Statistics
| Dataset | Videos | Train Queries | Test Queries | Feature Size | GRDR Size | Xpool Size |
|---|---|---|---|---|---|---|
| MSR-VTT | 10,000 | 9,000 | 1,000 | 932 MB | 494 MB | 1.8 GB |
| ActivityNet | 20,000 | 10,009 | 4,917 | 1.1 GB | 498 MB | 1.8 GB |
| DiDeMo | 10,464 | 8,395 | 1,065 | 916 MB | 504 MB | 1.8 GB |
| LSMDC | 118,081 | 118,081 | 1,000 | 424 MB | 478 MB | 1.8 GB |
| Total | - | - | - | 3.4 GB | 2.0 GB | 7.2 GB |
## Performance
GRDR achieves competitive accuracy with dense retrievers while being significantly more efficient:
| Dataset | R@1 | R@5 | R@10 | Storage Reduction | Speed-up |
|---|---|---|---|---|---|
| MSR-VTT | 45.2 | 72.1 | 81.3 | 15.6× | 287× |
| ActivityNet | 41.8 | 76.2 | 86.4 | 12.3× | 310× |
| DiDeMo | 43.1 | 71.8 | 81.7 | 18.2× | 265× |
| LSMDC | 24.3 | 48.9 | 59.2 | 22.1× | 298× |
*Compared to the CLIP4Clip baseline with exhaustive search.*
## Usage in the GRDR Pipeline

### Complete Retrieval Pipeline
```python
import torch
from models.grdr import GRDR
from reranker.xpool import XpoolReranker

# 1. Load video features (helper from the GRDR codebase)
video_features = load_internvideo2_features("msrvtt")

# 2. Load GRDR model for recall
grdr_model = GRDR.from_pretrained("JasonCoderMaker/GRDR-TVR", dataset="msrvtt")

# 3. Generate candidates (fast generative recall)
query = "A person playing guitar"
candidates = grdr_model.generate_candidates(query, top_k=100)

# 4. Load Xpool reranker
reranker = XpoolReranker.from_pretrained("JasonCoderMaker/GRDR-TVR", dataset="msrvtt")

# 5. Rerank candidates (dense reranking)
final_results = reranker.rerank(query, candidates, top_k=10)
```
## Requirements
```bash
pip install torch torchvision
pip install "transformers>=4.30.0"
pip install huggingface_hub
pip install sentencepiece
```
For the full GRDR codebase, see: GitHub Repository
## Citation
If you use this dataset or models in your research, please cite:
```bibtex
@inproceedings{grdr2026,
  title={Generative Recall, Dense Reranking: Learning Multi-View Semantic IDs for Efficient Text-to-Video Retrieval},
  author={Anonymous},
  booktitle={Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year={2026}
}
```
## License
This dataset is released under the MIT License. See LICENSE for details.
The video datasets (MSR-VTT, ActivityNet, DiDeMo, LSMDC) are subject to their original licenses. This repository only provides pre-extracted features, not the original videos.
## Acknowledgments
- InternVideo2: We thank the authors of InternVideo2 for their excellent video encoder
- Xpool: The reranker architecture is based on X-POOL
- Datasets: MSR-VTT, ActivityNet Captions, DiDeMo, and LSMDC benchmark creators
## Contact
For questions or issues, please open an issue on the GitHub repository or contact the authors.
**Dataset Version:** 1.0
**Last Updated:** January 2026
**Maintained by:** @JasonCoderMaker