---
license: mit
size_categories:
- 1K<n<10K
configs:
- config_name: CaReBench
data_files:
- split: test
path: json/metadata.json
---
<div align="center">
<h1 style="margin: 0">
<img src="assets/logo.png" style="width:1.5em; vertical-align: middle; display: inline-block; margin: 0" alt="Logo">
<span style="vertical-align: middle; display: inline-block; margin: 0"><b>CaReBench: A Fine-grained Benchmark for Video Captioning and Retrieval</b></span>
</h1>
<p style="margin: 0">
Yifan Xu, <a href="https://scholar.google.com/citations?user=evR3uR0AAAAJ">Xinhao Li</a>, Yichun Yang, Desen Meng, Rui Huang, <a href="https://scholar.google.com/citations?user=HEuN8PcAAAAJ">Limin Wang</a>
</p>
<p align="center">
🤗 <a href="https://huggingface.co/MCG-NJU/CaRe-7B">Model</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🤗 <a href="https://huggingface.co/datasets/MCG-NJU/CaReBench">Data</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📑 <a href="https://arxiv.org/pdf/2501.00513">Paper</a>
</p>
</div>
![](assets/comparison.png)
## ๐Ÿ“ Introduction
**🌟 CaReBench** is a fine-grained benchmark comprising **1,000 high-quality videos** with detailed human-annotated captions, including **manually separated spatial and temporal descriptions** that enable independent evaluation of spatial and temporal bias.
![CaReBench](assets/carebench.png)
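
The annotations can be loaded directly with the 🤗 `datasets` library. Below is a minimal sketch that assumes the repository id from the links above (`MCG-NJU/CaReBench`) and the config name and split declared in the YAML header (`CaReBench`, `test`); the exact record schema depends on `json/metadata.json`, so inspect a sample before use.

```python
# Minimal sketch: load the CaReBench annotations with the Hugging Face `datasets` library.
# Repository id, config name, and split are taken from the links and YAML header above;
# the field names of each record come from json/metadata.json and may differ, so inspect them.
from datasets import load_dataset

carebench = load_dataset("MCG-NJU/CaReBench", name="CaReBench", split="test")

print(carebench)      # number of rows and column names
print(carebench[0])   # one annotation record (e.g. a video id and its captions)
```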
**📊 ReBias and CapST Metrics** are designed specifically for retrieval and captioning tasks, providing a comprehensive evaluation framework for spatiotemporal understanding in video-language models.
**⚡ CaRe: A Unified Baseline** for fine-grained video retrieval and captioning, achieving competitive performance through **two-stage Supervised Fine-Tuning (SFT)**. CaRe excels in both generating detailed video descriptions and extracting robust video features.
![CaRe Training Recipe](assets/care_model.png)
**🚀 State-of-the-art performance** on both detailed video captioning and fine-grained video retrieval. CaRe outperforms CLIP-based retrieval models and popular MLLMs in captioning tasks.
![Performance comparison](assets/performance.png)