Improve model card: Add library name, project page, abstract, and news section #2
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,11 +1,26 @@
 ---
 pipeline_tag: video-text-to-text
+library_name: transformers
 ---
 
-# GROVE Model
+# GROVE Model: Large-scale Pre-training for Grounded Video Caption Generation
 
-This
-
+This repository hosts the **artifacts** (config, tokenizer, weights) for the GROVE model, as presented in the paper [Large-scale Pre-training for Grounded Video Caption Generation](https://huggingface.co/papers/2503.10781).
+
+**Project Website**: https://ekazakos.github.io/grounded_video_caption_generation/
+**Codebase**: The inference code is provided in the [`grove-transformers`](https://github.com/ekazakos/grove/tree/main/grove_transformers) package, a **slimmer version** of the full codebase at [https://github.com/ekazakos/grove/](https://github.com/ekazakos/grove/), designed specifically for **quick inference with GROVE**.
+
+---
+
+## Abstract
+We propose a novel approach for captioning and object grounding in video, where the objects in the caption are grounded in the video via temporally dense bounding boxes. We introduce the following contributions. First, we present a large-scale automatic annotation method that aggregates frame-level captions grounded with bounding boxes into temporally dense and consistent annotations. We apply this approach on the HowTo100M dataset to construct a large-scale pre-training dataset, named HowToGround1M. We also introduce a Grounded Video Caption Generation model, dubbed GROVE, and pre-train the model on HowToGround1M. Second, we introduce iGround, a dataset of 3513 videos with manually annotated captions and dense spatio-temporally grounded bounding boxes. This allows us to measure progress on this challenging problem, as well as to fine-tune our model on this small-scale but high-quality data. Third, we demonstrate that our approach achieves state-of-the-art results on the proposed iGround dataset, as well as on the VidSTG, ActivityNet-Entities, GroundingYouTube, and YouCook-Interactions datasets. Our ablations demonstrate the importance of pre-training on our automatically annotated HowToGround1M dataset followed by fine-tuning on the manually annotated iGround dataset and validate the key technical contributions of our model.
+
+---
+
+## News
+- **02/09/2025**: We release **grove-transformers**, a lightweight, inference-only interface for GROVE, implemented with 🤗 Transformers.
+- **21/08/2025**: Code, checkpoints, and datasets released!
+- **25/06/2025**: Paper accepted to **ICCV 2025** 🎉
 
 ---
 
@@ -20,7 +35,6 @@ pip install -e .[torch] --extra-index-url https://download.pytorch.org/whl/cu124
 pip install flash-attn==2.7.3 --no-build-isolation
 ```
 
-
 Alternatively, run:
 ```bash
 pip install -e "git+https://github.com/ekazakos/grove.git#subdirectory=grove_transformers[torch]" \
@@ -106,4 +120,4 @@ outputs = processor.generate(
 journal={arXiv preprint arXiv:2503.10781},
 year={2025}
 }
-```
+```
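For convenience, here is a minimal loading sketch matching the card after this change. It is an assumption-laden illustration, not the card's verified quick-start: the Hub repo id `ekazakos/grove`, the `AutoProcessor`/`AutoModel` entry points, and the `trust_remote_code=True` flag are all guesses based on the added `library_name: transformers` tag and the `outputs = processor.generate(` context line in the third hunk; consult the `grove-transformers` README for the real API.

```python
# Hypothetical sketch (not the official quick-start): the repo id, auto
# classes, and trust_remote_code usage below are assumptions.
import torch
from transformers import AutoModel, AutoProcessor

MODEL_ID = "ekazakos/grove"  # hypothetical Hub repo id for the GROVE artifacts

# The card says this repo hosts config, tokenizer, and weights; custom model
# code from grove-transformers would typically be loaded via trust_remote_code.
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModel.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # flash-attn 2.x kernels expect fp16/bf16
    trust_remote_code=True,
).to("cuda").eval()

# The card's own example then calls `outputs = processor.generate(...)`;
# see the grove-transformers README for the exact arguments.
```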
|