Improve model card metadata and discoverability

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +15 -3
README.md CHANGED
@@ -1,24 +1,36 @@
  ---
  license: bsd-3-clause
  pipeline_tag: video-text-to-text
+ base_model: Qwen/Qwen2-VL-2B-Instruct
+ datasets:
+ - yeliudev/VideoMind-Dataset
+ tags:
+ - video-reasoning
+ - temporal-grounding
+ - chain-of-lora
+ - multimodal
+ - agent
  ---
  
  # VideoMind-2B
  
  <div style="display: flex; gap: 5px;">
+ <a href="https://huggingface.co/papers/2503.13444" target="_blank"><img src="https://img.shields.io/badge/Paper-huggingface-red"></a>
  <a href="https://arxiv.org/abs/2503.13444" target="_blank"><img src="https://img.shields.io/badge/arXiv-2503.13444-red"></a>
  <a href="https://videomind.github.io/" target="_blank"><img src="https://img.shields.io/badge/Project-Page-brightgreen"></a>
  <a href="https://github.com/yeliudev/VideoMind/blob/main/LICENSE" target="_blank"><img src="https://img.shields.io/badge/License-BSD--3--Clause-purple"></a>
  <a href="https://github.com/yeliudev/VideoMind" target="_blank"><img src="https://img.shields.io/github/stars/yeliudev/VideoMind"></a>
  </div>
  
- VideoMind is a multi-modal agent framework that enhances video reasoning by emulating *human-like* processes, such as *breaking down tasks*, *localizing and verifying moments*, and *synthesizing answers*.
+ VideoMind is a multi-modal agent framework that enhances video reasoning by emulating *human-like* processes, such as *breaking down tasks*, *localizing and verifying moments*, and *synthesizing answers*. It was introduced in the paper [VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning](https://huggingface.co/papers/2503.13444).
  
  ## 🔖 Model Details
  
  - **Model type:** Multi-modal Large Language Model
+ - **Base model:** [Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct)
  - **Language(s):** English
  - **License:** BSD-3-Clause
+ - **Authors:** [Ye Liu](https://huggingface.co/yeliudev), [Kevin Qinghong Lin](https://huggingface.co/KevinQHLin), Chang Wen Chen, and [Mike Zheng Shou](https://huggingface.co/AnalMom).
  
  ## 🚀 Quick Start
  
@@ -286,11 +298,11 @@ print(f'Answerer Response: {response}')
  
  Please kindly cite our paper if you find this project helpful.
  
- ```
+ ```bibtex
  @inproceedings{liu2026videomind,
  title={VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning},
  author={Liu, Ye and Lin, Kevin Qinghong and Chen, Chang Wen Chen and Shou, Mike Zheng},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2026}
  }
- ```
+ ```
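The value of the added front matter is that it is machine-readable: the Hub indexes `base_model`, `datasets`, and `tags` for search and filtering. As a rough illustration only (this is not the Hub's actual parser, and `parse_front_matter` is a hypothetical helper), a minimal sketch that pulls those fields out of a card with the Python standard library:

```python
import re

def parse_front_matter(readme: str) -> dict:
    """Extract fields from YAML front matter between leading '---' fences.

    Minimal illustration only: handles the flat `key: value` and
    `key:` + `- item` patterns used in this card, not full YAML.
    """
    match = re.match(r"---\n(.*?)\n---\n", readme, re.DOTALL)
    if not match:
        return {}
    meta, key = {}, None
    for line in match.group(1).splitlines():
        if line.startswith("- ") and key is not None:
            meta[key].append(line[2:].strip())  # list item under the last key
        elif ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            # An empty value means a list follows (e.g. `tags:`).
            meta[key] = value if value else []
    return meta

card = """---
license: bsd-3-clause
pipeline_tag: video-text-to-text
base_model: Qwen/Qwen2-VL-2B-Instruct
tags:
- video-reasoning
- temporal-grounding
---

# VideoMind-2B
"""

meta = parse_front_matter(card)
print(meta["base_model"])  # Qwen/Qwen2-VL-2B-Instruct
print(meta["tags"])        # ['video-reasoning', 'temporal-grounding']
```

In practice a card consumer would use a real YAML parser; the sketch just shows that each field this PR adds maps to a concrete, queryable attribute.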