Improve model card: Add metadata (pipeline_tag, library_name, license) and update paper link
#1 · opened by nielsr (HF Staff)
README.md CHANGED

````diff
@@ -1,8 +1,14 @@
+---
+pipeline_tag: image-to-text
+library_name: transformers
+license: other
+---
+
 # Boosting Multi-modal Keyphrase Prediction with Dynamic Chain-of-Thought in Vision-Language Models
 
 <div align="center">
 <p align="center">
-  <a>
+  <a href="https://huggingface.co/papers/2510.09358">
     <img
       src="https://img.shields.io/badge/ArXiv-Paper-red?logo=arxiv&logoColor=red"
       alt="Paper"
@@ -54,7 +60,7 @@ bash eval_full_sft.sh {/path/to/model} {/path/to/source_txt} --template {templat
 ```
 
 ## 🧾 License
-DynamicCoT are derived from [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct), which is subject to [Qwen RESEARCH LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE). We retain ownership of all intellectual property rights in and to any derivative works and modifications that we made.
+DynamicCoT are derived from [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE), which is subject to [Qwen RESEARCH LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE). We retain ownership of all intellectual property rights in and to any derivative works and modifications that we made.
 
 
 ## 🙏 Acknowledgement
````
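The YAML front matter this PR adds is a `---`-delimited block at the top of the README that the Hub parses for `pipeline_tag`, `library_name`, and `license`. As a minimal sketch of that structure (a hypothetical stdlib-only helper, not the Hub's actual parser, and only handling the flat `key: value` lines used here):

```python
import re

README = """---
pipeline_tag: image-to-text
library_name: transformers
license: other
---

# Boosting Multi-modal Keyphrase Prediction with Dynamic Chain-of-Thought in Vision-Language Models
"""

def parse_front_matter(text: str) -> dict:
    """Extract the leading ----delimited metadata block as a flat dict."""
    # Front matter must start at the very top of the file.
    m = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not m:
        return {}
    meta = {}
    for line in m.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines without a "key: value" shape
            meta[key.strip()] = value.strip()
    return meta

print(parse_front_matter(README))
# {'pipeline_tag': 'image-to-text', 'library_name': 'transformers', 'license': 'other'}
```

A README without such a block simply yields no metadata, which is why the Hub prompts maintainers to add one.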