Improve model card with Model Hub link and Updates section
This PR improves the model card for EchoVLM by incorporating relevant information from the project's GitHub README.
Specifically, it adds:
- A "Model Hub" link to the Hugging Face repository within the "Model Details" table.
- An "Updates" section, providing users with information about the model's release and future plans.
These additions make the model card more complete, so users can more easily track the model's status and find related resources. Existing arXiv links for the paper have been retained as instructed.
README.md CHANGED:

````diff
@@ -1,21 +1,19 @@
 ---
-
+base_model:
+- Qwen/Qwen2-VL-7B-Instruct
 language:
 - zh
 - en
+library_name: transformers
+license: apache-2.0
 metrics:
 - bertscore
 - bleu
-base_model:
-- Qwen/Qwen2-VL-7B-Instruct
 pipeline_tag: image-text-to-text
-library_name: transformers
 tags:
 - medical
 ---
 
-
-
 # EchoVLM (paper implementation)
 
 Official PyTorch implementation of the model described in
@@ -28,6 +26,12 @@ Official PyTorch implementation of the model described in
 | Paper | [arXiv:2509.14977](https://arxiv.org/abs/2509.14977) |
 | Authors | Chaoyin She¹, Ruifang Lu² |
 | Code | [GitHub repo](https://github.com/Asunatan/EchoVLM) |
+| Model Hub | [Hugging Face](https://huggingface.co/chaoyinshe/EchoVLM) |
+
+## 🔄 Updates
+- **Sep 19, 2025**: Released model weights on [Hugging Face](https://huggingface.co/chaoyinshe/EchoVLM).
+- **Sep 17, 2025**: Paper published on [arXiv](https://arxiv.org/abs/2509.14977).
+- **Coming soon**: V2 with Chain-of-Thought reasoning and reinforcement learning enhancements.
 
 ## 🚀 Quick Start
 ### Using 🤗 Transformers to Chat
@@ -192,4 +196,5 @@ If you use this model or code in your research, please cite:
 archivePrefix={arXiv},
 primaryClass={cs.CV},
 url={https://arxiv.org/abs/2509.14977},
-}
+}
+```
````