Update README.md
README.md (changed):

```diff
@@ -29,9 +29,11 @@ Official PyTorch implementation of the model described in
 | Model Hub | [Hugging Face](https://huggingface.co/chaoyinshe/EchoVLM) |
 
 ## 🔄 Updates
+- **Coming soon**: V2 with Chain-of-Thought reasoning and reinforcement learning enhancements—full training & inference code plus the benchmark test set will be fully open-sourced.
+- **Dec 1, 2025**: To better promote development in this field, we've open-sourced our latest instruction fine-tuned model based on Lingshu-7B. Essentially built on Qwen2.5VL, it enjoys a better ecosystem—for example, it can seamlessly leverage vLLM for accelerated inference. Released model weights on [Hugging Face](https://huggingface.co/chaoyinshe/EchoVLM_V2_lingshu_base_7b_instruct_preview).
+- **Sep 21, 2025**: The full, uncleaned model codebase is now open-sourced on GitHub!
 - **Sep 19, 2025**: Released model weights on [Hugging Face](https://huggingface.co/chaoyinshe/EchoVLM).
-- **Sep 17, 2025**: Paper published on [arXiv](https://arxiv.org/abs/2509.14977).
-- **Coming soon**: V2 with Chain-of-Thought reasoning and reinforcement learning enhancements.
+- **Sep 17, 2025**: Paper published on [arXiv](https://arxiv.org/abs/2509.14977).
 
 ## 🚀 Quick Start
 ### Using 🤗 Transformers to Chat
```