# EchoVLM (paper implementation)

Official PyTorch implementation of the model described in
"EchoVLM: Dynamic Mixture-of-Experts Vision-Language Model for Universal Ultrasound Intelligence".
## Model Details
| Item | Value |
|---|---|
| Paper | arXiv:2509.14977 |
| Authors | Chaoyin She, Ruifang Lu, et al. |
| Code | GitHub repo |
| Model Hub | Hugging Face |
## Updates
- Coming soon: V2 with Chain-of-Thought reasoning and reinforcement-learning enhancements; the full training and inference code, plus the benchmark test set, will be fully open-sourced.
- Dec 1, 2025: To better promote development in this field, we have open-sourced our latest instruction-fine-tuned model, based on Lingshu-7B. Since it is built on Qwen2.5-VL, it benefits from a broader ecosystem; for example, it can seamlessly leverage vLLM for accelerated inference. The model weights are released on Hugging Face.
- Sep 21, 2025: The full, uncleaned model codebase is now open-sourced on GitHub!
- Sep 19, 2025: Released model weights on Hugging Face.
- Sep 17, 2025: Paper published on arXiv.
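The vLLM path mentioned in the update notes can be sketched as follows. This is a minimal, hypothetical example, assuming vLLM's Qwen2.5-VL support applies to this checkpoint; the image path and question are illustrative, not from the repo.

```python
# Hedged sketch: accelerated inference with vLLM for a Qwen2.5-VL-style model.
# Assumes a recent vLLM release that provides LLM.chat() with multimodal
# OpenAI-style messages; verify against the vLLM docs for your version.

def build_chat(image_path: str, question: str) -> list:
    """Assemble one OpenAI-style multimodal chat turn for LLM.chat()."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"file://{image_path}"}},
                {"type": "text", "text": question},
            ],
        }
    ]

def run(image_path: str, question: str) -> str:
    # Imported lazily: vLLM requires a GPU-enabled installation.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="chaoyinshe/EchoVLM_V2_lingshu_base_7b_instruct_preview",
        limit_mm_per_prompt={"image": 1},  # one ultrasound image per prompt
    )
    params = SamplingParams(temperature=0.2, max_tokens=256)
    outputs = llm.chat(build_chat(image_path, question), params)
    return outputs[0].outputs[0].text

# Example usage (requires downloading the weights):
# print(run("ultrasound.png", "Describe the ultrasound findings."))
```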
## Quick Start
EchoVLM follows the Qwen2.5-VL-7B-Instruct interface; refer to that model's quick-start instructions for loading and inference.
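A minimal inference sketch along the lines of the Qwen2.5-VL-7B-Instruct quick start is shown below. It assumes this checkpoint is drop-in compatible with the Qwen2.5-VL classes in `transformers` and the `qwen_vl_utils` helper package; the image path and question are placeholders.

```python
# Hedged sketch, not the official quick start: single-image Q&A with a
# Qwen2.5-VL-style checkpoint via Hugging Face transformers.

MODEL_ID = "chaoyinshe/EchoVLM_V2_lingshu_base_7b_instruct_preview"

def build_messages(image_path: str, question: str) -> list:
    """Assemble a Qwen2.5-VL-style chat message: one image plus one question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": question},
            ],
        }
    ]

def run(image_path: str, question: str) -> str:
    # Imported lazily: requires transformers >= 4.49 and qwen-vl-utils.
    from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
    from qwen_vl_utils import process_vision_info

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(MODEL_ID)

    messages = build_messages(image_path, question)
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to(model.device)

    out_ids = model.generate(**inputs, max_new_tokens=256)
    # Strip the prompt tokens so only the generated answer is decoded.
    trimmed = [o[len(i):] for i, o in zip(inputs.input_ids, out_ids)]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]

# Example usage (requires downloading the weights):
# print(run("ultrasound.png", "Describe the ultrasound findings."))
```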
## Citation
If you use this model or code in your research, please cite:
```bibtex
@misc{she2025echovlmdynamicmixtureofexpertsvisionlanguage,
  title={EchoVLM: Dynamic Mixture-of-Experts Vision-Language Model for Universal Ultrasound Intelligence},
  author={Chaoyin She and Ruifang Lu and Lida Chen and Wei Wang and Qinghua Huang},
  year={2025},
  eprint={2509.14977},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.14977},
}
```
Model tree for `chaoyinshe/EchoVLM_V2_lingshu_base_7b_instruct_preview`:
- Base model: lingshu-medical-mllm/Lingshu-7B