EchoVLM (paper implementation)

Official PyTorch implementation of the model described in
"EchoVLM: Dynamic Mixture-of-Experts Vision-Language Model for Universal Ultrasound Intelligence".

🤖 Model Details

| Item | Value |
| --- | --- |
| Paper | [arXiv:2509.14977](https://arxiv.org/abs/2509.14977) |
| Authors | Chaoyin She, Ruifang Lu, Lida Chen, Wei Wang, Qinghua Huang |
| Code | GitHub repo |
| Model Hub | Hugging Face |
| Model ID | `chaoyinshe/EchoVLM_V2_lingshu_base_7b_instruct_preview` |

🔄 Updates

  • Coming soon: V2 with Chain-of-Thought reasoning and reinforcement learning enhancements. The full training and inference code, together with the benchmark test set, will be open-sourced.
  • Dec 1, 2025: To better promote development in this field, we have open-sourced our latest instruction fine-tuned model, built on Lingshu-7B. Because Lingshu-7B is itself based on Qwen2.5-VL, the model enjoys a richer ecosystem; for example, it can seamlessly leverage vLLM for accelerated inference. Model weights are released on Hugging Face.
  • Sep 21, 2025: The full, uncleaned model codebase is now open-sourced on GitHub!
  • Sep 19, 2025: Released model weights on Hugging Face.
  • Sep 17, 2025: Paper published on arXiv.
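
Because the V2 checkpoint is Qwen2.5-VL-compatible, it should be servable with vLLM as the Dec 1 update notes. The sketch below is a hypothetical illustration, not part of the official codebase: the model ID comes from this card, while the sampling settings and port are illustrative assumptions.

```python
# Hypothetical vLLM usage sketch for the EchoVLM V2 preview checkpoint.
# Assumes Qwen2.5-VL compatibility as stated in the updates above; heavy
# dependencies are imported lazily so the helpers stay importable anywhere.

MODEL_ID = "chaoyinshe/EchoVLM_V2_lingshu_base_7b_instruct_preview"

def serve_command(port: int = 8000) -> list:
    """Build the CLI invocation for an OpenAI-compatible vLLM server."""
    return ["vllm", "serve", MODEL_ID, "--port", str(port)]

def generate_offline(prompts: list, max_tokens: int = 256) -> list:
    """Offline batched generation (requires a GPU with vllm installed)."""
    from vllm import LLM, SamplingParams

    llm = LLM(model=MODEL_ID)
    params = SamplingParams(temperature=0.2, max_tokens=max_tokens)
    outputs = llm.generate(prompts, params)
    return [o.outputs[0].text for o in outputs]
```

`serve_command()` mirrors the `vllm serve <model>` entry point, which exposes an OpenAI-compatible endpoint that standard chat clients can call.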

🚀 Quick Start

Usage follows Qwen2.5-VL-7B-Instruct; refer to its model card for environment setup and inference examples.
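
A minimal inference sketch under that assumption, i.e. that the checkpoint follows the standard Qwen2.5-VL interface in Hugging Face transformers. The image path and question are illustrative placeholders, and the heavy dependencies are imported lazily inside the function:

```python
# Sketch of single-turn ultrasound Q&A with EchoVLM, assuming the standard
# Qwen2.5-VL interface (per the Quick Start pointer above). Not the official
# inference script; image path and prompt are illustrative.

MODEL_ID = "chaoyinshe/EchoVLM_V2_lingshu_base_7b_instruct_preview"

def build_messages(image_path: str, question: str) -> list:
    """Assemble a single-turn Qwen2.5-VL-style chat message (image + text)."""
    return [{
        "role": "user",
        "content": [
            {"type": "image", "image": image_path},
            {"type": "text", "text": question},
        ],
    }]

def generate_report(image_path: str, question: str, max_new_tokens: int = 256) -> str:
    """One round of image+text generation (needs a GPU and the model weights)."""
    import torch
    from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
    from qwen_vl_utils import process_vision_info  # helper shipped with Qwen2.5-VL

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(MODEL_ID)

    messages = build_messages(image_path, question)
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to(model.device)

    generated = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the newly generated answer remains.
    trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]
```

For example, `generate_report("scan.png", "Describe the findings in this ultrasound image.")` would return the model's free-text report for that image.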

📌 Citation

If you use this model or code in your research, please cite:

@misc{she2025echovlmdynamicmixtureofexpertsvisionlanguage,
      title={EchoVLM: Dynamic Mixture-of-Experts Vision-Language Model for Universal Ultrasound Intelligence}, 
      author={Chaoyin She and Ruifang Lu and Lida Chen and Wei Wang and Qinghua Huang},
      year={2025},
      eprint={2509.14977},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.14977}, 
}
Format: Safetensors · Model size: 8B params · Tensor type: BF16