Add model card for ChemVLR (#1)
Browse files- Add model card for ChemVLR (8dd36ae90d545b8ce2a070ff64dedd36a4a13bc2)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md
ADDED
---
library_name: transformers
pipeline_tag: image-text-to-text
---

# ChemVLR-7B

[ChemVLR](https://huggingface.co/papers/2604.06685) is a chemical Vision-Language Model (VLM) designed to prioritize reasoning within the perception process. Unlike conventional chemical VLMs, which often behave as "black-box" systems, ChemVLR analyzes visual inputs in a fine-grained manner, explicitly identifying granular chemical descriptors such as functional groups before generating an answer. This yields explicit, interpretable reasoning paths for complex visual chemistry problems.

## Model Description

ChemVLR-7B is built on the Qwen2.5-VL-7B backbone and trained with a three-stage framework that systematically builds perception and reasoning capacity. It uses a curated dataset of 760k high-quality samples spanning molecular and reaction tasks.

- **Paper:** [ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding](https://huggingface.co/papers/2604.06685)
- **Repository:** [https://github.com/xxlllz/ChemVLR](https://github.com/xxlllz/ChemVLR)

## Model Highlights

- **Reasoning-Prioritized Perception:** Explicitly identifies chemical components before answering, producing interpretable outputs.
- **Large-Scale Dataset:** Trained on 760k high-quality reasoning and captioning samples.
- **State-of-the-Art Performance:** Surpasses leading proprietary models and domain-specific open-source baselines on chemical understanding benchmarks.
## Citation

```bibtex
@misc{zhao2026chemvlrprioritizingreasoningperception,
      title={ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding},
      author={Xuanle Zhao and Xinyuan Cai and Xiang Cheng and Xiuyi Chen and Bo Xu},
      year={2026},
      eprint={2604.06685},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.06685},
}
```

## Acknowledgement

ChemVLR is built upon [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) and [Qwen3-VL](https://github.com/QwenLM/Qwen3-VL). We thank these teams for open-sourcing their work!