Add model card for ChemVLR
Hi, I'm Niels from the Hugging Face team. This PR adds a model card for ChemVLR, a specialized vision-language model for chemical understanding, as presented in the paper [ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding](https://huggingface.co/papers/2604.06685).
The PR includes:
- Metadata for `pipeline_tag` (`image-text-to-text`) and `library_name` (`transformers`).
- Links to the paper and the official GitHub repository.
- A brief overview of the model based on the paper's abstract and GitHub README.
- The BibTeX citation for the paper.
README.md (added)
---
library_name: transformers
pipeline_tag: image-text-to-text
---

# ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding

ChemVLR is a chemical Vision-Language Model (VLM) designed to prioritize reasoning within the perception process. Unlike conventional chemical VLMs, which often behave as "black-box" systems, ChemVLR analyzes visual inputs at a fine-grained level, explicitly identifying granular chemical descriptors, such as functional groups, before generating an answer. This approach produces explicit, interpretable reasoning paths for complex visual chemistry problems.

## Resources

- **Paper:** [ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding](https://huggingface.co/papers/2604.06685)
- **Repository:** [GitHub - xxlllz/ChemVLR](https://github.com/xxlllz/ChemVLR)

## Model Highlights

- **Reasoning-Prioritized Perception:** Explicitly identifies chemical descriptors such as functional groups before answer generation.
- **Interpretable Paths:** Generates granular reasoning steps for molecular and reaction tasks.
- **Large-Scale Data:** Trained on a curated reasoning-and-captioning dataset of 760k high-quality samples.
- **Backbone:** This version of ChemVLR is built on the Qwen3-VL-8B-Instruct architecture.
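Since the card's metadata declares the `image-text-to-text` pipeline and the `transformers` library, usage might look like the following sketch. Note the repository id `xxlllz/ChemVLR`, the image URL, and the prompt are illustrative assumptions not confirmed by this card; check the Hub for the actual checkpoint name.

```python
from transformers import pipeline


def ask_chemvlr(image_url: str, question: str, model_id: str = "xxlllz/ChemVLR") -> str:
    """Ask ChemVLR a question about a molecule image.

    ``model_id`` is an assumed repository name; replace it with the
    real checkpoint id from the Hugging Face Hub.
    """
    # The card's pipeline_tag maps to transformers' image-text-to-text pipeline.
    pipe = pipeline("image-text-to-text", model=model_id)

    # Chat-style input: one image plus a text question in a single user turn.
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]

    out = pipe(text=messages, max_new_tokens=256)
    return out[0]["generated_text"]


if __name__ == "__main__":
    print(
        ask_chemvlr(
            "https://example.com/molecule.png",
            "Identify the functional groups in this molecule.",
        )
    )
```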
## Citation

```bibtex
@misc{zhao2026chemvlrprioritizingreasoningperception,
      title={ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding},
      author={Xuanle Zhao and Xinyuan Cai and Xiang Cheng and Xiuyi Chen and Bo Xu},
      year={2026},
      eprint={2604.06685},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.06685},
}
```
## Acknowledgement

ChemVLR is built upon the open-source work of [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL) and [Qwen3-VL](https://github.com/QwenLM/Qwen3-VL).