---
library_name: transformers
pipeline_tag: image-text-to-text
---

ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding

ChemVLR is a chemical Vision-Language Model (VLM) designed to prioritize reasoning within the perception process. Unlike conventional chemical VLMs that often operate as "black-box" systems, ChemVLR analyzes visual inputs in a fine-grained manner, explicitly identifying granular chemical descriptors such as functional groups before generating an answer. This yields explicit, interpretable reasoning paths for complex visual chemistry problems.

Model Highlights

  • Reasoning-Prioritized Perception: Explicitly identifies chemical descriptors like functional groups before answer generation.
  • Interpretable Paths: Generates granular reasoning steps for molecular and reaction tasks.
  • Large-Scale Data: Trained on a curated reasoning-and-captioning dataset comprising 760k high-quality samples.
  • Backbone: This version of ChemVLR is built upon the Qwen3-VL-8B-Instruct architecture.
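Since ChemVLR-8B is built on the Qwen3-VL-8B-Instruct backbone, it should be loadable through the standard transformers image-text-to-text interface. The sketch below shows one way to query it with a molecule image; note that `MODEL_ID`, the image path, and the prompt are placeholders (this README does not specify a Hub repo id or a prescribed prompt format), and a recent transformers release with Qwen3-VL support is assumed.

```python
# Minimal inference sketch for ChemVLR-8B, assuming the standard
# transformers image-text-to-text API (Qwen3-VL-style chat template).
# MODEL_ID is a placeholder -- substitute the actual Hub repo id.

MODEL_ID = "ChemVLR-8B"  # placeholder, not a verified repo id


def build_messages(image_path: str, question: str) -> list:
    """Build a single-turn chat message pairing an image with a question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": question},
            ],
        }
    ]


def run_inference(image_path: str, question: str, model_id: str = MODEL_ID) -> str:
    """Load the model, run one image-question turn, return the decoded answer."""
    from transformers import AutoModelForImageTextToText, AutoProcessor

    model = AutoModelForImageTextToText.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    inputs = processor.apply_chat_template(
        build_messages(image_path, question),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)

    output_ids = model.generate(**inputs, max_new_tokens=512)
    # Strip the prompt tokens so only the newly generated answer is decoded.
    return processor.batch_decode(
        output_ids[:, inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )[0]
```

A call like `run_inference("molecule.png", "Identify the functional groups in this structure, then classify the compound.")` would exercise the reasoning-prioritized behavior described above; the call downloads the checkpoint, so it is left out of the module body here.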

Citation

@misc{zhao2026chemvlrprioritizingreasoningperception,
      title={ChemVLR: Prioritizing Reasoning in Perception for Chemical Vision-Language Understanding}, 
      author={Xuanle Zhao and Xinyuan Cai and Xiang Cheng and Xiuyi Chen and Bo Xu},
      year={2026},
      eprint={2604.06685},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.06685}, 
}

Acknowledgement

ChemVLR is built upon the open-source work of Qwen2.5-VL and Qwen3-VL.