---
datasets:
- dmarsili/Omni3D-Bench
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen3-8B
tags:
- reasoning
- visual-programming
- program-synthesis
- visual-reasoning
license: mit
---
|
|
# Model Card for VALOR-8B

VALOR-8B is the RL-tuned Qwen3-8B model from the paper [No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers](https://glab-caltech.github.io/valor/).

For further information, please refer to the [project webpage](https://glab-caltech.github.io/valor/), [paper](https://arxiv.org/abs/2512.08889), and [repository](https://github.com/damianomarsili/VALOR).

## Citation

If you use VALOR in your research, please consider citing our work:

**BibTeX:**

```bibtex
@misc{marsili2025labelsproblemtrainingvisual,
      title={No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers},
      author={Damiano Marsili and Georgia Gkioxari},
      year={2025},
      eprint={2512.08889},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.08889},
}
```