Update README.md

README.md CHANGED

````diff
@@ -3,7 +3,7 @@ license: apache-2.0
 base_model:
 - Qwen/Qwen2.5-VL-7B-Instruct
 ---
-[Paper]() | [Code](https://github.com/Cominclip/OmniVerifier)
+[Paper](https://arxiv.org/abs/2510.13804) | [Code](https://github.com/Cominclip/OmniVerifier)
 
 We introduce **Generative Universal Verifier**, a novel concept and plugin designed for next-generation multimodal reasoning in vision-language models and unified multimodal models, providing the fundamental capability of reflection and refinement on visual outcomes during the reasoning and generation process.
 
@@ -18,7 +18,7 @@ OmniVerifier advances both reliable reflection during generation and scalable te
 @article{zhang2025generative,
   author = {Zhang, Xinchen and Zhang, Xiaoying and Wu, Youbin and Cao, Yanbin and Zhang, Renrui and Chu, Ruihang and Yang, Ling and Yang, Yujiu},
   title = {Generative Universal Verifier as Multimodal Meta-Reasoner},
-  journal = {arXiv preprint arXiv:
+  journal = {arXiv preprint arXiv:2510.13804},
   year = {2025}
 }
 ```
````