Improve model card and add metadata
#1
by nielsr HF Staff - opened
README.md CHANGED

---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---

# Gen-Searcher SFT Model

This repository contains the Supervised Fine-Tuning (SFT) model presented in the paper: [Gen-Searcher: Reinforcing Agentic Search for Image Generation](https://arxiv.org/abs/2603.28767).
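
The metadata above declares `library_name: transformers` with `pipeline_tag: image-text-to-text`, so loading plausibly follows the standard transformers interface. The sketch below is a hedged illustration only: the repo id is a hypothetical placeholder, and the exact inputs depend on the underlying architecture, which this card does not specify.

```python
# Minimal sketch, assuming the checkpoint works with the standard
# transformers image-text-to-text pipeline; the repo id is hypothetical.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="tulerfeng/Gen-Searcher-SFT")  # hypothetical id

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/reference.jpg"},
        {"type": "text", "text": "What should be generated for this query?"},
    ],
}]
out = pipe(text=messages, max_new_tokens=128)
print(out[0]["generated_text"])
```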

This is an intermediate model prepared for subsequent reinforcement learning (RL) training using the GRPO algorithm with dual reward feedback.
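
As a rough illustration of that RL stage (not the paper's exact recipe), GRPO computes group-normalized advantages over several rollouts of the same prompt; with dual reward feedback, the two reward signals would be combined before normalization. The equal weighting below is an assumption.

```python
# Illustrative GRPO-style advantage computation with a dual reward.
# The equal-weight combination of the two reward signals is an assumption.
import numpy as np

def grpo_advantages(reward_a, reward_b, eps=1e-6):
    """Group-normalize combined rewards over rollouts of a single prompt."""
    r = np.asarray(reward_a, dtype=float) + np.asarray(reward_b, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Four rollouts of one prompt, scored by two hypothetical reward models.
print(grpo_advantages([0.2, 0.8, 0.5, 0.9], [1.0, 0.0, 1.0, 1.0]))
```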

[**🌐 Project Page**](https://gen-searcher.vercel.app/) | [**💻 Code**](https://github.com/tulerfeng/Gen-Searcher) | [**📄 Paper**](https://arxiv.org/abs/2603.28767)

# 📖 Intro

<div align="center">
<img src="https://github.com/tulerfeng/Gen-Searcher/blob/main/assets/teaser.jpg?raw=true" alt="Gen-Searcher Teaser" width="80%">
</div>

We introduce **Gen-Searcher**, the first attempt to train a multimodal **deep research agent** for image generation that requires complex real-world knowledge. Gen-Searcher can **search the web, browse evidence, reason over multiple sources, and search visual references** before generation, enabling more accurate and up-to-date image synthesis in real-world scenarios.
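
To make that loop concrete, here is an illustrative sketch of a search-then-generate control flow; every function and tool name in it is a hypothetical stub standing in for the behaviors described above, not Gen-Searcher's actual interface.

```python
# Illustrative search -> browse -> reason -> generate loop.
# All policy and tool functions are hypothetical stubs, not the real API.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # "search" | "browse" | "image_search" | "generate"
    argument: str  # search query, URL, or final generation prompt

def agent_step(query, evidence, references) -> Action:
    """Stub policy: search, browse the top hit, fetch a visual reference, generate."""
    if not evidence:
        return Action("search", query)
    if len(evidence) == 1:
        return Action("browse", "https://example.com/top-result")
    if not references:
        return Action("image_search", query)
    return Action("generate", f"{query}, grounded in {len(evidence)} sources")

# Stub tools standing in for real web-search, browsing, and generation backends.
def web_search(q): return [f"snippet about {q}"]
def browse(url): return f"page text from {url}"
def image_search(q): return [f"reference image for {q}"]
def generate_image(prompt, refs): return f"<image: '{prompt}' ({len(refs)} refs)>"

def deep_research_generate(query: str, max_steps: int = 6) -> str:
    evidence, references = [], []
    for _ in range(max_steps):
        action = agent_step(query, evidence, references)
        if action.kind == "search":
            evidence.extend(web_search(action.argument))
        elif action.kind == "browse":
            evidence.append(browse(action.argument))
        elif action.kind == "image_search":
            references.extend(image_search(action.argument))
        elif action.kind == "generate":
            return generate_image(action.argument, references)
    return generate_image(query, references)  # fallback after budget is spent

print(deep_research_generate("the newest landmark on the Shanghai skyline"))
```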

We build two dedicated training datasets, **Gen-Searcher-SFT-10k** and **Gen-Searcher-RL-6k**, and one new benchmark, **KnowGen**, for search-grounded image generation.

Gen-Searcher achieves significant improvements, delivering **15+ point gains** on …

All code, models, data, and the benchmark are fully released.
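
If the datasets and benchmark are hosted on the Hugging Face Hub, loading them could look like the sketch below; the repo ids are hypothetical guesses, so check the code repository for the real paths and splits.

```python
# Hedged sketch: every dataset repo id below is a hypothetical placeholder.
from datasets import load_dataset

sft = load_dataset("tulerfeng/Gen-Searcher-SFT-10k", split="train")  # hypothetical id
rl = load_dataset("tulerfeng/Gen-Searcher-RL-6k", split="train")     # hypothetical id
knowgen = load_dataset("tulerfeng/KnowGen", split="test")            # hypothetical id

print(len(sft), sft.column_names)
```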

## 🔥 Demo

#### Inference Process Example

<div align="center">
<img src="https://github.com/tulerfeng/Gen-Searcher/blob/main/assets/example.jpg?raw=true" alt="Inference Process Example" width="85%">
</div>

For more examples, please refer to our [🌐 Project Page](https://gen-searcher.vercel.app/).

## Citation

If you find our work helpful for your research, please consider citing:

```bibtex
@article{feng2026gensearcher,
  title={Gen-Searcher: Reinforcing Agentic Search for Image Generation},
  author={Kaituo Feng and Manyuan Zhang and Shuang Chen and Yunlong Lin and Kaixuan Fan and Yilei Jiang and Hongyu Li and Dian Zheng and Chenyang Wang and Xiangyu Yue},
  journal={arXiv preprint arXiv:2603.28767},
  year={2026}
}
```