---
license: apache-2.0
language:
  - en
base_model:
  - Qwen/Qwen3-4B
library_name: transformers
pipeline_tag: text-generation
tags:
  - arxiv:2602.04634
metrics:
  - accuracy
model-index:
  - name: WideSeek-R1-4B
    results:
      - task:
          type: WideSearch
        dataset:
          type: WideSearch
          name: WideSearch
        metrics:
          - type: accuracy
            value: 40.0
---

# WideSeek-R1-4B

<div align="center">

[**๐ŸŒ Project Page**](https://wideseek-r1.github.io/) | [**๐Ÿ“„ Paper**](https://arxiv.org/pdf/2602.04634) | [**๐Ÿ’ป Code**](https://github.com/RLinf/RLinf/tree/main/examples/wideseek_r1) | [**๐Ÿ“ฆ Dataset**](https://huggingface.co/datasets/RLinf/WideSeek-R1-train-data) | [**๐Ÿค— Models**](https://huggingface.co/RLinf/WideSeek-R1-4b)

</div>

## Overview

![image](fig/scaling.png)

Recent advancements in Large Language Models (LLMs) have largely focused on depth scaling, where a single agent solves long-horizon problems with multi-turn reasoning and tool use. However, as tasks grow broader, the key bottleneck shifts from individual competence to organizational capability.

In this work, we explore a complementary dimension of width scaling with multi-agent systems to address broad information seeking. Existing multi-agent systems often rely on hand-crafted workflows and turn-taking interactions that fail to parallelize work effectively. To bridge this gap, we propose WideSeek-R1, a lead-agent-subagent framework trained via multi-agent reinforcement learning (MARL) to synergize scalable orchestration and parallel execution. By utilizing a shared LLM with isolated contexts and specialized tools, WideSeek-R1 jointly optimizes the lead agent and parallel subagents on a curated dataset of 20k broad information-seeking tasks.
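The lead-agent/subagent pattern above can be sketched in plain Python. This is a hypothetical, simplified illustration only (all function names are illustrative, and real subagents would invoke the shared LLM with tools rather than return stubs): a lead agent decomposes a broad query into sub-tasks, subagents run in parallel with isolated contexts, and the lead agent aggregates their findings.

```python
from concurrent.futures import ThreadPoolExecutor

def lead_decompose(query: str, n_subagents: int) -> list[str]:
    """Lead agent: split a broad information-seeking task into sub-tasks (stubbed)."""
    return [f"{query} [shard {i}]" for i in range(n_subagents)]

def subagent_run(sub_query: str) -> dict:
    """Subagent: handle one sub-query in an isolated context (stubbed; a real
    subagent would call the shared LLM with its search tools here)."""
    return {"query": sub_query, "items": [f"result of {sub_query}"]}

def lead_aggregate(results: list[dict]) -> list[str]:
    """Lead agent: merge per-subagent findings into one answer set."""
    merged: list[str] = []
    for r in results:
        merged.extend(r["items"])
    return merged

def wide_search(query: str, n_subagents: int = 4) -> list[str]:
    sub_queries = lead_decompose(query, n_subagents)
    # Width scaling: subagents execute in parallel rather than taking turns.
    with ThreadPoolExecutor(max_workers=n_subagents) as pool:
        results = list(pool.map(subagent_run, sub_queries))
    return lead_aggregate(results)
```

Increasing `n_subagents` widens the search in this sketch, mirroring the width-scaling axis the framework optimizes with MARL.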

Extensive experiments show that WideSeek-R1-4B achieves an item F1 score of 40.0% on the WideSearch benchmark, which is comparable to the performance of single-agent DeepSeek-R1-671B. Furthermore, WideSeek-R1-4B exhibits consistent performance gains as the number of parallel subagents increases, highlighting the effectiveness of width scaling.

For more details, see our [project page](https://thu-nics.github.io/WideSeek-R1/).

## Citation

If you use this model in your research, please cite our paper:

```bibtex
@article{xu2026wideseek,
  title   = {WideSeek-R1: Exploring Width Scaling for Broad Information Seeking via Multi-Agent Reinforcement Learning},
  author  = {Xu, Zelai and Xu, Zhexuan and Zhang, Ruize and Zhu, Chunyang and Yu, Shi and Liu, Weilin and Zhang, Quanlu and Ding, Wenbo and Yu, Chao and Wang, Yu},
  journal = {arXiv preprint arXiv:2602.04634},
  year    = {2026},
}
```