---
base_model:
  - Qwen/QwQ-32B
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
  - web-agent
  - gui-agent
  - multimodal
  - reinforcement-learning
  - react
---

WebDancer: Towards Autonomous Information Seeking Agency

This repository contains the WebDancer model, presented in the paper WebDancer: Towards Autonomous Information Seeking Agency.

Code & Project Page: https://github.com/Alibaba-NLP/WebAgent

Abstract

Addressing intricate real-world problems necessitates in-depth information seeking and multi-step reasoning. Recent progress in agentic systems, exemplified by Deep Research, underscores the potential for autonomous multi-step research. In this work, we present a cohesive paradigm for building end-to-end agentic information seeking agents from a data-centric and training-stage perspective. Our approach consists of four key stages: (1) browsing data construction, (2) trajectory sampling, (3) supervised fine-tuning for effective cold start, and (4) reinforcement learning for enhanced generalisation. We instantiate this framework in WebDancer, a web agent based on ReAct. Empirical evaluations on the challenging information seeking benchmarks GAIA and WebWalkerQA demonstrate the strong performance of WebDancer and highlight the efficacy of our training paradigm. Further analysis of agent training provides valuable insights and actionable, systematic pathways for developing more capable agentic models. The code and demo will be released at https://github.com/Alibaba-NLP/WebAgent.

Features for WebDancer

  • Native agentic search and reasoning model built on the ReAct framework, aimed at autonomous information seeking agency and Deep Research-like capability.
  • We introduce a four-stage training paradigm comprising browsing data construction, trajectory sampling, supervised fine-tuning for an effective cold start, and reinforcement learning for improved generalization, enabling the agent to acquire search and reasoning skills autonomously.
  • Our data-centric approach integrates trajectory-level supervised fine-tuning and reinforcement learning (DAPO) to provide a scalable pipeline for training agentic systems via SFT or RL.
  • WebDancer achieves a Pass@3 score of 64.1% on GAIA and 62.0% on WebWalkerQA.

Quick Start

Run the following commands from the WebDancer folder of the WebAgent repository.
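If you have not cloned the repository yet, the commands below should put you in the right place (the folder layout is inferred from the script paths referenced later in this card):

git clone https://github.com/Alibaba-NLP/WebAgent.git
cd WebAgent/WebDancer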

Step 0: Set Up the Environment

conda create -n webdancer python=3.12
conda activate webdancer
pip install -r requirements.txt

Step 1: Deploy the Model

Download the WebDancer model from 🤗 HuggingFace and deploy it using the provided scripts with sglang.

cd scripts
bash deploy_model.sh WebDancer_PATH

Note: Replace WebDancer_PATH with the actual path to the downloaded model.
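For reference, the deploy script presumably wraps a standard sglang server launch. The sketch below shows roughly equivalent manual steps; the Hugging Face repo id, port, and tensor-parallel degree are illustrative assumptions, not values read from the script:

# Download the weights (repo id assumed; adjust to the actual Hugging Face repo)
huggingface-cli download Alibaba-NLP/WebDancer-32B --local-dir ./WebDancer-32B

# Serve the model with sglang (port and tensor-parallel size are illustrative)
python -m sglang.launch_server \
  --model-path ./WebDancer-32B \
  --host 0.0.0.0 \
  --port 30000 \
  --tp 4

# The server exposes an OpenAI-compatible API; quick smoke test:
curl -s http://localhost:30000/v1/models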

Step 2: Run the Demo

Edit the following keys in WebDancer/scripts/run_demo.sh (a sketch of the expected lines follows the list):

  • GOOGLE_SEARCH_KEY
  • JINA_API_KEY
  • DASHSCOPE_API_KEY
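Inside run_demo.sh these keys are presumably set as environment variables; the export form and placeholder values below are assumptions based on the key names:

export GOOGLE_SEARCH_KEY="your_google_search_key"   # search API key (assumed export form)
export JINA_API_KEY="your_jina_api_key"             # Jina API key (assumed export form)
export DASHSCOPE_API_KEY="your_dashscope_api_key"   # DashScope API key for Qwen access (assumed)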

Then, launch the demo with Gradio to interact with the WebDancer model:

cd scripts
bash run_demo.sh
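If you prefer to script against the deployed model directly rather than use the Gradio UI, the sglang server from Step 1 speaks the OpenAI chat-completions protocol. A minimal query, reusing the illustrative port from above (the model field should match a name listed by /v1/models):

curl -s http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "WebDancer-32B", "messages": [{"role": "user", "content": "Hello"}]}'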

Demos

We provide demos for WebWalkerQA, GAIA, and Daily Use. Our model can execute long-horizon tasks with multiple steps and complex reasoning, such as web traversal, information seeking, and question answering.


Citation

If this work is helpful, please cite:

@misc{wu2025webdancer,
      title={WebDancer: Towards Autonomous Information Seeking Agency},
      author={Jialong Wu and Baixuan Li and Runnan Fang and Wenbiao Yin and Liwen Zhang and Zhengwei Tao and Dingchu Zhang and Zekun Xi and Yong Jiang and Pengjun Xie and Fei Huang and Jingren Zhou},
      year={2025},
      eprint={2505.22648},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.22648},
}
@misc{wu2025webwalker,
      title={WebWalker: Benchmarking LLMs in Web Traversal},
      author={Jialong Wu and Wenbiao Yin and Yong Jiang and Zhenglin Wang and Zekun Xi and Runnan Fang and Deyu Zhou and Pengjun Xie and Fei Huang},
      year={2025},
      eprint={2501.07572},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.07572},
}