---
base_model:
- Qwen/QwQ-32B
datasets:
- inclusionAI/ASearcher-train-data
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- agent
- search
- qwen
---
# ASearcher-Web-QwQ-32B
This model is presented in the paper *Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL*.

- Paper: https://huggingface.co/papers/2508.07976
- Code: https://github.com/inclusionAI/ASearcher
## Introduction
ASearcher is an open-source framework for large-scale online reinforcement learning (RL) training of search agents. Our mission is to advance Search Intelligence to expert-level performance. We are fully committed to open source, releasing model weights, detailed training methodologies, and data construction pipelines. We also provide comprehensive guidance on building and training customized agents on top of AReaL. ASearcher empowers developers to build their own high-performance search agents easily and cost-effectively.

We have released multiple models trained under different settings and based on foundation models of varying sizes. These models achieve strong performance on single-hop and multi-hop QA, as well as on more challenging tool-augmented benchmarks such as GAIA and Xbench.
## Model Download
| Model Name | Base Model | Training Setting | Download Link |
|---|---|---|---|
| ASearcher-Local-7B | Qwen2.5-7B | Local knowledge base with RAG | 🤗Huggingface |
| ASearcher-Web-7B | Qwen2.5-7B | Web-based search and browsing | 🤗Huggingface |
| ASearcher-Local-14B | Qwen2.5-14B | Local knowledge base with RAG | 🤗Huggingface |
| ASearcher-Web-14B | Qwen2.5-14B | Web-based search and browsing | 🤗Huggingface |
| ASearcher-Web-QwQ-32B | QwQ-32B | Web-based search and browsing | 🤗Huggingface |
## Performance

*Evaluation on challenging benchmarks (ASearcher-Web-QwQ)*

*Evaluation with a local knowledge base with RAG*

*Evaluation with web-based search and browsing*
## Dataset Download
We also release our full training and test data so that you can easily obtain them and reproduce our results.
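With the `datasets` library, the training set can typically be fetched via `load_dataset("inclusionAI/ASearcher-train-data")`. As a minimal offline sketch, the helper below parses QA records from a local JSONL export; the `question`/`answer` field names are an assumption for illustration — check the dataset card for the actual schema.

```python
import json


def load_qa_records(path: str) -> list[dict]:
    """Parse a JSONL file into a list of QA dicts, skipping blank lines.

    Assumes one JSON object per line with "question" and "answer" keys
    (a hypothetical schema -- verify against the dataset card).
    """
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records
```

This keeps inspection and filtering of the data independent of any download tooling.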
## Quickstart
To perform text generation with ASearcher-Web-QwQ-32B using the `transformers` library, you can use the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "inclusionAI/ASearcher-Web-QwQ-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

messages = [
    {"role": "user", "content": "What is the capital of France?"},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,  # passes input_ids and attention_mask
    max_new_tokens=512,
)
# Decode only the newly generated tokens, not the echoed prompt.
output_ids = generated_ids[0][model_inputs.input_ids.shape[1]:]
generated_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print(generated_text)
```
For more details and advanced usage, please refer to our GitHub repository: [ASearcher](https://github.com/inclusionAI/ASearcher).