---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- reinforcement-learning
- agents
- web-search
- LLM
- long-horizon
---

# ASearcher-train-data: Training Data for Long-Horizon Agentic Search

This repository contains ASearcher-train-data, a large-scale, open-source training dataset integral to the ASearcher project. ASearcher is an open-source framework designed for large-scale online reinforcement learning (RL) training of search agents, aiming to advance Search Intelligence to expert-level performance.

The dataset is presented in the paper [Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL](https://huggingface.co/papers/2508.07976). It comprises high-quality, challenging question-answering (QA) pairs, autonomously synthesized by a prompt-based LLM agent, that enable agents to learn complex, long-horizon search strategies.

- **Paper:** https://huggingface.co/papers/2508.07976
- **Code:** https://github.com/inclusionAI/AReaL
- **Project Page:** https://inclusionai.github.io/AReaL/
- **Related Models:** https://huggingface.co/collections/inclusionAI/asearcher-6891d8acad5ebc3a1e1fb2d1

## Introduction

ASearcher is an open-source framework designed for large-scale online reinforcement learning (RL) training of search agents. Our mission is to advance Search Intelligence to expert-level performance. We are fully committed to open-source by releasing model weights, detailed training methodologies, and data synthesis pipelines. This dataset empowers developers to build their own high-performance search agents easily and cost-effectively.

## Data Synthesis

The training data in this repository is generated using a prompt-based LLM agent designed to autonomously create grounded, challenging, and highly uncertain QA pairs. The synthesis process begins with basic questions, which the agent then iteratively refines through two key strategies:

- **Fuzzing:** Increasing uncertainty by obscuring key details in the query.
- **Context Injection:** Augmenting questions with external facts retrieved via tools to deepen complexity.
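The refinement loop can be sketched as follows. This is a toy illustration only: in the actual pipeline both strategies are carried out by a prompt-based LLM agent with tool access, whereas here simple string operations and the function names `fuzz` and `inject_context` are hypothetical stand-ins.

```python
# Toy sketch of the two refinement strategies (illustrative only;
# the real agent performs these steps via LLM prompts and tool calls).

def fuzz(question: str, detail: str) -> str:
    """Increase uncertainty by obscuring a key detail in the query."""
    return question.replace(detail, "a certain entity")

def inject_context(question: str, fact: str) -> str:
    """Deepen complexity by prepending an externally retrieved fact."""
    return f"{fact} Given this, {question[0].lower()}{question[1:]}"

q0 = "Which team did Alice Smith captain in 2019?"
q1 = fuzz(q0, "Alice Smith")
q2 = inject_context(q1, "The captain later became a sports commentator.")
print(q2)
```

Starting from a basic seed question, the agent alternates such steps until the question is grounded, challenging, and highly uncertain.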

Each generated question undergoes rigorous multi-stage validation:

- **Quality Assurance:** Checks for fluency, timeliness, and logical coherence.
- **Difficulty Verification:** Compares answers generated by a large reasoning model (LRM) against the ground truth, keeping only questions the LRM fails, so that every retained question is genuinely challenging.
- **Answer Uniqueness Validation:** Confirms that incorrect LRM answers are indeed invalid (not acceptable alternative answers), preserving question integrity.
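The three stages above act as a sequential filter: a QA pair survives only if every stage passes. A minimal sketch, assuming placeholder predicates where the real pipeline uses LLM judges and an LRM:

```python
# Hypothetical sketch of the multi-stage validation filter.
# The predicate bodies are placeholders; in the real pipeline these
# checks are performed by LLM-based judges and a large reasoning model.

def quality_ok(qa: dict) -> bool:
    # Placeholder for fluency / timeliness / coherence checks.
    return bool(qa["question"]) and bool(qa["answer"])

def difficult_enough(qa: dict, lrm_answer: str) -> bool:
    # Keep the question only if the reasoning model fails to answer it.
    return lrm_answer.strip().lower() != qa["answer"].strip().lower()

def answer_unique(qa: dict, lrm_answer: str, is_valid_alternative) -> bool:
    # A wrong LRM answer must be genuinely invalid, not a valid alternative.
    return not is_valid_alternative(lrm_answer)

def validate(qa, lrm_answer, is_valid_alternative) -> bool:
    return (quality_ok(qa)
            and difficult_enough(qa, lrm_answer)
            and answer_unique(qa, lrm_answer, is_valid_alternative))

qa = {"question": "Who founded the observatory?", "answer": "J. Doe"}
print(validate(qa, "A. Roe", lambda ans: False))  # kept: LRM failed, wrong answer invalid
```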

## Sample Usage

You can easily load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the ASearcher training dataset
dataset = load_dataset("inclusionAI/ASearcher-train-data")

# Print the dataset structure
print(dataset)

# Access a sample (e.g., the first item in the 'train' split)
print(dataset["train"][0])
```

## Citation

If you find our work useful, please cite our paper:

```bibtex
@misc{gao2025turnsunlockinglonghorizonagentic,
      title={Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL},
      author={Jiaxuan Gao and Wei Fu and Minyang Xie and Shusheng Xu and Chuyi He and Zhiyu Mei and Banghua Zhu and Yi Wu},
      year={2025},
      eprint={2508.07976},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2508.07976},
}
```