
TaxoBench: A Hierarchical Taxonomy Generation and Evaluation Benchmark for Scientific Literature

License: Apache 2.0 · Python 3.10+ · Paper


πŸ“– Project Overview

TaxoBench is the first benchmark framework to systematically evaluate Deep Research Agents and Large Language Models (LLMs) on their ability to organize, summarize, and construct hierarchical knowledge structures from scientific literature, measured against the cognitive structures of human experts.

This project is based on the paper from Fudan University NLP Lab:

Can Deep Research Agents Find and Organize?
Evaluating the Synthesis Gap with Expert Taxonomies

πŸ“š Data Sources

  • 72 highly-cited computer science survey papers (Survey Topics)
  • Expert-constructed Taxonomy Trees
  • 3,815 precisely classified cited papers as Ground Truth

🎯 Evaluation Modes

TaxoBench implements the two core evaluation paradigms defined in the paper:

  1. Deep Research Mode
    End-to-end evaluation: Retrieval β†’ Filtering β†’ Organization β†’ Structured Summarization

  2. Bottom-Up Mode (focus of this repository)
    Given a fixed collection of papers, evaluate the model's ability to build hierarchical knowledge structures (taxonomies) bottom-up
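As a rough illustration of the Bottom-Up Mode input/output shape, the toy sketch below groups a fixed collection of papers into a one-level hierarchy by their `core_task` field (a field documented in the `pdfs` metadata). This is purely illustrative: a real system would use an LLM to induce a multi-level taxonomy, and the paper titles here are hypothetical.

```python
from collections import defaultdict

def build_taxonomy(papers):
    """Toy bottom-up grouping: cluster papers by their 'core_task' field.

    A real TaxoBench submission would build a multi-level tree with an LLM;
    this only illustrates the fixed-collection -> hierarchy task shape.
    """
    tree = defaultdict(list)
    for p in papers:
        tree[p["core_task"]].append(p["title"])
    return dict(tree)

# Hypothetical paper metadata, mimicking the pdfs field's structure.
papers = [
    {"title": "Paper A", "core_task": "retrieval"},
    {"title": "Paper B", "core_task": "retrieval"},
    {"title": "Paper C", "core_task": "summarization"},
]

print(build_taxonomy(papers))
# {'retrieval': ['Paper A', 'Paper B'], 'summarization': ['Paper C']}
```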


🌟 Key Features

  • πŸ§ͺ Two-level Evaluation Architecture

    • Leaf-Level: Retrieval & clustering quality
    • Hierarchy-Level: Reasonableness of the classification tree structure
  • ⚑ High-throughput concurrent inference

    • Based on Python multiprocessing
    • Supports large-scale parallelism
  • 🧠 Native support for Thinking / Reasoning modes

    • Well adapted to reasoning-enhanced models:
      • DeepSeek-R1 / V3
      • Claude 4.5 Sonnet
      • Kimi-k2-Thinking, etc.
  • πŸ”Œ Unified interface for multiple models

    • OpenAI (GPT-5)
    • Anthropic (Claude 4.5)
    • Google (Gemini 3)
    • DeepSeek / Qwen / Moonshot (Kimi)

πŸ“‚ Repository Structure

TaxoBench/
β”œβ”€β”€ data.jsonl                # Benchmark data (one JSON object per survey topic)
β”œβ”€β”€ README.md
└── README_zh.md

πŸ“„ Data Format (data.jsonl)

This repository currently releases the benchmark data file data.jsonl.

  • Each line is one JSON object for a survey topic.
  • Key fields:

Top-Level Fields

| Field | Type | Description |
| --- | --- | --- |
| `id` | int | Survey topic ID (0-indexed) |
| `survey` | str | Full title of the survey paper |
| `survey_topic` | str | Survey topic (same as `survey` in most cases) |
| `survey_topic_path` | str | File path identifier for the survey topic |
| `gt_paper_count` | int | Total number of ground-truth papers in this survey |
| `gt` | dict | Expert-constructed taxonomy tree |
| `pdfs` | list[dict] | List of papers with metadata (contains `core_task`/`summary`/`contributions`) |
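A minimal sketch of reading one record, assuming only the top-level fields documented above. The sample line here is fabricated for illustration; in practice you would iterate over the lines of data.jsonl instead.

```python
import json

# Hypothetical single line of data.jsonl, following the documented schema.
# Field values are illustrative, not taken from the real dataset.
sample_line = json.dumps({
    "id": 0,
    "survey": "A Survey of Example Methods",
    "survey_topic": "A Survey of Example Methods",
    "survey_topic_path": "example_methods",
    "gt_paper_count": 2,
    "gt": {"Example Methods": {"Subtopic A": ["Paper 1"], "Subtopic B": ["Paper 2"]}},
    "pdfs": [
        {"core_task": "...", "summary": "...", "contributions": "..."},
    ],
})

# Each line of data.jsonl is one JSON object for one survey topic.
record = json.loads(sample_line)
print(record["survey"], record["gt_paper_count"])
```

To process the full benchmark file, the same `json.loads` call would be applied per line, e.g. `records = [json.loads(line) for line in open("data.jsonl")]`.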

πŸ“ Citation

If you use this code or dataset in your research, please cite our paper:

@misc{zhang2026deepresearchagentsorganize,
      title={Can Deep Research Agents Find and Organize? Evaluating the Synthesis Gap with Expert Taxonomies}, 
      author={Ming Zhang and Jiabao Zhuang and Wenqing Jing and Ziyu Kong and Jingyi Deng and Yujiong Shen and Kexin Tan and Yuhang Zhao and Ning Luo and Renzhe Zheng and Jiahui Lin and Mingqi Wu and Long Ma and Yi Zou and Shihan Dou and Tao Gui and Qi Zhang and Xuanjing Huang},
      year={2026},
      eprint={2601.12369},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2601.12369}, 
}

πŸ“„ License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
