# TaxoBench: A Hierarchical Taxonomy Generation and Evaluation Benchmark for Scientific Literature
## Project Overview

TaxoBench is the first benchmark that systematically evaluates Deep Research Agents and Large Language Models (LLMs) on their ability to organize, summarize, and construct hierarchical knowledge structures from scientific literature, judged against the cognitive structures of human experts.
This project is based on the following paper from the Fudan University NLP Lab:

*Can Deep Research Agents Find and Organize? Evaluating the Synthesis Gap with Expert Taxonomies*
## Data Sources
- 72 highly-cited computer science survey papers (Survey Topics)
- Expert-constructed Taxonomy Trees
- 3,815 precisely classified cited papers serving as ground truth
## Evaluation Modes
TaxoBench implements the two core evaluation paradigms defined in the paper:
- **Deep Research Mode**: end-to-end evaluation spanning Retrieval → Filtering → Organization → Structured Summarization
- **Bottom-Up Mode** (focus of this repository): given a fixed collection of papers, evaluate the model's ability to build a hierarchical knowledge structure (taxonomy) bottom-up, as illustrated below
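To make the Bottom-Up target concrete, a taxonomy can be pictured as a nested mapping from category names to subcategories, with papers at the leaves. The toy structure below is purely illustrative; the actual schema is defined by the `gt` field documented under Data Format.

```python
# Illustrative shape of a hierarchical taxonomy (not the exact gt schema):
# inner nodes map category names to subtrees; leaves are lists of papers.
toy_taxonomy = {
    "Prompting Methods": {
        "Zero-Shot Prompting": ["Paper A", "Paper B"],
        "Chain-of-Thought": ["Paper C"],
    },
    "Training Methods": {
        "Instruction Tuning": ["Paper D", "Paper E"],
    },
}

def count_leaves(node) -> int:
    """Count papers at the leaves of a nested taxonomy dict."""
    if isinstance(node, list):  # leaf: list of papers
        return len(node)
    return sum(count_leaves(child) for child in node.values())

print(count_leaves(toy_taxonomy))  # -> 5
```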
## Key Features
### Two-Level Evaluation Architecture
- Leaf-Level: retrieval and clustering quality (one possible metric is sketched below)
- Hierarchy-Level: reasonableness of the classification tree structure
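As an illustration of leaf-level scoring, one common clustering-quality measure (not necessarily the exact metric used in the paper) is pairwise F1 over co-clustered paper pairs:

```python
from itertools import combinations

def pairwise_f1(pred: list[set[str]], gold: list[set[str]]) -> float:
    """Pairwise F1 between two clusterings of the same set of papers."""
    def pairs(clusters):
        # All unordered pairs of papers that share a cluster.
        return {p for c in clusters for p in combinations(sorted(c), 2)}
    p, g = pairs(pred), pairs(gold)
    if not p or not g:
        return 0.0
    precision, recall = len(p & g) / len(p), len(p & g) / len(g)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = [{"Paper A", "Paper B"}, {"Paper C"}]
pred = [{"Paper A", "Paper B", "Paper C"}]
print(round(pairwise_f1(pred, gold), 3))  # -> 0.5
```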
### High-Throughput Concurrent Inference
- Based on Python `multiprocessing` (see the sketch below)
- Supports large-scale parallelism
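As a rough sketch of the concurrency pattern rather than the repository's actual code, the snippet below fans prompts out across worker processes with `multiprocessing.Pool`; `query_model` is a hypothetical stand-in for whichever API client you use.

```python
from multiprocessing import Pool

def query_model(prompt: str) -> str:
    # Hypothetical placeholder: call your LLM API here and return its text.
    return f"response to: {prompt}"

def run_parallel(prompts: list[str], workers: int = 8) -> list[str]:
    """Send prompts to the model concurrently, preserving input order."""
    with Pool(processes=workers) as pool:
        return pool.map(query_model, prompts)

if __name__ == "__main__":  # guard required for multiprocessing spawn mode
    print(run_parallel(["cluster these papers", "name this category"]))
```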
### Native Support for Thinking / Reasoning Modes
- Well adapted to reasoning-enhanced models, e.g.:
  - DeepSeek-R1 / V3
  - Claude 4.5 Sonnet
  - Kimi-k2-Thinking
### Unified Interface for Multiple Models
- OpenAI (GPT-5)
- Anthropic (Claude 4.5)
- Google (Gemini 3)
- DeepSeek / Qwen / Moonshot (Kimi)
## Repository Structure

```
TaxoBench/
├── data.jsonl    # Benchmark data (one JSON object per survey topic)
├── README.md
└── README_zh.md
```
## Data Format (`data.jsonl`)

This repository currently releases the benchmark data file `data.jsonl`.
- Each line is one JSON object for a survey topic.
- Key fields are listed below.
### Top-Level Fields

| Field | Type | Description |
|---|---|---|
| `id` | `int` | Survey topic ID (0-indexed) |
| `survey` | `str` | Full title of the survey paper |
| `survey_topic` | `str` | Survey topic (same as `survey` in most cases) |
| `survey_topic_path` | `str` | File path identifier for the survey topic |
| `gt_paper_count` | `int` | Total number of ground-truth papers in this survey |
| `gt` | `dict` | Expert-constructed taxonomy tree |
| `pdfs` | `list[dict]` | List of papers with metadata (contains `core_task`/`summary`/`contributions`) |
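A minimal loading sketch, assuming only that `data.jsonl` sits in the working directory and follows the fields above:

```python
import json

# Read one survey topic per line from the benchmark file.
with open("data.jsonl", encoding="utf-8") as f:
    topics = [json.loads(line) for line in f]

for topic in topics:
    print(topic["id"], topic["survey"], topic["gt_paper_count"])
    # topic["gt"] holds the expert taxonomy tree (dict);
    # topic["pdfs"] holds the cited papers and their metadata.
```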
## Citation
If you use this code or dataset in your research, please cite our paper:
```bibtex
@misc{zhang2026deepresearchagentsorganize,
  title={Can Deep Research Agents Find and Organize? Evaluating the Synthesis Gap with Expert Taxonomies},
  author={Ming Zhang and Jiabao Zhuang and Wenqing Jing and Ziyu Kong and Jingyi Deng and Yujiong Shen and Kexin Tan and Yuhang Zhao and Ning Luo and Renzhe Zheng and Jiahui Lin and Mingqi Wu and Long Ma and Yi Zou and Shihan Dou and Tao Gui and Qi Zhang and Xuanjing Huang},
  year={2026},
  eprint={2601.12369},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2601.12369},
}
```
## License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.