Improve dataset card: Add 'text-generation' task category, relevant tags, HF paper link, abstract, and project page
#2
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,37 +1,52 @@
  1 |   ---
  2 | - license: mit
  3 | - task_categories:
  4 | - - feature-extraction
  5 | - - question-answering
  6 |   language:
  7 |   - en
  8 | - tags:
  9 | - - code
 10 | - pretty_name: DeepScholarBench Dataset
 11 |   size_categories:
 12 |   - 1K<n<10K
 13 |   configs:
 14 |   - config_name: papers
 15 | -   data_files:
 16 |   - config_name: citations
 17 | -   data_files:
 18 |   - config_name: important_citations
 19 | -   data_files:
 20 |   - config_name: full
 21 | -   data_files:
 22 |   ---
 23 |   # DeepScholarBench Dataset
 24 |
 25 |   [](https://huggingface.co/datasets/deepscholar-bench/DeepScholarBench)
 26 |   [](https://github.com/guestrin-lab/deepscholar-bench)
 27 |   [](https://github.com/guestrin-lab/deepscholar-bench/blob/main/LICENSE)
 28 | - [](https://arxiv.org/abs/2508.20033)
 29 |   [](https://guestrin-lab.github.io/deepscholar-leaderboard/leaderboard/deepscholar_bench_leaderboard.html)
 30 |
 31 |   ---
 32 |
 33 |   A comprehensive dataset of academic papers with extracted related works sections and recovered citations, designed for training and evaluating research generation systems.
 34 |
 35 |   ## Dataset Overview
 36 |
 37 |   This dataset contains **63 academic papers** from ArXiv with their related works sections and **1630 recovered citations**, providing a rich resource for research generation and citation analysis tasks.
@@ -70,7 +85,7 @@ Contains academic papers with extracted related works sections in multiple forma
 70 |   Contains individual citations with recovered metadata:
 71 |
 72 |   | Column | Description |
 73 | - |--------|-------------|
 74 |   | `parent_paper_title` | Title of the paper containing the citation |
 75 |   | `parent_paper_arxiv_id` | ArXiv ID of the parent paper |
 76 |   | `citation_shorthand` | Citation key (e.g., "NBERw21340") |
@@ -134,7 +149,7 @@ Contains enhanced citations with full paper metadata and content:
134 |   | `citations` | Citations only | `recovered_citations.csv` | 1,630 citations | Citation analysis, relationship mapping |
135 |   | `important_citations` | Enhanced citations with metadata | `important_citations.csv` | 1,050 citations | Advanced citation analysis, paper-citation linking |
136 |
137 | - ##
138 |
139 |   ### Loading from Hugging Face Hub (Recommended)
140 |
@@ -165,7 +180,8 @@ important_citations_df = important_citations.to_pandas()
165 |   # Get a specific paper
166 |   paper = papers_df[papers_df['arxiv_id'] == '2506.02838v1'].iloc[0]
167 |   print(f"Title: {paper['title']}")
168 | - print(f"Related Works
169 |
170 |   # Get all citations for this paper
171 |   paper_citations = citations_df[citations_df['parent_paper_arxiv_id'] == '2506.02838v1']
@@ -202,25 +218,25 @@ print(f"Citations with related work sections: {important_df['related_work_sectio
202 |
203 |   This dataset was created using the [DeepScholarBench](https://github.com/guestrin-lab/deepscholar-bench) pipeline:
204 |
205 | - 1.
206 | - 2.
207 | - 3.
208 | - 4.
209 | - 5.
210 |
211 |   ## Related Resources
212 |
213 | - -
214 | - -
215 | - -
216 |
217 |   ## Leaderboard
218 |
219 |   We maintain a leaderboard to track the performance of various models on the DeepScholarBench evaluation tasks:
220 |
221 | - -
222 | - -
223 | - -
224 |
225 |
226 |   ## Contributing
@@ -233,4 +249,4 @@ This dataset is released under the MIT License. See the [LICENSE](https://github
233 |
234 |   ---
235 |
236 | - **Note**: This dataset is actively maintained and updated. Check the GitHub repository for the latest version and additional resources.
  1 |   ---
  2 |   language:
  3 |   - en
  4 | + license: mit
  5 |   size_categories:
  6 |   - 1K<n<10K
  7 | + task_categories:
  8 | + - text-generation
  9 | + pretty_name: DeepScholarBench Dataset
 10 | + tags:
 11 | + - code
 12 | + - scientific-research
 13 | + - academic-papers
 14 | + - citation-analysis
 15 | + - retrieval-augmented-generation
 16 | + - rag
 17 | + - summarization
 18 | + - llm-evaluation
 19 |   configs:
 20 |   - config_name: papers
 21 | +   data_files: papers_with_related_works.csv
 22 |   - config_name: citations
 23 | +   data_files: recovered_citations.csv
 24 |   - config_name: important_citations
 25 | +   data_files: important_citations.csv
 26 |   - config_name: full
 27 | +   data_files:
 28 | +   - papers_with_related_works.csv
 29 | +   - recovered_citations.csv
 30 | +   - important_citations.csv
 31 |   ---
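As a quick sanity check on the `configs` block above, the config-to-file mapping it declares can be written out directly. This is an offline sketch (file names exactly as in the YAML); resolving them against the Hub at load time is handled by the `datasets` library, not this code:

```python
# Config-name -> data_files mapping, transcribed from the YAML front matter above.
CONFIGS = {
    "papers": ["papers_with_related_works.csv"],
    "citations": ["recovered_citations.csv"],
    "important_citations": ["important_citations.csv"],
}
# The 'full' config simply bundles all three CSVs, in the same order.
CONFIGS["full"] = [f for files in CONFIGS.values() for f in files]

def data_files(config_name: str) -> list[str]:
    """Return the CSV files a given config name resolves to."""
    return CONFIGS[config_name]

print(data_files("full"))
```

With the `datasets` library, the same names are passed as the second argument, e.g. `load_dataset("deepscholar-bench/DeepScholarBench", "papers")`.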
 32 | +
 33 |   # DeepScholarBench Dataset
 34 |
 35 |   [](https://huggingface.co/datasets/deepscholar-bench/DeepScholarBench)
 36 |   [](https://github.com/guestrin-lab/deepscholar-bench)
 37 |   [](https://github.com/guestrin-lab/deepscholar-bench/blob/main/LICENSE)
 38 | + [](https://arxiv.org/abs/2508.20033)
 39 | + [](https://huggingface.co/papers/2508.20033)
 40 | + [](https://guestrin-lab.github.io/deepscholar-leaderboard/leaderboard/deepscholar_bench_leaderboard.html)
 41 |   [](https://guestrin-lab.github.io/deepscholar-leaderboard/leaderboard/deepscholar_bench_leaderboard.html)
 42 |
 43 |   ---
 44 |
 45 |   A comprehensive dataset of academic papers with extracted related works sections and recovered citations, designed for training and evaluating research generation systems.
 46 |
 47 | + ## Abstract
 48 | + The ability to research and synthesize knowledge is central to human expertise and progress. An emerging class of systems promises these exciting capabilities through generative research synthesis, performing retrieval over the live web and synthesizing discovered sources into long-form, cited summaries. However, evaluating such systems remains an open challenge: existing question-answering benchmarks focus on short-form factual responses, while expert-curated datasets risk staleness and data contamination. Both fail to capture the complexity and evolving nature of real research synthesis tasks. In this work, we introduce DeepScholar-bench, a live benchmark and holistic, automated evaluation framework designed to evaluate generative research synthesis. DeepScholar-bench draws queries from recent, high-quality ArXiv papers and focuses on a real research synthesis task: generating the related work sections of a paper by retrieving, synthesizing, and citing prior research. Our evaluation framework holistically assesses performance across three key dimensions, knowledge synthesis, retrieval quality, and verifiability. We also develop DeepScholar-base, a reference pipeline implemented efficiently using the LOTUS API. Using the DeepScholar-bench framework, we perform a systematic evaluation of prior open-source systems, search AI's, OpenAI's DeepResearch, and DeepScholar-base. We find that DeepScholar-base establishes a strong baseline, attaining competitive or higher performance than each other method. We also find that DeepScholar-bench remains far from saturated, with no system exceeding a score of $19\%$ across all metrics. These results underscore the difficulty of DeepScholar-bench, as well as its importance for progress towards AI systems capable of generative research synthesis.
 49 | +
 50 |   ## Dataset Overview
 51 |
 52 |   This dataset contains **63 academic papers** from ArXiv with their related works sections and **1630 recovered citations**, providing a rich resource for research generation and citation analysis tasks.
 85 |   Contains individual citations with recovered metadata:
 86 |
 87 |   | Column | Description |
 88 | + |--------|-------------|
 89 |   | `parent_paper_title` | Title of the paper containing the citation |
 90 |   | `parent_paper_arxiv_id` | ArXiv ID of the parent paper |
 91 |   | `citation_shorthand` | Citation key (e.g., "NBERw21340") |
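To show how the columns above link the `citations` config back to the `papers` config, here is a self-contained sketch on synthetic rows (column names follow this card's tables; the real frames would come from the dataset's CSVs):

```python
import pandas as pd

# Synthetic stand-ins mirroring the documented schemas.
papers_df = pd.DataFrame({
    "arxiv_id": ["2506.02838v1", "2507.00001v1"],
    "title": ["Paper A", "Paper B"],
})
citations_df = pd.DataFrame({
    "parent_paper_arxiv_id": ["2506.02838v1", "2506.02838v1", "2507.00001v1"],
    "citation_shorthand": ["NBERw21340", "smith2020", "doe2021"],
})

# Attach each citation to its parent paper's metadata via the arXiv id.
linked = citations_df.merge(
    papers_df, left_on="parent_paper_arxiv_id", right_on="arxiv_id", how="left"
)
print(linked[["citation_shorthand", "title"]])
```

A left join keeps every citation row even if a parent paper were missing from the papers table.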
149 |   | `citations` | Citations only | `recovered_citations.csv` | 1,630 citations | Citation analysis, relationship mapping |
150 |   | `important_citations` | Enhanced citations with metadata | `important_citations.csv` | 1,050 citations | Advanced citation analysis, paper-citation linking |
151 |
152 | + ## Sample Usage
153 |
154 |   ### Loading from Hugging Face Hub (Recommended)
155 |
180 |   # Get a specific paper
181 |   paper = papers_df[papers_df['arxiv_id'] == '2506.02838v1'].iloc[0]
182 |   print(f"Title: {paper['title']}")
183 | + print(f"Related Works:\n"
184 | +       f"{paper['clean_latex_related_works']}")
185 |
186 |   # Get all citations for this paper
187 |   paper_citations = citations_df[citations_df['parent_paper_arxiv_id'] == '2506.02838v1']
218 |
219 |   This dataset was created using the [DeepScholarBench](https://github.com/guestrin-lab/deepscholar-bench) pipeline:
220 |
221 | + 1. **ArXiv Scraping**: Collected papers by category and date range
222 | + 2. **Author Filtering**: Focused on high-impact researchers (h-index ≥ 25)
223 | + 3. **LaTeX Extraction**: Extracted related works sections from LaTeX source
224 | + 4. **Citation Recovery**: Resolved citations and recovered metadata
225 | + 5. **Quality Filtering**: Ensured data quality and completeness
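Step 3 of the pipeline above (pulling a related-works section out of LaTeX source) can be illustrated with a short regex sketch. This is a toy stand-in, not the pipeline's actual extractor, and real papers vary in how the section is named:

```python
import re

# Toy LaTeX source; the real pipeline processes full ArXiv source files.
latex = r"""
\section{Introduction}
Intro text.
\section{Related Work}
Prior systems retrieve and cite sources \cite{NBERw21340}.
\section{Method}
Details.
"""

# Capture everything between \section{Related Work(s)} and the next \section (or EOF).
match = re.search(r"\\section\{Related Works?\}(.*?)(?=\\section\{|\Z)", latex, re.DOTALL)
related_works = match.group(1).strip() if match else ""
print(related_works)
```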
226 |
227 |   ## Related Resources
228 |
229 | + - **[GitHub Repository](https://github.com/guestrin-lab/deepscholar-bench)**: Full source code and documentation
230 | + - **[Data Pipeline](https://github.com/guestrin-lab/deepscholar-bench/tree/main/data_pipeline)**: Tools for collecting similar datasets
231 | + - **[Evaluation Framework](https://github.com/guestrin-lab/deepscholar-bench/tree/main/eval)**: Framework for evaluating research generation systems
233 |   ## Leaderboard
234 |
235 |   We maintain a leaderboard to track the performance of various models on the DeepScholarBench evaluation tasks:
236 |
237 | + - **[Official Leaderboard](https://guestrin-lab.github.io/deepscholar-leaderboard/leaderboard/deepscholar_bench_leaderboard.html)**: Live rankings of model performance
238 | + - **Evaluation Metrics**: Models are evaluated on relevance, coverage, and citation accuracy as described in the [evaluation guide](https://github.com/guestrin-lab/deepscholar-bench/tree/main/eval)
239 | + - **Submission Process**: Submit your results via this [Form](https://docs.google.com/forms/d/e/1FAIpQLSeug4igDHhVUU3XnrUSeMVRUJFKlHP28i8fcBAu_LHCkqdV1g/viewform)
240 |
241 |
242 |   ## Contributing
249 |
250 |   ---
251 |
252 | + **Note**: This dataset is actively maintained and updated. Check the GitHub repository for the latest version and additional resources.