---
dataset_info:
  features:
  - name: title
    dtype: string
  - name: keywords
    sequence: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 1851098097.8341584
    num_examples: 145064
  - name: test
    num_bytes: 78063099.39124106
    num_examples: 6653
  download_size: 626249553
  dataset_size: 1929161197.2253995
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# structured_paper_summarization

A 151 k-example dataset of chat-style prompt → structured-abstract pairs, built from ~19 000 research papers across the business, management, information-systems and social-science domains. Each example shows a full paper body being summarised into a five-section Emerald-style structured abstract (Purpose, Design/methodology/approach, Findings, Practical implications, Originality/value).
## Why this dataset?

Large language models (LLMs) frequently struggle to:
- Condense long scientific prose into factual, concise summaries.
- Follow rigid output structures (e.g. subsection headings).
This dataset targets both challenges simultaneously, enabling fine‑tuning or instruction‑tuning of LLMs that must output structured scholarly abstracts.
## At a glance

| Split | Rows | Size (compressed) |
|---|---|---|
| train | 145 064 | 626 MB |
| test | 6 653 | 29 MB |
| Total | 151 717 | ≈655 MB |

Counts taken from the Hugging Face viewer on 2025-04-29.
## Data schema

```text
{
  title:    string        # Paper title
  keywords: list[string]  # Author-supplied keywords (0-23)
  messages: list[dict]    # ChatML-style conversation, length ≥ 2
}
```
### `messages` format

Each list contains alternating dictionaries with:

- `role`: either `"user"` or `"assistant"`.
- `content`: UTF-8 text.
Typical pattern (2 items):

```json
[
  {
    "role": "user",
    "content": "Summarize the following paper into structured abstract.\n\n<full paper text>"
  },
  {
    "role": "assistant",
    "content": "Purpose: …\nDesign/methodology/approach: …\nFindings: …\nPractical implications: …\nOriginality/value: …"
  }
]
```
Some papers are longer and may be truncated to ~8 k tokens.
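Because every assistant reply uses the same five headings, it can be split back into named fields, e.g. for section-level evaluation. A minimal sketch (the `parse_abstract` helper is ours, not shipped with the dataset):

```python
import re

SECTIONS = [
    "Purpose",
    "Design/methodology/approach",
    "Findings",
    "Practical implications",
    "Originality/value",
]

def parse_abstract(reply: str) -> dict:
    """Split an assistant reply into its five Emerald-style sections."""
    pattern = r"(?m)^({}):\s*".format("|".join(re.escape(s) for s in SECTIONS))
    parts = re.split(pattern, reply)
    # re.split with a capture group yields [prefix, heading, body, heading, body, ...]
    return {heading: body.strip() for heading, body in zip(parts[1::2], parts[2::2])}
```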
## Loading the data

```python
from datasets import load_dataset

ds_train = load_dataset(
    "Neooooo/structured_paper_summarization", split="train"
)
print(ds_train[0]["messages"][1]["content"][:500])
```
The dataset is stored as Apache Parquet with streaming support; with `streaming=True`, iteration starts in ~5 s and requires no local download.
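For example, a minimal streaming sketch (standard `datasets` API only):

```python
from datasets import load_dataset

# Stream records directly from the Hub; the dataset is not cached locally.
ds_stream = load_dataset(
    "Neooooo/structured_paper_summarization",
    split="train",
    streaming=True,
)

# Peek at the first few titles without downloading the full ~626 MB.
for example in ds_stream.take(3):
    print(example["title"])
```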
## Suggested use-cases

- Instruction-tuning chat LLMs for long-document summarisation (see the sketch after this list).
- Research on controlled text generation and output formatting.
- Training retrieval‑augmented systems that must cite sections of the source paper.
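For the first use-case, each row drops straight into `transformers` chat templating. A minimal preprocessing sketch, assuming a Qwen2.5 tokenizer (the model is our arbitrary choice; any chat model with a template works):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("Neooooo/structured_paper_summarization", split="train")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

def to_text(example):
    # Render the user/assistant turns with the model's own chat template.
    return {
        "text": tokenizer.apply_chat_template(example["messages"], tokenize=False)
    }

# Produce a plain-text column ready for a standard causal-LM trainer.
ds_text = ds.map(to_text)
print(ds_text[0]["text"][:300])
```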
## Source & construction
- Full‑text articles were collected via institutional access to the Emerald Insight corpus (open‑access + subscription).
- The canonical structured abstract supplied by each journal was extracted as ground truth.
- The article’s main body was embedded into a prompt of the form shown above (a reconstruction sketch closes this section).
- Data were converted with the Hugging Face `datasets` library and auto-exported to Parquet.
No additional manual cleaning was performed; typos and OCR artefacts may persist.
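To replicate the setup at inference time, the prompt wrapper can be reconstructed from the conversation example above. A minimal sketch (the helper name is ours):

```python
def build_prompt(paper_body: str) -> str:
    """Wrap a paper's body text in the instruction used by every example."""
    return "Summarize the following paper into structured abstract.\n\n" + paper_body
```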
## Licensing & acceptable use

The article texts remain the copyright of their original publishers/authors and are redistributed here solely for non-commercial research. By using this dataset you agree to:
- Not redistribute the raw paper texts.
- Cite the original articles in any derivative work.
- Abide by Emerald’s usage policy and your local copyright laws.
The metadata & structured abstracts are released under CC BY‑NC 4.0. For commercial licensing, please contact the original rights‑holders.
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{hu_2025_structured_prompts,
  author = {Xingyu Hu},
  title  = {structured_paper_summarization},
  year   = 2025,
  url    = {https://huggingface.co/datasets/Neooooo/structured_paper_summarization},
  note   = {Version 1.0}
}
```
## Contributions
Feel free to open PRs to:
- Fix metadata errors.
- Provide additional splits (validation, domain‑specific subsets).
- Add scripts for evaluation or preprocessing.
Happy summarising!