---
dataset_info:
  features:
  - name: title
    dtype: string
  - name: keywords
    sequence: string
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: train
    num_bytes: 1851098097.8341584
    num_examples: 145064
  - name: test
    num_bytes: 78063099.39124106
    num_examples: 6653
  download_size: 626249553
  dataset_size: 1929161197.2253995
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# structured_paper_summarization
A **151 k‑example** dataset of chat‑style *prompt → structured abstract* pairs, built from roughly 19 000 research papers across business, management, information‑systems, and social‑science domains. Each example pairs a full paper body with its five‑section Emerald‑style structured abstract (Purpose, Design/methodology/approach, Findings, Practical implications, Originality/value).
---
## Why this dataset?
Large‑language models (LLMs) frequently struggle to:
1. **Condense long scientific prose** into factual, concise summaries.
2. **Follow rigid output structures** (e.g. subsection headings).
This dataset targets both challenges simultaneously, enabling fine‑tuning or instruction‑tuning of LLMs that must output *structured* scholarly abstracts.
---
## At a glance
| Split | Rows | Size (compressed) |
|-------|------|-------------------|
| train | **145 064** | 626 MB |
| test | **6 653** | 29 MB |
| **Total** | **151 717** | ≈655 MB |
<sup>Row counts as of 2025‑04‑29; they match the dataset metadata above.</sup>
---
## Data schema
```text
{
title: string # Paper title
keywords: list[string] # Author‑supplied keywords (0‑23)
messages: list[dict] length ≥ 2 # ChatML‑style conversation
}
```
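As a quick sanity check, a record can be validated against this schema. A minimal sketch (the sample record below is hypothetical, for illustration only):

```python
def validate_example(ex: dict) -> bool:
    """Check that a record matches the schema above: string title,
    list of string keywords, and a messages list of role/content dicts."""
    if not isinstance(ex.get("title"), str):
        return False
    if not all(isinstance(k, str) for k in ex.get("keywords", [])):
        return False
    msgs = ex.get("messages")
    if not isinstance(msgs, list) or len(msgs) < 2:
        return False
    return all(
        isinstance(m, dict)
        and isinstance(m.get("role"), str)
        and isinstance(m.get("content"), str)
        for m in msgs
    )

# Hypothetical record for illustration
sample = {
    "title": "A study of X",
    "keywords": ["innovation", "SMEs"],
    "messages": [
        {"role": "user", "content": "Summarize the following paper into structured abstract.\n\n..."},
        {"role": "assistant", "content": "Purpose: ...\nFindings: ..."},
    ],
}
print(validate_example(sample))  # → True
```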
### `messages` format
Each list contains alternating dictionaries with:
- `role`: either `"user"` or `"assistant"`.
- `content`: UTF‑8 text.
Typical pattern (2 items):
```jsonc
[
{
"role": "user",
"content": "Summarize the following paper into structured abstract.\n\n<full paper text>"
},
{
"role": "assistant",
"content": "Purpose: …\nDesign/methodology/approach: …\nFindings: …\nPractical implications: …\nOriginality/value: …"
}
]
```
Longer papers may be truncated to roughly 8 k tokens.
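For fine‑tuning or evaluation, the two‑message pattern above maps naturally onto a (prompt, reference) pair. A minimal helper sketch (the function name is an assumption, not part of the dataset):

```python
def to_pair(messages: list[dict]) -> tuple[str, str]:
    """Extract (user prompt, reference abstract) from a two-turn conversation."""
    user = next(m["content"] for m in messages if m["role"] == "user")
    assistant = next(m["content"] for m in messages if m["role"] == "assistant")
    return user, assistant

msgs = [
    {"role": "user", "content": "Summarize the following paper into structured abstract.\n\n<full paper text>"},
    {"role": "assistant", "content": "Purpose: ...\nFindings: ..."},
]
prompt, reference = to_pair(msgs)
print(reference.startswith("Purpose:"))  # → True
```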
---
## Loading the data
```python
from datasets import load_dataset
ds_train = load_dataset(
"Neooooo/structured_paper_summarization", split="train"
)
print(ds_train[0]["messages"][1]["content"][:500])
```
The dataset is stored as Apache **Parquet** and supports streaming: pass `streaming=True` to `load_dataset` to start iterating without downloading the full dataset first.
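For evaluation, the assistant message can be split back into its five sections. A minimal parser sketch using only the section headers shown above (the helper name is an assumption):

```python
import re

# Section headers of the Emerald-style structured abstract
SECTIONS = [
    "Purpose",
    "Design/methodology/approach",
    "Findings",
    "Practical implications",
    "Originality/value",
]

def parse_abstract(text: str) -> dict:
    """Split a structured abstract into {section: body} keyed on known headers."""
    pattern = "|".join(re.escape(s) for s in SECTIONS)
    out = {}
    # Each section runs from its header to the next header (or end of text)
    for m in re.finditer(
        rf"(?m)^({pattern}):\s*(.*?)(?=^(?:{pattern}):|\Z)", text, re.S
    ):
        out[m.group(1)] = m.group(2).strip()
    return out

abstract = "Purpose: Study X.\nFindings: Y holds.\nOriginality/value: First study of X."
parsed = parse_abstract(abstract)
print(sorted(parsed))  # → ['Findings', 'Originality/value', 'Purpose']
```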
---
## Suggested use‑cases
* **Instruction‑tuning** chat LLMs for long‑document summarisation.
* Research on **controlled text generation** and output formatting.
* Training **retrieval‑augmented systems** that must cite sections of the source paper.
---
## Source & construction
1. Full‑text articles were collected via institutional access to the *Emerald Insight* corpus (open‑access + subscription).
2. The canonical *structured abstract* supplied by each journal was extracted as ground truth.
3. The article’s main body was embedded into a prompt of the form shown above.
4. Data were converted with the Hugging Face `datasets` library and exported automatically to Parquet.
No additional manual cleaning was performed; typos and OCR artefacts may persist.
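Step 3 can be sketched as follows. The prompt wording mirrors the example in the `messages` section; the helper name and the character-based stand-in for the ~8 k-token cap are assumptions:

```python
PROMPT_TEMPLATE = "Summarize the following paper into structured abstract.\n\n{body}"
MAX_CHARS = 32_000  # rough character stand-in for the ~8k-token cap (assumption)

def build_example(title: str, keywords: list[str], body: str, abstract: str) -> dict:
    """Assemble one chat-style record from a paper's parts."""
    return {
        "title": title,
        "keywords": keywords,
        "messages": [
            {"role": "user", "content": PROMPT_TEMPLATE.format(body=body[:MAX_CHARS])},
            {"role": "assistant", "content": abstract},
        ],
    }

ex = build_example("A study of X", ["innovation"], "Full body text...", "Purpose: ...")
print(ex["messages"][0]["role"])  # → user
```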
---
## Licensing & acceptable use
The article texts are **copyright their original publishers/authors** and are redistributed here *solely for non‑commercial research*. By using this dataset you agree to:
- **Not** redistribute the raw paper texts.
- Cite the original articles in any derivative work.
- Abide by Emerald’s usage policy and your local copyright laws.
The **metadata & structured abstracts** are released under **CC BY‑NC 4.0**. For commercial licensing, please contact the original rights‑holders.
---
## Citation
If you use this dataset, please cite:
```text
@dataset{hu_2025_structured_prompts,
author = {Xingyu Hu},
title = {structured_paper_summarization},
year = 2025,
url = {https://huggingface.co/datasets/Neooooo/structured_paper_summarization},
note = {Version 1.0}
}
```
---
## Contributions
Feel free to open PRs to:
- Fix metadata errors.
- Provide additional splits (validation, domain‑specific subsets).
- Add scripts for evaluation or preprocessing.
---
*Happy summarising!*