---
dataset_info:
  features:
  - name: video_path
    dtype: string
  - name: sr
    dtype: int64
  - name: abstract
    dtype: string
  - name: language
    dtype: string
  - name: split
    dtype: string
  - name: duration
    dtype: float64
  - name: conference
    dtype: string
  - name: year
    dtype: string
  - name: transcription
    dtype: string
  - name: title
    dtype: string
  - name: references
    list:
    - name: abstract
      dtype: string
    - name: authors
      sequence: string
    - name: container_title
      dtype: string
    - name: doi
      dtype: string
    - name: editors
      sequence: string
    - name: id
      dtype: string
    - name: issue
      dtype: string
    - name: keywords
      sequence: string
    - name: matched_title
      dtype: string
    - name: meeting
      dtype: string
    - name: pages
      dtype: string
    - name: publisher
      dtype: string
    - name: ref_id
      dtype: string
    - name: sections
      sequence: string
    - name: title
      dtype: string
    - name: topics
      sequence: string
    - name: url
      dtype: string
    - name: volume
      dtype: string
    - name: year
      dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: train
    num_examples: 3971
  - name: dev
    num_examples: 882
  - name: test
    num_examples: 1426
  download_size: 126226823361
  dataset_size: 133070338668.697
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
license: cc-by-4.0
language:
- en
pretty_name: Talk2Ref
size_categories:
- 1K<n<10K
---
# Talk2Ref: A Dataset for Reference Prediction from Scientific Talks
Scientific talks are a growing medium for disseminating research, and automatically identifying relevant literature that
grounds or enriches a talk would be highly valuable for researchers and students alike. We introduce **Reference
Prediction from Talks (RPT)**, a new task that maps long and unstructured scientific presentations to relevant papers.
To support research on RPT, we present **Talk2Ref**, the first large-scale dataset of its kind, containing **6,279 talks**
and **43,429 cited papers** (26 per talk on average), where relevance is approximated by the papers cited in the talk’s
corresponding source publication.
We establish strong baselines by evaluating state-of-the-art text embedding
models in zero-shot retrieval scenarios and propose a **dual-encoder architecture** trained on Talk2Ref. We further
explore strategies for handling long transcripts and training for domain adaptation.
Our results show that fine-tuning on Talk2Ref significantly improves citation prediction performance, demonstrating both
the challenges of the task and the effectiveness of our dataset for learning semantic representations from spoken scientific content.
The dataset and trained models are released under an open license to foster future research on integrating spoken
scientific communication into citation recommendation systems.
---
## Dataset Summary
To the best of our knowledge, **no existing dataset** supports research on Reference Prediction from Talks (RPT).
**Talk2Ref** is the first large-scale resource pairing scientific presentations with their corresponding relevant papers.
Relevance is modeled using the citations in each talk’s source publication.
Talk2Ref includes:
- **6,279 scientific talks**
- **43,429 cited papers**
- **≈26 references per talk**
- Spanning **2017–2022**
- Covering **ACL, NAACL, and EMNLP conferences**
This dataset provides a foundation for systematically studying reference prediction from spoken scientific content at scale.
---
## Dataset Structure
| Split | Conferences | Years | Talks | Avg. Length (min) | Avg. Words | Avg. References | Total References |
|:------|:-------------|:------|------:|------------------:|------------:|----------------:|-----------------:|
| Train | ACL, NAACL, EMNLP | 2017–2021 | 3,971 | 12.1 | 1,615 | 26.75 | 31,064 |
| Dev | ACL | 2022 | 882 | 9.9 | 1,327 | 26.05 | 11,805 |
| Test | EMNLP, NAACL | 2022 | 1,426 | 9.1 | 1,186 | 25.66 | 16,935 |
| **Total** | ACL, NAACL, EMNLP | **2017–2022** | **6,279** | **11.1** | **1,478** | **26.4** | **43,429** |
Talks are partitioned chronologically by conference year: talks from 2017–2021 form the training split,
while talks from 2022 are held out for development and testing.
This ensures that all evaluation talks postdate the training talks, preventing temporal leakage between splits.
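The splits can be loaded with the Hugging Face `datasets` library; a minimal sketch is shown below (the repository ID `ORG/Talk2Ref` is a placeholder for this card's actual Hub ID):

```python
from datasets import load_dataset

# The repository ID is a placeholder; substitute this card's actual Hub ID.
ds = load_dataset("ORG/Talk2Ref")

# Split sizes should match the table above: 3,971 / 882 / 1,426 talks.
for split in ("train", "dev", "test"):
    print(split, ds[split].num_rows)
```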
---
## Dataset Fields
| Field | Type | Description |
|:------|:-----|:-------------|
| `video_path` | string | URL or path to the original conference talk video. |
| `audio` | audio | Audio waveform of the talk segment with sampling rate information. |
| `sr` | int | Sampling rate (Hz) of the audio recording. |
| `abstract` | string | Abstract of the corresponding scientific paper. |
| `language` | string | Language of the talk (English). |
| `split` | string | Split name (“train”, “dev”, or “test”). |
| `duration` | float | Duration of the audio in seconds. |
| `conference` | string | Conference name (ACL, NAACL, or EMNLP). |
| `year` | string | Year of the conference. |
| `transcription` | string | Automatic speech recognition (ASR) transcript of the talk. |
| `title` | string | Paper title associated with the talk. |
| `references` | list | List of structured metadata for cited papers, including title, authors, abstract, year. |
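As a quick sanity check of these fields, the sketch below streams a single example; streaming avoids downloading the full ~126 GB archive up front (the repository ID is again a placeholder):

```python
from datasets import load_dataset

# Streaming mode fetches examples lazily instead of downloading everything.
ds = load_dataset("ORG/Talk2Ref", split="train", streaming=True)
example = next(iter(ds))

print(example["title"], "|", example["conference"], example["year"])
print(example["duration"], "s at", example["sr"], "Hz")
print(example["transcription"][:200])        # start of the ASR transcript
print(len(example["references"]), "cited papers")
print(example["references"][0]["title"])     # metadata of the first reference
audio = example["audio"]                     # dict with "array" and "sampling_rate"
```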
---
## Data Collection and Processing
1. **Source Acquisition:**
Conference talks and associated papers were obtained from the **ACL Anthology**.
2. **Audio Extraction:**
Audio tracks were extracted from videos and converted to `.wav` format using FFmpeg.
3. **Transcription:**
Speech was transcribed using **Whisper-Large-v3**.
4. **Reference Extraction:**
The corresponding paper PDFs were parsed with **GROBID**, extracting all cited references and metadata.
5. **Abstract Retrieval:**
Missing abstracts were filled by querying **CrossRef**, **arXiv**, **OpenAlex**, and **Semantic Scholar**.
6. **Filtering:**
Invalid or placeholder abstracts were removed.
This process results in a rich dataset linking each talk to its cited papers, including audio, transcript, and metadata.
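For illustration, here is a minimal sketch of steps 2–3 on a single local video file. It uses the tools named above (FFmpeg, Whisper-Large-v3 via the `transformers` pipeline), but it is not the authors' exact pipeline, and the file names are hypothetical:

```python
import subprocess
from transformers import pipeline

# Step 2: extract the audio track to 16 kHz mono WAV with FFmpeg.
subprocess.run(
    ["ffmpeg", "-i", "talk.mp4", "-ac", "1", "-ar", "16000", "talk.wav"],
    check=True,
)

# Step 3: transcribe with Whisper-Large-v3, chunking for long-form audio.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    chunk_length_s=30,
)
transcript = asr("talk.wav")["text"]
print(transcript[:200])
```

Step 5 can likewise be sketched against the public CrossRef REST API (the query title is a hypothetical example; the authors' retrieval logic may differ):

```python
import requests

# Look up a missing abstract by bibliographic query via CrossRef.
resp = requests.get(
    "https://api.crossref.org/works",
    params={"query.bibliographic": "Attention Is All You Need", "rows": 1},
    timeout=30,
)
items = resp.json()["message"]["items"]
if items and "abstract" in items[0]:
    print(items[0]["abstract"])  # JATS-formatted abstract, when CrossRef has one
else:
    print("No abstract found; fall back to arXiv / OpenAlex / Semantic Scholar.")
```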
---
## Use Cases
Talk2Ref supports research on:
- **Reference Prediction from Spoken Content**
- **Speech-to-Text and Speech-to-Abstract Generation**
- **Retrieval and Representation Learning** (a zero-shot retrieval sketch follows below)
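As a starting point for the retrieval use case, the sketch below ranks candidate paper abstracts against a talk transcript with an off-the-shelf embedding model. The model choice and toy inputs are illustrative assumptions, not the paper's trained dual encoder:

```python
from sentence_transformers import SentenceTransformer, util

# Off-the-shelf embedding model (an assumption; the paper trains its own
# dual encoder on Talk2Ref).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Toy inputs: a talk transcript and candidate paper abstracts.
transcript = "We present a new approach to predicting references from talks ..."
candidate_abstracts = [
    "We propose a dual-encoder model for dense passage retrieval ...",
    "This paper surveys the history of medieval agriculture ...",
]

# Embed both sides and rank candidates by cosine similarity.
talk_emb = model.encode(transcript, convert_to_tensor=True)
abstract_embs = model.encode(candidate_abstracts, convert_to_tensor=True)
scores = util.cos_sim(talk_emb, abstract_embs)[0]
for rank, idx in enumerate(scores.argsort(descending=True).tolist()):
    print(rank + 1, float(scores[idx]), candidate_abstracts[idx][:60])
```

Note that transcripts average roughly 1,500 words, well beyond typical embedding-model context windows, so truncation or chunking strategies (as explored in the paper) matter in practice.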
---
## Licensing
The dataset is distributed under the **Creative Commons Attribution 4.0 International License (CC BY 4.0)**.
Users are free to share and adapt the dataset with appropriate attribution.
---
## Citation
If you use this dataset, please cite the following paper:
```bibtex
@misc{broy2025talk2refdatasetreferenceprediction,
title = {Talk2Ref: A Dataset for Reference Prediction from Scientific Talks},
author = {Frederik Broy and Maike Züfle and Jan Niehues},
year = {2025},
eprint = {2510.24478},
archivePrefix= {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2510.24478}
}
```