Datasets · Modalities: Audio, Text · Formats: parquet · Languages: English · Libraries: Datasets, Dask

s8frbroy committed · verified · Commit 8a4437f · Parent(s): 77e711a

Update README.md (Readme first version)

Files changed (1): README.md (+117 −3)

README.md CHANGED
@@ -65,13 +65,10 @@ dataset_info:
       dtype: audio
   splits:
   - name: train
-    num_bytes: 91208954389.009
     num_examples: 3971
   - name: dev
-    num_bytes: 16869854498.0
     num_examples: 882
   - name: test
-    num_bytes: 24991529781.688
     num_examples: 1426
   download_size: 126226823361
   dataset_size: 133070338668.697
@@ -84,4 +81,121 @@ configs:
   - split: dev
     path: data/dev-*
   - split: test
     path: data/test-*
+license: cc-by-4.0
+language:
+- en
+pretty_name: Talk2Ref
+size_categories:
+- 1K<n<10K
 ---

# Talk2Ref: A Dataset for Reference Prediction from Scientific Talks

Scientific talks are a growing medium for disseminating research, and automatically identifying relevant literature that grounds or enriches a talk would be highly valuable for researchers and students alike. We introduce **Reference Prediction from Talks (RPT)**, a new task that maps long, unstructured scientific presentations to relevant papers. To support research on RPT, we present **Talk2Ref**, the first large-scale dataset of its kind, containing **6,279 talks** and **43,429 cited papers** (26 per talk on average), where relevance is approximated by the papers cited in the talk’s corresponding source publication.

We establish strong baselines by evaluating state-of-the-art text embedding models in zero-shot retrieval scenarios, and we propose a **dual-encoder architecture** trained on Talk2Ref. We further explore strategies for handling long transcripts and for domain adaptation. Our results show that fine-tuning on Talk2Ref significantly improves citation prediction performance, demonstrating both the difficulty of the task and the effectiveness of our dataset for learning semantic representations from spoken scientific content.

The dataset and trained models are released under an open license to foster future research on integrating spoken scientific communication into citation recommendation systems.

---

## Dataset Summary

To the best of our knowledge, no existing dataset previously supported research on Reference Prediction from Talks (RPT). **Talk2Ref** is the first large-scale resource pairing scientific presentations with their corresponding relevant papers, where relevance is modeled using the citations in each talk’s source publication.

Talk2Ref includes:
- **6,279 scientific talks**
- **43,429 cited papers**
- **≈26 references per talk**
- talks spanning **2017–2022**
- coverage of the **ACL, NAACL, and EMNLP** conferences

This dataset provides a foundation for systematically studying reference prediction from spoken scientific content at scale.

---

## Dataset Structure

| Split | Conferences | Years | Talks | Avg. Length (min) | Avg. Words | Avg. References | Total References |
|:------|:------------|:------|------:|------------------:|-----------:|----------------:|-----------------:|
| Train | ACL, NAACL, EMNLP | 2017–2021 | 3,971 | 12.1 | 1,615 | 26.75 | 31,064 |
| Dev | ACL | 2022 | 882 | 9.9 | 1,327 | 26.05 | 11,805 |
| Test | EMNLP, NAACL | 2022 | 1,426 | 9.1 | 1,186 | 25.66 | 16,935 |
| **Total** | ACL, NAACL, EMNLP | **2017–2022** | **6,279** | **11.1** | **1,478** | **26.4** | **43,429** |

Talks are partitioned chronologically by conference year: earlier years (2017–2021) form the training split, and later years (2022) are used for development and testing, ensuring **temporal consistency** between splits.

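As a quick sanity check, the bottom row of the table follows from the per-split figures. A minimal sketch, using only the numbers quoted above:

```python
# Per-split talk counts and average lengths (minutes), taken from the table.
splits = {
    "train": {"talks": 3971, "avg_min": 12.1},
    "dev":   {"talks": 882,  "avg_min": 9.9},
    "test":  {"talks": 1426, "avg_min": 9.1},
}

total_talks = sum(s["talks"] for s in splits.values())

# The overall average length is the talk-weighted mean of the split averages.
weighted_avg_min = sum(
    s["talks"] * s["avg_min"] for s in splits.values()
) / total_talks

print(total_talks)                 # 6279, matching the Total row
print(round(weighted_avg_min, 1))  # 11.1, matching the Total row
```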
---

## Dataset Fields

| Field | Type | Description |
|:------|:-----|:------------|
| `video_path` | string | URL or path to the original conference talk video. |
| `audio` | audio | Audio waveform of the talk segment, with sampling-rate information. |
| `sr` | int | Sampling rate (Hz) of the audio recording. |
| `abstract` | string | Abstract of the corresponding scientific paper. |
| `language` | string | Language of the talk (English). |
| `split` | string | Split name ("train", "dev", or "test"). |
| `duration` | float | Duration of the audio in seconds. |
| `conference` | string | Conference name (ACL, NAACL, or EMNLP). |
| `year` | string | Year of the conference. |
| `transcription` | string | Automatic speech recognition (ASR) transcript of the talk. |
| `title` | string | Title of the paper associated with the talk. |
| `references` | list | Structured metadata for each cited paper, including title, authors, abstract, and year. |

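A record can be sanity-checked against this schema. The sketch below is illustrative only: the field names come from the table above, the toy record is invented, and the layout of the decoded `audio` dict (`array` plus `sampling_rate`) follows the usual Hugging Face `Audio` feature convention rather than anything this card guarantees:

```python
def check_record(rec: dict) -> None:
    """Check a few internal consistencies of one Talk2Ref-style record."""
    assert rec["split"] in {"train", "dev", "test"}
    assert rec["conference"] in {"ACL", "NAACL", "EMNLP"}
    # Duration (seconds) should match sample count divided by sampling rate.
    n_samples = len(rec["audio"]["array"])
    assert abs(rec["duration"] - n_samples / rec["sr"]) < 0.1
    # Each reference carries structured metadata for a cited paper.
    for ref in rec["references"]:
        assert "title" in ref

# Toy record for illustration (not real data).
record = {
    "split": "train",
    "conference": "ACL",
    "sr": 16000,
    "duration": 2.0,
    "audio": {"array": [0.0] * 32000, "sampling_rate": 16000},
    "references": [{"title": "Attention Is All You Need"}],
}
check_record(record)  # passes silently when the record is consistent
```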
---

## Data Collection and Processing

1. **Source acquisition:** conference talks and their associated papers were obtained from the **ACL Anthology**.

2. **Audio extraction:** audio tracks were extracted from the videos and converted to `.wav` format using FFmpeg.

3. **Transcription:** speech was transcribed with **Whisper-Large-v3**.

4. **Reference extraction:** the corresponding paper PDFs were parsed with **GROBID**, extracting all cited references and their metadata.

5. **Abstract retrieval:** missing abstracts were filled in by querying **CrossRef**, **arXiv**, **OpenAlex**, and **Semantic Scholar**.

6. **Filtering:** invalid or placeholder abstracts were removed.

This process results in a rich dataset linking each talk to its cited papers, including audio, transcript, and metadata.

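The audio-extraction step can be sketched as a small helper that assembles an FFmpeg invocation. The 16 kHz mono target is an assumption (a common choice for Whisper input), not something the card specifies:

```python
def ffmpeg_extract_cmd(video_path: str, wav_path: str,
                       sample_rate: int = 16000) -> list[str]:
    """Build an FFmpeg command that drops the video stream (-vn) and
    writes mono (-ac 1) WAV audio at the given sampling rate (-ar)."""
    return [
        "ffmpeg", "-i", video_path,
        "-vn", "-ac", "1", "-ar", str(sample_rate),
        wav_path,
    ]

cmd = ffmpeg_extract_cmd("talk.mp4", "talk.wav")
# e.g. run with: subprocess.run(cmd, check=True)
```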
---

## Use Cases

Talk2Ref supports research on:
- **Reference Prediction from Spoken Content**
- **Speech-to-Text and Speech-to-Abstract Generation**
- **Retrieval and Representation Learning**

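The zero-shot retrieval setting mentioned in the summary can be illustrated schematically: embed the talk transcript and all candidate paper abstracts, then rank candidates by cosine similarity. The vectors below are toy stand-ins for real embedding-model outputs, and the function names are hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_candidates(talk_vec, candidates):
    """Return candidate ids sorted by descending similarity to the talk."""
    return sorted(candidates,
                  key=lambda cid: cosine(talk_vec, candidates[cid]),
                  reverse=True)

# Toy embeddings standing in for encoder outputs.
talk = [0.9, 0.1, 0.0]
papers = {
    "paper_a": [0.8, 0.2, 0.1],  # topically close to the talk
    "paper_b": [0.0, 1.0, 0.0],  # unrelated topic
}
print(rank_candidates(talk, papers))  # ['paper_a', 'paper_b']
```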
---

## Licensing

The dataset is distributed under the **Creative Commons Attribution 4.0 International License (CC BY 4.0)**. Users are free to share and adapt the dataset with appropriate attribution.

---