---
dataset_info:
  features:
    - name: video_path
      dtype: string
    - name: sr
      dtype: int64
    - name: abstract
      dtype: string
    - name: language
      dtype: string
    - name: split
      dtype: string
    - name: duration
      dtype: float64
    - name: conference
      dtype: string
    - name: year
      dtype: string
    - name: transcription
      dtype: string
    - name: title
      dtype: string
    - name: references
      list:
        - name: abstract
          dtype: string
        - name: authors
          sequence: string
        - name: container_title
          dtype: string
        - name: doi
          dtype: string
        - name: editors
          sequence: string
        - name: id
          dtype: string
        - name: issue
          dtype: string
        - name: keywords
          sequence: string
        - name: matched_title
          dtype: string
        - name: meeting
          dtype: string
        - name: pages
          dtype: string
        - name: publisher
          dtype: string
        - name: ref_id
          dtype: string
        - name: sections
          sequence: string
        - name: title
          dtype: string
        - name: topics
          sequence: string
        - name: url
          dtype: string
        - name: volume
          dtype: string
        - name: year
          dtype: string
    - name: audio
      dtype: audio
  splits:
    - name: train
      num_examples: 3971
    - name: dev
      num_examples: 882
    - name: test
      num_examples: 1426
  download_size: 126226823361
  dataset_size: 133070338668.697
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: dev
        path: data/dev-*
      - split: test
        path: data/test-*
license: cc-by-4.0
language:
  - en
pretty_name: Talk2Ref
size_categories:
  - 1K<n<10K
---

# Talk2Ref: A Dataset for Reference Prediction from Scientific Talks

Scientific talks are a growing medium for disseminating research, and automatically identifying relevant literature that grounds or enriches a talk would be highly valuable for researchers and students alike. We introduce Reference Prediction from Talks (RPT), a new task that maps long and unstructured scientific presentations to relevant papers.
To support research on RPT, we present Talk2Ref, the first large-scale dataset of its kind, containing 6,279 talks and 43,429 cited papers (26 per talk on average), where relevance is approximated by the papers cited in the talk’s corresponding source publication.

We establish strong baselines by evaluating state-of-the-art text embedding models in zero-shot retrieval scenarios and propose a dual-encoder architecture trained on Talk2Ref. We further explore strategies for handling long transcripts and training for domain adaptation.
Our results show that fine-tuning on Talk2Ref significantly improves citation prediction performance, demonstrating both the challenges of the task and the effectiveness of our dataset for learning semantic representations from spoken scientific content.

The dataset and trained models are released under an open license to foster future research on integrating spoken scientific communication into citation recommendation systems.


## Dataset Summary

To the best of our knowledge, no existing dataset supports research on Reference Prediction from Talks (RPT). Talk2Ref is the first large-scale resource pairing scientific presentations with their corresponding relevant papers. Relevance is modeled using the citations in each talk’s source publication.

Talk2Ref includes:

- 6,279 scientific talks
- 43,429 cited papers
- ≈26 references per talk
- Spanning 2017–2022
- Covering ACL, NAACL, and EMNLP conferences

This dataset provides a foundation for systematically studying reference prediction from spoken scientific content at scale.
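The splits can be loaded with the 🤗 Datasets library. The sketch below assumes the hub id `s8frbroy/talk2ref` (check the repository page for the exact name) and uses streaming, since the full archive is roughly 126 GB:

```python
def load_talk2ref(split="train", streaming=True):
    """Load one Talk2Ref split via 🤗 Datasets.

    Streaming avoids downloading the full ~126 GB archive up front.
    The hub id below is an assumption -- check the repository page.
    """
    if split not in {"train", "dev", "test"}:
        raise ValueError(f"unknown split: {split!r}")
    from datasets import load_dataset  # pip install datasets
    return load_dataset("s8frbroy/talk2ref", split=split, streaming=streaming)


# Example (fetches data on first use):
# talks = load_talk2ref("dev")
# first = next(iter(talks))
# print(first["title"], first["duration"])
```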


## Dataset Structure

| Split | Conferences | Years | Talks | Avg. Length (min) | Avg. Words | Avg. References | Total References |
|-------|-------------|-------|-------|-------------------|------------|-----------------|------------------|
| Train | ACL, NAACL, EMNLP | 2017–2021 | 3,971 | 12.1 | 1,615 | 26.75 | 31,064 |
| Dev   | ACL | 2022 | 882 | 9.9 | 1,327 | 26.05 | 11,805 |
| Test  | EMNLP, NAACL | 2022 | 1,426 | 9.1 | 1,186 | 25.66 | 16,935 |
| Total | ACL, NAACL, EMNLP | 2017–2022 | 6,279 | 11.1 | 1,478 | 26.4 | 43,429 |

Talks are partitioned chronologically by conference year: earlier years (2017–2021) form the training split, and 2022 talks are held out for development and testing, preventing temporal leakage between training and evaluation.
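The chronological split rule can be sketched as a small helper (illustrative only; the actual assignment ships in each example's `split` field):

```python
def assign_split(conference, year):
    """Chronological split rule mirroring the table above:
    2017-2021 -> train; 2022 ACL -> dev; 2022 EMNLP/NAACL -> test."""
    if year <= 2021:
        return "train"
    return "dev" if conference == "ACL" else "test"
```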


## Dataset Fields

| Field | Type | Description |
|-------|------|-------------|
| `video_path` | string | URL or path to the original conference talk video. |
| `audio` | audio | Audio waveform of the talk segment with sampling rate information. |
| `sr` | int64 | Sampling rate (Hz) of the audio recording. |
| `abstract` | string | Abstract of the corresponding scientific paper. |
| `language` | string | Language of the talk (English). |
| `split` | string | Split name ("train", "dev", or "test"). |
| `duration` | float64 | Duration of the audio in seconds. |
| `conference` | string | Conference name (ACL, NAACL, or EMNLP). |
| `year` | string | Year of the conference. |
| `transcription` | string | Automatic speech recognition (ASR) transcript of the talk. |
| `title` | string | Paper title associated with the talk. |
| `references` | list | Structured metadata for cited papers, including title, authors, abstract, and year. |
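A toy record shaped like one example (field names and dtypes follow the schema; the values are invented) shows how the fields combine, e.g. to estimate speaking rate from the transcript and duration:

```python
# Invented values; only the field names and dtypes match the schema.
record = {
    "transcription": "welcome to our talk on reference prediction " * 10,
    "duration": 60.0,    # seconds
    "sr": 16000,         # Hz
    "conference": "ACL",
    "year": "2021",
    "references": [{"title": "Some cited paper", "year": "2020"}],
}

def words_per_minute(example):
    """Speaking rate from the ASR transcript and the audio duration."""
    n_words = len(example["transcription"].split())
    return n_words / (example["duration"] / 60.0)

print(round(words_per_minute(record)))  # 70 words in one minute
```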

## Data Collection and Processing

1. **Source Acquisition:** Conference talks and associated papers were obtained from the ACL Anthology.
2. **Audio Extraction:** Audio tracks were extracted from the videos and converted to `.wav` format using FFmpeg.
3. **Transcription:** Speech was transcribed using Whisper-Large-v3.
4. **Reference Extraction:** The corresponding paper PDFs were parsed with GROBID to extract all cited references and their metadata.
5. **Abstract Retrieval:** Missing abstracts were filled in by querying CrossRef, arXiv, OpenAlex, and Semantic Scholar.
6. **Filtering:** Invalid or placeholder abstracts were removed.

This process results in a rich dataset linking each talk to its cited papers, including audio, transcript, and metadata.
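The filtering step leaves the exact rules unspecified; a plausible heuristic (purely illustrative, not the authors' actual filter) might look like:

```python
# Hypothetical placeholder strings; the real filter rules are not
# documented in this card.
PLACEHOLDERS = {"n/a", "no abstract available", "abstract not found"}

def is_valid_abstract(text, min_words=5):
    """Reject empty, placeholder, or implausibly short abstracts."""
    if not text:
        return False
    stripped = text.strip().lower()
    if stripped in PLACEHOLDERS:
        return False
    return len(stripped.split()) >= min_words
```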


## Use Cases

Talk2Ref supports research on:

- Reference Prediction from Spoken Content
- Speech-to-Text and Speech-to-Abstract Generation
- Retrieval and Representation Learning

## Licensing

The dataset is distributed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
Users are free to share and adapt the dataset with appropriate attribution.


## Citation

If you use this dataset, please cite the following paper:

@misc{broy2025talk2refdatasetreferenceprediction,
  title        = {Talk2Ref: A Dataset for Reference Prediction from Scientific Talks},
  author       = {Frederik Broy and Maike Züfle and Jan Niehues},
  year         = {2025},
  eprint       = {2510.24478},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL},
  url          = {https://arxiv.org/abs/2510.24478}
}