---
dataset_info:
  features:
    - name: speaker_id
      dtype: string
    - name: gender
      dtype: string
    - name: speaker_number
      dtype: int64
    - name: utterance_type
      dtype: string
    - name: utterance_type_description
      dtype: string
    - name: utterance_number
      dtype: int64
    - name: utterance_id
      dtype: string
    - name: lar_file_path
      dtype: string
    - name: mic_file_path
      dtype: string
    - name: ref_file_path
      dtype: string
    - name: lar_sample_rate
      dtype: int64
    - name: mic_sample_rate
      dtype: int64
    - name: lar_duration
      dtype: float64
    - name: mic_duration
      dtype: float64
    - name: lar_audio
      dtype: audio
    - name: mic_audio
      dtype: audio
    - name: f0_data
      sequence:
        sequence: float64
  splits:
    - name: train
      num_bytes: 7254772452.868
      num_examples: 4718
  download_size: 5886303253
  dataset_size: 7254772452.868
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - audio-classification
language:
  - en
tags:
  - pitch
  - audio
  - voice
pretty_name: Pitch Tracking Database from Graz University of Technology
size_categories:
  - 1K<n<10K
---
# PTDB-TUG: Pitch Tracking Database from Graz University of Technology
The Pitch Tracking Database from Graz University of Technology (PTDB-TUG) is a speech database for pitch tracking that provides microphone and laryngograph signals from 20 native English speakers, together with extracted pitch trajectories as a reference. The dataset contains 4,718 recordings of phonetically rich sentences from the TIMIT corpus, providing both laryngograph (LAR) and microphone (MIC) recordings along with F0 reference data extracted with the RAPT algorithm.
## Dataset Details

### Dataset Description
The PTDB-TUG database is a comprehensive pitch tracking corpus designed for evaluating pitch tracking algorithms. It contains clean speech recordings with simultaneous laryngograph signals, which provide accurate ground truth pitch information. The database uses 2,342 phonetically rich sentences from the TIMIT corpus, recorded by 20 native English speakers (10 male, 10 female) in a professional recording studio environment.
- **Curated by:** Signal Processing and Speech Communication Laboratory (SPSC), Graz University of Technology
- **Original Authors:** Gregor Pirker, Michael Wohlmayr, Stefan Petrik, Franz Pernkopf
- **Shared by:** Kim Gilkey (dataset conversion)
- **Language(s):** English
- **License:** Open Database License (ODbL) v1.0
### Dataset Sources
- **Original Repository:** https://www.spsc.tugraz.at/databases-and-tools/ptdb-tug-pitch-tracking-database-from-graz-university-of-technology.html
- **Paper:** "A pitch tracking corpus with evaluation on multipitch tracking scenario" (Interspeech 2011)
- **Direct Download:** http://www2.spsc.tugraz.at/databases/PTDB-TUG
## Uses

### Direct Use
This dataset is specifically designed for:
- Pitch tracking algorithm development and evaluation: Primary use case for developing and benchmarking F0 estimation algorithms
- Speech signal processing research: Studying vocal fold vibration patterns using simultaneous laryngograph and microphone signals
- Voice quality analysis: Analyzing voice characteristics using clean speech with ground truth pitch information
- Multi-modal speech analysis: Leveraging both acoustic and physiological (laryngograph) signals
- Phonetic research: Using phonetically balanced TIMIT sentences for comprehensive speech analysis
### Out-of-Scope Use
- Real-time applications: This is a research dataset, not optimized for real-time pitch tracking
- Noisy speech scenarios: Recordings are clean studio quality and may not generalize to noisy environments
- Non-English languages: Dataset contains only English speech from native speakers
- Speaker identification: Limited to 20 speakers, insufficient for robust speaker recognition systems
- Emotional speech analysis: Recordings are neutral read speech, not spontaneous or emotional speech
## Dataset Structure

The dataset contains 4,718 utterances with the following features:

### Data Fields
- `speaker_id` (string): Speaker identifier (F01-F10 for females, M01-M10 for males)
- `gender` (string): Speaker gender ("female" or "male")
- `speaker_number` (int): Numeric speaker identifier (1-10)
- `utterance_type` (string): Type of utterance from the TIMIT corpus
  - "sa": dialect sentence (2 per speaker)
  - "si": phonetically-balanced sentence (~189 per speaker)
  - "sx": phonetically-compact sentence (~45 per speaker)
- `utterance_type_description` (string): Human-readable description of the utterance type
- `utterance_number` (int): Numeric identifier for the specific utterance
- `utterance_id` (string): Combined utterance type and number (e.g., "si548")
- `lar_file_path` (string): Path to the laryngograph audio file
- `mic_file_path` (string): Path to the microphone audio file
- `ref_file_path` (string): Path to the reference F0 data file
- `lar_sample_rate` (int): Sample rate of the laryngograph recording (48 kHz)
- `mic_sample_rate` (int): Sample rate of the microphone recording (48 kHz)
- `lar_duration` (float): Duration of the laryngograph recording in seconds
- `mic_duration` (float): Duration of the microphone recording in seconds
- `lar_audio` (Audio): Laryngograph audio data with automatic decoding
- `mic_audio` (Audio): Microphone audio data with automatic decoding
- `f0_data` (list): Reference F0 data from the RAPT algorithm, with four values per frame:
  - Pitch estimate (Hz)
  - Probability of voicing
  - Local root mean square (RMS) estimate
  - Peak-normalized cross-correlation value
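As a minimal sketch of working with `f0_data`, the helper below extracts a time-stamped pitch contour from the voiced frames of one utterance. It assumes each inner sequence is one frame's four values in the order listed above, and it assumes a 10 ms frame hop, which is a common RAPT setting but is not stated on this card; verify both against the actual data before relying on them.

```python
# Sketch: extract a voiced pitch contour from one utterance's `f0_data`.
# ASSUMPTIONS (not stated on this card): each inner sequence is one frame
# [pitch_hz, prob_voicing, rms, corr], and frames advance by 10 ms.

FRAME_HOP_S = 0.010  # assumed RAPT frame shift

def voiced_pitch_contour(f0_data, voicing_threshold=0.5):
    """Return (time_s, pitch_hz) pairs for frames judged voiced."""
    contour = []
    for i, (pitch_hz, prob_voicing, rms, corr) in enumerate(f0_data):
        if prob_voicing >= voicing_threshold and pitch_hz > 0:
            contour.append((i * FRAME_HOP_S, pitch_hz))
    return contour

# Toy frames: voiced, unvoiced, voiced
frames = [
    [210.0, 0.9, 0.4, 0.8],
    [0.0,   0.1, 0.1, 0.2],
    [195.0, 0.8, 0.5, 0.7],
]
print(voiced_pitch_contour(frames))  # two (time, pitch) pairs
```

The voicing threshold of 0.5 is an arbitrary illustrative choice; the probability-of-voicing column lets you trade off false voiced detections against missed frames.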
### Data Splits
Currently contains only a training split with all 4,718 utterances. The original dataset does not define standard train/validation/test splits.
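Because no official splits exist, a common practice for corpora this small is a speaker-disjoint partition, so that evaluation speakers are never seen in training. The sketch below partitions examples by the speaker IDs documented above; the choice of held-out speakers is arbitrary and illustrative, not an official split.

```python
# Sketch: a speaker-disjoint train/test partition, since the dataset ships
# only a single "train" split. TEST_SPEAKERS is an arbitrary illustrative
# choice, not an official or recommended split.

ALL_SPEAKERS = [f"F{i:02d}" for i in range(1, 11)] + [f"M{i:02d}" for i in range(1, 11)]
TEST_SPEAKERS = {"F09", "F10", "M09", "M10"}

def split_by_speaker(examples):
    """Partition examples (dicts with a 'speaker_id' key) by held-out speakers."""
    train = [ex for ex in examples if ex["speaker_id"] not in TEST_SPEAKERS]
    test = [ex for ex in examples if ex["speaker_id"] in TEST_SPEAKERS]
    return train, test

toy = [{"speaker_id": "F01"}, {"speaker_id": "M10"}]
train_set, test_set = split_by_speaker(toy)
```

Splitting by utterance instead of by speaker would leak speaker identity across splits and inflate apparent performance, which matters with only 20 speakers.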
## Dataset Creation

### Curation Rationale
The PTDB-TUG database was created to address the need for a comprehensive pitch tracking evaluation corpus with high-quality ground truth data. Traditional pitch tracking evaluation relies on synthetic signals or manual annotation, both of which have limitations. By using laryngograph signals that directly measure vocal fold vibration, this dataset provides more accurate and objective ground truth for pitch tracking algorithm development and evaluation.
### Source Data

#### Data Collection and Processing

**Recording Setup:**
- Location: Recording Studio of the Institute of Broadband Communications, Graz University of Technology
- Equipment: Professional recording setup with simultaneous microphone and laryngograph capture
- Sample Rate: 48 kHz for both microphone and laryngograph signals
- Condition: Clean, studio environment recordings
- Text Material: 2,342 phonetically rich sentences selected from the TIMIT corpus
**Signal Processing:**
- F0 Extraction: RAPT (Robust Algorithm for Pitch Tracking) algorithm applied to extract reference pitch trajectories
- Output: Four-column F0 data per time frame including pitch estimate, voicing probability, RMSE, and cross-correlation values
- Quality Control: Professional recording conditions ensure high signal quality and accurate laryngograph measurements
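A small parsing sketch for the reference files, assuming they are plain text with four whitespace-separated columns per frame in the order described above (pitch, voicing probability, RMS, correlation). That layout is inferred from this card's column description, not from the files themselves, so verify the actual on-disk format first.

```python
# Sketch: parse a reference F0 file into per-frame records.
# ASSUMPTION: plain-text format, four whitespace-separated columns per
# line (pitch_hz, prob_voicing, rms, corr) — inferred from the column
# description above, not verified against the actual files.
import io

def parse_ref_file(fileobj):
    frames = []
    for line in fileobj:
        parts = line.split()
        if len(parts) != 4:
            continue  # skip blank or malformed lines
        pitch, voicing, rms, corr = map(float, parts)
        frames.append({"pitch_hz": pitch, "prob_voicing": voicing,
                       "rms": rms, "corr": corr})
    return frames

# Toy file content: one voiced frame, one unvoiced frame
sample = io.StringIO("120.5 0.92 0.40 0.81\n0.0 0.05 0.10 0.12\n")
ref_frames = parse_ref_file(sample)
```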
#### Who are the source data producers?

**Original Dataset Creators:**
- Gregor Pirker - Signal Processing and Speech Communication Laboratory, Graz University of Technology
- Michael Wohlmayr - Signal Processing and Speech Communication Laboratory, Graz University of Technology
- Stefan Petrik - Signal Processing and Speech Communication Laboratory, Graz University of Technology
- Franz Pernkopf - Signal Processing and Speech Communication Laboratory, Graz University of Technology
**Speakers:**
- 20 native English speakers (10 male, 10 female)
- Recruited for the recording sessions at Graz University of Technology
- No detailed demographic information beyond gender is available in the original dataset
### Annotations

#### Annotation process
The F0 reference annotations are generated automatically using the RAPT (Robust Algorithm for Pitch Tracking) algorithm applied to the laryngograph signals. This provides objective, algorithmic annotations rather than manual human annotations.
**RAPT Processing:**
- Applied to the laryngograph signals, which directly measure vocal fold vibration
- Generates four measures per time frame:
  - Pitch estimate in Hz
  - Probability of voicing (confidence measure)
  - Local root mean square (RMS) estimate
  - Peak-normalized cross-correlation value
- No manual correction or validation of the automatic annotations
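Since the voicing decision is delivered as a per-frame probability, a typical downstream step is to group consecutive voiced frames into segments. The sketch below does this with a simple threshold; the 0.5 cutoff is an arbitrary illustrative choice, not a value prescribed by the dataset.

```python
# Sketch: group consecutive voiced frames into (start, end) segments
# using the voicing-probability column. The 0.5 threshold is an
# arbitrary choice for illustration.

def voiced_segments(voicing_probs, threshold=0.5):
    """Return (start, end) frame-index pairs, end exclusive."""
    segments = []
    start = None
    for i, p in enumerate(voicing_probs):
        if p >= threshold and start is None:
            start = i
        elif p < threshold and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(voicing_probs)))
    return segments

print(voiced_segments([0.1, 0.9, 0.8, 0.2, 0.7]))  # → [(1, 3), (4, 5)]
```

Because the annotations were never manually corrected, such segments inherit any RAPT errors (octave jumps, voicing misclassifications), which is worth remembering when treating them as ground truth.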
#### Who are the annotators?
The annotations are generated automatically by the RAPT algorithm. No human annotators were involved in the F0 annotation process. The RAPT algorithm was developed by David Talkin and is a well-established method for pitch tracking.
### Personal and Sensitive Information
The dataset contains audio recordings of human speech, which could potentially be considered personal information. However:
- Speaker Identity: Speakers are anonymized with only numeric/alphabetic identifiers (F01-F10, M01-M10)
- Content: Recordings are of TIMIT corpus sentences (phonetically designed text), not personal conversations
- Consent: Speakers were recruited and recorded specifically for this research dataset
- Demographic Data: Only gender information is provided; no other demographic details are included
- Biometric Concerns: Voice recordings could potentially be used for speaker identification, though this is not the intended use
## Bias, Risks, and Limitations

### Technical Limitations
- Limited Speaker Diversity: Only 20 speakers may not represent full population variability
- Clean Speech Only: Studio recordings may not generalize to real-world noisy conditions
- Single Language: English-only corpus limits cross-linguistic applicability
- Read Speech: Scripted TIMIT sentences may not reflect natural speech patterns
- F0 Range: Limited to normal speaking voice F0 ranges (no singing, extreme emotions)
- Recording Equipment: Results may be specific to the laryngograph and microphone setup used
### Potential Biases
- Gender Balance: Equal male/female distribution (10 each) may not reflect natural population distributions
- Native Speaker Bias: All speakers are native English speakers, limiting accent/dialect diversity
- Socioeconomic Bias: University-recruited speakers may represent limited socioeconomic backgrounds
- Age Bias: No age information provided, but likely skewed toward university-age adults
- Geographic Bias: All recordings come from a single location (Graz, Austria), with potential acoustic environment effects
### Risks
- Speaker Identification: Voice recordings could potentially be used to identify speakers despite anonymization
- Overfitting: Small speaker set may lead to speaker-specific rather than generalizable models
- Evaluation Bias: Using this dataset alone for evaluation may not represent real-world performance
### Recommendations
- Combine with Other Datasets: Use alongside other pitch tracking corpora for comprehensive evaluation
- Cross-Dataset Validation: Test algorithms on multiple corpora to ensure generalizability
- Consider Population Diversity: Be aware of limited speaker diversity when interpreting results
- Respect Privacy: Follow ethical guidelines when using voice data, even if anonymized
- Report Limitations: Acknowledge dataset limitations when publishing research results
- Multiple Evaluation Conditions: Test on both clean and noisy conditions if developing practical applications
## Citation
If you use this dataset in your research, please cite the original paper:
**BibTeX:**

```bibtex
@inproceedings{pirker11_interspeech,
  author={Gregor Pirker and Michael Wohlmayr and Stefan Petrik and Franz Pernkopf},
  title={{A pitch tracking corpus with evaluation on multipitch tracking scenario}},
  year={2011},
  booktitle={Proc. Interspeech 2011},
  pages={1509--1512},
  doi={10.21437/Interspeech.2011-317},
  url={https://www.isca-speech.org/archive/interspeech_2011/pirker11_interspeech.html}
}
```
**APA:**
Pirker, G., Wohlmayr, M., Petrik, S., & Pernkopf, F. (2011). A pitch tracking corpus with evaluation on multipitch tracking scenario. In Proceedings of Interspeech 2011 (pp. 1509-1512). Florence, Italy.
## Glossary
- **F0 (Fundamental Frequency)**: The lowest frequency of a periodic waveform, corresponding to the rate of vocal fold vibration in speech.
- **Laryngograph (LAR)**: An instrument that measures vocal fold contact area during speech via electrodes placed on the throat, providing a direct physiological measurement of vocal fold vibration.
- **RAPT**: Robust Algorithm for Pitch Tracking, an algorithm developed by David Talkin for extracting fundamental frequency from speech signals.
- **TIMIT Corpus**: A large corpus of read speech designed for acoustic-phonetic research and automatic speech recognition system development.
- **Pitch Tracking**: The process of estimating the fundamental frequency contour of speech over time.
- **Voicing**: Speech sounds produced with vocal fold vibration (vowels, voiced consonants), as opposed to unvoiced sounds.
## More Information
- Original Dataset Homepage: https://www.spsc.tugraz.at/databases-and-tools/ptdb-tug-pitch-tracking-database-from-graz-university-of-technology.html
- Signal Processing and Speech Communication Laboratory: https://www.spsc.tugraz.at/
- TIMIT Corpus Information: Linguistic Data Consortium (LDC) catalog LDC93S1
- Open Database License: http://opendatacommons.org/licenses/odbl/1.0/
- RAPT Algorithm: Talkin, D. (1995). "A robust algorithm for pitch tracking (RAPT)." Speech coding and synthesis, 495-518.
## Dataset Card Authors
This dataset card was created by Kim Gilkey for the dataset conversion. All credit for the original dataset creation goes to Gregor Pirker, Michael Wohlmayr, Stefan Petrik, and Franz Pernkopf from the Signal Processing and Speech Communication Laboratory at Graz University of Technology.
## Dataset Card Contact
For questions about this HuggingFace dataset version, please create an issue in the dataset repository. For questions about the original dataset, please contact the Signal Processing and Speech Communication Laboratory at Graz University of Technology.