gilkeyio committed · Commit 0a3064c · verified · 1 Parent(s): 105fe71

Update README.md

Files changed (1):
  1. README.md +155 -65
README.md CHANGED
@@ -59,140 +59,230 @@ pretty_name: Pitch Tracking Database from Graz University of Technology
  size_categories:
  - 1K<n<10K
  ---
- # Dataset Card for Dataset Name

- <!-- Provide a quick summary of the dataset. -->
-
- This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

  ## Dataset Details

  ### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->
-
-

- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]

- ### Dataset Sources [optional]

- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the dataset is intended to be used. -->
-
  ### Direct Use

- <!-- This section describes suitable use cases for the dataset. -->

- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]

  ## Dataset Structure

- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]

  ## Dataset Creation

  ### Curation Rationale

- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]

  ### Source Data

- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
  #### Data Collection and Processing

- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

- [More Information Needed]

  #### Who are the source data producers?

- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

- [More Information Needed]

- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

  #### Annotation process

- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

- [More Information Needed]

  #### Who are the annotators?

- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]

- #### Personal and Sensitive Information

- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

- [More Information Needed]

  **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Dataset Card Authors [optional]

- [More Information Needed]

  ## Dataset Card Contact

- [More Information Needed]
  size_categories:
  - 1K<n<10K
  ---
+ # PTDB-TUG: Pitch Tracking Database from Graz University of Technology
+
+ The Pitch Tracking Database from Graz University of Technology (PTDB-TUG) is a speech database for pitch tracking that provides microphone and laryngograph signals from 20 native English speakers, together with extracted pitch trajectories as a reference. It contains 4,718 recordings of phonetically rich sentences from the TIMIT corpus, with both laryngograph (LAR) and microphone (MIC) channels plus F0 reference data extracted using RAPT.

  ## Dataset Details

  ### Dataset Description

+ PTDB-TUG is a comprehensive pitch tracking corpus designed for evaluating pitch tracking algorithms. It contains clean speech recordings with simultaneous laryngograph signals, which provide accurate ground-truth pitch information. The database uses 2,342 phonetically rich sentences from the TIMIT corpus, recorded by 20 native English speakers (10 male, 10 female) in a professional recording studio environment.
+
+ - **Curated by:** Signal Processing and Speech Communication Laboratory (SPSC), Graz University of Technology
+ - **Original Authors:** Gregor Pirker, Michael Wohlmayr, Stefan Petrik, Franz Pernkopf
+ - **Shared by:** gilkeyio (HuggingFace dataset conversion)
+ - **Language(s):** English
+ - **License:** Open Database License (ODbL) v1.0

+ ### Dataset Sources

+ - **Original Repository:** https://www.spsc.tugraz.at/databases-and-tools/ptdb-tug-pitch-tracking-database-from-graz-university-of-technology.html
+ - **Paper:** "A pitch tracking corpus with evaluation on multipitch tracking scenario" (Interspeech 2011)
+ - **Direct Download:** http://www2.spsc.tugraz.at/databases/PTDB-TUG

  ## Uses

  ### Direct Use

+ This dataset is specifically designed for:
+
+ - **Pitch tracking algorithm development and evaluation:** Primary use case for developing and benchmarking F0 estimation algorithms
+ - **Speech signal processing research:** Studying vocal fold vibration patterns using simultaneous laryngograph and microphone signals
+ - **Voice quality analysis:** Analyzing voice characteristics using clean speech with ground-truth pitch information
+ - **Multi-modal speech analysis:** Leveraging both acoustic and physiological (laryngograph) signals
+ - **Phonetic research:** Using phonetically balanced TIMIT sentences for comprehensive speech analysis
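To make the benchmarking use case concrete, the standard accuracy figure in this literature is the Gross Pitch Error (GPE): the fraction of voiced frames whose pitch estimate deviates from the reference by more than a tolerance, conventionally 20%. A minimal sketch; the metric and threshold are conventions from the pitch-tracking literature, not something shipped with this dataset:

```python
# Minimal Gross Pitch Error (GPE) sketch: compare estimated pitch against
# reference pitch on frames the reference marks as voiced. The 20% deviation
# threshold is the usual convention in the pitch-tracking literature.

def gross_pitch_error(est_f0, ref_f0, ref_voicing, threshold=0.2):
    """Fraction of voiced frames where |est - ref| / ref exceeds threshold."""
    voiced = [(e, r) for e, r, v in zip(est_f0, ref_f0, ref_voicing) if v > 0.5]
    if not voiced:
        return 0.0
    errors = sum(1 for e, r in voiced if abs(e - r) / r > threshold)
    return errors / len(voiced)

# Toy example: four frames, three voiced; one estimate is off by more than 20%.
ref = [200.0, 210.0, 0.0, 190.0]
voicing = [1.0, 1.0, 0.0, 1.0]
est = [202.0, 300.0, 0.0, 188.0]
print(gross_pitch_error(est, ref, voicing))  # -> 0.3333333333333333
```

Voicing Decision Error and Fine Pitch Error are computed similarly; reporting GPE over this corpus's reference frames is the typical evaluation setup.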
  ### Out-of-Scope Use

+ - **Real-time applications:** This is a research dataset, not optimized for real-time pitch tracking
+ - **Noisy speech scenarios:** Recordings are clean studio quality and may not generalize to noisy environments
+ - **Non-English languages:** The dataset contains only English speech from native speakers
+ - **Speaker identification:** Limited to 20 speakers, insufficient for robust speaker recognition systems
+ - **Emotional speech analysis:** Recordings are neutral read speech, not spontaneous or emotional speech

  ## Dataset Structure

+ The dataset contains 4,718 utterances with the following features:
+
+ ### Data Fields
+
+ - **speaker_id** (string): Speaker identifier (F01-F10 for females, M01-M10 for males)
+ - **gender** (string): Speaker gender ("female" or "male")
+ - **speaker_number** (int): Numeric speaker identifier (1-10)
+ - **utterance_type** (string): Type of utterance from the TIMIT corpus
+   - "sa": dialect sentence (2 per speaker)
+   - "si": phonetically-balanced sentence (~189 per speaker)
+   - "sx": phonetically-compact sentence (~45 per speaker)
+ - **utterance_type_description** (string): Human-readable description of the utterance type
+ - **utterance_number** (int): Numeric identifier for the specific utterance
+ - **utterance_id** (string): Combined utterance type and number (e.g., "si548")
+ - **lar_file_path** (string): Path to the laryngograph audio file
+ - **mic_file_path** (string): Path to the microphone audio file
+ - **ref_file_path** (string): Path to the reference F0 data file
+ - **lar_sample_rate** (int): Sample rate of the laryngograph recording (48 kHz)
+ - **mic_sample_rate** (int): Sample rate of the microphone recording (48 kHz)
+ - **lar_duration** (float): Duration of the laryngograph recording in seconds
+ - **mic_duration** (float): Duration of the microphone recording in seconds
+ - **lar_audio** (Audio): Laryngograph audio data with automatic decoding
+ - **mic_audio** (Audio): Microphone audio data with automatic decoding
+ - **f0_data** (list): Reference F0 data from the RAPT algorithm, with four columns per frame:
+   1. Pitch estimate (Hz)
+   2. Probability of voicing
+   3. Local root mean squared estimate (RMSE)
+   4. Peak normalized cross-correlation value
+
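As a quick illustration of working with the four-column reference data, here is a small parsing sketch. It assumes each frame is one line of four whitespace-separated numbers in the column order listed above; the helper name is hypothetical:

```python
# Sketch: turn reference F0 data into per-frame records. Assumes each frame
# is a line of four whitespace-separated numbers, in the column order given
# above (pitch, voicing probability, RMSE, correlation).

def parse_f0_frames(text):
    """Return a list of dicts with pitch, voicing, rmse and correlation."""
    frames = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 4:
            continue  # skip blank or malformed lines
        pitch, voicing, rmse, corr = (float(p) for p in parts)
        frames.append({"pitch_hz": pitch, "voicing": voicing,
                       "rmse": rmse, "correlation": corr})
    return frames

sample = "0.0 0.0 312.5 0.02\n215.3 1.0 512.8 0.95\n"
frames = parse_f0_frames(sample)
print(len(frames), frames[1]["pitch_hz"])  # -> 2 215.3
```

When loading through the `datasets` library, the `f0_data` field already exposes these values directly, so parsing is only needed if you work from the raw reference files.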
+ ### Data Splits
+
+ The dataset currently provides a single train split with all 4,718 utterances; the original database does not define standard train/validation/test splits.
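Since no official splits exist, a common choice for corpora like this is a speaker-disjoint partition, so models are evaluated on voices they have not seen. A sketch; the held-out speaker choice and helper name are illustrative, not part of the dataset:

```python
# Sketch: build a speaker-disjoint train/test partition from speaker IDs,
# so no speaker appears in both sets. The held-out speakers are illustrative.

def speaker_disjoint_split(records, test_speakers):
    """Partition records (dicts with a 'speaker_id' key) by speaker."""
    train = [r for r in records if r["speaker_id"] not in test_speakers]
    test = [r for r in records if r["speaker_id"] in test_speakers]
    return train, test

# Toy records using the F01-F10 / M01-M10 naming described above.
records = [{"speaker_id": s, "utterance_id": f"sx{i}"}
           for i, s in enumerate(["F01", "F02", "M01", "M02"])]
train, test = speaker_disjoint_split(records, test_speakers={"F02", "M02"})
print(len(train), len(test))  # -> 2 2
```

Splitting by speaker rather than by utterance avoids speaker leakage, which matters here given the small 20-speaker pool.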

  ## Dataset Creation

  ### Curation Rationale

+ The PTDB-TUG database was created to address the need for a comprehensive pitch tracking evaluation corpus with high-quality ground truth data. Traditional pitch tracking evaluation relies on synthetic signals or manual annotation, both of which have limitations. By using laryngograph signals that directly measure vocal fold vibration, this dataset provides more accurate and objective ground truth for pitch tracking algorithm development and evaluation.

  ### Source Data

  #### Data Collection and Processing

+ **Recording Setup:**
+ - **Location:** Recording Studio of the Institute of Broadband Communications, Graz University of Technology
+ - **Equipment:** Professional recording setup with simultaneous microphone and laryngograph capture
+ - **Sample Rate:** 48 kHz for both microphone and laryngograph signals
+ - **Condition:** Clean studio-environment recordings
+ - **Text Material:** 2,342 phonetically rich sentences selected from the TIMIT corpus

+ **Signal Processing:**
+ - **F0 Extraction:** RAPT (Robust Algorithm for Pitch Tracking) applied to extract reference pitch trajectories
+ - **Output:** Four-column F0 data per time frame: pitch estimate, voicing probability, RMSE, and cross-correlation value
+ - **Quality Control:** Professional recording conditions ensure high signal quality and accurate laryngograph measurements

  #### Who are the source data producers?

+ **Original Dataset Creators:**
+ - **Gregor Pirker** - Signal Processing and Speech Communication Laboratory, Graz University of Technology
+ - **Michael Wohlmayr** - Signal Processing and Speech Communication Laboratory, Graz University of Technology
+ - **Stefan Petrik** - Signal Processing and Speech Communication Laboratory, Graz University of Technology
+ - **Franz Pernkopf** - Signal Processing and Speech Communication Laboratory, Graz University of Technology

+ **Speakers:**
+ - 20 native English speakers (10 male, 10 female)
+ - Recruited for the recording sessions at Graz University of Technology
+ - No detailed demographic information beyond gender is available in the original dataset

+ ### Annotations

  #### Annotation process

+ The F0 reference annotations are generated automatically by applying RAPT (Robust Algorithm for Pitch Tracking) to the laryngograph signals, yielding objective, algorithmic annotations rather than manual human annotations.

+ **RAPT Processing:**
+ - Applied to laryngograph signals, which directly measure vocal fold vibration
+ - Generates four measures per time frame:
+   1. Pitch estimate in Hz
+   2. Probability of voicing (confidence measure)
+   3. Local root mean squared estimate (RMSE)
+   4. Peak normalized cross-correlation value
+ - No manual correction or validation of the automatic annotations

  #### Who are the annotators?

+ The annotations are generated automatically by the RAPT algorithm; no human annotators were involved in the F0 annotation process. RAPT was developed by David Talkin and is a well-established method for pitch tracking.

+ ### Personal and Sensitive Information

+ The dataset contains audio recordings of human speech, which could potentially be considered personal information. However:

+ - **Speaker Identity:** Speakers are anonymized with only alphanumeric identifiers (F01-F10, M01-M10)
+ - **Content:** Recordings are of TIMIT corpus sentences (phonetically designed text), not personal conversations
+ - **Consent:** Speakers were recruited and recorded specifically for this research dataset
+ - **Demographic Data:** Only gender information is provided; no other demographic details are included
+ - **Biometric Concerns:** Voice recordings could potentially be used for speaker identification, though this is not the intended use

  ## Bias, Risks, and Limitations

+ ### Technical Limitations

+ - **Limited Speaker Diversity:** A pool of only 20 speakers may not capture full population variability
+ - **Clean Speech Only:** Studio recordings may not generalize to real-world noisy conditions
+ - **Single Language:** The English-only corpus limits cross-linguistic applicability
+ - **Read Speech:** Scripted TIMIT sentences may not reflect natural speech patterns
+ - **F0 Range:** Limited to normal speaking-voice F0 ranges (no singing or extreme emotion)
+ - **Recording Equipment:** Results may be specific to the laryngograph and microphone setup used

+ ### Potential Biases
+
+ - **Gender Balance:** The equal male/female distribution (10 each) may not reflect natural population distributions
+ - **Native Speaker Bias:** All speakers are native English speakers, limiting accent/dialect diversity
+ - **Socioeconomic Bias:** University-recruited speakers may represent limited socioeconomic backgrounds
+ - **Age Bias:** No age information is provided, but the pool is likely skewed toward university-age adults
+ - **Geographic Bias:** All recordings come from a single location (Graz, Austria), with potential acoustic-environment effects
+
+ ### Risks

+ - **Speaker Identification:** Voice recordings could potentially be used to identify speakers despite anonymization
+ - **Overfitting:** The small speaker set may lead to speaker-specific rather than generalizable models
+ - **Evaluation Bias:** Using this dataset alone for evaluation may not represent real-world performance
+
+ ### Recommendations

+ - **Combine with Other Datasets:** Use alongside other pitch tracking corpora for comprehensive evaluation
+ - **Cross-Dataset Validation:** Test algorithms on multiple corpora to ensure generalizability
+ - **Consider Population Diversity:** Be aware of the limited speaker diversity when interpreting results
+ - **Respect Privacy:** Follow ethical guidelines when using voice data, even if anonymized
+ - **Report Limitations:** Acknowledge dataset limitations when publishing research results
+ - **Multiple Evaluation Conditions:** Test on both clean and noisy conditions if developing practical applications

+ ## Citation

+ If you use this dataset in your research, please cite the original paper:

  **BibTeX:**

+ ```bibtex
+ @inproceedings{pirker11_interspeech,
+   author={Gregor Pirker and Michael Wohlmayr and Stefan Petrik and Franz Pernkopf},
+   title={{A pitch tracking corpus with evaluation on multipitch tracking scenario}},
+   year={2011},
+   booktitle={Proc. Interspeech 2011},
+   pages={1509--1512},
+   doi={10.21437/Interspeech.2011-317},
+   url={https://www.isca-speech.org/archive/interspeech_2011/pirker11_interspeech.html}
+ }
+ ```

  **APA:**

+ Pirker, G., Wohlmayr, M., Petrik, S., & Pernkopf, F. (2011). A pitch tracking corpus with evaluation on multipitch tracking scenario. In *Proceedings of Interspeech 2011* (pp. 1509-1512). Florence, Italy.

+ ## Glossary
+
+ **F0 (Fundamental Frequency):** The lowest frequency of a periodic waveform, corresponding to the rate of vocal fold vibration in speech.
+
+ **Laryngograph (LAR):** An instrument that measures vocal fold contact area during speech via electrodes placed on the throat, providing a direct physiological measurement of vocal fold vibration.
+
+ **RAPT:** Robust Algorithm for Pitch Tracking - an algorithm developed by David Talkin for extracting fundamental frequency from speech signals.

+ **TIMIT Corpus:** A large corpus of read speech designed for acoustic-phonetic research and automatic speech recognition system development.

+ **Pitch Tracking:** The process of estimating the fundamental frequency contour of speech over time.

+ **Voicing:** Whether a speech sound is produced with vocal fold vibration (voiced, e.g. vowels and voiced consonants) or without it (unvoiced).

+ ## More Information

+ - **Original Dataset Homepage:** https://www.spsc.tugraz.at/databases-and-tools/ptdb-tug-pitch-tracking-database-from-graz-university-of-technology.html
+ - **Signal Processing and Speech Communication Laboratory:** https://www.spsc.tugraz.at/
+ - **TIMIT Corpus Information:** Linguistic Data Consortium (LDC) catalog LDC93S1
+ - **Open Database License:** http://opendatacommons.org/licenses/odbl/1.0/
+ - **RAPT Algorithm:** Talkin, D. (1995). "A robust algorithm for pitch tracking (RAPT)." Speech Coding and Synthesis, 495-518.

+ ## Dataset Card Authors

+ This dataset card was created by Kim Gilkey for the dataset conversion. All credit for the original dataset creation goes to Gregor Pirker, Michael Wohlmayr, Stefan Petrik, and Franz Pernkopf from the Signal Processing and Speech Communication Laboratory at Graz University of Technology.

  ## Dataset Card Contact

+ For questions about this HuggingFace dataset version, please create an issue in the dataset repository. For questions about the original dataset, please contact the Signal Processing and Speech Communication Laboratory at Graz University of Technology.