<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** Magdalena Wrembel, Krzysztof Hwaszcz, Agnieszka Pludra, Anna Skałba, Jarosław Weckwerth, Kamil Malarski, Zuzanna Ewa Cal, Hanna Kędzierska, Tristan Czarnecki-Verner, Anna Balas, Kamil Kaźmierski, Sylwiusz Żychliński, Justyna Gruszecka
- **Funded by:** Norwegian Financial Mechanism 2014-2021, project number 2019/34/H/HS2/00495
<!-- **Shared by [optional]:** [More Information Needed]-->
- **Language(s) (NLP):** Norwegian, English, Polish
- **License:** Creative Commons Attribution 4.0 (CC BY 4.0)

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://adim.web.amu.edu.pl/en/lnnor-corpus/
<!-- **Paper [optional]:** [More Information Needed]-->
<!-- **Demo [optional]:** [More Information Needed]-->

## Uses

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

Data were recorded between 2021 and 2024 using Shure SM-35 unidirectional cardioid microphones and Marantz PMD620 recorders, ensuring minimal noise interference. Recordings were captured at 48 kHz with 16-bit resolution. Some of the recordings were annotated with orthographic and/or phonetic transcriptions and aligned at the word and phoneme levels. Metadata include speaker characteristics, language status (L1, L2, L3/Ln), task type, and audio details.
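As a quick sanity check, the stated recording specification (48 kHz, 16-bit PCM) can be verified programmatically. The sketch below is illustrative only, assuming standard PCM WAV files; the helper name is ours and not part of the corpus tooling.

```python
import io
import wave

# Hypothetical helper (not part of the dataset tooling): verify that a WAV
# stream matches the corpus recording spec of 48 kHz, 16-bit PCM audio.
EXPECTED_RATE_HZ = 48_000
EXPECTED_SAMPLE_WIDTH_BYTES = 2  # 16-bit == 2 bytes per sample

def matches_corpus_spec(stream) -> bool:
    """Return True if the WAV data is 48 kHz with 16-bit samples."""
    with wave.open(stream, "rb") as wav:
        return (wav.getframerate() == EXPECTED_RATE_HZ
                and wav.getsampwidth() == EXPECTED_SAMPLE_WIDTH_BYTES)

# Demo on a synthetic one-second silent recording written to memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(EXPECTED_SAMPLE_WIDTH_BYTES)
    wav.setframerate(EXPECTED_RATE_HZ)
    wav.writeframes(b"\x00\x00" * EXPECTED_RATE_HZ)
buf.seek(0)
spec_ok = matches_corpus_spec(buf)
```

The same check works on files on disk by passing a path to `wave.open` instead of an in-memory stream.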

#### Who are the source data producers?

The annotation process combined automated and manual methods. It consisted of the following steps:

- Orthographic transcriptions: For Polish and English recordings, transcriptions were generated using an STT (speech-to-text) tool or created manually by linguists with a high level of proficiency in the respective languages. Norwegian transcriptions were entirely human-generated to ensure high accuracy.
- Phonetic transcriptions: Phonetic transcriptions were automatically generated using WebMAUS. The output was encoded in SAMPA (Speech Assessment Methods Phonetic Alphabet), ensuring consistency and compatibility with downstream processing.
- Alignments: Word- and phoneme-level alignments were created using WebMAUS, which produced TextGrids aligning the transcriptions with the corresponding audio files.
- Speaker metadata: Speaker metadata were collected before the recording sessions through the Linguistic History Questionnaire (LHQ) and supplementary forms provided to participants. These forms were designed to capture detailed linguistic and demographic information, ensuring a comprehensive profile of each speaker.
- Audio metadata: Audio metadata were automatically captured by the recording equipment during data collection and embedded in the corresponding audio files.
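The TextGrid alignments above can be consumed with any standard Praat-format tooling. As a minimal illustration, the sketch below extracts labelled intervals from a TextGrid fragment with a regular expression; the fragment is synthetic, not real corpus data, and a dedicated TextGrid parser library would be preferable for real work.

```python
import re

# Synthetic example of the interval entries found in a Praat TextGrid
# such as those produced by WebMAUS (not real corpus data).
SAMPLE_TEXTGRID = """\
    intervals [1]:
        xmin = 0.00
        xmax = 0.35
        text = "h"
    intervals [2]:
        xmin = 0.35
        xmax = 0.52
        text = "a"
"""

INTERVAL_RE = re.compile(
    r'xmin = ([\d.]+)\s+xmax = ([\d.]+)\s+text = "([^"]*)"'
)

def parse_intervals(textgrid_text):
    """Return (start, end, label) tuples for every labelled interval."""
    return [(float(start), float(end), label)
            for start, end, label in INTERVAL_RE.findall(textgrid_text)]

intervals = parse_intervals(SAMPLE_TEXTGRID)
```

Each tuple gives the interval boundaries in seconds and the associated label (a word or, as here, a phoneme symbol).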

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

During the recordings, participants were asked not to disclose any personal or sensitive information, which was especially relevant for the tasks eliciting free speech. The remaining tasks, based on text reading, did not involve any personal or sensitive information.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Potential biases in this dataset include:

- Participant demographics: The majority of participants were young adults aged 18 to 25.
- Gender distribution: Women constituted 68% of the speakers.
- Linguistic scope: The speech samples are limited mostly to the three languages under investigation, i.e., Norwegian, English, and Polish.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

We recommend using the set of short audio files (under 30 s) for any subsequent analysis. The raw recordings of full tasks can be found [TBA].
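The recommended filtering step can be sketched as follows. The clip names and helper functions are ours, and the demo audio is generated in memory rather than read from the corpus; only the under-30-seconds threshold comes from the recommendation above.

```python
import io
import wave

MAX_DURATION_S = 30.0  # recommended upper bound for analysis clips

def wav_duration_seconds(stream) -> float:
    """Duration of a WAV stream, derived from frame count and sample rate."""
    with wave.open(stream, "rb") as wav:
        return wav.getnframes() / wav.getframerate()

def make_silent_wav(seconds, rate_hz=8_000):
    """Build an in-memory 16-bit mono WAV of the given length (demo only;
    a small sample rate keeps the synthetic buffers light)."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(rate_hz)
        wav.writeframes(b"\x00\x00" * int(seconds * rate_hz))
    buf.seek(0)
    return buf

# Hypothetical clip set: one 5-second clip and one 45-second clip.
clips = {"short.wav": make_silent_wav(5), "long.wav": make_silent_wav(45)}
kept = [name for name, stream in clips.items()
        if wav_duration_seconds(stream) < MAX_DURATION_S]
```

On real data the same predicate would be applied to files on disk, keeping only clips shorter than the 30-second threshold.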

## Dataset Card Authors