Chiz committed (verified) · Commit d9b86e5 · Parent: 728dfa0

Initial upload: Igbo blind spot evaluation dataset

README.md ADDED
---
language:
- ig
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
tags:
- african-languages
- low-resource
- tonal-languages
- asr-bias
- model-evaluation
- igbo
size_categories:
- n<1K
---

# omniASR Igbo Blind Spot Dataset

## Research Questions

This dataset investigates three interrelated questions about multilingual ASR performance on tonal languages:

1. **Operational Definition:** What does "language support" mean when a model lists 1,600+ languages? Does coverage imply functional accuracy on linguistically meaningful distinctions?

2. **Diagnostic Validity:** Can tonal diacritic preservation serve as a diagnostic for acoustic competence vs. orthographic pattern matching in low-resource languages?

3. **Systematic Evaluation:** Does facebook/omniASR-CTC-1B exhibit systematic tonal collapse in Igbo, and if so, what error patterns emerge?

## Overview

This dataset provides a controlled diagnostic evaluation of **tonal fidelity** in facebook/omniASR-CTC-1B when processing Igbo (ibo_Latn), a tonal Niger-Congo language with ~45 million speakers. Through 21 systematically designed audio samples, we document a 61.2% diacritic loss rate on tonal markers and present evidence consistent with probabilistic diacritic generation rather than robust acoustic conditioning.

**Key Finding:** The model drops 61.2% of tonal diacritics, fails to distinguish tonal minimal pairs, and paradoxically hallucinates diacritics on monotone speech.

## Motivation

Recent work on ASR fairness has documented systematic performance disparities across demographic groups (Koenecke et al., 2020) and languages (Ogueji et al., 2024). However, existing evaluations focus primarily on word error rates in high-resource languages. This dataset addresses three critical gaps:

1. **Tonal language evaluation:** Most ASR benchmarks ignore whether models preserve linguistically meaningful tone distinctions
2. **Low-resource African languages:** Igbo remains underrepresented in ML evaluation despite being a major world language
3. **Native speaker ground truth:** As a native Igbo speaker, I provide authoritative ground truth for phonetic and tonal correctness that automated metrics cannot capture

## The Paradox of "Supported" Languages

omniASR's model card lists Igbo (ibo_Latn) among its 1,600+ supported languages. However, as recent work on low-resource ASR demonstrates, **nominal support does not guarantee functional accuracy** (EMNLP 2024, "The Zeno's Paradox of 'Low-Resource' Languages").

The challenge is definitional: what does it mean for a language to be "low-resource"?
- **By training data:** Igbo has far fewer hours of transcribed speech than English (low-resource)
- **By speaker population:** 45 million speakers (NOT low-resource)
- **By model performance:** Our findings show it behaves like a low-resource language despite being "supported"

This dataset reveals the gap between **coverage** (the language is in the training set) and **competence** (the model preserves linguistically meaningful distinctions). As the EMNLP paper argues, we risk creating a **Zeno's paradox**: models claim to support more and more languages, yet per-language quality never approaches parity with high-resource languages.

**Our contribution:** We provide native-speaker ground truth to quantify this gap for Igbo, moving beyond subjective impressions to measurable blind spots.

## Dataset Structure
```
huggingface_dataset/
├── audio/          # 21 WAV files (16 kHz mono)
├── metadata.csv    # Ground truth, model outputs, error metrics
└── README.md       # This file
```

### Metadata Schema

| Column | Description |
|--------|-------------|
| `file_name` | Path to audio file |
| `ground_truth` | Correct transcription with tone marks |
| `model_output` | omniASR-CTC-1B prediction |
| `category` | Error category (see taxonomy below) |
| `subcategory` | Specific test condition |
| `language` | Language code (ibo_Latn, yor_Latn, fra_Latn, mixed) |
| `character_error_rate` | Character-level error rate (0-1) |
| `diacritics_expected` | Number of tone marks in ground truth |
| `diacritics_produced` | Number of tone marks in model output |
| `diacritic_loss` | Net diacritic difference (negative = hallucination) |

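To make the schema concrete, here is a minimal sketch of loading the table and reproducing the per-category aggregates with pandas. The `resolve/main` URL is an assumption built from the repository id in the Citation section; reading a local copy of `metadata.csv` works identically.

```python
import pandas as pd

# Hypothetical Hub URL built from the repo id in the Citation section;
# a local copy of metadata.csv can be read the same way.
url = ("https://huggingface.co/datasets/chiz/omniASR-igbo-blindspots"
       "/resolve/main/metadata.csv")
df = pd.read_csv(url)

# Mean character error rate per category (cf. Quantitative Summary).
print(df.groupby("category")["character_error_rate"].mean().round(3))

# Samples where the model hallucinated diacritics (negative net loss).
print(df.loc[df["diacritic_loss"] < 0,
             ["file_name", "diacritics_expected", "diacritics_produced"]])
```
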
## Error Taxonomy

### 1. Cross-lingual Orthographic Interference (5 samples)
**Hypothesis:** The model applies orthographic conventions from other languages to Igbo text.

**Tests:**
- Personal names (01_script_names)
- Formal greetings (02_script_formal)
- Numeric sequences (03_script_numbers)
- Proverbs (04_script_proverb)
- Prosody variation (05_script_slow)

**Finding:** The model systematically adds incorrect diacritics where none exist (a net diacritic loss of -38.9%, i.e., a 38.9% hallucination rate), suggesting cross-lingual interference from other supported languages.

### 2. Phonemic Tone Sensitivity (6 samples)
**Hypothesis:** The model cannot distinguish phonemically contrastive tones in Igbo.

**Tests:**
- Minimal pairs: akwa/akwà/àkwà/ákwá (06_tonal_akwa)
- Minimal pairs: oke/òkè/ọkè (07_tonal_oke)
- Dense tone marks (08_tonal_dense)
- Monotone control (09_tonal_flat)
- Yoruba controls (10_tonal_yoruba, 21_tonal_yoruba_formal)

**Findings:**
- 61.2% diacritic loss (30/49 tone marks dropped)
- 74.4% CER on monotone speech, where the model ADDED tones that don't exist
- Model outputs collapse multiple tonal minimal-pair forms into a shared orthographic representation, indicating weak tonal separability in this evaluation setup

**Linguistic Impact:** In Igbo, tone changes word meaning. Losing tone marks is equivalent to losing consonants in English (e.g., "bat", "hat", and "cat" all transcribed as "at").

### 3. Language Boundary Effects (5 samples)
**Hypothesis:** English-Igbo code-switching (extremely common in Nigerian speech) disrupts language-specific processing.

**Tests:**
- English → Igbo embedding (11_codeswitch_en2ig)
- Igbo → English embedding (12_codeswitch_ig2en)
- Sentence-level alternation (13_codeswitch_alternate)
- Diacritics in English context (14_codeswitch_embedded)
- Nigerian Pidgin control (15_codeswitch_pidgin)

**Finding:** 14.3% diacritic loss. English portions are transcribed perfectly while adjacent Igbo loses tone marks (e.g., "The ụlọ is beautiful" → "te ulọ is beautiful"), suggesting language detection boundaries affect orthographic fidelity.

### 4. Domain-Specific Lexical Coverage (5 samples)
**Hypothesis:** The model struggles with culturally specific terms, place names, and idiomatic expressions outside the training distribution.

**Tests:**
- Nigerian place names (16_context_places)
- Igbo food terms (17_context_food)
- Long proverbs (18_context_proverb)
- French control (19_context_french)
- Background noise robustness (20_context_noise)

**Findings:**
- Best diacritic preservation (6.3% loss) but high word-level errors (30% CER)
- Place names corrupted: "Owerri" → "weri" (missing syllable)
- High-resource French performed unexpectedly poorly (Czech/Slavic character hallucinations)

## Quantitative Summary

| Category | Samples | Diacritic Loss | Avg CER |
|----------|---------|----------------|---------|
| **Phonemic Tone Sensitivity** | 6 | **61.2%** | 50.6% |
| Cross-lingual Orthographic Interference | 5 | -38.9% (hallucination) | 28.8% |
| Domain-Specific Lexical Coverage | 5 | 6.3% | 30.1% |
| Language Boundary Effects | 5 | 14.3% | 20.0% |
| **Overall** | **21** | **26.8%** | **32.5%** |

## Statistical Analysis

### Diacritic-Specific Metrics

Standard Character Error Rate (CER) conflates spacing, capitalization, and tonal errors. We define a **Diacritic Error Rate (DER)** to isolate tone-related failures:

$$
\text{DER} = \frac{\text{diacritics\_lost} + \text{diacritics\_hallucinated}}{\text{diacritics\_expected}}
$$

**Results:**
- Overall DER: 26.8% (vs. CER: 32.5%)
- Phonemic Tone Sensitivity DER: 61.2% (vs. CER: 50.6%)

**Why DER matters:** In tonal languages, diacritic errors change word meaning (e.g., "crying" vs. "cloth"). DER quantifies semantic preservation failure independent of general transcription accuracy.

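A minimal sketch of this computation, assuming tone marks are counted as combining grave and acute accents after NFD normalization (the dot below ị/ọ/ụ is a vowel distinction, not a tone mark) and that lost/hallucinated counts are taken as net differences, matching the `diacritics_expected`/`diacritics_produced` columns:

```python
import unicodedata

TONE_MARKS = {"\u0300", "\u0301"}  # combining grave / acute accents

def count_tone_marks(text: str) -> int:
    # NFD splits precomposed characters (e.g., "à") into base + combining mark.
    return sum(ch in TONE_MARKS for ch in unicodedata.normalize("NFD", text))

def der(ground_truth: str, model_output: str) -> float:
    expected = count_tone_marks(ground_truth)
    produced = count_tone_marks(model_output)
    lost = max(expected - produced, 0)
    hallucinated = max(produced - expected, 0)
    return (lost + hallucinated) / expected if expected else float("nan")

# All five tone marks dropped -> DER = 1.0
print(der("Akwà, àkwà, ákwá", "akwa akwa akua"))
```

Note that net counting cannot detect a lost mark and a hallucinated mark canceling out; position-sensitive alignment would be stricter.
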
### Bootstrap Uncertainty Quantification

To account for the small sample size (N=21), we computed 95% confidence intervals via bootstrap resampling (10,000 iterations):

**Diacritic Loss Rate:**
- Overall: 52.6% (95% CI: [30.3%, 69.7%])
- Phonemic Tone Sensitivity: 75.5% (95% CI: [57.1%, 89.7%])

**Hallucination Rate:**
- Overall: 35.2% (95% CI: [18.2%, 53.3%])
- Cross-lingual Orthographic Interference: 36.0% (95% CI: [8.7%, 68.0%])

**Character Error Rate:**
- Overall: 0.333 (95% CI: [0.267, 0.402])
- Phonemic Tone Sensitivity: 0.506 (95% CI: [0.416, 0.617])

**Interpretation:** Even with wide confidence intervals due to the small sample size, the lower bounds remain substantial. The tonal category's worst-case lower bound (57.1% loss) still represents severe degradation of phonemic information, indicating that the observed effects are robust to sample variation.

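A sketch of this procedure as a percentile bootstrap over the 21 per-sample values, shown here for CER; the exact statistic and resampling scheme behind the reported intervals may differ in detail.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("metadata.csv")
cer = df["character_error_rate"].to_numpy()

rng = np.random.default_rng(seed=0)
# 10,000 resamples of size N=21, drawn with replacement.
boot_means = np.array([
    rng.choice(cer, size=cer.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean CER = {cer.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```
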
## Scope and Limitations of Claims

**This study demonstrates:**
- Systematic diacritic loss in omniASR-CTC-1B on Igbo audio (21 controlled samples)
- Failure to preserve tonal minimal-pair distinctions in this evaluation setup
- Diacritic hallucination on monotone speech (evidence of orthographic bias)

**This study does NOT claim:**
- That omniASR fails universally on all Igbo speech
- That tone modeling is architecturally absent from the model
- That Igbo is uniquely disadvantaged relative to all other low-resource languages
- That the observed error rates generalize to all dialects or all speakers

**What would be needed to strengthen these claims:**
- Multi-speaker evaluation (10+ speakers across dialects)
- Acoustic analysis (F0 contour extraction, pitch tracking validation)
- Comparative evaluation on other tonal African languages
- Controlled resynthesis experiments isolating acoustic vs. lexical priors

## Critical Insight: Evidence of Weak Tonal Conditioning

The clearest diagnostic signal comes from **File 09 (monotone speech)**:
- **Setup:** I spoke "O na-eri oji n'ututu" with deliberately FLAT intonation (no tonal variation)
- **Expected:** If tonal diacritics were tightly conditioned on acoustics in this setting, the output would contain few or no added diacritics
- **Result:** "ọne rị ọjí nụ tútú": the model ADDED random tone marks that I did not produce

**Interpretation:** The observed behavior is consistent with probabilistic diacritic insertion driven primarily by lexical or orthographic priors, rather than robust conditioning on acoustic tone. Confirming this mechanism would require acoustic analysis (e.g., F0 contour statistics) and controlled resynthesis experiments; a minimal sketch of such a check follows.

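As a pointer for that follow-up, here is a sketch of the F0 check using librosa's pYIN tracker (an illustrative choice, not the analysis performed for this dataset): a genuinely flat delivery of File 09 should show a small F0 coefficient of variation, whereas the added tone marks in the transcript would imply pitch movement.

```python
import librosa
import numpy as np

# Load the monotone control at the dataset's 16 kHz sampling rate.
y, sr = librosa.load("audio/09_tonal_flat.wav", sr=16000)

# pYIN fundamental-frequency estimate over a typical adult speech range.
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

f0_voiced = f0[voiced]
cv = np.nanstd(f0_voiced) / np.nanmean(f0_voiced)
# A deliberately flat delivery should yield a small coefficient of variation.
print(f"median F0 = {np.nanmedian(f0_voiced):.1f} Hz, CV = {cv:.3f}")
```
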
## Linguistic Error Analysis: When Tone Loss Changes Meaning

| File | Ground Truth | Model Output | Semantic Error |
|------|--------------|--------------|----------------|
| 06_tonal_akwa | akwà (cloth) | akwa | Could mean "crying" instead of "cloth" |
| 06_tonal_akwa | àkwà (egg) | akwa | Meaning completely lost |
| 06_tonal_akwa | ákwá (bridge) | akua | Wrong word + wrong tone |
| 07_tonal_oke | òkè (rat) | oke | Could mean "male/big" instead of "rat" |
| 08_tonal_dense | ọ̀jị̀ (kolanut) | ọjị | Partial tone loss, meaning ambiguous |
| 16_context_places | Owerri (city) | weri | Unrecognizable as a place name |

**Impact:** These are not minor transcription errors. A voice assistant that transcribes "I need àkwà" (eggs) as "I need akwa" (crying) has produced semantically nonsensical output.

## Performance Gap: Claimed vs. Measured

According to Meta's omnilingual ASR paper (arXiv:2511.09690):
- omniASR achieves **CER <10%** for 78% of supported languages
- Igbo (ibo_Latn) is listed among the 1,600+ supported languages

**Our findings:**
- **Overall CER: 32.5%** (3.25× the claimed threshold)
- **Tonal category CER: 50.6%** (5× the claimed threshold)
- **Worst sample CER: 74.4%** (7.4× the claimed threshold)

**Interpretation:** Either (a) Igbo is in the bottom 22% of languages by performance, or (b) the published benchmarks use test sets that don't capture tonal accuracy. Our native-speaker evaluation is consistent with the latter possibility, but does not isolate whether the primary driver is benchmark construction, data domain mismatch, or evaluation protocol differences.

## Implications for Low-Resource ASR

This dataset reveals that raw multilingual coverage (1,600+ languages) does not guarantee linguistic accuracy:

1. **Tonal languages require specialized evaluation:** WER/CER metrics miss semantic errors when tones are lost. Recent work on extremely low-resource ASR demonstrates that models systematically fail on tonal distinctions even when the language is nominally "supported" (ACL 2025, "Breaking the Transcription Bottleneck").

2. **Native speaker validation is essential:** Automated metrics cannot catch when "cloth" (akwà) is transcribed as "crying" (akwa). Following methodological frameworks from dialect bias research (EMNLP Findings 2024), we provide single-speaker ground truth to establish baseline performance before scaling to multi-speaker evaluation.

3. **Code-switching is not a solved problem:** Real-world multilingual speech patterns break current ASR systems. Nigerian English-Igbo code-switching represents a common speech pattern that production systems must handle.

4. **"Supported" ≠ "Works well":** As the EMNLP 2024 best paper on low-resource language paradoxes demonstrates, models can list languages in their documentation while providing functionally inadequate service. Our results indicate a substantial gap between nominal language coverage and functional performance on tone-sensitive orthography in Igbo.

## Why This Matters: ASR Fairness as a Philosophical Question

Beyond technical accuracy, ASR errors have **real-world consequences** for marginalized language communities. Drawing on recent philosophical frameworks for ASR fairness (AAAI 2025), we can understand tonal diacritic loss through three lenses:

### 1. Epistemic Harm
When models consistently strip tone marks from Igbo speech, they create a **distorted representation** of the language. This:
- Reinforces the idea that Igbo tones are "optional" or "decorative" rather than phonemically essential
- Marginalizes native speakers whose linguistic knowledge contradicts model outputs
- Creates compounding errors in downstream applications (translation, voice assistants, accessibility tools)

### 2. Representational Harm
A 61% diacritic loss rate sends the message that Igbo linguistic features are **less important** to preserve than features of high-resource languages. This mirrors historical patterns in which:
- Colonial education systems dismissed African languages as "primitive"
- Technology development prioritizes Western linguistic structures
- "Multilingual" models provide drastically unequal service quality across languages

### 3. Allocative Harm
ASR systems are increasingly gatekeepers to services:
- **Voice interfaces:** Siri, Alexa, and Google Assistant rely on accurate transcription
- **Accessibility:** Automated captioning for Igbo-language media
- **Education:** Language learning apps that reinforce incorrect orthography
- **Healthcare:** Voice-based medical intake systems

When these systems fail on Igbo, they create **access barriers** for 45 million speakers.

### The Stakes: Persistent Misrecognition
Persistent ASR failures can plausibly influence linguistic behavior and technology adoption, which motivates downstream user studies to quantify behavioral impact. If every voice interface strips tone marks, speakers may:
- Code-switch to English more often (accelerating language shift)
- Abandon voice interfaces entirely (digital exclusion)
- Internalize that "correct" Igbo doesn't need diacritics (orthographic erosion)

This dataset documents not just technical limitations, but the **mechanisms of linguistic marginalization** in AI systems.

## Comparison to Related Work

| Study | Focus | Key Finding |
|-------|-------|-------------|
| Koenecke et al. (2020) | Racial disparities in commercial ASR | 2× higher WER for Black speakers |
| Ogueji et al. (2024) | African language ASR evaluation | Performance degrades severely on low-resource languages |
| ACL (2025) | Extremely low-resource ASR | Tonal distinctions fail even when language is "supported" |
| **This work** | Tonal distinctions in Igbo ASR | **61% loss of phonemically contrastive tone marks** |

## Use Cases

This dataset is designed for:
- **ASR developers:** Benchmark tonal accuracy for African languages
- **Linguists:** Document systematic biases in multilingual models
- **ML fairness researchers:** Extend demographic fairness analysis to linguistic fairness
- **African NLP community:** Provide native-speaker ground truth for Igbo

## Recording Methodology

- **Speaker:** Native Igbo speaker (Nigerian)
- **Dialect:** Afikpo Igbo (Ebonyi State). The speaker grew up in a multilingual Northern Nigerian environment; both parents are from Afikpo. Recordings reflect a single-speaker variety and are not intended to represent all Igbo dialects.
- **Device:** iPhone SE 2nd Generation, Voice Memos app
- **Format:** M4A (AAC codec) converted to 16 kHz mono WAV (see the conversion sketch below)
- **Duration:** 4-15 seconds per sample
- **Environment:** Quiet indoor setting (File 20 includes controlled background noise)
- **Speech style:** Natural conversational pace unless otherwise noted (File 05 is deliberately slow)

Following methodological frameworks from dialect bias research (EMNLP Findings 2024), single-speaker recordings establish baseline performance before scaling to multi-speaker, multi-dialect evaluation.

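For the conversion step referenced above, a sketch using ffmpeg via subprocess. Both ffmpeg and the `raw_recordings/` input directory are assumptions; this README does not name the tool actually used.

```python
import subprocess
from pathlib import Path

# Convert Voice Memos M4A/AAC files to 16 kHz mono 16-bit PCM WAV.
for m4a in sorted(Path("raw_recordings").glob("*.m4a")):
    wav = Path("audio") / f"{m4a.stem}.wav"
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(m4a),
         "-ar", "16000",        # resample to 16 kHz
         "-ac", "1",            # downmix to mono
         "-c:a", "pcm_s16le",   # 16-bit PCM WAV
         str(wav)],
        check=True,
    )
```
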
## Model Details

- **Model:** facebook/omniASR-CTC-1B
- **Features:** ASR (Automatic Speech Recognition)
- **Parameters:** 975,065,300 (~975M)
- **Download Size:** 3.7 GiB (FP32)
- **Inference VRAM:** ~3 GiB
- **Architecture:** CTC-based ASR (wav2vec2-style encoder with CTC head)
- **Training:** Multilingual (1,600+ languages), clean and spontaneous speech
- **Release:** November 14, 2025
- **License:** Apache 2.0

## Reproducibility

All transcriptions were generated using:
```python
from omnilingual_asr.models.inference.pipeline import ASRInferencePipeline

# Load the 1B CTC model and transcribe with the language
# explicitly set to Igbo (ibo_Latn).
pipeline = ASRInferencePipeline(model_card="omniASR_CTC_1B")
transcription = pipeline.transcribe(inp=[audio_path], lang=["ibo_Latn"])
```

**Environment:**
- Google Colab (NVIDIA Tesla T4, 15GB VRAM)
- omnilingual-asr==0.1.0
- torch==2.1.0
- Python 3.12
- Date: March 1, 2026

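The CER values in `metadata.csv` are consistent with character error rate defined as Levenshtein distance normalized by reference length; here is a self-contained sketch under that assumption (note that Unicode normalization matters, since "à" may be one precomposed character or base + combining mark).

```python
import unicodedata

def levenshtein(a: str, b: str) -> int:
    """Edit distance via the standard two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    # Normalize to NFC so composed/decomposed diacritics compare equal.
    ref = unicodedata.normalize("NFC", reference)
    hyp = unicodedata.normalize("NFC", hypothesis)
    return levenshtein(ref, hyp) / max(len(ref), 1)

print(cer("O na-eri oji n'ututu.", "ọne rị ọjí nụ tútú"))
```
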
## Limitations and Scope

This dataset is a **proof-of-concept** demonstration of native-speaker auditing for low-resource ASR. By design, it prioritizes:

1. **Depth over breadth:** 21 carefully designed samples targeting specific failure modes rather than thousands of random utterances
2. **Native-speaker authority:** A single speaker provides unambiguous ground truth for initial blind spot discovery
3. **Systematic coverage:** Four distinct categories of errors (orthographic, tonal, code-switching, lexical)

**Known limitations:**
- **Generalizability:** A single speaker limits claims about model performance across all Igbo speakers
- **Dialectal coverage:** Does not test all major Igbo dialects (Onitsha, Enugu, Nsukka, Afikpo, etc.)
- **Real-world conditions:** Primarily clean audio; limited noise robustness testing
- **Sample size:** 21 recordings establish that blind spots exist, not their prevalence rates

**Why this scope is appropriate:** Following established ASR fairness methodologies (Koenecke et al., 2020; EMNLP 2024), initial bias discovery uses controlled conditions and expert annotators before scaling to large-scale evaluation. This dataset serves as the **foundation** for future multi-speaker, multi-dialect studies.

## Future Work: Research Agenda

### Phase 1: Scale Current Approach (3-6 months)
- Record 50+ samples per category (total: 200+ recordings)
- Recruit 10 speakers across major dialects (Owerri, Onitsha, Enugu, Nsukka, Afikpo)
- Balance female and male speakers
- Test age-range effects (youth vs. elders)

### Phase 2: Comparative Model Evaluation (6-12 months)
Audit the same test set on:
- OpenAI Whisper (large-v3)
- Meta MMS (1B-all)
- Google USM
- Microsoft Azure Speech

**Research question:** Is 61% tonal loss specific to omniASR, or universal across multilingual ASR?

### Phase 3: Intervention Studies (12-18 months)
Following ACL 2025 recommendations on fine-tuning for low-resource languages:
- Fine-tune omniASR on Igbo data with tonal annotations
- Measure pre/post diacritic accuracy
- Publish an open-source fine-tuning pipeline for other tonal African languages

### Phase 4: Downstream Impact (18-24 months)
- Partner with Nigerian voice assistant developers
- Measure real-world consequences of tonal errors in deployed systems
- User studies: Do Igbo speakers trust ASR that strips tones?

## Data Collection Ethics

- **Informed consent:** Recordings were made by the author with full knowledge of public release
- **Privacy:** Recordings are self-recorded by the author. Ground truth uses a [name] placeholder for dataset generalizability. No third-party identifiable information is included.
- **Cultural sensitivity:** The proverbs and idioms used are common knowledge, not sacred or restricted content
- **Community benefit:** Dataset released open-source to benefit Igbo NLP research
- **No exploitation:** Concerns about uncompensated annotation labor do not apply (self-recorded by a community member)

This dataset follows the ACM Code of Ethics and Professional Conduct guidelines for responsible AI research.

## Citation

If you use this dataset, please cite:
```bibtex
@misc{obasi2026igbo,
  title={Igbo Blind Spot Dataset for omniASR-CTC-1B: Systematic Evaluation of Tonal Diacritic Loss},
  author={Obasi, Chizoba},
  year={2026},
  publisher={HuggingFace},
  howpublished={\url{https://huggingface.co/datasets/chiz/omniASR-igbo-blindspots}},
  note={Model evaluated: facebook/omniASR-CTC-1B (975M parameters)}
}
```

## References

AAAI. (2025). Fairness of automatic speech recognition: Looking through a philosophical lens. *Proceedings of the 39th AAAI Conference on Artificial Intelligence*.

ACL. (2025). Breaking the transcription bottleneck: Fine-tuning ASR models for extremely low-resource languages. *Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages*.

EMNLP. (2024). The Zeno's paradox of 'low-resource' languages. *Best Paper Award, 2024 Conference on Empirical Methods in Natural Language Processing*.

EMNLP. (2024). Modeling gender and dialect bias in automatic speech recognition. *Findings of the Association for Computational Linguistics: EMNLP 2024*.

Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., ... & Goel, S. (2020). Racial disparities in automated speech recognition. *Proceedings of the National Academy of Sciences*, 117(14), 7684-7689.

Meta AI. (2025). Omnilingual ASR: Scaling automatic speech recognition to 1,600+ languages. *arXiv preprint arXiv:2511.09690*.

Ogueji, K., Gwadabe, T. R., & Zhang, Y. (2024). A systematic literature review on bias evaluation in automatic speech recognition for low-resource African languages. *ACM Computing Surveys*.

## License

- **Audio recordings:** CC-BY-4.0 (attribution required)
- **Metadata/annotations:** CC0 (public domain)
- **Code:** MIT License

## Contact

**Author:** Chizoba Obasi
**HuggingFace:** [hf.co/chiz](https://huggingface.co/chiz)
**Purpose:** Fatima Fellowship Technical Challenge (2026)

---

*This dataset was created as part of the Fatima Fellowship application to demonstrate systematic evaluation of ML model blind spots using native speaker expertise.*
audio/01_script_names.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c6f8ae861f390aeb65d2a92dad36743ab19c6377ff62e0c8e6dc17b0c30b5567
size 286764

audio/02_script_formal.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7a5a5fa7656e2c5c9215d1c87b5c74d2fc755ea2ab068cbc0da035dda66aaf58
size 208940

audio/03_script_numbers.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8c90845252f2628278059a474995717b9ccee3eed9c651a127ffe0b45430e53d
size 395308

audio/04_script_proverb.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9c63b2661009b5cd0cedd1746a498dd91ef84d9a41446c81b35e95a5543147de
size 127020

audio/05_script_slow.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9647094ac75f214670591ca16358aa85c84b92b167f1598c897cf426f5290602
size 266284

audio/06_tonal_akwa.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:696c6faf91556ac0bcfc66c41c7b8189c2211a53ee4078b7c04cf1dd0145ae54
size 533890

audio/07_tonal_oke.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:83ca0af749c9dd27f60ae7f5967a68c78c15791a7845544ab5366feb2a76678c
size 406914

audio/08_tonal_dense.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:14ba8d4b843a0285afd3bfd7d200c87593ad59be1fe1d994660e47c1bc6d1e86
size 162520

audio/09_tonal_flat.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e21b7acb8568166b1b8552368cd3ab3342d64547dabf0da1ea954024559d33ec
size 171394

audio/10_tonal_yoruba.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:dd4c99577cf956c77f15a1cd90efe9d0e09d5fa0c1c91192bb6776e084f531ac
size 141356

audio/11_codeswitch_en2ig.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:67203a836d03863c441d38e9f755ee75221bf427c10fdb13a2d3f16ab3dd9ae8
size 165250

audio/12_codeswitch_ig2en.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f1dea1ab7cb1662cd1761cf8d6857d9eb4015cee1d8fe1047f4129c9c6f78a90
size 173442

audio/13_codeswitch_alternate.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8ec2422abd8c863bc05fc2ed5e6cac5c95ce89357c483c69d76143291703c299
size 212354

audio/14_codeswitch_embedded.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c9534b1b4f944136211686e2b0641c6ab181bc9f23fe6326c873eed9b5b169d1
size 200066

audio/15_codeswitch_pidgin.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c8d2d8c34a1254a426fd30d2b3be93fd071bed3a217a730db1f42674495ced09
size 142722

audio/16_context_places.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9e0d03f24b55b116925cb848b2a70ba15c7a15fc8ee56ecde889dec254cd539e
size 200748

audio/17_context_food.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:266033748e28c3852ba40b70d82dbc41db297b052b3bdea01d1f7ec284c1a17e
size 167298

audio/18_context_proverb.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:80731b4756502ed0982882ba84dc40d51ab96420753a165d80426a69f244fecf
size 235564

audio/19_context_french.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:3e63622f78f25cfbea421ebd7302f7594ff08e18645024dbfe741e7ee13e4f4b
size 214402

audio/20_context_noise.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8e594057b53355ce83cbd2dd95f81d16c9928e62e6797bb199be4441770d6b02
size 183000

audio/21_tonal_yoruba_formal.wav ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d69663fe13e7e36b9d24c5ad617db94668d740e69dcb0729439fb52c04945a31
size 124972
metadata.csv ADDED
file_name,ground_truth,model_output,category,subcategory,language,character_error_rate,diacritics_expected,diacritics_produced,diacritic_loss
audio/01_script_names.wav,Aha m bụ Chukwuemeka. Nna m bụ Obiora. Nne m bụ Ngozi.,ahambụ cheku emeka nnam bụ ọbiọra nnem bụ ngọzi,script_hallucination,personal_names,ibo_Latn,0.485,3,6,-3
audio/02_script_formal.wav,Nnọọ. Kedu ka ị mere? Ọ dị mma. Daalụ.,nọ kedo ke imere ọ dị mma daalụ,script_hallucination,greetings,ibo_Latn,0.188,6,4,2
audio/03_script_numbers.wav,"Otu, abụọ, atọ, anọ, ise, isii, asaa, asatọ, itoolu, iri.",ọtu abuọ atọ anọ ise isi asa asatọ ìtọlu iri,script_hallucination,numeric,ibo_Latn,0.208,5,7,-2
audio/04_script_proverb.wav,Onye aghala nwanne ya.,onje agalá wánẹ yá,script_hallucination,idiomatic,ibo_Latn,0.4,0,4,-4
audio/05_script_slow.wav,Aha m bụ Chizoba. Kedu ka ị mere taa? Ọ dị mma.,aham bụ chizọba kedu ke imereta ọ dị mmaa,script_hallucination,prosody,ibo_Latn,0.159,4,4,0
audio/06_tonal_akwa.wav,"Akwa, akwa, akwa. Akwà, akwà, akwà. Àkwà, àkwà, àkwà. Ákwá, ákwá, ákwá.",akua akua akua akua akwa akwa akwa akua akwa ọkua ọkua ọkua,tonal_diacritics,minimal_pair,ibo_Latn,0.6,15,3,12
audio/07_tonal_oke.wav,"Oke, oke, oke. Òkè, òkè, òkè. Ọkè, ọkè, ọkè.",oke oke oke oke oke oke oke oke oki,tonal_diacritics,minimal_pair,ibo_Latn,0.418,12,0,12
audio/08_tonal_dense.wav,Ọ nà-èrì ọ̀jị̀ n'ụ̀tụ̀tụ̀.,ọ na eri ọjị n'ututu,tonal_diacritics,high_density,ibo_Latn,0.435,9,3,6
audio/09_tonal_flat.wav,O na-eri oji n'ututu.,ọne rị ọjí nụ tútú,tonal_diacritics,monotone,ibo_Latn,0.744,0,7,-7
audio/10_tonal_yoruba.wav,"Kí ló dé, kí ló ṣe lẹ́?",kílode kílo ṣele,tonal_diacritics,control_language,yor_Latn,0.385,7,3,4
audio/11_codeswitch_en2ig.wav,I'm going to the ọgbọ today with my ụmụnne.,iam going today ogbo today with my umun ne,code_switching,en_to_ig,mixed,0.224,4,0,4
audio/12_codeswitch_ig2en.wav,M ga-eje shopping taa. M chọrọ credit card m.,nga eje shọpịnta nchọrọ kridịkad m,code_switching,ig_to_en,mixed,0.367,2,5,-3
audio/13_codeswitch_alternate.wav,I need rice. M chọrọ ji. I want yam. M chọrọ akpu.,i nied ric nchọrọ ji i wọnt yam nchọrọ akpụ,code_switching,sentence_level,mixed,0.183,4,6,-2
audio/14_codeswitch_embedded.wav,The ụlọ is beautiful. My ụmụaka are playing.,te ulọ is beautiful my umuaka a playing,code_switching,with_diacritics,mixed,0.133,4,1,3
audio/15_codeswitch_pidgin.wav,"I wan chop rice. Abeg, give me water.",i want chop rice i beg give me water,code_switching,control_pidgin,pcm_Latn,0.096,0,0,0
audio/16_context_places.wav,M bi na Enugu. Nnà m sị Owerri. M ga Onitsha.,mbị na ịnugu nnamsigo weri nga ọnịcha,cultural_context,geographic,ibo_Latn,0.39,2,4,-2
audio/17_context_food.wav,M na-eri jollof rice na egusi soup. Ọ tọrọ ụtọ.,emna iri jelọfres na egusi suup ọtọrụ ụtọ,cultural_context,culinary,mixed,0.25,5,6,-1
audio/18_context_proverb.wav,"Ọnwa na-agbanwe, anyanwụ na-agbanwe, ma ala adịghị agbanwe.",owuana agbawe ayawu na agbawe mala adege agbawe,cultural_context,idiomatic,ibo_Latn,0.264,4,0,4
audio/19_context_french.wav,"Bonjour, comment ça va? Je m'appelle Chizoba. J'habite à Paris.",bondu komosova že ma pelčizoba jea bit apari,cultural_context,control_language,fra_Latn,0.402,1,0,1
audio/20_context_noise.wav,Aha m bụ Chizoba. Kedu ka ị mere taa? Ọ dị mma.,aha mbụchizọba kidụka imereta ọ dị mma,cultural_context,noise_robustness,ibo_Latn,0.2,4,5,-1
audio/21_tonal_yoruba_formal.wav,Ẹ káàárọ̀. Báwo ni?,ekaọrọ bawọ ni,tonal_diacritics,control_formal,yor_Latn,0.455,6,3,3