prajjwal024 committed · Commit 2196c6c · verified · 1 Parent(s): 937e193

Update README.md

Files changed (1): README.md (+141 −78)

README.md CHANGED
@@ -903,115 +903,178 @@ configs:
  path: audio/mucs/telugu/train-*
  ---

- ## Detailed Dataset Descriptions

- ### Common Voice 12.0 (Mozilla Foundation)
- **Repository**: [mozilla-foundation/common_voice_12_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0)

- The Common Voice dataset consists of unique MP3 files, each with a corresponding text file. Many of the 26,119 recorded hours in the dataset also include demographic metadata such as age, sex, and accent that can help improve the accuracy of speech recognition engines.

- The dataset currently consists of 17,127 validated hours in 104 languages, and more voices and languages are continually being added.

- **Languages**: Abkhaz, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hill Mari, Hindi, Hungarian, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Latvian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Upper Sorbian, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamil, Tatar, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Yoruba

- ### IndicVoices (AI4Bharat)
- **Repository**: [AI4Bharat IndicVoices](https://ai4bharat.iitm.ac.in/datasets/indicvoices)

- We present INDICVOICES, a dataset of natural and spontaneous speech containing a total of 12,000 hours of read (8%), extempore (76%) and conversational (15%) audio from 22,563 speakers covering 208 Indian districts and 22 languages. Of these 12,000 hours, 3,200 hours have already been transcribed, with a median of 122 hours per language. Through this paper, we share our journey of capturing the cultural, linguistic and demographic diversity of India to create a one-of-its-kind inclusive and representative dataset. More specifically, we share an open-source blueprint for data collection at scale, comprising standardised protocols, centralised tools, a repository of engaging questions, prompts and conversation scenarios spanning multiple domains and topics of interest, quality control mechanisms, comprehensive transcription guidelines and transcription tools. We hope that this open-source blueprint will serve as a comprehensive starter kit for data collection efforts in other multilingual regions of the world. Using INDICVOICES, we build IndicASR, the first ASR model to support all 22 languages listed in the Eighth Schedule of the Constitution of India.

- **Languages**: Assamese, Bengali, Bodo, Dogri, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Maithili, Malayalam, Manipuri, Marathi, Nepali, Oriya, Punjabi, Sanskrit, Santhali, Sindhi, Tamil, Telugu, and Urdu

- ### FLEURS (Google)
- **Repository**: [google/fleurs](https://huggingface.co/datasets/google/fleurs)

- FLEURS is the speech version of the FLoRes machine translation benchmark. We use 2,009 n-way parallel sentences from the publicly available FLoRes dev and devtest sets, in 102 languages.

- Training sets have around 10 hours of supervision. Speakers of the train sets are different from speakers of the dev/test sets. Multilingual fine-tuning is used and the "unit error rate" (characters, signs) of all languages is averaged. Languages and results are also grouped into seven geographical areas:

- **South-Asia**: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Oriya, Punjabi, Sindhi, Tamil, Telugu, Urdu
- **South-East Asia**: Burmese, Cebuano, Filipino, Indonesian, Javanese, Khmer, Lao, Malay, Maori, Thai, Vietnamese

- ### Gramvaani Hindi ASR Benchmark
- **Repository**: [Gramvaani Hindi Dataset](https://aikosh.indiaai.gov.in/home/datasets/details/hindi_asr_benchmark_dataset_for_speech_recognition_gramvaani_hindi.html)

- **Gramvaani Hindi ASR Benchmark Dataset for Speech Recognition**
- A Hindi ASR (Automatic Speech Recognition) benchmark dataset from Bhashini supporting the development of robust regional speech recognition systems.

- **About Dataset**
- This is a Hindi ASR benchmark dataset developed to evaluate and improve Automatic Speech Recognition (ASR) systems for the Hindi language. The dataset includes diverse, high-quality audio samples focusing on topics such as agriculture, healthcare, and general knowledge. It serves as a critical resource for researchers and developers building robust ASR models. Submitted by AI4Bharat, this dataset supports advancements in speech recognition technologies for regional languages.

- ### MUCS 2021 Challenge Dataset
- **Repository**: [MUCS 2021](https://navana-tech.github.io/MUCS2021/data.html)

- Recently, there has been increasing interest in multilingual automatic speech recognition (ASR), where a single speech recognition system caters to multiple low-resource languages by taking advantage of small amounts of labelled corpora in multiple languages. With multilingualism becoming common in today's world, there has been increasing interest in code-switching ASR as well. In code-switching, multiple languages are freely interchanged within a single sentence or between sentences. The success of low-resource multilingual and code-switching (MUCS) ASR often depends on the variety of languages in terms of their acoustics and linguistic characteristics, as well as the amount of data available and how carefully these are considered in building the ASR system. The MUCS 2021 challenge focuses on building MUCS ASR systems through two subtasks covering a total of seven Indian languages, namely Hindi, Marathi, Odia, Tamil, Telugu, Gujarati and Bengali. For this purpose, we provide a total of ∼600 hours of transcribed speech data, comprising train and test sets, in these languages, including two code-switched language pairs, Hindi-English and Bengali-English. We also provide baseline recipes for both subtasks, with word error rates of 30.73% and 32.45% on the respective MUCS test sets.

- **Index Terms**: multilingual, code-switching, low-resource

- ### IndicTTS Database
- **Repository**: [IndicTTS Database](https://www.iitm.ac.in/donlab/indictts/database)

- A corpus of Indian languages covering 22 major languages of India. It comprises 10,000+ spoken sentences/utterances each in the native language and in English, recorded by both male and female native speakers. Speech waveform files are available in .wav format along with the corresponding text. We hope that these recordings will be useful for researchers and speech technologists working on synthesis and recognition. The statistics include multiple speakers and genders for each language; detailed statistics are available.

- **Languages**: Assamese, Bengali, Bodo, Gujarati, Hindi, Kannada, Malayalam, Manipuri, Marathi, Odia, Punjabi, Rajasthani, Tamil, Telugu

- ### Kathbath (IndicSUPERB)
- **Repository**: [IndicSUPERB](https://github.com/AI4Bharat/IndicSUPERB)

- IndicSUPERB is a robust benchmark consisting of six speech language understanding (SLU) tasks across 12 Indian languages. The tasks include automatic speech recognition, automatic speaker verification, language identification, query by example and keyword spotting. IndicSUPERB also encompasses the Kathbath dataset, which has 1,684 hours of labelled speech data across 12 Indian languages.

- **Languages**: Kannada, Malayalam, Tamil, Telugu, Gujarati, Marathi, Bengali, Odia, Hindi, Punjabi, Sanskrit, Urdu

- ---

- ## Citation

- If you use this dataset collection in your research, please cite the original datasets:

- ```bibtex
- @misc{commonvoice:2022,
-   title={Common Voice Corpus 12.0},
-   author={Mozilla},
-   year={2022},
-   url={https://commonvoice.mozilla.org/}
- }
-
- @article{kaushal2023indicvoices,
-   title={IndicVoices: A Dataset of Natural and Spontaneous Speech in 22 Indian Languages},
-   author={Kaushal, Ashish and others},
-   journal={AI4Bharat},
-   year={2023}
- }
-
- @article{conneau2022fleurs,
-   title={FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
-   author={Conneau, Alexis and others},
-   journal={arXiv preprint},
-   year={2022}
- }
-
- @misc{gramvaani2023,
-   title={Gramvaani Hindi ASR Benchmark Dataset},
-   author={Bhashini},
-   year={2023},
-   url={https://aikosh.indiaai.gov.in/}
- }
-
- @inproceedings{mucs2021,
-   title={MUCS 2021: Multilingual and Code-switching ASR Challenges for Low Resource Indian Languages},
-   author={MUCS Challenge Organizers},
-   year={2021}
- }
-
- @misc{indictts,
-   title={IndicTTS: Indian Language Text-to-Speech Database},
-   author={IIT Madras},
-   url={https://www.iitm.ac.in/donlab/indictts/database}
- }
-
- @article{javed2022indicsuperb,
-   title={IndicSUPERB: A Benchmark for Indian Language Speech Understanding},
-   author={Javed, Tahir and others},
-   journal={AI4Bharat},
-   year={2022}
- }
- ```

+ # Vaani ASR Benchmark: Comprehensive Evaluation of Indian Language Speech Recognition
+
+ ## About the Vaani ASR Benchmark
+
+ The **Vaani ASR Benchmark** is a comprehensive evaluation framework designed to assess the performance of Automatic Speech Recognition (ASR) models across multiple Indian languages. This benchmark addresses the critical need for standardized evaluation of ASR systems in the linguistically diverse Indian subcontinent, where over 700 languages are spoken and 22 languages are recognized in the Eighth Schedule of the Constitution.
+
+ ### Why This Benchmark Matters
+
+ **Addressing the Indian Language Gap**: While significant progress has been made in ASR for high-resource languages like English and Mandarin, Indian languages have remained underrepresented in speech recognition research. The Vaani benchmark fills this critical gap by providing:
+
+ - **Standardized Evaluation**: Consistent metrics and methodology across different models and languages
+ - **Diverse Linguistic Coverage**: Support for major Indian languages including Hindi, Tamil, Telugu, Kannada, Bengali, and more
+ - **Real-world Applicability**: Evaluation datasets that reflect actual usage scenarios across India
+ - **Research Acceleration**: A common platform for researchers to compare and improve their ASR models
+
+ ### What We Evaluate
+
+ The benchmark evaluates ASR models across multiple dimensions:
+
+ **🎯 Primary Metrics**
+ - **Word Error Rate (WER)**: Percentage of words incorrectly recognized (lower is better)
+ - **Character Error Rate (CER)**: Percentage of characters incorrectly recognized (lower is better)
+
+ **📊 Multiple Test Sets**
+ Our evaluation incorporates diverse, high-quality datasets:
+
+ 1. **FLEURS (Google)**: Multilingual speech corpus with 102 languages, providing ~10 hours per language with parallel sentences for robust cross-linguistic evaluation
+
+ 2. **Common Voice 12.0 (Mozilla)**: Community-contributed dataset with 26,119+ recorded hours across 104 languages, including rich demographic metadata (age, gender, accent)
+
+ 3. **IndicVoices (AI4Bharat)**: 12,000 hours of natural Indian speech covering 22 languages with diverse content:
+    - Read speech (8%)
+    - Extempore speech (76%)
+    - Conversational speech (15%)
+    - 22,563 speakers across 208 Indian districts
+
+ 4. **Gramvaani Hindi Dataset**: Specialized Hindi ASR benchmark focusing on agriculture, healthcare, and general knowledge domains
+
+ 5. **MUCS 2021**: Multilingual and code-switching dataset with ~600 hours across 7 Indian languages, including Hindi-English and Bengali-English code-switching
+
+ 6. **IndicTTS Database**: 10,000+ utterances per language across 22 Indian languages, with both native-language and English content
+
+ 7. **Kathbath (IndicSUPERB)**: 1,684 hours of labeled speech data across 12 Indian languages for comprehensive speech understanding evaluation
+
+ ### How We Evaluate
+
+ **🔬 Rigorous Methodology**
+ Our evaluation follows a standardized protocol ensuring fair and accurate assessment:
+
+ **Text Preprocessing Pipeline:**
+ ```python
+ import re
+
+ def clean(text):
+     # Remove annotations and markup
+     text = re.sub(r'{[^}]*}', '', text)           # remove {annotations}
+     text = re.sub(r'[\(\[].*?[\)\]]', '', text)   # remove (parentheses) and [brackets]
+     text = re.sub(r'<[^>]+>', '', text)           # remove HTML/XML tags
+
+     # Normalize punctuation to spaces (note: the bare "I" replacement is kept
+     # from the original pipeline and also strips the Latin capital letter I)
+     text = text.replace("।", " ").replace("|", " ").replace("-", " ")\
+                .replace(".", " ").replace(",", " ").replace("I", " ")\
+                .replace('\n', ' ')
+
+     # Collapse repeated spaces
+     text = re.sub(' +', ' ', text)
+     return text.strip()
+ ```
+
+ **Error Rate Calculation:**
+ - Uses the industry-standard `jiwer` library for accurate WER/CER computation
+ - Identical preprocessing applied to both reference and hypothesis texts
+ - Results scaled to percentage (0-100) with 2-decimal precision
+ - Handles edge cases and missing data appropriately
+
+ ### Language Coverage
+
+ **🗣️ Multilingual Support**
+ The benchmark currently supports major Indian languages, with plans for expansion:
+
+ **Currently Supported:**
+ - **Indo-Aryan**: Hindi, Bengali, Marathi, Gujarati, Punjabi, Urdu, Assamese, Odia, Nepali, Sanskrit
+ - **Dravidian**: Tamil, Telugu, Kannada, Malayalam
+ - **Tibeto-Burman**: Manipuri, Bodo
+ - **Austroasiatic**: Santhali
+
+ **Planned Expansion:**
+ - Additional regional languages and dialects
+ - Tribal and minority languages
+ - Code-switching scenarios (Hindi-English, Tamil-English, etc.)
+
+ ### Dataset Characteristics
+
+ **📈 Comprehensive Coverage**
+ Our test datasets provide diverse evaluation scenarios:
+
+ **Audio Quality Spectrum:**
+ - Studio-quality recordings for controlled evaluation
+ - Real-world recordings capturing natural speech variations
+ - Telephonic and mobile recordings for practical applications
+
+ **Speaker Diversity:**
+ - **Demographics**: Balanced age, gender, and regional representation
+ - **Accents**: Multiple dialectal variations within languages
+ - **Speaking Styles**: Read speech, spontaneous speech, conversational audio
+
+ **Content Variety:**
+ - **Domains**: News, agriculture, healthcare, education, general knowledge
+ - **Speech Types**: Formal presentations, casual conversations, prompted responses
+ - **Acoustic Conditions**: Clean studio, noisy environments, multiple speakers
+
+ ### Performance Analysis
+
+ **📊 Detailed Metrics**
+ - **AVG WER/CER**: Simple average across all test datasets
+ - **Language-specific Performance**: Individual language breakdowns
+ - **Dataset-specific Analysis**: Performance variations across different test sets
+ - **Statistical Significance**: Confidence intervals and significance testing
+
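The "AVG WER/CER" above is a simple unweighted mean over the per-dataset scores. A minimal sketch of that aggregation (the dataset keys and the numbers are purely illustrative, not real benchmark results):

```python
def average_score(per_dataset):
    """Unweighted mean over the available per-dataset scores, to 2 decimals."""
    values = [v for v in per_dataset.values() if v is not None]  # skip missing runs
    return round(sum(values) / len(values), 2)

# Hypothetical per-dataset WERs (%) for one model/language pair;
# a missing result (None) is simply excluded from the average.
wer_by_dataset = {"fleurs": 18.4, "common_voice": 22.1, "kathbath": 15.0, "mucs": None}
avg_wer = average_score(wer_by_dataset)  # 18.5
```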
+ **🔍 Interactive Exploration**
+ - **Metric Selector**: Switch between WER and CER views
+ - **Language Filtering**: Focus on specific languages or language families
+ - **Dataset Comparison**: Compare model performance across different test sets
+ - **Trend Analysis**: Track model improvements over time
+
+ ### Research Impact
+
+ **🎯 Advancing Indian Language ASR**
+ The Vaani benchmark serves multiple stakeholders:
+
+ **For Researchers:**
+ - Standardized evaluation platform for model comparison
+ - Comprehensive datasets for training and testing
+ - Open-source evaluation code for reproducibility
+
+ **For Industry:**
+ - Performance benchmarks for commercial ASR systems
+ - Quality assurance metrics for product development
+ - Market-readiness assessment for Indian language applications
+
+ **For Society:**
+ - Enabling voice interfaces in local languages
+ - Supporting digital inclusion across linguistic communities
+ - Preserving and promoting linguistic diversity through technology
+
+ ### Technical Implementation
+
+ **🛠️ Robust Infrastructure**
+ - **Scalable Evaluation**: Automated pipeline handling large-scale model evaluation
+ - **Reproducible Results**: Version-controlled datasets and evaluation scripts
+ - **Quality Assurance**: Multiple validation checkpoints and error detection
+ - **Open Source**: Full transparency in methodology and implementation
+
+ ### Future Roadmap
+
+ **🚀 Continuous Enhancement**
+ - **Dataset Expansion**: Adding more languages and domains
+ - **Metric Refinement**: Incorporating semantic and contextual evaluation measures
+ - **Real-time Evaluation**: Support for streaming ASR model assessment
+ - **Community Integration**: Enabling community contributions and model submissions
+
+ ---
+
+ ## Citation
+
+ If you use this benchmark in your research, please cite:
+
+ ```bibtex
+ @misc{vaani_asr_benchmark_2024,
+   title={Vaani ASR Benchmark: Comprehensive Evaluation Framework for Indian Language Speech Recognition},
+   author={Vaani Team},
+   year={2024},
+   url={https://vaani.iisc.ac.in}
+ }
+ ```
+
+ For individual datasets used in the benchmark, please also cite the original sources as provided in our dataset documentation.