Commit 5fb642f (verified) by ksingla025, parent f2e2ff9: Upload README.md with huggingface_hub

Files changed (1): README.md (+224 −32)
---
license: cc-by-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: audio_filepath
    dtype: audio
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 433693291547.502
    num_examples: 759309
  - name: valid
    num_bytes: 11490209888.704
    num_examples: 37464
  - name: test
    num_bytes: 23113506775.214
    num_examples: 43406
  download_size: 155500308826
  dataset_size: 468297008211.42
---
# Meta Speech Recognition Hindi Dataset (Set 1)

This dataset contains both metadata and audio files for Hindi speech recognition samples, curated from multiple sources.

## Dataset Sources and Credits

This dataset combines samples from the following sources:

1. **AI4Bharat Indic Speech Dataset**
   - Source: https://ai4bharat.org/indic-speech-dataset
   - License: CC-BY 4.0
   - Citation: Please cite the original paper if you use this data

2. **Common Voice Hindi**
   - Source: https://commonvoice.mozilla.org/en/datasets
   - License: CC0-1.0
   - Citation: Please acknowledge Mozilla Common Voice if you use this data

3. **Shrutilipi**
   - Source: https://ai4bharat.org/shrutilipi
   - License: CC-BY 4.0
   - Citation: Please cite the original paper if you use this data

## Dataset Statistics

### Splits and Sample Counts
- **train**: 759,309 samples
- **valid**: 37,464 samples
- **test**: 43,406 samples

## Example Samples

Transcripts are annotated inline: `ENTITY_<TYPE> ... END` spans mark named entities, and trailing `INTENT_*`, `AGE_*`, `GENDER_*`, and `DIALECT_*` tags carry utterance-level metadata.

### train
```json
{
  "audio_filepath": "/external4/datasets/AI4Bharat/hindi/wavs_train/5348024557584410_0.wav",
  "text": "पालन INTENT_KEYWORDS_SPOTTING AGE_60+ GENDER_MALE DIALECT_RAJASTHAN",
  "duration": 1.32
}
```
```json
{
  "audio_filepath": "/external4/datasets/AI4Bharat/hindi/wavs_train/5348024557795852_0.wav",
  "text": "हमारी मातृभाषा ENTITY_LANGUAGE हिंदी END यह इतनी दिलचस्प है कि किसी को ENTITY_TIME दिन भर END में बोलने के लिए हमें कई प्रकार के मुहावरों का इस्तेमाल किया जाता है INTENT_LANGUAGE_SPECIFIC AGE_30_45 GENDER_MALE DIALECT_MADHYA_PRADESH",
  "duration": 10.816
}
```

### valid
```json
{
  "audio_filepath": "/external4/datasets/AI4Bharat/hindi/wavs/8012ebba-be5d-4cf1-953d-1282fe7d75c4_0_1.wav",
  "text": "ENTITY_ORGANIZATION जी ट्रैवलिंग ट्रैवल एजेंसी END से बोल रए हैं सर INTENT_CONVERSATION AGE_30_45 GENDER_MALE DIALECT_MADHYA_PRADESH",
  "duration": 2.944
}
```
```json
{
  "audio_filepath": "/external4/datasets/AI4Bharat/hindi/wavs/8012ebba-be5d-4cf1-953d-1282fe7d75c4_0_2.wav",
  "text": "मुझे ENTITY_PERSON_NAME सर ओमकारेश्‍वर END के लिए ENTITY_VEHICLE_TYPE सेवेन सीटर गाड़ी END चहिए थी INTENT_CONVERSATION AGE_30_45 GENDER_MALE DIALECT_MADHYA_PRADESH",
  "duration": 3.808
}
```

### test
```json
{
  "audio_filepath": "/external4/datasets/AI4Bharat/hindi/wavs_train/82ff7f7c-8c24-4ff2-8c99-74149ab6016b_1_25.wav",
  "text": "त दु सौ के लीजिए देखिए पहिला बोहनी है आपका करिए INTENT_CONVERSATION AGE_45_60 GENDER_FEMALE DIALECT_BIHAR",
  "duration": 4.29
}
```
```json
{
  "audio_filepath": "/external4/datasets/AI4Bharat/hindi/wavs_train/82ff7f7c-8c24-4ff2-8c99-74149ab6016b_1_26.wav",
  "text": "हां त पहला बोहनी है आप खरीद लीजिए हमको ओ टोक दिए हैं और जाइए नहीं बिकेगा तो उ भी हमको बहुत बती होता है जाना आप ले लीजिए न INTENT_CONVERSATION AGE_45_60 GENDER_FEMALE DIALECT_BIHAR",
  "duration": 11.733
}
```

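The inline tagging scheme shown above can be parsed with a few regular expressions. A minimal sketch, based only on the patterns visible in the samples (`parse_annotated_text` is an illustrative helper, not part of any dataset tooling):

```python
import re

def parse_annotated_text(text):
    """Split a tagged transcript into plain words, entities, and metadata.

    Assumes the scheme seen in the samples: ENTITY_<TYPE> ... END spans
    inline, plus trailing INTENT_*, AGE_*, GENDER_* and DIALECT_* tags.
    """
    meta = {}
    for key in ("INTENT", "AGE", "GENDER", "DIALECT"):
        # Require start-of-string or whitespace so e.g. AGE does not
        # match inside INTENT_LANGUAGE_SPECIFIC
        m = re.search(rf"(?:^|\s){key}_(\S+)", text)
        if m:
            meta[key.lower()] = m.group(1)
    entities = re.findall(r"ENTITY_(\S+) (.*?) END", text)
    # Remove all tags to recover the plain transcript
    plain = re.sub(r"ENTITY_\S+ | END|(?:^|\s)(?:INTENT|AGE|GENDER|DIALECT)_\S+", " ", text)
    return " ".join(plain.split()), entities, meta

sample = ("मुझे ENTITY_PERSON_NAME सर ओमकारेश्‍वर END के लिए "
          "ENTITY_VEHICLE_TYPE सेवेन सीटर गाड़ी END चहिए थी "
          "INTENT_CONVERSATION AGE_30_45 GENDER_MALE DIALECT_MADHYA_PRADESH")
plain, entities, meta = parse_annotated_text(sample)
print(plain)
print(entities)
print(meta)
```

This separates the spoken words from the entity spans and the intent/age/gender/dialect metadata, which is useful if you want a plain-ASR variant of the transcripts.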
## Training NeMo Conformer ASR for Hindi

### 1. Pull and Run NeMo Docker
```bash
# Pull the NeMo Docker image
docker pull nvcr.io/nvidia/nemo:24.05

# Run the container with GPU support; mount the drives that hold your
# data (the example manifests above reference /external4)
docker run --gpus all -it --rm \
  -v /external1:/external1 \
  -v /external2:/external2 \
  -v /external3:/external3 \
  -v /external4:/external4 \
  --shm-size=8g \
  -p 8888:8888 -p 6006:6006 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  nvcr.io/nvidia/nemo:24.05
```

### 2. Create Training Script
Create a script `train_nemo_asr_hindi.py`:
```python
import pytorch_lightning as pl
from omegaconf import OmegaConf
from nemo.collections.asr.models import EncDecCTCModel

# The dataset can be loaded from Hugging Face for inspection; NeMo
# itself trains from the JSON-lines manifests referenced in the config
from datasets import load_dataset

dataset = load_dataset("WhissleAI/Meta_STT_HI_Set1")

# Load the training configuration (see config_hindi.yaml below)
config = OmegaConf.load("config_hindi.yaml")

# Create the trainer first and pass it to the model so NeMo can wire up
# data loaders and optimization
trainer = pl.Trainer(**config.model.trainer)
model = EncDecCTCModel(cfg=config.model, trainer=trainer)

# Train
trainer.fit(model)
```

Note: a complete `EncDecCTCModel` config also needs architecture sections (preprocessor, encoder, decoder) and a vocabulary. A common shortcut is to start from a pretrained checkpoint with `EncDecCTCModel.from_pretrained(...)` and override its data settings via `model.setup_training_data(...)` and `model.setup_validation_data(...)`.
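NeMo reads training data from JSON-lines manifest files rather than directly from a Hugging Face dataset object, so the splits need to be exported first. A minimal sketch (`write_nemo_manifest` is an illustrative helper, not a NeMo API):

```python
import json

def write_nemo_manifest(records, path):
    """Write records to a NeMo-style manifest: one JSON object per line
    with audio_filepath, text and duration keys."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            entry = {
                "audio_filepath": rec["audio_filepath"],
                "text": rec["text"],
                "duration": rec["duration"],
            }
            f.write(json.dumps(entry, ensure_ascii=False) + "\n")

# Toy record for illustration; in practice iterate over the Hugging Face
# splits instead, e.g. write_nemo_manifest(dataset["train"], "train.json")
write_nemo_manifest(
    [{"audio_filepath": "sample.wav", "text": "पालन", "duration": 1.32}],
    "train.json",
)
```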

### 3. Create Config File
Create a config file `config_hindi.yaml`:
```yaml
model:
  name: "EncDecCTCModel"
  train_ds:
    manifest_filepath: "train.json"
    batch_size: 32
    shuffle: true
    num_workers: 4
    pin_memory: true
    use_start_end_token: false

  validation_ds:
    manifest_filepath: "valid.json"
    batch_size: 32
    shuffle: false
    num_workers: 4
    pin_memory: true
    use_start_end_token: false

  optim:
    name: adamw
    lr: 0.001
    weight_decay: 0.01

  trainer:
    devices: 1
    accelerator: "gpu"
    max_epochs: 100
    precision: 16
```

### 4. Start Training
```bash
# Inside the NeMo container
python train_nemo_asr_hindi.py

# For multi-GPU training, raise trainer.devices in the config and
# launch with torchrun, e.g.:
# torchrun --nproc_per_node=2 train_nemo_asr_hindi.py
```

## Usage Notes

1. The dataset includes both metadata and audio files.
2. Audio files are stored in the dataset repository.
3. For optimal performance:
   - Use a GPU with at least 16 GB VRAM
   - Adjust the batch size to fit your GPU memory
   - Consider gradient accumulation for larger effective batch sizes
   - Monitor training with TensorBoard (accessible via port 6006)

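The gradient-accumulation note above is simple arithmetic: each optimizer step sees per-GPU batch × number of GPUs × accumulation steps. In PyTorch Lightning the accumulation count is set with the `accumulate_grad_batches` trainer argument; a quick sketch:

```python
def effective_batch_size(per_gpu_batch: int, num_gpus: int, accum_steps: int) -> int:
    """Batch size seen by each optimizer step when accumulating gradients."""
    return per_gpu_batch * num_gpus * accum_steps

# batch_size 32 on 1 GPU with 4 accumulation steps acts like a batch of 128
print(effective_batch_size(32, 1, 4))  # 128
```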
## Common Issues and Solutions

1. **Memory Issues**:
   - Reduce the batch size if you encounter OOM errors
   - Use gradient accumulation for larger effective batch sizes
   - Enable mixed precision training (fp16)

2. **Training Speed**:
   - Increase `num_workers` based on your CPU core count
   - Use `pin_memory: true` for faster host-to-GPU transfer
   - Consider using tarred datasets for faster I/O

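For the `num_workers` suggestion, a common starting point is to derive it from the visible CPU count and cap it to avoid oversubscription. A sketch (the cap of 8 is an arbitrary illustrative choice, not a NeMo recommendation):

```python
import os

# One data-loading worker per CPU core, capped at 8
num_workers = min(8, os.cpu_count() or 1)
print(num_workers)
```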
3. **Model Performance**:
   - Adjust the learning rate when you change the batch size
   - Use learning-rate warmup for better convergence
   - Consider initializing from a pretrained model
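The learning-rate advice above can be made concrete with two common heuristics: scaling the LR linearly with batch size, and warming it up linearly over the first steps. Both are general heuristics, not values tuned for this dataset:

```python
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear-scaling heuristic: grow the LR proportionally to batch size."""
    return base_lr * new_batch / base_batch

def warmup_lr(step: int, target_lr: float, warmup_steps: int) -> float:
    """Linear warmup from 0 to target_lr over warmup_steps, then constant."""
    if step < warmup_steps:
        return target_lr * step / warmup_steps
    return target_lr

# Doubling the batch from 32 to 64 doubles the base LR of 0.001
print(scaled_lr(0.001, 32, 64))
# Halfway through a 1000-step warmup the LR is half its target
print(warmup_lr(500, 0.002, 1000))
```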