ksingla025 committed
Commit 669a2e0 · verified · 1 Parent(s): 5f107ee

Update README.md

Files changed (1)
  1. README.md +0 -146
README.md CHANGED
@@ -90,149 +90,3 @@ This dataset combines samples from the following sources:
  "duration": 11.733000000000004
  }
  ```
-
- ## Training NeMo Conformer ASR for Hindi
-
- ### 1. Pull and Run NeMo Docker
- ```bash
- # Pull the NeMo Docker image
- docker pull nvcr.io/nvidia/nemo:24.05
-
- # Run the container with GPU support
- docker run --gpus all -it --rm \
-     -v /external1:/external1 \
-     -v /external2:/external2 \
-     -v /external3:/external3 \
-     --shm-size=8g \
-     -p 8888:8888 -p 6006:6006 \
-     --ulimit memlock=-1 \
-     --ulimit stack=67108864 \
-     nvcr.io/nvidia/nemo:24.05
- ```
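-
- Before training, it is worth confirming that the GPUs are visible inside the container; a quick check (assuming the CUDA and PyTorch tooling that ships in the image):
- ```bash
- # Both commands should report the GPUs passed through by --gpus all
- nvidia-smi
- python -c "import torch; print(torch.cuda.is_available())"
- ```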
-
- ### 2. Create Training Script
- Create a script `train_nemo_asr_hindi.py`. It reads its configuration from `config_hindi.yaml` (step 3) through NeMo's Hydra wrapper and expects the `train.json`/`valid.json` manifests built from the dataset (see the sketch after the script):
- ```python
- import pytorch_lightning as pl
-
- from nemo.collections.asr.models import EncDecCTCModel
- from nemo.core.config import hydra_runner
-
-
- # Configuration comes from config_hindi.yaml (step 3). Training from scratch
- # additionally requires preprocessor/encoder/decoder and vocabulary sections
- # in the config (see the NeMo Conformer CTC example configs). Prefer absolute
- # manifest paths in case Hydra changes the working directory on your setup.
- @hydra_runner(config_path=".", config_name="config_hindi")
- def main(cfg):
-     # Build the PyTorch Lightning trainer from the top-level `trainer` section
-     trainer = pl.Trainer(**cfg.trainer)
-
-     # Build the CTC model from the `model` section and attach the trainer
-     model = EncDecCTCModel(cfg=cfg.model, trainer=trainer)
-
-     # Train
-     trainer.fit(model)
-
-
- if __name__ == '__main__':
-     main()
- ```
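-
- NeMo's data loaders consume JSON-lines manifests with `audio_filepath`, `duration`, and `text` keys. Below is a minimal sketch for building `train.json` and `valid.json` from the Hugging Face dataset; the column names and the presence of a validation split are assumptions, so adjust them to the repository's actual schema:
- ```python
- # prepare_manifests.py -- hypothetical helper; column names are assumptions
- import json
-
- from datasets import load_dataset
-
- dataset = load_dataset("WhissleAI/Meta_STT_HI_Set1")
-
-
- def write_manifest(split, path):
-     """Write one NeMo-style JSON object per line."""
-     with open(path, 'w', encoding='utf-8') as f:
-         for row in split:
-             f.write(json.dumps({
-                 'audio_filepath': row['audio_filepath'],  # assumed column name
-                 'duration': row['duration'],
-                 'text': row['text'],                      # assumed column name
-             }, ensure_ascii=False) + '\n')
-
-
- write_manifest(dataset['train'], 'train.json')
- # Assumes a validation split exists; otherwise hold one out from train
- write_manifest(dataset['validation'], 'valid.json')
- ```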
-
- ### 3. Create Config File
- Create a config file `config_hindi.yaml`. Note that `trainer` sits at the top level, alongside `model`, and that only the data, optimizer, and trainer sections are shown; a from-scratch Conformer additionally needs `preprocessor`, `encoder`, `decoder`, and vocabulary sections (see the NeMo example configs):
- ```yaml
- model:
-   name: "EncDecCTCModel"
-
-   train_ds:
-     manifest_filepath: "train.json"
-     batch_size: 32
-     shuffle: true
-     num_workers: 4
-     pin_memory: true
-     use_start_end_token: false
-
-   validation_ds:
-     manifest_filepath: "valid.json"
-     batch_size: 32
-     shuffle: false
-     num_workers: 4
-     pin_memory: true
-     use_start_end_token: false
-
-   optim:
-     name: adamw
-     lr: 0.001
-     weight_decay: 0.01
-
- trainer:
-   devices: 1
-   accelerator: "gpu"
-   max_epochs: 100
-   precision: 16
- ```
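-
- Since Hydra parses this file with OmegaConf, a quick round-trip catches indentation mistakes before a full training launch:
- ```bash
- python -c "from omegaconf import OmegaConf; print(OmegaConf.to_yaml(OmegaConf.load('config_hindi.yaml')))"
- ```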
-
- ### 4. Start Training
- ```bash
- # Inside the NeMo container. Single-GPU training needs no distributed
- # launcher; PyTorch Lightning spawns DDP itself when trainer.devices > 1.
- python train_nemo_asr_hindi.py \
-     --config-path=. \
-     --config-name=config_hindi
- ```
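-
- Because the script is wrapped with Hydra, individual values can be overridden from the command line instead of editing the YAML, which is handy for the batch-size tuning suggested in the notes below:
- ```bash
- # Dotted-path overrides are applied on top of config_hindi.yaml
- python train_nemo_asr_hindi.py \
-     model.train_ds.batch_size=16 \
-     trainer.max_epochs=50
- ```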
-
- ## Usage Notes
-
- 1. The dataset includes both metadata and audio files.
- 2. Audio files are stored in the dataset repository.
- 3. For optimal performance:
-    - Use a GPU with at least 16 GB of VRAM
-    - Adjust the batch size to fit your GPU memory
-    - Consider gradient accumulation for larger effective batch sizes (see the sketch after this list)
-    - Monitor training with TensorBoard (accessible via port 6006)
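-
- Gradient accumulation is a single trainer setting in PyTorch Lightning, and TensorBoard can be pointed at the training logs (the log directory below is Lightning's default; adjust it if your experiment writes elsewhere):
- ```yaml
- trainer:
-   devices: 1
-   accelerator: "gpu"
-   max_epochs: 100
-   precision: 16
-   accumulate_grad_batches: 4   # effective batch size = 32 * 4 = 128
- ```
-
- ```bash
- # Inside the container; port 6006 is already published by `docker run`
- tensorboard --logdir lightning_logs --port 6006 --bind_all
- ```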
-
- ## Common Issues and Solutions
-
- 1. **Memory Issues**:
-    - Reduce the batch size if you encounter OOM errors
-    - Use gradient accumulation for larger effective batch sizes
-    - Enable mixed precision training (fp16)
-
- 2. **Training Speed**:
-    - Increase `num_workers` based on your CPU cores
-    - Set `pin_memory: true` for faster host-to-GPU transfers
-    - Consider tarred datasets for faster I/O
-
- 3. **Model Performance**:
-    - Adjust the learning rate based on your batch size
-    - Use learning rate warmup for better convergence (see the sketch after this list)
-    - Consider initializing from a pretrained model (also sketched below)
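-
- Warmup is configured through an optional `sched` block under `optim` in `config_hindi.yaml`; the scheduler name and step counts below are illustrative starting points, not tuned values:
- ```yaml
- optim:
-   name: adamw
-   lr: 0.001
-   weight_decay: 0.01
-   sched:
-     name: CosineAnnealing
-     warmup_steps: 1000
-     min_lr: 1.0e-5
- ```
-
- For pretrained initialization, NeMo can pull published checkpoints by name. The checkpoint identifier below is an assumption (check NGC or Hugging Face for the exact Hindi Conformer CTC model name), and the snippet assumes the restored config keeps its `train_ds` section, as NeMo checkpoints typically do:
- ```python
- import copy
-
- from nemo.collections.asr.models import EncDecCTCModel
-
- # Checkpoint name is an assumption -- verify it before use
- model = EncDecCTCModel.from_pretrained("stt_hi_conformer_ctc_medium")
-
- # Reuse the checkpoint's own data config, swapping in this dataset's manifest
- train_ds = copy.deepcopy(model.cfg.train_ds)
- train_ds.manifest_filepath = 'train.json'
- model.setup_training_data(train_ds)
- ```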
 