---
dataset_info:
  features:
  - name: file_name
    dtype: string
  - name: url
    dtype: string
  - name: inference_transcript
    dtype: string
  - name: audio_duration
    dtype: float64
  - name: original_id
    dtype: string
  - name: strata
    dtype: string
  - name: age_group
    dtype: string
  - name: duration_category
    dtype: string
  - name: content_type
    dtype: string
  - name: path
    dtype: string
  - name: inference_checkpoint-10000
    dtype: string
  - name: inference_checkpoint-19000
    dtype: string
  - name: inference_checkpoint-5000
    dtype: string
  - name: uni
    dtype: string
  - name: base_cer
    dtype: float64
  - name: cer_5000
    dtype: float64
  - name: cer_10000
    dtype: float64
  - name: cer_19000
    dtype: float64
  splits:
  - name: train
    num_bytes: 1205643
    num_examples: 893
  download_size: 393078
  dataset_size: 1205643
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Tibetan Speech Recognition Model Performance Report

## Overview

This report summarizes the performance evaluation of a Wav2Vec2-based speech recognition model trained on Tibetan speech recordings of Garchen Rinpoche. The evaluation was conducted across several training checkpoints using multiple error metrics.
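The character error rate (CER) reported throughout is edit distance over the reference transcript divided by its length. As a minimal sketch (not the evaluation script actually used for this report), CER can be computed with a plain-Python Levenshtein distance:

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance between two character sequences (dynamic programming)."""
    # prev[j] holds the distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (r != h)))    # substitution
        prev = cur
    return prev[-1]


def cer(ref: str, hyp: str) -> float:
    """Character Error Rate: total edits / reference length."""
    return levenshtein(ref, hyp) / len(ref)
```

For example, `cer("abcd", "abxd")` returns `0.25` (one substitution over four reference characters). The micro-average CER in the tables below aggregates total edits over total reference characters across the whole split, whereas the macro-average is the mean of per-sentence CERs.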
## Training Dataset

- [train](https://huggingface.co/datasets/ganga4364/garchen_rinpoche_data)

## Benchmark Dataset Information

- Total audio duration: 1.07 hours
- Number of audio segments: 893
- Average segment duration: 4.33 seconds
- Content types: Teaching, Prayer, Q&A, and Practice material
- Speaker age group: 70-90 years

## Model Architecture

- Base Model: `ganga4364/mms_300_v4.96000`
- Model Type: Wav2Vec2ForCTC
- Fine-tuning Method: Parameter-Efficient Fine-Tuning (PEFT) with LoRA

## Training Parameters

- Batch Size: 8 (per device)
- Gradient Accumulation Steps: 2
- Learning Rate: 3e-4
- Training Epochs: 100
- Warmup Steps: 500
- FP16 Training: Enabled
- Evaluation Strategy: Steps-based
- Logging Interval: 100 steps
- Evaluation Interval: 1000 steps
- Save Interval: 1000 steps
- Save Total Limit: 50 checkpoints
- Data Loading: 4 workers
- Monitoring: Weights & Biases (wandb)

## Training Progress

Character Error Rate (CER) across training checkpoints:

| Checkpoint  | Micro CER (%) |
|-------------|---------------|
| Base model  | 27.67         |
| 5000 steps  | 27.41         |
| 10000 steps | 23.37         |
| 19000 steps | 22.93         |

## Final Model Performance (Checkpoint 19000)

### Word Error Rate (WER)

- Micro-average WER: 39.42%
- Macro-average WER: 45.92%

### Error Analysis

- Total test sentences: 893
- Error breakdown:
  - Substitutions: 4,217
  - Insertions: 779
  - Deletions: 1,190

## Sample Prediction

Example transcription:

**Reference:** དེའི་རྩིས་གཞི་ད་ད་ལྟ་སྐད་ཆ་བཤད་དགོས་རེད་ད། དྲན་པས་ཡང་མི་གནོད།

**Model Prediction:** དེ་རིང་དེ་དུ་དག་ད་ྟ་སྐད་ཆ་བཤད་དགོས་རེད། ད་དྲིརིང་གི་ཡང་མི་འདུག

CER for this example: 39.34%

## Key Findings

1. **Consistent Improvement**: The model shows steady improvement in CER across training checkpoints, with a total reduction of 4.74 percentage points from the base model to the final checkpoint.
2. **Character vs. Word Accuracy**: While the final character-level error rate is comparatively low (22.93%), the word-level error rate is considerably higher (39.42%), indicating challenges in maintaining word integrity during recognition.
3. **Error Distribution**: The majority of errors are substitutions (4,217), followed by deletions (1,190) and insertions (779), suggesting the model is more prone to replacing characters than to inserting or deleting them.

## Conclusion

The model demonstrates promising performance for Tibetan speech recognition, particularly at the character level. However, the higher word error rate suggests room for improvement in capturing complete word structures. Future work might focus on reducing the gap between character-level and word-level accuracy.

---

*Report generated on July 14, 2025*

---

# Dataset Statistics

## Configuration: `default`

### Split: `train`

**Total Rows**: 893

#### `audio_duration`

- **Type**: numerical
- **Data Type**: `float64`
- **Sum**: 3,870.19
- **Average**: 4.33

#### `strata`

- **Type**: categorical
- **Data Type**: `object`
- **Unique Values**: 8

**Value Distribution:**

| Value | Count | Percentage |
|-------|-------|------------|
| `70-80__medium__Prayer` | 125 | 14.00% |
| `70-80__short__Teaching` | 123 | 13.77% |
| `70-80__long__Teaching` | 123 | 13.77% |
| `70-80__long__Prayer` | 121 | 13.55% |
| `70-80__medium__Teaching` | 118 | 13.21% |
| `70-80__long__Q&A` | 108 | 12.09% |
| `80-90__long__Practice` | 108 | 12.09% |
| `70-80__medium__Q&A` | 67 | 7.50% |

#### `age_group`

- **Type**: categorical
- **Data Type**: `object`
- **Unique Values**: 2

**Value Distribution:**

| Value | Count | Percentage |
|-------|-------|------------|
| `70-80` | 785 | 87.91% |
| `80-90` | 108 | 12.09% |

#### `duration_category`

- **Type**: categorical
- **Data Type**: `object`
- **Unique Values**: 3

**Value Distribution:**

| Value | Count | Percentage |
|-------|-------|------------|
| `long` | 460 | 51.51% |
| `medium` | 310 | 34.71% |
| `short` | 123 | 13.77% |

#### `content_type`

- **Type**: categorical
- **Data Type**: `object`
- **Unique Values**: 4

**Value Distribution:**

| Value | Count | Percentage |
|-------|-------|------------|
| `Teaching` | 364 | 40.76% |
| `Prayer` | 246 | 27.55% |
| `Q&A` | 175 | 19.60% |
| `Practice` | 108 | 12.09% |

#### `base_cer`

- **Type**: numerical
- **Data Type**: `float64`
- **Sum**: 247.09
- **Average**: 0.28

#### `cer_5000`

- **Type**: numerical
- **Data Type**: `float64`
- **Sum**: 244.76
- **Average**: 0.27

#### `cer_10000`

- **Type**: numerical
- **Data Type**: `float64`
- **Sum**: 208.71
- **Average**: 0.23

#### `cer_19000`

- **Type**: numerical
- **Data Type**: `float64`
- **Sum**: 204.79
- **Average**: 0.23

---
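The count and percentage tables above can be reproduced from any loaded split with pandas. A sketch with toy rows standing in for the real data (in practice the split would come from `datasets.load_dataset("ganga4364/garchen_rinpoche_data")`; column names follow the schema in the front matter):

```python
import pandas as pd

# Toy stand-in for the real train split.
df = pd.DataFrame({
    "content_type": ["Teaching", "Teaching", "Prayer", "Q&A"],
    "audio_duration": [4.1, 5.2, 3.9, 4.0],
})

# Value distribution: count and percentage per category.
counts = df["content_type"].value_counts()
pct = (counts / len(df) * 100).round(2)
distribution = pd.DataFrame({"Count": counts, "Percentage": pct})

# Duration summary: total hours and mean segment length in seconds.
total_hours = df["audio_duration"].sum() / 3600
avg_seconds = df["audio_duration"].mean()
```

On the full split this yields the duration figures (3,870.19 s total, 4.33 s average) and the distribution tables shown above.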