st192011 committed
Commit 4a6f1f6 · verified · 1 Parent(s): dc7524d

Update app.py

Files changed (1):
  1. app.py +1 -1
app.py CHANGED
@@ -193,7 +193,7 @@ with gr.Blocks() as demo:
  ### **III. The Training Stack: Ablation & Optimization**
  To reach the final protocol, we systematically tested a suite of PEFT and regularization techniques:
  * **Parameter-Efficient Fine-Tuning:** Comparison of **LoRA vs. DoRA (Weight-Decomposed LoRA)** for improved weight update stability.
- * **Regularization & Safeguards:** Integration of **NEFTune** (noise injection) to prevent overfitting and **Modality Dropout** to force the model to prioritize the Phonetic `[P]` witness over the noisy Semantic `[W]` witness.
+ * **Regularization & Safeguards:** Integration of **NEFTune** (noise injection) to prevent overfitting and **Modality Dropout** to force the model to prioritize the Phonetic witness over the noisy Semantic witness.
  * **Curriculum & Loss Logic:** Implementation of **Curriculum Learning** (Phonetic Anchoring first) combined with **Specialized Loss Masking** to ensure the model learns to reconstruct meaning rather than merely copying inputs.

  **Outcome:** This journey has culminated in a **Standardized DSR Protocol**, providing a blueprint for training robust correction layers for atypical speech by prioritizing real-world phonetic grounding and multi-modal arbitration logic.
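The Modality Dropout safeguard described in the changed bullet can be sketched in a few lines. This is a minimal pure-Python illustration, not the repository's actual code: the function name, the 0.3 default rate, and the string-valued witnesses are all assumptions for the example.

```python
import random

def modality_dropout(phonetic: str, semantic: str, p_drop: float = 0.3,
                     rng: random.Random | None = None) -> tuple[str, str]:
    """With probability p_drop, blank the Semantic witness so the model
    must ground its correction in the Phonetic witness alone."""
    rng = rng or random.Random()
    if rng.random() < p_drop:
        semantic = ""  # drop the noisy semantic channel for this example
    return phonetic, semantic

# Deterministic demo: random.Random(0).random() is approx. 0.844, so with
# p_drop=0.9 the semantic witness is dropped.
print(modality_dropout("dh ah k ae t", "the cap", p_drop=0.9,
                       rng=random.Random(0)))
# → ('dh ah k ae t', '')
```

Applied per training example, this prevents the model from over-relying on the semantic channel when the two witnesses disagree.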
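The Specialized Loss Masking idea (learn to reconstruct, never to copy) is commonly implemented by masking prompt positions out of the label sequence. A minimal sketch, assuming the PyTorch convention of `-100` as the cross-entropy ignore index; the function name and token values are illustrative:

```python
IGNORE_INDEX = -100  # conventional ignore index for cross-entropy loss

def mask_prompt_labels(input_ids: list[int], prompt_len: int) -> list[int]:
    """Build labels that supervise only the reconstruction target:
    the first prompt_len positions (the noisy witnesses) are masked,
    so the model earns no reward for merely copying its inputs."""
    labels = list(input_ids)
    labels[:prompt_len] = [IGNORE_INDEX] * prompt_len
    return labels

print(mask_prompt_labels([11, 12, 13, 14, 15], prompt_len=3))
# → [-100, -100, -100, 14, 15]
```

Only the final two positions contribute to the loss here, which is what forces the model to produce the corrected transcript rather than echo the witnesses.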