bhaskarbuilds committed · Commit 1130d5d · verified · 1 Parent(s): bd59300

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -66,7 +66,7 @@ The stereo recording format with separate speaker channels enables direct learni
 
 ### Two-stage training recipe
 
- **Stage 1 — Pre-training** on the full 26,000-hour corpus. Learning rate of 3×10⁻⁵ (matching original Moshi pre-training). AdamW with β₁=0.9, β₂=0.95, weight decay 0.1. Effective batch size of 64 (~2.9 hours of audio per update). Trained for 1 epoch (~10,000 steps) in approximately 13 hours on 8× NVIDIA H100 80GB GPUs.
+ **Stage 1 — Pre-training** on the full 26,000-hour corpus. Learning rate of 3×10⁻⁵ (matching original Moshi pre-training). AdamW with β₁=0.9, β₂=0.95, weight decay 0.1. Effective batch size of 64 (\~2.9 hours of audio per update). Trained for 1 epoch (\~10,000 steps) in approximately 13 hours on 8× NVIDIA H100 80GB GPUs.
 
 **Stage 2 — Fine-tuning** on ~990 hours of curated high-quality conversational data. Split learning rates: 2×10⁻⁶ for the Temporal Transformer, 4×10⁻⁶ for the Depth Transformer. Optimal checkpoint selected at step 4,812 based on minimum total validation loss (3.370).
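The optimizer settings described in the diff above can be sketched in PyTorch. This is a minimal illustration, not the model's actual training code: the `temporal_transformer` and `depth_transformer` modules below are hypothetical stand-ins for the two Moshi decoder stacks the README names, and only the hyperparameters (learning rates, AdamW betas, weight decay) come from the text.

```python
import torch
from torch import nn

# Hypothetical stand-ins for the Temporal and Depth Transformers named in
# the README; the real Moshi architecture differs.
temporal_transformer = nn.TransformerEncoderLayer(d_model=64, nhead=4)
depth_transformer = nn.TransformerEncoderLayer(d_model=64, nhead=4)

# Stage 1 — pre-training: one shared learning rate of 3e-5,
# AdamW with beta1=0.9, beta2=0.95, weight decay 0.1 (from the README).
pretrain_opt = torch.optim.AdamW(
    list(temporal_transformer.parameters())
    + list(depth_transformer.parameters()),
    lr=3e-5,
    betas=(0.9, 0.95),
    weight_decay=0.1,
)

# Stage 2 — fine-tuning: split learning rates via parameter groups,
# 2e-6 for the Temporal Transformer and 4e-6 for the Depth Transformer.
finetune_opt = torch.optim.AdamW(
    [
        {"params": temporal_transformer.parameters(), "lr": 2e-6},
        {"params": depth_transformer.parameters(), "lr": 4e-6},
    ],
    betas=(0.9, 0.95),
    weight_decay=0.1,
)
```

Parameter groups are the standard PyTorch mechanism for per-module learning rates: each group inherits the shared `betas` and `weight_decay` while overriding `lr`.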