---

## ⚡ Streaming Latency & Real-Time Performance

We benchmark **Streaming Vocos** in **streaming inference mode** using chunked mel-spectrogram decoding on both CPU and GPU.

### Benchmark setup

- **Audio duration:** 3.24 s
- **Sample rate:** 16 kHz
- **Mel hop size:** 320 samples (20 ms per mel frame)
- **Chunk size:** 5 mel frames (100 ms buffering latency)
- **Runs:** 100 warm-up + 1000 timed runs
- **Inference mode:** Streaming (stateful causal decoding)

**Metrics**

- **Processing time per chunk**
- **End-to-end latency** = chunk buffering + processing time
- **RTF (Real-Time Factor)** = processing time / audio duration

---
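For concreteness, here is a minimal sketch of the timing loop behind these metrics. The streaming entry points (`reset_streaming`, `decode_chunk`) are hypothetical placeholders; the repo's actual streaming API may differ.

```python
import time

import torch

SR = 16_000          # sample rate (Hz)
HOP = 320            # mel hop size: 20 ms of audio per frame
CHUNK_FRAMES = 5     # 5 mel frames per chunk -> 100 ms buffering


@torch.inference_mode()
def benchmark_streaming(model, mel, warmup=100, runs=1000):
    """Time chunked decoding of one utterance; `mel` is (1, n_mels, T)."""
    chunks = mel.split(CHUNK_FRAMES, dim=-1)
    audio_sec = mel.shape[-1] * HOP / SR

    def decode_once(timings=None):
        model.reset_streaming()              # hypothetical: clear cached state
        for chunk in chunks:
            t0 = time.perf_counter()
            model.decode_chunk(chunk)        # hypothetical streaming decode
            if chunk.is_cuda:
                torch.cuda.synchronize()     # wait for GPU work before timing
            if timings is not None:
                timings.append(time.perf_counter() - t0)

    for _ in range(warmup):
        decode_once()

    timings = []
    for _ in range(runs):
        decode_once(timings)

    avg = sum(timings) / len(timings)
    first = sum(timings[:: len(chunks)]) / runs   # first chunk of each run
    total = avg * len(chunks)
    print(f"avg proc/chunk : {avg * 1e3:.1f} ms (first: {first * 1e3:.1f} ms)")
    print(f"end-to-end     : {(CHUNK_FRAMES * HOP / SR + avg) * 1e3:.1f} ms")
    print(f"RTF            : {total / audio_sec:.3f}")
```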
### Results

#### Streaming performance (chunk size = 5 frames, 100 ms buffer)

| Device | Avg proc / chunk | First-chunk proc | End-to-end latency | Total proc (3.24 s audio) | RTF |
|--------|------------------|------------------|--------------------|---------------------------|-----|
| **CPU** | 14.0 ms | 14.0 ms | **114.0 ms** | 464 ms | 0.14 |
| **GPU (CUDA)** | **3.4 ms** | **3.3 ms** | **103.3 ms** | **113 ms** | **0.035** |

> End-to-end latency includes the **100 ms chunk buffering delay** required for streaming inference.

---
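As a quick sanity check, the table columns recombine exactly as the metric definitions above prescribe; for the CPU row (numbers taken from the table):

```python
# CPU row, recomputed from the metric definitions above
buffering_ms = 5 * 320 / 16_000 * 1e3   # chunk buffering: 5 frames x 20 ms = 100 ms
avg_proc_ms = 14.0                      # avg processing per chunk (from the table)

print(buffering_ms + avg_proc_ms)       # 114.0 -> end-to-end latency (ms)
print(round(0.464 / 3.24, 3))           # 0.143 -> RTF ~ 0.14 (total proc / audio duration)
```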
### Interpretation

- **Real-time capable on CPU**
  Streaming Vocos achieves an RTF of approximately **0.14**, corresponding to inference running ~7× faster than real time.

- **Ultra-low compute overhead on GPU**
  Chunk processing time is reduced to **~3.4 ms**, so overall latency is dominated by buffering rather than computation.

- **Streaming-friendly first-chunk behavior**
  First-chunk latency closely matches steady-state latency, indicating **no cold-start penalty** during streaming inference.

- **Latency–quality tradeoff**
  Smaller chunk sizes reduce buffering latency further (e.g., 1–2 frames → 20–40 ms; see the estimate below), at the cost of slightly increased computational overhead.
---
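Treating per-chunk processing time as roughly constant (an assumption; in practice it varies somewhat with chunk size), end-to-end latency can be estimated directly from the buffering term:

```python
FRAME_MS = 320 / 16_000 * 1e3    # 20 ms of audio per mel frame

def end_to_end_ms(chunk_frames: int, proc_ms: float) -> float:
    """Buffering delay plus (assumed constant) per-chunk processing time."""
    return chunk_frames * FRAME_MS + proc_ms

for frames in (1, 2, 5):
    # GPU per-chunk processing from the table above (~3.4 ms)
    print(frames, end_to_end_ms(frames, proc_ms=3.4))  # 23.4, 43.4, 103.4 ms
```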
With a **chunk size of 1 frame (20 ms buffering)**, GPU end-to-end latency drops below **25 ms**, making **Streaming Vocos** suitable for **interactive and conversational TTS pipelines**.
## Checkpoints

This repo provides a PyTorch Lightning checkpoint: