Three-Queue Parallel Pipeline: Implementation Complete
Date: March 10, 2026 (updated March 10: implementation complete)
Implementation: backend/e2e/pipeline.py + backend/e2e/server.py
Original plan target: backend/api/pipeline.py
Goal: Eliminate inter-sentence stutter (ISSUE-19) by replacing the sequential _speak_text loop with a fully pipelined three-stage architecture where TTS, Whisper, and UNet run concurrently.
Status: ✅ IMPLEMENTED AND READY FOR TESTING
Run the e2e server:
cd backend && uvicorn e2e.server:app --host 0.0.0.0 --port 8767 --reload
# or
python -m e2e.server
Problem Summary
The current StreamingPipeline._speak_text() is sequential:
Fragment 1: [TTS] ──► [Whisper] ──► [UNet] ──► Queue ──► Publish
                                                  │ queue drains
Fragment 2: [TTS] ──► [Whisper] ──► [UNet] ──► Queue ──► Publish
                                                  ▲ 140–500ms GAP
Between each sentence fragment, the video queue empties and the idle loop's 500ms timeout fires, causing a visible freeze. Root cause: a full three-stage restart per sentence. Full analysis in ISSUES_AND_PLAN.md.
Target Architecture
TTS Producer:    [F1-audio] ──► [F2-audio] ──► [F3-audio] ──► ...
                     │              │
                     ▼              ▼
Whisper Worker:  [F1-feats] ──► [F2-feats] ──► ...
                     │              │
                     ▼              ▼
UNet Worker:     [F1-frames] ──► [F2-frames] ──► ...
                     │              │
                     ▼              ▼
Frame Queue:     ────────────────────────────  (always has data)
                     │
                     ▼
Publish Loop:    [25fps drain] ──► [25fps drain] ──► ...
Key property: while _unet_worker generates frames for fragment N, _whisper_worker is already encoding fragment N+1. Queue never starves between sentences.
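To make the overlap property concrete, here is a minimal, self-contained asyncio sketch of the same shape: three stages chained by bounded queues, with a `None` end-of-stream sentinel. The stage names and costs are illustrative only, not taken from the codebase.

```python
import asyncio

async def stage(name: str, inbox: asyncio.Queue, outbox, cost: float):
    """One pipeline stage: pull an item, 'work' for `cost` seconds, push downstream."""
    while True:
        item = await inbox.get()
        if item is None:              # end-of-stream sentinel
            if outbox is not None:
                await outbox.put(None)
            return
        await asyncio.sleep(cost)     # simulated per-item work
        print(f"{name} done with fragment {item}")
        if outbox is not None:
            await outbox.put(item)

async def main():
    src: asyncio.Queue = asyncio.Queue()
    q1: asyncio.Queue = asyncio.Queue(maxsize=2)   # bounded: backpressure
    q2: asyncio.Queue = asyncio.Queue(maxsize=2)
    for i in range(4):
        src.put_nowait(i)
    src.put_nowait(None)
    # With 3 stages of equal cost c and n items, wall time approaches
    # (n + 2) * c instead of the sequential 3 * n * c: each stage works on
    # fragment N while the stage above it prepares fragment N+1.
    await asyncio.gather(
        stage("tts", src, q1, 0.05),
        stage("whisper", q1, q2, 0.05),
        stage("unet", q2, None, 0.05),
    )

asyncio.run(main())
```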
Bugs Fixed vs. Previous Sketches
| # | Bug | Problem | Fix Applied |
|---|---|---|---|
| 1 | Producer dies after first utterance | `timeout` caused `break` out of the loop | Loop forever with `continue` on timeout |
| 2 | `_END_` string crashes on unpacking | String passed to `audio_flat, pts_s, pts_e = item` | Use the `None` sentinel only, consistently |
| 3 | `get_nowait()` causes idle flickers | Fires idle on any momentary empty queue | `wait_for(..., timeout=frame_interval)` gives a 40ms grace period |
| 4 | `stop()` only cancels `_idle_task` | Worker tasks keep running after disconnect | Cancel all five tasks in `stop()` |
| 5 | Double splitting in `_tts_producer` | Outer `_split_to_fragments` + inner split inside `synthesize_stream` = redundant split and PTS reset per fragment | Remove the outer split; let `synthesize_stream` handle splitting internally |
| 6 | Audio PTS always 0.0 | `publish_audio_chunk(audio_slice, 0.0)`: no shared time reference | Track cumulative `audio_pts_samples` in `_publish_loop` |
| 7 | Stale queues on reconnect | `stop()` cancels tasks but doesn't drain queues | Drain all queues in `stop()` before returning |
Queue Design
| Queue | Type | Max Size | Contents | Rationale |
|---|---|---|---|---|
| `_text_queue` | unbounded | ∞ | `str` | Holds incoming `/speak` requests |
| `_tts_queue` | bounded | 6 | `(audio_flat, pts_s, pts_e)` | Buffers Kokoro output; 6 ≈ 2 full sentences |
| `_whisper_queue` | bounded | 3 | `(feats, audio_flat, pts_s, pts_e, total_frames)` | Whisper features; kept small because the GPU is the bottleneck |
| `_frame_queue` | bounded | 64 | `(frame_rgba, audio_slice \| None)` | ~2.5s of video at 25fps; never starves the publish loop |
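A quick back-of-envelope check of the sizes above, using the 25fps figure that appears throughout this document:

```python
# Figures from the table above; VIDEO_FPS appears elsewhere in this document.
VIDEO_FPS = 25
FRAME_QUEUE_SLOTS = 64

print(FRAME_QUEUE_SLOTS / VIDEO_FPS)   # 2.56 -> the "~2.5s of video" figure
print(1.0 / VIDEO_FPS * 1000)          # 40.0 -> _publish_loop's grace period (ms)
```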
Sentinel convention
None is the end-of-utterance marker. Each intermediate worker forwards it downstream and loops back to idle:
_tts_producer    ──► None ──► _tts_queue
_whisper_worker  ──► None ──► _whisper_queue   (on receipt of None)
_unet_worker     ──► no forward (None means "done with this utterance, loop back")
_publish_loop never receives None; it only falls back to idle on asyncio.TimeoutError.
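The convention condenses to two worker shapes, sketched below with hypothetical `process`/`emit` callables (not names from the codebase): middle stages forward the sentinel, the last stage drops it, and no worker ever exits on it.

```python
async def middle_worker(inbox, outbox, process):
    """Whisper-style stage: forwards the None sentinel downstream."""
    while True:
        item = await inbox.get()
        if item is None:
            await outbox.put(None)   # pass end-of-utterance along
            continue                 # worker stays alive for the next utterance
        await outbox.put(await process(item))

async def last_worker(inbox, emit):
    """UNet-style stage: drops the sentinel, loops back to idle."""
    while True:
        item = await inbox.get()
        if item is None:
            continue                 # nothing to emit for end-of-utterance
        await emit(item)
```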
Implementation
File: backend/e2e/pipeline.py (the plan below was originally written against backend/api/pipeline.py)
1. Imports: remove dead code
Remove unused imports now that _speak_text and _idle_loop are gone:
# REMOVE:
from sync.av_sync import AVSyncGate, SimpleAVSync
# KEEP:
from sync.av_sync import SimpleAVSync # only if SimpleAVSync is still referenced elsewhere
Note that TTS_SAMPLE_RATE is still used in this file: _whisper_worker references it for logging and _publish_loop uses it to convert the cumulative sample count to seconds. (The samples_per_frame calculation itself moves from _speak_text into _unet_worker.)
Note: _split_to_fragments does NOT need to be imported. The _tts_producer no longer calls it directly; synthesize_stream handles splitting internally (see the Bug 5 fix above).
2. StreamingPipeline.__init__
class StreamingPipeline:
def __init__(
self,
tts: KokoroTTS,
musetalk: MuseTalkWorker,
publisher: AVPublisher,
avatar_assets,
):
self._tts = tts
self._musetalk = musetalk
self._publisher = publisher
self._avatar_assets = avatar_assets
self._idle_generator = IdleFrameGenerator(
avatar_assets,
target_width=publisher._video_width,
target_height=publisher._video_height,
)
self._running = False
# Three-stage async queues
self._text_queue: asyncio.Queue = asyncio.Queue()
self._tts_queue: asyncio.Queue = asyncio.Queue(maxsize=6)
self._whisper_queue: asyncio.Queue = asyncio.Queue(maxsize=3)
self._frame_queue: asyncio.Queue = asyncio.Queue(maxsize=64)
# Worker task handles
self._tts_task: Optional[asyncio.Task] = None
self._whisper_task: Optional[asyncio.Task] = None
self._unet_task: Optional[asyncio.Task] = None
self._publish_task: Optional[asyncio.Task] = None
self._log_task: Optional[asyncio.Task] = None
log.info("StreamingPipeline initialized (3-queue parallel)")
3. start() and stop()
async def start(self):
self._running = True
self._tts_task = asyncio.create_task(self._tts_producer())
self._whisper_task = asyncio.create_task(self._whisper_worker())
self._unet_task = asyncio.create_task(self._unet_worker())
self._publish_task = asyncio.create_task(self._publish_loop())
self._log_task = asyncio.create_task(self._log_queue_depths())
log.info("StreamingPipeline started")
async def stop(self):
self._running = False
for task in (
self._tts_task, self._whisper_task,
self._unet_task, self._publish_task, self._log_task,
):
if task:
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
# Drain all queues to prevent stale data on reconnect (Bug 7 fix)
for q in (self._text_queue, self._tts_queue, self._whisper_queue, self._frame_queue):
while not q.empty():
try:
q.get_nowait()
except asyncio.QueueEmpty:
break
log.info("StreamingPipeline stopped")
4. push_text(): unchanged interface
async def push_text(self, text: str):
"""Push text to be spoken. Non-blocking."""
await self._text_queue.put(text)
The asyncio.Lock guard added in Step 5 (ISSUES_AND_PLAN) is no longer needed here; there is no _processing flag or task-spawn race. The producer loop handles concurrency naturally via the queue.
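A minimal demonstration of why no lock is needed (illustrative, not from the codebase): `asyncio.Queue.put()` is safe to call from concurrently running tasks, and items come out in FIFO order.

```python
import asyncio

async def main():
    q: asyncio.Queue = asyncio.Queue()
    # Three "clients" calling push_text concurrently: no lock, no flag.
    await asyncio.gather(*(q.put(f"utterance {i}") for i in range(3)))
    while not q.empty():
        print(q.get_nowait())   # utterance 0, 1, 2 -- FIFO order preserved

asyncio.run(main())
```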
5. _tts_producer: Bugs 1, 5 fixed
async def _tts_producer(self):
"""
    Stage 1: text → Kokoro audio chunks → _tts_queue.
Runs forever. Each utterance ends with a None sentinel.
NOTE: Do NOT split text here. synthesize_stream() already calls
_split_to_fragments() internally. Calling it here too would:
(a) double-split (redundant, each fragment has no further split points)
(b) reset PTS to 0.0 at each fragment boundary (wrong semantics)
(c) require importing _split_to_fragments into this file
The queue buffering (_tts_queue maxsize=6) absorbs the inter-fragment
Kokoro reinit time (~50-100ms) so the downstream workers never starve.
"""
while self._running:
try:
text = await asyncio.wait_for(self._text_queue.get(), timeout=0.1)
except asyncio.TimeoutError:
            continue  # no text yet → loop, don't exit
log.debug("tts_producer: got text (%d chars)", len(text))
async for audio, pts_s, pts_e in self._tts.synthesize_stream(text):
audio_flat = audio.flatten() if audio.ndim > 1 else audio
await self._tts_queue.put((audio_flat, pts_s, pts_e))
        # End-of-utterance marker, not a worker-shutdown signal
await self._tts_queue.put(None)
log.debug("tts_producer: utterance complete")
6. _whisper_worker: Bug 2 fixed
async def _whisper_worker(self):
"""
    Stage 2: audio chunks → Whisper features → _whisper_queue.
Runs forever. Forwards None sentinel downstream on end-of-utterance.
"""
while self._running:
item = await self._tts_queue.get()
if item is None:
# End-of-utterance: pass sentinel downstream and wait for next utterance
await self._whisper_queue.put(None)
continue
audio_flat, pts_s, pts_e = item
t0 = time.monotonic()
feats, total_frames = await self._musetalk.extract_features(audio_flat)
log.debug("whisper_worker: encoded %.0fms audio β %d frames (%.0fms)",
len(audio_flat) / TTS_SAMPLE_RATE * 1000,
total_frames,
(time.monotonic() - t0) * 1000)
await self._whisper_queue.put((feats, audio_flat, pts_s, pts_e, total_frames))
7. _unet_worker: Bug 2 fixed
async def _unet_worker(self):
"""
    Stage 3: Whisper features → UNet frames + audio slices → _frame_queue.
Runs forever. Drops None sentinel (no frame needed for end-of-utterance).
"""
BATCH = self._musetalk.BATCH_FRAMES
while self._running:
item = await self._whisper_queue.get()
if item is None:
# End-of-utterance: nothing to generate, loop back
continue
feats, audio_flat, pts_s, pts_e, total_frames = item
spf = len(audio_flat) / max(total_frames, 1) # samples per video frame
first_batch = True
for batch_start in range(0, total_frames, BATCH):
n = min(BATCH, total_frames - batch_start)
t0 = time.monotonic()
frames = await self._musetalk.generate_batch(feats, batch_start, n)
if first_batch:
log.debug("unet_worker: first batch %.0fms (%d frames)",
(time.monotonic() - t0) * 1000, n)
first_batch = False
for fi, frame in enumerate(frames):
a_s = int((batch_start + fi) * spf)
a_e = min(int((batch_start + fi + 1) * spf), len(audio_flat))
audio_slice = audio_flat[a_s:a_e] if a_e > a_s else None
await self._frame_queue.put((frame, audio_slice))
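A worked example of the `spf` slicing arithmetic above, using the 24kHz sample rate and 25fps that appear elsewhere in this document: each video frame owns a contiguous, non-overlapping span of audio samples, and the final frame's span is clamped to the buffer end.

```python
# Values assumed from elsewhere in this document: 24 kHz TTS audio, 25 fps video.
TTS_SAMPLE_RATE = 24_000
VIDEO_FPS = 25

total_frames = 50                                          # a 2s fragment
audio_len = TTS_SAMPLE_RATE * total_frames // VIDEO_FPS    # 48000 samples
spf = audio_len / max(total_frames, 1)                     # 960.0 samples/frame

for i in (0, 1, 49):
    a_s = int(i * spf)
    a_e = min(int((i + 1) * spf), audio_len)               # clamp at buffer end
    print(f"frame {i}: samples [{a_s}, {a_e})")
# frame 0: samples [0, 960)
# frame 1: samples [960, 1920)
# frame 49: samples [47040, 48000)
```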
8. _publish_loop: Bugs 3, 6 fixed (renamed from _idle_loop)
async def _publish_loop(self):
"""
Drains _frame_queue at 25fps.
Waits up to one frame interval (40ms) before falling to idle.
This prevents idle flickers during momentary queue-empty events.
Tracks audio PTS via cumulative sample count (Bug 6 fix).
"""
frame_interval = 1.0 / VIDEO_FPS
session_start = time.monotonic()
    audio_pts_samples = 0  # cumulative audio sample count (the timing master)
try:
while self._running:
frame_start = time.monotonic()
try:
item = await asyncio.wait_for(
self._frame_queue.get(),
                    timeout=frame_interval,  # 40ms = one frame grace period
)
frame, audio_slice = item
except asyncio.TimeoutError:
# Truly idle (no speech), or pipeline stalled > 40ms
frame = self._idle_generator.next_frame()
audio_slice = None
pts_us = int((frame_start - session_start) * 1_000_000)
await self._publisher.publish_video_frame(frame, pts_us)
if audio_slice is not None and len(audio_slice) > 0:
audio_pts_sec = audio_pts_samples / TTS_SAMPLE_RATE
await self._publisher.publish_audio_chunk(audio_slice, audio_pts_sec)
audio_pts_samples += len(audio_slice)
elapsed = time.monotonic() - frame_start
sleep_time = frame_interval - elapsed
if sleep_time > 0:
await asyncio.sleep(sleep_time)
elif sleep_time < -0.01:
log.warning("publish_loop: frame took %.0fms (budget %.0fms)",
elapsed * 1000, frame_interval * 1000)
except asyncio.CancelledError:
raise
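The pacing logic reduces to "sleep whatever remains of the 40ms budget". A standalone sketch of just that loop, with illustrative timings:

```python
import asyncio, time

async def paced_loop(n_frames: int, frame_interval: float = 1 / 25):
    for _ in range(n_frames):
        frame_start = time.monotonic()
        await asyncio.sleep(0.01)                  # stand-in for get + publish
        remaining = frame_interval - (time.monotonic() - frame_start)
        if remaining > 0:
            await asyncio.sleep(remaining)         # absorb jitter, hold 25 fps

t0 = time.monotonic()
asyncio.run(paced_loop(25))
print(f"{time.monotonic() - t0:.2f}s")             # ~1.00s for 25 frames
```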
9. _log_queue_depths: debug visibility
async def _log_queue_depths(self):
"""Log queue depths every 2 seconds for pipeline health monitoring."""
while self._running:
log.debug(
"pipeline queues β text=%d tts=%d/%d whisper=%d/%d frame=%d/%d",
self._text_queue.qsize(),
self._tts_queue.qsize(), self._tts_queue.maxsize,
self._whisper_queue.qsize(), self._whisper_queue.maxsize,
self._frame_queue.qsize(), self._frame_queue.maxsize,
)
await asyncio.sleep(2.0)
10. Remove SpeechToVideoPipeline (dead code)
SpeechToVideoPipeline.speak() is never called by server.py. The server only calls StreamingPipeline.push_text(). Remove the entire SpeechToVideoPipeline class (~80 lines) to eliminate confusion.
Also remove AVSyncGate and SimpleAVSync imports if no other code references them after the removal.
Issue Coverage Matrix
Cross-validation of this plan against all 19 issues in ISSUES_AND_PLAN.md.
Issues this plan FIXES
| # | Issue | How the plan fixes it |
|---|---|---|
| 08 | `push_text` race condition | Replaced with `await _text_queue.put(text)`: no flags, no task-spawn race |
| 09 | QueueFull drops audio | Bounded queues use blocking `await put()`: backpressure, never drops |
| 12 | No Whisper/UNet overlap | Three-queue design decouples stages; TTS pre-generation fills the buffer ahead of Whisper/UNet |
| 13 | AV sync dead code | Removes `SpeechToVideoPipeline` + `SimpleAVSync`/`AVSyncGate` imports |
| 19 | Inter-sentence stutter | PRIMARY FIX. `_tts_queue` (maxsize 6) buffers TTS output ahead of the GPU stages, so `_frame_queue` never drains between sentences |
Issues this plan PARTIALLY fixes
| # | Issue | What's fixed | What remains |
|---|---|---|---|
| 07 | PTS always 0 / ignored | Video PTS computed from `session_start` wallclock. Audio PTS tracked via cumulative sample count (Bug 6 fix). | Verify that `publish_audio_chunk` in `livekit_publisher.py` actually uses the `pts_start` parameter; currently it is accepted but may be ignored by the LiveKit SDK |
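One way to settle that open question is to wrap the publisher and log the PTS values the pipeline hands it. The sketch below is hypothetical glue code; only the `publish_audio_chunk` name and `pts_start` parameter come from this document.

```python
import functools
import logging

log = logging.getLogger("pts_probe")

def trace_audio_pts(publisher):
    """Instrument publisher.publish_audio_chunk to log incoming PTS values."""
    original = publisher.publish_audio_chunk

    @functools.wraps(original)
    async def wrapped(audio_slice, pts_start):
        log.info("publish_audio_chunk: %d samples, pts_start=%.3fs",
                 len(audio_slice), pts_start)
        return await original(audio_slice, pts_start)

    publisher.publish_audio_chunk = wrapped
    return publisher
```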
Issues NOT in scope (no regression)
These issues were already fixed in other files. This plan only touches pipeline.py, so no regressions are possible:
| # | Issue | Status | File |
|---|---|---|---|
| 01 | Kokoro sentence-level chunks | ✅ FIXED | `kokoro_tts.py` |
| 03 | UNet behind realtime (batch 4 → 8) | ✅ FIXED | `config.py` |
| 04 | cv2 blocking event loop | ✅ FIXED | `livekit_publisher.py` |
| 05 | torchaudio resample per frame | ✅ FIXED | `config.py` (rate=24k) |
| 06 | `_publish_audio_async` 2× bug | ✅ FIXED | `livekit_publisher.py` |
| 10 | `get_event_loop()` deprecated | ✅ FIXED | `worker.py` |
| 16 | `os.chdir()` on every GPU call | ✅ FIXED | `worker.py` |
| 18 | `torch.load` without `weights_only` | ✅ FIXED | `processor.py` |
Issues NOT addressed (remain for future work)
| # | Issue | Why not in scope | Effort needed |
|---|---|---|---|
| 02 | Whisper 30s mel window | HF WhisperModel enforces T=3000 (position embeddings). Needs a custom encoder bypass or a swap to openai-whisper. Not a pipeline.py change. | Medium |
| 11 | `build_frame_windows` in GPU thread | Requires splitting worker.py/processor.py. Orthogonal to the pipeline architecture. | Low |
| 14 | `PRE_ROLL_FRAMES` unused | Cosmetic. Pre-roll logic can be added to `_publish_loop` later if needed. | Low |
| 15 | `BASE_VIDEO_PATH` hardcoded | Config fix, not pipeline architecture. | Trivial |
| 17 | CPU compositing in GPU thread | RGBA pre-convert done. A full GPU/CPU split requires processor.py changes. | Medium |
Architectural Note: Single-Executor Limitation
Both extract_features() and generate_batch() in MuseTalkWorker use the same ThreadPoolExecutor(max_workers=1). This means Whisper and UNet cannot truly run in parallel on the GPU; they are serialized by the executor.
The three-queue design still provides significant benefit because:
1. **TTS pre-generation (the primary win):** Kokoro runs on CPU, completely decoupled from the GPU executor. It fills `_tts_queue` ahead of time, eliminating the inter-fragment stall.
2. **Interleaved scheduling:** Between UNet batches (when `_unet_worker` awaits `frame_queue.put()`), the event loop can schedule `_whisper_worker`'s `extract_features` onto the executor. Whisper work runs in the gaps between UNet batches.
3. **No blocking waits:** The old `_speak_text` loop forced Whisper(N) → UNet(N) → Whisper(N+1) strictly in series. The new design allows Whisper(N+1) to be submitted to the executor while UNet(N) results are being processed (put into `frame_queue`).
Future optimization: Use two separate executors β one for Whisper (CPU-bound feature extraction) and one for UNet (GPU-bound inference). This would allow true parallelism if the GPU has spare compute during Whisper's CPU phases. However, since both stages ultimately need the GPU, the benefit is limited to overlapping CPU preprocessing with GPU inference.
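A sketch of what that two-executor split could look like. This is hypothetical: MuseTalkWorker's real internals are not shown in this document, and the `_extract_sync`/`_generate_sync` names are placeholders.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

class TwoExecutorWorker:
    """Hypothetical variant: Whisper and UNet get separate single-thread executors."""

    def __init__(self):
        self._whisper_pool = ThreadPoolExecutor(max_workers=1,
                                                thread_name_prefix="whisper")
        self._unet_pool = ThreadPoolExecutor(max_workers=1,
                                             thread_name_prefix="unet")

    async def extract_features(self, audio_flat):
        loop = asyncio.get_running_loop()
        # CPU preprocessing + encoder; no longer queued behind UNet batches
        return await loop.run_in_executor(self._whisper_pool,
                                          self._extract_sync, audio_flat)

    async def generate_batch(self, feats, batch_start, n):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._unet_pool,
                                          self._generate_sync,
                                          feats, batch_start, n)

    def _extract_sync(self, audio_flat): ...        # real Whisper call here
    def _generate_sync(self, feats, start, n): ...  # real UNet call here
```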
Changes Summary
| What | Where | Lines delta |
|---|---|---|
| Replace `__init__` (add 4 queues, 5 task handles, remove flags) | `StreamingPipeline.__init__` | ~+10 |
| Rewrite `start()` (spawn 5 tasks) | `StreamingPipeline.start` | ~+5 |
| Rewrite `stop()` (cancel 5 tasks + drain queues) | `StreamingPipeline.stop` | ~+15 |
| Rewrite `push_text()` (just `queue.put`) | `StreamingPipeline.push_text` | ~−10 |
| Add `_tts_producer` (no outer split; delegates to `synthesize_stream`) | new method | +20 |
| Add `_whisper_worker` | new method | +20 |
| Add `_unet_worker` | new method | +30 |
| Add `_publish_loop` (replaces `_idle_loop`, with audio PTS tracking) | new method replaces old | ~+5 |
| Add `_log_queue_depths` | new method | +10 |
| Remove `_speak_text`, `_process_queue`, `_idle_loop` | deleted | −120 |
| Remove `SpeechToVideoPipeline` | deleted | −80 |
| Remove dead imports (`AVSyncGate`, `SimpleAVSync`) | top of file | −2 |
| **Net** | | ~−100 lines |
Testing Checklist
| Test | Expected Result |
|---|---|
| First `/speak` | Works, first frame in ~150ms |
| Second `/speak` after first completes | Works; producer still alive (Bug 1 fixed) |
| Text with `. ! ? ;` → two sentences | Seamless playback, no gap between sentences (ISSUE-19 fixed) |
| Three+ rapid `/speak` calls | All processed in order, no overlap |
| `/disconnect` → `/connect` → `/speak` | Clean restart, no orphaned tasks or stale queue data (Bugs 4, 7 fixed) |
| Queue depth logs during speech | Frame queue fills 10–30 deep during active speech |
| Queue depth logs at idle | All queues at 0 |
| Long utterance (5+ sentences) | No stutter at any sentence boundary |
| Audio PTS progression | Logged `audio_pts_sec` values increase monotonically during speech (Bug 6 fixed) |
| PTS values after reconnect | `audio_pts_samples` resets to 0 on a new session (it is local to the `_publish_loop` task, which is respawned by `start()`) |
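Several checklist rows lend themselves to an automated test. A sketch using pytest-asyncio, assuming a hypothetical `fake_deps` fixture that supplies test doubles for the four constructor arguments:

```python
import asyncio
import pytest
from e2e.pipeline import StreamingPipeline

@pytest.mark.asyncio
async def test_stop_cancels_tasks_and_drains_queues(fake_deps):
    pipeline = StreamingPipeline(*fake_deps)   # (tts, musetalk, publisher, avatar_assets)
    await pipeline.start()
    await pipeline.push_text("Hello world!")
    await asyncio.sleep(0.1)                   # let the workers pick it up
    await pipeline.stop()
    for q in (pipeline._text_queue, pipeline._tts_queue,
              pipeline._whisper_queue, pipeline._frame_queue):
        assert q.empty()                       # Bug 7: no stale data left
    for task in (pipeline._tts_task, pipeline._whisper_task,
                 pipeline._unet_task, pipeline._publish_task):
        assert task.done()                     # Bug 4: no orphaned tasks
```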
Expected Outcome
| Metric | Before | After |
|---|---|---|
| Inter-sentence gap | 200–500ms visible freeze | ≤40ms (one frame interval grace) |
| First-frame latency | ~150–200ms | ~150–200ms (unchanged) |
| Queue starvation | Frequent at punctuation | Never |
| GPU utilization between sentences | Drops to 0 | Continuous; Whisper pre-encodes the next fragment |
| Code complexity | 1 monolithic `_speak_text` loop | 4 focused, independent coroutines |
Implementation Files
The three-queue parallel pipeline has been implemented in the backend/e2e/ folder as a complete, self-contained alternative to the original backend/api/ pipeline.
Files Created
| File | Purpose | Lines |
|---|---|---|
| `backend/e2e/__init__.py` | Re-exports `StreamingPipeline` for easy import | 15 |
| `backend/e2e/pipeline.py` | Complete three-queue parallel `StreamingPipeline` implementation | 260 |
| `backend/e2e/server.py` | FastAPI server using the e2e pipeline (same port 8767) | 386 |
Key Differences from Original
| Aspect | Original `api/pipeline.py` | New `e2e/pipeline.py` |
|---|---|---|
| Architecture | Sequential `_speak_text` loop | 4 parallel coroutines + 3 queues |
| Queue handling | Single video queue (256 slots) | 3-stage pipeline: text → tts → whisper → frame queues |
| Inter-sentence gap | 200–500ms visible freeze | ≤40ms (one frame grace period) |
| Audio PTS | Always `0.0` | Monotonic sample counter |
| Task management | 1 idle task | 5 worker tasks (all cancelled on stop) |
| Queue draining | No draining on stop | All queues drained to prevent stale data |
Usage
Run the e2e server (same port as original):
cd backend && uvicorn e2e.server:app --host 0.0.0.0 --port 8767 --reload
# or
python -m e2e.server
Import the pipeline:
from e2e.pipeline import StreamingPipeline # instead of api.pipeline.StreamingPipeline
API is identical:
pipeline = StreamingPipeline(tts, musetalk, publisher, avatar_assets)
await pipeline.start()
await pipeline.push_text("Hello world!")
await pipeline.stop()
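For reference, a hedged sketch of how a /speak endpoint might wire into the pipeline. The real backend/e2e/server.py is the authoritative version; its handler shape and request model are not shown in this document.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
# app.state.pipeline is assumed to be set during startup, after the
# models and publisher are constructed.

class SpeakRequest(BaseModel):
    text: str

@app.post("/speak")
async def speak(req: SpeakRequest):
    await app.state.pipeline.push_text(req.text)  # non-blocking: just enqueues
    return {"status": "queued"}
```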
Testing Status
The implementation is complete and passes syntax validation. Ready for runtime testing to verify:
- Inter-sentence stutter elimination (ISSUE-19)
- No queue starvation between fragments
- Proper audio PTS tracking
- Clean shutdown without orphaned tasks
- Queue depth monitoring for pipeline health
Last updated: March 10, 2026