gnumanth committed
Commit ea31d8c · verified · 1 Parent(s): ba3164d

FastAPI + WebSocket streaming with newspaper UI

Files changed (5)
  1. Dockerfile +49 -0
  2. README.md +33 -19
  3. main.py +224 -0
  4. requirements.txt +7 -5
  5. static/index.html +511 -0
Dockerfile ADDED
@@ -0,0 +1,49 @@
+ # Dockerfile for Nemotron Speech Streaming
+ # FastAPI + WebSocket + NeMo ASR
+
+ FROM nvidia/cuda:12.1.0-runtime-ubuntu22.04
+
+ # Set environment variables
+ ENV PYTHONDONTWRITEBYTECODE=1 \
+     PYTHONUNBUFFERED=1 \
+     DEBIAN_FRONTEND=noninteractive
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     python3.10 \
+     python3-pip \
+     python3.10-dev \
+     ffmpeg \
+     libsndfile1 \
+     git \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Set Python 3.10 as default
+ RUN ln -sf /usr/bin/python3.10 /usr/bin/python3 && \
+     ln -sf /usr/bin/python3.10 /usr/bin/python
+
+ WORKDIR /app
+
+ # Install Python dependencies
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir --upgrade pip && \
+     pip install --no-cache-dir -r requirements.txt
+
+ # Copy application code
+ COPY main.py .
+ COPY static/ ./static/
+
+ # Create non-root user for security
+ RUN useradd --create-home --shell /bin/bash appuser && \
+     chown -R appuser:appuser /app
+ USER appuser
+
+ # Expose port
+ EXPOSE 7860
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
+     CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:7860/health')" || exit 1
+
+ # Run the application
+ CMD ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,11 +1,10 @@
  ---
  title: Nemotron Speech Streaming
- emoji: 🎙️
- colorFrom: green
- colorTo: blue
- sdk: gradio
- sdk_version: 4.44.0
- app_file: app.py
+ emoji: 📰
+ colorFrom: gray
+ colorTo: gray
+ sdk: docker
+ app_port: 7860
  pinned: false
  license: apache-2.0
  suggested_hardware: t4-small
@@ -13,22 +12,30 @@ models:
  - nvidia/nemotron-speech-streaming-en-0.6b
  ---

- # Nemotron Speech ASR
+ # The Daily Transcript

- Real-time English speech recognition powered by [NVIDIA Nemotron Speech Streaming](https://huggingface.co/nvidia/nemotron-speech-streaming-en-0.6b).
+ Real-time speech-to-text transcription powered by **NVIDIA Nemotron ASR**.

  ## Features

- - **600M parameter** FastConformer-CacheAware-RNNT model
- - **Streaming-optimized** architecture for low-latency transcription
- - **Automatic punctuation and capitalization**
- - Three input modes: File upload, Microphone recording, Live streaming
+ - **WebSocket-based streaming** for low-latency transcription
+ - **Newspaper-style UI** - clean black & white design
+ - **Real-time updates** - see your words appear as you speak
+ - **No Triton required** - uses the NeMo model directly with PyTorch

- ## Usage
+ ## How It Works

- 1. **File Upload**: Upload any audio file to get a transcription
- 2. **Microphone**: Record audio directly from your browser
- 3. **Live Streaming**: Real-time transcription as you speak
+ 1. Click "Start Recording" to begin
+ 2. Speak into your microphone
+ 3. Watch your words appear in real-time
+ 4. Click "Stop Recording" when done
+
+ ## Technical Details
+
+ - **Backend**: FastAPI with WebSocket support
+ - **Frontend**: Vanilla HTML/JS with the Web Audio API
+ - **Model**: nvidia/nemotron-speech-streaming-en-0.6b (600M parameters)
+ - **Audio**: 16kHz, mono, PCM16

  ## Model Details

@@ -36,8 +43,15 @@ Real-time English speech recognition powered by [NVIDIA Nemotron Speech Streamin
  - **Input**: 16kHz mono audio (minimum 80ms)
  - **Output**: English text with punctuation and capitalization

- ## Acknowledgments
+ ## Local Development

- Built with [NVIDIA NeMo](https://github.com/NVIDIA/NeMo) and [Gradio](https://gradio.app/).
+ ```bash
+ pip install -r requirements.txt
+ python main.py
+ ```
+
+ Then open http://localhost:7860 in your browser.
+
+ ## Acknowledgments

- # Rebuild trigger Sat Jan 10 23:15:06 PST 2026
+ Built with [NVIDIA NeMo](https://github.com/NVIDIA/NeMo) and FastAPI.
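The new README pins the wire format to 16 kHz mono PCM16, i.e. 32,000 bytes per second of audio. A small sketch, assuming only `numpy` (already in requirements.txt), that synthesizes one second of test audio in exactly that format:

```python
# One second of 16 kHz mono PCM16 test audio (a 440 Hz tone), matching the
# "16kHz, mono, PCM16" format described above.
import numpy as np

SAMPLE_RATE = 16_000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE             # 1 s of sample times
tone = 0.3 * np.sin(2 * np.pi * 440.0 * t)           # float samples in [-1, 1]
pcm16 = (tone * 32767).astype(np.int16).tobytes()    # 2 bytes per sample

assert len(pcm16) == SAMPLE_RATE * 2                 # 32,000 bytes per second
```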
main.py ADDED
@@ -0,0 +1,224 @@
+ """
+ FastAPI + WebSocket backend for real-time speech transcription.
+ Uses NeMo ASR model directly (no Triton required).
+ """
+
+ import asyncio
+ import json
+ import uuid
+ import sys
+ from pathlib import Path
+ from typing import Optional, AsyncIterator
+ from datetime import datetime
+
+ import numpy as np
+ import torch
+ from fastapi import FastAPI, WebSocket, WebSocketDisconnect
+ from fastapi.staticfiles import StaticFiles
+ from fastapi.responses import FileResponse
+ from loguru import logger
+
+ # Configure logging
+ logger.remove()
+ logger.add(
+     sys.stderr,
+     format="<green>{time:HH:mm:ss}</green> | <level>{level: <8}</level> | <level>{message}</level>",
+     level="INFO",
+ )
+
+ # Global model
+ ASR_MODEL = None
+ DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
+
+ def load_model():
+     """Load the NeMo ASR model."""
+     global ASR_MODEL
+
+     logger.info("Loading NeMo ASR Model...")
+     try:
+         import nemo.collections.asr as nemo_asr
+         ASR_MODEL = nemo_asr.models.ASRModel.from_pretrained(
+             model_name="nvidia/nemotron-speech-streaming-en-0.6b"
+         )
+         ASR_MODEL.eval()
+
+         if torch.cuda.is_available():
+             logger.info("Moving model to CUDA")
+             ASR_MODEL = ASR_MODEL.cuda()
+         else:
+             logger.warning("CUDA not available, using CPU (will be slow)")
+
+         logger.info("Model loaded successfully!")
+         return True
+     except Exception as e:
+         logger.error(f"Failed to load model: {e}")
+         return False
+
+
+ # Create FastAPI app
+ app = FastAPI(title="Nemotron Speech Streaming")
+
+
+ @app.on_event("startup")
+ async def startup():
+     """Load model on startup."""
+     load_model()
+
+
+ @app.get("/health")
+ async def health():
+     """Health check endpoint."""
+     return {
+         "status": "healthy",
+         "model_loaded": ASR_MODEL is not None,
+         "device": DEVICE,
+     }
+
+
+ @app.get("/")
+ async def root():
+     """Serve the frontend."""
+     return FileResponse(Path(__file__).parent / "static" / "index.html")
+
+
+ @app.websocket("/ws/transcribe")
+ async def websocket_transcribe(websocket: WebSocket):
+     """
+     WebSocket endpoint for streaming transcription.
+
+     Protocol:
+     - Client sends binary PCM audio data (16-bit, 16kHz, mono)
+     - Server sends JSON: {"type": "transcript", "text": "...", "is_final": bool}
+     """
+     await websocket.accept()
+
+     session_id = str(uuid.uuid4())[:8]
+     logger.info(f"[{session_id}] Client connected")
+
+     # Send ready message
+     await websocket.send_json({
+         "type": "ready",
+         "session_id": session_id,
+         "model_loaded": ASR_MODEL is not None,
+     })
+
+     if ASR_MODEL is None:
+         await websocket.send_json({
+             "type": "error",
+             "message": "Model not loaded. Please wait and try again.",
+         })
+         await websocket.close()
+         return
+
+     # Audio buffer
+     audio_buffer = np.array([], dtype=np.float32)
+     chunk_count = 0
+     last_transcript = ""
+
+     # Processing settings
+     MIN_AUDIO_LENGTH = 8000  # 0.5 seconds at 16kHz
+     MAX_AUDIO_LENGTH = 80000  # 5 seconds at 16kHz
+     PROCESS_EVERY_N_CHUNKS = 3  # Process every N chunks for efficiency
+
+     try:
+         while True:
+             message = await websocket.receive()
+
+             if message["type"] == "websocket.disconnect":
+                 break
+
+             # Handle binary audio data
+             if "bytes" in message:
+                 audio_bytes = message["bytes"]
+                 chunk_count += 1
+
+                 # Convert bytes to numpy array (expecting 16-bit PCM)
+                 audio_chunk = np.frombuffer(audio_bytes, dtype=np.int16).astype(np.float32) / 32768.0
+
+                 # Add to buffer
+                 audio_buffer = np.concatenate([audio_buffer, audio_chunk])
+
+                 # Log periodically
+                 if chunk_count % 20 == 0:
+                     logger.debug(f"[{session_id}] Chunks: {chunk_count}, Buffer: {len(audio_buffer)} samples")
+
+                 # Process when we have enough audio
+                 if len(audio_buffer) >= MIN_AUDIO_LENGTH and chunk_count % PROCESS_EVERY_N_CHUNKS == 0:
+                     # Use last N samples for context
+                     audio_context = audio_buffer[-MAX_AUDIO_LENGTH:] if len(audio_buffer) > MAX_AUDIO_LENGTH else audio_buffer
+
+                     try:
+                         with torch.no_grad():
+                             start_time = datetime.now()
+                             results = ASR_MODEL.transcribe([audio_context])
+                             inference_time = (datetime.now() - start_time).total_seconds() * 1000
+
+                         if results and len(results) > 0:
+                             hyp = results[0]
+
+                             # Extract text
+                             if isinstance(hyp, str):
+                                 text = hyp
+                             elif hasattr(hyp, 'text'):
+                                 text = hyp.text
+                             elif hasattr(hyp, 'pred_text'):
+                                 text = hyp.pred_text
+                             else:
+                                 text = str(hyp)
+
+                             text = text.strip()
+
+                             if text and text != last_transcript:
+                                 last_transcript = text
+                                 logger.info(f"[{session_id}] ({inference_time:.0f}ms) {text[:60]}...")
+
+                                 await websocket.send_json({
+                                     "type": "transcript",
+                                     "text": text,
+                                     "is_final": False,
+                                     "latency_ms": inference_time,
+                                 })
+
+                     except Exception as e:
+                         logger.error(f"[{session_id}] Inference error: {e}")
+
+                 # Trim buffer to prevent memory growth
+                 if len(audio_buffer) > MAX_AUDIO_LENGTH:
+                     audio_buffer = audio_buffer[-MAX_AUDIO_LENGTH:]
+
+             # Handle JSON control messages
+             elif "text" in message:
+                 try:
+                     data = json.loads(message["text"])
+                     msg_type = data.get("type")
+
+                     if msg_type == "reset":
+                         audio_buffer = np.array([], dtype=np.float32)
+                         chunk_count = 0
+                         last_transcript = ""
+                         logger.info(f"[{session_id}] Session reset")
+                         await websocket.send_json({"type": "reset_ack"})
+
+                     elif msg_type == "ping":
+                         await websocket.send_json({"type": "pong"})
+
+                 except json.JSONDecodeError:
+                     pass
+
+     except WebSocketDisconnect:
+         logger.info(f"[{session_id}] Client disconnected")
+     except Exception as e:
+         logger.error(f"[{session_id}] WebSocket error: {e}")
+     finally:
+         logger.info(f"[{session_id}] Session ended (processed {chunk_count} chunks)")
+
+
+ # Mount static files
+ static_path = Path(__file__).parent / "static"
+ if static_path.exists():
+     app.mount("/static", StaticFiles(directory=str(static_path)), name="static")
+
+
+ if __name__ == "__main__":
+     import uvicorn
+     uvicorn.run(app, host="0.0.0.0", port=7860)
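For a sense of cadence: the frontend ships 4096-sample buffers, which at 16 kHz is 256 ms of audio per chunk, so with `PROCESS_EVERY_N_CHUNKS = 3` the server re-transcribes roughly every 768 ms. Below is a minimal client sketch for the protocol in the `/ws/transcribe` docstring; it assumes a server on localhost:7860 and uses the `websockets` package from requirements.txt, with a silence payload purely as illustration:

```python
import asyncio
import json

import websockets  # already pinned in requirements.txt


async def stream_pcm16(chunks):
    """Send raw PCM16 (16 kHz, mono) chunks and print transcript updates."""
    async with websockets.connect("ws://localhost:7860/ws/transcribe") as ws:
        ready = json.loads(await ws.recv())           # server greets with "ready"
        print("session:", ready["session_id"])
        for chunk in chunks:
            await ws.send(chunk)                      # binary frame -> audio
        await ws.send(json.dumps({"type": "ping"}))   # text frame -> control
        while True:
            msg = json.loads(await ws.recv())
            if msg["type"] == "transcript":
                print(f'({msg["latency_ms"]:.0f}ms) {msg["text"]}')
            elif msg["type"] == "pong":               # everything sent before the ping was consumed
                break


# Illustration only: ten 100 ms chunks of silence (1600 samples x 2 bytes each).
silence = [b"\x00" * 3200 for _ in range(10)]
asyncio.run(stream_pcm16(silence))
```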
requirements.txt CHANGED
@@ -1,9 +1,11 @@
- gradio>=4.44.0
- spaces
+ fastapi>=0.104.0
+ uvicorn[standard]>=0.24.0
+ websockets>=12.0
+ loguru>=0.7.0
+ numpy>=1.24.0
  torch>=2.0.0
- numpy
- librosa
- soundfile
+ librosa>=0.10.0
+ soundfile>=0.12.0
  nemo_toolkit[asr] @ git+https://github.com/NVIDIA/NeMo.git@main
  Cython
  packaging
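A quick import-level sanity check after `pip install -r requirements.txt`, assuming a local virtualenv (NeMo itself is imported lazily inside `load_model()` in main.py):

```python
# Confirm the core runtime deps resolve and report the device main.py would pick.
import fastapi, uvicorn, websockets, loguru, numpy, torch

print("torch", torch.__version__, "| cuda available:", torch.cuda.is_available())
```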
static/index.html ADDED
@@ -0,0 +1,511 @@
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>The Daily Transcript</title>
+     <link href="https://fonts.googleapis.com/css2?family=Playfair+Display:wght@400;700;900&family=Source+Serif+Pro:wght@400;600&family=IBM+Plex+Mono&display=swap" rel="stylesheet">
+     <style>
+         :root {
+             --bg: #f5f5f0;
+             --text: #1a1a1a;
+             --text-light: #666;
+             --border: #1a1a1a;
+             --accent: #1a1a1a;
+         }
+
+         * {
+             margin: 0;
+             padding: 0;
+             box-sizing: border-box;
+         }
+
+         body {
+             font-family: 'Source Serif Pro', Georgia, serif;
+             background: var(--bg);
+             color: var(--text);
+             min-height: 100vh;
+             line-height: 1.6;
+         }
+
+         /* Newspaper Header */
+         .masthead {
+             text-align: center;
+             padding: 30px 20px 20px;
+             border-bottom: 3px double var(--border);
+             margin-bottom: 20px;
+         }
+
+         .masthead h1 {
+             font-family: 'Playfair Display', Georgia, serif;
+             font-size: clamp(2.5rem, 8vw, 4.5rem);
+             font-weight: 900;
+             letter-spacing: -0.02em;
+             text-transform: uppercase;
+             margin-bottom: 5px;
+         }
+
+         .masthead .tagline {
+             font-style: italic;
+             color: var(--text-light);
+             font-size: 1rem;
+             margin-bottom: 10px;
+         }
+
+         .masthead .edition {
+             font-family: 'IBM Plex Mono', monospace;
+             font-size: 0.75rem;
+             color: var(--text-light);
+             border-top: 1px solid var(--border);
+             border-bottom: 1px solid var(--border);
+             padding: 8px 0;
+             margin-top: 15px;
+             display: flex;
+             justify-content: space-between;
+             max-width: 600px;
+             margin-left: auto;
+             margin-right: auto;
+         }
+
+         /* Main Content */
+         .container {
+             max-width: 800px;
+             margin: 0 auto;
+             padding: 0 20px 40px;
+         }
+
+         /* Status Bar */
+         .status-bar {
+             display: flex;
+             justify-content: space-between;
+             align-items: center;
+             padding: 10px 0;
+             border-bottom: 1px solid var(--border);
+             margin-bottom: 30px;
+             font-family: 'IBM Plex Mono', monospace;
+             font-size: 0.8rem;
+         }
+
+         .status-indicator {
+             display: flex;
+             align-items: center;
+             gap: 8px;
+         }
+
+         .status-dot {
+             width: 10px;
+             height: 10px;
+             border-radius: 50%;
+             background: #ccc;
+             border: 1px solid var(--border);
+         }
+
+         .status-dot.connected {
+             background: #1a1a1a;
+             animation: pulse 2s infinite;
+         }
+
+         .status-dot.recording {
+             background: #1a1a1a;
+             animation: pulse 0.5s infinite;
+         }
+
+         @keyframes pulse {
+             0%, 100% { opacity: 1; }
+             50% { opacity: 0.3; }
+         }
+
+         /* Transcript Area */
+         .transcript-section {
+             margin-bottom: 40px;
+         }
+
+         .section-header {
+             font-family: 'Playfair Display', Georgia, serif;
+             font-size: 0.9rem;
+             font-weight: 700;
+             text-transform: uppercase;
+             letter-spacing: 0.1em;
+             border-bottom: 2px solid var(--border);
+             padding-bottom: 5px;
+             margin-bottom: 20px;
+         }
+
+         .transcript-box {
+             min-height: 200px;
+             padding: 30px;
+             background: #fff;
+             border: 1px solid var(--border);
+             position: relative;
+         }
+
+         .transcript-box::before {
+             content: '"';
+             font-family: 'Playfair Display', Georgia, serif;
+             font-size: 4rem;
+             position: absolute;
+             top: 10px;
+             left: 20px;
+             color: #ddd;
+             line-height: 1;
+         }
+
+         .transcript-text {
+             font-family: 'Source Serif Pro', Georgia, serif;
+             font-size: 1.5rem;
+             line-height: 1.8;
+             text-align: justify;
+             hyphens: auto;
+             padding-left: 40px;
+         }
+
+         .transcript-text.placeholder {
+             color: var(--text-light);
+             font-style: italic;
+         }
+
+         .transcript-text .cursor {
+             display: inline-block;
+             width: 2px;
+             height: 1.2em;
+             background: var(--text);
+             margin-left: 2px;
+             animation: blink 1s infinite;
+             vertical-align: text-bottom;
+         }
+
+         @keyframes blink {
+             0%, 50% { opacity: 1; }
+             51%, 100% { opacity: 0; }
+         }
+
+         /* Controls */
+         .controls {
+             display: flex;
+             justify-content: center;
+             gap: 20px;
+             margin-top: 30px;
+         }
+
+         .btn {
+             font-family: 'IBM Plex Mono', monospace;
+             font-size: 0.85rem;
+             padding: 15px 40px;
+             border: 2px solid var(--border);
+             background: var(--bg);
+             color: var(--text);
+             cursor: pointer;
+             text-transform: uppercase;
+             letter-spacing: 0.1em;
+             transition: all 0.2s ease;
+         }
+
+         .btn:hover {
+             background: var(--text);
+             color: var(--bg);
+         }
+
+         .btn:disabled {
+             opacity: 0.3;
+             cursor: not-allowed;
+         }
+
+         .btn.primary {
+             background: var(--text);
+             color: var(--bg);
+         }
+
+         .btn.primary:hover {
+             background: var(--bg);
+             color: var(--text);
+         }
+
+         .btn.recording {
+             background: var(--text);
+             color: var(--bg);
+             animation: pulse 0.5s infinite;
+         }
+
+         /* Footer */
+         .footer {
+             text-align: center;
+             padding: 20px;
+             border-top: 1px solid var(--border);
+             margin-top: 40px;
+             font-family: 'IBM Plex Mono', monospace;
+             font-size: 0.75rem;
+             color: var(--text-light);
+         }
+
+         /* Latency display */
+         .latency {
+             font-family: 'IBM Plex Mono', monospace;
+             font-size: 0.7rem;
+             color: var(--text-light);
+             text-align: right;
+             margin-top: 10px;
+         }
+
+         /* Responsive */
+         @media (max-width: 600px) {
+             .masthead h1 {
+                 font-size: 2rem;
+             }
+
+             .transcript-text {
+                 font-size: 1.2rem;
+                 padding-left: 30px;
+             }
+
+             .transcript-box::before {
+                 font-size: 3rem;
+             }
+
+             .controls {
+                 flex-direction: column;
+             }
+
+             .btn {
+                 width: 100%;
+             }
+         }
+     </style>
+ </head>
+ <body>
+     <header class="masthead">
+         <h1>The Daily Transcript</h1>
+         <p class="tagline">All the Words That's Fit to Transcribe</p>
+         <div class="edition">
+             <span id="date"></span>
+             <span id="session-id">Connecting...</span>
+             <span id="time"></span>
+         </div>
+     </header>
+
+     <main class="container">
+         <div class="status-bar">
+             <div class="status-indicator">
+                 <div class="status-dot" id="status-dot"></div>
+                 <span id="status-text">Connecting...</span>
+             </div>
+             <div id="latency-display"></div>
+         </div>
+
+         <section class="transcript-section">
+             <h2 class="section-header">Live Transcription</h2>
+             <div class="transcript-box">
+                 <p class="transcript-text placeholder" id="transcript">
+                     Press the button below to begin recording. Your words will appear here as you speak.
+                 </p>
+             </div>
+             <div class="latency" id="latency-info"></div>
+         </section>
+
+         <div class="controls">
+             <button class="btn primary" id="record-btn" disabled>Start Recording</button>
+             <button class="btn" id="clear-btn">Clear</button>
+         </div>
+
+         <footer class="footer">
+             <p>Powered by NVIDIA Nemotron ASR &bull; Real-time Speech Recognition</p>
+         </footer>
+     </main>
+
+     <script>
+         // Elements
+         const statusDot = document.getElementById('status-dot');
+         const statusText = document.getElementById('status-text');
+         const transcriptEl = document.getElementById('transcript');
+         const recordBtn = document.getElementById('record-btn');
+         const clearBtn = document.getElementById('clear-btn');
+         const latencyInfo = document.getElementById('latency-info');
+         const sessionIdEl = document.getElementById('session-id');
+         const dateEl = document.getElementById('date');
+         const timeEl = document.getElementById('time');
+         const latencyDisplay = document.getElementById('latency-display');
+
+         // State
+         let ws = null;
+         let audioContext = null;
+         let mediaStream = null;
+         let processor = null;
+         let isRecording = false;
+         let currentTranscript = '';
+
+         // Update date/time
+         function updateDateTime() {
+             const now = new Date();
+             dateEl.textContent = now.toLocaleDateString('en-US', {
+                 weekday: 'long',
+                 year: 'numeric',
+                 month: 'long',
+                 day: 'numeric'
+             });
+             timeEl.textContent = now.toLocaleTimeString('en-US', {
+                 hour: '2-digit',
+                 minute: '2-digit'
+             });
+         }
+         updateDateTime();
+         setInterval(updateDateTime, 1000);
+
+         // Connect WebSocket
+         function connect() {
+             const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
+             ws = new WebSocket(`${protocol}//${window.location.host}/ws/transcribe`);
+
+             ws.onopen = () => {
+                 console.log('WebSocket connected');
+             };
+
+             ws.onmessage = (event) => {
+                 const data = JSON.parse(event.data);
+
+                 switch (data.type) {
+                     case 'ready':
+                         statusDot.className = 'status-dot connected';
+                         statusText.textContent = 'Ready';
+                         sessionIdEl.textContent = `Session: ${data.session_id}`;
+                         recordBtn.disabled = false;
+                         break;
+
+                     case 'transcript':
+                         currentTranscript = data.text;
+                         updateTranscript();
+                         if (data.latency_ms) {
+                             latencyDisplay.textContent = `${Math.round(data.latency_ms)}ms`;
+                         }
+                         break;
+
+                     case 'error':
+                         statusText.textContent = `Error: ${data.message}`;
+                         break;
+
+                     case 'reset_ack':
+                         currentTranscript = '';
+                         updateTranscript();
+                         break;
+                 }
+             };
+
+             ws.onclose = () => {
+                 statusDot.className = 'status-dot';
+                 statusText.textContent = 'Disconnected';
+                 recordBtn.disabled = true;
+                 // Reconnect after 2 seconds
+                 setTimeout(connect, 2000);
+             };
+
+             ws.onerror = (error) => {
+                 console.error('WebSocket error:', error);
+             };
+         }
+
+         // Update transcript display
+         function updateTranscript() {
+             if (currentTranscript) {
+                 transcriptEl.className = 'transcript-text';
+                 transcriptEl.innerHTML = currentTranscript + (isRecording ? '<span class="cursor"></span>' : '');
+             } else {
+                 transcriptEl.className = 'transcript-text placeholder';
+                 transcriptEl.textContent = 'Press the button below to begin recording. Your words will appear here as you speak.';
+             }
+         }
+
+         // Start recording
+         async function startRecording() {
+             try {
+                 // Get microphone access
+                 mediaStream = await navigator.mediaDevices.getUserMedia({
+                     audio: {
+                         sampleRate: 16000,
+                         channelCount: 1,
+                         echoCancellation: true,
+                         noiseSuppression: true,
+                     }
+                 });
+
+                 // Create audio context
+                 audioContext = new AudioContext({ sampleRate: 16000 });
+                 const source = audioContext.createMediaStreamSource(mediaStream);
+
+                 // Create script processor for capturing audio
+                 processor = audioContext.createScriptProcessor(4096, 1, 1);
+
+                 processor.onaudioprocess = (e) => {
+                     if (ws && ws.readyState === WebSocket.OPEN) {
+                         const inputData = e.inputBuffer.getChannelData(0);
+                         // Convert float32 to int16
+                         const int16Data = new Int16Array(inputData.length);
+                         for (let i = 0; i < inputData.length; i++) {
+                             int16Data[i] = Math.max(-32768, Math.min(32767, inputData[i] * 32768));
+                         }
+                         ws.send(int16Data.buffer);
+                     }
+                 };
+
+                 source.connect(processor);
+                 processor.connect(audioContext.destination);
+
+                 isRecording = true;
+                 recordBtn.textContent = 'Stop Recording';
+                 recordBtn.className = 'btn recording';
+                 statusDot.className = 'status-dot recording';
+                 statusText.textContent = 'Recording...';
+                 updateTranscript();
+
+             } catch (error) {
+                 console.error('Error starting recording:', error);
+                 statusText.textContent = 'Microphone access denied';
+             }
+         }
+
+         // Stop recording
+         function stopRecording() {
+             if (processor) {
+                 processor.disconnect();
+                 processor = null;
+             }
+             if (audioContext) {
+                 audioContext.close();
+                 audioContext = null;
+             }
+             if (mediaStream) {
+                 mediaStream.getTracks().forEach(track => track.stop());
+                 mediaStream = null;
+             }
+
+             isRecording = false;
+             recordBtn.textContent = 'Start Recording';
+             recordBtn.className = 'btn primary';
+             statusDot.className = 'status-dot connected';
+             statusText.textContent = 'Ready';
+             updateTranscript();
+         }
+
+         // Clear transcript
+         function clearTranscript() {
+             currentTranscript = '';
+             updateTranscript();
+             latencyDisplay.textContent = '';
+             if (ws && ws.readyState === WebSocket.OPEN) {
+                 ws.send(JSON.stringify({ type: 'reset' }));
+             }
+         }
+
+         // Event listeners
+         recordBtn.addEventListener('click', () => {
+             if (isRecording) {
+                 stopRecording();
+             } else {
+                 startRecording();
+             }
+         });
+
+         clearBtn.addEventListener('click', clearTranscript);
+
+         // Start connection
+         connect();
+     </script>
+ </body>
+ </html>