---
title: RADHA
emoji: 💜
colorFrom: purple
colorTo: pink
sdk: docker
pinned: false
---


# R.A.D.H.A - Responsive And Deeply Human Assistant

An intelligent AI assistant built with FastAPI, LangChain, Groq AI, and a modern glass-morphism web UI. RADHA provides two chat modes (General and Realtime with web search), streaming responses, text-to-speech, voice input, and learns from your personal data files. Everything runs on one server with one command.

---

## Table of Contents

- [Quick Start](#quick-start)
- [Features](#features)
- [How It Works (Full Workflow)](#how-it-works-full-workflow)
- [Architecture](#architecture)
- [Project Structure](#project-structure)
- [API Endpoints](#api-endpoints)
- [Configuration](#configuration)
- [Technologies Used](#technologies-used)
- [Frontend Guide](#frontend-guide)
- [Troubleshooting](#troubleshooting)
- [Developer](#developer)

---

## Quick Start

### Prerequisites

- **Python 3.10+** with pip
- **OS**: Windows, macOS, or Linux
- **API Keys** (set in `.env` file):
  - `GROQ_API_KEY` (required) - Get from https://console.groq.com  
    You can use **multiple Groq API keys** (`GROQ_API_KEY_2`, `GROQ_API_KEY_3`, ...) for automatic fallback when one hits rate limits or fails.
  - `TAVILY_API_KEY` (optional, for Realtime mode) - Get from https://tavily.com

### Installation

1. **Clone or download** this repository.

2. **Install dependencies**:

```bash
pip install -r requirements.txt
```

3. **Create a `.env` file** in the project root:

```env
GROQ_API_KEY=your_groq_api_key_here
# Optional: multiple keys for fallback when one hits rate limit
# GROQ_API_KEY_2=second_key
# GROQ_API_KEY_3=third_key
TAVILY_API_KEY=your_tavily_api_key_here

# Optional
GROQ_MODEL=llama-3.3-70b-versatile
ASSISTANT_NAME=Radha
RADHA_USER_TITLE=Sir
TTS_VOICE=en-IN-NeerjaNeural
TTS_RATE=+22%
```

4. **Start the server**:

```bash
python run.py
```

5. **Open in browser**: http://localhost:8000

That's it. The server hosts both the API and the frontend on port 8000.

---

## Features

### Chat Modes

- **General Mode**: Pure LLM responses using Groq AI. Uses your learning data and conversation history as context. No internet access.
- **Realtime Mode**: Searches the web via Tavily before answering. Smart query extraction converts messy conversational text into focused search queries. Uses advanced search depth with AI-synthesized answers.

### Text-to-Speech (TTS)

- Server-side TTS using `edge-tts` (Microsoft Edge's free cloud TTS, no API key needed).
- Audio is generated on the server and streamed inline with text chunks via SSE.
- Sentences are detected in real time as text streams in, converted to speech in background threads (ThreadPoolExecutor), and sent to the client as base64 MP3.
- The client plays audio segments sequentially in a queue, so speech starts as soon as the first sentence is ready, not after the full response.
- Works on all devices including iOS (uses a persistent `<audio>` element with AudioContext unlock).

### Voice Input

- Browser-native speech recognition (Web Speech API).
- Speak your question, and it auto-sends when you finish.

### Learning System

- Put `.txt` files in `database/learning_data/` with any personal information, preferences, or context.
- Past conversations are saved as JSON in `database/chats_data/`.
- At startup, all learning data and past chats are chunked, embedded with HuggingFace sentence-transformers, and stored in a FAISS vector index.
- For each question, only the most relevant chunks are retrieved (semantic search) and sent to the LLM. This keeps token usage bounded no matter how much data you add.

### Session Persistence

- Conversations are saved to disk after each message and survive server restarts.
- General and Realtime modes share the same session, so context carries over between modes.

### Multi-Key API Fallback

- Configure multiple Groq API keys (`GROQ_API_KEY`, `GROQ_API_KEY_2`, `GROQ_API_KEY_3`, ...).
- Primary-first: every request tries the first key. If it fails (rate limit, timeout), the next key is tried automatically.
- Each key gets one retry for transient failures before falling back.
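
In code, the primary-first loop looks roughly like this sketch. The function names are hypothetical; the real implementation lives in `app/services/groq_service.py`:

```python
# Illustrative sketch of the primary-first fallback loop; names here are
# hypothetical, the real implementation is in app/services/groq_service.py.
import os

def load_groq_keys() -> list[str]:
    """Collect GROQ_API_KEY, GROQ_API_KEY_2, ... from the environment."""
    keys = [os.getenv("GROQ_API_KEY", "")]
    i = 2
    while key := os.getenv(f"GROQ_API_KEY_{i}"):
        keys.append(key)
        i += 1
    return [k for k in keys if k]

def call_with_fallback(keys, make_request):
    """Try each key in order, giving each one retry for transient failures."""
    last_error = None
    for key in keys:
        for _attempt in range(2):  # first try + one retry per key
            try:
                return make_request(key)
            except Exception as exc:  # rate limit, timeout, transient error
                last_error = exc
    raise RuntimeError("all Groq API keys failed") from last_error
```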

### Frontend

- Dark glass-morphism UI with animated WebGL orb in the background.
- The orb animates when the AI is speaking (TTS playing) and stays subtle when idle.
- Responsive: works on desktop, tablets, and mobile (including iOS safe area handling).
- No build tools, no frameworks: vanilla HTML/CSS/JS.

---

## How It Works (Full Workflow)

This section explains the complete journey of a user's message from the moment they press Send to the moment they hear the AI speak.

### Step 1: User Sends a Message

The user types a question (or speaks it via voice input) and presses Send. The frontend (`script.js`) does the following:

1. Captures the text from the textarea.
2. Adds the user's message bubble to the chat UI.
3. Shows a typing indicator (three bouncing dots).
4. If TTS is enabled, unlocks the audio context (required on iOS for programmatic playback).
5. Sends a `POST` request to the backend with `{ message, session_id, tts }`.

The endpoint depends on the mode:
- **General**: `POST /chat/stream`
- **Realtime**: `POST /chat/realtime/stream`

### Step 2: Backend Receives the Request (app/main.py)

FastAPI validates the request body using the `ChatRequest` Pydantic model (checks message length 1-32,000 chars). The endpoint handler:

1. Gets or creates a session via `ChatService.get_or_create_session()`.
2. Calls `ChatService.process_message_stream()` (general) or `process_realtime_message_stream()` (realtime), which returns a chunk iterator.
3. Wraps the iterator in `_stream_generator()` and returns a `StreamingResponse` with `media_type="text/event-stream"`.
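
For reference, the validation described above corresponds to a Pydantic model along these lines (a sketch; the authoritative definition is in `app/models.py`):

```python
# Sketch of the request model; the authoritative version is app/models.py.
from pydantic import BaseModel, Field

class ChatRequest(BaseModel):
    message: str = Field(min_length=1, max_length=32_000)
    session_id: str | None = None  # omit to start a new session
    tts: bool = False              # True: interleave audio events in the SSE stream
```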

### Step 3: Session Management (app/services/chat_service.py)

`ChatService` manages all conversation state:

1. If no `session_id` is provided, generates a new UUID.
2. If a `session_id` is provided, checks in-memory first, then tries loading from disk (`database/chats_data/chat_{id}.json`).
3. Validates the session ID (no path traversal, max 255 chars).
4. Adds the user's message to the session's message list.
5. Formats conversation history into `(user, assistant)` pairs, capped at `MAX_CHAT_HISTORY_TURNS` (default 20) to keep the prompt within token limits.
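
A condensed sketch of that lookup flow (hypothetical helper names; see `app/services/chat_service.py` for the real implementation):

```python
# Sketch of the session lookup flow; helper names are hypothetical.
import json
import uuid
from pathlib import Path

CHATS_DIR = Path("database/chats_data")

def is_valid_session_id(session_id: str) -> bool:
    """Reject path traversal and oversized IDs before touching the disk."""
    return 0 < len(session_id) <= 255 and not any(
        bad in session_id for bad in ("..", "/", "\\")
    )

def get_or_create_session(sessions: dict, session_id: str | None) -> str:
    if session_id is None:
        new_id = str(uuid.uuid4())
        sessions[new_id] = {"messages": []}
        return new_id
    if not is_valid_session_id(session_id):
        raise ValueError("invalid session id")
    if session_id not in sessions:                    # in-memory miss,
        path = CHATS_DIR / f"chat_{session_id}.json"  # so try disk next
        if path.exists():
            sessions[session_id] = json.loads(path.read_text(encoding="utf-8"))
        else:
            sessions[session_id] = {"messages": []}
    return session_id
```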

### Step 4: Context Retrieval (app/services/vector_store.py)

Before generating a response, the system retrieves relevant context:

1. The user's question is embedded into a vector using the HuggingFace sentence-transformers model (runs locally, no API key needed).
2. FAISS performs a nearest-neighbor search against the vector store (which contains chunks from learning data `.txt` files and past conversations).
3. The top 10 most similar chunks are returned.
4. These chunks are escaped (curly braces doubled for LangChain) and added to the system message.
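
A minimal sketch of the retrieval step, assuming LangChain's FAISS wrapper and HuggingFace embeddings (exact import paths depend on the installed LangChain packages):

```python
# Sketch of the retrieval call; k=10 matches the "top 10" described above.
from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
store = FAISS.load_local(
    "database/vector_store", embeddings, allow_dangerous_deserialization=True
)

docs = store.similarity_search("What projects am I working on?", k=10)
context = "\n\n".join(doc.page_content for doc in docs)
# Double the braces so LangChain prompt templates treat them as literals:
context = context.replace("{", "{{").replace("}", "}}")
```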

### Step 5a: General Mode (app/services/groq_service.py)

For general chat:

1. `_build_prompt_and_messages()` assembles the system message:
   - Base personality prompt (from `config.py`)
   - Current date and time
   - Retrieved context chunks from the vector store
   - General mode addendum ("answer from your knowledge, no web search")
2. The prompt is sent to Groq AI via LangChain's `ChatGroq` with streaming enabled.
3. Tokens arrive one by one and are yielded as an iterator.
4. If the first API key fails (rate limit, timeout), the system automatically tries the next key.
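
The streaming call itself, reduced to its essentials with `langchain-groq` (the system message content here is illustrative; the real prompt is assembled by `_build_prompt_and_messages()`):

```python
# Sketch of the streaming LLM call via langchain-groq.
import os

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_groq import ChatGroq

llm = ChatGroq(
    model="llama-3.3-70b-versatile",
    api_key=os.environ["GROQ_API_KEY"],
)

messages = [
    SystemMessage(content="You are Radha. <date/time> <retrieved context> "
                          "Answer from your knowledge; no web search."),
    HumanMessage(content="What is Python?"),
]

for chunk in llm.stream(messages):  # tokens arrive one by one
    print(chunk.content, end="", flush=True)
```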

### Step 5b: Realtime Mode (app/services/realtime_service.py)

For realtime chat, three additional steps happen before calling Groq:

1. **Query Extraction**: A fast LLM call (with `max_tokens=50`, `temperature=0`) converts the user's raw conversational text into a clean search query. Example: "tell me about that website I mentioned" becomes "Radha for Everyone website". It uses the last 3 conversation turns to resolve references like "that", "him", "it".

2. **Tavily Web Search**: The clean query is sent to Tavily's advanced search API:
   - `search_depth="advanced"` for thorough results
   - `include_answer=True` so Tavily's AI synthesizes a direct answer
   - Up to 7 results with relevance scores

3. **Result Formatting**: Search results are structured with clear headers:
   - AI-synthesized answer (marked as primary source)
   - Individual sources with title, content, URL, and relevance score

4. These results are injected into the system message before the Realtime mode addendum (which explicitly instructs the LLM to USE the search data).
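
A sketch of the extraction and search calls, assuming the `tavily-python` client and `langchain-groq` (the prompt wording is illustrative):

```python
# Sketch of query extraction + Tavily search; prompt text is illustrative.
import os

from langchain_groq import ChatGroq
from tavily import TavilyClient

# Cheap, deterministic extraction call: short output, temperature 0.
extractor = ChatGroq(
    model="llama-3.3-70b-versatile",
    api_key=os.environ["GROQ_API_KEY"],
    temperature=0,
    max_tokens=50,
)
query = extractor.invoke(
    "Rewrite this message as a short web search query, resolving references "
    "from the recent turns.\nMessage: tell me about that website I mentioned"
).content.strip()

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])
results = client.search(
    query,
    search_depth="advanced",  # thorough results
    include_answer=True,      # Tavily's AI synthesizes a direct answer
    max_results=7,
)
print(results["answer"])      # primary source for the system message
for r in results["results"]:
    print(r["title"], r["url"], r["score"])
```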

### Step 6: Streaming with Inline TTS (app/main.py - _stream_generator)

The `_stream_generator` function is the core of the streaming + TTS pipeline:

1. **Text chunks are yielded immediately** as SSE events (`data: {"chunk": "...", "done": false}`). The frontend displays them in real time; TTS never blocks text display.

2. If TTS is enabled, the generator also:
   a. Accumulates text in a buffer.
   b. Splits the buffer into sentences at punctuation boundaries (`. ! ? , ; :`).
   c. Merges short fragments to avoid choppy speech.
   d. Submits each sentence to a `ThreadPoolExecutor` (4 workers) for background TTS generation via `edge-tts`.
   e. Checks the front of the audio queue for completed TTS jobs and yields them as `data: {"audio": "<base64 MP3>"}` events, in order and without blocking.

3. When the LLM stream ends, any remaining buffered text is flushed and all pending TTS futures are awaited (with a 15-second timeout per sentence).

4. Final event: `data: {"chunk": "", "done": true, "session_id": "..."}`.
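
A condensed sketch of that pipeline, assuming `edge-tts` (the real `_stream_generator` also merges short fragments and wraps each dict in an SSE `data:` frame):

```python
# Condensed sketch of the inline TTS pipeline; a simplification of the
# real _stream_generator in app/main.py.
import asyncio
import base64
import re
from concurrent.futures import ThreadPoolExecutor

import edge_tts

executor = ThreadPoolExecutor(max_workers=4)
SENTENCE_END = re.compile(r"(?<=[.!?,;:])\s+")

def synthesize(sentence: str) -> str:
    """Run edge-tts in a worker thread; return base64-encoded MP3."""
    async def _run() -> bytes:
        audio = b""
        tts = edge_tts.Communicate(sentence, "en-IN-NeerjaNeural", rate="+22%")
        async for chunk in tts.stream():
            if chunk["type"] == "audio":
                audio += chunk["data"]
        return audio
    return base64.b64encode(asyncio.run(_run())).decode()

def stream_with_tts(token_iter):
    """Yield text chunks immediately; yield audio per finished sentence."""
    buffer, pending = "", []
    for token in token_iter:
        yield {"chunk": token, "done": False}   # text never waits on TTS
        buffer += token
        *complete, buffer = SENTENCE_END.split(buffer)
        for sentence in complete:               # schedule finished sentences
            pending.append(executor.submit(synthesize, sentence))
        while pending and pending[0].done():    # emit ready audio, in order
            yield {"audio": pending.pop(0).result()}
    if buffer.strip():                          # flush the tail
        pending.append(executor.submit(synthesize, buffer))
    for future in pending:
        yield {"audio": future.result(timeout=15)}
    yield {"chunk": "", "done": True}
```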

### Step 7: Frontend Receives the Stream (frontend/script.js)

The frontend reads the SSE stream with `fetch()` + `ReadableStream`:

1. **Text chunks** (`data.chunk`): Appended to the message bubble in real time. A blinking cursor appears during streaming.
2. **Audio events** (`data.audio`): Passed to `TTSPlayer.enqueue()`, which adds the base64 MP3 to a playback queue.
3. **Done event** (`data.done`): Streaming is complete. The cursor is removed.

### Step 8: TTS Playback (frontend/script.js - TTSPlayer)

The `TTSPlayer` manages audio playback:

1. `enqueue(base64Audio)` adds audio to the queue and starts `_playLoop()` if not already running.
2. `_playLoop()` plays segments sequentially: converts base64 to a data URL, sets it as the `<audio>` element's source, plays it, and waits for `onended` before playing the next segment.
3. When audio starts playing, the orb's `.speaking` class and WebGL animation are activated.
4. When all segments finish (or the user mutes TTS), the orb returns to its idle state.

### Step 9: Session Save (app/services/chat_service.py)

After the stream completes:

1. The full assistant response (accumulated from all chunks) is saved in the session.
2. The session is written to `database/chats_data/chat_{id}.json`.
3. During streaming, the session is also saved every 5 chunks for durability.

### Step 10: Next Startup

When the server restarts:

1. All `.txt` files in `database/learning_data/` are loaded.
2. All `.json` files in `database/chats_data/` (past conversations) are loaded.
3. Everything is chunked, embedded, and indexed in the FAISS vector store.
4. New conversations benefit from all previous context.
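
A sketch of that indexing pass, assuming LangChain's text splitter and FAISS wrapper; the chunk sizes shown are illustrative, not the project's actual values:

```python
# Sketch of the startup indexing pass; chunk sizes are illustrative, and
# the real logic lives in app/services/vector_store.py.
from pathlib import Path

from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

texts = [
    p.read_text(encoding="utf-8")
    for p in Path("database/learning_data").glob("*.txt")
]
# Past conversations from database/chats_data/*.json are flattened to plain
# text and appended here as well in the real implementation.

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.create_documents(texts)

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
store = FAISS.from_documents(chunks, embeddings)
store.save_local("database/vector_store")
```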

---

## Architecture

```
User (Browser)
    |
    |  HTTP POST (JSON) + SSE response stream
    v
+--------------------------------------------------+
|  FastAPI Application  (app/main.py)              |
|  - CORS middleware                               |
|  - Timing middleware (logs all requests)         |
|  - _stream_generator (SSE + inline TTS)          |
+--------------------------------------------------+
    |                           |
    v                           v
+------------------+   +------------------------+
|  ChatService     |   |  TTS Thread Pool       |
|  (chat_service)  |   |  (4 workers, edge-tts) |
|  - Sessions      |   +------------------------+
|  - History       |
|  - Disk I/O      |
+------------------+
    |
    v
+------------------+   +------------------------+
|  GroqService     |   |  RealtimeGroqService   |
|  (groq_service)  |   |  (realtime_service)    |
|  - General chat  |   |  - Query extraction    |
|  - Multi-key     |   |  - Tavily web search   |
|  - LangChain     |   |  - Extends GroqService |
+------------------+   +------------------------+
    |                           |
    v                           v
+--------------------------------------------------+
|  VectorStoreService  (vector_store.py)           |
|  - FAISS index (learning data + past chats)      |
|  - HuggingFace embeddings (local, no API key)    |
|  - Semantic search: returns top-k chunks         |
+--------------------------------------------------+
    |
    v
+--------------------------------------------------+
|  Groq Cloud API  (LLM inference)                 |
|  - llama-3.3-70b-versatile (or configured model) |
|  - Primary-first multi-key fallback              |
+--------------------------------------------------+
```

---

## Project Structure

```
RADHA/
├── frontend/                    # Web UI (vanilla HTML/CSS/JS, no build tools)
│   ├── index.html               # Single-page app structure
│   ├── style.css                # Dark glass-morphism theme, responsive
│   ├── script.js                # Chat logic, SSE streaming, TTS player, voice input
│   └── orb.js                   # WebGL animated orb renderer (GLSL shaders)
│
├── app/                         # Backend (FastAPI)
│   ├── __init__.py
│   ├── main.py                  # FastAPI app, all endpoints, inline TTS, SSE streaming
│   ├── models.py                # Pydantic models (ChatRequest, ChatResponse, etc.)
│   ├── services/
│   │   ├── __init__.py
│   │   ├── chat_service.py      # Session management, message storage, disk persistence
│   │   ├── groq_service.py      # General chat: LangChain + Groq LLM + multi-key fallback
│   │   ├── realtime_service.py  # Realtime chat: query extraction + Tavily search + Groq
│   │   └── vector_store.py      # FAISS vector index, embeddings, semantic retrieval
│   └── utils/
│       ├── __init__.py
│       ├── retry.py             # Retry with exponential backoff (for API calls)
│       └── time_info.py         # Current date/time for the system prompt
│
├── database/                    # Auto-created on first run
│   ├── learning_data/           # Your .txt files (personal info, preferences, etc.)
│   ├── chats_data/              # Saved conversations as JSON
│   └── vector_store/            # FAISS index files
│
├── config.py                    # All settings: API keys, paths, system prompt, TTS config
├── run.py                       # Entry point: python run.py
├── requirements.txt             # Python dependencies
├── .env                         # Your API keys (not committed to git)
└── README.md                    # This file
```

---

## API Endpoints

### POST `/chat`
General chat (non-streaming). Returns full response at once.

### POST `/chat/stream`
General chat with streaming. Returns Server-Sent Events.

### POST `/chat/realtime`
Realtime chat (non-streaming). Searches the web first, then responds.

### POST `/chat/realtime/stream`
Realtime chat with streaming. Web search + SSE streaming.

**Request body (all chat endpoints):**
```json
{
  "message": "What is Python?",
  "session_id": "optional-uuid",
  "tts": true
}
```
- `message` (required): 1-32,000 characters.
- `session_id` (optional): omit to create a new session; include to continue an existing one.
- `tts` (optional, default false): set to `true` to receive inline audio events in the stream.

**SSE stream format:**
```
data: {"session_id": "uuid-here", "chunk": "", "done": false}
data: {"chunk": "Hello", "done": false}
data: {"chunk": ", how", "done": false}
data: {"audio": "<base64 MP3>", "sentence": "Hello, how can I help?"}
data: {"chunk": "", "done": true, "session_id": "uuid-here"}
```
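
For scripting against the API, a minimal Python client can consume the stream like this (assumes the `requests` library; this script is not part of the project):

```python
# Minimal Python client for the SSE stream.
import json

import requests

with requests.post(
    "http://localhost:8000/chat/stream",
    json={"message": "What is Python?", "tts": False},
    stream=True,
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue                      # skip blank keep-alive lines
        event = json.loads(line[len("data: "):])
        if event.get("done"):
            break
        print(event.get("chunk", ""), end="", flush=True)
```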

**Non-streaming response:**
```json
{
  "response": "Python is a high-level programming language...",
  "session_id": "uuid-here"
}
```

### GET `/chat/history/{session_id}`
Returns all messages for a session.

### GET `/health`
Health check. Returns status of all services.

### POST `/tts`
Standalone TTS endpoint. Send `{"text": "Hello"}`, receive streamed MP3 audio.

### GET `/`
Redirects to `/app/` (the frontend).

### GET `/api`
Returns list of available endpoints.

---

## Configuration

### Environment Variables (.env)

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| `GROQ_API_KEY` | Yes | - | Primary Groq API key |
| `GROQ_API_KEY_2`, `_3`, ... | No | - | Additional keys for fallback |
| `TAVILY_API_KEY` | No | - | Tavily search API key (for Realtime mode) |
| `GROQ_MODEL` | No | `llama-3.3-70b-versatile` | LLM model name |
| `ASSISTANT_NAME` | No | `Radha` | Assistant's name |
| `RADHA_USER_TITLE` | No | - | How to address the user (e.g. "Sir") |
| `TTS_VOICE` | No | `en-IN-NeerjaNeural` | Edge TTS voice (run `edge-tts --list-voices` to see all) |
| `TTS_RATE` | No | `+22%` | Speech speed adjustment |

### System Prompt

The assistant's personality is defined in `config.py`. Key sections:
- **Role**: conversational face of the system; does not claim to have completed actions unless the result is visible
- **Answering Quality**: instructed to be specific, use context/search results, never give vague answers
- **Tone**: warm, intelligent, concise, witty
- **Formatting**: no asterisks, no emojis, no markdown, plain text only

### Learning Data

Add `.txt` files to `database/learning_data/`:
- Files are loaded and indexed at startup.
- Only relevant chunks are sent to the LLM per question (not the full text).
- Restart the server after adding new files.

### Multiple Groq API Keys

You can use **multiple Groq API keys** for automatic fallback. Set `GROQ_API_KEY` (required) and optionally `GROQ_API_KEY_2`, `GROQ_API_KEY_3`, etc. in your `.env`:

```env
GROQ_API_KEY=first_key
GROQ_API_KEY_2=second_key
GROQ_API_KEY_3=third_key
```

Every request tries the first key first. If it fails (rate limit, timeout, or error), the next key is tried automatically. Each key has its own daily limit on Groq's free tier, so multiple keys give you more capacity.

---

## Technologies Used

### Backend
| Technology | Purpose |
|-----------|---------|
| FastAPI | Web framework, async endpoints, SSE streaming |
| LangChain | LLM orchestration, prompt templates, message formatting |
| Groq AI | LLM inference (Llama 3.3 70B, extremely fast) |
| Tavily | AI-optimized web search with answer synthesis |
| FAISS | Vector similarity search for context retrieval |
| HuggingFace | Local embeddings (sentence-transformers/all-MiniLM-L6-v2) |
| edge-tts | Server-side text-to-speech (Microsoft Edge, free, no API key) |
| Pydantic | Request/response validation |
| Uvicorn | ASGI server |

### Frontend
| Technology | Purpose |
|-----------|---------|
| Vanilla JS | Chat logic, SSE streaming, TTS playback queue |
| WebGL/GLSL | Animated orb (simplex noise, procedural lighting) |
| Web Speech API | Browser-native speech-to-text |
| CSS Glass-morphism | Dark translucent panels with backdrop blur |
| Poppins (Google Fonts) | Typography |

---

## Frontend Guide

### Modes

- **General**: Click "General" in the header. Uses the LLM's knowledge + your learning data. No internet.
- **Realtime**: Click "Realtime" in the header. Searches the web first, then answers with fresh information.

### TTS (Text-to-Speech)

- Click the speaker icon to enable/disable TTS.
- When enabled, the AI speaks its response as it streams in.
- Click again to mute mid-speech (stops immediately, orb returns to idle).

### Voice Input

- Click the microphone icon to start listening.
- Speak your question. It auto-sends when you finish.
- Click again to cancel.

### Orb Animation

- **Idle**: Subtle glow (35% opacity), slowly rotating.
- **Speaking (TTS active)**: Full brightness, pulsing scale animation.
- The orb only animates when TTS audio is playing, not during text streaming.

### Quick Chips

On the welcome screen, click any chip ("What can you do?", "Open YouTube", etc.) to send a preset message.

---

## Troubleshooting

### Server won't start
- Ensure `GROQ_API_KEY` is set in `.env`.
- Run `pip install -r requirements.txt` to install all dependencies.
- Check that port 8000 is not in use.

### "Offline" status in the UI
- The server is not running. Start it with `python run.py`.
- Check the terminal for error messages.

### Realtime mode gives generic answers
- Ensure `TAVILY_API_KEY` is set in `.env` and is valid.
- Check the server logs for `[TAVILY]` entries to see if search is working.
- The query extraction LLM call should appear as `[REALTIME] Query extraction:` in logs.

### TTS not working
- Make sure TTS is enabled (speaker icon should be highlighted purple).
- On iOS: TTS requires a user interaction first (tap the speaker button before sending a message).
- Check server logs for `[TTS-INLINE]` errors.

### Vector store errors
- Delete `database/vector_store/` and restart; the index rebuilds automatically.
- Check that `database/` directories exist and are writable.

### Template variable errors
- Likely caused by `{` or `}` in learning data files. The system escapes these automatically, but if you see errors, check your `.txt` files.

---

## Performance

The server logs `[TIMING]` entries for every operation:

| Log Entry | What It Measures |
|-----------|-----------------|
| `session_get_or_create` | Session lookup (memory/disk/new) |
| `vector_db` | Vector store retrieval |
| `tavily_search` | Web search (Realtime only) |
| `groq_api` | Full Groq API call |
| `first_chunk` | Time to first streaming token |
| `groq_stream_total` | Total stream duration + chunk count |
| `save_session_json` | Session save to disk |
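
For reference, a FastAPI timing middleware of this kind is only a few lines. This is a representative sketch, not the project's exact code:

```python
# Representative sketch of a FastAPI timing middleware; the project's
# actual middleware in app/main.py may differ in log format.
import logging
import time

from fastapi import FastAPI, Request

app = FastAPI()
logger = logging.getLogger("radha")

@app.middleware("http")
async def log_timing(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    elapsed = time.perf_counter() - start
    logger.info("[TIMING] %s %s %.3fs", request.method, request.url.path, elapsed)
    return response
```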

Typical latencies:
- General mode first token: 0.3-1s
- Realtime mode first token: 2-5s (includes query extraction + web search)
- TTS first audio: ~1s after first sentence completes

---

## Security Notes

- Session IDs are validated against path traversal (`..`, `/`, `\`).
- API keys are stored in `.env` (never in code).
- CORS allows all origins (`*`) since this is a single-user server.
- No authentication; add it if deploying for multiple users.

---

## Developer

**R.A.D.H.A** was developed by **Aditya Yadav**.


---

## 📄 License
MIT License

---

Made with ❤️ by **Aditya Yadav**

---
**Start chatting:** `python run.py` then open http://localhost:8000