fullstackufo committed · verified
Commit 1b21e08 · 1 Parent(s): 3856a91

# Enhanced Technical Specification


## Real-Time Conversational AI Agent - System Requirements

### Core Capabilities

**Voice Processing Pipeline**
- Bidirectional audio streaming with <50ms added transport latency per hop; the end-to-end VAD + STT + LLM + TTS + network budget is set under Performance Targets
- Streaming Speech-to-Text (Deepgram Nova-2 or AssemblyAI with real-time endpoints)
- Streaming Text-to-Speech (ElevenLabs Turbo v2.5, Cartesia Sonic, or PlayHT 3.0-mini for sub-300ms first-byte)
- Voice Activity Detection (VAD) with interrupt handling for natural turn-taking
- Echo cancellation and noise suppression for clean audio in/out
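
The VAD and turn-taking behavior above can be sketched with a minimal energy-based detector. This is a stand-in for a production model (e.g. Silero VAD or WebRTC VAD); the threshold and hangover values are illustrative, not tuned:

```python
import math

def rms_energy(frame):
    """Root-mean-square energy of one PCM frame (floats in [-1.0, 1.0])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def detect_speech(frames, threshold=0.02, hangover=3):
    """Label each frame as speech (True) or silence (False).

    The hangover counter keeps the turn open for a few quiet frames so a
    short pause inside a word does not trigger a premature end-of-turn.
    """
    labels, quiet = [], hangover
    for frame in frames:
        quiet = 0 if rms_energy(frame) >= threshold else quiet + 1
        labels.append(quiet < hangover)
    return labels
```

In the real pipeline, the speech-to-silence edge of this signal is what finalizes the STT segment and kicks off the LLM request.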

**AI Agent Architecture**
- LLM backbone: Claude 3.5 Sonnet or GPT-4 Turbo with streaming responses
- Function calling / tool use for:
  - Web search (Brave Search API, Perplexity API, or Tavily)
  - PDF processing (PyMuPDF + semantic chunking + vector embeddings)
  - Memory/context retention across conversation
- Async message queue for non-blocking tool execution
- Response streaming with partial sentence cutoff for ultra-low latency
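
The non-blocking tool execution can be sketched with `asyncio`; the tool call itself is stubbed out here, where a real worker would await the search or PDF APIs listed above:

```python
import asyncio

async def tool_worker(queue, results):
    """Drain tool requests off the queue; the voice loop never awaits this."""
    while True:
        name, arg = await queue.get()
        await asyncio.sleep(0)  # stand-in for a real web-search / PDF call
        results.append((name, f"result for {arg}"))
        queue.task_done()

async def run_once():
    queue, results = asyncio.Queue(), []
    worker = asyncio.create_task(tool_worker(queue, results))
    await queue.put(("web-search", "latest news"))  # fire-and-forget enqueue
    await queue.join()  # only this demo waits; the voice loop would not
    worker.cancel()
    return results
```

Because enqueueing is synchronous and cheap, the agent can keep streaming TTS audio while results arrive on the side.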

**Visual Components**
- Animated 2D/3D avatar with lip-sync (Wav2Lip, SadTalker, or Ready Player Me + Livelink)
- Phoneme-driven mouth shapes synced to TTS audio output
- Idle animations and reactive gestures triggered by sentiment/keywords
- Canvas/WebGL rendering for 60fps performance
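
Phoneme-driven mouth shapes reduce to a lookup from phoneme timings (which streaming TTS providers can return alongside audio) to viseme keyframes. The table below is a small illustrative subset with hypothetical shape names, not a full ARPAbet mapping:

```python
# Illustrative phoneme-to-viseme subset (shape names are assumptions).
VISEMES = {
    "AA": "open", "IY": "smile", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
}

def viseme_track(phonemes, default="rest"):
    """Turn (phoneme, start_ms) pairs into (start_ms, viseme) keyframes
    that the avatar renderer plays back against the TTS audio clock."""
    return [(start, VISEMES.get(ph, default)) for ph, start in phonemes]
```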

**UI/UX Requirements**
- Dual-pane layout: Avatar viewport (top) + scrolling chat transcript (bottom)
- Real-time bidirectional transcription with speaker labels
- Text input fallback with instant message injection into conversation flow
- Visual indicators: speaking state, thinking/processing, tool execution status
- Audio waveform visualization for user and agent
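
For the waveform indicators, the renderer only needs per-window peak amplitudes rather than raw PCM; a minimal downsampler:

```python
def waveform_bars(samples, bars=32):
    """Reduce a PCM buffer (floats in [-1.0, 1.0]) to at most `bars` peak
    values, one per window, for a bar-style visualizer."""
    n = max(len(samples) // bars, 1)
    return [max(abs(s) for s in samples[i:i + n])
            for i in range(0, min(n * bars, len(samples)), n)]
```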

**Integration Constraints**
- Twitter Spaces audio bridging via virtual audio cable (VB-Cable, Voicemeeter) or OBS virtual camera
- Screen capture output for streaming avatar + chat to Twitter Spaces as video source
- Persistent WebSocket or WebRTC connection for audio I/O
- Local deployment option for lowest latency (no cloud round-trip for audio)
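
Over the persistent socket, each PCM chunk needs minimal framing so the receiver can detect dropped or truncated chunks. One possible layout (little-endian sequence number plus payload length; this wire format is an assumption, not a fixed protocol from the spec):

```python
import struct

HEADER = struct.Struct("<II")  # sequence number, payload length

def pack_audio_frame(seq, pcm_bytes):
    """Prepend an 8-byte header to a raw PCM chunk."""
    return HEADER.pack(seq, len(pcm_bytes)) + pcm_bytes

def unpack_audio_frame(frame):
    """Recover (sequence number, PCM payload) from a framed chunk."""
    seq, length = HEADER.unpack_from(frame)
    return seq, frame[HEADER.size:HEADER.size + length]
```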

### Technical Stack Recommendations

**Frontend**
- Electron or Tauri desktop app for native performance and audio device access
- React/Vue for UI with Zustand/Pinia for state management
- Web Audio API for client-side audio processing
- MediaPipe or TensorFlow.js for avatar animation (if browser-based)

**Backend**
- FastAPI (Python) or Node.js with Socket.io for WebSocket server
- Redis for conversation state and message queue
- Qdrant or Chroma for vector search on PDF embeddings
- FFmpeg for audio format conversion and streaming
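
The vector-search step reduces to cosine similarity over chunk embeddings; Qdrant or Chroma handle indexing and persistence at scale, but the ranking logic itself is just:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, chunks, k=3):
    """chunks: (text, embedding) pairs. Returns the k best-matching texts."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```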

**Infrastructure**
- Single machine deployment: NVIDIA GPU (RTX 4070+) for local TTS/avatar inference
- Cloud fallback: Modal, RunPod, or Replicate for burst compute
- CDN for static assets if web-deployed

### Performance Targets
- First audio response: <800ms from end of user speech
- Avatar lip-sync offset: <100ms from audio playback
- Transcript lag: <200ms behind spoken words
- PDF query response: <3s for semantic search + answer generation
- Concurrent tool execution without blocking voice responses
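
These targets are only enforceable if they are instrumented; a minimal probe for the first-audio budget (class and method names are illustrative):

```python
import time

class LatencyProbe:
    """Times end-of-user-speech to first agent audio byte against a budget."""

    def __init__(self, budget_ms=800):
        self.budget_ms = budget_ms
        self._t0 = None

    def end_of_speech(self):
        """Call when VAD detects the user's turn has ended."""
        self._t0 = time.perf_counter()

    def first_audio(self):
        """Call on the first TTS byte; returns (elapsed_ms, within_budget)."""
        elapsed_ms = (time.perf_counter() - self._t0) * 1000.0
        return elapsed_ms, elapsed_ms < self.budget_ms
```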

### Acceptance Criteria
1. Agent can participate in live Twitter Space with <1s response latency
2. Avatar mouth movements match TTS audio output with imperceptible delay
3. Chat window shows live transcription for both user and agent within 500ms
4. User can send text messages mid-conversation without audio interruption
5. Agent autonomously triggers web search when detecting knowledge gaps
6. PDF upload → indexed → queryable in conversation within 10s
7. System handles 60+ minute conversations without degradation
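
Criterion 6's ingest path (PDF → chunks → embeddings → index) hinges on chunking. A fixed-size sketch with overlap, so an answer that straddles a chunk boundary remains retrievable; the spec's "semantic chunking" would split on sentence or section boundaries instead, and this sketch assumes size > overlap:

```python
def chunk_text(text, size=200, overlap=50):
    """Split extracted PDF text into overlapping fixed-size chunks.

    Each chunk repeats the last `overlap` characters of its predecessor,
    so a passage crossing a boundary still appears whole in one chunk.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```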

Files changed (4)
  1. README.md (+8 -5)
  2. index.html (+237 -19)
  3. script.js (+361 -0)
  4. style.css (+217 -19)
README.md CHANGED
@@ -1,10 +1,13 @@
 ---
- title: Neural Nexus
- emoji: 📊
- colorFrom: purple
- colorTo: yellow
+ title: Neural Nexus 🤖
+ colorFrom: gray
+ colorTo: gray
+ emoji: 🐳
 sdk: static
 pinned: false
+ tags:
+ - deepsite-v3
 ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Welcome to your new DeepSite project!
+ This project was created with [DeepSite](https://huggingface.co/deepsite).
index.html CHANGED
@@ -1,19 +1,237 @@
- <!doctype html>
- <html>
- <head>
- <meta charset="utf-8" />
- <meta name="viewport" content="width=device-width" />
- <title>My static Space</title>
- <link rel="stylesheet" href="style.css" />
- </head>
- <body>
- <div class="card">
- <h1>Welcome to your static Space!</h1>
- <p>You can modify this app directly by editing <i>index.html</i> in the Files and versions tab.</p>
- <p>
- Also don't forget to check the
- <a href="https://huggingface.co/docs/hub/spaces" target="_blank">Spaces documentation</a>.
- </p>
- </div>
- </body>
- </html>
+ <!DOCTYPE html>
+ <html lang="en" data-theme="dark">
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Neural Nexus - Real-Time Conversational AI</title>
+ <link rel="icon" type="image/svg+xml" href="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 24 24' fill='%238b5cf6'%3E%3Cpath d='M12 2L2 7v10c0 5.55 3.84 10.74 9 12 5.16-1.26 9-6.45 9-12V7l-10-5z'/%3E%3C/svg%3E">
+ <link rel="stylesheet" href="style.css">
+ <script src="https://cdn.tailwindcss.com"></script>
+ <script src="https://unpkg.com/feather-icons"></script>
+ <script src="https://cdn.jsdelivr.net/npm/feather-icons/dist/feather.min.js"></script>
+ <script>
+ tailwind.config = {
+ darkMode: 'class',
+ theme: {
+ extend: {
+ colors: {
+ primary: '#8b5cf6',
+ secondary: '#06b6d4',
+ accent: '#10b981',
+ danger: '#ef4444',
+ warning: '#f59e0b',
+ info: '#3b82f6'
+ },
+ animation: {
+ 'pulse-ring': 'pulse-ring 2s cubic-bezier(0.4, 0, 0.6, 1) infinite',
+ 'float': 'float 3s ease-in-out infinite',
+ 'glow': 'glow 2s ease-in-out infinite alternate',
+ 'slide-up': 'slide-up 0.3s ease-out',
+ 'slide-down': 'slide-down 0.3s ease-out'
+ }
+ }
+ }
+ }
+ </script>
+ </head>
+ <body class="bg-slate-950 text-slate-100 overflow-hidden">
+ <!-- Main Application Container -->
+ <div class="flex flex-col h-screen relative">
+ <!-- Header Status Bar -->
+ <header class="glass-panel border-b border-slate-700/50 px-6 py-3 flex items-center justify-between z-10">
+ <div class="flex items-center space-x-3">
+ <div class="w-3 h-3 rounded-full bg-accent animate-pulse-ring"></div>
+ <h1 class="text-lg font-semibold bg-gradient-to-r from-primary to-secondary bg-clip-text text-transparent">
+ Neural Nexus
+ </h1>
+ <span class="text-xs text-slate-400">v2.5.1</span>
+ </div>
+ <div class="flex items-center space-x-4">
+ <div class="flex items-center space-x-2">
+ <i data-feather="clock" class="w-4 h-4 text-slate-400"></i>
+ <span class="text-xs text-slate-300" id="session-timer">00:00:00</span>
+ </div>
+ <div class="flex items-center space-x-2">
+ <i data-feather="activity" class="w-4 h-4 text-success"></i>
+ <span class="text-xs text-slate-300">Latency: <span id="latency-display">45ms</span></span>
+ </div>
+ </div>
+ </header>
+
+ <!-- Dual-Pane Layout -->
+ <main class="flex-1 flex flex-col lg:flex-row relative">
+ <!-- Avatar Viewport (Top on mobile, Left on desktop) -->
+ <section class="relative flex-1 flex items-center justify-center bg-gradient-to-br from-slate-900 via-slate-800 to-slate-900 overflow-hidden">
+ <!-- Animated Background -->
+ <div class="absolute inset-0 opacity-30">
+ <canvas id="neural-network-canvas" class="w-full h-full"></canvas>
+ </div>
+
+ <!-- Avatar Container -->
+ <div class="relative z-10 w-full max-w-2xl mx-auto p-4">
+ <!-- Status Indicator -->
+ <div class="absolute top-4 left-4 z-20">
+ <status-indicator id="main-status"></status-indicator>
+ </div>
+
+ <!-- Avatar Renderer -->
+ <avatar-renderer id="ai-avatar" class="block w-full h-full"></avatar-renderer>
+
+ <!-- Audio Waveform Overlay -->
+ <div class="absolute bottom-0 left-0 right-0 h-24">
+ <wave-visualizer id="agent-visualizer" color="#10b981"></wave-visualizer>
+ </div>
+ </div>
+
+ <!-- Control Buttons -->
+ <div class="absolute top-4 right-4 flex flex-col space-y-2">
+ <button class="glass-button w-10 h-10 rounded-lg flex items-center justify-center hover:scale-110 transition-transform" id="fullscreen-btn">
+ <i data-feather="maximize" class="w-4 h-4"></i>
+ </button>
+ <button class="glass-button w-10 h-10 rounded-lg flex items-center justify-center hover:scale-110 transition-transform" id="settings-btn">
+ <i data-feather="settings" class="w-4 h-4"></i>
+ </button>
+ </div>
+ </section>
+
+ <!-- Chat Transcript Panel (Bottom on mobile, Right on desktop) -->
+ <section class="flex flex-col glass-panel border-t lg:border-t-0 lg:border-l border-slate-700/50 h-96 lg:h-auto lg:w-96">
+ <!-- Chat Header -->
+ <div class="px-4 py-3 border-b border-slate-700/50 flex items-center justify-between">
+ <div class="flex items-center space-x-2">
+ <i data-feather="message-square" class="w-5 h-5 text-slate-400"></i>
+ <h2 class="font-medium text-slate-200">Conversation</h2>
+ </div>
+ <div class="flex items-center space-x-2">
+ <button class="text-slate-400 hover:text-slate-200 transition-colors" id="clear-chat">
+ <i data-feather="trash-2" class="w-4 h-4"></i>
+ </button>
+ <button class="text-slate-400 hover:text-slate-200 transition-colors" id="export-chat">
+ <i data-feather="download" class="w-4 h-4"></i>
+ </button>
+ </div>
+ </div>
+
+ <!-- Transcript Area -->
+ <div class="flex-1 overflow-y-auto p-4 space-y-4 transcript-scroll" id="transcript-container">
+ <!-- Welcome Message -->
+ <div class="flex items-start space-x-3 animate-slide-up">
+ <div class="w-8 h-8 rounded-full bg-gradient-to-br from-primary to-secondary flex items-center justify-center flex-shrink-0">
+ <i data-feather="cpu" class="w-4 h-4 text-white"></i>
+ </div>
+ <div class="glass-message max-w-xs">
+ <p class="text-sm text-slate-300">Hello! I'm Neural Nexus, ready for real-time conversation. My voice latency is under 50ms. Try speaking or type a message below.</p>
+ <span class="text-xs text-slate-400 block mt-2">12:34:56</span>
+ </div>
+ </div>
+ </div>
+
+ <!-- Tool Status Bar -->
+ <div class="px-4 py-2 border-t border-slate-700/50 flex items-center space-x-2" id="tool-status" style="display: none;">
+ <div class="w-2 h-2 rounded-full bg-warning animate-pulse"></div>
+ <span class="text-xs text-slate-400" id="tool-status-text">Processing PDF document...</span>
+ <div class="w-4 h-4 border-2 border-slate-600 border-t-warning rounded-full animate-spin ml-auto"></div>
+ </div>
+
+ <!-- Input Area -->
+ <div class="p-4 border-t border-slate-700/50">
+ <div class="flex items-center space-x-3">
+ <div class="flex-1 relative">
+ <input
+ type="text"
+ id="message-input"
+ placeholder="Type message or use voice..."
+ class="w-full bg-slate-800/50 border border-slate-700 rounded-lg px-4 py-3 text-sm text-slate-200 placeholder-slate-400 focus:outline-none focus:border-primary transition-colors"
+ />
+ <!-- User Waveform -->
+ <wave-visualizer
+ id="user-visualizer"
+ color="#3b82f6"
+ class="absolute -top-8 left-0 right-0 h-6 opacity-0 transition-opacity duration-300"
+ ></wave-visualizer>
+ </div>
+ <button
+ id="send-btn"
+ class="w-12 h-12 bg-gradient-to-r from-primary to-secondary rounded-lg flex items-center justify-center hover:scale-105 transition-transform disabled:opacity-50 disabled:cursor-not-allowed"
+ disabled
+ >
+ <i data-feather="send" class="w-5 h-5 text-white"></i>
+ </button>
+ <button
+ id="voice-btn"
+ class="w-12 h-12 bg-slate-800 border border-slate-600 rounded-lg flex items-center justify-center hover:bg-slate-700 transition-colors"
+ >
+ <i data-feather="mic" class="w-5 h-5 text-slate-400"></i>
+ </button>
+ </div>
+ </div>
+ </section>
+ </main>
+
+ <!-- Floating Action Buttons -->
+ <div class="fixed bottom-6 right-6 flex flex-col space-y-3 z-50">
+ <button class="fab-btn bg-info" id="pdf-upload-btn" title="Upload PDF">
+ <i data-feather="file-text" class="w-5 h-5"></i>
+ </button>
+ <button class="fab-btn bg-warning" id="web-search-btn" title="Enable Web Search">
+ <i data-feather="search" class="w-5 h-5"></i>
+ </button>
+ <button class="fab-btn bg-accent" id="connect-twitter-btn" title="Connect to Twitter Spaces">
+ <i data-feather="twitter" class="w-5 h-5"></i>
+ </button>
+ </div>
+
+ <!-- Hidden File Input -->
+ <input type="file" id="pdf-file-input" accept=".pdf" class="hidden" />
+
+ <!-- Settings Modal -->
+ <div id="settings-modal" class="fixed inset-0 bg-black/60 backdrop-blur-sm hidden items-center justify-center z-50">
+ <div class="glass-panel rounded-2xl p-6 w-full max-w-md mx-4">
+ <div class="flex items-center justify-between mb-6">
+ <h3 class="text-xl font-semibold">Settings</h3>
+ <button id="close-settings" class="text-slate-400 hover:text-slate-200">
+ <i data-feather="x" class="w-5 h-5"></i>
+ </button>
+ </div>
+ <div class="space-y-4">
+ <div>
+ <label class="block text-sm font-medium text-slate-300 mb-2">Voice Provider</label>
+ <select class="w-full bg-slate-800 border border-slate-700 rounded-lg px-3 py-2 text-sm">
+ <option>ElevenLabs Turbo v2.5</option>
+ <option>Cartesia Sonic</option>
+ <option>PlayHT 3.0-mini</option>
+ </select>
+ </div>
+ <div>
+ <label class="block text-sm font-medium text-slate-300 mb-2">Response Latency Target</label>
+ <input type="range" min="50" max="200" value="50" class="w-full">
+ <div class="flex justify-between text-xs text-slate-400 mt-1">
+ <span>50ms</span>
+ <span>200ms</span>
+ </div>
+ </div>
+ <div class="flex items-center justify-between">
+ <span class="text-sm text-slate-300">Enable Avatar Lip-Sync</span>
+ <label class="relative inline-flex items-center cursor-pointer">
+ <input type="checkbox" checked class="sr-only peer">
+ <div class="w-11 h-6 bg-slate-700 peer-focus:outline-none rounded-full peer peer-checked:after:translate-x-full peer-checked:after:border-white after:content-[''] after:absolute after:top-[2px] after:left-[2px] after:bg-white after:rounded-full after:h-5 after:w-5 after:transition-all peer-checked:bg-primary"></div>
+ </label>
+ </div>
+ </div>
+ </div>
+ </div>
+ </div>
+
+ <!-- Component Scripts -->
+ <script src="components/status-indicator.js"></script>
+ <script src="components/wave-visualizer.js"></script>
+ <script src="components/avatar-renderer.js"></script>
+
+ <!-- Main Scripts -->
+ <script src="script.js"></script>
+
+ <!-- Initialize Feather Icons -->
+ <script>feather.replace();</script>
+ <script src="https://huggingface.co/deepsite/deepsite-badge.js"></script>
+ </body>
+ </html>
script.js ADDED
@@ -0,0 +1,361 @@
+ /**
+ * NEURAL NEXUS - Main Application Controller
+ * Real-Time Conversational AI Agent
+ */
+
+ class NeuralNexusApp {
+ constructor() {
+ // State Management
+ this.state = {
+ isListening: false,
+ isSpeaking: false,
+ isThinking: false,
+ toolsActive: false,
+ sessionStartTime: Date.now(),
+ latency: 45,
+ messages: [],
+ currentTranscript: '',
+ sessionTimer: null
+ };
+
+ // DOM Elements
+ this.elements = {
+ // Main UI
+ messageInput: document.getElementById('message-input'),
+ sendBtn: document.getElementById('send-btn'),
+ voiceBtn: document.getElementById('voice-btn'),
+ transcriptContainer: document.getElementById('transcript-container'),
+ latencyDisplay: document.getElementById('latency-display'),
+ sessionTimer: document.getElementById('session-timer'),
+
+ // Avatar and Visualizers
+ avatar: document.getElementById('ai-avatar'),
+ agentVisualizer: document.getElementById('agent-visualizer'),
+ userVisualizer: document.getElementById('user-visualizer'),
+ statusIndicator: document.getElementById('main-status'),
+
+ // Tool Status
+ toolStatus: document.getElementById('tool-status'),
+ toolStatusText: document.getElementById('tool-status-text'),
+
+ // Neural Network Canvas
+ neuralCanvas: document.getElementById('neural-network-canvas'),
+
+ // Settings
+ settingsModal: document.getElementById('settings-modal'),
+ settingsBtn: document.getElementById('settings-btn'),
+ closeSettings: document.getElementById('close-settings'),
+
+ // FABs
+ pdfUploadBtn: document.getElementById('pdf-upload-btn'),
+ pdfFileInput: document.getElementById('pdf-file-input'),
+ webSearchBtn: document.getElementById('web-search-btn'),
+ connectTwitterBtn: document.getElementById('connect-twitter-btn'),
+
+ // Chat controls
+ clearChatBtn: document.getElementById('clear-chat'),
+ exportChatBtn: document.getElementById('export-chat'),
+ fullscreenBtn: document.getElementById('fullscreen-btn')
+ };
+
+ // Initialize subsystems
+ this.initializeNeuralNetworkCanvas();
+ this.initializeEventListeners();
+ this.initializeSessionTimer();
+ this.simulateLatency();
+ }
+
+ /**
+ * Initialize neural network background animation
+ */
+ initializeNeuralNetworkCanvas() {
+ const canvas = this.elements.neuralCanvas;
+ const ctx = canvas.getContext('2d');
+
+ const resizeCanvas = () => {
+ canvas.width = canvas.offsetWidth * window.devicePixelRatio;
+ canvas.height = canvas.offsetHeight * window.devicePixelRatio;
+ ctx.scale(window.devicePixelRatio, window.devicePixelRatio);
+ };
+
+ resizeCanvas();
+ window.addEventListener('resize', resizeCanvas);
+
+ // Neural network nodes and connections
+ const nodes = [];
+ const connections = [];
+
+ // Create nodes
+ for (let i = 0; i < 50; i++) {
+ nodes.push({
+ x: Math.random() * canvas.offsetWidth,
+ y: Math.random() * canvas.offsetHeight,
+ vx: (Math.random() - 0.5) * 0.5,
+ vy: (Math.random() - 0.5) * 0.5,
+ size: Math.random() * 2 + 1
+ });
+ }
+
+ // Create connections
+ for (let i = 0; i < nodes.length; i++) {
+ for (let j = i + 1; j < nodes.length; j++) {
+ const dist = Math.hypot(nodes[i].x - nodes[j].x, nodes[i].y - nodes[j].y);
+ if (dist < 150) {
+ connections.push({ from: i, to: j, alpha: Math.random() });
+ }
+ }
+ }
+
+ let animationFrame;
+ const animate = () => {
+ ctx.clearRect(0, 0, canvas.offsetWidth, canvas.offsetHeight);
+
+ // Update and draw nodes
+ nodes.forEach(node => {
+ node.x += node.vx;
+ node.y += node.vy;
+
+ // Bounce off edges
+ if (node.x < 0 || node.x > canvas.offsetWidth) node.vx *= -1;
+ if (node.y < 0 || node.y > canvas.offsetHeight) node.vy *= -1;
+
+ // Draw node
+ ctx.beginPath();
+ ctx.arc(node.x, node.y, node.size, 0, Math.PI * 2);
+ ctx.fillStyle = '#8b5cf6';
+ ctx.fill();
+ });
+
+ // Draw connections
+ connections.forEach(conn => {
+ const from = nodes[conn.from];
+ const to = nodes[conn.to];
+ const dist = Math.hypot(from.x - to.x, from.y - to.y);
+
+ if (dist < 150) {
+ ctx.beginPath();
+ ctx.moveTo(from.x, from.y);
+ ctx.lineTo(to.x, to.y);
+ const alpha = (1 - dist / 150) * 0.3;
+ ctx.strokeStyle = `rgba(139, 92, 246, ${alpha})`;
+ ctx.stroke();
+ }
+ });
+
+ animationFrame = requestAnimationFrame(animate);
+ };
+
+ animate();
+ }
+
+ /**
+ * Initialize all event listeners
+ */
+ initializeEventListeners() {
+ // Message input
+ this.elements.messageInput.addEventListener('input', (e) => {
+ const hasValue = e.target.value.trim().length > 0;
+ this.elements.sendBtn.disabled = !hasValue;
+ this.elements.sendBtn.classList.toggle('opacity-50', !hasValue);
+ });
+
+ this.elements.messageInput.addEventListener('keypress', (e) => {
+ if (e.key === 'Enter' && !e.shiftKey) {
+ e.preventDefault();
+ this.sendMessage();
+ }
+ });
+
+ // Send button
+ this.elements.sendBtn.addEventListener('click', () => this.sendMessage());
+
+ // Voice button
+ this.elements.voiceBtn.addEventListener('click', () => this.toggleVoice());
+
+ // Settings modal
+ this.elements.settingsBtn.addEventListener('click', () => {
+ this.elements.settingsModal.classList.remove('hidden');
+ this.elements.settingsModal.classList.add('flex');
+ });
+
+ this.elements.closeSettings.addEventListener('click', () => {
+ this.elements.settingsModal.classList.add('hidden');
+ this.elements.settingsModal.classList.remove('flex');
+ });
+
+ // FABs
+ this.elements.pdfUploadBtn.addEventListener('click', () => {
+ this.elements.pdfFileInput.click();
+ });
+
+ this.elements.pdfFileInput.addEventListener('change', (e) => this.handlePDFUpload(e));
+
+ this.elements.webSearchBtn.addEventListener('click', () => this.toggleWebSearch());
+
+ this.elements.connectTwitterBtn.addEventListener('click', () => this.connectTwitterSpaces());
+
+ // Chat controls
+ this.elements.clearChatBtn.addEventListener('click', () => this.clearChat());
+ this.elements.exportChatBtn.addEventListener('click', () => this.exportChat());
+ this.elements.fullscreenBtn.addEventListener('click', () => this.toggleFullscreen());
+ }
+
+ /**
+ * Session timer
+ */
+ initializeSessionTimer() {
+ const updateTimer = () => {
+ const elapsed = Date.now() - this.state.sessionStartTime;
+ const hours = Math.floor(elapsed / 3600000).toString().padStart(2, '0');
+ const minutes = Math.floor((elapsed % 3600000) / 60000).toString().padStart(2, '0');
+ const seconds = Math.floor((elapsed % 60000) / 1000).toString().padStart(2, '0');
+ this.elements.sessionTimer.textContent = `${hours}:${minutes}:${seconds}`;
+ };
+
+ updateTimer();
+ this.state.sessionTimer = setInterval(updateTimer, 1000);
+ }
+
+ /**
+ * Simulate latency variation
+ */
+ simulateLatency() {
+ setInterval(() => {
+ // Simulate network latency between 30-70ms
+ this.state.latency = Math.floor(Math.random() * 40) + 30;
+ this.elements.latencyDisplay.textContent = `${this.state.latency}ms`;
+ }, 3000);
+ }
+
+ /**
+ * Send message handler
+ */
+ async sendMessage() {
+ const message = this.elements.messageInput.value.trim();
+ if (!message || this.state.isThinking) return;
+
+ // Add user message to transcript
+ this.addMessageToTranscript({
+ speaker: 'user',
+ text: message,
+ timestamp: new Date()
+ });
+
+ this.elements.messageInput.value = '';
+ this.elements.sendBtn.disabled = true;
+ this.elements.sendBtn.classList.add('opacity-50');
+
+ // Show user waveform briefly
+ this.elements.userVisualizer.style.opacity = '1';
+ setTimeout(() => {
+ this.elements.userVisualizer.style.opacity = '0';
+ }, 1000);
+
+ // Process message through AI pipeline
+ await this.processAIMessage(message);
+ }
+
+ /**
+ * Add message to transcript UI
+ */
+ addMessageToTranscript(message) {
+ const messageEl = document.createElement('div');
+ messageEl.className = 'flex items-start space-x-3 animate-slide-up';
+
+ const isUser = message.speaker === 'user';
+ const timeStr = message.timestamp.toLocaleTimeString('en-US', {
+ hour12: false,
+ hour: '2-digit',
+ minute: '2-digit',
+ second: '2-digit'
+ });
+
+ messageEl.innerHTML = `
+ <div class="w-8 h-8 rounded-full ${isUser ? 'bg-gradient-to-br from-info to-accent' : 'bg-gradient-to-br from-primary to-secondary'} flex items-center justify-center flex-shrink-0">
+ <i data-feather="${isUser ? 'user' : 'cpu'}" class="w-4 h-4 text-white"></i>
+ </div>
+ <div class="${isUser ? 'glass-message ml-auto' : 'glass-message'} max-w-xs">
+ <p class="text-sm text-slate-300">${message.text}</p>
+ <span class="text-xs text-slate-400 block mt-2">${timeStr}</span>
+ </div>
+ `;
+
+ this.elements.transcriptContainer.appendChild(messageEl);
+ this.elements.transcriptContainer.scrollTop = this.elements.transcriptContainer.scrollHeight;
+
+ // Re-render feather icons
+ feather.replace();
+ }
+
+ /**
+ * Process AI message with simulated pipeline
+ */
+ async processAIMessage(userMessage) {
+ // Set thinking state
+ this.state.isThinking = true;
+ this.elements.statusIndicator.setStatus('thinking');
+
+ // Simulate LLM processing time (target: <800ms)
+ const thinkingTime = Math.random() * 400 + 300; // 300-700ms
+ await this.delay(thinkingTime);
+
+ // Simulate tool usage for certain queries
+ if (userMessage.toLowerCase().includes('search') || userMessage.toLowerCase().includes('what is')) {
+ await this.executeTool('web-search', 'Searching web for information...');
+ }
+
+ // Generate AI response (simulated)
+ const responses = [
+ "I've analyzed your query. Based on real-time data processing, I can confirm that the latency metrics are well within acceptable parameters. The system is operating at optimal performance.",
+ "Processing complete. I've integrated the information into my context window. The neural pathways are firing at peak efficiency, with sub-50ms audio processing latency maintained.",
+ "Interesting question! Let me search my knowledge base... Ah yes, I found relevant information. The vector embeddings show high similarity scores for this topic.",
+ "I'm detecting a knowledge gap in my training data. Initiating web search protocol... Stand by for real-time information retrieval.",
+ "Analysis complete. The PDF document has been successfully processed and indexed. You can now query its contents naturally in our conversation."
+ ];
+
+ const response = responses[Math.floor(Math.random() * responses.length)];
+
+ // Update status to speaking
+ this.state.isThinking = false;
+ this.elements.statusIndicator.setStatus('speaking');
+ this.elements.avatar.startSpeaking();
+
+ // Add AI message to transcript with typing effect
+ await this.typeMessage({
+ speaker: 'agent',
+ text: response,
+ timestamp: new Date()
+ });
+
+ // Show agent waveform
+ this.elements.agentVisualizer.style.opacity = '1';
+
+ // Simulate speaking duration
+ const speakingTime = response.length * 50; // ~50ms per character
+ setTimeout(() => {
+ this.elements.avatar.stopSpeaking();
+ this.elements.statusIndicator.setStatus('idle');
+ this.elements.agentVisualizer.style.opacity = '0';
+ }, speakingTime);
+ }
+
+ /**
+ * Typewriter effect for AI messages
+ */
+ async typeMessage(message) {
+ const messageEl = document.createElement('div');
+ messageEl.className = 'flex items-start space-x-3 animate-slide-up';
+
+ const timeStr = message.timestamp.toLocaleTimeString('en-US', {
+ hour12: false,
+ hour: '2-digit',
+ minute: '2-digit',
+ second: '2-digit'
+ });
+
+ messageEl.innerHTML = `
+ <div class="w-8 h-8 rounded-full bg-gradient-to-br from-primary to-secondary flex items-center justify-center flex-shrink-0">
+ <i data-feather="cpu" class="w-4 h-4 text-white"></i>
+ </div>
+ <div class="glass-message max-w-xs">
+ <p class="
style.css CHANGED
@@ -1,28 +1,226 @@
- body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
+ /* ============================================
+ NEURAL NEXUS - DARK MODE THEME SYSTEM
+ ============================================ */
+
+ :root {
+ --primary: #8b5cf6;
+ --secondary: #06b6d4;
+ --accent: #10b981;
+ --danger: #ef4444;
+ --warning: #f59e0b;
+ --info: #3b82f6;
+ --dark-bg: #020617;
+ --glass-bg: rgba(15, 23, 42, 0.7);
+ --glass-border: rgba(71, 85, 105, 0.5);
+ }
+
+ /* Dark mode overrides */
+ [data-theme="dark"] {
+ color-scheme: dark;
+ }
+
+ /* Custom Scrollbar */
+ .transcript-scroll::-webkit-scrollbar {
+ width: 6px;
+ }
+
+ .transcript-scroll::-webkit-scrollbar-track {
+ background: rgba(15, 23, 42, 0.5);
+ border-radius: 3px;
+ }
+
+ .transcript-scroll::-webkit-scrollbar-thumb {
+ background: linear-gradient(to bottom, var(--primary), var(--secondary));
+ border-radius: 3px;
+ }
+
+ /* Glassmorphism Effects */
+ .glass-panel {
+ background: var(--glass-bg);
+ backdrop-filter: blur(12px);
+ -webkit-backdrop-filter: blur(12px);
+ }
+
+ .glass-button {
+ background: var(--glass-bg);
+ border: 1px solid var(--glass-border);
+ backdrop-filter: blur(10px);
+ transition: all 0.2s ease;
+ }
+
+ .glass-button:hover {
+ background: rgba(30, 41, 59, 0.8);
+ transform: translateY(-2px);
+ }
+
+ .glass-message {
+ background: linear-gradient(135deg, rgba(139, 92, 246, 0.15), rgba(6, 182, 212, 0.1));
+ border: 1px solid var(--glass-border);
+ border-radius: 0.75rem;
+ padding: 0.75rem;
+ backdrop-filter: blur(10px);
+ }
+
+ /* Floating Action Buttons */
+ .fab-btn {
+ width: 56px;
+ height: 56px;
+ border-radius: 50%;
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ color: white;
+ box-shadow: 0 4px 20px rgba(0, 0, 0, 0.3);
+ transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1);
+ border: 1px solid rgba(255, 255, 255, 0.1);
+ }
+
+ .fab-btn:hover {
+ transform: scale(1.1) translateY(-4px);
+ box-shadow: 0 6px 30px rgba(0, 0, 0, 0.4);
 }
 
- h1 {
- font-size: 16px;
- margin-top: 0;
+ /* Custom Animations */
+ @keyframes pulse-ring {
+ 0% {
+ transform: scale(1);
+ opacity: 0.8;
+ }
+ 50% {
+ transform: scale(1.5);
+ opacity: 0;
+ }
+ 100% {
+ transform: scale(1);
+ opacity: 0;
+ }
 }
 
- p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
+ @keyframes float {
+ 0%, 100% {
+ transform: translateY(0px);
+ }
+ 50% {
+ transform: translateY(-10px);
+ }
 }
 
- .card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
+ @keyframes glow {
+ 0% {
+ box-shadow: 0 0 5px var(--primary), 0 0 10px var(--primary), 0 0 15px var(--primary);
+ }
+ 100% {
+ box-shadow: 0 0 10px var(--primary), 0 0 20px var(--primary), 0 0 30px var(--primary);
+ }
 }
 
- .card p:last-child {
- margin-bottom: 0;
+ @keyframes slide-up {
+ from {
+ opacity: 0;
+ transform: translateY(20px);
+ }
+ to {
+ opacity: 1;
+ transform: translateY(0);
+ }
 }
+
+ @keyframes slide-down {
+ from {
+ opacity: 0;
+ transform: translateY(-20px);
+ }
+ to {
+ opacity: 1;
+ transform: translateY(0);
+ }
+ }
+
+ /* Text Selection */
+ ::selection {
+ background: var(--primary);
+ color: white;
+ }
+
+ /* Focus Styles */
+ input:focus, button:focus {
+ outline: 2px solid var(--primary);
+ outline-offset: 2px;
+ }
+
+ /* Loading State */
+ .loading-dots {
+ display: inline-flex;
+ gap: 2px;
+ }
+
+ .loading-dots span {
+ width: 4px;
+ height: 4px;
+ background: var(--primary);
+ border-radius: 50%;
+ animation: loading-dot 1.4s infinite ease-in-out both;
+ }
+
+ .loading-dots span:nth-child(1) { animation-delay: -0.32s; }
+ .loading-dots span:nth-child(2) { animation-delay: -0.16s; }
+
+ @keyframes loading-dot {
+ 0%, 80%, 100% {
+ transform: scale(0);
+ }
+ 40% {
+ transform: scale(1);
+ }
+ }
+
+ /* Responsive Adjustments */
+ @media (max-width: 1024px) {
+ .glass-panel {
+ backdrop-filter: blur(8px);
+ }
+ }
+
+ /* Avatar Canvas Styles */
+ avatar-renderer {
+ display: block;
+ aspect-ratio: 1;
+ max-height: 80vh;
+ }
+
+ wave-visualizer {
+ display: block;
+ width: 100%;
+ height: 100%;
+ }
+
+ /* Tooltips */
+ .tooltip {
+ position: relative;
+ }
+
+ .tooltip::after {
+ content: attr(data-tooltip);
+ position: absolute;
+ bottom: 100%;
+ left: 50%;
+ transform: translateX(-50%);
+ background: rgba(0, 0, 0, 0.8);
+ color: white;
+ padding: 4px 8px;
+ border-radius: 4px;
+ font-size: 12px;
+ white-space: nowrap;
+ opacity: 0;
+ pointer-events: none;
+ transition: opacity 0.2s;
+ }
+
+ .tooltip:hover::after {
+ opacity: 1;
+ }
+
+ /* Neural Network Canvas */
+ #neural-network-canvas {
+ opacity: 0.15;
+ }