# J.A.R.V.I.S - Just A Rather Very Intelligent System

An intelligent AI assistant built with FastAPI, LangChain, and Groq AI. JARVIS provides two modes of interaction: General Chat (pure LLM, no web search) and Realtime Chat (with Tavily web search). The system learns from user data files and past conversations, maintaining context across sessions.

## 🚀 Quick Start

### Prerequisites

- Python 3.8+ with pip
- Operating system: Windows, macOS, or Linux (fully cross-platform)
- API keys (set in a `.env` file):
  - `GROQ_API_KEY` - get it from https://console.groq.com (required). You can add more keys for round-robin and fallback (see "Multiple Groq API keys").
  - `TAVILY_API_KEY` - get it from https://tavily.com (optional, for realtime mode)
  - `GROQ_MODEL` - optional, defaults to `llama-3.3-70b-versatile`

### Installation

1. Clone or download this repository.

2. Install the Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Create a `.env` file in the project root:

   ```env
   GROQ_API_KEY=your_groq_api_key_here
   # Optional: add more keys for round-robin and fallback (GROQ_API_KEY_2, GROQ_API_KEY_3, ...)
   TAVILY_API_KEY=your_tavily_api_key_here
   GROQ_MODEL=llama-3.3-70b-versatile
   # Optional: assistant name (default: Jarvis). Tone and personality stay the same.
   # ASSISTANT_NAME=Jarvis
   # Optional: how to address the user; otherwise derived from learning data/chats.
   # JARVIS_USER_TITLE=Sir
   ```

4. Start the server:

   ```bash
   python run.py
   ```

   The server starts at http://localhost:8000.

5. Test the system (in another terminal):

   ```bash
   python test.py
   ```

## 📋 Features

### Core Features

- ✅ Dual Chat Modes: General chat (pure LLM, no web search) and Realtime chat (with Tavily search)
- ✅ Session Management: Conversations persist across server restarts
- ✅ Learning System: Learns from user data files and past conversations via semantic search, so prompts stay within token limits. No hardcoded names: the assistant name and user title come from ASSISTANT_NAME and JARVIS_USER_TITLE in .env, or from learning data and chats.
- ✅ Learning Data Updates: Add or edit .txt files in database/learning_data/ and they are picked up on restart (or by the 15-second hot-reload check)
- ✅ Vector Store: FAISS index of learning data plus past chats; only relevant chunks are sent to the LLM, so you never hit token limits
- ✅ Assistant Personality: Sophisticated, witty, professional tone with British humor (name configurable via ASSISTANT_NAME in .env)

### Technical Features

- **Learning data**: All .txt files in database/learning_data/ are indexed in the vector store. The AI answers from this data by retrieving relevant chunks per question (not by sending all text in every prompt), so you can add many files without exceeding token limits.
- **Hot-reload**: A background check runs every 15 seconds. If any .txt in learning_data/ is new or modified, the vector store is rebuilt so new content is learned almost instantly.
- **Curly Brace Escaping**: Prevents LangChain template-variable errors
- **Smart Response Length**: Adapts answer length to question complexity
- **Clean Formatting**: No markdown, asterisks, or emojis in responses
- **Time Awareness**: The AI knows the current date and time

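The 15-second hot-reload check can be sketched as a simple mtime poll. This is a hypothetical helper for illustration, not the actual server code; the real implementation runs in a background task and triggers a vector-store rebuild:

```python
from pathlib import Path


def scan_changes(folder: str, seen: dict) -> list:
    """Return names of .txt files that are new or modified since the last scan.

    `seen` maps file path -> last observed mtime and is updated in place.
    The caller rebuilds the vector store whenever the result is non-empty.
    """
    changed = []
    for f in sorted(Path(folder).glob("*.txt")):
        mtime = f.stat().st_mtime
        if seen.get(str(f)) != mtime:
            seen[str(f)] = mtime
            changed.append(f.name)
    return changed
```

Calling this every 15 seconds with a persistent `seen` dict reports exactly the files that were added or edited since the previous pass.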
## 🏗️ Architecture

### System Overview

```
User Input
    ↓
FastAPI Endpoints (/chat or /chat/realtime)
    ↓
ChatService (Session Management)
    ↓
GroqService or RealtimeGroqService
    ↓
VectorStoreService (Context Retrieval)
    ↓
Groq AI (LLM Response Generation)
```

### Component Breakdown

**FastAPI Application (app/main.py)**
- REST API endpoints
- CORS middleware
- Application lifespan management

**Chat Service (app/services/chat_service.py)**
- Session creation and management
- Message storage (in memory and on disk)
- Conversation history formatting

**Groq Service (app/services/groq_service.py)**
- General chat mode (pure LLM, no web search)
- Retrieves relevant context from the vector store (learning data + past chats) per request; no full-text dump, so token usage stays bounded

**Realtime Service (app/services/realtime_service.py)**
- Extends GroqService
- Adds Tavily web search
- Combines search results with the AI's own knowledge

**Vector Store Service (app/services/vector_store.py)**
- FAISS vector database
- Embeddings generation (HuggingFace)
- Semantic search for context retrieval

**Configuration (config.py)**
- Centralized settings
- User context loading
- System prompt definition

## 📁 Project Structure

```
JARVIS/
├── app/
│   ├── __init__.py
│   ├── main.py                 # FastAPI application and API endpoints
│   ├── models.py               # Pydantic data models
│   ├── services/
│   │   ├── __init__.py
│   │   ├── chat_service.py     # Session and conversation management
│   │   ├── groq_service.py     # General chat AI service
│   │   ├── realtime_service.py # Realtime chat with web search
│   │   └── vector_store.py     # FAISS vector store and embeddings
│   └── utils/
│       ├── __init__.py
│       └── time_info.py        # Current date/time information
├── database/
│   ├── learning_data/          # User data files (.txt)
│   │   ├── userdata.txt        # Personal information (auto-loaded)
│   │   ├── system_context.txt  # System context (auto-loaded)
│   │   └── *.txt               # Any other .txt files (auto-loaded)
│   ├── chats_data/             # Saved conversations (.json)
│   └── vector_store/           # FAISS index files
├── config.py                   # Configuration and settings
├── run.py                      # Server startup script
├── test.py                     # CLI test interface
├── requirements.txt            # Python dependencies
└── README.md                   # This file
```

## 🔌 API Endpoints

### POST /chat

General chat endpoint (pure LLM, no web search).

**Request:**
```json
{
  "message": "What is Python?",
  "session_id": "optional-session-id"
}
```

**Response:**
```json
{
  "response": "Python is a high-level programming language...",
  "session_id": "session-id-here"
}
```

### POST /chat/realtime

Realtime chat endpoint (with Tavily web search).

**Request:**
```json
{
  "message": "What's the latest AI news?",
  "session_id": "optional-session-id"
}
```

**Response:**
```json
{
  "response": "Based on recent search results...",
  "session_id": "session-id-here"
}
```

### GET /chat/history/{session_id}

Get the chat history for a session.

**Response:**
```json
{
  "session_id": "session-id",
  "messages": [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Good day. How may I assist you?"}
  ]
}
```

### GET /health

Health check endpoint.

**Response:**
```json
{
  "status": "healthy",
  "vector_store": true,
  "groq_service": true,
  "realtime_service": true,
  "chat_service": true
}
```

### GET /

API information endpoint.

## 🧠 How It Works

### 1. Learning Data and Vector Store

- **At startup**: All .txt files in database/learning_data/ and all past chats in chats_data/ are loaded, chunked, embedded, and stored in a FAISS vector store.
- **Picking up new learning data**: After adding or changing .txt files in learning_data/, restart the server (the vector store is rebuilt on startup) or wait for the 15-second hot-reload check.
- **No full dump**: Learning data is never sent to the LLM in full. Only the top-k retrieved chunks (from learning data and past conversations) are included per request, so token usage stays bounded.

### 2. Vector Store Creation

On startup (and when learning_data changes), the service:
1. Loads all .txt files from learning_data/
2. Loads all past conversations from chats_data/
3. Converts the text to embeddings using a HuggingFace model
4. Builds a FAISS index for fast similarity search
5. Saves the index to disk

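The loading and chunking steps (1-2) can be sketched as follows. This is a simplified stand-in for illustration: the fixed-size character chunker replaces the real text splitter, and the embedding and FAISS-indexing steps (3-5) are omitted:

```python
import json
from pathlib import Path


def chunk(text: str, size: int = 400) -> list:
    # Toy fixed-size character chunker (real code would use a text splitter).
    return [text[i:i + size] for i in range(0, len(text), size)]


def collect_documents(learning_dir: str, chats_dir: str) -> list:
    """Gather text chunks from .txt learning files and saved chat JSON (sketch)."""
    docs = []
    for f in sorted(Path(learning_dir).glob("*.txt")):
        docs += [{"source": f.name, "text": c}
                 for c in chunk(f.read_text(encoding="utf-8"))]
    for f in sorted(Path(chats_dir).glob("*.json")):
        session = json.loads(f.read_text(encoding="utf-8"))
        for m in session.get("messages", []):
            docs.append({"source": f.name, "text": m["content"]})
    return docs  # next: embed each doc and add it to a FAISS index
```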
### 3. Message Processing (General Mode)

1. User sends a message via the /chat endpoint
2. ChatService creates or retrieves the session
3. The user message is stored in the session
4. GroqService retrieves relevant context from the vector store:
   - Relevant chunks from learning data (.txt files) and past conversations (semantic search)
   - Current time information
5. The system prompt is built with all context
6. Groq AI generates a response
7. The response is stored in the session
8. The session is saved to disk

### 4. Message Processing (Realtime Mode)

1. User sends a message via the /chat/realtime endpoint
2. Same session management as general mode
3. RealtimeGroqService:
   - Searches Tavily for real-time information
   - Retrieves relevant context (same as general mode)
   - Combines search results with that context
   - Generates a response with current information
4. The response is stored and saved

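The "combines search results with context" step might look like the helper below. This is a hypothetical sketch, not the project's actual code; the `title`/`content`/`url` field names are assumptions about the shape of the search results:

```python
def build_realtime_context(results: list, max_results: int = 5) -> str:
    """Format web-search hits into a text block appended to the prompt (sketch)."""
    lines = [
        f"- {r['title']}: {r['content']} (source: {r['url']})"
        for r in results[:max_results]
    ]
    return "Recent web results:\n" + "\n".join(lines)
```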
### 5. Context Retrieval

When answering a question:
1. The vector store performs a semantic search
2. It finds the most relevant documents (k=6 by default)
3. Documents can come from:
   - Learning data files
   - Past conversations
4. The context is escaped (curly braces) to prevent template errors
5. The context is added to the system prompt

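The retrieval step can be illustrated with a toy cosine-similarity search. In the real service FAISS does this over HuggingFace embeddings; the vectors here are made-up two-dimensional examples:

```python
import math


def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def top_k(query_vec, docs, k=6):
    """Return the k documents whose embeddings are most similar to the query."""
    return sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:k]
```

Only the `top_k` chunks (from learning data or past chats alike) are escaped and placed into the system prompt, which is why prompt size stays bounded no matter how much data is indexed.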
### 6. Session Management

- **Server-managed**: If no session_id is provided, the server generates a UUID
- **User-managed**: If a session_id is provided, the server uses it
- Sessions persist across server restarts (loaded from disk)
- /chat and /chat/realtime share the same session
- Sessions are saved to database/chats_data/ as JSON files

## 🎯 Usage Examples

### Using test.py (CLI Interface)

```bash
python test.py
```

**Commands:**
- `1` - Switch to General Chat mode
- `2` - Switch to Realtime Chat mode
- `/history` - View chat history
- `/clear` - Start a new session
- `/quit` - Exit

### Using Python Requests

```python
import requests

# General chat
response = requests.post(
    "http://localhost:8000/chat",
    json={
        "message": "What is machine learning?",
        "session_id": "my-session-id"
    }
)
print(response.json()["response"])

# Realtime chat
response = requests.post(
    "http://localhost:8000/chat/realtime",
    json={
        "message": "What's happening in AI today?",
        "session_id": "my-session-id"  # Same session continues
    }
)
print(response.json()["response"])
```

## 🔧 Configuration

### Environment Variables

Create a `.env` file in the project root:

```env
# Required
GROQ_API_KEY=your_groq_api_key

# Optional: add more keys for round-robin and fallback when one hits a rate limit
# GROQ_API_KEY_2=second_key
# GROQ_API_KEY_3=third_key

# Optional (for realtime mode)
TAVILY_API_KEY=your_tavily_api_key

# Optional (defaults to llama-3.3-70b-versatile)
GROQ_MODEL=llama-3.3-70b-versatile

# Optional: assistant name (default: Jarvis). Tone and personality stay the same.
# ASSISTANT_NAME=Jarvis

# Optional: how to address the user (e.g. "Sir", "Mr. Smith"). If not set, the AI uses
# only learning data and conversation history to address the user (no hardcoded names).
# JARVIS_USER_TITLE=
```

### Multiple Groq API keys

You can add multiple Groq API keys; the server rotates through them one request at a time and falls back to the next key when one fails.

- **Round-robin**: Each request uses the next key in order (1st request → 1st key, 2nd request → 2nd key, and so on, wrapping back to the first). Every key you provide is used in turn; none is skipped.
- **Fallback**: If the chosen key fails (e.g. a 429 rate limit or any other error), the server tries the next key, then the next, until one succeeds or all have been tried.

In your .env, set as many keys as you want using this pattern:

```env
GROQ_API_KEY=your_first_key
GROQ_API_KEY_2=your_second_key
GROQ_API_KEY_3=your_third_key
# Add more: GROQ_API_KEY_4, GROQ_API_KEY_5, ... (no upper limit)
```

Only GROQ_API_KEY is required; add GROQ_API_KEY_2, GROQ_API_KEY_3, etc. for extra keys. Each key has its own daily token limit on Groq's free tier, so multiple keys give you more capacity. The round-robin and fallback logic lives in app/services/groq_service.py (see _invoke_llm and the module docstring for a line-by-line explanation).

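Round-robin with fallback might be implemented roughly like this. This is a sketch under the behavior described above, not the actual code in groq_service.py:

```python
import itertools


class KeyRotator:
    """Rotate through API keys per request; on failure, fall back to the next key."""

    def __init__(self, keys):
        if not keys:
            raise ValueError("at least one API key is required")
        self.keys = list(keys)
        self._starts = itertools.cycle(range(len(self.keys)))

    def invoke(self, call):
        # `call` is a function that takes an API key and performs the request.
        start = next(self._starts)  # round-robin starting point for this request
        last_error = None
        for offset in range(len(self.keys)):
            key = self.keys[(start + offset) % len(self.keys)]
            try:
                return call(key)
            except Exception as exc:  # e.g. an HTTP 429 rate-limit error
                last_error = exc
        raise RuntimeError("all Groq API keys failed") from last_error
```

Each request advances the starting key, and a failing key is simply skipped in favor of the next one, so a single rate-limited key never blocks a response while others still have quota.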
### System Prompt Customization

Edit config.py to modify:
- Assistant personality and tone (the assistant name is set via ASSISTANT_NAME in .env)
- Response length guidelines
- Formatting rules
- General behavior guidelines

### User Data Files

Add any .txt files to database/learning_data/:
- Files are automatically detected and loaded
- Content is indexed in the vector store, and relevant chunks are retrieved per question (not dumped wholesale into the prompt)
- Files are loaded in alphabetical order
- No code changes are needed when adding new files

**Example files:**
- userdata.txt - personal information
- system_context.txt - system context
- usersinterest.txt - user interests
- Any other .txt file you add

## 🛠️ Technologies Used

### Backend

- **FastAPI**: Modern Python web framework
- **LangChain**: LLM application framework
- **Groq AI**: Fast LLM inference (Llama 3.3 70B)
- **Tavily**: AI-optimized web search API
- **FAISS**: Vector similarity search
- **HuggingFace**: Embeddings model (sentence-transformers)
- **Pydantic**: Data validation
- **Uvicorn**: ASGI server

### Data Storage

- **JSON files**: Chat session storage
- **FAISS index**: Vector embeddings storage
- **Text files**: User learning data

## 📝 Key Features Explained

### Learning Data

- **Indexing**: All .txt files in database/learning_data/ are indexed in the vector store alongside past chats. The AI retrieves only relevant chunks per question, so token usage stays bounded and you can add many files without hitting limits.
- **Picking up new files**: New or changed .txt files in learning_data/ are loaded on server restart (the vector store is rebuilt on startup) or by the 15-second hot-reload check.
- **No full dump**: The system never sends all learning data in a prompt; semantic search pulls only what is relevant, so you never hit the token limit.

### Curly Brace Escaping

The escape_curly_braces() function:
- Prevents LangChain from interpreting {variable} as a template variable
- Escapes braces by doubling them: { → {{ and } → }}
- Is applied to all context before it is added to the system prompt

**Why this matters**: Without it, learning data or chat history containing curly braces (JSON snippets, code, etc.) would raise template-variable errors.

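A minimal version of the escaping looks like this (the actual function lives in the service code; this sketch only mirrors the doubling rule described above):

```python
def escape_curly_braces(text: str) -> str:
    # Double each brace so LangChain's prompt template treats it as a literal.
    return text.replace("{", "{{").replace("}", "}}")
```

For example, `escape_curly_braces('{"user": "name"}')` yields `{{"user": "name"}}`, which a LangChain template renders back as the original literal text.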
### Vector Store

The vector store:
- Converts text to numerical embeddings
- Stores the embeddings in a FAISS index
- Enables fast similarity search
- Is rebuilt on startup and whenever learning data changes, so it stays current

**Why this matters**: It lets JARVIS find relevant information from past conversations and learning data.

### Session Persistence

Sessions are:
- Stored in memory during active use
- Saved to disk after each message
- Loaded from disk on server restart
- Shared between general and realtime modes

**Why this matters**: Conversations continue seamlessly across server restarts.

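Per-message persistence can be sketched like this. `save_session` is a hypothetical helper (the real file layout may differ); it also shows the path-traversal check on session IDs mentioned in the security notes:

```python
import json
from pathlib import Path


def save_session(session_id: str, messages: list, data_dir: str) -> Path:
    """Write a session to <data_dir>/<session_id>.json, rejecting path traversal."""
    if "/" in session_id or "\\" in session_id or ".." in session_id:
        raise ValueError("invalid session id")
    folder = Path(data_dir)
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{session_id}.json"
    payload = {"session_id": session_id, "messages": messages}
    path.write_text(json.dumps(payload, ensure_ascii=False, indent=2),
                    encoding="utf-8")
    return path
```

On restart, the server can simply read every `*.json` in the directory back into memory to restore sessions.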
## 🐛 Troubleshooting

### Server won't start

- Check that GROQ_API_KEY is set in .env
- Ensure all dependencies are installed: `pip install -r requirements.txt`
- Check that port 8000 is not already in use

### "Cannot connect to backend"

- Make sure the server is running: `python run.py`
- Check that the server is listening on http://localhost:8000
- Verify no firewall is blocking the connection

### Vector store errors

- Ensure the database/ directories exist
- Check file permissions on the database/ directory
- Delete database/vector_store/ to force an index rebuild

### Template variable errors

- These should be prevented by curly-brace escaping
- Check for any unescaped { or } in learning data files
- Restart the server after fixing

### Realtime mode not working

- Check that TAVILY_API_KEY is set in .env
- Verify the Tavily API key is valid
- Check your internet connection

## 🔒 Security Notes

- Session IDs are validated to prevent path traversal (both / and \ are rejected)
- API keys are stored in .env, not in code
- CORS is enabled for all origins (restrict this in production)
- There is no authentication (add it before production use)

## 🌐 Cross-Platform Compatibility

The code is cross-platform and runs on:

- ✅ Windows 10/11
- ✅ macOS
- ✅ Linux (any mainstream distribution)

**Why it's cross-platform:**
- Uses pathlib.Path for all file paths (handles / vs \ automatically)
- Explicit UTF-8 encoding for all file operations
- No hardcoded path separators
- No shell commands or platform-specific code
- Standard Python libraries only
- Session ID validation checks both / and \ for security

**Tested on:**
- macOS (Darwin)
- Windows and Linux are untested but expected to work, since only standard Python practices are used

## 📚 Development

### Running in Development Mode

```bash
python run.py
```

Auto-reload is enabled, so code changes restart the server automatically.

### Testing

```bash
# CLI test interface
python test.py

# Or use curl
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello"}'
```

### Project Structure Philosophy

- **Separation of concerns**: Each service handles one responsibility
- **Configuration centralization**: All settings live in config.py
- **Type safety**: Pydantic models for validation
- **Documentation**: Comprehensive docstrings in all modules

## 👤 Developers

**Vansh Tiwari**

**Aditya Yadav**

---

## 📄 License

MIT License

---

Made with ❤️ by **Vansh Tiwari** & **Aditya Yadav**

---

## ▶️ Getting Started

**Start chatting**: run `python run.py`, then `python test.py`.