# NovaChat Platform

A production-grade AI chatbot platform with live web search, streaming responses, and a modern UI.
## Folder Structure

- backend
  - src
    - ai (routing, memory, context compression)
    - search (search, extraction, ranking, cache)
    - utils (LLM streaming client)
- frontend
  - src (React UI)
## Requirements

- Node.js 20+
- npm 10+
## Backend Setup

```bash
cd backend
npm install
npm run build
export LLM_API_KEY=YOUR_KEY_HERE
export LLM_MODEL=gpt-4o-mini
export LLM_API_BASE=https://api.openai.com
npm start
```
## Frontend Setup

```bash
cd frontend
npm install
npm run dev
```

The frontend dev server proxies API calls to the backend at `http://localhost:8080`.
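Since the preview port `4173` matches Vite's default, the proxy is most likely configured in `vite.config.ts`. A minimal sketch of that setup follows; the `/api` path prefix is an assumption, so check the actual config in `frontend/` for the real route prefix:

```typescript
// vite.config.ts — hypothetical sketch, not the project's actual config.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // Forward API requests from the dev server to the backend.
      // "/api" is an assumed prefix; adjust to match the backend routes.
      "/api": "http://localhost:8080",
    },
  },
});
```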
## Docker Setup (Compose)

```bash
export LLM_API_KEY=YOUR_KEY_HERE
export LLM_MODEL=gpt-4o-mini
export LLM_API_BASE=https://api.openai.com
docker compose up --build
```

Open `http://localhost:4173`.
## Performance Notes

- SSE streaming for token-by-token delivery.
- Parallel search fetch + content extraction.
- LRU caches for search results and conversation memory.
- Minimal synchronous blocking in the request path.
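The LRU caching mentioned above can be sketched with a plain `Map`, which preserves insertion order: refreshing a key on access keeps the least recently used entry first in iteration order. This is only an illustrative sketch under that assumption, not the project's actual cache implementation:

```typescript
// Minimal LRU cache sketch. A Map iterates keys in insertion order,
// so re-inserting a key on access moves it to the "most recent" end
// and the first key is always the least recently used.
class LruCache<K, V> {
  private map = new Map<K, V>();

  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value === undefined) return undefined;
    this.map.delete(key); // refresh recency
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first in iteration order).
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}
```

A cache like this bounds memory for search results and conversation memory without a background eviction job, since eviction happens inline on `set`.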
## Environment Variables

- `LLM_API_KEY`: API key for OpenAI-compatible endpoint
- `LLM_API_BASE`: Base URL for the OpenAI-compatible API (default: `https://api.openai.com`)
- `LLM_MODEL`: Model id (default: `gpt-4o-mini`)
- `PORT`: Backend port (default: `8080`)
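The defaults above suggest a config loader that falls back when a variable is unset and fails fast when the required key is missing. A minimal sketch, assuming this shape (the actual backend may read these differently):

```typescript
// Hypothetical config-loading sketch; names mirror the variables documented above.
type Env = Record<string, string | undefined>;

interface Config {
  apiKey: string;
  apiBase: string;
  model: string;
  port: number;
}

function loadConfig(env: Env): Config {
  const apiKey = env.LLM_API_KEY;
  // LLM_API_KEY has no safe default, so fail fast at startup.
  if (!apiKey) throw new Error("LLM_API_KEY is required");
  return {
    apiKey,
    apiBase: env.LLM_API_BASE ?? "https://api.openai.com",
    model: env.LLM_MODEL ?? "gpt-4o-mini",
    port: Number(env.PORT ?? 8080),
  };
}
```

Validating once at startup (e.g. `loadConfig(process.env)`) surfaces a missing key immediately instead of on the first LLM request.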