# AXERA Lite WebUI
A lightweight, local-first chat UI for OpenAI-compatible APIs.
It runs entirely in the browser, supports streaming responses, and stores conversations locally.
## Features
- OpenAI-compatible chat interface
- Streaming responses
- Markdown rendering with code highlighting
- Local conversation history in the browser
- Image upload and paste for vision-capable models
- Audio transcription workflow for compatible APIs
- Configurable context window and auto-reset threshold
- Light and dark themes
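Streaming responses from an OpenAI-compatible API arrive as Server-Sent Events, one `data:` line per chunk. As a rough sketch of how each line becomes a text delta (`parseDelta` is a hypothetical helper for illustration, not this app's actual code):

```typescript
// Parse one SSE line from a streaming /v1/chat/completions response.
// Returns the text delta, or null for non-data lines and the [DONE] marker.
function parseDelta(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null; // end-of-stream sentinel
  const chunk = JSON.parse(payload);
  // Each chunk carries an incremental delta rather than the full message.
  return chunk.choices?.[0]?.delta?.content ?? null;
}
```

Concatenating the non-null deltas reproduces the assistant's full reply.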
## Requirements
- Node.js 18+
- An OpenAI-compatible API that supports:
  - `GET /v1/models`
  - `POST /v1/chat/completions`
- Optional audio support:
  - `POST /v1/audio/transcriptions`
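The chat endpoint expects the standard OpenAI request shape. A minimal sketch of the payload sent to `POST /v1/chat/completions` (field names follow the Chat Completions format; `buildChatRequest` is a hypothetical helper, not part of this repo):

```typescript
// Message roles as defined by the Chat Completions format.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build the JSON body for POST /v1/chat/completions.
function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    model,
    messages,
    stream: true, // the UI consumes streaming responses
  };
}
```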
## Quick Start

```shell
npm install
npm run dev
```

Open:

```
http://localhost:5173
```
## First-Time Setup

1. Open Settings.
2. Enter your API Base URL.
3. Enter your API Key if your provider requires one.
4. Set Max Context Tokens to the real limit of your model.
5. Adjust Auto-reset Threshold (%) if needed.
6. Click Save Settings.
7. Click Fetch Models.
8. Select a model from the top bar.
## API Base URL

Use the server root URL. Do not append `/v1`.

Correct:

```
http://127.0.0.1:8000
http://127.0.0.1:11434
https://your-api.example.com
```

Incorrect:

```
http://127.0.0.1:8000/v1
http://127.0.0.1:11434/v1
```
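The reason: the app appends the versioned path itself, so a base URL that already ends in `/v1` would produce requests to `/v1/v1/models`. A hypothetical helper makes this concrete:

```typescript
// Build the models endpoint from the configured base URL.
// Trailing slashes are stripped before the /v1/... path is appended.
function modelsUrl(baseUrl: string): string {
  return baseUrl.replace(/\/+$/, "") + "/v1/models";
}
```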
## Usage

- Send: `Enter`
- New line: `Shift+Enter`
- Reset API context only: `/reset`
- Clear current conversation and API context: `/clean`
## Images
- Upload an image with the image button, or paste an image into the input box.
- The current model must have Vision enabled.
## Audio

- Attach an audio file from the input bar.
- The current model must have Audio enabled.
- Your API must support `POST /v1/audio/transcriptions`.
## Context Badge

The `ctx x/y` badge in the input bar shows:

- the current estimated context usage
- the configured context window limit
When usage approaches the configured threshold, the app automatically resets API context before the next send.
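Assuming usage is an estimated token count, the auto-reset decision can be sketched like this (`shouldAutoReset` is a hypothetical illustration, not the app's actual code):

```typescript
// Decide whether to reset API context before the next send, based on
// estimated usage, the configured context window, and the threshold in %.
function shouldAutoReset(
  usedTokens: number,
  maxTokens: number,
  thresholdPct: number,
): boolean {
  return (usedTokens / maxTokens) * 100 >= thresholdPct;
}
```

For example, with a 1000-token window and an 85% threshold, 900 used tokens (90%) triggers a reset.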
## Development and Deployment

### Development

```shell
npm run dev
```
Development mode includes a built-in proxy, which is useful when your API does not allow browser CORS requests during local development.
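For reference, a typical Vite dev-server proxy entry looks like this (illustrative only; the repo's actual `vite.config.ts` and your backend address may differ):

```typescript
// vite.config.ts — forward /v1/* to a local backend during development,
// so the browser never issues a cross-origin request.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/v1": {
        target: "http://127.0.0.1:8000", // assumed backend address
        changeOrigin: true,
      },
    },
  },
});
```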
### Production Preview

```shell
npm run build
npm run preview
```
The production build is static. Your API must allow browser access directly, be served behind a reverse proxy, or share the same origin as the frontend.
## Commands

```shell
npm run dev
npm run build
npm run preview
npm test
```
## Data Storage

Settings, model capability overrides, theme preference, and conversations are stored in your browser via `localStorage`.
Clearing site storage resets the app.
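As a rough sketch of this persistence pattern (the key name and store interface are assumptions for illustration, not the app's actual schema; in the browser you would pass `window.localStorage`):

```typescript
// Minimal interface matching the localStorage methods used here.
interface StringStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Serialize conversations to a single JSON entry.
function saveConversations(store: StringStore, conversations: unknown[]): void {
  store.setItem("conversations", JSON.stringify(conversations));
}

// Load conversations, falling back to an empty list when storage is clear.
function loadConversations(store: StringStore): unknown[] {
  const raw = store.getItem("conversations");
  return raw ? (JSON.parse(raw) as unknown[]) : [];
}
```

The empty-list fallback is why clearing site storage simply resets the app rather than breaking it.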
## Troubleshooting

### Fetch Models fails

- Make sure you clicked Save Settings first.
- Make sure API Base URL does not include `/v1`.
- Confirm your API supports `GET /v1/models`.
- If it works in `npm run dev` but fails in preview or production, check CORS.
### Model list is empty

- Click Fetch Models.
- Verify the request succeeded.
- Confirm your API returns models from `GET /v1/models`.
### Image button is disabled
- Enable Vision for the selected model in Settings.
### Audio button is disabled
- Enable Audio for the selected model in Settings.
- Confirm your backend supports audio transcription.
### Requests fail even though the server is reachable

Make sure your backend is compatible with the OpenAI Chat Completions format, especially `model`, `messages`, and streaming responses.
