# AXERA Lite WebUI
A lightweight, local-first chat UI for OpenAI-compatible APIs.
It runs entirely in the browser, supports streaming responses, and stores conversations locally.
## Features
- OpenAI-compatible chat interface
- Streaming responses
- Markdown rendering with code highlighting
- Local conversation history in the browser
- Image upload and paste for vision-capable models
- Audio transcription workflow for compatible APIs
- Configurable context window and auto-reset threshold
- Light and dark themes
## Requirements
- Node.js 18+
- An OpenAI-compatible API that supports:
  - `GET /v1/models`
  - `POST /v1/chat/completions`
- Optional audio support:
  - `POST /v1/audio/transcriptions`
## Quick Start
```bash
npm install
npm run dev
```
Open:
```text
http://localhost:5173
```
## First-Time Setup
1. Open **Settings**.
2. Enter your **API Base URL**.
3. Enter your **API Key** if your provider requires one.
4. Set **Max Context Tokens** to the real limit of your model.
5. Adjust **Auto-reset Threshold (%)** if needed.
6. Click **Save Settings**.
7. Click **Fetch Models**.
8. Select a model from the top bar.
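As a rough sketch, the values saved in these steps amount to an object like the following. Field names and the threshold value are illustrative, not the app's actual storage keys:

```javascript
// Illustrative shape of the saved settings — field names are
// hypothetical and do not match the app's real storage keys.
const settings = {
  apiBaseUrl: "http://127.0.0.1:8000", // server root, no trailing /v1
  apiKey: "",                          // empty if the provider needs none
  maxContextTokens: 8192,              // the model's real context limit
  autoResetThresholdPercent: 80,       // reset API context near this usage
};
```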

### API Base URL
Use the server root URL. Do **not** append `/v1`.
Correct:
```text
http://127.0.0.1:8000
http://127.0.0.1:11434
https://your-api.example.com
```
Incorrect:
```text
http://127.0.0.1:8000/v1
http://127.0.0.1:11434/v1
```
## Usage
- **Send:** `Enter`
- **New line:** `Shift+Enter`
- **Reset API context only:** `/reset`
- **Clear current conversation and API context:** `/clean`
### Images
- Upload an image with the image button, or paste an image into the input box.
- The current model must have **Vision** enabled.
### Audio
- Attach an audio file from the input bar.
- The current model must have **Audio** enabled.
- Your API must support `POST /v1/audio/transcriptions`.
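For reference, a transcription request to that endpoint is a multipart form with `file` and `model` fields. A sketch of the payload the UI would send (the helper and the default model name are placeholders, not the app's actual code):

```javascript
// Sketch: build a POST /v1/audio/transcriptions request.
// Requires Node 18+ or a browser for the global FormData/Blob.
// The default model name is a placeholder — use what your backend serves.
function buildTranscriptionRequest(baseUrl, audioFile, model = "whisper-1") {
  const form = new FormData();
  form.append("file", audioFile, "clip.wav");
  form.append("model", model);
  return {
    url: `${baseUrl}/v1/audio/transcriptions`,
    method: "POST",
    body: form,
  };
}
```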
### Context Badge
The `ctx x/y` badge in the input bar shows:
- `x`: the current estimated context usage
- `y`: the configured context window limit
When estimated usage crosses the configured auto-reset threshold, the app resets the API context before the next send.
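The reset decision boils down to a percentage check. A sketch under that assumption (the function and exact trigger rule are illustrative; the app's real token accounting may differ):

```javascript
// Sketch: decide whether to auto-reset before the next send.
// `used` and `limit` are token counts; `thresholdPercent` corresponds
// to the Auto-reset Threshold (%) setting. Illustrative, not app code.
function shouldAutoReset(used, limit, thresholdPercent) {
  return (used / limit) * 100 >= thresholdPercent;
}
```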
## Development and Deployment
### Development
```bash
npm run dev
```
Development mode includes a built-in proxy, which is useful when your API does not allow cross-origin (CORS) browser requests.
### Production Preview
```bash
npm run build
npm run preview
```
The production build is static. Your API must either allow direct browser access (CORS), sit behind a reverse proxy, or share the same origin as the frontend.
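For example, a reverse proxy can serve the static build and the API from one origin, which sidesteps CORS entirely. An illustrative nginx sketch, where the document root and upstream port are placeholders for your own setup:

```nginx
# Illustrative nginx config: serve the static build and proxy the API
# from the same origin. Paths and the upstream port are placeholders.
server {
    listen 80;

    # Static frontend (output of `npm run build`)
    root /var/www/axera-lite-webui/dist;
    try_files $uri /index.html;

    # Forward OpenAI-compatible API calls to the backend
    location /v1/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_buffering off;  # keep streamed responses unbuffered
    }
}
```

With this layout, the **API Base URL** in Settings is simply the site's own origin.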
## Commands
```bash
npm run dev
npm run build
npm run preview
npm test
```
## Data Storage
Settings, model capability overrides, theme preference, and conversations are stored in your browser with `localStorage`.
Clearing site storage resets the app.
## Troubleshooting
### Fetch Models fails
- Make sure you clicked **Save Settings** first.
- Make sure **API Base URL** does not include `/v1`.
- Confirm your API supports `GET /v1/models`.
- If it works in `npm run dev` but fails in preview or production, check CORS.
### Model list is empty
- Click **Fetch Models**.
- Verify the request succeeded.
- Confirm your API returns models from `GET /v1/models`.
### Image button is disabled
- Enable **Vision** for the selected model in **Settings**.
### Audio button is disabled
- Enable **Audio** for the selected model in **Settings**.
- Confirm your backend supports audio transcription.
### Requests fail even though the server is reachable
Make sure your backend is compatible with the OpenAI Chat Completions format, in particular the `model` and `messages` fields and streamed responses.
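At minimum, requests follow the standard Chat Completions payload shape. A sketch (the helper name is illustrative; the message content is a placeholder):

```javascript
// Sketch: the minimal Chat Completions payload shape the UI relies on.
function buildChatRequest(model, messages) {
  return {
    model,        // e.g. a name returned by GET /v1/models
    messages,     // [{ role: "user", content: "Hello" }, ...]
    stream: true, // the UI renders responses incrementally
  };
}
```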