victor HF Staff committed on
Commit
008e1e3
1 Parent(s): 92b5726

Update README for improved setup instructions


Clarifies local setup instructions, corrects Docker port mapping, and updates LLM router description to reflect server-side smart routing. These changes improve accuracy and usability of the documentation.

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -53,7 +53,7 @@ npm install
 npm run dev -- --open
 ```

-You now have Chat UI running against the Hugging Face router without needing to host MongoDB yourself.
+You now have Chat UI running locally. Open the browser and start chatting.

 ## Database Options

@@ -95,7 +95,7 @@ Prefer containerized setup? You can run everything in one container as long as y
 ```bash
 docker run \
- -p 3000 \
+ -p 3000:3000 \
  -e MONGODB_URL=mongodb://host.docker.internal:27017 \
  -e OPENAI_BASE_URL=https://router.huggingface.co/v1 \
  -e OPENAI_API_KEY=hf_*** \
@@ -128,7 +128,7 @@ This build does not use the `MODELS` env var or GGUF discovery. Configure models
 ### LLM Router (Optional)

-Chat UI can perform client-side routing [katanemo/Arch-Router-1.5B](https://huggingface.co/katanemo/Arch-Router-1.5B) as the routing model without running a separate router service. The UI exposes a virtual model alias called "Omni" (configurable) that, when selected, chooses the best route/model for each message.
+Chat UI can perform server-side smart routing using [katanemo/Arch-Router-1.5B](https://huggingface.co/katanemo/Arch-Router-1.5B) as the routing model without running a separate router service. The UI exposes a virtual model alias called "Omni" (configurable) that, when selected, chooses the best route/model for each message.

 - Provide a routes policy JSON via `LLM_ROUTER_ROUTES_PATH`. No sample file ships with this branch, so you must point the variable to a JSON array you create yourself (for example, commit one in your project like `config/routes.chat.json`). Each route entry needs `name`, `description`, `primary_model`, and optional `fallback_models`.
 - Configure the Arch router selection endpoint with `LLM_ROUTER_ARCH_BASE_URL` (OpenAI-compatible `/chat/completions`) and `LLM_ROUTER_ARCH_MODEL` (e.g. `router/omni`). The Arch call reuses `OPENAI_API_KEY` for auth.
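
Since no sample routes file ships with this branch, here is a minimal sketch of what a policy for `LLM_ROUTER_ROUTES_PATH` might look like, based only on the schema the diff describes (`name`, `description`, `primary_model`, optional `fallback_models`). The route names and model IDs below are hypothetical, and the validation helper is just an illustration, not a shipped tool:

```python
import json

# Hypothetical routes policy — names and model IDs are examples only.
routes_json = """
[
  {
    "name": "general",
    "description": "Default route for everyday questions",
    "primary_model": "meta-llama/Llama-3.1-8B-Instruct",
    "fallback_models": ["mistralai/Mistral-7B-Instruct-v0.3"]
  },
  {
    "name": "code",
    "description": "Programming and debugging requests",
    "primary_model": "Qwen/Qwen2.5-Coder-32B-Instruct"
  }
]
"""

def validate_routes(raw: str) -> list[dict]:
    """Parse a routes policy and check each entry has the required keys."""
    routes = json.loads(raw)
    for route in routes:
        missing = {"name", "description", "primary_model"} - route.keys()
        if missing:
            raise ValueError(f"route {route.get('name')!r} missing {sorted(missing)}")
    return routes

print([r["name"] for r in validate_routes(routes_json)])  # ['general', 'code']
```

A quick check like this before pointing `LLM_ROUTER_ROUTES_PATH` at the file can catch schema mistakes early, since the UI itself defines the required keys.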