# UI-Venus-1.5-8B bf16
This is an MLX conversion of inclusionAI/UI-Venus-1.5-8B, optimized for Apple Silicon.
UI-Venus-1.5 is a unified end-to-end GUI agent family built for grounding, web navigation, and mobile navigation. The 1.5 family spans dense 2B and 8B variants plus a 30B-A3B MoE variant; the upstream report frames it around a shared GUI semantics stage, online RL for long-horizon navigation, and model merging across the grounding, web, and mobile domains.
This MLX artifact was converted with mlx-vlm and validated locally with both mlx_vlm prompt-packet checks and vllm-mlx OpenAI-compatible serve checks.
## Conversion Details
| Field | Value |
|---|---|
| Upstream model | inclusionAI/UI-Venus-1.5-8B |
| Artifact type | bf16 MLX conversion |
| Conversion tool | mlx_vlm.convert via mlx-vlm 0.3.12 |
| Python | 3.11.14 |
| MLX | 0.31.0 |
| Transformers | 5.2.0 |
| Validation backend | vllm-mlx (phase/p1 @ 8a5d41b) |
| Quantization | bf16 |
| Group size | n/a |
| Quantization mode | n/a |
| Artifact size | 16.34G |
| Template repair | tokenizer_config.json["chat_template"] was re-injected after conversion |
Additional notes:
- Upstream UI-Venus ships both `chat_template.json` and `tokenizer_config.json["chat_template"]`.
- This MLX artifact preserves the dual-template contract across `chat_template.json`, `chat_template.jinja`, and `tokenizer_config.json["chat_template"]`. `chat_template.jinja` is present as an additive compatibility shim for downstream tooling.
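The dual-template contract above can be spot-checked mechanically. The sketch below is a hypothetical verification helper (not part of the conversion pipeline): it assumes the standard Hugging Face layout in which `chat_template.json` holds a `{"chat_template": ...}` object, `chat_template.jinja` holds the raw template text, and `tokenizer_config.json` carries a `chat_template` key, and it checks that every template present in the artifact directory is identical.

```python
import json
import tempfile
from pathlib import Path


def templates_in_sync(artifact_dir: Path) -> bool:
    """Return True when all chat templates shipped in the artifact agree.

    Collects the template string from each of the three possible carriers
    (chat_template.json, chat_template.jinja, tokenizer_config.json) and
    requires at least two to be present and all of them to be identical.
    """
    templates = []
    cj = artifact_dir / "chat_template.json"
    if cj.exists():
        templates.append(json.loads(cj.read_text())["chat_template"])
    jinja = artifact_dir / "chat_template.jinja"
    if jinja.exists():
        templates.append(jinja.read_text())
    tc = artifact_dir / "tokenizer_config.json"
    if tc.exists():
        templates.append(json.loads(tc.read_text()).get("chat_template"))
    return len(templates) >= 2 and len(set(templates)) == 1


# Demo on a mock artifact directory with a placeholder template string.
with tempfile.TemporaryDirectory() as d:
    d = Path(d)
    tpl = "{% for message in messages %}...{% endfor %}"
    (d / "chat_template.json").write_text(json.dumps({"chat_template": tpl}))
    (d / "chat_template.jinja").write_text(tpl)
    (d / "tokenizer_config.json").write_text(json.dumps({"chat_template": tpl}))
    print(templates_in_sync(d))  # → True
```

If one carrier drifts (for example, a downstream tool rewrites only `tokenizer_config.json`), the check fails, which is exactly the condition the template repair noted in the table above was guarding against.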
## Validation
This artifact passed local validation in this workspace:
- `mlx_vlm` prompt-packet validation: **PASS**
- `vllm-mlx` OpenAI-compatible serve validation: **PASS**
Local validation notes:
- Structured action output was strong on the fixed packet and retained the requested `reason` field.
- The main caution was grounding looseness rather than text or schema degeneration.
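The distinction drawn above (schema intact, grounding loose) is the kind of thing a packet check can test mechanically. The sketch below is a hypothetical parser, not the actual validation harness: it assumes the model emits a JSON action object such as `{"action": "click", "coordinate": [x, y], "reason": "..."}`; the real schema depends on the prompt packet used during validation.

```python
import json


def parse_action(raw: str):
    """Parse a structured action emitted by the agent.

    Returns the parsed object, or None when the output shows schema
    degeneration: either the text is not valid JSON, or the required
    "action" / "reason" fields were dropped.
    """
    try:
        obj = json.loads(raw.strip())
    except json.JSONDecodeError:
        return None  # not valid JSON at all
    if "action" not in obj or "reason" not in obj:
        return None  # the requested reason field was dropped
    return obj


sample = '{"action": "click", "coordinate": [412, 188], "reason": "Open settings"}'
parsed = parse_action(sample)
print(parsed["action"], parsed["reason"])  # → click Open settings
```

A parser like this catches schema breakage but not loose grounding; checking whether the emitted coordinate actually lands on the intended control requires comparing against annotated target boxes, which is the looser, more qualitative part of the local validation.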
## Performance
- Artifact size on disk: `16.34G`
- Local fixed-packet `mlx_vlm` validation used about `30.55 GB` peak memory
- Observed local fixed-packet throughput was about `206-244` prompt tok/s and `20.4-26.8` generation tok/s across the four validation prompts
- Local `vllm-mlx` serve validation passed cleanly on both non-stream and streamed multimodal chat
These are local validation measurements, not a full benchmark suite.
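For readers who want to reproduce comparable numbers, the throughput figures above are simple token-count-over-wall-clock ratios, computed separately for the prefill (prompt) and decode (generation) phases. The helper below is a hypothetical illustration of that arithmetic, with example figures chosen to fall inside the ranges reported above; it is not the measurement harness used here.

```python
from dataclasses import dataclass


@dataclass
class ThroughputReport:
    """Per-run token counts and phase timings for one validation prompt."""
    prompt_tokens: int
    generation_tokens: int
    prompt_seconds: float
    generation_seconds: float

    @property
    def prompt_tps(self) -> float:
        # Prefill throughput: prompt tokens processed per second.
        return self.prompt_tokens / self.prompt_seconds

    @property
    def generation_tps(self) -> float:
        # Decode throughput: new tokens generated per second.
        return self.generation_tokens / self.generation_seconds


# Example figures in the range reported above (hypothetical run).
r = ThroughputReport(prompt_tokens=1100, generation_tokens=256,
                     prompt_seconds=5.0, generation_seconds=10.0)
print(round(r.prompt_tps, 1), round(r.generation_tps, 1))  # → 220.0 25.6
```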
## Usage

### Install

```bash
pip install -U mlx-vlm
```
### CLI

```bash
python -m mlx_vlm.generate \
  --model mlx-community/UI-Venus-1.5-8B-bf16 \
  --image path/to/image.png \
  --prompt "Describe the visible controls on this screen." \
  --max-tokens 256 \
  --temperature 0.0
```
### Python

```python
from mlx_vlm import load, generate

model, processor = load("mlx-community/UI-Venus-1.5-8B-bf16")
result = generate(
    model,
    processor,
    prompt="Describe the visible controls on this screen.",
    image="path/to/image.png",
    max_tokens=256,
    temp=0.0,
)
print(result.text)
```
### vllm-mlx Serve

```bash
python -m vllm_mlx.cli serve mlx-community/UI-Venus-1.5-8B-bf16 --mllm --localhost --port 8000
```
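Once the server is up, it speaks the OpenAI chat-completions dialect, so any OpenAI-compatible client can talk to it. The sketch below only builds the multimodal request payload, assuming the conventional `POST /v1/chat/completions` endpoint at `http://localhost:8000` and the standard `image_url` data-URL content part; the endpoint path, field names, and the placeholder image bytes are assumptions, not verified against vllm-mlx internals.

```python
import base64
import json


def build_chat_request(image_bytes: bytes, prompt: str,
                       model: str = "mlx-community/UI-Venus-1.5-8B-bf16") -> dict:
    """Build an OpenAI-compatible multimodal chat payload.

    The image is inlined as a base64 data URL, which is the standard way
    to send local images through the chat-completions content-part schema.
    """
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": data_url}},
                {"type": "text", "text": prompt},
            ],
        }],
        "max_tokens": 256,
        "temperature": 0.0,
    }


# Placeholder bytes stand in for a real screenshot file's contents.
payload = build_chat_request(b"\x89PNG...", "Describe the visible controls.")
print(json.dumps(payload)[:60])
```

To actually send it, POST the payload to `http://localhost:8000/v1/chat/completions`, or point an OpenAI SDK client at `base_url="http://localhost:8000/v1"` and pass the same `messages` list.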
## Links
- Upstream model: inclusionAI/UI-Venus-1.5-8B
- Paper: UI-Venus-1.5 Technical Report
- Paper: UI-Venus Technical Report: Building High-performance UI Agents with RFT
- GitHub: inclusionAI/UI-Venus
- MLX framework: ml-explore/mlx
- mlx-vlm: Blaizzy/mlx-vlm
## Other Quantizations
Planned sibling repos in this wave:
- `mlx-community/UI-Venus-1.5-8B-bf16` - this model
- `mlx-community/UI-Venus-1.5-8B-6bit`
- `mlx-community/UI-Venus-1.5-8B-4bit`
## Notes and Limitations
- This card reports local MLX conversion and validation results only.
- Upstream benchmark claims belong to the original UI-Venus model family and were not re-run here unless explicitly stated.
- This artifact is the local reference artifact for the `6bit` and `4bit` variants in this wave.
- On the fixed local packet, the main qualitative caution was loose grounding rather than schema breakage.
## Citation
If you use this MLX conversion, please cite the original UI-Venus papers:
- UI-Venus-1.5 Technical Report
- UI-Venus Technical Report: Building High-performance UI Agents with RFT
## License
This repo follows the upstream model license: Apache 2.0. See the upstream model card for the authoritative license details: inclusionAI/UI-Venus-1.5-8B.