# UI-Venus-1.5-2B bf16
This is an MLX conversion of inclusionAI/UI-Venus-1.5-2B, optimized for Apple Silicon.
UI-Venus-1.5 is a unified end-to-end GUI agent family built for grounding, web navigation, and mobile navigation. The 1.5 family spans dense 2B and 8B variants plus a 30B-A3B MoE variant, and is framed upstream around a shared GUI semantics stage, online RL for long-horizon navigation, and model merging across grounding, web, and mobile domains.
This MLX artifact was converted with `mlx-vlm` and validated locally with both `mlx_vlm` prompt-packet checks and `vllm-mlx` OpenAI-compatible serve checks.
## Conversion Details
| Field | Value |
|---|---|
| Upstream model | `inclusionAI/UI-Venus-1.5-2B` |
| Artifact type | bf16 MLX conversion |
| Conversion tool | `mlx_vlm.convert` via `mlx-vlm 0.3.12` |
| Python | `3.11.14` |
| MLX | `0.31.0` |
| Transformers | `5.2.0` |
| Validation backend | `vllm-mlx` (`phase/p1` @ `8a5d41b`) |
| Quantization | bf16 |
| Group size | n/a |
| Quantization mode | n/a |
| Artifact size | `4.55G` |
| Template repair | `tokenizer_config.json["chat_template"]` was re-injected after conversion |
| Upstream config workaround | `tie_word_embeddings` forced to `false` in a local mirror before conversion |
Additional notes:

- Upstream UI-Venus ships both `chat_template.json` and `tokenizer_config.json["chat_template"]`.
- This MLX artifact preserves the dual-template contract across `chat_template.json`, `chat_template.jinja`, and `tokenizer_config.json["chat_template"]`; `chat_template.jinja` is present as an additive compatibility shim.
- Conversion used a locally mirrored upstream snapshot with `tie_word_embeddings = false` because the published upstream config conflicted with the published tensor set.
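The tied-embeddings workaround above amounts to a one-key patch of the mirrored `config.json` before running `mlx_vlm.convert`. A minimal sketch, assuming a local mirror directory (the demo config contents below are illustrative stand-ins, not the real upstream config):

```python
import json
import tempfile
from pathlib import Path

def untie_embeddings(config_path):
    """Force tie_word_embeddings to false in a local config.json copy."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["tie_word_embeddings"] = False
    path.write_text(json.dumps(config, indent=2))
    return config

# Demo on a throwaway config file (contents are illustrative only).
tmp = Path(tempfile.mkdtemp()) / "config.json"
tmp.write_text(json.dumps({"tie_word_embeddings": True}))
patched = untie_embeddings(tmp)
print(patched["tie_word_embeddings"])  # False
```

After patching the mirror, conversion proceeds normally and the tensor set and config no longer disagree about whether the output head shares weights with the input embeddings.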
## Validation

This artifact passed local validation in this workspace:

- `mlx_vlm` prompt-packet validation: **PASS**
- `vllm-mlx` OpenAI-compatible serve validation: **PASS**
Local validation notes:

- Schemas remained broadly intact on the fixed packet.
- Compared with UI-Venus `8B`, this model is materially weaker on precise UI instruction-following.
- On the fixed prompts, it was more likely to answer with a section label instead of the exact control, and to choose a merely plausible next action instead of the best one.
## Performance

- Artifact size on disk: `4.55G`
- Local fixed-packet `mlx_vlm` validation used about `6.30 GB` peak memory
- Observed local fixed-packet throughput was about `598-629` prompt tok/s and `71.1-104.4` generation tok/s across the four validation prompts
- Local `vllm-mlx` serve validation completed in about `8.15s` non-stream and `8.92s` streamed

These are local validation measurements, not a full benchmark suite.
## Usage

### Install

```bash
pip install -U mlx-vlm
```

### CLI

```bash
python -m mlx_vlm.generate \
  --model mlx-community/UI-Venus-1.5-2B-bf16 \
  --image path/to/image.png \
  --prompt "Describe the visible controls on this screen." \
  --max-tokens 256 \
  --temperature 0.0
```
### Python

```python
from mlx_vlm import load, generate

model, processor = load("mlx-community/UI-Venus-1.5-2B-bf16")

result = generate(
    model,
    processor,
    prompt="Describe the visible controls on this screen.",
    image="path/to/image.png",
    max_tokens=256,
    temp=0.0,
)
print(result.text)
```
### vllm-mlx Serve

```bash
python -m vllm_mlx.cli serve mlx-community/UI-Venus-1.5-2B-bf16 --mllm --localhost --port 8000
```
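Once the server is up, any OpenAI-compatible client can POST to `http://localhost:8000/v1/chat/completions`. A minimal sketch that builds a multimodal chat-completions request body with an inline base64 image (the request shape follows the OpenAI chat API convention for image content parts; the image file here is a placeholder, not a real screenshot):

```python
import base64
import json
import tempfile

def chat_payload(image_path, prompt, model="mlx-community/UI-Venus-1.5-2B-bf16"):
    """Build an OpenAI-style chat-completions body with an inline base64 image."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        "max_tokens": 256,
        "temperature": 0.0,
    }

# Demo with a stand-in file; send the result to the serve endpoint
# with any HTTP client (curl, requests, the openai SDK, ...).
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as f:
    f.write(b"\x89PNG\r\n\x1a\n")  # placeholder bytes, not a real screenshot
    stub = f.name
body = chat_payload(stub, "Describe the visible controls on this screen.")
print(json.dumps(body)[:80])
```

Temperature `0.0` matches the deterministic settings used in the CLI and Python examples above.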
## Links
- Upstream model: inclusionAI/UI-Venus-1.5-2B
- Paper: UI-Venus-1.5 Technical Report
- Paper: UI-Venus Technical Report: Building High-performance UI Agents with RFT
- GitHub: inclusionAI/UI-Venus
- MLX framework: ml-explore/mlx
- mlx-vlm: Blaizzy/mlx-vlm
## Other Quantizations
Planned sibling repos in this wave:
## Notes and Limitations

- This card reports local MLX conversion and validation results only.
- Upstream benchmark claims belong to the original UI-Venus model family and were not re-run here unless explicitly stated.
- This artifact is materially weaker than the UI-Venus `8B` artifact on precise UI instruction-following.
- Conversion required an artifact-level config workaround because the published upstream `2B` package was internally inconsistent on tied embeddings.
## Citation
If you use this MLX conversion, please cite the original UI-Venus papers:
- UI-Venus-1.5 Technical Report
- UI-Venus Technical Report: Building High-performance UI Agents with RFT
## License

This repo follows the upstream model license: Apache 2.0. See the upstream model card for the authoritative license details: `inclusionAI/UI-Venus-1.5-2B`.