UI-Venus-1.5-2B bf16

This is an MLX conversion of inclusionAI/UI-Venus-1.5-2B, optimized for Apple Silicon.

UI-Venus-1.5 is a unified end-to-end GUI agent family built for grounding, web navigation, and mobile navigation. The 1.5 family spans dense 2B and 8B variants plus a 30B-A3B MoE variant. Upstream, the family is described as combining a shared GUI-semantics alignment stage, online reinforcement learning for long-horizon navigation, and model merging across the grounding, web, and mobile domains.

This MLX artifact was converted with mlx-vlm and validated locally with both mlx_vlm prompt-packet checks and vllm-mlx OpenAI-compatible serve checks.

Conversion Details

| Field | Value |
| --- | --- |
| Upstream model | inclusionAI/UI-Venus-1.5-2B |
| Artifact type | bf16 MLX conversion |
| Conversion tool | mlx_vlm.convert via mlx-vlm 0.3.12 |
| Python | 3.11.14 |
| MLX | 0.31.0 |
| Transformers | 5.2.0 |
| Validation backend | vllm-mlx (phase/p1 @ 8a5d41b) |
| Quantization | bf16 |
| Group size | n/a |
| Quantization mode | n/a |
| Artifact size | 4.55 GB |
| Template repair | tokenizer_config.json["chat_template"] was re-injected after conversion |
| Upstream config workaround | tie_word_embeddings forced to false in a local mirror before conversion |

Additional notes:

  • Upstream UI-Venus ships both chat_template.json and tokenizer_config.json["chat_template"].
  • This MLX artifact preserves the dual-template contract across chat_template.json, chat_template.jinja, and tokenizer_config.json["chat_template"].
  • chat_template.jinja is present as an additive compatibility shim.
  • Conversion used a local mirrored upstream snapshot with tie_word_embeddings = false because the published upstream config conflicted with the published tensor set.
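The two artifact-level fixes above (forcing tie_word_embeddings off and re-injecting the chat template) can be sketched as plain JSON edits. This is a minimal illustration, not the exact script used for this conversion; the mirror path and file contents here are stand-ins, and the template is sourced from the standalone chat_template.json that upstream ships.

```python
import json
import tempfile
from pathlib import Path

def patch_mirror(mirror: Path) -> None:
    """Apply the two artifact-level fixes to a local model snapshot."""
    # 1) Force tie_word_embeddings off so config.json agrees with the
    #    published tensor set.
    cfg_path = mirror / "config.json"
    cfg = json.loads(cfg_path.read_text())
    cfg["tie_word_embeddings"] = False
    cfg_path.write_text(json.dumps(cfg, indent=2))

    # 2) Re-inject the chat template into tokenizer_config.json, sourced
    #    from the standalone chat_template.json shipped alongside it.
    tpl = json.loads((mirror / "chat_template.json").read_text())["chat_template"]
    tok_path = mirror / "tokenizer_config.json"
    tok = json.loads(tok_path.read_text())
    tok["chat_template"] = tpl
    tok_path.write_text(json.dumps(tok, indent=2))

# Demo on a scratch directory with minimal stand-in files.
with tempfile.TemporaryDirectory() as d:
    mirror = Path(d)
    (mirror / "config.json").write_text(json.dumps({"tie_word_embeddings": True}))
    (mirror / "chat_template.json").write_text(json.dumps({"chat_template": "{{ messages }}"}))
    (mirror / "tokenizer_config.json").write_text(json.dumps({}))
    patch_mirror(mirror)
    tied = json.loads((mirror / "config.json").read_text())["tie_word_embeddings"]
    injected = json.loads((mirror / "tokenizer_config.json").read_text())["chat_template"]
    print(tied, injected)
```

Patching a local mirror keeps the published upstream repo untouched while making the config internally consistent for mlx_vlm.convert.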

Validation

This artifact passed local validation in this workspace:

  • mlx_vlm prompt-packet validation: PASS
  • vllm-mlx OpenAI-compatible serve validation: PASS

Local validation notes:

  • response schemas remained broadly intact across the fixed prompt packet
  • compared with UI-Venus 8B, this model is materially weaker at precise UI instruction-following
  • on the fixed prompts, it more often answered with a section label instead of the exact control, and chose a merely plausible next action instead of the best one

Performance

  • Artifact size on disk: 4.55 GB
  • Local fixed-packet mlx_vlm validation used about 6.30 GB peak memory
  • Observed local fixed-packet throughput was about 598-629 prompt tok/s and 71.1-104.4 generation tok/s across the four validation prompts
  • Local vllm-mlx serve validation completed in about 8.15s non-stream and 8.92s streamed

These are local validation measurements, not a full benchmark suite.

Usage

Install

pip install -U mlx-vlm

CLI

python -m mlx_vlm.generate \
  --model mlx-community/UI-Venus-1.5-2B-bf16 \
  --image path/to/image.png \
  --prompt "Describe the visible controls on this screen." \
  --max-tokens 256 \
  --temperature 0.0

Python

from mlx_vlm import load, generate

model, processor = load("mlx-community/UI-Venus-1.5-2B-bf16")
result = generate(
    model,
    processor,
    prompt="Describe the visible controls on this screen.",
    image="path/to/image.png",
    max_tokens=256,
    temp=0.0,
)
print(result.text)

vllm-mlx Serve

python -m vllm_mlx.cli serve mlx-community/UI-Venus-1.5-2B-bf16 --mllm --localhost --port 8000
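Once the server is up, it speaks an OpenAI-compatible API, so a standard chat-completions request with the screenshot inlined as a base64 data URL should work. This is a hedged sketch: the /v1/chat/completions route and the multimodal message shape are assumed from OpenAI API compatibility, not taken from vllm-mlx documentation.

```python
import base64
import json

# Stand-in image bytes; in practice, read your screenshot PNG from disk.
image_b64 = base64.b64encode(b"\x89PNG...").decode()

payload = {
    "model": "mlx-community/UI-Venus-1.5-2B-bf16",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                {"type": "text",
                 "text": "Describe the visible controls on this screen."},
            ],
        }
    ],
    "max_tokens": 256,
    "temperature": 0.0,
}

# POST this body to http://localhost:8000/v1/chat/completions,
# e.g. with curl, requests, or the openai client pointed at that base URL.
body = json.dumps(payload)
```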

Links

Other Quantizations

Planned sibling repos in this wave:

Notes and Limitations

  • This card reports local MLX conversion and validation results only.
  • Upstream benchmark claims belong to the original UI-Venus model family and were not re-run here unless explicitly stated.
  • This artifact is materially weaker than the UI-Venus 8B artifact on precise UI instruction-following.
  • Conversion required an artifact-level config workaround because the published upstream 2B package was internally inconsistent on tied embeddings.

Citation

If you use this MLX conversion, please cite the original UI-Venus papers:

License

This repo follows the upstream model license: Apache 2.0. See the upstream model card for the authoritative license details: inclusionAI/UI-Venus-1.5-2B.
