MAI-UI-2B 6bit

This is a 6-bit quantized MLX conversion of Tongyi-MAI/MAI-UI-2B, optimized for Apple Silicon.

MAI-UI is a family of real-world centric foundation GUI agents built for grounding, GUI navigation, user interaction, and broader device-cloud agent workflows. The family spans multiple scales and is framed upstream around realistic deployment, including user interaction, MCP-style tool use, online RL, and device-cloud collaboration.

This artifact was derived from the validated local MLX bf16 reference conversion and then quantized with mlx-vlm. It was validated locally with both mlx_vlm prompt-packet checks and vllm-mlx OpenAI-compatible serve checks.

Conversion Details

  • Upstream model: Tongyi-MAI/MAI-UI-2B
  • Artifact type: 6-bit quantized MLX conversion
  • Source artifact: locally validated bf16 MLX artifact
  • Conversion tool: mlx_vlm.convert via mlx-vlm 0.3.12
  • Python: 3.11.14
  • MLX: 0.31.0
  • Transformers: 5.2.0
  • Validation backend: vllm-mlx (phase/p1 @ 48b51ed)
  • Quantization: 6-bit
  • Group size: 64
  • Quantization mode: affine
  • Reported effective bits per weight: 8.318
  • Artifact size: 2.1 GB
  • Template repair: tokenizer_config.json["chat_template"] was re-injected from chat_template.jinja after quantization
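The reported 8.318 effective bits per weight is higher than the nominal 6 because group-wise affine quantization stores per-group metadata, and some tensors (typically embeddings and norms) stay in bf16. The sketch below is back-of-the-envelope accounting only, assuming fp16 scale and bias per group; the exact bookkeeping mlx-vlm uses may differ, and the bf16 fraction shown is an illustrative assumption, not a measured value.

```python
# Back-of-the-envelope effective bits-per-weight for group-wise affine
# quantization. Assumptions (not from the conversion logs): fp16 scale
# and bias per group, and some fraction of weights left in bf16.

def quantized_bits_per_weight(bits: int, group_size: int,
                              scale_bits: int = 16, bias_bits: int = 16) -> float:
    """Bits per weight for one quantized tensor, including group metadata."""
    return bits + (scale_bits + bias_bits) / group_size

def mixed_bits_per_weight(bits: int, group_size: int,
                          frac_unquantized: float,
                          unquantized_bits: int = 16) -> float:
    """Repo-wide average when a fraction of weights stays unquantized."""
    q = quantized_bits_per_weight(bits, group_size)
    return (1 - frac_unquantized) * q + frac_unquantized * unquantized_bits

print(quantized_bits_per_weight(6, 64))  # → 6.5 for purely quantized tensors
print(mixed_bits_per_weight(6, 64, 0.19))  # ≈ 8.3 with ~19% of weights in bf16
```

Under these assumptions, a value above 8 is consistent with a meaningful share of parameters remaining in bf16 rather than with the 6-bit layers themselves.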

Additional notes:

  • Root-level packaging is intentional for vllm-mlx multimodal detection compatibility.
  • processor_config.json and video_preprocessor_config.json are present at repo root.
  • This refreshed 6-bit line is paired with the refreshed bf16-v2 repo rather than the older nested mlx-community/MAI-UI-2B-bf16 line.
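The template repair noted in the conversion details can be reproduced with a short script. This is an illustrative sketch, not the exact tooling used for this artifact; it assumes chat_template.jinja and tokenizer_config.json both sit in the artifact root.

```python
# Sketch of the chat-template re-injection described above: copy
# chat_template.jinja back into tokenizer_config.json["chat_template"].
# Illustrative only; assumes both files live in the artifact root.
import json
from pathlib import Path

def reinject_chat_template(artifact_dir: str) -> None:
    root = Path(artifact_dir)
    template = (root / "chat_template.jinja").read_text(encoding="utf-8")

    config_path = root / "tokenizer_config.json"
    config = json.loads(config_path.read_text(encoding="utf-8"))
    config["chat_template"] = template  # overwrite any stale/missing value
    config_path.write_text(
        json.dumps(config, indent=2, ensure_ascii=False), encoding="utf-8"
    )
```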

Validation

This artifact passed local validation in this workspace:

  • mlx_vlm prompt-packet validation: PASS
  • vllm-mlx OpenAI-compatible serve validation: PASS

Local validation notes:

  • output stayed in the same viable envelope as the refreshed local bf16 reference artifact
  • grounding shifted modestly relative to bf16, but stayed on the correct API Host region
  • the structured-action row simplified to a cleaner atomic click and dropped the injected URL payload
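For readers unfamiliar with GUI-agent output, "atomic click" vs "injected URL payload" refers to the shape of the structured action the model emits. The lines below are purely illustrative (the field names are not MAI-UI's actual schema) and are not real model output:

```
before: {"action": "click", "coordinate": [412, 187], "url": "https://example.com"}
after:  {"action": "click", "coordinate": [412, 187]}
```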

Performance

  • Artifact size on disk: 2.1 GB
  • Local fixed-packet mlx_vlm validation used about 3.48 GB average peak memory
  • Observed local fixed-packet throughput was about 446-556 prompt tok/s and 1.8-12.1 generation tok/s across the four validation prompts
  • Local vllm-mlx non-stream request time was about 9.36s, with streamed completion in about 9.81s

These are local validation measurements, not a full benchmark suite.

Usage

Install

pip install -U mlx-vlm

CLI

python -m mlx_vlm.generate \
  --model mlx-community/MAI-UI-2B-6bit-v2 \
  --image path/to/image.png \
  --prompt "Describe the visible controls on this screen." \
  --max-tokens 256 \
  --temperature 0.0

Python

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/MAI-UI-2B-6bit-v2"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.png"]
prompt = apply_chat_template(
    processor,
    config,
    "Describe the visible controls on this screen.",
    num_images=len(images),
)
result = generate(
    model,
    processor,
    prompt,
    images,
    max_tokens=256,
    temperature=0.0,
)
print(result.text)

vllm-mlx Serve

python -m vllm_mlx.cli serve mlx-community/MAI-UI-2B-6bit-v2 --mllm --localhost --port 8000
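Once serving, the endpoint speaks the OpenAI chat-completions shape, so any OpenAI-compatible client can POST to /v1/chat/completions. A sketch of a request body follows; the content-part layout and base64 image encoding are standard OpenAI multimodal conventions, but verify them against your vllm-mlx version:

```
{
  "model": "mlx-community/MAI-UI-2B-6bit-v2",
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe the visible controls on this screen."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,<BASE64_PNG>"}}
      ]
    }
  ],
  "max_tokens": 256,
  "temperature": 0.0
}
```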

Links

Other Quantizations

Planned sibling repos in this Track C refresh:

Not published from this wave:

  • 4bit was evaluated locally and rejected after runtime validation

Notes and Limitations

  • This card reports local MLX conversion and validation results only.
  • Upstream benchmark claims belong to the original MAI-UI model family and were not re-run here unless explicitly stated.
  • Quantization changes numerical behavior relative to the refreshed local bf16 reference artifact.
  • Final public Track C comparative benchmarking happens after the refreshed 2B repos are uploaded.

Citation

If you use this MLX conversion, please also cite the original MAI-UI work:

@misc{zhou2025maiuitechnicalreportrealworld,
  title={MAI-UI Technical Report: Real-World Centric Foundation GUI Agents},
  author={Hanzhang Zhou and Xu Zhang and Panrong Tong and Jianan Zhang and Liangyu Chen and Quyu Kong and Chenglin Cai and Chen Liu and Yue Wang and Jingren Zhou and Steven Hoi},
  year={2025},
  eprint={2512.22047},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.22047},
}

License

This repo follows the upstream model license: Apache 2.0. See the upstream model card for the authoritative license details: Tongyi-MAI/MAI-UI-2B.
