# Keye-VL 1.5 8B (MLX 4-bit)

`Kwai-Keye/Keye-VL-1_5-8B` converted to MLX format with 4-bit quantization for fast inference on Apple Silicon.
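
For reference, a conversion like this is typically produced with mlx-vlm's convert entry point. A hedged sketch follows; the exact flags can differ between mlx-vlm versions, and the custom `keyevl1_5` module noted below must be importable for this model:

```bash
# Sketch of a typical mlx-vlm 4-bit conversion; flags may vary by version,
# and this model additionally needs the custom keyevl1_5 module (see Notes).
python -m mlx_vlm.convert --hf-path Kwai-Keye/Keye-VL-1_5-8B \
  --mlx-path Keye-VL-1.5-8B-MLX-4bit -q
```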

## Performance (M4 Pro, 24 GB)

| Mode | Prompt (tok/s) | Generation (tok/s) | Peak memory |
|------|----------------|--------------------|-------------|
| Text only | ~210 | ~52 | 5.6 GB |
| Video (8 frames) | ~194 | ~36 | 7.2 GB |
| Image | ~150 | ~34 | 14.2 GB |

## Quick Start

```bash
pip install mlx-vlm qwen-vl-utils
```

### Python

```python
from mlx_vlm import load, generate

# Download and load the quantized weights and processor from the Hub
model, processor = load("andrevp/Keye-VL-1.5-8B-MLX-4bit", trust_remote_code=True)

# Image
prompt = processor.apply_chat_template(
    [{"role": "user", "content": [
        {"type": "image", "image": "photo.jpg"},
        {"type": "text", "text": "Describe this image."},
    ]}],
    tokenize=False, add_generation_prompt=True,
)
output = generate(
    model, processor, prompt,
    image=["photo.jpg"], max_tokens=200,
)
print(output.text)
```
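
The snippet above covers single images; video goes through the same path by sampling frames first. The helper below is a hypothetical sketch, continuing from the `load` call above: it assumes mlx-vlm accepts PIL images in the `image` list (mirroring the image example) and uses OpenCV for decoding:

```python
import cv2  # assumed dependency: pip install opencv-python
from PIL import Image

def sample_frames(path, nframes=8, max_side=224):
    """Hypothetical helper: grab nframes evenly spaced frames and
    downscale so the long side is at most max_side pixels."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for i in range(nframes):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // nframes)
        ok, frame = cap.read()
        if not ok:
            break
        img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        img.thumbnail((max_side, max_side))  # keeps aspect ratio
        frames.append(img)
    cap.release()
    return frames

frames = sample_frames("video.mp4", nframes=8)
prompt = processor.apply_chat_template(
    [{"role": "user", "content": (
        [{"type": "image"} for _ in frames]  # one placeholder per frame
        + [{"type": "text", "text": "Describe this video."}]
    )}],
    tokenize=False, add_generation_prompt=True,
)
output = generate(model, processor, prompt, image=frames, max_tokens=200)
print(output.text)
```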

### CLI

```bash
# One-shot
python chat.py photo.jpg -p "What's in this image?"
python chat.py video.mp4 -p "Describe this video" --nframes 16

# Interactive
python chat.py photo.jpg
```

## Model Details

- **Base model:** `Kwai-Keye/Keye-VL-1_5-8B`
- **Quantization:** 4-bit (~5.1 bits effective), 5.2 GB on disk
- **Vision encoder:** 27-layer ViT with learnable position embeddings and 2D RoPE
- **Language model:** 36-layer Qwen3 with MRoPE and GQA (32 attention heads, 8 KV heads)
- **Projector:** 2x2 spatial merge + LayerNorm + MLP (see the sketch after this list)
- **Supports:** images, video, text-only, and multilingual input (EN/ZH/ID)
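
To make the projector bullet concrete, here is a minimal MLX sketch of a 2x2 spatial merge followed by LayerNorm and an MLP. The dimensions (1152-wide ViT features, a 4096-wide LM hidden size) are illustrative placeholders, not the model's actual values:

```python
import mlx.core as mx
import mlx.nn as nn

class MergeProjector(nn.Module):
    """Sketch: flatten each 2x2 window of ViT patch features into one
    token, normalize it, and project it to the language-model width."""

    def __init__(self, vision_dim=1152, lm_dim=4096, merge=2):
        super().__init__()
        self.merge = merge
        self.norm = nn.LayerNorm(vision_dim * merge * merge)
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim * merge * merge, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def __call__(self, x, h, w):
        # x: (h * w, vision_dim) patch features; h and w are in patches
        d, m = x.shape[-1], self.merge
        x = x.reshape(h // m, m, w // m, m, d)                 # carve out 2x2 windows
        x = x.transpose(0, 2, 1, 3, 4).reshape(-1, m * m * d)  # flatten each window
        return self.mlp(self.norm(x))                          # (h * w / 4, lm_dim)

tokens = MergeProjector()(mx.random.normal((16 * 16, 1152)), h=16, w=16)
print(tokens.shape)  # (64, 4096): 4x fewer tokens than input patches
```

The 4x token reduction is what keeps image sequences short enough for the 36-layer language model.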

## Notes

- Video inference uses sampled frames to fit in memory. The default is 8 frames at a maximum resolution of 224 px.
- High-resolution images (~1000 px and up) can use up to 14 GB due to the vision attention mask; downscaling first helps (see the sketch below).
- A custom mlx-vlm model module (`keyevl1_5`) is required; it is included in this repo's conversion.
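
If peak memory is a concern with large photos, downscaling before inference keeps the attention mask small. A minimal sketch, assuming mlx-vlm accepts PIL images in the `image` list (the 1024 px cap is an illustrative choice, not a tuned value):

```python
from PIL import Image

def load_capped(path, max_side=1024):
    # Illustrative cap: shrink the long side before inference so the
    # vision attention mask (and peak memory) stays bounded.
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_side, max_side))  # in place, preserves aspect ratio
    return img

# Reuses model, processor, and prompt from the Quick Start example.
output = generate(model, processor, prompt,
                  image=[load_capped("photo.jpg")], max_tokens=200)
```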