Huihui-Step3-VL-10B-abliterated (MLX 8-bit)

MLX conversion (8-bit affine quantization, group size 64) of huihui-ai/Huihui-Step3-VL-10B-abliterated.
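8-bit affine quantization stores, for each group of 64 consecutive weights, a uint8 code plus a per-group scale and bias, so each weight is reconstructed as `scale * q + bias`. A minimal numpy sketch of the idea (illustrative only; this is not MLX's exact packing or kernel):

```python
import numpy as np

def quantize_group(x, bits=8):
    """Affine-quantize one group: x ~= scale * q + bias, with q in [0, 2**bits - 1]."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2**bits - 1) or 1.0  # guard against a constant group
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_group(q, scale, bias):
    """Reconstruct float weights from codes, scale, and bias."""
    return q.astype(np.float32) * scale + bias

# One row of weights, split into groups of 64 values (the group size used here).
w = np.random.randn(256).astype(np.float32)
recon = np.concatenate(
    [dequantize_group(*quantize_group(g)) for g in w.reshape(-1, 64)]
)
# Worst-case error per weight is about half a quantization step.
print(np.abs(recon - w).max())
```

The per-group min/max keeps outliers in one group from degrading the precision of every other group, which is why a small group size like 64 preserves quality better than per-tensor quantization.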

Files

  • MLX weights: model-*.safetensors + model.safetensors.index.json
  • Loader code: step3_vl_mlx.py (referenced by config.json via model_file)
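These files are what `mlx_lm.convert` emits. A conversion along these lines would reproduce them (a sketch, assuming a recent mlx-lm; the output path is illustrative):

```shell
# Quantize the original HF checkpoint to 8-bit MLX weights, group size 64.
python -m mlx_lm.convert \
  --hf-path huihui-ai/Huihui-Step3-VL-10B-abliterated \
  --mlx-path ./Huihui-Step3-VL-10B-abliterated-mlx-8bit \
  -q --q-bits 8 --q-group-size 64
```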

Text-Only Usage (mlx-lm)

This repo includes a minimal MLX module tree intended for loading the quantized weights and text-only generation. Multimodal/image inference is not implemented in the bundled step3_vl_mlx.py.

Install the package:

pip install -U mlx-lm

Then, in Python:

from mlx_lm import load, generate

model, tokenizer = load("AITRADER/Huihui-Step3-VL-10B-abliterated-mlx-8bit")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=128))

Source

Original model: huihui-ai/Huihui-Step3-VL-10B-abliterated
