# Huihui-Step3-VL-10B-abliterated (MLX 8-bit)
MLX conversion (8-bit affine quantization, group size 64) of huihui-ai/Huihui-Step3-VL-10B-abliterated.
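Group-wise affine quantization maps each run of 64 consecutive weights to 8-bit integers with a per-group scale and offset. The sketch below illustrates the idea in NumPy; it is a simplified illustration, not MLX's actual kernel (MLX stores packed weights with per-group scales and biases internally):

```python
import numpy as np

def affine_quantize(w, bits=8, group_size=64):
    """Toy group-wise affine quantization: each group of `group_size`
    consecutive weights gets its own scale and minimum (zero-point)."""
    w = w.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)  # guard constant groups
    q = np.round((w - w_min) / scale).astype(np.uint8)
    return q, scale, w_min

def affine_dequantize(q, scale, w_min):
    # Reconstruct approximate float weights from ints + group metadata
    return q.astype(np.float32) * scale + w_min

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 64)).astype(np.float32)
q, scale, w_min = affine_quantize(w)
w_hat = affine_dequantize(q, scale, w_min).reshape(w.shape)
```

Because rounding happens after the per-group rescale, the reconstruction error of each weight is bounded by half a quantization step (`scale / 2`) for its group.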
## Files
- MLX weights: `model-*.safetensors` + `model.safetensors.index.json`
- Loader code: `step3_vl_mlx.py` (referenced by `config.json` via `model_file`)
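The `model_file` hook might look like the fragment below. This is a hedged sketch, not copied from the repo's actual `config.json`: only the `model_file` reference is stated above, while `model_type` and the `quantization` fields are assumptions based on common mlx-lm conventions.

```json
{
  "model_type": "step3_vl",
  "model_file": "step3_vl_mlx.py",
  "quantization": {
    "group_size": 64,
    "bits": 8
  }
}
```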
## Text-Only Usage (mlx-lm)
This repo includes a minimal MLX module tree primarily meant for loading and quantization. Multimodal/image inference is not implemented in the included model file.
```shell
pip install -U mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("AITRADER/Huihui-Step3-VL-10B-abliterated-mlx-8bit")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=128))
```
## Source

Original model: huihui-ai/Huihui-Step3-VL-10B-abliterated
- Model size: 10B params
- Tensor types: F16, U32
## Model tree for AITRADER/Huihui-Step3-VL-10B-abliterated-mlx-8bit

- Base model: stepfun-ai/Step3-VL-10B-Base
- Finetuned: stepfun-ai/Step3-VL-10B