# mlx-community/MiMo-V2-Flash-mlx-8bit

This model was converted to MLX format from XiaomiMiMo/MiMo-V2-Flash using mlx-lm version 0.30.0.

More MLX quants sized to fit a single Apple Mac Studio M3 Ultra with 512 GB of unified memory are available at https://huggingface.co/bibproj.


## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized weights and tokenizer.
model, tokenizer = load("mlx-community/MiMo-V2-Flash-mlx-8bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
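
For a quick test without writing any Python, mlx-lm also installs a command-line generator. A minimal invocation (flag names as in recent mlx-lm releases) looks like:

```bash
mlx_lm.generate --model mlx-community/MiMo-V2-Flash-mlx-8bit \
  --prompt "hello" --max-tokens 256
```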
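
If you want tokens printed as they are produced rather than waiting for the full completion, mlx-lm exposes `stream_generate` alongside `generate`. A minimal sketch, assuming the `GenerationResponse` objects (with a `.text` field) yielded by recent mlx-lm versions:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/MiMo-V2-Flash-mlx-8bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each newly generated segment as soon as it is available.
for response in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(response.text, end="", flush=True)
print()
```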