---
base_model: arcee-ai/Trinity-Mini
library_name: mlx
pipeline_tag: text-generation
license: apache-2.0
tags:
  - mlx
  - omlx
  - oq
  - oq8
  - quantized
---

# Trinity-Mini-oQ8

An oQ8 mixed-precision MLX quantization of arcee-ai/Trinity-Mini, produced with oMLX.

- **Quantization:** oQ8 (sensitivity-driven mixed precision, group_size=64)
- **Format:** MLX safetensors
- **Compatible with:** mlx-lm, mlx-vlm, oMLX on Apple Silicon

## Usage

```python
from mlx_lm import load, generate

# Load the quantized weights and tokenizer from the Hub.
model, tokenizer = load("bearzi/Trinity-Mini-oQ8")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
```
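
For incremental output, recent mlx-lm releases also provide `stream_generate`; a minimal sketch, assuming a current version of the library:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("bearzi/Trinity-Mini-oQ8")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)
# stream_generate yields response chunks as tokens are decoded.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```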

## About oQ

oQ measures per-layer quantization sensitivity through calibration and allocates bits where they matter most — critical layers stay at higher precision, tolerant layers compress aggressively. Target averages of 2/3/4/6/8 bits are provided; actual per-layer bits vary by measured sensitivity.
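
To make this concrete, below is a toy greedy allocator. It is a sketch of the general idea only, not oMLX's actual algorithm; the function name, layer names, scores, and bit widths are all hypothetical.

```python
# Toy illustration of sensitivity-driven bit allocation (NOT oMLX's
# real algorithm; the layer names and scores below are made up).

def allocate_bits(sensitivities, target_avg=6.0, choices=(4, 6, 8)):
    """sensitivities maps layer name -> score (higher = more sensitive)."""
    choices = sorted(choices)
    # Start every layer at the lowest precision...
    bits = {name: choices[0] for name in sensitivities}
    n = len(bits)
    # ...then promote the most sensitive layers first, as far as the
    # requested average bit budget allows.
    for name in sorted(sensitivities, key=sensitivities.get, reverse=True):
        for b in choices[1:]:
            if (sum(bits.values()) - bits[name] + b) / n <= target_avg:
                bits[name] = b
    return bits

scores = {"attn.q_proj": 0.9, "attn.k_proj": 0.7, "mlp.up": 0.2, "mlp.down": 0.1}
print(allocate_bits(scores))  # q/k promoted to 8 bits, mlp layers stay at 4
```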

See the oQ documentation for details.

Comparative benchmarks and feedback welcome — please open a discussion.