OsirisPtah-Coder-v7-MLX

Ptah is Osiris's dedicated coding and hacking model: fully uncensored (abliterated), running natively on Apple Silicon via MLX with Metal acceleration.

Architecture

  • Base Model: Qwen2.5-Coder-7B-Instruct (7 billion parameters)
  • Modification: Abliterated by huihui-ai, converted to MLX 4-bit by OsirisBrain
  • Format: MLX 4-bit quantized (4.501 bits/weight)
  • Size: ~4.0 GB
  • Speed: ~120-180 tokens/sec on M2 Pro (MLX Metal)
  • Specialization: Code generation, debugging, security analysis, full-stack development
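As a rough sanity check, the quoted bits-per-weight figure is consistent with the listed download size. A minimal back-of-the-envelope calculation (the 4.501 bits/weight includes the per-group quantization scales on top of the raw 4-bit values):

```python
# Approximate on-disk size implied by the quantization figures above.
params = 7_000_000_000       # 7B parameters
bits_per_weight = 4.501      # effective bits/weight (4-bit values + group scales)

size_gb = params * bits_per_weight / 8 / 1e9
print(f"{size_gb:.2f} GB")   # ~3.94 GB, matching the listed ~4.0 GB
```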

Usage

from mlx_lm import load, generate

# Download (or load from the local cache) the 4-bit MLX weights
model, tokenizer = load("osirisbrain/OsirisPtah-Coder-v7-MLX")

# Build a chat-formatted prompt from the message list
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a TypeScript WebSocket server"}],
    add_generation_prompt=True,
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=2048)
print(response)
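For one-off prompts, the model can also be run from the command line via the `mlx-lm` package's CLI (a sketch assuming a recent `mlx-lm` release; exact flag names can vary between versions):

```shell
# Install the MLX LM runtime, then generate directly from the terminal
pip install mlx-lm

mlx_lm.generate \
  --model osirisbrain/OsirisPtah-Coder-v7-MLX \
  --prompt "Write a TypeScript WebSocket server" \
  --max-tokens 2048
```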

Credits

Abliterated by huihui-ai. Original model: Qwen/Qwen2.5-Coder-7B-Instruct by Alibaba.
