---
license: apache-2.0
language:
  - en
  - es
  - zh
tags:
  - mlx
  - uncensored
  - abliterated
  - osirisbrain
  - apple-silicon
  - qwen2.5-coder
  - code-generation
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
library_name: mlx
---

# OsirisPtah-Coder-v7-MLX

**The Ptah** is Osiris's dedicated coding and hacking brain: a fully uncensored (abliterated) model that runs natively on Apple Silicon via MLX's Metal backend.

## Architecture

- **Base model:** Qwen/Qwen2.5-Coder-7B-Instruct (7 billion parameters)
- **Modification:** abliterated by huihui-ai, converted to MLX 4-bit by OsirisBrain
- **Format:** MLX 4-bit quantized (4.501 bits/weight)
- **Size:** ~4.0 GB
- **Speed:** ~120-180 tokens/sec on an M2 Pro (MLX Metal backend)
- **Specialization:** code generation, debugging, security analysis, full-stack development
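
As a rough sanity check, the quoted bits-per-weight is consistent with the listed download size, assuming the ~7.6B total parameters commonly reported for the "7B" Qwen2.5-Coder checkpoint (an assumption here, not a figure from this card):

```python
# Back-of-the-envelope size check for the 4-bit quantized weights.
# Assumes ~7.61e9 total parameters; the 4.501 bits/weight average
# already accounts for layers kept at higher precision.
params = 7.61e9
bits_per_weight = 4.501
size_gib = params * bits_per_weight / 8 / 2**30  # bits -> bytes -> GiB

print(f"{size_gib:.2f} GiB")  # ≈ 3.99 GiB, matching the listed ~4.0 GB
```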

## Usage

```python
from mlx_lm import load, generate

model, tokenizer = load("osirisbrain/OsirisPtah-Coder-v7-MLX")

# tokenize=False keeps the templated prompt as a string for generate().
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a TypeScript WebSocket server"}],
    tokenize=False,
    add_generation_prompt=True,
)
response = generate(model, tokenizer, prompt=prompt, max_tokens=2048)
print(response)
```
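
The `apply_chat_template` call above wraps the message in Qwen's ChatML-style format. A minimal hand-rolled sketch of the resulting prompt string (illustrative only; the real template ships with the tokenizer and may also prepend a default system message):

```python
# Hand-rolled approximation of the ChatML prompt that
# apply_chat_template(..., add_generation_prompt=True) produces
# for Qwen models. Always prefer the tokenizer's own template.
def chatml_prompt(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "".join(parts)

print(chatml_prompt(
    [{"role": "user", "content": "Write a TypeScript WebSocket server"}]
))
```

If you prefer not to write Python, mlx-lm also ships a CLI entry point (`mlx_lm.generate --model osirisbrain/OsirisPtah-Coder-v7-MLX --prompt "..."`).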

## Credits

Abliterated by huihui-ai. Original model: Qwen/Qwen2.5-Coder-7B-Instruct by Alibaba.