Qwenjamin Franklin V2 4bit

Qwenjamin Franklin V2 4bit is the compact MLX export of the second-generation Qwenjamin Franklin workshop release.

It keeps the V2 tuning goals of stronger everyday reasoning, stricter JSON and tool behavior, and more reliable false-premise correction, while shrinking the footprint for local Apple Silicon use.

This repo is the 4-bit sibling of stamsam/Qwenjamin_Franklin_V2. If you want the full fused PyTorch release and its benchmark table, use that repo instead.

What This Release Is

  • Compact fused MLX 4-bit model
  • Base: Qwen/Qwen3.5-9B
  • Workshop lineage: v55 targeted SFT
  • Derived from the V2 checkpoint, which was trained with CUDA-native PEFT
  • Best fit: local inference where memory and load time matter more than exact full-precision parity
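
If you want to confirm the 4-bit packing listed above before pulling the full weights, the quantization settings are normally recorded in the repo's config.json. A minimal sketch, assuming this export was produced with the mlx_lm converter and follows its usual config layout:

import json
from huggingface_hub import hf_hub_download

# Download only the small config file, not the weights
config_path = hf_hub_download("stamsam/Qwenjamin_Franklin_V2_4bit", "config.json")
with open(config_path) as f:
    config = json.load(f)

# mlx_lm-converted repos typically record bit width and group size here,
# e.g. {"group_size": 64, "bits": 4}; adjust if this export differs
print(config.get("quantization"))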

How It Relates To V2

  • Same model family and tuning direction as the full V2 release
  • Quantized for a smaller footprint and faster local loading
  • Exact outputs can differ slightly from the full V2 checkpoint on borderline prompts, especially strict JSON and tool calls
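
If a borderline prompt matters to you, the cheapest way to see whether the quantized export diverges is a side-by-side run against the full V2 checkpoint. The sketch below is illustrative only: it assumes both repos load through mlx_lm on your machine (the full release is a PyTorch repo, so run it through its own stack instead if mlx_lm cannot read it) and compares default greedy outputs for a single prompt.

from mlx_lm import load, generate

# Illustrative borderline prompt; substitute one from your own workload
borderline_prompt = "Return only valid JSON listing three prime numbers."

outputs = {}
for repo in ("stamsam/Qwenjamin_Franklin_V2_4bit", "stamsam/Qwenjamin_Franklin_V2"):
    model, tokenizer = load(repo)
    messages = [{"role": "user", "content": borderline_prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    # Default sampling is greedy, so any difference here comes from
    # quantization rather than sampling noise
    outputs[repo] = generate(model, tokenizer, prompt=prompt, max_tokens=256)

print("outputs match" if len(set(outputs.values())) == 1 else "outputs differ")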

Usage

Install the MLX runtime:

pip install mlx-lm

Then load the model and generate in Python:
from mlx_lm import load, generate

model, tokenizer = load("stamsam/Qwenjamin_Franklin_V2_4bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)

Or run the same model from the command line:

python -m mlx_lm generate \
  --model stamsam/Qwenjamin_Franklin_V2_4bit \
  --prompt "Return only valid JSON." \
  --max-tokens 256 \
  --temp 0.0
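
The --temp 0.0 flag gives greedy decoding, which is the safest setting when you need stable, parseable JSON from run to run.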

Notes

  • Use explicit instructions for strict JSON or tool-heavy prompts.
  • Verify important outputs in high-stakes workflows.
  • For benchmark context and the full comparison table, see stamsam/Qwenjamin_Franklin_V2.
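
As a concrete version of the first two notes, the sketch below pairs an explicit JSON-only instruction with a parse check before the output is used anywhere important; the prompt and keys are illustrative, not part of the release.

import json
from mlx_lm import load, generate

model, tokenizer = load("stamsam/Qwenjamin_Franklin_V2_4bit")

messages = [{
    "role": "user",
    "content": "Return only valid JSON with keys 'title' and 'year' for the first Moon landing. No prose.",
}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

raw = generate(model, tokenizer, prompt=prompt, max_tokens=128)

# Verify before trusting the result in a downstream workflow
try:
    payload = json.loads(raw)
except json.JSONDecodeError:
    payload = None  # retry, repair, or escalate instead of using bad output

print(payload)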