qwen3.5-27b-code-forged-mlx-4bit

Optimized through Experiential Plasticity. Forged from Qwen/Qwen3.5-27B for code tasks.

Not merely quantized. Not distilled. Structurally reshaped.

The architecture co-evolves with training: heads that contribute to the domain specialize, heads that don't are removed. The result is a model architecturally optimized for its task — like biological synaptic pruning during brain development.

Runs on MacBook (Apple Silicon)

No GPU required. No API keys. No cloud costs. Uses Apple's MLX framework for native Metal acceleration.

pip install mlx-lm

from mlx_lm import load, generate
model, tokenizer = load("continuum-ai/qwen3.5-27b-code-forged-mlx-4bit")
print(generate(model, tokenizer, prompt="def merge_sort(arr):", max_tokens=200))

Tested on MacBook Pro M1 32GB at ~9 tokens/second. M2/M3/M4 will be faster. Works on Mac Mini, MacBook Air (16GB+), and iMac.

Results

Metric          Value
Base Model      Qwen/Qwen3.5-27B
Domain          code
Training Data   wikitext-2
Strategy        combined
Pruning Level   30%
Cycles          3
Steps/Cycle     1000

Runs On

Device             Format   Verified
MacBook Pro 32GB   fp16     Yes
RTX 3090 24GB      fp16     Yes

These models are designed for consumer hardware. No A100s required. Your MacBook, your gaming PC, your home server.

Quick Start

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the forged checkpoint; device_map="auto" places weights on the best available device.
model = AutoModelForCausalLM.from_pretrained(
    "continuum-ai/qwen3.5-27b-code-forged-mlx-4bit",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("continuum-ai/qwen3.5-27b-code-forged-mlx-4bit")

inputs = tokenizer("Write a Python decorator that caches results:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Forge Your Own

Three commands. Any NVIDIA GPU with 8GB+ VRAM.

git clone https://github.com/CambrianTech/sentinel-ai && cd sentinel-ai && ./setup.sh
source .venv/bin/activate
python scripts/forge_model.py Qwen/Qwen3.5-27B --domain code

The forge script auto-detects your GPU, picks the right memory tier (fp16 / 4-bit NF4), trains with LoRA + AMP, prunes attention heads, defrags, and saves. Progress is observable via status.json.
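The tier-selection step can be sketched roughly as follows. This is a minimal illustration, not the forge script's actual heuristic: the bytes-per-parameter costs and the 30% headroom factor are assumptions, chosen so the document's own numbers (a 27B model fitting in ~17 GB at 4-bit) fall out naturally.

```python
def pick_memory_tier(vram_gb: float, param_count_b: float) -> str:
    """Choose a precision tier so the weights fit in available VRAM.

    Rule of thumb (an assumption, not the real script's logic):
    fp16 costs ~2 bytes/param, 4-bit NF4 ~0.5 bytes/param, plus ~30%
    headroom for activations and LoRA optimizer state.
    """
    fp16_gb = param_count_b * 2.0 * 1.3
    nf4_gb = param_count_b * 0.5 * 1.3
    if vram_gb >= fp16_gb:
        return "fp16"
    if vram_gb >= nf4_gb:
        return "4bit-nf4"
    raise MemoryError(f"{param_count_b}B params won't fit in {vram_gb} GB")
```

Under these assumptions a 7B model on a 24 GB card trains in fp16, while a 27B model on the same card drops to 4-bit NF4.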

The Science: Experiential Plasticity

Traditional model compression (quantization, distillation) makes models smaller but worse. Experiential Plasticity makes them smaller AND better.

How It Works

  1. Train on domain-specific data (LoRA + AMP mixed precision)
  2. Measure each attention head's information contribution (entropy-based importance)
  3. Prune the lowest-contributing heads
  4. Retrain on the same domain data — surviving heads specialize and compensate
  5. Defrag — structurally remove dead heads, free VRAM
  6. Repeat — each cycle the model improves on its domain
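Steps 2–3 can be sketched as follows, assuming importance is scored as negative mean attention entropy — a diffuse (near-uniform) head carries little information, a sharply peaked head carries a lot. The exact metric in the paper may differ; this is an illustrative simplification.

```python
import numpy as np

def head_importance(attn_probs: np.ndarray) -> np.ndarray:
    """Entropy-based importance for one layer's attention maps.

    attn_probs: shape (heads, queries, keys); each row sums to 1.
    Importance is scored as negative mean entropy (an assumption):
    sharper attention -> lower entropy -> higher importance.
    """
    eps = 1e-12
    entropy = -(attn_probs * np.log(attn_probs + eps)).sum(axis=-1)  # (heads, queries)
    return -entropy.mean(axis=-1)  # (heads,) -- higher = more important

def select_prune_mask(importance: np.ndarray, prune_frac: float = 0.30) -> np.ndarray:
    """Boolean keep-mask: prune the lowest-contributing fraction of heads."""
    n_prune = int(len(importance) * prune_frac)
    order = np.argsort(importance)            # ascending: least important first
    keep = np.ones(len(importance), dtype=bool)
    keep[order[:n_prune]] = False
    return keep
```

With four heads, one attending uniformly and three attending sharply, a 25% prune removes exactly the uniform head.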

Scaling Law

Larger models harbor more architectural redundancy. Plasticity exploits this — bigger models benefit more:

Model          Params   Domain    Improvement
Qwen2.5-0.5B   0.5B     General   -3.2% (too small to prune)
Qwen2.5-1.5B   1.5B     General   +3.0%
Qwen2.5-7B     7.6B     General   +11.8%
Qwen3.5-4B     3.4B     Code      +24.0%
Qwen3.5-27B    23.6B    Code      +3.5% (4-bit, runs in 17GB)

Domain-specific training amplifies the effect. Qwen3.5-4B on code (+24%) exceeds Qwen2.5-7B on generic text (+11.8%) despite being a smaller model.

Transfer Function

Recovery from iterative pruning follows a measurable exponential decay:

recovery = 1.45 * exp(-0.18 * cycle) - 0.03

This connects transformer optimization to classical control theory — the same mathematics used in electrical engineering and robotics for decades. A PID controller can manage the entire forging process with zero human hyperparameters.
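A minimal sketch of the transfer function from the card, paired with a textbook PID loop that could drive the prune fraction toward a target recovery. The controller gains and the coupling to pruning level are illustrative assumptions, not values from the paper.

```python
import math

def predicted_recovery(cycle: int) -> float:
    """Transfer function stated above: recovery after each prune cycle."""
    return 1.45 * math.exp(-0.18 * cycle) - 0.03

class PID:
    """Textbook discrete PID controller; gains here are illustrative."""
    def __init__(self, kp=0.5, ki=0.1, kd=0.05, setpoint=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = None

    def update(self, measured: float) -> float:
        err = self.setpoint - measured
        self.integral += err
        deriv = 0.0 if self.prev_err is None else err - self.prev_err
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In a forging loop, `pid.update(measured_recovery)` would nudge the next cycle's prune fraction up when the model recovers easily and down when it struggles, with no hand-tuned schedule.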

Continuous Defrag

Traditional pruning masks heads but doesn't free memory. Continuous defrag structurally removes dead heads between cycles:

Cycle 1: train (batch=1, 27B, 17.9GB) -> prune -> defrag -> freed 1.7GB
Cycle 2: train (batch=2, 24.5B, 16.2GB) -> prune -> defrag -> freed 1.7GB  (2x faster)
Cycle 3: train (batch=3, 22B, 14.5GB)  -> prune -> defrag                  (2.8x faster)

40% faster total training and a 33% smaller final model.
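Structural removal can be sketched as slicing the pruned heads' columns out of the attention projection matrices, so the freed parameters actually leave memory rather than being masked to zero. The weight layout below is a simplification; real checkpoints vary by implementation.

```python
import numpy as np

def defrag_attention(wq, wk, wv, wo, keep, head_dim):
    """Structurally remove pruned heads instead of masking them.

    wq, wk, wv: (hidden, n_heads * head_dim) projection weights
    wo:         (n_heads * head_dim, hidden) output projection
    keep:       boolean mask over heads (False = pruned)
    Assumes heads occupy contiguous column blocks (a simplification).
    """
    cols = np.concatenate([np.arange(h * head_dim, (h + 1) * head_dim)
                           for h in np.flatnonzero(keep)])
    return wq[:, cols], wk[:, cols], wv[:, cols], wo[cols, :]
```

After dropping one of four heads, every projection is physically smaller — which is what lets the next cycle run with a larger batch in the same VRAM.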

Head Mitosis

Pruning frees slots. Mitosis fills them. When a head is overutilized, it gets cloned into a pruned slot — each copy at 50% gate value to maintain output continuity. After continued training, the clones diverge and specialize, like cell differentiation after biological mitosis. The model grows new specialized capacity exactly where it's needed.
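A minimal sketch of the cloning step, assuming each head has a multiplicative gate. The per-head weight layout and gate mechanism are hypothetical here, not the repository's actual API; the point is that halving both gates keeps the layer's summed output identical at the moment of cloning.

```python
import numpy as np

def mitose_head(weights, gates, src, dst):
    """Clone an overutilized head into a pruned (empty) slot.

    weights: (n_heads, head_dim, hidden) per-head weights (assumed layout)
    gates:   (n_heads,) multiplicative head gates; a pruned slot has gate 0
    src:     index of the overutilized head; dst: index of the empty slot
    Each copy takes half the original gate, so the gated sum over heads
    is unchanged -- output continuity is preserved exactly.
    """
    weights, gates = weights.copy(), gates.copy()
    weights[dst] = weights[src]
    half = gates[src] / 2.0
    gates[src] = half
    gates[dst] = half
    return weights, gates
```

Continued training then lets the two copies drift apart and specialize, while the invariant at clone time guarantees no sudden output jump.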

Read the full paper: Experiential Plasticity: Transformers That Grow Their Own Architecture From Experience

Output Samples

Samples in this section are generated by the forged model immediately after forging — no cherry-picking, no post-processing.

No generation samples are available for this model yet.

Forging Metadata

{
  "model": "Qwen/Qwen3.5-27B",
  "improvement_pct": 0,
  "baseline_ppl": 0,
  "final_ppl": 0
}
