Alice Classifier v2 – SIFTA Intent Detection (C1 Layer)

Alice's fast intent classifier, the C1 layer in SIFTA's five-layer decision pipeline, and part of SIFTA Predator OS v7.0.

Model Details

| Property | Value |
|---|---|
| Base Model | Qwen2.5-1.5B-4bit (via mlx-community) |
| Fine-tune Method | LoRA (rank 8) fused into base weights |
| Format | MLX SafeTensors (Apple Silicon optimized) |
| Training Hardware | Mac Studio M2 Ultra (M5 node) |
| Author | Ioan George Anton (Architect) |
| Purpose | Fast intent classification before the expensive C0 cortex fires |

Architecture Role

This model is the C1 Classifier, the second layer in SIFTA's five-layer decision pipeline:

  1. Reflex Arc → instant safety responses
  2. C1 Classifier (THIS MODEL) → fast intent detection (~1.5B, sub-second)
  3. Basal Ganglia → action selection
  4. Corpus Callosum → cross-modal integration
  5. C0 Cortex → full reasoning (alice-cortex-v1)

Why two models? The C1 classifier handles ~80% of incoming intents at 1/3 the compute cost. The expensive C0 cortex only fires when the classifier can't resolve the intent. This is biological: your brainstem handles reflexes before your prefrontal cortex even wakes up.
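The escalation pattern described above can be sketched as a confidence-gated dispatch. This is a minimal illustration only; the function names, threshold value, and stub intents below are hypothetical and not part of SIFTA's actual API:

```python
# Sketch of C1 -> C0 routing: the cheap classifier answers first, and the
# expensive cortex fires only when the classifier can't resolve the intent.
# All names (classify_fast, cortex_reason, CONFIDENCE_THRESHOLD) are
# illustrative assumptions, not the real SIFTA interfaces.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, not from the model card


def classify_fast(text: str) -> tuple[str, float]:
    """Stand-in for the C1 classifier: returns (intent, confidence)."""
    known = {
        "play some music": ("play_music", 0.95),
        "turn off the lights": ("lights_off", 0.92),
    }
    return known.get(text.lower(), ("unknown", 0.3))


def cortex_reason(text: str) -> str:
    """Stand-in for the C0 cortex (full reasoning model)."""
    return f"cortex_resolved:{text}"


def route(text: str) -> str:
    intent, confidence = classify_fast(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return intent              # the common case stops here, cheaply
    return cortex_reason(text)     # escalate only the hard cases


print(route("play some music"))             # resolved by C1
print(route("what is the meaning of life"))  # escalated to C0
```

The design choice mirrors the card's point: most traffic never touches the large model, so average latency and compute stay close to the small classifier's cost.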

Usage (MLX)

```python
from mlx_lm import load, generate

model, tokenizer = load("georgeanton/alice-classifier-v2")
response = generate(model, tokenizer, prompt="Classify intent: play some music", max_tokens=32)
print(response)
```
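Since `generate` returns free-form text, downstream code may want to snap the output onto a closed set of intent labels. A hedged sketch follows; the label list and the `normalize_intent` helper are illustrative assumptions, not part of the released model:

```python
# Hypothetical post-processing: map raw generated text onto a fixed intent
# vocabulary so downstream layers see a clean label. The label set here is
# illustrative only; the real classifier's labels are not documented above.
INTENT_LABELS = ["play_music", "set_timer", "weather", "unknown"]


def normalize_intent(raw: str) -> str:
    """Return the first known label mentioned in the model output,
    falling back to 'unknown' so higher layers can escalate to C0."""
    cleaned = raw.strip().lower()
    for label in INTENT_LABELS:
        if label in cleaned:
            return label
    return "unknown"


print(normalize_intent("Intent: play_music"))  # play_music
```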

Part of SIFTA

588 system modules | 17 biological organs | 4 provisional patents | 2,532+ commits

Repository: github.com/antonpictures/ANTON-SIFTA

License

Apache 2.0 – For the Swarm. 🐜⚡
