# Alice Classifier v2: SIFTA Intent Detection (C1 Layer)

Alice's fast intent classifier: the C1 layer in SIFTA's five-layer decision pipeline, part of SIFTA Predator OS v7.0.
## Model Details
| Property | Value |
|---|---|
| Base Model | Qwen2.5-1.5B-4bit (via mlx-community) |
| Fine-tune Method | LoRA (rank 8) fused into base weights |
| Format | MLX SafeTensors (Apple Silicon optimized) |
| Training Hardware | Mac Studio M2 Ultra (M5 node) |
| Author | Ioan George Anton (Architect) |
| Purpose | Fast intent classification before expensive C0 cortex fires |
## Architecture Role

This model is the C1 Classifier, the second layer in SIFTA's five-layer decision pipeline:
- Reflex Arc: instant safety responses
- C1 Classifier (THIS MODEL): fast intent detection (~1.5B params, sub-second)
- Basal Ganglia: action selection
- Corpus Callosum: cross-modal integration
- C0 Cortex: full reasoning (alice-cortex-v1)
Why two models? The C1 classifier resolves ~80% of incoming intents at roughly one third of the compute cost; the expensive C0 cortex only fires when the classifier can't resolve the intent. This mirrors biology: your brainstem handles reflexes before your prefrontal cortex even wakes up.
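The fallback routing described above can be sketched in a few lines. Everything here is illustrative, not SIFTA's actual API: the class names, the keyword-based stand-in for the C1 model, and the 0.7 confidence threshold are all assumptions.

```python
# Sketch of the C1 -> C0 fallback routing described above.
# Class names, threshold, and label strings are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Classification:
    intent: str
    confidence: float


def c1_classify(text: str) -> Classification:
    # Stand-in for the fast 1.5B classifier: keyword matching here,
    # an MLX generate() call in the real system.
    keywords = {"play": "media.play", "weather": "info.weather"}
    for word, intent in keywords.items():
        if word in text.lower():
            return Classification(intent, 0.9)
    return Classification("unknown", 0.2)


def c0_cortex(text: str) -> str:
    # Stand-in for the expensive full-reasoning model (alice-cortex-v1).
    return "reasoned:" + text


def route(text: str, threshold: float = 0.7) -> str:
    """Only fire the C0 cortex when C1 can't resolve the intent."""
    result = c1_classify(text)
    if result.confidence >= threshold:
        return result.intent  # cheap path, handles most traffic
    return c0_cortex(text)    # expensive fallback


print(route("play some music"))
print(route("explain quantum field theory"))
```

The design choice the card describes is a confidence-gated cascade: the threshold trades C0 invocations (cost) against misclassifications on ambiguous inputs.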
## Usage (MLX)

```python
from mlx_lm import load, generate

# Downloads and loads the fused 4-bit weights from the Hub
model, tokenizer = load("georgeanton/alice-classifier-v2")

# Classification is done via short-form generation
response = generate(model, tokenizer, prompt="Classify intent: play some music", max_tokens=32)
print(response)
```
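Note that `generate` returns free text, so mapping the completion onto a closed intent set needs a post-processing step. A minimal sketch follows; the label names and the substring-matching rule are assumptions, since the card does not publish the trained label vocabulary.

```python
# Map a free-text completion onto a closed intent label set.
# INTENT_LABELS is an illustrative assumption, not the model's real vocabulary.
INTENT_LABELS = ["play_music", "set_timer", "get_weather", "unknown"]


def parse_intent(completion: str) -> str:
    """Return the first known label mentioned in the completion,
    falling back to 'unknown' (which would trigger the C0 cortex)."""
    text = completion.lower()
    for label in INTENT_LABELS:
        if label in text or label.replace("_", " ") in text:
            return label
    return "unknown"


print(parse_intent("Intent: play_music"))
print(parse_intent("I think the user wants to play music"))
```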
## Part of SIFTA

588 system modules | 17 biological organs | 4 provisional patents | 2,532+ commits

Repository: github.com/antonpictures/ANTON-SIFTA
## License

Apache 2.0. For the Swarm.
Model size: 0.2B params · Tensor types: F16, U32
## Model tree for georgeanton/alice-classifier-v2

Base model: Qwen/Qwen2.5-1.5B