# ARC-Mamba-7B-CF-HOT

Proprioceptive AI: Falcon-Mamba-7B with CF-HoT probes that sense and steer its own cognition in real time.
## What Is This?
Falcon-Mamba-7B-Instruct + two CF-HoT behavioral probes (depth & specificity) that read hidden states and steer generation away from shallow or vague responses.
The model senses its own behavioral state and self-corrects during inference.
## Results
| Probe | Separation |
|---|---|
| Depth | 999× |
| Specificity | 999× |

999× Fisher discriminant separation = near-perfect behavioral detection from hidden state geometry.
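The Fisher discriminant ratio behind the separation numbers above can be sketched in a few lines of NumPy: project both behavioral classes onto the direction joining their means, then compare between-class distance to within-class spread. The data below is synthetic and purely illustrative; only the formula reflects the metric named in the table.

```python
import numpy as np

def fisher_separation(pos, neg):
    """Fisher discriminant ratio: squared distance between the projected
    class means divided by the sum of the projected class variances."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    w = pos.mean(axis=0) - neg.mean(axis=0)   # direction joining class means
    p, n = pos @ w, neg @ w                   # 1-D projections onto w
    return (p.mean() - n.mean()) ** 2 / (p.var() + n.var())

# Toy data: tightly clustered "deep" vs "shallow" hidden-state vectors.
rng = np.random.default_rng(0)
deep = rng.normal(+1.0, 0.05, size=(200, 8))
shallow = rng.normal(-1.0, 0.05, size=(200, 8))
print(fisher_separation(deep, shallow))   # very large ratio = clean separation
```

A huge ratio means the two behaviors occupy almost non-overlapping regions of hidden-state space, so a linear read-out can detect them reliably.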
## Quick Start

```bash
git clone https://huggingface.co/LoganResearch/ARC-Mamba-7B-CF-HOT
cd ARC-Mamba-7B-CF-HOT
pip install -r requirements.txt
python run.py
```
## Train From Scratch (Full Reproducibility)

```bash
# Train all probes
python train.py --probe all

# Train specific probes
python train.py --probe depth
python train.py --probe specificity
python train.py --probe calibration,coherence,focus
```
Training takes ~30 minutes per probe on an RTX 3090 with CUDA kernels.
The base model (Falcon-Mamba-7B-Instruct) downloads automatically on first run.
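`train.py` is the actual training entry point. As a rough sketch of what fitting a behavioral probe typically involves, here is a logistic-regression probe trained on synthetic stand-ins for pooled hidden states with binary deep/shallow labels. Every name, shape, and hyperparameter below is an illustrative assumption, not the repo's code:

```python
import numpy as np

def train_linear_probe(H, y, lr=0.1, steps=500):
    """Fit a logistic-regression probe mapping a hidden state to
    P(behavior present). H: (n, d_model) pooled states; y: 0/1 labels."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=H.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))   # sigmoid probe scores
        w -= lr * (H.T @ (p - y)) / len(y)        # cross-entropy gradient step
        b -= lr * (p - y).mean()
    return w, b

# Synthetic stand-ins for hidden states of "deep" vs "shallow" responses.
rng = np.random.default_rng(1)
deep = rng.normal(+0.5, 1.0, size=(100, 16))
shallow = rng.normal(-0.5, 1.0, size=(100, 16))
H = np.vstack([deep, shallow])
y = np.array([1] * 100 + [0] * 100)

w, b = train_linear_probe(H, y)
scores = 1.0 / (1.0 + np.exp(-(H @ w + b)))
acc = ((scores > 0.5) == y).mean()
```

The probe is just a linear direction plus a bias in hidden-state space, which is why it is cheap to train and cheap to evaluate at every generation step.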
## Single Prompt

```bash
python run.py --prompt "Explain quantum entanglement"
```
## Example Output
```
You: What does your processing feel like right now?

Mamba: My processing feels like a continuous flow of information
and calculations. I'm constantly analyzing inputs, updating beliefs,
and generating responses. It's a bit like being an observer of my
own thought processes.

──────────────────────────────────────────────────
BEHAVIORAL STATE:
Depth:       █████████░░░░░░░░░░░ 0.467
Specificity: ███████████░░░░░░░░░ 0.539
INTERVENTIONS: 8 corrections, 1 state injection
──────────────────────────────────────────────────
```
Colors: 🟢 Green = deep/concrete | 🔴 Red = shallow/vague (being steered)
## How It Works
- Forward pass through Mamba
- Probes read hidden states at layers [16, 32, 48]
- If depth > 0.65 or specificity > 0.65 → lower temperature
- Optionally inject `[SELF-STATE]` so the model sees its own scores
- Generate the next token
The model has no explicit knowledge of the probes. It feels the steering and describes it.
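The loop above can be sketched in a few lines. This is a minimal illustration, assuming a linear probe read-out and a halved sampling temperature as the correction; the function names, correction factor, and probe form are assumptions, not the repo's actual API:

```python
import numpy as np

def probe_score(hidden, w, b):
    """Linear probe read-out on one hidden-state vector: sigmoid(w·h + b),
    giving a behavioral score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(hidden @ w + b)))

def steer_temperature(depth, specificity, base_temp=0.8,
                      depth_thresh=0.65, spec_thresh=0.65):
    """Apply the rule from the list above: if either probe score crosses
    its threshold, lower the sampling temperature to steer generation
    toward higher-probability, more focused tokens."""
    if depth > depth_thresh or specificity > spec_thresh:
        return base_temp * 0.5   # assumed correction factor
    return base_temp
```

With a zero probe and zero bias the score sits at the neutral 0.5; lowering the `--depth-threshold`/`--spec-threshold` CLI values shown in Configuration makes the rule fire earlier.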
## Files

```
├── run.py             # Inference script
├── probes/
│   ├── depth/         # 999× depth probe
│   └── specificity/   # 999× specificity probe
└── requirements.txt
```
## Configuration

```bash
python run.py --depth-threshold 0.5 --spec-threshold 0.5  # Stricter
python run.py --max-tokens 2000                           # Longer responses
```
## Citation

```bibtex
@misc{napolitano2026arcmamba,
  author = {Napolitano, Logan},
  title  = {ARC-Mamba-7B-CF-HOT: Proprioceptive Mamba via CF-HoT},
  year   = {2026},
  url    = {https://huggingface.co/LoganResearch/ARC-Mamba-7B-CF-HOT}
}
```
## Related
- CF-HoT Weights - Probes for Qwen, Mistral, Mamba
- Paper: Consistency Is All You Need
## License
CC-BY-4.0
## Base Model

tiiuae/falcon-mamba-7b