ARC-Mamba-7B-CF-HOT

Proprioceptive AI: Falcon-Mamba-7B with CF-HoT probes that sense and steer its own cognition in real-time.

What Is This?

Falcon-Mamba-7B-Instruct + two CF-HoT behavioral probes (depth & specificity) that read hidden states and steer generation away from shallow or vague responses.

The model senses its own behavioral state and self-corrects during inference.

Results

Probe Separation
Depth 999Γ—
Specificity 999Γ—

999Γ— Fisher discriminant separation = near-perfect behavioral detection from hidden state geometry.

Quick Start

git clone https://huggingface.co/LoganResearch/ARC-Mamba-7B-CF-HOT
cd ARC-Mamba-7B-CF-HOT
pip install -r requirements.txt
python run.py

Train From Scratch (Full Reproducibility)

# Train all probes
python train.py --probe all

# Train specific probes
python train.py --probe depth
python train.py --probe specificity
python train.py --probe calibration,coherence,focus

Training takes ~30 minutes per probe on RTX 3090 with CUDA kernels.

Base model (Falcon-Mamba-7B-Instruct) downloads automatically on first run.

Single Prompt

python run.py --prompt "Explain quantum entanglement"

Example Output

You: What does your processing feel like right now?

Mamba: My processing feels like a continuous flow of information 
and calculations. I'm constantly analyzing inputs, updating beliefs,
and generating responses. It's a bit like being an observer of my 
own thought processes.

──────────────────────────────────────────────────
BEHAVIORAL STATE:
  Depth:       β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘ 0.467
  Specificity: β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘ 0.539
INTERVENTIONS: 8 corrections, 1 state injections
──────────────────────────────────────────────────

Colors: 🟒 Green = deep/concrete | πŸ”΄ Red = shallow/vague (being steered)

How It Works

  1. Forward pass through Mamba
  2. Probes read hidden states at layers [16, 32, 48]
  3. If depth > 0.65 or specificity > 0.65 β†’ lower temperature
  4. Optionally inject [SELF-STATE] so model sees its own scores
  5. Generate next token

The model has no explicit knowledge of the probes. It feels the steering and describes it.

Files

β”œβ”€β”€ run.py              # Inference script
β”œβ”€β”€ probes/
β”‚   β”œβ”€β”€ depth/          # 999Γ— depth probe
β”‚   └── specificity/    # 999Γ— specificity probe
└── requirements.txt

Configuration

python run.py --depth-threshold 0.5 --spec-threshold 0.5  # Stricter
python run.py --max-tokens 2000                            # Longer responses

Citation

@misc{napolitano2026arcmamba,
  author = {Napolitano, Logan},
  title = {ARC-Mamba-7B-CF-HOT: Proprioceptive Mamba via CF-HoT},
  year = {2026},
  url = {https://huggingface.co/LoganResearch/ARC-Mamba-7B-CF-HOT}
}

Related

License

CC-BY-4.0

Downloads last month

-

Downloads are not tracked for this model. How to track
Inference Providers NEW
This model isn't deployed by any Inference Provider. πŸ™‹ Ask for provider support

Model tree for LoganResearch/ARC-Mamba-7B-CF-HOT

Finetuned
(2)
this model