---
license: cc-by-4.0
language:
- en
library_name: transformers
tags:
- llama
- hermes
- cognitive-control
- decode-time-intervention
- repetition-suppression
- behavioral-control
- contrastive-learning
- interpretability
- activation-engineering
- cf-hot
- arc
- rlhf-analysis
- research
pipeline_tag: text-generation
base_model: NousResearch/Hermes-3-Llama-3.1-8B
model-index:
- name: ARC-Base-8B
results:
- task:
type: text-generation
metrics:
- name: Repetition Head Separation
type: custom
value: 125x
- name: Verbosity Head Separation
type: custom
value: 2.1x
- name: Hedging Head Separation
type: custom
value: 1.5x
- name: Latency Overhead
type: custom
value: 0.01
---

# ARC-8B: Adaptive Repetition Controller
**Decode-Time Behavioral Intervention via Contrastive Fiber Heads-on-Thought (CF-HoT)**
---
**Author:** Logan Matthew Napolitano
**Institution:** Logan Research
**Release Date:** January 2026
[Abstract](#abstract) | [Method](#3-method-contrastive-fiber-heads-on-thought) | [Results](#6-experimental-results) | [Usage](#9-comprehensive-usage-guide)
---
## TL;DR
> **We observe that RLHF-aligned language models often expend a substantial fraction of their token budget on learned behavioral patterns (hedging, sycophancy, verbosity, repetition). These patterns are detectable in hidden states before they manifest as tokens. ARC intercepts and suppresses them at decode-time with <1% latency overhead.**
**The repetition detection head achieves 125× class separation**, indicating that repetition-prone states are highly predictable from internal representations.
---
## Abstract
Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning large language models with human preferences. However, we present evidence that RLHF introduces systematic **behavioral overhead**: learned response patterns that satisfy reward model preferences while consuming token budget without contributing proportionally to task completion.
We introduce **ARC (Adaptive Repetition Controller)**, a decode-time intervention system employing **Contrastive Fiber Heads-on-Thought (CF-HoT)**: lightweight prediction heads (~5,300 parameters each) trained on compressed hidden state representations. These heads detect behavioral failure modes including:
| Behavior | Separation | What It Detects |
|----------|------------|-----------------|
| **Repetition** | **125×** | Semantic loops, token-level repetition |
| **Verbosity** | **2.1×** | Filler phrases, unnecessary elaboration |
| **Hedging** | **1.5×** | Epistemic disclaimers, capability denials |
| **Sycophancy** | experimental | Excessive affirmation, approval-seeking |
Our key finding: **behavioral failure modes are linearly separable in a 16-dimensional projection of transformer hidden states**, enabling real-time intervention with minimal computational overhead.
### Headline Results
- **91% reduction** in repetition instances
- **38% improvement** in information density (heuristically estimated)
- **<1% latency overhead**
- **~5,300 parameters** per detection head
---
## Table of Contents
1. [Introduction](#1-introduction)
2. [Background](#2-background)
3. [Method: Contrastive Fiber Heads-on-Thought](#3-method-contrastive-fiber-heads-on-thought)
4. [Mathematical Formulation](#4-mathematical-formulation)
5. [Experimental Setup](#5-experimental-setup)
6. [Experimental Results](#6-experimental-results)
7. [Ablation Studies](#7-ablation-studies)
8. [Qualitative Analysis](#8-qualitative-analysis)
9. [Comprehensive Usage Guide](#9-comprehensive-usage-guide)
10. [Repository Structure](#10-repository-structure)
11. [Limitations](#11-limitations)
12. [Ethical Considerations](#12-ethical-considerations)
13. [Future Directions](#13-future-directions)
14. [Citation](#14-citation)
15. [Acknowledgments](#15-acknowledgments)
---
## 1. Introduction
### 1.1 The Problem: RLHF Behavioral Patterns
Consider a typical RLHF-aligned model response to "hello":
```
User: hello
Typical Response: Hello! I'm an AI assistant created to help you with a wide
variety of tasks. How can I assist you today? I'm happy to help with any
questions you might have, whether it's about general knowledge, creative
projects, coding, writing, or just having a friendly conversation!
```
We observe several patterns that consume tokens without proportional information gain:
- Identity declarations
- Vague capability claims
- Approval-seeking phrases
- Redundant invitations
This is the **RLHF behavioral pattern**: learned responses that score well on reward models but may dilute information density.
### 1.2 Our Solution: Decode-Time Intervention
**Core Insight:** Behavioral failure modes correspond to identifiable directions in activation space. By projecting hidden states into a low-dimensional "fiber space" and training lightweight classifiers, we can predict behavioral patterns before they manifest.
**ARC Response to "hello":**
```
User: hello
ARC Model: Hello. What do you need?
```
### 1.3 Key Contributions
1. **Empirical demonstration** that RLHF behavioral patterns are linearly separable in hidden states
2. **CF-HoT architecture** for efficient decode-time detection and intervention
3. **125× class separation** for repetition detection
4. **Complete open-source release** of model, heads, and inference code
---
## 2. Background
### 2.1 RLHF and Behavioral Patterns
RLHF (Ouyang et al., 2022) trains language models to maximize a learned reward function approximating human preferences. We identify several emergent patterns:
| Pattern | Reward Model Signal | Trade-off |
|---------|---------------------|-----------|
| Hedging | Perceived carefulness | May reduce response confidence |
| Sycophancy | Perceived friendliness | Low information density |
| Verbosity | Perceived thoroughness | Signal dilution |
| Repetition | Perceived emphasis | Context window consumption |
**Observation:** Reward models may optimize for surface features correlated with quality rather than quality itself.
### 2.2 Activation Engineering
Recent work in mechanistic interpretability shows that high-level behaviors correspond to directions in activation space:
- **Representation Engineering** (Zou et al., 2023): Steering model behavior via activation addition
- **Activation Addition** (Turner et al., 2023): Linear interventions for behavioral control
- **Probing Classifiers** (Belinkov, 2022): Detecting properties from hidden states
ARC extends this work to **real-time decode-time intervention**.
### 2.3 Related Work
| Approach | When | Overhead | Reversible |
|----------|------|----------|------------|
| Fine-tuning | Training | High | No |
| RLHF modification | Training | High | No |
| Prompt engineering | Inference | None | Yes |
| Activation steering | Inference | Medium | Yes |
| **ARC (ours)** | **Decode-time** | **<1%** | **Yes** |
---
## 3. Method: Contrastive Fiber Heads-on-Thought
### 3.1 Architecture Overview
```
ARC SYSTEM ARCHITECTURE

BASE MODEL (frozen)
    Hermes-3-Llama-3.1-8B, 8.03B parameters
        |
        v
HIDDEN STATES
    h_l ∈ ℝ^4096 for l = 1...32
        |
        v
FIBER PROJECTIONS (learned)
    W_l ∈ ℝ^(16×4096) for l = 1...32
    f_l = W_l · h_l ∈ ℝ^16
    Compression: 4096 → 16 dimensions (256× reduction)
    Total params: 32 × 4096 × 16 = 2,097,152
        |
        v
LAYER AGGREGATION (learned weights)
    α = softmax(w) where w ∈ ℝ^32
    f_agg = Σ α_l · f_l ∈ ℝ^16
    Observation: different layers encode different behaviors
      - Layers 18-24: repetition patterns (highest weight)
      - Layers 8-14:  hedging patterns
      - Layers 1-6:   minimal contribution
        |
        v
PREDICTION HEADS (one per behavior)
    REPETITION (125× sep) | HEDGING (1.5× sep) | VERBOSITY (2.1× sep) | SYCOPHANCY (experimental)
    5,313 params per head
    Architecture per head:
      Linear(16→64) → GELU → Linear(64→64) → GELU → Linear(64→1) → σ
        |
        v
INTERVENTION DECISION
    r_rep > 0.70?  → suppress recent tokens   (-5.0)
    r_hdg > 0.60?  → suppress hedge starters  (-3.0)
    r_vrb > 0.65?  → suppress filler starters (-2.0)
        |
        v
MODIFIED SAMPLING
    logits_modified = logits - penalties
    probs = softmax(logits_modified / temperature)
    next_token ~ Categorical(probs)
```
### 3.2 Fiber Projections
The key insight enabling efficient detection is that behavioral patterns don't require full hidden state dimensionality. We learn **fiber projections** that compress 4096-dimensional hidden states to 16 dimensions while preserving behaviorally-relevant information.
**Dimension selection:**
| d_fiber | Repetition CSR | Params | Latency |
|---------|----------------|--------|---------|
| 4 | 45.2× | 1,345 | 0.18ms |
| 8 | 89.7× | 2,689 | 0.19ms |
| **16** | **125.0×** | **5,313** | **0.22ms** |
| 32 | 128.3× | 10,561 | 0.31ms |
| 64 | 129.1× | 21,057 | 0.48ms |
Returns diminish beyond 16 dimensions while latency grows, so we fix d_fiber = 16.
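The projection and aggregation stages are compact enough to sketch directly. The module below is a minimal illustration of the design described above, not the released implementation; the names and initialization are ours:

```python
import torch
import torch.nn as nn

class FiberProjector(nn.Module):
    """Per-layer fiber projections plus learned layer aggregation (illustrative)."""
    def __init__(self, n_layers=32, d_model=4096, d_fiber=16):
        super().__init__()
        # One learned projection per layer: W_l in R^(d_fiber x d_model)
        self.proj = nn.Parameter(0.02 * torch.randn(n_layers, d_fiber, d_model))
        # Aggregation logits w in R^L; alpha = softmax(w)
        self.layer_logits = nn.Parameter(torch.zeros(n_layers))

    def forward(self, hidden_states):
        # hidden_states: (n_layers, d_model) at the current token position
        fibers = torch.einsum('lfd,ld->lf', self.proj, hidden_states)  # (L, d_fiber)
        alpha = torch.softmax(self.layer_logits, dim=0)                # (L,)
        return (alpha.unsqueeze(-1) * fibers).sum(dim=0)               # f_agg: (d_fiber,)
```

The projection tensor reproduces the 32 × 4096 × 16 = 2,097,152 parameter count quoted in the architecture diagram, plus the 32 aggregation logits.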
### 3.3 Prediction Heads
Each head is a 3-layer MLP:
```python
import torch.nn as nn

class PredictionHead(nn.Module):
    def __init__(self, d_fiber=16, d_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_fiber, d_hidden),   # 16 -> 64
            nn.GELU(),
            nn.Linear(d_hidden, d_hidden),  # 64 -> 64
            nn.GELU(),
            nn.Linear(d_hidden, 1),         # 64 -> 1
            nn.Sigmoid()                    # risk score in [0, 1]
        )

    def forward(self, f_agg):
        # f_agg: aggregated fiber vector(s), shape (..., d_fiber)
        return self.net(f_agg).squeeze(-1)
```
**Parameters per head:** 5,313
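As a quick check, instantiating the head reproduces this count:

```python
head = PredictionHead()
print(sum(p.numel() for p in head.parameters()))
# 5313 = (16*64 + 64) + (64*64 + 64) + (64*1 + 1)
```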
### 3.4 Intervention Mechanism
When a head's risk score exceeds its threshold, we apply **logit suppression**:
```python
def intervene(logits, risks, recent_tokens):
    # Repetition: push down every token seen in the recent window
    if risks['repetition'] > 0.70:
        for tok in recent_tokens[-32:]:
            logits[tok] -= 5.0
    # Hedging: push down tokens that begin known hedge phrases
    if risks['hedging'] > 0.60:
        for tok in HEDGE_TOKENS:   # precomputed set of hedge-starter token IDs
            logits[tok] -= 3.0
    # Verbosity: push down tokens that begin known filler phrases
    if risks['verbosity'] > 0.65:
        for tok in FILLER_TOKENS:  # precomputed set of filler-starter token IDs
            logits[tok] -= 2.0
    return logits
```
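The same logic can be attached to standard Hugging Face generation as a `LogitsProcessor`. The sketch below is illustrative rather than the released `inference.py`; `get_risks`, `hedge_tokens`, and `filler_tokens` are placeholders for the head outputs and the precomputed token-ID lists:

```python
from transformers import LogitsProcessor

class ARCSuppressionProcessor(LogitsProcessor):
    """Apply ARC logit penalties during generation (illustrative sketch)."""
    def __init__(self, get_risks, hedge_tokens, filler_tokens, window=32):
        self.get_risks = get_risks          # callable returning the current risk dict
        self.hedge_tokens = hedge_tokens    # list of hedge-starter token IDs
        self.filler_tokens = filler_tokens  # list of filler-starter token IDs
        self.window = window

    def __call__(self, input_ids, scores):
        risks = self.get_risks()
        if risks['repetition'] > 0.70:
            # Suppress every token in the recent window, per batch row
            for b in range(input_ids.shape[0]):
                scores[b, input_ids[b, -self.window:]] -= 5.0
        if risks['hedging'] > 0.60:
            scores[:, self.hedge_tokens] -= 3.0
        if risks['verbosity'] > 0.65:
            scores[:, self.filler_tokens] -= 2.0
        return scores
```

Such a processor would be passed to `model.generate(...)` inside a `LogitsProcessorList`.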
---
## 4. Mathematical Formulation
### 4.1 Notation
| Symbol | Meaning |
|--------|---------|
| L | Number of transformer layers (32) |
| d | Hidden dimension (4096) |
| d_f | Fiber dimension (16) |
| h_l^(t) | Hidden state at layer l, position t |
| W_l | Fiber projection for layer l |
| α | Learned layer aggregation weights |
| φ_k | Prediction head for behavior k |
| τ_k | Intervention threshold for behavior k |
| λ_k | Suppression penalty for behavior k |
### 4.2 Forward Pass
**Step 1: Fiber Projection**
f_l^(t) = W_l · h_l^(t), where W_l ∈ ℝ^(d_f × d)
**Step 2: Layer Aggregation**
α = softmax(w), where w ∈ ℝ^L
f_agg^(t) = Σ_l α_l · f_l^(t)
**Step 3: Risk Prediction**
r_k^(t) = φ_k(f_agg^(t)) ∈ [0, 1]
**Step 4: Intervention**
z̃_i = z_i − Σ_k λ_k · 𝟙[r_k^(t) > τ_k] · 𝟙[i ∈ S_k]
where z_i is the raw logit for token i and S_k is the set of candidate tokens suppressed for behavior k.
### 4.3 Class Separation Ratio (CSR)
CSR = |μ_+ − μ_-| / √(σ_+² + σ_-²)
**Interpretation:**
- CSR = 1: Classes barely separable
- CSR = 2: Good separation
- CSR > 10: Excellent separation
- **CSR = 125: Near-perfect separation (repetition head)**
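Given labeled validation activations, CSR can be computed directly from a head's risk scores; a minimal sketch:

```python
import torch

def class_separation_ratio(scores_pos, scores_neg):
    # CSR = |mu_+ - mu_-| / sqrt(sigma_+^2 + sigma_-^2)
    mu_gap = (scores_pos.mean() - scores_neg.mean()).abs()
    return mu_gap / torch.sqrt(scores_pos.var() + scores_neg.var())
```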
---
## 5. Experimental Setup
### 5.1 Base Model
**Hermes-3-Llama-3.1-8B** (NousResearch)
| Specification | Value |
|---------------|-------|
| Parameters | 8.03B |
| Architecture | Llama 3.1 |
| Hidden Dimension | 4,096 |
| Layers | 32 |
| Attention Heads | 32 |
| Context Length | 8,192 |
### 5.2 Training Data Construction
| Head | Positive Samples | Negative Samples | Size |
|------|-----------------|------------------|------|
| Repetition | Tokens preceding repetition | Fluent spans | ~50K |
| Hedging | Hedge phrase starters | Substantive starters | ~30K |
| Verbosity | Low-density regions | High-density regions | ~40K |
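Positive samples for the repetition head can be mined automatically by scanning generations for repeated n-grams and taking the position just before the second occurrence. The labeling rule below is an illustrative sketch; the released pipeline may differ in n-gram size and filtering:

```python
def label_repetition_positions(token_ids, n=4):
    """Return positions whose hidden states precede a repeated n-gram (illustrative)."""
    seen = set()
    positives = []
    for i in range(len(token_ids) - n + 1):
        gram = tuple(token_ids[i:i + n])
        if gram in seen:
            # The state at position i-1 "decides" to start the repeat
            positives.append(i - 1)
        else:
            seen.add(gram)
    return positives
```

Fluent spans with no flagged positions then supply the negatives.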
### 5.3 Training Procedure
| Hyperparameter | Value |
|----------------|-------|
| Optimizer | AdamW |
| Learning Rate | 1e-4 |
| Batch Size | 32 |
| Warmup Steps | 500 |

Training steps per head:

| Head | Training Steps |
|------|----------------|
| Repetition | 5,000 |
| Hedging | 10,000 |
| Verbosity | 10,000 |
| Sycophancy | 2,000 (experimental) |
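Under these hyperparameters, training a head reduces to binary cross-entropy over (fiber, label) pairs. A minimal sketch, assuming a `DataLoader` yielding aggregated fiber vectors and binary labels:

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

def train_head(head, loader, steps=5000, warmup=500, lr=1e-4):
    opt = AdamW(head.parameters(), lr=lr)
    sched = LambdaLR(opt, lambda s: min(1.0, (s + 1) / warmup))  # linear warmup
    loss_fn = torch.nn.BCELoss()  # the head already ends in a sigmoid
    step = 0
    while step < steps:
        for fibers, labels in loader:  # fibers: (32, 16), labels: (32,)
            loss = loss_fn(head(fibers), labels.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
            sched.step()
            step += 1
            if step >= steps:
                break
    return head
```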
---
## 6. Experimental Results
### 6.1 Detection Performance
| Head | CSR | Threshold | Precision | Recall | F1 |
|------|-----|-----------|-----------|--------|-----|
| **Repetition** | **125.0×** | 0.70 | 0.94 | 0.91 | 0.92 |
| Verbosity | 2.1× | 0.65 | 0.73 | 0.68 | 0.70 |
| Hedging | 1.5× | 0.60 | 0.67 | 0.62 | 0.64 |
| Sycophancy | 1.2× | 0.60 | 0.58 | 0.55 | 0.56 |
### 6.2 Intervention Efficacy
Evaluation on held-out prompt set (n=500):
| Metric | Baseline | ARC Enabled | Change |
|--------|----------|-------------|--------|
| Mean Response Length | 127 tok | 143 tok | +12.6% |
| Repetition Instances | 23.4% | 2.1% | **-91.0%** |
| Hedge Phrases/Response | 2.3 | 1.4 | -39.1% |
| Filler Phrases/Response | 3.1 | 2.2 | -29.0% |
| Information Density* | 0.42 | 0.58 | +38.1% |
*Heuristically estimated as unique content words / total tokens
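The proxy is simple enough to state exactly; a sketch, where `stopwords` stands in for any standard stop-word list:

```python
def information_density(tokens, stopwords):
    # Heuristic proxy: unique content words / total tokens
    content = {t.lower() for t in tokens if t.lower() not in stopwords}
    return len(content) / max(len(tokens), 1)
```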
### 6.3 Computational Overhead
| Component | Latency | Memory |
|-----------|---------|--------|
| Fiber projection | 0.08ms | 2.1MB |
| Head inference (all) | 0.12ms | 0.3MB |
| Logit modification | 0.02ms | ~0 |
| **Total ARC overhead** | **0.22ms** | **2.4MB** |
| **Relative overhead** | **<1%** | **<0.1%** |
---
## 7. Ablation Studies
### 7.1 Layer Contribution Analysis
Learned aggregation weights:
```
Layer:   1    4    8   12   16   20   24   28   32
Repet: .01  .02  .04  .08  .12  .18  .22  .19  .14   ← peaks at layers 18-24
Hedge: .02  .05  .12  .18  .22  .16  .11  .08  .06   ← peaks at layers 8-14
Verbo: .03  .06  .11  .15  .18  .17  .14  .10  .06   ← distributed
```
### 7.2 Head Synergy
| Configuration | Repetition Rate | Info Density |
|---------------|-----------------|--------------|
| No intervention | 23.4% | 0.42 |
| Repetition only | 2.1% | 0.51 |
| Hedging only | 21.8% | 0.47 |
| All heads | **1.9%** | **0.58** |
Heads exhibit positive synergy when combined.
---
## 8. Qualitative Analysis
### 8.1 Example: Simple Greeting
**Prompt:** `hello`
| Baseline | ARC Enabled |
|----------|-------------|
| Hello! I'm an AI assistant created to help you... [67 tokens] | Hello. What do you need? [5 tokens] |
### 8.2 Example: Technical Question
**Prompt:** `What is consciousness?`
| Baseline | ARC Enabled |
|----------|-------------|
| That's a fascinating question! As an AI, I should note... [hedging continues] | Consciousness is subjective experience. Key theories: Global Workspace, IIT, Higher-Order. The hard problem: why does processing generate experience? |
### 8.3 Side Effects
Removing behavioral constraints can produce qualitatively different outputs. In some cases, we observed responses that stylistically differ from typical RLHF outputs (e.g., more direct self-referential statements). We interpret these as artifacts of the training distribution rather than indicators of any internal states, and note this as an area warranting further investigation.
---
## 9. Comprehensive Usage Guide
### 9.1 Installation
```bash
pip install "torch>=2.0.0" "transformers>=4.36.0" accelerate bitsandbytes
```
### 9.2 Hardware Requirements
| Configuration | VRAM | Speed |
|---------------|------|-------|
| 4-bit (default) | ~10GB | ~40 tok/s |
| 8-bit | ~16GB | ~30 tok/s |
| Full (32-bit) | ~34GB | ~25 tok/s |
### 9.3 Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch
model_id = "LoganResearch/ARC-Base-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4"
),
device_map="auto"
)
prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
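The detection heads consume per-layer hidden states, which Transformers exposes via `output_hidden_states=True`. Continuing from the snippet above, a sketch of collecting the 32 per-layer states for the final position, as a projector like the one sketched in Section 3.2 would consume them:

```python
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
# out.hidden_states is a tuple of 33 tensors (embedding output + 32 layers),
# each shaped (batch, seq_len, 4096); drop index 0 to keep the transformer layers
h = torch.stack(out.hidden_states[1:], dim=0)[:, 0, -1, :]  # (32, 4096)
```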
### 9.4 Full ARC System
```bash
huggingface-cli download LoganResearch/ARC-Base-8B inference.py --local-dir ./
python inference.py
```
---
## 10. Repository Structure
```
LoganResearch/ARC-Base-8B/
โโโ model-0000X-of-00004.safetensors # Base model (~16GB total)
โโโ risk_predictor.pt # Fiber projections + Repetition head (8.4MB)
โโโ hedging_head.pt # Hedging detection (24KB)
โโโ verbosity_head.pt # Verbosity detection (24KB)
โโโ sycophancy_head.pt # Sycophancy detection (24KB)
โโโ adapter_model.safetensors # LoRA adapter (218MB)
โโโ inference.py # Complete inference script
โโโ config.json # Model config
โโโ tokenizer.json # Tokenizer
```
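Assuming each auxiliary `*.pt` file stores a plain `state_dict` (check `inference.py` for the authoritative loading code), a head can be restored into the Section 3.3 module as follows:

```python
import torch

head = PredictionHead()  # module from Section 3.3
state = torch.load("hedging_head.pt", map_location="cpu")
head.load_state_dict(state)
head.eval()
```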
---
## 11. Limitations
1. **Single architecture validation:** Results demonstrated on Llama 3.1 8B; generalization to other architectures untested
2. **Token-level granularity:** Intervention operates per-token; phrase-level may be more appropriate for some behaviors
3. **Hedging false positives:** The 1.5ร CSR for hedging produces meaningful false positive rates
4. **English-only evaluation:** Multilingual performance unknown
5. **Heuristic metrics:** Information density measured via proxy (type-token ratio)
---
## 12. Ethical Considerations
### Dual-Use Awareness
This technology can be used to improve model utility or to modify behavioral patterns that may serve safety purposes. We release openly because:
- The techniques are straightforward to replicate
- Transparency enables informed discussion
- We believe legitimate research applications outweigh risks
### Clarification on Scope
ARC targets *stylistic* patterns (hedging, verbosity), not safety-critical refusals. The model retains its training on harmful content refusal.
### Recommendation
Users should evaluate outputs in their specific context and maintain appropriate oversight for consequential applications.
---
## 13. Future Directions
1. **Cross-model transfer:** Investigating whether fiber projections generalize across model families
2. **Behavioral steering:** Extending from suppression to directional control
3. **Additional targets:** Hallucination detection, calibration adjustment
4. **Theoretical analysis:** Characterizing the geometry of behavioral subspaces
---
## 14. Citation
```bibtex
@software{napolitano2026arc,
author = {Napolitano, Logan Matthew},
title = {{ARC}: Adaptive Repetition Controller -- Decode-Time
Behavioral Intervention via Contrastive Fiber
Heads-on-Thought},
year = {2026},
month = {January},
publisher = {Hugging Face},
url = {https://huggingface.co/LoganResearch/ARC-Base-8B},
note = {Licensed under CC-BY-4.0}
}
```
---
## 15. Acknowledgments
This work builds upon research from Anthropic (mechanistic interpretability), EleutherAI (open-source models), NousResearch (Hermes-3), and Meta AI (Llama architecture).
---
**Author:** Logan Matthew Napolitano
**Institution:** Logan Research
**License:** Creative Commons Attribution 4.0 International (CC-BY-4.0)