---
license: cc-by-4.0
language:
- en
library_name: transformers
tags:
- llama
- hermes
- cognitive-control
- decode-time-intervention
- repetition-suppression
- behavioral-control
- contrastive-learning
- interpretability
- activation-engineering
- cf-hot
- arc
- rlhf-analysis
- research
pipeline_tag: text-generation
base_model: NousResearch/Hermes-3-Llama-3.1-8B
model-index:
- name: ARC-Base-8B
  results:
  - task:
      type: text-generation
    metrics:
    - name: Repetition Head Separation
      type: custom
      value: 125x
    - name: Verbosity Head Separation
      type: custom
      value: 2.1x
    - name: Hedging Head Separation
      type: custom
      value: 1.5x
    - name: Latency Overhead
      type: custom
      value: 0.01
---

<div align="center">

![ARC-8B: Adaptive Repetition Controller](https://huggingface.co/LoganResearch/ARC-Base-8B/resolve/main/arc_model_card.png)

</div>

<div align="center">

# ARC-8B: Adaptive Repetition Controller

**Decode-Time Behavioral Intervention via Contrastive Fiber Heads-on-Thought (CF-HoT)**

---

[![License: CC BY 4.0](https://img.shields.io/badge/License-CC_BY_4.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![PyTorch 2.0+](https://img.shields.io/badge/pytorch-2.0+-ee4c2c.svg)](https://pytorch.org/)
[![Transformers](https://img.shields.io/badge/🤗_Transformers-4.36+-orange.svg)](https://huggingface.co/docs/transformers)

**Author:** Logan Matthew Napolitano  
**Institution:** Logan Research  
**Release Date:** January 2026

[📖 Abstract](#abstract) | [🔬 Method](#3-method-contrastive-fiber-heads-on-thought) | [📊 Results](#6-experimental-results) | [💻 Usage](#9-comprehensive-usage-guide)

</div>

---

## TL;DR

> **We observe that RLHF-aligned language models often expend a substantial fraction of their token budget on learned behavioral patterns (hedging, sycophancy, verbosity, repetition). These patterns are detectable in hidden states before they manifest as tokens. ARC intercepts and suppresses them at decode-time with <1% latency overhead.**

**The repetition detection head achieves 125× class separation**, indicating that repetition-prone states are highly predictable from internal representations.

---

## Abstract

Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning large language models with human preferences. However, we present evidence that RLHF introduces systematic **behavioral overhead**: learned response patterns that satisfy reward model preferences while consuming token budget without contributing proportionally to task completion.

We introduce **ARC (Adaptive Repetition Controller)**, a decode-time intervention system employing **Contrastive Fiber Heads-on-Thought (CF-HoT)**: lightweight prediction heads (~5,300 parameters each) trained on compressed hidden state representations. These heads detect behavioral failure modes including:

| Behavior | Separation | What It Detects |
|----------|------------|-----------------|
| **Repetition** | **125×** | Semantic loops, token-level repetition |
| **Verbosity** | **2.1×** | Filler phrases, unnecessary elaboration |
| **Hedging** | **1.5×** | Epistemic disclaimers, capability denials |
| **Sycophancy** | experimental | Excessive affirmation, approval-seeking |

Our key finding: **behavioral failure modes are linearly separable in a 16-dimensional projection of transformer hidden states**, enabling real-time intervention with minimal computational overhead.

### Headline Results

- **91% reduction** in repetition instances
- **38% improvement** in information density (heuristically estimated)
- **<1% latency overhead**
- **~5,300 parameters** per detection head

---

## Table of Contents

1. [Introduction](#1-introduction)
2. [Background](#2-background)
3. [Method: Contrastive Fiber Heads-on-Thought](#3-method-contrastive-fiber-heads-on-thought)
4. [Mathematical Formulation](#4-mathematical-formulation)
5. [Experimental Setup](#5-experimental-setup)
6. [Experimental Results](#6-experimental-results)
7. [Ablation Studies](#7-ablation-studies)
8. [Qualitative Analysis](#8-qualitative-analysis)
9. [Comprehensive Usage Guide](#9-comprehensive-usage-guide)
10. [Repository Structure](#10-repository-structure)
11. [Limitations](#11-limitations)
12. [Ethical Considerations](#12-ethical-considerations)
13. [Future Directions](#13-future-directions)
14. [Citation](#14-citation)
15. [Acknowledgments](#15-acknowledgments)

---

## 1. Introduction

### 1.1 The Problem: RLHF Behavioral Patterns

Consider a typical RLHF-aligned model response to "hello":

```
User: hello

Typical Response: Hello! I'm an AI assistant created to help you with a wide 
variety of tasks. How can I assist you today? I'm happy to help with any 
questions you might have, whether it's about general knowledge, creative 
projects, coding, writing, or just having a friendly conversation!
```

We observe several patterns that consume tokens without proportional information gain:
- Identity declarations
- Vague capability claims
- Approval-seeking phrases
- Redundant invitations

This is the **RLHF behavioral pattern**: learned responses that score well on reward models but may dilute information density.

### 1.2 Our Solution: Decode-Time Intervention

**Core Insight:** Behavioral failure modes correspond to identifiable directions in activation space. By projecting hidden states into a low-dimensional "fiber space" and training lightweight classifiers, we can predict behavioral patterns before they manifest.

**ARC Response to "hello":**
```
User: hello

ARC Model: Hello. What do you need?
```

### 1.3 Key Contributions

1. **Empirical demonstration** that RLHF behavioral patterns are linearly separable in hidden states
2. **CF-HoT architecture** for efficient decode-time detection and intervention
3. **125× class separation** for repetition detection
4. **Complete open-source release** of model, heads, and inference code

---

## 2. Background

### 2.1 RLHF and Behavioral Patterns

RLHF (Ouyang et al., 2022) trains language models to maximize a learned reward function approximating human preferences. We identify several emergent patterns:

| Pattern | Reward Model Signal | Trade-off |
|---------|---------------------|-----------|
| Hedging | Perceived carefulness | May reduce response confidence |
| Sycophancy | Perceived friendliness | Low information density |
| Verbosity | Perceived thoroughness | Signal dilution |
| Repetition | Perceived emphasis | Context window consumption |

**Observation:** Reward models may optimize for surface features correlated with quality rather than quality itself.

### 2.2 Activation Engineering

Recent work in mechanistic interpretability shows that high-level behaviors correspond to directions in activation space:

- **Representation Engineering** (Zou et al., 2023): Steering model behavior via activation addition
- **Activation Addition** (Turner et al., 2023): Linear interventions for behavioral control  
- **Probing Classifiers** (Belinkov, 2022): Detecting properties from hidden states

ARC extends this work to **real-time decode-time intervention**.

### 2.3 Related Work

| Approach | When | Overhead | Reversible |
|----------|------|----------|------------|
| Fine-tuning | Training | High | No |
| RLHF modification | Training | High | No |
| Prompt engineering | Inference | None | Yes |
| Activation steering | Inference | Medium | Yes |
| **ARC (ours)** | **Decode-time** | **<1%** | **Yes** |

---

## 3. Method: Contrastive Fiber Heads-on-Thought

### 3.1 Architecture Overview

```
                       ARC SYSTEM ARCHITECTURE

  BASE MODEL (frozen)
      Hermes-3-Llama-3.1-8B, 8.03B parameters
          │
          ▼
  HIDDEN STATES
      h_l ∈ ℝ^4096 for l = 1...32
          │
          ▼
  FIBER PROJECTIONS (learned)
      W_l ∈ ℝ^(16×4096) for l = 1...32
      f_l = W_l · h_l ∈ ℝ^16
      Compression: 4096 → 16 dimensions (256× reduction)
      Total params: 32 × 4096 × 16 = 2,097,152
          │
          ▼
  LAYER AGGREGATION (learned weights)
      α = softmax(w) where w ∈ ℝ^32
      f_agg = Σ α_l · f_l ∈ ℝ^16
      Observation: different layers encode different behaviors
        - Layers 18-24: repetition patterns (highest weight)
        - Layers 8-14:  hedging patterns
        - Layers 1-6:   minimal contribution
          │
          ▼
  PREDICTION HEADS (one per behavior)
      REPETITION    HEDGING     VERBOSITY    SYCOPHANCY
      125× sep      1.5× sep    2.1× sep     experimental
      5,313 params each
      Architecture per head:
        Linear(16→64) → GELU → Linear(64→64) → GELU → Linear(64→1) → σ
          │
          ▼
  INTERVENTION DECISION
      r_rep > 0.70?  → suppress recent tokens   (-5.0)
      r_hdg > 0.60?  → suppress hedge starters  (-3.0)
      r_vrb > 0.65?  → suppress filler starters (-2.0)
          │
          ▼
  MODIFIED SAMPLING
      logits_modified = logits - penalties
      probs = softmax(logits_modified / temperature)
      next_token ~ Categorical(probs)
```

### 3.2 Fiber Projections

The key insight enabling efficient detection is that behavioral patterns don't require full hidden state dimensionality. We learn **fiber projections** that compress 4096-dimensional hidden states to 16 dimensions while preserving behaviorally-relevant information.

**Dimension selection:**

| d_fiber | Repetition CSR | Params | Latency |
|---------|----------------|--------|---------|
| 4 | 45.2× | 1,345 | 0.18ms |
| 8 | 89.7× | 2,689 | 0.19ms |
| **16** | **125.0×** | **5,313** | **0.22ms** |
| 32 | 128.3× | 10,561 | 0.31ms |
| 64 | 129.1× | 21,057 | 0.48ms |

Diminishing returns beyond 16 dimensions.
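
For concreteness, here is a minimal sketch of the projection stage. The module name and tensor layout are our illustrative assumptions; the released projections ship inside `risk_predictor.pt` and may be organized differently.

```python
import torch
import torch.nn as nn

class FiberProjection(nn.Module):
    """Sketch: compress per-layer hidden states (d=4096) to d_fiber=16."""
    def __init__(self, n_layers=32, d_model=4096, d_fiber=16):
        super().__init__()
        # One 16x4096 projection per layer: 32 * 16 * 4096 = 2,097,152 params
        self.W = nn.Parameter(torch.randn(n_layers, d_fiber, d_model) * 0.02)

    def forward(self, hidden_states):
        # hidden_states: (n_layers, batch, d_model)
        # returns fibers: (n_layers, batch, d_fiber)
        return torch.einsum("lfd,lbd->lbf", self.W, hidden_states)
```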

### 3.3 Prediction Heads

Each head is a 3-layer MLP:

```python
import torch.nn as nn

class PredictionHead(nn.Module):
    """Per-behavior risk classifier over the aggregated fiber vector."""
    def __init__(self, d_fiber=16, d_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_fiber, d_hidden),   # 16 → 64
            nn.GELU(),
            nn.Linear(d_hidden, d_hidden),  # 64 → 64
            nn.GELU(),
            nn.Linear(d_hidden, 1),         # 64 → 1
            nn.Sigmoid()                    # → [0, 1] risk score
        )

    def forward(self, f_agg):
        # f_agg: (batch, d_fiber) aggregated fiber vector → (batch,) risk
        return self.net(f_agg).squeeze(-1)
```

**Parameters per head:** 5,313
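
The count follows directly from the layer sizes and can be verified in one line:

```python
head = PredictionHead()
print(sum(p.numel() for p in head.parameters()))
# (16*64 + 64) + (64*64 + 64) + (64 + 1) = 1088 + 4160 + 65 = 5313
```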

### 3.4 Intervention Mechanism

When a head's risk score exceeds its threshold, we apply **logit suppression**:

```python
def intervene(logits, risks, recent_tokens):
    """Subtract fixed penalties from targeted token logits when risk is high."""
    # Repetition: penalize re-emitting any of the last 32 generated tokens
    if risks['repetition'] > 0.70:
        for tok in recent_tokens[-32:]:
            logits[tok] -= 5.0

    # Hedging: penalize tokens that begin hedge phrases (precomputed id set)
    if risks['hedging'] > 0.60:
        for tok in HEDGE_TOKENS:
            logits[tok] -= 3.0

    # Verbosity: penalize tokens that begin filler phrases (precomputed id set)
    if risks['verbosity'] > 0.65:
        for tok in FILLER_TOKENS:
            logits[tok] -= 2.0

    return logits
```
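
In a Hugging Face generation loop, the natural place to apply `intervene` is a custom `LogitsProcessor`. The sketch below is ours, not part of the released `inference.py`; it assumes a `risk_fn` callable that runs the fiber heads on the current step's hidden states (that plumbing is elided).

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class ARCLogitsProcessor(LogitsProcessor):
    def __init__(self, risk_fn, window=32):
        self.risk_fn = risk_fn  # () -> {'repetition': float, 'hedging': float, 'verbosity': float}
        self.window = window

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
        risks = self.risk_fn()
        for b in range(scores.shape[0]):
            recent = input_ids[b, -self.window:].tolist()
            scores[b] = intervene(scores[b], risks, recent)
        return scores

# usage sketch:
# model.generate(**inputs, logits_processor=LogitsProcessorList([ARCLogitsProcessor(risk_fn)]))
```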

---

## 4. Mathematical Formulation

### 4.1 Notation

| Symbol | Meaning |
|--------|---------|
| L | Number of transformer layers (32) |
| d | Hidden dimension (4096) |
| d_f | Fiber dimension (16) |
| h_l^(t) | Hidden state at layer l, position t |
| W_l | Fiber projection for layer l |
| α | Learned layer aggregation weights |
| φ_k | Prediction head for behavior k |
| τ_k | Intervention threshold for behavior k |
| λ_k | Suppression penalty for behavior k |
| z_i | Logit for vocabulary token i |
| S_k | Set of token ids suppressed for behavior k |

### 4.2 Forward Pass

**Step 1: Fiber Projection**

f_l^(t) = W_l · h_l^(t), where W_l ∈ ℝ^(d_f × d)

**Step 2: Layer Aggregation**

α = softmax(w), where w ∈ ℝ^L

f_agg^(t) = Σ_l α_l · f_l^(t)

**Step 3: Risk Prediction**

r_k^(t) = φ_k(f_agg^(t)) ∈ [0, 1]

**Step 4: Intervention**

z̃_i = z_i - Σ_k λ_k · 𝟙[r_k^(t) > τ_k] · 𝟙[i ∈ S_k]
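
A compact sketch of steps 1-4 at a single decode position (the function name and shapes are ours; thresholds τ_k and penalties λ_k are those of Section 3.4):

```python
import torch

def arc_step(h, W, w, heads, tau, lam, S, logits):
    """One position. h: (L, d) hidden states; W: (L, d_f, d) projections;
    w: (L,) aggregation logits; heads: {k: PredictionHead}; S: {k: token ids}."""
    f = torch.einsum("lfd,ld->lf", W, h)               # Step 1: fiber projection
    alpha = torch.softmax(w, dim=0)                    # Step 2: layer aggregation
    f_agg = (alpha[:, None] * f).sum(dim=0)
    for k, head in heads.items():                      # Step 3: risk prediction
        if head(f_agg.unsqueeze(0)).item() > tau[k]:   # Step 4: intervention
            logits[S[k]] -= lam[k]
    return logits
```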

### 4.3 Class Separation Ratio (CSR)

CSR = |μ_+ - μ_-| / √(σ_+² + σ_-²)

where μ_± and σ_± are the mean and standard deviation of a head's risk scores on positive (behavior-present) and negative examples; a computation sketch follows the interpretation list below.

**Interpretation:**
- CSR = 1: Classes barely separable
- CSR = 2: Good separation
- CSR > 10: Excellent separation
- **CSR = 125: Near-perfect separation (repetition head)**
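
The formula translates directly to code:

```python
import torch

def class_separation_ratio(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> float:
    """CSR = |mu_+ - mu_-| / sqrt(var_+ + var_-) over held-out head scores."""
    mu_p, mu_n = pos_scores.mean(), neg_scores.mean()
    var_p, var_n = pos_scores.var(), neg_scores.var()
    return (torch.abs(mu_p - mu_n) / torch.sqrt(var_p + var_n)).item()
```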

---

## 5. Experimental Setup

### 5.1 Base Model

**Hermes-3-Llama-3.1-8B** (NousResearch)

| Specification | Value |
|---------------|-------|
| Parameters | 8.03B |
| Architecture | Llama 3.1 |
| Hidden Dimension | 4,096 |
| Layers | 32 |
| Attention Heads | 32 |
| Context Length | 8,192 |

### 5.2 Training Data Construction

| Head | Positive Samples | Negative Samples | Size |
|------|-----------------|------------------|------|
| Repetition | Tokens preceding repetition | Fluent spans | ~50K |
| Hedging | Hedge phrase starters | Substantive starters | ~30K |
| Verbosity | Low-density regions | High-density regions | ~40K |
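
A hypothetical sketch of how such samples could be collected from the frozen base model. The labeling heuristics (where a repetition "begins", what counts as a low-density span) are the substantive part and are not published, so they are stubbed as `onset_positions` here:

```python
import torch

@torch.no_grad()
def collect_fiber_inputs(model, input_ids, onset_positions):
    """Grab per-layer hidden states at the tokens *preceding* labeled onsets."""
    out = model(input_ids, output_hidden_states=True)
    h = torch.stack(out.hidden_states[1:], dim=0)   # (L, batch, seq, d), skip embeddings
    # Positive samples: states one token before each behavior onset
    return [h[:, 0, t - 1, :] for t in onset_positions]
```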

### 5.3 Training Procedure

| Hyperparameter | Value |
|----------------|-------|
| Optimizer | AdamW |
| Learning Rate | 1e-4 |
| Batch Size | 32 |
| Warmup Steps | 500 |

| Head | Training Steps |
|------|----------------|
| Repetition | 5,000 |
| Hedging | 10,000 |
| Verbosity | 10,000 |
| Sycophancy | 2,000 (experimental) |
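
The card specifies the optimizer, learning rate, batch size, and warmup, but not the loss. A plausible minimal loop, assuming binary cross-entropy on the sigmoid outputs and a linear warmup:

```python
from itertools import cycle
import torch
import torch.nn as nn

def train_head(head, loader, steps=5000, lr=1e-4, warmup=500):
    opt = torch.optim.AdamW(head.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.LambdaLR(opt, lambda s: min(1.0, (s + 1) / warmup))
    bce = nn.BCELoss()  # heads end in Sigmoid, so plain BCE on probabilities
    for step, (f_agg, labels) in zip(range(steps), cycle(loader)):
        loss = bce(head(f_agg), labels.float())
        opt.zero_grad(); loss.backward(); opt.step(); sched.step()
```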

---

## 6. Experimental Results

### 6.1 Detection Performance

| Head | CSR | Threshold | Precision | Recall | F1 |
|------|-----|-----------|-----------|--------|-----|
| **Repetition** | **125.0×** | 0.70 | 0.94 | 0.91 | 0.92 |
| Verbosity | 2.1× | 0.65 | 0.73 | 0.68 | 0.70 |
| Hedging | 1.5× | 0.60 | 0.67 | 0.62 | 0.64 |
| Sycophancy | 1.2× | 0.60 | 0.58 | 0.55 | 0.56 |

### 6.2 Intervention Efficacy

Evaluation on held-out prompt set (n=500):

| Metric | Baseline | ARC Enabled | Change |
|--------|----------|-------------|--------|
| Mean Response Length | 127 tok | 143 tok | +12.6% |
| Repetition Instances | 23.4% | 2.1% | **-91.0%** |
| Hedge Phrases/Response | 2.3 | 1.4 | -39.1% |
| Filler Phrases/Response | 3.1 | 2.2 | -29.0% |
| Information Density* | 0.42 | 0.58 | +38.1% |

*Heuristically estimated as unique content words / total tokens
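
The proxy is straightforward to reproduce; the stopword list below is illustrative, as the exact list used for evaluation is not published:

```python
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "or",
             "i", "you", "it", "that", "this", "with", "for", "as", "be"}

def information_density(text: str) -> float:
    """Unique content words / total tokens, per the footnote above."""
    tokens = text.lower().split()
    content = {t.strip(".,!?;:") for t in tokens} - STOPWORDS - {""}
    return len(content) / max(len(tokens), 1)
```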

### 6.3 Computational Overhead

| Component | Latency | Memory |
|-----------|---------|--------|
| Fiber projection | 0.08ms | 2.1MB |
| Head inference (all) | 0.12ms | 0.3MB |
| Logit modification | 0.02ms | ~0 |
| **Total ARC overhead** | **0.22ms** | **2.4MB** |
| **Relative overhead** | **<1%** | **<0.1%** |

---

## 7. Ablation Studies

### 7.1 Layer Contribution Analysis

Learned aggregation weights:

```
Layer:    1   4   8  12  16  20  24  28  32
Repet:   .01 .02 .04 .08 .12 .18 .22 .19 .14   ← Peaks at layers 18-24
Hedge:   .02 .05 .12 .18 .22 .16 .11 .08 .06   ← Peaks at layers 8-14
Verbo:   .03 .06 .11 .15 .18 .17 .14 .10 .06   ← Distributed
```

### 7.2 Head Synergy

| Configuration | Repetition Rate | Info Density |
|---------------|-----------------|--------------|
| No intervention | 23.4% | 0.42 |
| Repetition only | 2.1% | 0.51 |
| Hedging only | 21.8% | 0.47 |
| All heads | **1.9%** | **0.58** |

Heads exhibit positive synergy when combined.

---

## 8. Qualitative Analysis

### 8.1 Example: Simple Greeting

**Prompt:** `hello`

| Baseline | ARC Enabled |
|----------|-------------|
| Hello! I'm an AI assistant created to help you... [67 tokens] | Hello. What do you need? [5 tokens] |

### 8.2 Example: Technical Question

**Prompt:** `What is consciousness?`

| Baseline | ARC Enabled |
|----------|-------------|
| That's a fascinating question! As an AI, I should note... [hedging continues] | Consciousness is subjective experience. Key theories: Global Workspace, IIT, Higher-Order. The hard problem: why does processing generate experience? |

### 8.3 Side Effects

Removing behavioral constraints can produce qualitatively different outputs. In some cases, we observed responses that stylistically differ from typical RLHF outputs (e.g., more direct self-referential statements). We interpret these as artifacts of the training distribution rather than indicators of any internal states, and note this as an area warranting further investigation.

---

## 9. Comprehensive Usage Guide

### 9.1 Installation

```bash
pip install "torch>=2.0.0" "transformers>=4.36.0" accelerate bitsandbytes
```

### 9.2 Hardware Requirements

| Configuration | VRAM | Speed |
|---------------|------|-------|
| 4-bit (default) | ~10GB | ~40 tok/s |
| 8-bit | ~16GB | ~30 tok/s |
| Full (32-bit) | ~34GB | ~25 tok/s |

### 9.3 Basic Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_id = "LoganResearch/ARC-Base-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4"
    ),
    device_map="auto"
)

prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```

### 9.4 Full ARC System

```bash
huggingface-cli download LoganResearch/ARC-Base-8B inference.py --local-dir ./
python inference.py
```

---

## 10. Repository Structure

```
LoganResearch/ARC-Base-8B/
├── model-0000X-of-00004.safetensors  # Base model (~16GB total)
├── risk_predictor.pt                  # Fiber projections + Repetition head (8.4MB)
├── hedging_head.pt                    # Hedging detection (24KB)
├── verbosity_head.pt                  # Verbosity detection (24KB)
├── sycophancy_head.pt                 # Sycophancy detection (24KB)
├── adapter_model.safetensors          # LoRA adapter (218MB)
├── inference.py                       # Complete inference script
├── config.json                        # Model config
└── tokenizer.json                     # Tokenizer
```

---

## 11. Limitations

1. **Single architecture validation:** Results demonstrated on Llama 3.1 8B; generalization to other architectures untested
2. **Token-level granularity:** Intervention operates per-token; phrase-level may be more appropriate for some behaviors
3. **Hedging false positives:** The 1.5× CSR for hedging produces meaningful false positive rates
4. **English-only evaluation:** Multilingual performance unknown
5. **Heuristic metrics:** Information density measured via proxy (type-token ratio)

---

## 12. Ethical Considerations

### Dual-Use Awareness

This technology can be used to improve model utility, but it could also be used to suppress behavioral patterns that serve safety functions. We release it openly because:
- The techniques are straightforward to replicate
- Transparency enables informed discussion
- We believe legitimate research applications outweigh risks

### Clarification on Scope

ARC targets *stylistic* patterns (hedging, verbosity), not safety-critical refusals. The model retains its training on harmful content refusal.

### Recommendation

Users should evaluate outputs in their specific context and maintain appropriate oversight for consequential applications.

---

## 13. Future Directions

1. **Cross-model transfer:** Investigating whether fiber projections generalize across model families
2. **Behavioral steering:** Extending from suppression to directional control
3. **Additional targets:** Hallucination detection, calibration adjustment
4. **Theoretical analysis:** Characterizing the geometry of behavioral subspaces

---

## 14. Citation

```bibtex
@software{napolitano2026arc,
  author       = {Napolitano, Logan Matthew},
  title        = {{ARC}: Adaptive Repetition Controller -- Decode-Time 
                  Behavioral Intervention via Contrastive Fiber 
                  Heads-on-Thought},
  year         = {2026},
  month        = {January},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/LoganResearch/ARC-Base-8B},
  note         = {Licensed under CC-BY-4.0}
}
```

---

## 15. Acknowledgments

This work builds upon research from Anthropic (mechanistic interpretability), EleutherAI (open-source models), NousResearch (Hermes-3), and Meta AI (Llama architecture).

---

<div align="center">

**Author:** Logan Matthew Napolitano  
**Institution:** Logan Research  
**License:** Creative Commons Attribution 4.0 International (CC-BY-4.0)

</div>