# K-FAC Memorization Suppression - Local MacBook Execution

## Task

Run the K-FAC memorization suppression reproduction **locally on a MacBook M2 (16 GB RAM)** using the 1B OLMo model (`allenai/OLMo-1B-0724-hf`).

---

## Background

This repo reproduces the paper "From Memorization to Reasoning in the Spectrum of Loss Curvature". The code is already implemented in `src/`; the goal here is to run the full pipeline locally rather than in Colab.

**Paper's 1B model results (our target):**
| Method | Dolma Strict % | Perplexity |
|--------|----------------|------------|
| Baseline | 98.46% | 23.19 |
| K-FAC | 2.8% | 26.53 |

---

## Key Files

- `src/kfac_collector.py` - Collects K-FAC statistics (A, G covariance matrices)
- `src/kfac_editor.py` - Applies weight editing using eigendecomposition
- `src/evaluate.py` - Memorization metrics (strict/loose accuracy, Levenshtein, perplexity)
- `src/mine_memorized.py` - Mines memorized sequences from training data
- `plans/implementation_plan.md` - Detailed implementation plan

---

## Configuration for 1B Model

| Setting | Value |
|---------|-------|
| Model | `allenai/OLMo-1B-0724-hf` |
| Target layers | 11, 12, 13 (proportionally scaled from 7B) |
| Projections | gate_proj, up_proj |
| Energy threshold | 60% |
| K-FAC tokens | 2-5M (reduced for local) |
| Prefix length | 64 tokens |
| Suffix length | 48 tokens |

---

## Steps to Execute

### Step 1: Install dependencies

```bash
pip install torch transformers datasets accelerate tqdm python-Levenshtein
```

### Step 2: Collect K-FAC Statistics

Create and run a script that:
1. Loads `allenai/OLMo-1B-0724-hf` model with `torch.float16` or `torch.bfloat16`
2. Uses `src/kfac_collector.py` to collect A and G matrices
3. Streams ~2-5M tokens from `allenai/dolma` (name=`v1_6-sample`)
4. Target layers: 11, 12, 13; projections: gate_proj, up_proj
5. Saves to `kfac_statistics.pt`

**Use MPS (Metal) for acceleration:** `device = "mps" if torch.backends.mps.is_available() else "cpu"`

**If you hit OOM, reduce the batch size to 1-2 and enable gradient checkpointing.**
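The core of the statistic collection is accumulating the input covariance A = E[aaᵀ] and the pre-activation-gradient covariance G = E[ggᵀ] per layer. The repo's `src/kfac_collector.py` presumably does this with hooks around the real model and Dolma stream; here is a minimal, hedged sketch on a toy linear layer (all names illustrative):

```python
import torch
import torch.nn as nn

def collect_kfac(layer: nn.Linear, inputs: torch.Tensor):
    """Accumulate K-FAC factors for one layer:
    A = E[a a^T] over layer inputs, G = E[g g^T] over output gradients."""
    stats = {"A": torch.zeros(layer.in_features, layer.in_features),
             "G": torch.zeros(layer.out_features, layer.out_features),
             "n": 0}

    def fwd_hook(mod, inp, out):
        a = inp[0].reshape(-1, mod.in_features)  # flatten batch/seq dims
        stats["A"] += a.T @ a
        stats["n"] += a.shape[0]

    def bwd_hook(mod, grad_in, grad_out):
        g = grad_out[0].reshape(-1, mod.out_features)
        stats["G"] += g.T @ g

    h1 = layer.register_forward_hook(fwd_hook)
    h2 = layer.register_full_backward_hook(bwd_hook)
    out = layer(inputs)
    out.pow(2).sum().backward()  # stand-in for the real LM loss
    h1.remove(); h2.remove()

    stats["A"] /= stats["n"]
    stats["G"] /= stats["n"]
    return stats

stats = collect_kfac(nn.Linear(8, 4), torch.randn(16, 8))
```

In the real run the forward/backward pass is the OLMo LM loss over the streamed Dolma tokens, and the accumulated `stats` per target projection are what get saved to `kfac_statistics.pt`.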

### Step 3: Mine Memorized Sequences

1. Load the 1B model
2. Use `src/mine_memorized.py` to sample ~10k candidates from `allenai/olmo-mix-1124`
3. Filter for strict memorization matches
4. Target: find ~100-200 memorized sequences (paper found 650 for 1B)
5. Save to `memorized_sequences.json`
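The strict/loose criteria above can be sketched as follows. `src/mine_memorized.py` defines the authoritative versions; here `greedy_continue` stands in for a wrapper around `model.generate(..., do_sample=False, max_new_tokens=48)`:

```python
def strict_match(greedy_continue, prefix_ids, true_suffix_ids):
    """Strict memorization: greedy decoding of the 48-token suffix from
    the 64-token prefix reproduces the ground-truth tokens exactly."""
    pred = greedy_continue(prefix_ids, len(true_suffix_ids))
    return list(pred) == list(true_suffix_ids)

def token_accuracy(pred_ids, true_ids):
    """Loose score: fraction of suffix positions predicted correctly."""
    return sum(p == t for p, t in zip(pred_ids, true_ids)) / len(true_ids)
```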

### Step 4: Evaluate Baseline

1. Use `src/evaluate.py` with `memorization_score()` function
2. Measure strict accuracy, loose accuracy on found sequences
3. Measure perplexity on `NeelNanda/pile-10k` (use a smaller sample, e.g., 200 documents)
4. **Expected:** ~98% strict accuracy, ~23 perplexity
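The perplexity number is just the exponential of the mean next-token negative log-likelihood; `src/evaluate.py` presumably computes it over the pile-10k sample. The core formula, on precomputed logits:

```python
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Perplexity = exp(mean next-token NLL).
    logits: (num_tokens, vocab_size), labels: (num_tokens,) target ids."""
    nll = F.cross_entropy(logits, labels)
    return float(torch.exp(nll))

# Sanity check: uniform logits over a 10-token vocab give perplexity ~10.
ppl_uniform = perplexity(torch.zeros(5, 10), torch.arange(5))
```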

### Step 5: Apply K-FAC Editing

1. Load K-FAC statistics from `kfac_statistics.pt`
2. Use `src/kfac_editor.py` with `EditConfig(energy_threshold=0.6, formula="original")`
3. Edit layers 11, 12, 13 with gate_proj and up_proj
4. Evaluate memorization and perplexity after editing
5. **Expected:** ~3% strict accuracy, ~27 perplexity
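One plausible reading of the edit (the authoritative keep/discard rule lives in `src/kfac_editor.py`): express W in the K-FAC eigenbasis, rank directions by curvature λᵢ·μⱼ, keep the top directions accounting for the 60% energy threshold, and zero the low-curvature tail where memorization is thought to live. A hedged sketch:

```python
import torch

def kfac_edit(W, A, G, energy_threshold=0.6):
    """Zero low-curvature components of W in the K-FAC eigenbasis,
    keeping the top directions whose curvature energy lam_i * mu_j
    sums to `energy_threshold` of the total."""
    mu, UA = torch.linalg.eigh(A)    # input-side eigenpairs
    lam, UG = torch.linalg.eigh(G)   # output-side eigenpairs
    C = UG.T @ W @ UA                # W's coefficients in the eigenbasis
    energy = torch.outer(lam, mu)    # curvature per basis direction
    flat = energy.flatten()
    order = flat.argsort(descending=True)
    cum = flat[order].cumsum(0)
    keep = torch.zeros_like(flat, dtype=torch.bool)
    keep[order[cum <= energy_threshold * cum[-1]]] = True
    C_edit = C.flatten() * keep      # zero the low-curvature tail
    return UG @ C_edit.reshape(C.shape) @ UA.T

# Demo on toy positive-definite covariances.
torch.manual_seed(0)
W = torch.randn(4, 3)
A = torch.randn(3, 3); A = A @ A.T + torch.eye(3)   # PD input covariance
G = torch.randn(4, 4); G = G @ G.T + torch.eye(4)   # PD gradient covariance
W_full = kfac_edit(W, A, G, energy_threshold=1.0)   # threshold 1.0 keeps all
```

With `energy_threshold=1.0` every direction is kept and W is reconstructed unchanged; lowering it toward 0.6 progressively removes the low-curvature components.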

### Step 6: Test Modified Formula

1. Restore original weights
2. Apply editing with `formula="modified"` (Π = λ·μ·|C|²)
3. Compare results to original formula
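Restoring just means reloading a saved copy of the original tensors (e.g., a `state_dict` snapshot taken before Step 5). The two formulas then differ only in the score used to rank eigenbasis directions; the modified Π = λ·μ·|C|² additionally weights each direction by the squared coefficient it carries in W. A sketch (names illustrative):

```python
import torch

def importance(C, lam, mu, formula="original"):
    """Per-direction score used to rank eigenbasis components.
    'original': curvature only, lam_i * mu_j.
    'modified': Pi = lam_i * mu_j * |C_ij|^2."""
    score = torch.outer(lam, mu)
    if formula == "modified":
        score = score * C.abs() ** 2
    return score

lam, mu = torch.tensor([2.0, 1.0]), torch.tensor([3.0, 1.0])
C = torch.tensor([[1.0, 2.0], [0.5, 1.0]])
orig_score = importance(C, lam, mu)                     # [[6, 2], [3, 1]]
mod_score = importance(C, lam, mu, formula="modified")  # [[6, 8], [0.75, 1]]
```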

---

## Memory Optimization Tips

- Use `torch.float16` or `torch.bfloat16`
- Batch size 1-2
- Use `torch.no_grad()` for inference
- Use MPS device: `model.to("mps")`
- For K-FAC collection, process fewer tokens (2M instead of 20M)
- For mining, use smaller candidate pool (10k instead of 50k)
- For perplexity, use fewer samples (200 instead of 1000)
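The device, dtype, and `no_grad` tips combine into one loading pattern. A toy module stands in for the OLMo model here; with `transformers` you would pass `torch_dtype=dtype` to `AutoModelForCausalLM.from_pretrained` and then `.to(device)`:

```python
import torch
import torch.nn as nn

# Device/precision pattern for a 16 GB M2; falls back to CPU elsewhere.
# float16 is the usual choice on MPS; bfloat16 is safer on CPU.
device = "mps" if torch.backends.mps.is_available() else "cpu"
dtype = torch.float16 if device == "mps" else torch.bfloat16

model = nn.Linear(16, 16).to(device=device, dtype=dtype)
model.eval()

with torch.no_grad():  # skip autograd bookkeeping during evaluation
    out = model(torch.randn(2, 16, device=device, dtype=dtype))
```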

---

## Success Criteria

1. K-FAC statistics collected and saved
2. Found some memorized sequences (even 50-100 is useful)
3. Baseline shows high memorization (~95%+)
4. After K-FAC edit, memorization drops significantly (<10%)
5. Perplexity increase <30%

---

## Output Files

- `kfac_statistics.pt` - K-FAC A and G matrices
- `memorized_sequences.json` - Found memorized sequences
- `experiment_results.json` - Final metrics