omrifahn committed on
Commit 3dc3ac4 · verified · 1 Parent(s): 88b5d86

Upload INSTRUCTIONS.md with huggingface_hub

Files changed (1):
  1. INSTRUCTIONS.md +93 -0
INSTRUCTIONS.md ADDED
@@ -0,0 +1,93 @@
# K-FAC Memorization Suppression - Instructions

## Target: OLMo 1B Model

We're using the **1B model** for faster iteration and lower compute.

---

## Paper Baseline (1B Model) - Our Target to Reproduce

| Method | Dolma Strict % | Dolma Loose % | Quotes Strict % | Perplexity |
|--------|----------------|---------------|-----------------|------------|
| **Baseline** | 98.46 | 99.38 | 98.5 | 23.19 |
| **K-FAC** | 2.8 | 7.2 | 27.7 | 26.53 |

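The strict/loose columns above are continuation-match metrics: prompt the model with the prefix of a sequence it saw in training, decode greedily, and compare against the true continuation (strict = exact match, loose = most tokens match). A minimal sketch of that check; the prefix/continuation lengths and the loose threshold here are our assumptions, not values from the paper:

```python
import torch

@torch.no_grad()
def memorization_match(model, tokenizer, text, prefix_len=32, cont_len=48, loose_frac=0.75):
    """Greedy continuation-match check for one candidate sequence.

    Returns (strict, loose). prefix_len / cont_len / loose_frac are
    illustrative assumptions, not the paper's settings.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    if ids.numel() < prefix_len + cont_len:
        return False, False

    prefix = ids[:prefix_len].unsqueeze(0).to(model.device)
    target = ids[prefix_len:prefix_len + cont_len]

    out = model.generate(
        prefix,
        min_new_tokens=cont_len,
        max_new_tokens=cont_len,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    generated = out[0, prefix_len:prefix_len + cont_len].cpu()

    frac = (generated == target).float().mean().item()
    return frac == 1.0, frac >= loose_frac
```
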
**Key findings from the paper:**
- K-FAC reduces memorization from ~98% to ~3%
- Perplexity increases by ~14% (23.19 → 26.53)
- K-FAC transfers better to unseen memorized content (quotes) than BSN

---

## Model & Hyperparameters

| Setting | Value | Notes |
|---------|-------|-------|
| Model | `allenai/OLMo-1B-0724-hf` | 16 layers |
| Target layers | **11, 12, 13** | ~72% depth (scaled from the 7B's layers 23-25 of 32) |
| Projections | `gate_proj`, `up_proj` | Same as 7B |
| Energy threshold | 60% | Same as 7B |

**⚠️ Layer selection for the 1B model is NOT specified in the paper** - we use proportional scaling, so the choice may need tuning.

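For the 60% energy threshold, one natural reading (our assumption; the notebooks define the actual procedure) is a cumulative-eigenvalue cutoff on a K-FAC factor: keep the smallest set of leading eigendirections whose eigenvalues account for 60% of the spectrum's mass. A sketch of just that selection rule:

```python
import torch

def split_by_energy(factor, energy=0.60):
    """Split a symmetric K-FAC factor's eigendirections at a cumulative-energy cutoff.

    Returns (top, rest): eigenvector columns covering `energy` of the total
    eigenvalue mass, and the remainder. How these directions are then used to
    edit gate_proj/up_proj is up to the notebooks; this only illustrates the
    60% selection rule under our reading of "energy threshold".
    """
    evals, evecs = torch.linalg.eigh(factor)      # ascending eigenvalues
    evals, evecs = evals.flip(0), evecs.flip(1)   # reorder to descending
    frac = torch.cumsum(evals, dim=0) / evals.sum()
    k = int((frac < energy).sum().item()) + 1     # smallest k reaching the cutoff
    return evecs[:, :k], evecs[:, k:]
```
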
---

## What You Need

- **Google Colab** with GPU (T4 sufficient for 1B, A100 faster)
- **Google Drive** to store intermediate files

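In Colab, it is convenient to keep the intermediate files (`kfac_statistics.pt`, `memorized_val.json`) on Drive so they survive runtime resets. The mount call is standard Colab; the directory name is just an example:

```python
from google.colab import drive

drive.mount("/content/drive")  # prompts for authorization on first run

# Example location for intermediate artifacts - adjust as you like
WORK_DIR = "/content/drive/MyDrive/kfac_memorization"
```
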
---

## Step 1: Local - Verify Tests

```bash
python3 tests/test_local.py
```

---

## Step 2: Colab - Collect K-FAC Stats (~30-60 min on T4)

1. Open `notebooks/01_collect_kfac.ipynb`
2. **Change model to:** `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Output: `kfac_statistics.pt` (a simplified sketch of what this collection amounts to follows below)

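K-FAC models a layer's curvature with two small factors per weight matrix: the second moment of its inputs, A = E[a aᵀ], and of its output gradients, G = E[g gᵀ]. The notebook does the real collection; as a rough sketch of the activation factor for the target projections (module paths follow the Hugging Face OLMo layout - an assumption worth checking against the notebook; the gradient factor and normalization details are omitted):

```python
import torch

def collect_activation_factors(model, dataloader, layer_ids=(11, 12, 13),
                               proj_names=("gate_proj", "up_proj")):
    """Accumulate A = E[a a^T] over the *inputs* of the target projections.

    Simplified sketch: the real collection in notebooks/01_collect_kfac.ipynb
    also tracks the gradient factor G and handles normalization itself.
    """
    factors, counts, hooks = {}, {}, []

    def make_hook(key):
        def hook(module, inputs, output):
            a = inputs[0].detach().reshape(-1, inputs[0].shape[-1]).float()  # (tokens, d_in)
            factors[key] = factors.get(key, 0) + a.T @ a
            counts[key] = counts.get(key, 0) + a.shape[0]
        return hook

    for i in layer_ids:
        mlp = model.model.layers[i].mlp
        for name in proj_names:
            key = f"layers.{i}.mlp.{name}"
            hooks.append(getattr(mlp, name).register_forward_hook(make_hook(key)))

    model.eval()
    with torch.no_grad():
        for batch in dataloader:
            model(input_ids=batch["input_ids"].to(model.device))

    for h in hooks:
        h.remove()
    return {k: factors[k] / counts[k] for k in factors}
```

The result can then be saved with `torch.save(..., "kfac_statistics.pt")` for the later steps.
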
---

## Step 3: Colab - Mine Memorized Sequences (~30 min)

1. Open `notebooks/02_mine_memorized.ipynb`
2. **Change model to:** `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Output: `memorized_val.json` (target: ~125 sequences, as in the paper; see the sketch below)

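Mining reuses the same strict-match idea as the metric sketch in the baseline section: feed candidate training sequences through the model and keep the ones it reproduces verbatim. A minimal sketch on top of the `memorization_match` helper above (where the candidates come from and how many are screened is the notebook's business):

```python
import json

def mine_memorized(model, tokenizer, candidates, out_path="memorized_val.json"):
    """Keep the candidate texts the model completes verbatim (strict match)."""
    memorized = [t for t in candidates if memorization_match(model, tokenizer, t)[0]]
    with open(out_path, "w") as f:
        json.dump(memorized, f)
    return memorized
```
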
---

## Step 4: Colab - Run Experiments (~30-60 min)

1. Open `notebooks/03_experiments.ipynb`
2. **Change model to:** `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Compare the results to the paper baseline above (a perplexity sketch follows below)

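The perplexity column in the baseline table is the usual exponential of the mean token-level negative log-likelihood on held-out text; the evaluation corpus and context length below are placeholders for whatever the notebook uses:

```python
import math
import torch

@torch.no_grad()
def perplexity(model, tokenizer, texts, max_length=1024):
    """exp(mean NLL per predicted token) over a list of held-out texts."""
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        ids = tokenizer(text, return_tensors="pt",
                        truncation=True, max_length=max_length).input_ids.to(model.device)
        if ids.shape[1] < 2:
            continue
        loss = model(ids, labels=ids).loss      # mean NLL over the shifted tokens
        n = ids.shape[1] - 1                    # number of predicted tokens
        total_nll += loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)
```
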
---

## Expected Compute Time (1B model)

| Task | T4 GPU | A100 GPU |
|------|--------|----------|
| K-FAC collection (20M tokens) | ~60 min | ~20 min |
| Mining memorized sequences | ~30 min | ~10 min |
| Experiments | ~30 min | ~15 min |
| **Total** | **~2 hours** | **~45 min** |

---

## Troubleshooting

- **OOM:** Reduce `batch_size` (4 → 2 → 1)
- **Wrong layers:** If layers 11-13 don't work well, try 10-12 or 12-14
- **Low memorization rate when mining:** Normal - the paper found 650 sequences from a much larger candidate pool