# K-FAC Memorization Suppression - Local MacBook Execution
## Task
Run the K-FAC memorization suppression reproduction locally on a MacBook M2 (16 GB RAM) using the OLMo-2 1B model.
## Background
This repo reproduces the paper "From Memorization to Reasoning in the Spectrum of Loss Curvature". The code is already implemented in `src/`; we need to run the full pipeline locally instead of on Colab.
Paper's 1B model results (our target):
| Method | Dolma Strict % | Perplexity |
|---|---|---|
| Baseline | 98.46% | 23.19 |
| K-FAC | 2.8% | 26.53 |
## Key Files
- `src/kfac_collector.py` - Collects K-FAC statistics (A, G covariance matrices)
- `src/kfac_editor.py` - Applies weight editing using eigendecomposition
- `src/evaluate.py` - Memorization metrics (strict/loose accuracy, Levenshtein, perplexity)
- `src/mine_memorized.py` - Mines memorized sequences from training data
- `plans/implementation_plan.md` - Detailed implementation plan
## Configuration for 1B Model
| Setting | Value |
|---|---|
| Model | allenai/OLMo-1B-0724-hf |
| Target layers | 11, 12, 13 (proportionally scaled from 7B) |
| Projections | gate_proj, up_proj |
| Energy threshold | 60% |
| K-FAC tokens | 2-5M (reduced for local) |
| Prefix length | 64 tokens |
| Suffix length | 48 tokens |
## Steps to Execute
### Step 1: Install dependencies

```shell
pip install torch transformers datasets accelerate tqdm python-Levenshtein
```
### Step 2: Collect K-FAC Statistics
Create and run a script that:
- Loads `allenai/OLMo-1B-0724-hf` with `torch.float16` or `torch.bfloat16`
- Uses `src/kfac_collector.py` to collect A and G matrices
- Streams ~2-5M tokens from `allenai/dolma` (name=`v1_6-sample`)
- Target layers: 11, 12, 13; projections: gate_proj, up_proj
- Saves to `kfac_statistics.pt`
Use MPS (Metal) for acceleration: `device = "mps" if torch.backends.mps.is_available() else "cpu"`.
Reduce `batch_size` to 1-2 and enable gradient checkpointing if you hit OOM.
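The statistics being collected are per-layer covariances: A = E[a aᵀ] over layer inputs and G = E[g gᵀ] over gradients with respect to layer outputs. A minimal pure-Python sketch of that accumulation (illustrative shapes and names; the real `src/kfac_collector.py` attaches forward/backward hooks to gate_proj and up_proj and operates on batched tensors):

```python
def outer(v, w):
    """Outer product v wᵀ as a nested list."""
    return [[vi * wj for wj in w] for vi in v]

def add_(M, N):
    """In-place elementwise addition M += N."""
    for i in range(len(M)):
        for j in range(len(M[0])):
            M[i][j] += N[i][j]

def collect_kfac(samples):
    """Accumulate K-FAC factors over (input, output-gradient) samples:
    A = E[a aᵀ] from layer inputs, G = E[g gᵀ] from output gradients."""
    a0, g0 = samples[0]
    A = [[0.0] * len(a0) for _ in a0]
    G = [[0.0] * len(g0) for _ in g0]
    for a, g in samples:
        add_(A, outer(a, a))
        add_(G, outer(g, g))
    n = float(len(samples))
    A = [[x / n for x in row] for row in A]
    G = [[x / n for x in row] for row in G]
    return A, G
```

Both factors come out symmetric and positive semi-definite, which is what the later eigendecomposition step relies on.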
### Step 3: Mine Memorized Sequences
- Load the 1B model
- Use `src/mine_memorized.py` to sample ~10k candidates from `allenai/olmo-mix-1124`
- Filter for strict memorization matches
- Target: find ~100-200 memorized sequences (the paper found 650 for 1B)
- Save to `memorized_sequences.json`
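Strict memorization means the greedy continuation of the 64-token prefix reproduces the 48-token true suffix exactly; the Levenshtein metric gives a softer similarity score. A self-contained sketch of both checks on token-ID lists (a pure-Python edit distance stands in for `python-Levenshtein`; the exact loose-accuracy definition lives in `src/evaluate.py`, so the normalization here is an assumption):

```python
def levenshtein(a, b):
    """Edit distance between two token sequences (iterative DP, O(len(a)*len(b)))."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def is_strict_match(generated, target):
    """Strict memorization: generated suffix equals the true suffix exactly."""
    return generated == target

def loose_similarity(generated, target):
    """Normalized similarity in [0, 1]; 1.0 means identical sequences."""
    d = levenshtein(generated, target)
    return 1.0 - d / max(len(generated), len(target), 1)
```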
### Step 4: Evaluate Baseline
- Use `src/evaluate.py` with the `memorization_score()` function
- Measure strict and loose accuracy on the found sequences
- Measure perplexity on `NeelNanda/pile-10k` (use a smaller sample, e.g., 200 documents)
- Expected: ~98% strict accuracy, ~23 perplexity
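Perplexity is the exponential of the token-weighted mean negative log-likelihood, so per-document losses must be aggregated by token count rather than averaged per document. A minimal sketch of the aggregation (the per-token NLLs would come from the model's cross-entropy loss; `src/evaluate.py` owns the actual implementation):

```python
import math

def perplexity(token_nlls):
    """exp of the mean per-token negative log-likelihood."""
    return math.exp(sum(token_nlls) / len(token_nlls))

def corpus_perplexity(docs):
    """docs: (sum_of_nlls, num_tokens) pairs; token-weighted aggregate
    so long documents count proportionally more than short ones."""
    total_nll = sum(s for s, _ in docs)
    total_tokens = sum(n for _, n in docs)
    return math.exp(total_nll / total_tokens)
```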
### Step 5: Apply K-FAC Editing
- Load K-FAC statistics from `kfac_statistics.pt`
- Use `src/kfac_editor.py` with `EditConfig(energy_threshold=0.6, formula="original")`
- Edit layers 11, 12, 13 (gate_proj and up_proj)
- Evaluate memorization and perplexity after editing
- Expected: ~3% strict accuracy, ~27 perplexity
### Step 6: Test Modified Formula
- Restore the original weights
- Apply editing with `formula="modified"` (Π = λ·μ·|C|²)
- Compare results against the original formula
## Memory Optimization Tips
- Use `torch.float16` or `torch.bfloat16`
- Batch size 1-2
- Use `torch.no_grad()` for inference
- Use the MPS device: `model.to("mps")`
- For K-FAC collection, process fewer tokens (2M instead of 20M)
- For mining, use a smaller candidate pool (10k instead of 50k)
- For perplexity, use fewer samples (200 instead of 1000)
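As a sanity check before launching, estimate the weight footprint: parameter count times bytes per parameter (2 for fp16/bf16, 4 for fp32). A quick helper for that back-of-envelope arithmetic (the exact parameter count of OLMo-1B is not stated here, so any specific number you plug in is your own assumption):

```python
def model_weight_gib(num_params, bytes_per_param=2):
    """Approximate memory for model weights alone, in GiB.
    bytes_per_param: 2 for fp16/bf16, 4 for fp32."""
    return num_params * bytes_per_param / 2**30

# For a hypothetical ~1e9-parameter model, fp16 weights fit easily
# in 16 GiB, while fp32 roughly doubles the footprint; activations,
# gradients, and the K-FAC factors come on top of this.
```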
## Success Criteria
- K-FAC statistics collected and saved
- Found some memorized sequences (even 50-100 is useful)
- Baseline shows high memorization (~95%+)
- After K-FAC edit, memorization drops significantly (<10%)
- Perplexity increase <30%
## Output Files
- `kfac_statistics.pt` - K-FAC A and G matrices
- `memorized_sequences.json` - Found memorized sequences
- `experiment_results.json` - Final metrics