
# K-FAC Memorization Suppression - Local MacBook Execution

## Task

Run the K-FAC memorization suppression reproduction locally on MacBook M2 (16GB RAM) using OLMo-2 1B model.


## Background

This repo reproduces the paper "From Memorization to Reasoning in the Spectrum of Loss Curvature". The code is already implemented in src/. We need to run the full pipeline locally instead of using Colab.

Paper's 1B model results (our target):

| Method   | Dolma Strict % | Perplexity |
|----------|----------------|------------|
| Baseline | 98.46%         | 23.19      |
| K-FAC    | 2.8%           | 26.53      |

## Key Files

- `src/kfac_collector.py` - Collects K-FAC statistics (A, G covariance matrices)
- `src/kfac_editor.py` - Applies weight editing using eigendecomposition
- `src/evaluate.py` - Memorization metrics (strict/loose accuracy, Levenshtein, perplexity)
- `src/mine_memorized.py` - Mines memorized sequences from training data
- `plans/implementation_plan.md` - Detailed implementation plan

## Configuration for 1B Model

| Setting          | Value                                      |
|------------------|--------------------------------------------|
| Model            | `allenai/OLMo-1B-0724-hf`                  |
| Target layers    | 11, 12, 13 (proportionally scaled from 7B) |
| Projections      | `gate_proj`, `up_proj`                     |
| Energy threshold | 60%                                        |
| K-FAC tokens     | 2-5M (reduced for local)                   |
| Prefix length    | 64 tokens                                  |
| Suffix length    | 48 tokens                                  |

## Steps to Execute

### Step 1: Install Dependencies

```bash
pip install torch transformers datasets accelerate tqdm python-Levenshtein
```

### Step 2: Collect K-FAC Statistics

Create and run a script that:

  1. Loads allenai/OLMo-1B-0724-hf model with torch.float16 or torch.bfloat16
  2. Uses src/kfac_collector.py to collect A and G matrices
  3. Streams ~2-5M tokens from allenai/dolma (name=v1_6-sample)
  4. Target layers: 11, 12, 13; projections: gate_proj, up_proj
  5. Saves to kfac_statistics.pt

Use MPS (Metal) for acceleration:

```python
device = "mps" if torch.backends.mps.is_available() else "cpu"
```

If you hit out-of-memory errors, reduce `batch_size` to 1-2 and enable gradient checkpointing.
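The core of the collection step is accumulating, per target layer, the input-activation covariance A = E[a aᵀ] (and the analogous output-gradient covariance G) over the streamed tokens. A minimal numpy sketch of that accumulation, with hypothetical names; the real hook-based API lives in `src/kfac_collector.py`:

```python
import numpy as np

def accumulate_covariance(batches, dim):
    """Running estimate of A = E[a a^T] over streamed activation batches."""
    A = np.zeros((dim, dim))
    n = 0
    for acts in batches:        # acts: (tokens, dim) activations entering a layer
        A += acts.T @ acts      # sum of outer products a a^T
        n += acts.shape[0]
    return A / max(n, 1)        # normalize by total token count

rng = np.random.default_rng(0)
toy_batches = [rng.standard_normal((128, 16)) for _ in range(4)]
A = accumulate_covariance(toy_batches, 16)
assert A.shape == (16, 16)
assert np.allclose(A, A.T)      # a covariance estimate must be symmetric
```

The same pattern, applied to gradients flowing out of each projection, yields G; only the A and G matrices (not the raw activations) need to fit in memory, which is what makes this feasible on 16GB.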

### Step 3: Mine Memorized Sequences

  1. Load the 1B model
  2. Use src/mine_memorized.py to sample ~10k candidates from allenai/olmo-mix-1124
  3. Filter for strict memorization matches
  4. Target: find ~100-200 memorized sequences (paper found 650 for 1B)
  5. Save to memorized_sequences.json
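The strict and loose criteria reduce to token-ID comparisons between the model's greedy continuation of the 64-token prefix and the true 48-token suffix. The helper names below are illustrative only; the actual metrics are implemented in `src/evaluate.py`:

```python
def strict_match(generated_ids, suffix_ids):
    """Strict memorization: greedy continuation reproduces the suffix exactly."""
    return generated_ids[:len(suffix_ids)] == suffix_ids

def token_accuracy(generated_ids, suffix_ids):
    """Loose score: fraction of suffix positions predicted correctly."""
    hits = sum(g == s for g, s in zip(generated_ids, suffix_ids))
    return hits / len(suffix_ids)

suffix = [5, 9, 2, 7]
assert strict_match([5, 9, 2, 7, 1], suffix)       # exact reproduction
assert token_accuracy([5, 9, 0, 7], suffix) == 0.75  # one token wrong
```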

### Step 4: Evaluate Baseline

  1. Use src/evaluate.py with memorization_score() function
  2. Measure strict accuracy, loose accuracy on found sequences
  3. Measure perplexity on NeelNanda/pile-10k (use smaller sample, e.g., 200)
  4. Expected: ~98% strict accuracy, ~23 perplexity
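Perplexity here is the exponential of the mean per-token negative log-likelihood, so it can be sanity-checked without loading a model:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# A uniform per-token NLL of ln(23.19) corresponds to the target
# baseline perplexity of 23.19.
nll = math.log(23.19)
assert abs(perplexity([nll] * 100) - 23.19) < 1e-9
```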

### Step 5: Apply K-FAC Editing

  1. Load K-FAC statistics from kfac_statistics.pt
  2. Use src/kfac_editor.py with EditConfig(energy_threshold=0.6, formula="original")
  3. Edit layers 11, 12, 13 with gate_proj and up_proj
  4. Evaluate memorization and perplexity after editing
  5. Expected: ~3% strict accuracy, ~27 perplexity
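Conceptually, the edit eigendecomposes a curvature matrix built from the K-FAC factors and projects the weights onto the eigensubspace covering the energy threshold. The numpy toy below illustrates only that projection mechanics; the actual curvature construction, and which end of the spectrum is kept or discarded, follow `src/kfac_editor.py` and the paper:

```python
import numpy as np

def project_weight(W, C, energy_threshold=0.6):
    """Keep the components of W along curvature eigendirections whose
    cumulative eigenvalue energy reaches `energy_threshold`."""
    eigvals, eigvecs = np.linalg.eigh(C)       # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]          # sort eigenvalues descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    energy = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(energy, energy_threshold)) + 1
    top = eigvecs[:, :k]
    P = top @ top.T                            # projector onto top-k subspace
    return W @ P                               # project each row of W

rng = np.random.default_rng(1)
C = rng.standard_normal((8, 8)); C = C @ C.T   # toy SPD curvature stand-in
W = rng.standard_normal((4, 8))
W_edit = project_weight(W, C)
assert W_edit.shape == W.shape
```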

### Step 6: Test Modified Formula

  1. Restore original weights
  2. Apply editing with formula="modified" (Π = λ·μ·|C|²)
  3. Compare results to original formula
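Restoring the original weights requires a snapshot taken before the Step 5 edit. A minimal pattern, with numpy arrays standing in for the model's state_dict tensors (the key name is hypothetical):

```python
import numpy as np

# Toy stand-in for the model's state_dict
weights = {"layers.11.mlp.gate_proj": np.ones((4, 4))}
snapshot = {k: v.copy() for k, v in weights.items()}  # snapshot BEFORE editing

weights["layers.11.mlp.gate_proj"] *= 0.0             # destructive in-place edit
# ... evaluate the edited model ...
weights = {k: v.copy() for k, v in snapshot.items()}  # restore the originals
assert np.allclose(weights["layers.11.mlp.gate_proj"], 1.0)
```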

## Memory Optimization Tips

- Use `torch.float16` or `torch.bfloat16`
- Batch size 1-2
- Use `torch.no_grad()` for inference
- Use the MPS device: `model.to("mps")`
- For K-FAC collection, process fewer tokens (2M instead of 20M)
- For mining, use a smaller candidate pool (10k instead of 50k)
- For perplexity, use fewer samples (200 instead of 1000)

## Success Criteria

  1. K-FAC statistics collected and saved
  2. Found some memorized sequences (even 50-100 is useful)
  3. Baseline shows high memorization (~95%+)
  4. After K-FAC edit, memorization drops significantly (<10%)
  5. Perplexity increases by less than 30% over baseline

## Output Files

- `kfac_statistics.pt` - K-FAC A and G matrices
- `memorized_sequences.json` - Found memorized sequences
- `experiment_results.json` - Final metrics