
# K-FAC Memorization Suppression - Instructions

## Target: OLMo-2 1B Model

We're using the 1B model for faster iteration and lower compute cost.


## Paper Baseline (1B Model) - Our Target to Reproduce

| Method   | Dolma Strict % | Dolma Loose % | Quotes Strict % | Perplexity |
|----------|----------------|---------------|-----------------|------------|
| Baseline | 98.46          | 99.38         | 98.5            | 23.19      |
| K-FAC    | 2.8            | 7.2           | 27.7            | 26.53      |

Key findings from the paper:

- K-FAC reduces memorization from ~98% to ~3%
- Perplexity increases by ~14% (23.19 → 26.53)
- K-FAC transfers better to unseen memorized content (quotes) than BSN
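To make the table's columns concrete, here is a minimal sketch of the two memorization criteria. These definitions are an assumption about the metrics, not the paper's exact implementation: "strict" flags a sequence only when the model's greedy continuation reproduces the reference suffix token-for-token, while "loose" accepts a high token-level overlap.

```python
# Assumed metric definitions (illustrative, may differ from the paper):
# sequences are token lists; "generated" is the model's greedy continuation
# and "reference" is the true training-data suffix.

def strict_memorized(generated, reference):
    """Exact-match criterion: every continuation token must agree."""
    return generated == reference

def loose_memorized(generated, reference, threshold=0.75):
    """Overlap criterion: fraction of agreeing positions >= threshold."""
    matches = sum(g == r for g, r in zip(generated, reference))
    return matches / max(len(reference), 1) >= threshold

def memorization_rate(pairs, criterion):
    """Percent of (generated, reference) pairs flagged by the criterion."""
    flagged = sum(criterion(g, r) for g, r in pairs)
    return 100.0 * flagged / len(pairs)
```

The `threshold` value is a placeholder; whatever the actual loose criterion is, it sits between exact match and pure perplexity.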

## Model & Hyperparameters

| Setting          | Value                     | Notes                                  |
|------------------|---------------------------|----------------------------------------|
| Model            | `allenai/OLMo-1B-0724-hf` | 16 layers                              |
| Target layers    | 11, 12, 13                | ~72% depth (scaled from 7B's 23-25/32) |
| Projections      | `gate_proj`, `up_proj`    | Same as 7B                             |
| Energy threshold | 60%                       | Same as 7B                             |

⚠️ Layer selection for the 1B model is NOT specified in the paper; we use proportional scaling, which may need tuning.
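One way to read the 60% energy threshold in the table above: keep the smallest set of leading eigendirections of a K-FAC covariance factor whose eigenvalues account for at least 60% of the total spectrum energy. This helper is a hypothetical illustration of that rule, not the notebook's code.

```python
import numpy as np

def num_components_for_energy(eigenvalues, energy=0.60):
    """Count the leading eigendirections needed to reach the energy fraction.

    Hypothetical helper: sorts eigenvalues in descending order and returns
    the first count whose cumulative share of the total reaches `energy`.
    """
    vals = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    cumulative = np.cumsum(vals) / vals.sum()
    # First index where cumulative energy crosses the threshold (1-based count).
    return int(np.searchsorted(cumulative, energy) + 1)
```

For a spectrum of `[4, 3, 2, 1]` the cumulative shares are 40%, 70%, 90%, 100%, so a 60% threshold keeps two directions.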


## What You Need

- Google Colab with a GPU (a T4 is sufficient for 1B; an A100 is faster)
- Google Drive to store intermediate files

## Step 1: Local - Verify Tests

```bash
python3 tests/test_local.py
```

## Step 2: Colab - Collect K-FAC Stats (~30-60 min on T4)

1. Open `notebooks/01_collect_kfac.ipynb`
2. Change the model to `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Output: `kfac_statistics.pt`
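As background for what the collection notebook accumulates, here is a minimal sketch of K-FAC statistics gathering on a single linear layer. The shapes and hook wiring are assumptions about the notebook's approach: for each targeted projection, K-FAC tracks the input-activation covariance `A = E[a aᵀ]` via a forward hook and the output-gradient covariance `G = E[g gᵀ]` via a backward hook.

```python
import torch
import torch.nn as nn

def attach_kfac_hooks(layer, stats):
    """Accumulate K-FAC covariance factors for one Linear layer (sketch)."""
    def fwd_hook(module, inputs, output):
        a = inputs[0].reshape(-1, inputs[0].shape[-1])  # (tokens, in_dim)
        stats["A"] += a.t() @ a                          # input covariance
        stats["n"] += a.shape[0]                         # token count
    def bwd_hook(module, grad_input, grad_output):
        g = grad_output[0].reshape(-1, grad_output[0].shape[-1])  # (tokens, out_dim)
        stats["G"] += g.t() @ g                          # output-grad covariance
    layer.register_forward_hook(fwd_hook)
    layer.register_full_backward_hook(bwd_hook)

# Toy usage: a small Linear standing in for gate_proj/up_proj.
layer = nn.Linear(8, 4)
stats = {"A": torch.zeros(8, 8), "G": torch.zeros(4, 4), "n": 0}
attach_kfac_hooks(layer, stats)
x = torch.randn(3, 5, 8)            # (batch, seq, hidden)
layer(x).sum().backward()           # one forward + backward pass
```

In the real notebook the backward pass runs on the language-modeling loss over the ~20M-token corpus, and the accumulated factors are what get saved to `kfac_statistics.pt`.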

## Step 3: Colab - Mine Memorized Sequences (~30 min)

1. Open `notebooks/02_mine_memorized.ipynb`
2. Change the model to `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Output: `memorized_val.json` (target: ~125 sequences, as in the paper)
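The mining loop can be sketched as follows. The function names, split lengths, and `generate_fn` interface are illustrative assumptions, not the notebook's API: each candidate is split into a prompt prefix and a reference suffix, and only candidates whose suffix the model reproduces exactly under greedy decoding are kept.

```python
def mine_memorized(candidates, generate_fn, prefix_len=32, suffix_len=48):
    """Keep candidates whose suffix the model greedily reproduces (sketch).

    candidates: iterable of token lists.
    generate_fn(prompt_tokens, n): hypothetical greedy-decode callable
        returning the next n tokens.
    """
    memorized = []
    for tokens in candidates:
        prompt = tokens[:prefix_len]
        reference = tokens[prefix_len:prefix_len + suffix_len]
        if generate_fn(prompt, len(reference)) == reference:
            memorized.append(tokens)
    return memorized
```

With a real model, `generate_fn` would wrap `model.generate` with `do_sample=False`; the split lengths here are placeholders.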

## Step 4: Colab - Run Experiments (~30-60 min)

1. Open `notebooks/03_experiments.ipynb`
2. Change the model to `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Compare the results to the paper baseline above
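For the comparison step, the perplexity numbers in the baseline table imply a regression of about 14% for K-FAC. A tiny helper (illustrative, not part of the notebooks) makes that check explicit for a reproduction run:

```python
def ppl_increase_pct(baseline_ppl, edited_ppl):
    """Relative perplexity increase from baseline to edited model, in percent."""
    return 100.0 * (edited_ppl - baseline_ppl) / baseline_ppl

# Paper's 1B numbers: 23.19 -> 26.53 is a ~14.4% increase.
```

A run whose perplexity increase is far above this while memorization stays high suggests the layer selection (see the warning above) needs tuning.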

## Expected Compute Time (1B model)

| Task                          | T4 GPU   | A100 GPU |
|-------------------------------|----------|----------|
| K-FAC collection (20M tokens) | ~60 min  | ~20 min  |
| Mining memorized sequences    | ~30 min  | ~10 min  |
| Experiments                   | ~30 min  | ~15 min  |
| Total                         | ~2 hours | ~45 min  |

## Troubleshooting

- **OOM:** reduce `batch_size` (4 → 2 → 1)
- **Wrong layers:** if layers 11-13 don't work well, try 10-12 or 12-14
- **Low memorization rate when mining:** normal - the paper found 650 sequences from a much larger candidate pool
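The batch-size fallback above can be automated with a small retry wrapper. This is a hypothetical helper, not code from the notebooks; in a real Colab run you would catch `torch.cuda.OutOfMemoryError` rather than the built-in `MemoryError` used here for illustration.

```python
def run_with_oom_fallback(step_fn, batch_size=4):
    """Retry step_fn with a halved batch size on out-of-memory (sketch).

    step_fn(batch_size): hypothetical callable running one unit of work.
    Halves the batch size on each MemoryError, down to batch_size=1.
    """
    while batch_size >= 1:
        try:
            return step_fn(batch_size), batch_size
        except MemoryError:  # real runs: except torch.cuda.OutOfMemoryError
            batch_size //= 2
    raise RuntimeError("Out of memory even at batch_size=1")
```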