# K-FAC Memorization Suppression
Reproduction of ["From Memorization to Reasoning in the Spectrum of Loss Curvature"](https://github.com/goodfire-ai/memorization_kfac) with extended experiments on modified importance formulas.
## Overview
This project implements K-FAC (Kronecker-Factored Approximate Curvature) based weight editing to suppress verbatim memorization in language models while preserving general capabilities.
**Key insight:** The Fisher Information Matrix, approximated by K-FAC, reveals directions in weight space associated with memorization (low curvature) vs. generalization (high curvature). By removing low-curvature components, we can suppress memorization.
## Project Goal
1. **Reproduce** the paper's K-FAC method on OLMo-2 1B
2. **Compare** the original importance formula with a modified version:
- **Original:** $\Pi_{ij} = \lambda_i \cdot \mu_j$
- **Modified:** $\Pi_{ij} = \lambda_i \cdot \mu_j \cdot |C_{ij}|^2$
## Installation
```bash
pip install -r requirements.txt
```
## Project Structure
```
├── src/
│   ├── kfac_collector.py         # Collect K-FAC statistics (A, G matrices)
│   ├── kfac_editor.py            # Weight editing via eigendecomposition
│   ├── evaluate.py               # Memorization and perplexity metrics
│   └── mine_memorized.py         # Mine memorized sequences from training data
├── notebooks/
│   ├── 01_collect_kfac.ipynb     # Colab: K-FAC collection (~2h on A100)
│   ├── 02_mine_memorized.ipynb   # Colab: find memorized sequences (~1h)
│   └── 03_experiments.ipynb      # Colab: run experiments (~2h)
├── plans/
│   └── implementation_plan.md    # Detailed implementation plan
├── context/
│   ├── original_paper/           # Paper sections in markdown
│   └── REPRODUCTION_PLAN.md      # Initial reproduction plan
└── requirements.txt
```
## Quick Start
### Local Development
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from src.kfac_collector import KFACCollector, KFACConfig
from src.kfac_editor import KFACEditor, EditConfig
from src.evaluate import memorization_score, perplexity

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")

# Load pre-collected K-FAC statistics
collector = KFACCollector.load("kfac_statistics.pt", model)
kfac_stats = collector.get_statistics()

# Apply K-FAC editing (keep top 60% of importance energy, original formula)
config = EditConfig(energy_threshold=0.6, formula="original")
editor = KFACEditor(model, kfac_stats, config)
editor.edit_model()

# Evaluate on memorized (prefix, suffix) pairs mined by src/mine_memorized.py
result = memorization_score(model, tokenizer, prefixes, suffixes)
print(f"Strict accuracy: {result.strict_accuracy * 100:.1f}%")
```
### Running on Colab
1. **01_collect_kfac.ipynb** - Collect K-FAC statistics (~20M tokens, ~2h on A100)
2. **02_mine_memorized.ipynb** - Find memorized sequences from training data
3. **03_experiments.ipynb** - Run experiments and compare formulas
## Method
### K-FAC Statistics Collection
For each target MLP layer, we collect:
- **A**: Activation covariance matrix (input side)
- **G**: Gradient covariance matrix (output side)
These approximate the Fisher Information Matrix: $F_W \approx G \otimes A$
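The factor shapes and the Kronecker structure can be illustrated with a minimal NumPy sketch. The per-token activations and gradients here are random stand-ins chosen for illustration; this is not the repo's `KFACCollector`, which hooks into the model's MLP layers.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_tokens = 8, 6, 1000

# Toy stand-ins: per-token layer inputs and output-side gradients.
acts = rng.normal(size=(n_tokens, d_in))
grads = rng.normal(size=(n_tokens, d_out))

# K-FAC factors: uncentered covariances averaged over tokens.
A = acts.T @ acts / n_tokens    # (d_in, d_in) activation covariance
G = grads.T @ grads / n_tokens  # (d_out, d_out) gradient covariance

# The layer's Fisher block is approximated by the Kronecker product G ⊗ A,
# which is why it never has to be formed explicitly at full size.
F_approx = np.kron(G, A)        # (d_out * d_in, d_out * d_in)
```

Storing only `A` and `G` keeps the memory cost at `d_in² + d_out²` per layer instead of `(d_in · d_out)²` for the full Fisher block.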
### Weight Editing
1. **Eigendecompose** A and G matrices
2. **Transform** weights to curvature basis: $C = U_G^T W U_A$
3. **Compute importance** using either formula
4. **Select** top components by cumulative energy (e.g., 60%)
5. **Reconstruct** edited weights: $W_{\text{edited}} = U_G (C \odot M) U_A^T$, where $M$ is the binary keep-mask from step 4
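The five steps above can be sketched end-to-end in NumPy on a toy weight matrix. The random SPD matrices standing in for the K-FAC factors, and the dimensions, are invented for illustration; the repo's `KFACEditor` applies the same pipeline to the collected statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 8, 6
W = rng.normal(size=(d_out, d_in))

# Toy SPD stand-ins for the K-FAC factors A (input side) and G (output side).
X = rng.normal(size=(d_in, d_in)); A = X @ X.T / d_in
Y = rng.normal(size=(d_out, d_out)); G = Y @ Y.T / d_out

# 1. Eigendecompose A and G.
mu, U_A = np.linalg.eigh(A)    # input-side eigenvalues μ_j, eigenvectors U_A
lam, U_G = np.linalg.eigh(G)   # output-side eigenvalues λ_i, eigenvectors U_G

# 2. Transform the weights into the curvature basis.
C = U_G.T @ W @ U_A

# 3. Importance of each component (original formula: Π_ij = λ_i · μ_j).
Pi = np.outer(lam, mu)

# 4. Keep the top components until 60% of total importance is covered.
order = np.argsort(Pi, axis=None)[::-1]
cum = np.cumsum(Pi.flatten()[order])
k = int(np.searchsorted(cum, 0.6 * cum[-1])) + 1
M = np.zeros_like(Pi)
M.flat[order[:k]] = 1.0

# 5. Reconstruct the edited weights from the masked curvature-basis matrix.
W_edited = U_G @ (C * M) @ U_A.T
```

Because the mask zeroes the low-importance (low-curvature) components, `W_edited` differs from `W` exactly in the directions the method associates with memorization.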
### Importance Formulas
| Formula | Definition | Intuition |
|---------|------------|-----------|
| Original | $\Pi_{ij} = \lambda_i \cdot \mu_j$ | Pure curvature |
| Modified | $\Pi_{ij} = \lambda_i \cdot \mu_j \cdot \lvert C_{ij} \rvert^2$ | Curvature weighted by actual weight magnitude |
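The difference between the two formulas is a single elementwise factor, sketched below with invented eigenvalues and a random curvature-basis matrix `C` (not the repo's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.sort(rng.uniform(0.1, 2.0, size=6))[::-1]  # output-side eigenvalues λ_i
mu = np.sort(rng.uniform(0.1, 2.0, size=8))[::-1]   # input-side eigenvalues μ_j
C = rng.normal(size=(6, 8))                         # weights in curvature basis

Pi_original = np.outer(lam, mu)                  # Π_ij = λ_i · μ_j
Pi_modified = Pi_original * np.abs(C) ** 2       # Π_ij = λ_i · μ_j · |C_ij|²

# The two formulas generally rank components differently: the modified one
# down-weights high-curvature directions where the actual weight is small.
rank_orig = np.argsort(Pi_original, axis=None)[::-1]
rank_mod = np.argsort(Pi_modified, axis=None)[::-1]
```

Since the energy-threshold selection keys off these rankings, a different ranking directly changes which components survive the edit.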
## Hyperparameters
| Parameter | 7B Model | 1B Model (estimated) |
|-----------|----------|---------------------|
| Target layers | 23, 24, 25 | 11, 12, 13 |
| Projections | gate_proj, up_proj | gate_proj, up_proj |
| Energy threshold | 60% | 60% |
| K-FAC tokens | ~20M | ~20M |
## Expected Results
Based on the paper (OLMo-2 1B):
| Metric | Baseline | After K-FAC |
|--------|----------|-------------|
| Dolma strict accuracy | ~98% | ~3% |
| Perplexity (Pile-10k) | ~23 | ~27 |
## References
- Paper: [From Memorization to Reasoning in the Spectrum of Loss Curvature](https://github.com/goodfire-ai/memorization_kfac)
- Original code: https://github.com/goodfire-ai/memorization_kfac
- Model: [OLMo-2](https://huggingface.co/allenai/OLMo-2-1124-7B)
- K-FAC: [Martens & Grosse, 2015](https://arxiv.org/abs/1503.05671)
## License
MIT