
# K-FAC Memorization Suppression

Reproduction of "From Memorization to Reasoning in the Spectrum of Loss Curvature" with extended experiments on modified importance formulas.

## Overview

This project implements K-FAC (Kronecker-Factored Approximate Curvature) based weight editing to suppress verbatim memorization in language models while preserving general capabilities.

**Key insight:** The Fisher Information Matrix, approximated by K-FAC, reveals directions in weight space associated with memorization (low curvature) vs. generalization (high curvature). Removing low-curvature components suppresses memorization.

## Project Goal

1. Reproduce the paper's K-FAC method on OLMo-2 1B
2. Compare the original importance formula with a modified version:
   - Original: $\Pi_{ij} = \lambda_i \cdot \mu_j$
   - Modified: $\Pi_{ij} = \lambda_i \cdot \mu_j \cdot |C_{ij}|^2$

## Installation

```bash
pip install -r requirements.txt
```

## Project Structure

```
├── src/
│   ├── kfac_collector.py    # Collect K-FAC statistics (A, G matrices)
│   ├── kfac_editor.py       # Weight editing via eigendecomposition
│   ├── evaluate.py          # Memorization and perplexity metrics
│   └── mine_memorized.py    # Mine memorized sequences from training data
├── notebooks/
│   ├── 01_collect_kfac.ipynb      # Colab: K-FAC collection (~2h on A100)
│   ├── 02_mine_memorized.ipynb    # Colab: Find memorized sequences (~1h)
│   └── 03_experiments.ipynb       # Colab: Run experiments (~2h)
├── plans/
│   └── implementation_plan.md     # Detailed implementation plan
├── context/
│   ├── original_paper/            # Paper sections in markdown
│   └── REPRODUCTION_PLAN.md       # Initial reproduction plan
└── requirements.txt
```

## Quick Start

### Local Development

```python
from src.kfac_collector import KFACCollector, KFACConfig
from src.kfac_editor import KFACEditor, EditConfig
from src.evaluate import memorization_score, perplexity

# Load model
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")

# Load pre-collected K-FAC stats
collector = KFACCollector.load("kfac_statistics.pt", model)
kfac_stats = collector.get_statistics()

# Apply K-FAC editing
config = EditConfig(energy_threshold=0.6, formula="original")
editor = KFACEditor(model, kfac_stats, config)
editor.edit_model()

# Evaluate (prefixes/suffixes come from the memorized-sequence mining step)
result = memorization_score(model, tokenizer, prefixes, suffixes)
print(f"Strict accuracy: {result.strict_accuracy*100:.1f}%")
```

### Running on Colab

1. `01_collect_kfac.ipynb` - Collect K-FAC statistics (~20M tokens, ~2h on A100)
2. `02_mine_memorized.ipynb` - Find memorized sequences from training data
3. `03_experiments.ipynb` - Run experiments and compare the two importance formulas

## Method

### K-FAC Statistics Collection

For each target MLP layer, we collect:

- **A**: activation covariance matrix (input side)
- **G**: gradient covariance matrix (output side)

These approximate the Fisher Information Matrix: $F_W \approx G \otimes A$
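As a minimal numpy sketch of the statistics being accumulated (the function name `kfac_factors` and array names are illustrative, not the project's actual API): given a batch of layer inputs and output-side backpropagated gradients, the two Kronecker factors are their uncentered covariances.

```python
import numpy as np

def kfac_factors(acts: np.ndarray, grads: np.ndarray):
    """Estimate the K-FAC Kronecker factors for one linear layer.

    acts:  (n_tokens, d_in)  layer inputs
    grads: (n_tokens, d_out) gradients of the loss w.r.t. layer outputs
    """
    n = acts.shape[0]
    A = acts.T @ acts / n    # (d_in, d_in)  activation covariance
    G = grads.T @ grads / n  # (d_out, d_out) gradient covariance
    return A, G

# Toy example with random "activations" and "gradients"
rng = np.random.default_rng(0)
A, G = kfac_factors(rng.normal(size=(1024, 8)), rng.normal(size=(1024, 4)))
print(A.shape, G.shape)  # (8, 8) (4, 4)
```

In practice these are accumulated over many batches (the ~20M-token pass in the notebooks); the toy arrays here only show the shapes involved.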

### Weight Editing

1. Eigendecompose the $A$ and $G$ matrices
2. Transform the weights into the curvature basis: $C = U_G^T W U_A$
3. Compute a per-component importance score using either formula
4. Select the top components by cumulative energy (e.g., 60%) to form a binary mask $M$
5. Reconstruct the edited weights: $W_{\text{edited}} = U_G (C \odot M) U_A^T$
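The five steps above can be sketched in numpy. This is a toy illustration, not the project's `KFACEditor`; the assignment of $\lambda$ to the $G$ side and $\mu$ to the $A$ side, and the cumulative-energy cutoff over the importance scores, are assumptions for the sketch.

```python
import numpy as np

def kfac_edit(W, A, G, energy_threshold=0.6):
    """Suppress low-curvature components of W (toy sketch of steps 1-5)."""
    mu, U_A = np.linalg.eigh(A)                # A = U_A diag(mu) U_A^T
    lam, U_G = np.linalg.eigh(G)               # G = U_G diag(lam) U_G^T
    mu, lam = np.clip(mu, 0, None), np.clip(lam, 0, None)  # guard float noise
    C = U_G.T @ W @ U_A                        # weights in the curvature basis
    Pi = np.outer(lam, mu)                     # importance: Pi_ij = lam_i * mu_j
    # Keep the highest-importance components until the threshold
    # fraction of total importance mass is covered.
    order = np.argsort(Pi, axis=None)[::-1]
    cum = np.cumsum(Pi.ravel()[order])
    cum /= cum[-1]
    k = np.searchsorted(cum, energy_threshold) + 1
    M = np.zeros_like(Pi)
    M.ravel()[order[:k]] = 1.0                 # binary mask over components
    return U_G @ (C * M) @ U_A.T               # reconstruct edited weights

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
X, Y = rng.normal(size=(100, 6)), rng.normal(size=(100, 4))
A, G = X.T @ X / 100, Y.T @ Y / 100
W_edited = kfac_edit(W, A, G, energy_threshold=0.6)
print(W_edited.shape)  # (4, 6)
```

With `energy_threshold=1.0` the mask keeps everything and the reconstruction returns $W$ unchanged, which is a useful sanity check on the basis transforms.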

### Importance Formulas

| Formula | Definition | Intuition |
|---------|------------|-----------|
| Original | $\Pi_{ij} = \lambda_i \cdot \mu_j$ | Pure curvature |
| Modified | $\Pi_{ij} = \lambda_i \cdot \mu_j \cdot \lvert C_{ij}\rvert^2$ | Curvature weighted by squared coefficient magnitude |
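A minimal sketch of how the two scores can disagree (toy values for `lam`, `mu`, and `C` as defined in the editing steps; chosen for illustration): the modified score demotes a component whose coefficient in the curvature basis is small, even when its curvature is the highest.

```python
import numpy as np

lam = np.array([3.0, 1.0])           # output-side eigenvalues (lambda_i)
mu = np.array([2.0, 0.5])            # input-side eigenvalues (mu_j)
C = np.array([[0.1, 2.0],            # weights in the curvature basis
              [1.5, 0.2]])

pi_original = np.outer(lam, mu)               # lam_i * mu_j
pi_modified = pi_original * np.abs(C) ** 2    # lam_i * mu_j * |C_ij|^2

# Component (0, 0) has the highest curvature and tops the original
# ranking, but its small coefficient demotes it under the modified score.
best_orig = np.unravel_index(pi_original.argmax(), pi_original.shape)
best_mod = np.unravel_index(pi_modified.argmax(), pi_modified.shape)
print(tuple(map(int, best_orig)), tuple(map(int, best_mod)))  # (0, 0) (0, 1)
```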

## Hyperparameters

| Parameter | 7B Model | 1B Model (estimated) |
|-----------|----------|----------------------|
| Target layers | 23, 24, 25 | 11, 12, 13 |
| Projections | `gate_proj`, `up_proj` | `gate_proj`, `up_proj` |
| Energy threshold | 60% | 60% |
| K-FAC tokens | ~20M | ~20M |

## Expected Results

Based on the paper (OLMo-2 1B):

| Metric | Baseline | After K-FAC |
|--------|----------|-------------|
| Dolma strict accuracy | ~98% | ~3% |
| Perplexity (Pile-10k) | ~23 | ~27 |

## References

- "From Memorization to Reasoning in the Spectrum of Loss Curvature" (the paper reproduced here)

## License

MIT