Paper: *Optimizing Neural Networks with Kronecker-factored Approximate Curvature* (arXiv:1503.05671)
```text
# K-FAC Memorization Suppression - Dependencies

# Core ML
torch>=2.0
transformers>=4.35
datasets>=2.14
accelerate>=0.24

# Numerical
numpy>=1.24
scipy>=1.11

# Utilities
tqdm>=4.65
python-Levenshtein>=0.21  # Fast Levenshtein distance

# Development
jupyter>=1.0
ipywidgets>=8.0

# Optional: for loading OLMo models
# ai2-olmo>=0.3.0
```
Reproduction of "From Memorization to Reasoning in the Spectrum of Loss Curvature" with extended experiments on modified importance formulas.
This project implements K-FAC (Kronecker-Factored Approximate Curvature) based weight editing to suppress verbatim memorization in language models while preserving general capabilities.
Key insight: The Fisher Information Matrix, approximated by K-FAC, reveals directions in weight space associated with memorization (low curvature) vs. generalization (high curvature). By removing low-curvature components, we can suppress memorization.
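The curvature claim can be made concrete with a toy numpy sketch (illustrative only, not part of this repo): under a local quadratic approximation of the loss, a weight perturbation along a low-curvature eigendirection of the Hessian changes the loss far less than the same-size perturbation along a high-curvature direction.

```python
import numpy as np

# Toy quadratic loss L(w) ~ 0.5 * w^T H w with a curvature (Hessian) spectrum
# that mixes sharp (high-curvature) and flat (low-curvature) directions.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random orthonormal eigenbasis
curvatures = np.array([10.0, 5.0, 0.1, 0.01])      # two sharp, two flat directions
H = Q @ np.diag(curvatures) @ Q.T

def loss_increase(direction, eps=0.5):
    """Second-order loss change from a step of size eps along `direction`."""
    d = direction / np.linalg.norm(direction)
    return 0.5 * eps**2 * d @ H @ d

sharp = loss_increase(Q[:, 0])   # high-curvature eigendirection
flat = loss_increase(Q[:, 3])    # low-curvature eigendirection
print(f"sharp: {sharp:.4f}, flat: {flat:.6f}")
# The flat direction moves the loss ~1000x less; removing weight components
# that live in flat directions should barely affect general performance.
```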
```bash
pip install -r requirements.txt
```
```text
├── src/
│   ├── kfac_collector.py        # Collect K-FAC statistics (A, G matrices)
│   ├── kfac_editor.py           # Weight editing via eigendecomposition
│   ├── evaluate.py              # Memorization and perplexity metrics
│   └── mine_memorized.py        # Mine memorized sequences from training data
├── notebooks/
│   ├── 01_collect_kfac.ipynb    # Colab: K-FAC collection (~2h on A100)
│   ├── 02_mine_memorized.ipynb  # Colab: Find memorized sequences (~1h)
│   └── 03_experiments.ipynb     # Colab: Run experiments (~2h)
├── plans/
│   └── implementation_plan.md   # Detailed implementation plan
├── context/
│   ├── original_paper/          # Paper sections in markdown
│   └── REPRODUCTION_PLAN.md     # Initial reproduction plan
└── requirements.txt
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from src.kfac_collector import KFACCollector, KFACConfig
from src.kfac_editor import KFACEditor, EditConfig
from src.evaluate import memorization_score, perplexity

# Load model
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-2-1124-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-2-1124-7B")

# Load pre-collected K-FAC stats
collector = KFACCollector.load("kfac_statistics.pt", model)
kfac_stats = collector.get_statistics()

# Apply K-FAC editing
config = EditConfig(energy_threshold=0.6, formula="original")
editor = KFACEditor(model, kfac_stats, config)
editor.edit_model()

# Evaluate
result = memorization_score(model, tokenizer, prefixes, suffixes)
print(f"Strict accuracy: {result.strict_accuracy*100:.1f}%")
```
For each target MLP layer, we collect two covariance matrices:

- $A = \mathbb{E}[a a^\top]$ — covariance of the layer's input activations
- $G = \mathbb{E}[g g^\top]$ — covariance of the gradients w.r.t. the layer's outputs

These approximate the Fisher Information Matrix: $F_W \approx G \otimes A$
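The per-component importance scores below rest on a standard fact about Kronecker products: the eigenvalues of $G \otimes A$ are exactly the pairwise products $\lambda_i \cdot \mu_j$ of the factors' eigenvalues. A quick numpy check with toy matrices standing in for the K-FAC factors:

```python
import numpy as np

# Small random symmetric PSD matrices as stand-ins for the K-FAC factors.
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 5)); A = X @ X.T / 5   # toy activation covariance
Y = rng.standard_normal((2, 5)); G = Y @ Y.T / 5   # toy gradient covariance

lam = np.linalg.eigvalsh(G)                        # lambda_i
mu = np.linalg.eigvalsh(A)                         # mu_j
products = np.sort(np.outer(lam, mu).ravel())      # all pairwise products

# Eigenvalues of the Kronecker product match the products exactly.
kron_eigs = np.sort(np.linalg.eigvalsh(np.kron(G, A)))
assert np.allclose(kron_eigs, products)
# So the curvature attached to eigen-component (i, j) of a layer's weights
# is lambda_i * mu_j -- the basis of the importance formulas below.
```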
| Formula | Definition | Intuition |
|---|---|---|
| Original | $\Pi_{ij} = \lambda_i \cdot \mu_j$ | Pure curvature |
| Modified | $\Pi_{ij} = \lambda_i \cdot \mu_j \cdot \lvert C_{ij} \rvert$ | Curvature weighted by the coefficient's magnitude |
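A minimal numpy sketch of the editing step (illustrative only, not the repo's `kfac_editor.py`; the exact energy bookkeeping there may differ): express $W$ in the eigenbasis of the two factors, score each coefficient $C_{ij}$ with $\Pi_{ij}$, zero the lowest-scoring coefficients until the retained importance meets the energy threshold, and transform back.

```python
import numpy as np

def kfac_edit(W, A, G, energy_threshold=0.6, formula="original"):
    """Zero low-importance components of W in the K-FAC eigenbasis (sketch)."""
    mu, UA = np.linalg.eigh(A)         # eigenpairs of activation covariance
    lam, UG = np.linalg.eigh(G)        # eigenpairs of gradient covariance
    C = UG.T @ W @ UA                  # W in the Kronecker eigenbasis
    imp = np.outer(lam, mu)            # Pi_ij = lambda_i * mu_j (original)
    if formula == "modified":
        imp = imp * np.abs(C)          # Pi_ij = lambda_i * mu_j * |C_ij|
    # Keep the highest-importance coefficients until they account for
    # `energy_threshold` of the total importance; zero the rest.
    order = np.argsort(imp, axis=None)[::-1]
    cum = np.cumsum(np.sort(imp, axis=None)[::-1])
    keep_n = np.searchsorted(cum, energy_threshold * cum[-1]) + 1
    mask = np.zeros(C.size, dtype=bool)
    mask[order[:keep_n]] = True
    C_edited = np.where(mask.reshape(C.shape), C, 0.0)
    return UG @ C_edited @ UA.T        # back to weight space
```

With `energy_threshold=1.0` every component is kept and `W` is recovered exactly, which is a useful sanity check for the basis round-trip.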
| Parameter | 7B Model | 1B Model (estimated) |
|---|---|---|
| Target layers | 23, 24, 25 | 11, 12, 13 |
| Projections | gate_proj, up_proj | gate_proj, up_proj |
| Energy threshold | 60% | 60% |
| K-FAC tokens | ~20M | ~20M |
Expected results, based on the paper (OLMo-2 1B):
| Metric | Baseline | After K-FAC |
|---|---|---|
| Dolma strict accuracy | ~98% | ~3% |
| Perplexity (Pile-10k) | ~23 | ~27 |
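For reference, "strict accuracy" counts a sequence as memorized only if the greedy continuation of its prefix reproduces the suffix verbatim; `python-Levenshtein` (in the requirements) supports a softer edit-distance score. A stdlib-only sketch of both metrics over already-decoded strings (hypothetical helper names, with a small DP standing in for the library's `distance`):

```python
def levenshtein(a, b):
    """Classic DP edit distance (stdlib stand-in for python-Levenshtein)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def strict_accuracy(generated, references):
    """Fraction of suffixes reproduced verbatim -- the strict metric."""
    return sum(g == r for g, r in zip(generated, references)) / len(references)

def mean_similarity(generated, references):
    """Softer signal: 1 - normalized edit distance (1.0 = fully verbatim)."""
    sims = [1 - levenshtein(g, r) / max(len(g), len(r), 1)
            for g, r in zip(generated, references)]
    return sum(sims) / len(sims)

gen = ["the quick brown fox", "jumps over the lazy cat"]
ref = ["the quick brown fox", "jumps over the lazy dog"]
print(strict_accuracy(gen, ref))               # 0.5
print(round(mean_similarity(gen, ref), 3))     # 0.935
```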
License: MIT