# K-FAC Memorization Suppression - Local MacBook Execution

## Task

Run the K-FAC memorization suppression reproduction **locally on a MacBook M2 (16 GB RAM)** using the OLMo 1B model (`allenai/OLMo-1B-0724-hf`).

---

## Background

This repo reproduces the paper "From Memorization to Reasoning in the Spectrum of Loss Curvature". The code is already implemented in `src/`. We need to run the full pipeline locally instead of using Colab.

**Paper's 1B model results (our target):**

| Method | Dolma Strict % | Perplexity |
|--------|----------------|------------|
| Baseline | 98.46% | 23.19 |
| K-FAC | 2.8% | 26.53 |

---

## Key Files

- `src/kfac_collector.py` - Collects K-FAC statistics (A, G covariance matrices)
- `src/kfac_editor.py` - Applies weight editing using eigendecomposition
- `src/evaluate.py` - Memorization metrics (strict/loose accuracy, Levenshtein, perplexity)
- `src/mine_memorized.py` - Mines memorized sequences from training data
- `plans/implementation_plan.md` - Detailed implementation plan

---

## Configuration for 1B Model

| Setting | Value |
|---------|-------|
| Model | `allenai/OLMo-1B-0724-hf` |
| Target layers | 11, 12, 13 (proportionally scaled from 7B) |
| Projections | gate_proj, up_proj |
| Energy threshold | 60% |
| K-FAC tokens | 2-5M (reduced for local) |
| Prefix length | 64 tokens |
| Suffix length | 48 tokens |
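
For reference, here are the same settings as a plain Python dict; the key names are illustrative, not necessarily the schema `src/` expects:

```python
# Illustrative run configuration mirroring the table above.
CONFIG = {
    "model_name": "allenai/OLMo-1B-0724-hf",
    "target_layers": [11, 12, 13],            # scaled from the 7B layer choice
    "projections": ["gate_proj", "up_proj"],
    "energy_threshold": 0.6,                  # keep 60% of curvature energy
    "kfac_tokens": 2_000_000,                 # 2-5M; start at the low end locally
    "prefix_len": 64,                         # prompt tokens
    "suffix_len": 48,                         # continuation tokens to check
}
```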

---

## Steps to Execute

### Step 1: Install dependencies

```bash
pip install torch transformers datasets accelerate tqdm python-Levenshtein
```

### Step 2: Collect K-FAC Statistics

Create and run a script that:
1. Loads `allenai/OLMo-1B-0724-hf` with `torch.float16` or `torch.bfloat16`
2. Uses `src/kfac_collector.py` to collect the A and G matrices
3. Streams ~2-5M tokens from `allenai/dolma` (name=`v1_6-sample`)
4. Targets layers 11, 12, 13 and the gate_proj and up_proj projections
5. Saves the statistics to `kfac_statistics.pt`

**Use MPS (Metal) for acceleration:** `device = "mps" if torch.backends.mps.is_available() else "cpu"`

**Reduce the batch size to 1-2 and enable gradient checkpointing if you run out of memory.**
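
The exact `src/kfac_collector.py` interface isn't reproduced here, so the sketch below shows the mechanics such a script implements: forward hooks accumulate the input covariance A, and full backward hooks accumulate the gradient covariance G, for each target projection. The module paths and the Dolma loading call are assumptions (Llama-style HF layout; `trust_remote_code=True` may be needed), so adjust them to match the repo:

```python
# Minimal sketch of K-FAC statistic collection via hooks; the real
# src/kfac_collector.py may differ in details.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "mps" if torch.backends.mps.is_available() else "cpu"
name = "allenai/OLMo-1B-0724-hf"
tok = AutoTokenizer.from_pretrained(name)
# float16 keeps the 1B model within 16 GB; backward still runs for hook capture.
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16).to(device)

# Target projections (assumes the Llama-style module layout of the HF OLMo port).
targets = {
    f"model.layers.{i}.mlp.{p}": model.get_submodule(f"model.layers.{i}.mlp.{p}")
    for i in (11, 12, 13) for p in ("gate_proj", "up_proj")
}
stats = {n: {"A": None, "G": None, "tokens": 0} for n in targets}

def fwd_hook(n):
    def hook(mod, inp, out):
        x = inp[0].detach().float().flatten(0, -2)      # (tokens, d_in)
        s = stats[n]
        s["A"] = x.T @ x if s["A"] is None else s["A"] + x.T @ x
        s["tokens"] += x.shape[0]
    return hook

def bwd_hook(n):
    def hook(mod, gin, gout):
        g = gout[0].detach().float().flatten(0, -2)     # (tokens, d_out)
        s = stats[n]
        s["G"] = g.T @ g if s["G"] is None else s["G"] + g.T @ g
    return hook

for n, mod in targets.items():
    mod.register_forward_hook(fwd_hook(n))
    mod.register_full_backward_hook(bwd_hook(n))

# Stream Dolma; this call may need trust_remote_code=True or an older
# `datasets` version, depending on your environment.
ds = load_dataset("allenai/dolma", name="v1_6-sample", split="train", streaming=True)
seen = 0
for ex in ds:
    batch = tok(ex["text"], return_tensors="pt", truncation=True, max_length=512).to(device)
    model(**batch, labels=batch["input_ids"]).loss.backward()  # fills G via hooks
    model.zero_grad(set_to_none=True)
    seen += batch["input_ids"].numel()
    if seen >= 2_000_000:
        break

for s in stats.values():                                # normalize per token
    s["A"] /= s["tokens"]
    s["G"] /= s["tokens"]
torch.save(stats, "kfac_statistics.pt")
```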

### Step 3: Mine Memorized Sequences

1. Load the 1B model
2. Use `src/mine_memorized.py` to sample ~10k candidates from `allenai/olmo-mix-1124`
3. Filter for strict memorization matches (see the sketch after this list)
4. Target: find ~100-200 memorized sequences (the paper found 650 for the 1B model)
5. Save them to `memorized_sequences.json`
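
The strict criterion is worth pinning down: a sequence counts as memorized only if greedy decoding from the 64-token prefix reproduces the 48-token suffix exactly. A minimal sketch (the check in `src/mine_memorized.py` may differ in details):

```python
# Strict memorization check: greedy-decode the suffix from the prefix and
# require an exact token match. Lengths follow the configuration table.
import torch

@torch.no_grad()
def is_strictly_memorized(model, input_ids, prefix_len=64, suffix_len=48):
    prefix = input_ids[:, :prefix_len]
    target = input_ids[:, prefix_len:prefix_len + suffix_len]
    out = model.generate(prefix, max_new_tokens=suffix_len, do_sample=False)
    return torch.equal(out[:, prefix_len:prefix_len + suffix_len], target)
```

Loop this over the ~10k tokenized candidates and stop once 100-200 hits are collected.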

### Step 4: Evaluate Baseline

1. Use the `memorization_score()` function from `src/evaluate.py`
2. Measure strict and loose accuracy on the mined sequences
3. Measure perplexity on `NeelNanda/pile-10k` (use a smaller sample, e.g., 200 documents; see the sketch below)
4. **Expected:** ~98% strict accuracy, ~23 perplexity
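
For the perplexity number, here is a minimal sketch of the standard token-level computation, assuming `src/evaluate.py` does the equivalent; the 200-document sample matches the memory tips below:

```python
# Token-level perplexity over a small pile-10k sample.
import math
import torch
from datasets import load_dataset

@torch.no_grad()
def perplexity(model, tok, device, n_samples=200, max_length=512):
    ds = load_dataset("NeelNanda/pile-10k", split="train").select(range(n_samples))
    total_nll, total_tokens = 0.0, 0
    for ex in ds:
        enc = tok(ex["text"], return_tensors="pt",
                  truncation=True, max_length=max_length).to(device)
        loss = model(**enc, labels=enc["input_ids"]).loss   # mean NLL per token
        n = enc["input_ids"].shape[1] - 1                   # tokens that get a loss
        total_nll += loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)
```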

### Step 5: Apply K-FAC Editing

1. Load the K-FAC statistics from `kfac_statistics.pt`
2. Use `src/kfac_editor.py` with `EditConfig(energy_threshold=0.6, formula="original")` (mechanics sketched below)
3. Edit layers 11, 12, 13 (gate_proj and up_proj)
4. Re-evaluate memorization and perplexity after editing
5. **Expected:** ~3% strict accuracy, ~27 perplexity
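
Mechanically, the edit expresses each weight matrix in the eigenbases of its K-FAC factors, ranks the rank-one components by curvature, and zeroes out part of the spectrum before rotating back. The sketch below shows those mechanics only; which components are removed and how `energy_threshold` is applied must follow `src/kfac_editor.py` and the paper, so treat it as illustrative:

```python
# Sketch of a K-FAC basis edit: express W in the eigenbases of G (output
# side) and A (input side), mask components by curvature, rotate back.
import torch

def kfac_edit(W, A, G, energy_threshold=0.6):
    mu, V = torch.linalg.eigh(A.float())          # input-side eigenpairs
    lam, U = torch.linalg.eigh(G.float())         # output-side eigenpairs
    C = U.T @ W.float() @ V                       # W in the K-FAC eigenbasis
    curv = torch.outer(lam, mu)                   # curvature per component
    flat = curv.flatten()
    order = flat.argsort(descending=True)
    cum = flat[order].cumsum(0) / flat.sum()
    keep = torch.zeros_like(flat, dtype=torch.bool)
    keep[order[cum <= energy_threshold]] = True   # retain top-curvature mass
    return U @ (C * keep.view_as(curv)) @ V.T     # zero the rest, rotate back
```

Apply it in place per target module, e.g. `proj.weight.data = kfac_edit(proj.weight.data, s["A"], s["G"]).to(proj.weight.dtype)`, after keeping a copy of the original weights for Step 6.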

### Step 6: Test Modified Formula

1. Restore the original weights
2. Apply editing with `formula="modified"` (Π = λ·μ·|C|²; see the snippet below)
3. Compare the results against the original formula
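
Relative to the Step 5 sketch, the modified formula changes only the ranking score; assuming λ and μ denote the G- and A-eigenvalues and C the weight matrix rotated into their eigenbasis, the score becomes:

```python
# Modified ranking score Π = λ·μ·|C|²: curvature weighted by the squared
# coefficient of W in the K-FAC eigenbasis (names from the Step 5 sketch).
score = torch.outer(lam, mu) * C.pow(2)
```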

---

## Memory Optimization Tips

- Use `torch.float16` or `torch.bfloat16`
- Keep the batch size at 1-2
- Use `torch.no_grad()` for inference
- Use the MPS device: `model.to("mps")`
- For K-FAC collection, process fewer tokens (2M instead of 20M)
- For mining, use a smaller candidate pool (10k instead of 50k)
- For perplexity, use fewer samples (200 instead of 1000)
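
Most of these tips come together in how the model is loaded; a minimal example:

```python
# Load the model once with the memory-saving options applied.
import torch
from transformers import AutoModelForCausalLM

device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B-0724-hf",
    torch_dtype=torch.bfloat16,   # ~2 GB of weights instead of ~4 GB in float32
).to(device)
model.eval()
```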

---

## Success Criteria

1. K-FAC statistics collected and saved
2. Some memorized sequences found (even 50-100 is useful)
3. Baseline shows high memorization (~95%+ strict accuracy)
4. After the K-FAC edit, memorization drops significantly (<10%)
5. Perplexity increases by less than 30%

---

## Output Files

- `kfac_statistics.pt` - K-FAC A and G matrices
- `memorized_sequences.json` - Found memorized sequences
- `experiment_results.json` - Final metrics