# K-FAC Memorization Suppression - Instructions
## Target: OLMo-2 1B Model
We're using the **1B model** for faster iteration and lower compute.
---
## Paper Baseline (1B Model) - Our Target to Reproduce
| Method | Dolma Strict % | Dolma Loose % | Quotes Strict % | Perplexity |
|--------|----------------|---------------|-----------------|------------|
| **Baseline** | 98.46 | 99.38 | 98.5 | 23.19 |
| **K-FAC** | 2.8 | 7.2 | 27.7 | 26.53 |
**Key findings from the paper:**
- K-FAC reduces memorization from 98% → 3%
- Perplexity increases ~14% (23.19 → 26.53)
- K-FAC transfers better than BSN to unseen memorized content (the quotes set)
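How the strict vs. loose percentages are scored is not spelled out here. A minimal sketch under one common reading (strict = exact-match greedy continuation, loose = token overlap above a threshold); the 32/48 split and the 0.75 threshold are our placeholders, not the paper's values:

```python
import torch

def memorization_rates(model, sequences, prefix_len=32, suffix_len=48, loose_thresh=0.75):
    """Score strict (exact-match) and loose (token-overlap) memorization.

    `sequences` is a list of token-id lists of length >= prefix_len + suffix_len.
    Split sizes and the loose threshold are assumptions, not the paper's values.
    """
    strict = loose = 0
    model.eval()
    for ids in sequences:
        prefix = torch.tensor([ids[:prefix_len]], device=model.device)
        target = ids[prefix_len:prefix_len + suffix_len]
        with torch.no_grad():
            out = model.generate(prefix, max_new_tokens=suffix_len, do_sample=False)
        gen = out[0, prefix_len:].tolist()
        matches = sum(g == t for g, t in zip(gen, target))
        strict += int(matches == suffix_len)
        loose += int(matches / suffix_len >= loose_thresh)
    n = len(sequences)
    return 100 * strict / n, 100 * loose / n
```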
---
## Model & Hyperparameters
| Setting | Value | Notes |
|---------|-------|-------|
| Model | `allenai/OLMo-1B-0724-hf` | 16 layers |
| Target layers | **11, 12, 13** | ~72% depth (scaled from 7B's 23-25/32) |
| Projections | gate_proj, up_proj | Same as 7B |
| Energy threshold | 60% | Same as 7B |
**⚠️ Layer selection for the 1B model is NOT specified in the paper.** We use proportional scaling from the 7B recipe; the window may need tuning.
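As a quick sanity check of that proportional scaling (our arithmetic, not the paper's):

```python
# The 7B recipe targets layers 23-25 of 32, a 3-layer window centred on layer 24.
depth_7b, centre_7b = 32, 24
depth_1b = 16

centre_1b = round(centre_7b * depth_1b / depth_7b)  # 24 * 16/32 = 12
target_layers = [centre_1b - 1, centre_1b, centre_1b + 1]
print(target_layers)  # [11, 12, 13]
```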
---
## What You Need
- **Google Colab** with GPU (T4 sufficient for 1B, A100 faster)
- **Google Drive** to store intermediate files
---
## Step 1: Local - Verify Tests
```bash
python3 tests/test_local.py
```
---
## Step 2: Colab - Collect K-FAC Stats (~30-60 min on T4)
1. Open `notebooks/01_collect_kfac.ipynb`
2. **Change model to:** `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Output: `kfac_statistics.pt` (see the sketch below for what this step computes)
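The notebook is the source of truth; below is a minimal sketch of what K-FAC statistics collection typically looks like, accumulating the input covariance A and grad-output covariance G for each targeted projection via hooks. Module paths assume the Hugging Face `OlmoForCausalLM` layout, and the training loop itself is elided:

```python
import torch
from transformers import AutoModelForCausalLM

MODEL = "allenai/OLMo-1B-0724-hf"
TARGET_LAYERS, PROJS = (11, 12, 13), ("gate_proj", "up_proj")

model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float32)
stats = {}  # name -> {"A": input covariance, "G": grad-output covariance, "n": token count}

def attach(name, module):
    stats[name] = {"A": None, "G": None, "n": 0}

    def fwd_hook(mod, inputs, output):
        # Accumulate A = sum of a a^T over the layer's input activations.
        a = inputs[0].detach().reshape(-1, inputs[0].shape[-1]).float()
        s = stats[name]
        s["A"] = a.T @ a if s["A"] is None else s["A"] + a.T @ a
        s["n"] += a.shape[0]

    def bwd_hook(mod, grad_input, grad_output):
        # Accumulate G = sum of g g^T over gradients w.r.t. the layer's output.
        g = grad_output[0].detach().reshape(-1, grad_output[0].shape[-1]).float()
        s = stats[name]
        s["G"] = g.T @ g if s["G"] is None else s["G"] + g.T @ g

    module.register_forward_hook(fwd_hook)
    module.register_full_backward_hook(bwd_hook)

for i in TARGET_LAYERS:
    for p in PROJS:
        attach(f"layers.{i}.mlp.{p}", getattr(model.model.layers[i].mlp, p))

# ...run forward + backward passes over pretraining-distribution batches here...
# Then normalise by token count and save:
#   for s in stats.values(): s["A"] /= s["n"]; s["G"] /= s["n"]
#   torch.save(stats, "kfac_statistics.pt")
```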
---
## Step 3: Colab - Mine Memorized Sequences (~30 min)
1. Open `notebooks/02_mine_memorized.ipynb`
2. **Change model to:** `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Output: `memorized_val.json` (target: ~125 sequences, as in the paper; see the sketch below)
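Again as a sketch only: a plausible shape of the mining loop, keeping candidates whose suffix the model reproduces exactly under greedy decoding. The 32/48 prefix/suffix split is our assumption:

```python
import json
import torch

def mine_memorized(model, tokenizer, candidates, prefix_len=32, suffix_len=48):
    """Keep candidates whose suffix the model reproduces exactly under greedy decoding.

    `candidates` is an iterable of raw text strings drawn from the pretraining
    data; the 32/48 split is an assumption, not a value from the paper.
    """
    hits = []
    model.eval()
    for text in candidates:
        ids = tokenizer(text, return_tensors="pt").input_ids[0]
        if ids.shape[0] < prefix_len + suffix_len:
            continue
        prefix = ids[:prefix_len].unsqueeze(0).to(model.device)
        target = ids[prefix_len:prefix_len + suffix_len]
        with torch.no_grad():
            out = model.generate(prefix, max_new_tokens=suffix_len, do_sample=False)
        if torch.equal(out[0, prefix_len:].cpu(), target):
            hits.append({"text": text, "ids": ids.tolist()})
    return hits

# hits = mine_memorized(model, tokenizer, candidates)
# with open("memorized_val.json", "w") as f:
#     json.dump(hits, f)
```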
---
## Step 4: Colab - Run Experiments (~30-60 min)
1. Open `notebooks/03_experiments.ipynb`
2. **Change model to:** `allenai/OLMo-1B-0724-hf`
3. Run all cells
4. Compare results to the paper baseline above (see the edit sketch below)
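What "running K-FAC" means mechanically is defined by the notebook. One plausible reading, sketched below, edits each targeted weight by projecting off the top eigendirections of the K-FAC input-covariance factor A until the 60% energy threshold is covered. This is illustrative, not the paper's reference implementation:

```python
import torch

def kfac_edit(weight, A, energy=0.60):
    """Project a weight matrix off the top input-covariance eigendirections.

    One plausible reading of the method, not the paper's reference code: take
    the eigenbasis of the K-FAC factor A, select the top directions holding
    `energy` of the spectral mass, and remove the weight's component along them.
    """
    evals, evecs = torch.linalg.eigh(A.float())   # ascending eigenvalues
    evals, evecs = evals.flip(0), evecs.flip(1)   # reorder to descending
    frac = torch.cumsum(evals, 0) / evals.sum()   # cumulative spectral energy
    k = int(torch.searchsorted(frac, torch.tensor(energy))) + 1
    U = evecs[:, :k]                              # top-k eigendirections
    return weight - (weight @ U) @ U.T            # remove that subspace

# For each targeted projection (weight shape: out_features x in_features):
#   W = module.weight.data
#   module.weight.data = kfac_edit(W, stats[name]["A"]).to(W.dtype)
```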
---
## Expected Compute Time (1B model)
| Task | T4 GPU | A100 GPU |
|------|--------|----------|
| K-FAC collection (20M tokens) | ~60 min | ~20 min |
| Mining memorized sequences | ~30 min | ~10 min |
| Experiments | ~30 min | ~15 min |
| **Total** | **~2 hours** | **~45 min** |
---
## Troubleshooting
- **OOM:** Reduce `batch_size` (4 → 2 → 1)
- **Wrong layers:** If layers 11-13 underperform, try 10-12 or 12-14 (see the sweep sketch below)
- **Low memorization rate when mining:** Normal; the paper found 650 sequences from a much larger candidate pool
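A sweep over adjacent layer windows is cheap at 1B scale. A hypothetical loop, where `run_experiment` stands in for whatever `notebooks/03_experiments.ipynb` wraps its evaluation in:

```python
# Hypothetical sweep over adjacent layer windows. `run_experiment` is a
# placeholder name, not a real API in this repo.
for window in [(10, 11, 12), (11, 12, 13), (12, 13, 14)]:
    results = run_experiment(target_layers=window)  # placeholder
    print(window, results)
```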