Merge branch 'master'
- .gitattributes +6 -0
- 2.8b_l0/checkpoints/ae_390000.pt +3 -0
- 2.8b_l1/checkpoints/ae_390000.pt +3 -0
- 2.8b_l1/checkpoints/ae_740000.pt +3 -0
- 2.8b_l1/checkpoints/ae_750000.pt +3 -0
- 2.8b_l15/checkpoints/ae_490000.pt +3 -0
- 2.8b_l2/checkpoints/ae_390000.pt +3 -0
- README.md +29 -0
- results/2.8b_l0_390k.png +0 -0
- results/2.8b_l15_490k.png +0 -0
- results/2.8b_l1_390k.png +0 -0
- results/2.8b_l1_740k.png +0 -0
- results/2.8b_l2_390k.png +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+2.8b_l2 filter=lfs diff=lfs merge=lfs -text
+2.8b_l0 filter=lfs diff=lfs merge=lfs -text
+2.8b_l1 filter=lfs diff=lfs merge=lfs -text
+2.8b_l15 filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+checkpoints/*.pt filter=lfs diff=lfs merge=lfs -text
2.8b_l0/checkpoints/ae_390000.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c34f8f87a979745fcb6cfa259fe49a17f51c87023e6e1805a69bb31b27b2cd9d
+size 839037024
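Each checkpoint in this commit is stored as a Git LFS pointer rather than the raw weights: three text lines giving the LFS spec version, the SHA-256 object id, and the size in bytes (839037024, about 839 MB per autoencoder). A minimal sketch of reading those fields in Python; the `parse_lfs_pointer` helper is illustrative, not part of this repo:

```python
def parse_lfs_pointer(path: str) -> dict:
    """Read a Git LFS pointer file into its key/value fields.

    Illustrative helper: actually fetching the weights is done by
    `git lfs pull`, which swaps pointers for the real files.
    """
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

info = parse_lfs_pointer("2.8b_l0/checkpoints/ae_390000.pt")
print(info["oid"])                    # sha256:c34f8f87a979...
print(int(info["size"]) / 1e6, "MB")  # ~839 MB
```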
2.8b_l1/checkpoints/ae_390000.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1226f41519f04c462311f16c93476bd5f0c72b32a8eea3555e0df10a84155f64
+size 839037024
2.8b_l1/checkpoints/ae_740000.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52fcd56fdd9587095373f3bd7a5d91eb1358aea921244d39cceb7e810195628c
+size 839037024
2.8b_l1/checkpoints/ae_750000.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2bd5ae298ffdfd942c7961b98ec6cc21087bd06714158d3ca2acb8ae09307b57
+size 839037024
2.8b_l15/checkpoints/ae_490000.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b4c7ba419fee5dbbb5d446df44da5a824575e7e051b8b4203ef1869c4dc7b1c
+size 839037024
2.8b_l2/checkpoints/ae_390000.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:02638ada96593ac34ad77861ca51aea5dfdecdf88e707e444f47fa5c43d99b40
+size 839037024
README.md
CHANGED
@@ -1,3 +1,32 @@
+<<<<<<< HEAD
 ---
 license: apache-2.0
 ---
+=======
+# Trained Sparse Autoencoders on Pythia 2.8B
+I trained SAEs on the MLP_out activations of the Pythia 2.8B model, using https://github.com/magikarp01/facts-sae.git, a fork of https://github.com/saprmarks/dictionary_learning.
+
+## SAE Setup
+- **Training Dataset**: the uncopyrighted Pile, at monology/pile-uncopyrighted
+- **Model**: 32-layer Pythia 2.8B
+- **Activation**: MLP_out
+- **Layers Trained**: 0, 1, 2, 15
+- **Batch Size**: 2048 for layer 15, 2560 for layers 0, 1, 2
+- **Training Tokens**: 1e9 for layers 0, 2, and 15; slightly under 2e9 for layer 1
+- **Training Steps**: 4e5 for layers 0 and 2, 5e5 for layer 15, 7.5e5 for layer 1
+
+## Training Hyperparameters
+- **Learning Rate**: 3e-4
+- **Sparsity Penalty**: 1e-3
+- **Warmup Steps**: 5000
+- **Resample Steps**: 50000
+- **Optimizer**: Constrained Adam
+- **Scheduler**: LambdaLR, with a linear learning-rate warmup from 0 over the first warmup_steps steps
+
+## SAE Metrics
+
+
+
+
+
+>>>>>>> master
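Once the LFS objects are pulled, each `ae_*.pt` above can be inspected with plain PyTorch. A hedged sketch, assuming the file holds a state dict (encoder/decoder weights and biases) as saved by the dictionary_learning trainer; the key names are not verified against this commit:

```python
import torch

# Assumption: the checkpoint is a plain state dict saved by the
# dictionary_learning training code; if it is instead a pickled module,
# the same tensors appear under its .state_dict().
state = torch.load("2.8b_l1/checkpoints/ae_390000.pt", map_location="cpu")
for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```

As a sanity check, the 839037024-byte file size is consistent with a float32 encoder/decoder pair over Pythia 2.8B's d_model = 2560 and a dictionary of roughly 16 * 2560 = 40960 features, though that is an inference from the size, not something the README states.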
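The "Sparsity Penalty" entry in the README is, in the standard dictionary_learning formulation, the L1 coefficient lambda in the autoencoder objective L = ||x - x_hat||^2 + lambda * ||f||_1. A minimal sketch of that forward pass and loss, carrying over the 40960 dictionary size inferred above:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE sketch: d_model activations -> sparse features -> reconstruction."""
    def __init__(self, d_model: int = 2560, d_dict: int = 40960):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model)

    def forward(self, x):
        f = torch.relu(self.encoder(x))  # sparse feature activations
        return self.decoder(f), f

def sae_loss(x, x_hat, f, sparsity_penalty: float = 1e-3):
    # Reconstruction MSE plus the L1 penalty on the features.
    recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
    l1 = f.abs().sum(dim=-1).mean()
    return recon + sparsity_penalty * l1
```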
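On the remaining hyperparameters: as I understand the dictionary_learning codebase, "Constrained Adam" is Adam with the decoder's dictionary vectors held at unit norm (the gradient component that would change a column's norm is projected out, and columns are renormalized after each step), "Resample Steps: 50000" is the interval at which dead dictionary features are reinitialized, and the scheduler line is a plain linear warmup of the learning rate over the first 5000 steps. A sketch under those assumptions, not a verified copy of the repo's implementation:

```python
import torch
import torch.nn as nn

class ConstrainedAdam(torch.optim.Adam):
    """Adam that keeps the columns of the given parameters at unit L2 norm
    (a sketch of dictionary_learning's constrained optimizer)."""
    def __init__(self, params, constrained_params, lr):
        super().__init__(params, lr=lr)
        self.constrained_params = list(constrained_params)

    @torch.no_grad()
    def step(self, closure=None):
        for p in self.constrained_params:
            normed = p / p.norm(dim=0, keepdim=True)
            # Drop the gradient component that would change column norms.
            p.grad -= (p.grad * normed).sum(dim=0, keepdim=True) * normed
        super().step(closure)
        for p in self.constrained_params:
            p /= p.norm(dim=0, keepdim=True)

decoder = nn.Linear(40960, 2560)  # dictionary vectors are the weight columns
opt = ConstrainedAdam(decoder.parameters(), [decoder.weight], lr=3e-4)

# LambdaLR linear warmup: lr scales from 0 up to 3e-4 over the first 5000 steps.
warmup_steps = 5000
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda step: min(step / warmup_steps, 1.0)
)
```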
results/2.8b_l0_390k.png
ADDED
results/2.8b_l15_490k.png
ADDED
results/2.8b_l1_390k.png
ADDED
results/2.8b_l1_740k.png
ADDED
results/2.8b_l2_390k.png
ADDED