[**Use with SAELens** (In progress)]

[**Explore in Neuronpedia**](https://www.neuronpedia.org/llama-scope)

Sparse Autoencoders (SAEs) have emerged as a powerful unsupervised method for extracting sparse representations from language models, yet scalable training remains a significant challenge. We introduce a suite of 256 improved TopK SAEs, trained on each layer and sublayer of the Llama-3.1-8B-Base model, with 32K and 128K features.
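To illustrate the TopK mechanism described above, here is a minimal NumPy sketch of a TopK SAE forward pass: encode the residual-stream activation, keep only the k largest pre-activations per example, zero the rest, and decode. This is an illustrative sketch, not the released training or inference code; the weight and function names are hypothetical.

```python
import numpy as np

def topk_sae_forward(x, W_enc, b_enc, W_dec, b_dec, k):
    """Hypothetical TopK SAE forward pass (illustrative sketch).

    x:     (batch, d_model) input activations
    W_enc: (d_model, n_features), W_dec: (n_features, d_model)
    k:     number of features kept active per example
    """
    pre = x @ W_enc + b_enc                  # (batch, n_features)
    # Zero out everything except the k largest pre-activations per row.
    drop_idx = np.argsort(pre, axis=-1)[:, :-k]
    acts = pre.copy()
    np.put_along_axis(acts, drop_idx, 0.0, axis=-1)
    acts = np.maximum(acts, 0.0)             # ReLU on the survivors
    recon = acts @ W_dec + b_dec             # (batch, d_model)
    return acts, recon
```

By construction, at most k features fire per example, so sparsity is enforced directly rather than through an L1 penalty.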