---
license: cc-by-4.0
library_name: saelens
---

# Gemma Scope 2

This is the landing page for **Gemma Scope 2**, a comprehensive, open suite of sparse autoencoders for the Gemma 3 model family. Sparse autoencoders are a "microscope" of sorts that can help us break down a model's internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.

**There are no model weights in this repo. If you are looking for them, please visit one of our repos:**

- https://huggingface.co/google/gemma-scope-2-270m-pt
- https://huggingface.co/google/gemma-scope-2-270m-it
- https://huggingface.co/google/gemma-scope-2-1b-pt
- https://huggingface.co/google/gemma-scope-2-1b-it
- https://huggingface.co/google/gemma-scope-2-4b-pt
- https://huggingface.co/google/gemma-scope-2-4b-it
- https://huggingface.co/google/gemma-scope-2-12b-pt
- https://huggingface.co/google/gemma-scope-2-12b-it
- https://huggingface.co/google/gemma-scope-2-27b-pt
- https://huggingface.co/google/gemma-scope-2-27b-it

# Key links

- Check out the [interactive Gemma Scope 2 demo](https://www.neuronpedia.org/gemma-scope-2) made by [Neuronpedia](https://www.neuronpedia.org/).
- (NEW!) We have a Colab notebook tutorial for JumpReLU SAE training in JAX and PyTorch [here](https://colab.research.google.com/drive/1PlFzI_PWGTN9yCQLuBcSuPJUjgHL7GiD).
- Learn more about Gemma Scope 2 in our [Google DeepMind blog post](https://deepmind.google/blog/gemma-scope-2-helping-the-ai-safety-community-deepen-understanding-of-complex-language-model-behavior).
- Check out our [Google Colab notebook tutorial](https://colab.research.google.com/drive/1NhWjg7n0nhfW--CjtsOdw5A5J_-Bzn4r) on how to use Gemma Scope 2.
- Read [the Gemma Scope 2 technical report](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/gemma-scope-2-helping-the-ai-safety-community-deepen-understanding-of-complex-language-model-behavior/Gemma_Scope_2_Technical_Paper.pdf).
- Check out [Mishax](https://github.com/google-deepmind/mishax), a GDM-internal tool we used in this project to expose the internal activations inside Gemma 3 models.

The full list of SAEs we trained, and the sites and layers they attach to, can be found in our technical report.
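To make the "microscope" idea concrete, here is a minimal NumPy sketch of the JumpReLU SAE architecture referenced above: an encoder projects a model activation into a much wider feature space, a per-feature learned threshold zeroes out weak pre-activations (the JumpReLU nonlinearity), and a decoder reconstructs the activation from the surviving sparse features. All parameter names, dimensions, and initializations here are illustrative assumptions for the sketch; the real, trained weights live in the model repos listed above.

```python
import numpy as np

def jumprelu(z, theta):
    # JumpReLU: keep a pre-activation only where it exceeds the learned
    # per-feature threshold theta; set it to zero everywhere else.
    return z * (z > theta)

class JumpReLUSAE:
    # Minimal illustrative JumpReLU sparse autoencoder. Randomly
    # initialized here purely for demonstration; real Gemma Scope 2
    # SAEs ship trained weights.
    def __init__(self, d_model, d_sae, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
        self.b_enc = np.zeros(d_sae)
        self.W_dec = rng.normal(size=(d_sae, d_model)) * 0.1
        self.b_dec = np.zeros(d_model)
        self.theta = np.full(d_sae, 0.05)  # learned thresholds in practice

    def encode(self, x):
        # Project into the wide feature space, then sparsify.
        return jumprelu(x @ self.W_enc + self.b_enc, self.theta)

    def decode(self, f):
        # Reconstruct the model activation from sparse features.
        return f @ self.W_dec + self.b_dec

sae = JumpReLUSAE(d_model=8, d_sae=32)
x = np.random.default_rng(1).normal(size=(4, 8))   # stand-in activations
feats = sae.encode(x)    # sparse feature activations, shape (4, 32)
recon = sae.decode(feats)  # reconstructed activations, shape (4, 8)
```

The sparse `feats` vector is what gets interpreted: each of its dimensions is a candidate "concept," and most entries are zero for any given input, which is what makes the decomposition legible.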