rufimelo committed on
Commit e215949 · verified · 1 Parent(s): 7bb4499

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ library_name: sae_lens
+ tags:
+ - sparse-autoencoder
+ - mechanistic-interpretability
+ - sae
+ ---
+
+ # Sparse Autoencoders for Qwen/Qwen2.5-7B-Instruct
+
+ This repository contains three Sparse Autoencoders (SAEs) trained using [SAELens](https://github.com/jbloomAus/SAELens).
+
+ ## Model Details
+
+ | Property | Value |
+ |----------|-------|
+ | **Base Model** | `Qwen/Qwen2.5-7B-Instruct` |
+ | **Architecture** | `standard` |
+ | **Input Dimension** | 3584 |
+ | **SAE Dimension** | 16384 |
+ | **Training Dataset** | `TQRG/DeltaSecommits_qwen-2.5-7b-instruct_tokenized` |
+
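The `standard` architecture in the table is the original ReLU SAE: features are `ReLU((x - b_dec) @ W_enc + b_enc)` and the reconstruction is `f @ W_dec + b_dec`. A minimal NumPy sketch with the dimensions from the table above — the tensor names mirror the usual SAELens conventions, but the random weights here are illustrative only:

```python
import numpy as np

d_in, d_sae = 3584, 16384  # Input Dimension and SAE Dimension from the table

rng = np.random.default_rng(0)
W_enc = rng.normal(0, 0.01, (d_in, d_sae)).astype(np.float32)
W_dec = rng.normal(0, 0.01, (d_sae, d_in)).astype(np.float32)
b_enc = np.zeros(d_sae, dtype=np.float32)
b_dec = np.zeros(d_in, dtype=np.float32)

def encode(x):
    # cfg.json sets apply_b_dec_to_input=true: subtract the decoder bias first
    return np.maximum((x - b_dec) @ W_enc + b_enc, 0.0)

def decode(f):
    return f @ W_dec + b_dec

x = rng.normal(size=(5, d_in)).astype(np.float32)  # 5 residual-stream vectors
f = encode(x)
x_hat = decode(f)
print(f.shape, x_hat.shape)  # (5, 16384) (5, 3584)
```

The expansion factor is modest here: 16384 / 3584 ≈ 4.6 latents per input dimension.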
+ ## Available Hook Points
+
+ | Hook Point |
+ |------------|
+ | `blocks.0.hook_resid_post` |
+ | `blocks.14.hook_resid_post` |
+ | `blocks.27.hook_resid_post` |
+
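Qwen2.5-7B-Instruct has 28 transformer blocks, so these hooks sample the first, middle, and last residual streams. The IDs follow the TransformerLens naming pattern and can be generated rather than typed out:

```python
layers = [0, 14, 27]  # first, middle, and last block of the 28-layer model
sae_ids = [f"blocks.{i}.hook_resid_post" for i in layers]
print(sae_ids)
```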
+ ## Usage
+
+ ```python
+ from sae_lens import SAE
+
+ # Load an SAE for a specific hook point
+ sae, cfg_dict, sparsity = SAE.from_pretrained(
+     release="rufimelo/secure_code_qwen_coder_strd_16384",
+     sae_id="blocks.0.hook_resid_post",  # Choose from available hook points above
+ )
+
+ # Use with TransformerLens
+ from transformer_lens import HookedTransformer
+
+ model = HookedTransformer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
+
+ # Get activations and encode
+ _, cache = model.run_with_cache("your text here")
+ activations = cache["blocks.0.hook_resid_post"]
+ features = sae.encode(activations)
+ ```
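Once you have the `features` tensor, a common next step is to inspect the strongest-firing latents at a position of interest. A sketch using random data as a stand-in for real activations (swap in the output of `sae.encode(...)` from the snippet above):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((8, 16384))  # stand-in for encoded activations: (seq, d_sae)

last = features[-1]                     # latent activations at the last token
top_idx = np.argsort(last)[-5:][::-1]   # indices of the 5 strongest latents
for i in top_idx:
    print(f"feature {i}: activation {last[i]:.3f}")
```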
+
+ ## Files
+
+ - `blocks.0.hook_resid_post/cfg.json` - SAE configuration
+ - `blocks.0.hook_resid_post/sae_weights.safetensors` - Model weights
+ - `blocks.0.hook_resid_post/sparsity.safetensors` - Feature sparsity statistics
+ - `blocks.14.hook_resid_post/cfg.json` - SAE configuration
+ - `blocks.14.hook_resid_post/sae_weights.safetensors` - Model weights
+ - `blocks.14.hook_resid_post/sparsity.safetensors` - Feature sparsity statistics
+ - `blocks.27.hook_resid_post/cfg.json` - SAE configuration
+ - `blocks.27.hook_resid_post/sae_weights.safetensors` - Model weights
+ - `blocks.27.hook_resid_post/sparsity.safetensors` - Feature sparsity statistics
+
+ ## Training
+
+ These SAEs were trained with SAELens version 6.26.2.
blocks.0.hook_resid_post/cfg.json ADDED
@@ -0,0 +1 @@
+ {"reshape_activations": "none", "normalize_activations": "layer_norm", "metadata": {"sae_lens_version": "6.26.2", "sae_lens_training_version": "6.26.2", "dataset_path": "TQRG/DeltaSecommits_qwen-2.5-7b-instruct_tokenized", "hook_name": "blocks.0.hook_resid_post", "model_name": "Qwen/Qwen2.5-7B-Instruct", "model_class_name": "HookedTransformer", "hook_head_index": null, "context_size": 128, "seqpos_slice": [null, null], "model_from_pretrained_kwargs": {}, "prepend_bos": true, "exclude_special_tokens": false, "sequence_separator_token": "bos", "disable_concat_sequences": false}, "apply_b_dec_to_input": true, "device": "cuda", "d_sae": 16384, "dtype": "float32", "d_in": 3584, "architecture": "standard"}
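The config fully determines the tensor sizes in `sae_weights.safetensors`. A quick sanity check, assuming the standard parameterization of `W_enc`, `W_dec`, `b_enc`, and `b_dec` in float32 (the safetensors file itself adds a small JSON header on top of the raw tensor bytes, which accounts for the remaining 320 bytes of its 469842240-byte size):

```python
import json

# trimmed copy of the fields from cfg.json above
cfg = json.loads('{"d_in": 3584, "d_sae": 16384, "dtype": "float32", "architecture": "standard"}')

d_in, d_sae, bytes_per = cfg["d_in"], cfg["d_sae"], 4  # 4 bytes per float32

# standard SAE tensors: W_enc (d_in, d_sae), W_dec (d_sae, d_in), b_enc, b_dec
raw_bytes = 2 * d_in * d_sae * bytes_per + (d_in + d_sae) * bytes_per
print(raw_bytes)  # 469841920
```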
blocks.0.hook_resid_post/sae_weights.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c0d67a15a45b4d5fb88c3c2a5ce3afa76bb12503901d6862731eb230490399c
+ size 469842240
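The three-line blocks like the one above are Git LFS pointer files, not the weights themselves: each line is a `key value` pair, and the actual blob is fetched by its SHA-256 object ID. A small parser sketch using the pointer shown above:

```python
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4c0d67a15a45b4d5fb88c3c2a5ce3afa76bb12503901d6862731eb230490399c
size 469842240"""

# each pointer line is "key value"; split on the first space only
meta = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = meta["oid"].split(":", 1)
print(algo, int(meta["size"]))  # sha256 469842240
```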
blocks.0.hook_resid_post/sparsity.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa67e76b38c5ab4ee48265c427e1d67f679356713b1aab54bdabb2c980721e55
+ size 65616
blocks.14.hook_resid_post/cfg.json ADDED
@@ -0,0 +1 @@
+ {"d_in": 3584, "d_sae": 16384, "device": "cuda", "reshape_activations": "none", "apply_b_dec_to_input": true, "dtype": "float32", "metadata": {"sae_lens_version": "6.26.2", "sae_lens_training_version": "6.26.2", "dataset_path": "TQRG/DeltaSecommits_qwen-2.5-7b-instruct_tokenized", "hook_name": "blocks.14.hook_resid_post", "model_name": "Qwen/Qwen2.5-7B-Instruct", "model_class_name": "HookedTransformer", "hook_head_index": null, "context_size": 128, "seqpos_slice": [null, null], "model_from_pretrained_kwargs": {}, "prepend_bos": true, "exclude_special_tokens": false, "sequence_separator_token": "bos", "disable_concat_sequences": false}, "normalize_activations": "layer_norm", "architecture": "standard"}
blocks.14.hook_resid_post/sae_weights.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e21f3f704c516988b4c6c5b2e98cb2a35688a3cb56a180a91504faa377ecca2
+ size 469842240
blocks.14.hook_resid_post/sparsity.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0ef9e54f4cdec3839a1a080cf3485d1a55ec5e25dfe5fb710149c5d69c6b2f04
+ size 65616
blocks.27.hook_resid_post/cfg.json ADDED
@@ -0,0 +1 @@
+ {"normalize_activations": "layer_norm", "reshape_activations": "none", "device": "cuda", "dtype": "float32", "apply_b_dec_to_input": true, "d_in": 3584, "metadata": {"sae_lens_version": "6.26.2", "sae_lens_training_version": "6.26.2", "dataset_path": "TQRG/DeltaSecommits_qwen-2.5-7b-instruct_tokenized", "hook_name": "blocks.27.hook_resid_post", "model_name": "Qwen/Qwen2.5-7B-Instruct", "model_class_name": "HookedTransformer", "hook_head_index": null, "context_size": 128, "seqpos_slice": [null, null], "model_from_pretrained_kwargs": {}, "prepend_bos": true, "exclude_special_tokens": false, "sequence_separator_token": "bos", "disable_concat_sequences": false}, "d_sae": 16384, "architecture": "standard"}
blocks.27.hook_resid_post/sae_weights.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:41f5620d68f3ed26184dc51e28a3656babf54fb9b31cd0238f55affe714836da
+ size 469842240
blocks.27.hook_resid_post/sparsity.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c331d299a0beb5635fa87c075114fddc163c6a27bf25c8f9f8a4ada039832ec
+ size 65616