Yurim0507 committed on
Commit
4ed6a08
·
verified ·
1 Parent(s): f01a3fc

Upload folder using huggingface_hub

Browse files
.gitkeep ADDED
File without changes
HF_README.md ADDED
@@ -0,0 +1,242 @@
---
license: mit
tags:
- machine-unlearning
- sparse-autoencoder
- vision-transformer
- interpretability
- restoration
---

# Suppression or Deletion: Pretrained Models

This repository contains the pretrained models and Sparse Autoencoder (SAE) assets used to test SAE-based restoration of machine-unlearned models.

**Main GitHub Repository**: [suppression-or-deletion](https://github.com/Yurim0507/suppression-or-deletion)

## Overview

These assets let researchers test whether unlearned models can be restored at inference time using Sparse Autoencoder (SAE) features. The repository includes:

- **Original ViT-Base/16 models** trained on CIFAR-10 and Imagenette
- **SAE models** trained on layers 8, 9, and 10 with TopK sparsity
- **Activation statistics** for normalization
- **Expert features** identified for each class
## Repository Contents

```
pretrained/
├── cifar10/
│   ├── vit_base_16_original.pth        # Original ViT-Base model (~687 MB)
│   ├── sae_layer8_k16.pt               # SAE for layer 8, k=16 (~5 MB)
│   ├── sae_layer9_k16.pt               # SAE for layer 9, k=16 (~5 MB)
│   ├── sae_layer10_k16.pt              # SAE for layer 10, k=16 (~5 MB)
│   ├── activations_layer8_stats.npy    # Normalization stats for layer 8 (<1 MB)
│   ├── activations_layer9_stats.npy    # Normalization stats for layer 9 (<1 MB)
│   ├── activations_layer10_stats.npy   # Normalization stats for layer 10 (<1 MB)
│   ├── expert_features_layer8_k16.pt   # Expert features for layer 8 (<1 MB)
│   ├── expert_features_layer9_k16.pt   # Expert features for layer 9 (<1 MB)
│   └── expert_features_layer10_k16.pt  # Expert features for layer 10 (<1 MB)
└── imagenette/
    ├── vit_base_16_original.pth        # Original ViT-Base model (~687 MB)
    ├── sae_layer8_k32.pt               # SAE for layer 8, k=32 (~5 MB)
    ├── sae_layer9_k32.pt               # SAE for layer 9, k=32 (~5 MB)
    ├── sae_layer10_k32.pt              # SAE for layer 10, k=32 (~5 MB)
    ├── activations_layer8_stats.npy    # Normalization stats for layer 8 (<1 MB)
    ├── activations_layer9_stats.npy    # Normalization stats for layer 9 (<1 MB)
    ├── activations_layer10_stats.npy   # Normalization stats for layer 10 (<1 MB)
    ├── expert_features_layer8_k32.pt   # Expert features for layer 8 (<1 MB)
    ├── expert_features_layer9_k32.pt   # Expert features for layer 9 (<1 MB)
    └── expert_features_layer10_k32.pt  # Expert features for layer 10 (<1 MB)
```

**Total size**: ~1.4 GB (per the LFS file sizes in this commit)

## Dataset-Specific Configurations

| Dataset | Classes | TopK (k) | Expert Features per Class |
|---------|---------|----------|---------------------------|
| **CIFAR-10** | 10 | 16 | 20 (k×5/4) |
| **Imagenette** | 10 | 32 | 40 (k×5/4) |
## Quick Start

### Download All Files

```bash
# Using the Hugging Face CLI (recommended)
pip install huggingface_hub
huggingface-cli download Yurim0507/suppression-or-deletion --local-dir ./pretrained --repo-type=model
```

```python
# Using Python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Yurim0507/suppression-or-deletion",
    local_dir="./pretrained",
    repo_type="model",
)
```

### Download a Specific Dataset

```bash
# CIFAR-10 only
huggingface-cli download Yurim0507/suppression-or-deletion --include "cifar10/*" --local-dir ./pretrained --repo-type=model

# Imagenette only
huggingface-cli download Yurim0507/suppression-or-deletion --include "imagenette/*" --local-dir ./pretrained --repo-type=model
```
## Usage with Main Repository

1. **Clone the main repository**:
   ```bash
   git clone https://github.com/Yurim0507/suppression-or-deletion.git
   cd suppression-or-deletion
   ```

2. **Download the pretrained assets** (using the commands above)

3. **Prepare your unlearned model**:
   - Train an unlearned model using any unlearning method (CF-k, SALUN, SCRUB, etc.)
   - Save the checkpoint in `.pth` format

4. **Run the restoration test**:
   ```bash
   # Test restoration on CIFAR-10 class 0 (airplane)
   python recovery_test.py \
       --dataset cifar10 \
       --unlearned_model path/to/your/unlearned_model.pth \
       --target_class 0 \
       --layer 9 \
       --alpha 1.0 2.0 5.0 10.0
   ```

5. **Run the simple demo script**:
   ```bash
   python demo.py \
       --dataset cifar10 \
       --unlearned_model path/to/your/unlearned_model.pth \
       --target_class 0
   ```
## File Formats

### Original Model (`vit_base_16_original.pth`)

PyTorch checkpoint containing a ViT-Base/16 model trained on CIFAR-10 or Imagenette:

```python
{
    'model_state_dict': <OrderedDict>,  # Model weights
    'epoch': <int>,                     # Training epoch
    # ... other training metadata
}
```

### SAE Model (`sae_layer{8,9,10}_k{16,32}.pt`)

Sparse Autoencoder checkpoint:

```python
{
    'model_state_dict': <OrderedDict>,  # SAE weights
    'model_config': {
        'input_dim': 768,      # ViT hidden dimension
        'hidden_dim': 3072,    # SAE latent dimension (768×4)
        'k': 16,               # TopK sparsity (16 for CIFAR-10, 32 for Imagenette)
        'activation': 'topk'   # Activation type
    },
    'pre_bias': <Tensor>,      # Pre-bias parameter
}
```

### Activation Statistics (`activations_layer{8,9,10}_stats.npy`)

Normalization statistics:

```python
{
    'patch_mean': <ndarray>,  # Mean of patch-token activations
    'patch_std': <ndarray>,   # Std of patch-token activations
}
```
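As a sketch, the stats might be applied like this (a hypothetical helper, not the repository's code; it assumes the `.npy` file holds a pickled dict loadable via `np.load(..., allow_pickle=True).item()`):

```python
import numpy as np

def normalize_patches(activations, stats):
    """Standardize patch-token activations with the stored per-dimension stats.

    activations: array of shape (num_tokens, 768);
    stats: dict with 'patch_mean' and 'patch_std', each of shape (768,).
    """
    return (activations - stats["patch_mean"]) / stats["patch_std"]

# Illustrative stats standing in for the real .npy file:
stats = {"patch_mean": np.zeros(768), "patch_std": np.full(768, 2.0)}
x = np.ones((4, 768))          # 4 fake patch tokens
z = normalize_patches(x, stats)
print(z[0, 0])                 # 0.5
```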
### Expert Features (`expert_features_layer{8,9,10}_k{16,32}.pt`)

Class-specific expert features:

```python
{
    'class_experts_details': {
        0: [feature_id_1, feature_id_2, ...],  # Expert features for class 0
        1: [...],                              # Expert features for class 1
        ...
        9: [...]                               # Expert features for class 9
    }
}
```

**Expert feature selection criteria:**
- Top k×5/4 features per class (20 for CIFAR-10, 40 for Imagenette)
- Sorted by F1 score
- Common features (active in 7+ classes) excluded
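The selection rule can be sketched as follows (a hypothetical toy reimplementation of the criteria above, not the repository's actual selection code):

```python
def select_experts(f1_scores, active_classes, k):
    """Pick the top k*5/4 features per class by F1 score, skipping 'common'
    features that are active in 7 or more classes.

    f1_scores: {class_id: {feature_id: f1}};
    active_classes: {feature_id: set of class_ids where the feature fires}.
    """
    num_keep = (k * 5) // 4  # 20 for k=16 (CIFAR-10), 40 for k=32 (Imagenette)
    common = {f for f, cls in active_classes.items() if len(cls) >= 7}
    experts = {}
    for cls_id, scores in f1_scores.items():
        ranked = sorted((f for f in scores if f not in common),
                        key=lambda f: scores[f], reverse=True)
        experts[cls_id] = ranked[:num_keep]
    return experts

# Toy example: feature 2 has the best F1 but fires in 7 classes, so it is filtered out.
f1_scores = {0: {1: 0.9, 2: 0.95, 3: 0.8}}
active = {1: {0}, 2: {0, 1, 2, 3, 4, 5, 6}, 3: {0, 1}}
print(select_experts(f1_scores, active, k=16))  # {0: [1, 3]}
```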
## Training Details

### Original Models

- **Architecture**: ViT-Base/16 (google/vit-base-patch16-224)
- **Training**: Fine-tuned on CIFAR-10/Imagenette from pretrained ImageNet weights
- **Optimizer**: AdamW
- **Learning rate**: 1e-4
- **Epochs**: 20
- **Data augmentation**: RandomCrop, RandomHorizontalFlip

### SAE Models

- **Layers**: 8, 9, 10 (out of 12 ViT layers)
- **Architecture**: Overcomplete (768 → 3072 → 768)
- **Sparsity**: TopK activation
  - **CIFAR-10**: k=16 (only the top 16 features active per sample)
  - **Imagenette**: k=32 (only the top 32 features active per sample)
- **Training loss**: MSE reconstruction + L1 regularization
- **Training samples**: All training-set activations (patch tokens only)
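The TopK sparsity above can be sketched in NumPy (an illustrative reimplementation, not the repository's code; the actual SAE may additionally apply a ReLU before the TopK step):

```python
import numpy as np

def topk_activation(h, k):
    """Keep the k largest pre-activations per row, zero out the rest."""
    out = np.zeros_like(h)
    idx = np.argpartition(h, -k, axis=-1)[..., -k:]  # indices of the top-k entries
    np.put_along_axis(out, idx, np.take_along_axis(h, idx, axis=-1), axis=-1)
    return out

h = np.array([[0.1, 0.9, -0.3, 0.5, 0.2]])  # toy pre-activations for one token
sparse = topk_activation(h, 2)
print(sparse)  # only 0.9 and 0.5 survive; all other entries are zeroed
```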
### Expert Features

- **Selection criteria**: Top k×5/4 features per class
  - **CIFAR-10**: 20 features per class (16×5/4)
  - **Imagenette**: 40 features per class (32×5/4)
- **Metrics**: F1 score, precision, recall
- **Filtering**: Common features (active in 7+ classes) excluded

## Restoration Method

Expert features are amplified by a coefficient **α (alpha)** during inference:

- **α = 1.0**: no amplification (baseline)
- **α = 2.0**: 2× amplification (doubles the expert feature strength)
- **α = 5.0**: 5× amplification
- **α = 10.0**: 10× amplification

The restoration uses **direct injection mode**, which requires the original model's activations.
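Conceptually, the amplification step looks like this (an illustrative sketch with hypothetical names; the actual injection logic lives in the main repository's scripts):

```python
import numpy as np

def amplify_experts(latents, expert_ids, alpha):
    """Scale the SAE latent codes of the target class's expert features by alpha,
    leaving all other features untouched."""
    out = latents.copy()
    out[..., expert_ids] *= alpha
    return out

z = np.array([[1.0, 0.0, 0.5, 2.0]])          # toy SAE latents for one token
restored = amplify_experts(z, [0, 3], alpha=5.0)
print(restored)  # features 0 and 3 scaled to 5.0 and 10.0; the rest unchanged
```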
## Citation

If you use these models in your research, please cite:

```bibtex
@article{suppression-or-deletion,
  title={Suppression or Deletion: Understanding Machine Unlearning via SAE-based Restoration},
  author={...},
  year={2024}
}
```

## License

MIT License. See the main repository for details.

## Questions and Issues

For questions or issues, please open an issue on the [main GitHub repository](https://github.com/Yurim0507/suppression-or-deletion/issues).
README.md ADDED
@@ -0,0 +1,242 @@
(content identical to HF_README.md above)
cifar10/.gitkeep ADDED
File without changes
cifar10/activations_layer10_stats.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8dba9a99f048c812035c04cafcc459d081d2388e59a8d2e535e758bb84db8de8
size 6687
cifar10/activations_layer8_stats.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:68e98706ddbeaf581ae017a432492706718f889a5e6629ebb76193003dd86eba
size 6687
cifar10/activations_layer9_stats.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dea477de2ba2ef2953c955a56e40661550f7d6ad5b058d20d3f2fa3137aa8a85
size 6687
cifar10/expert_features_layer10_k16.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ad57627b83e30918494d0ee87a6a39897fafe0639adf603370be3e5841332b8
size 9924
cifar10/expert_features_layer8_k16.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:512df037dfebf1affe23c9665d59223e1f8d9cf6514aa0c971625237967b40c9
size 9924
cifar10/expert_features_layer9_k16.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3524e2059fcf64602f986f6f58a85553aac874fc05a5ab4de2b24c96279804e9
size 9924
cifar10/sae_layer10_k16.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e3df2cc847e0bc8532c30160bb4ecadd6f9874b83e73da99758a30e7926d6a2d
size 4737856
cifar10/sae_layer8_k16.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0a671fd8089ebc73293edc11431d8fc347c2f7e5af4fef1b20b46f15519bec1a
size 4737856
cifar10/sae_layer9_k16.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f9f70e6b719022e0ccae88890c5130b0f14c1eacc4bcb7d659509d503a6502a1
size 4737856
cifar10/vit_base_16_original.pth ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e216d8d5af6f68ed5dc5ff5d008cc1edf91dc807836cf27471fd240f195d6a54
size 686599246
imagenette/.gitkeep ADDED
File without changes
imagenette/activations_layer10_stats.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8090a2201f7b98c0b557a51e726b283a13a6364dbe1b0ce616338f3a0326758d
size 6687
imagenette/activations_layer8_stats.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b75d155e1d21186b8948b1503e0eeabf175fbf122b047582aa04e4219b10a71
size 6687
imagenette/activations_layer9_stats.npy ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:243d3b8127467d34995e2139ae96f29eec84befc3907ffa12bcd975b4752616d
size 6687
imagenette/expert_features_layer10_k32.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d4a32f3f405241d9a53c1e85dd437c31cf22ce527675ec229469d98bfe82624d
size 18948
imagenette/expert_features_layer8_k32.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de2bb6b4fd10e47ddddc13f0a308d6f57a95ebf3940358a00ab8494b0a0c19b1
size 18948
imagenette/expert_features_layer9_k32.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa116861bb905f4939f229aa14ecf72c05a054a31a178adc6a93c9419efbd09b
size 18948
imagenette/sae_layer10_k32.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7025ea0cc776bbc8377f8c5df6e06efb751ed00277f917e2a99eeddff945f741
size 4737920
imagenette/sae_layer8_k32.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4c71f41320ca4231a35f1a5c544278f8db7b1055048bde683307d90cda31897
size 4737920
imagenette/sae_layer9_k32.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1ec1d4d6dd957b9feb575dbacacb7f797cb736519f01330ff53c4c1090c6fbd5
size 4737920
imagenette/vit_base_16_original.pth ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ebb2856706a905327dd55945ad5fa3866416a96bb294e3a3aa6046187884e425
size 686601202