AndrewAndrewsen committed
Commit 372e2db (verified) · Parent(s): cf5fe62

Upload README.md with huggingface_hub

Files changed (1): README.md (+323, −321)

---
language:
- en
license: mit
tags:
- secret-detection
- mixture-of-experts
- gating-network
- security
- nlp
- token-classification
pipeline_tag: token-classification
library_name: pytorch
datasets:
- custom
metrics:
- accuracy
- f1
model-index:
- name: secretmask-gate
  results:
  - task:
      type: routing
      name: Expert Routing
    dataset:
      name: SecretMask v2
      type: custom
    metrics:
    - type: accuracy
      value: 0.927
      name: Test Accuracy
    - type: accuracy
      value: 1.0
      name: Validation Accuracy
base_model:
- andrewandrewsen/distilbert-secret-masker
- andrewandrewsen/longformer-secret-masker
---

# SecretMask MoE Gating Network

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Model Size](https://img.shields.io/badge/Size-12KB-green.svg)](https://huggingface.co/andrewandrewsen/secretmask-gate)

**Lightweight learned gating network for SecretMask Mixture-of-Experts routing.**

This repository contains a trained 12KB neural network that learns how to route inputs between two secret-detection expert models. Use it for true MoE inference with weighted expert combination.

---

## 📋 Overview

The gating network is a tiny 3-layer MLP (3,042 parameters) that:

1. Takes 10 features extracted from the text
2. Outputs routing weights `[w_fast, w_long]` (summing to 1.0)
3. Enables a weighted combination of the expert models' outputs

**Training Results:**

- ✅ 100% validation accuracy (200 examples)
- ✅ 92.7% test accuracy (600 examples)
- ✅ Only ~0.19ms inference overhead
- ✅ Matches heuristic routing performance

> **Note**: This gating network is **optional and experimental**. Heuristic (rule-based) routing achieves identical results (92.7% accuracy) without requiring this model. The recommended production configuration uses **Fast Expert + Filters** without learned routing or the Long Expert. This gate is primarily for learning and experimentation with MoE architectures. See the [Configuration Guide](https://github.com/AndrewAndrewsen/secmask/blob/main/CONFIGURATION_GUIDE.md) for details.

---

## 🚀 Quick Start

### Installation

```bash
pip install torch transformers huggingface-hub
```

### Download and Use

```python
import torch
from huggingface_hub import hf_hub_download
from moe_gate import GatingNetwork, extract_features_tensor

# Download the gating network checkpoint
gate_path = hf_hub_download("andrewandrewsen/secretmask-gate", "best_gate.pt")

# Load the model
gate = GatingNetwork.load(gate_path)
gate.eval()

# Extract features from text
text = "AWS key: AKIAIOSFODNN7EXAMPLE"
features = extract_features_tensor(text)

# Get routing weights
with torch.no_grad():
    weights = gate(features.unsqueeze(0))

print(f"Fast expert weight: {weights[0][0]:.3f}")
print(f"Long expert weight: {weights[0][1]:.3f}")
# Output: Fast expert weight: 0.950, Long expert weight: 0.050
```

### Integration with SecretMask

```bash
# Clone the SecretMask repository
git clone https://github.com/andrewandrewsen/secmask.git
cd secmask

# Run inference with learned MoE routing
python infer_moe.py \
  --text "My AWS key is AKIAIOSFODNN7EXAMPLE" \
  --routing-mode learned \
  --fast-model andrewandrewsen/distilbert-secret-masker \
  --long-model andrewandrewsen/longformer-secret-masker \
  --gate-model andrewandrewsen/secretmask-gate \
  --tau 0.80
```

---

## 🏗️ Model Architecture

```
Input: [10 features]
        ↓
Linear(10 → 64) + LayerNorm + ReLU + Dropout(0.1)
        ↓
Linear(64 → 32) + LayerNorm + ReLU + Dropout(0.1)
        ↓
Linear(32 → 2) + Softmax
        ↓
Output: [w_fast, w_long] (sum = 1.0)
```
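
The diagram above is only a few lines of PyTorch. A minimal sketch consistent with it (the actual `GatingNetwork` class in `moe_gate` may differ in details such as the class interface):

```python
import torch
import torch.nn as nn

class GatingNetwork(nn.Module):
    """3-layer MLP gate: 10 features in, softmax weights over 2 experts out."""

    def __init__(self, in_features: int = 10, num_experts: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64), nn.LayerNorm(64), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(64, 32), nn.LayerNorm(32), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(32, num_experts),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax ensures [w_fast, w_long] sums to 1.0
        return torch.softmax(self.net(x), dim=-1)

gate = GatingNetwork().eval()
print(sum(p.numel() for p in gate.parameters()))  # 3042, matching the stated parameter count

with torch.no_grad():
    w = gate(torch.rand(1, 10))
print(float(w.sum()))  # sums to 1.0 (within float rounding)
```

The 3,042 figure falls out of the layer sizes: 704 + 128 + 2,080 + 64 + 66 for the three `Linear` layers and two `LayerNorm`s.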

**Total Parameters:** 3,042
**Model Size:** 12KB (float32)
**Inference Time:** ~0.19ms on CPU

---

## 📊 Input Features (10D)

The gating network takes a normalized 10-dimensional feature vector:

| Index | Feature | Description | Normalization |
| ----- | ----------------- | ----------------------- | ------------- |
| 0 | `token_count` | Number of tokens | / 1000 |
| 1 | `entropy` | Shannon entropy | / 6 |
| 2 | `has_pem` | Has PEM block (binary) | 0 or 1 |
| 3 | `has_k8s` | Has K8s secret (binary) | 0 or 1 |
| 4 | `akia_count` | AWS pattern count | / 5 |
| 5 | `github_count` | GitHub token count | / 5 |
| 6 | `jwt_count` | JWT token count | / 5 |
| 7 | `base64_count` | Base64 pattern count | / 50 |
| 8 | `line_count` | Number of lines | / 100 |
| 9 | `avg_line_length` | Avg chars per line | / 100 |
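
The normalizations in this table can be sketched in plain Python. Note this is illustrative: the regex patterns and the PEM/K8s checks below are assumptions, and the real `extract_features_tensor` returns a torch tensor rather than a list:

```python
import math
import re

def extract_features(text: str) -> list[float]:
    """Sketch of the 10-D feature vector (detector patterns are illustrative)."""
    lines = text.splitlines() or [""]
    # Shannon entropy over characters, later normalized by / 6
    n = max(len(text), 1)
    counts: dict[str, int] = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    return [
        len(text.split()) / 1000,                                  # token_count / 1000
        entropy / 6,                                               # entropy / 6
        1.0 if "-----BEGIN" in text else 0.0,                      # has_pem (0 or 1)
        1.0 if "kind: Secret" in text else 0.0,                    # has_k8s (0 or 1)
        len(re.findall(r"AKIA[0-9A-Z]{16}", text)) / 5,            # akia_count / 5
        len(re.findall(r"ghp_[A-Za-z0-9]{36}", text)) / 5,         # github_count / 5
        len(re.findall(r"eyJ[\w-]+\.[\w-]+\.[\w-]+", text)) / 5,   # jwt_count / 5
        len(re.findall(r"[A-Za-z0-9+/]{20,}={0,2}", text)) / 50,   # base64_count / 50
        len(lines) / 100,                                          # line_count / 100
        (sum(map(len, lines)) / len(lines)) / 100,                 # avg_line_length / 100
    ]

feats = extract_features("AWS key: AKIAIOSFODNN7EXAMPLE")
print(len(feats))  # 10
```

For the example text, one AKIA pattern is found, so feature 4 comes out as 1 / 5 = 0.2.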

---

## 📈 Training Details

**Dataset:**

- Training: 6,000 examples
- Validation: 200 examples
- Test: 600 examples

**Configuration:**

- Optimizer: AdamW (lr=0.001, weight_decay=0.01)
- Scheduler: Cosine annealing
- Batch size: 32
- Epochs: 10
- Device: Apple M-series (MPS)

**Training Results:**

| Epoch | Train Loss | Train Acc | Val Loss | Val Acc |
| ----- | ---------- | --------- | -------- | -------- |
| 1 | 0.0808 | 97.6% | 0.0051 | **100%** |
| 2 | 0.0036 | 100% | 0.0010 | **100%** |
| 10 | 0.0005 | 100% | 0.0001 | **100%** |

**Test Performance:**

- Routing accuracy: 92.7%
- Fast expert: 92.7% of examples
- Long expert: 7.3% of examples
- Matches heuristic routing exactly

---

## 🔧 Usage with Expert Models

This gating network coordinates two expert models:

| Expert | Model | Size | Max Tokens | Use Case |
| ------ | ----- | ---- | ---------- | -------- |
| **Fast** | [andrewandrewsen/distilbert-secret-masker](https://huggingface.co/andrewandrewsen/distilbert-secret-masker) | 265MB | 512 | Short texts, code snippets |
| **Long** | [andrewandrewsen/longformer-secret-masker](https://huggingface.co/andrewandrewsen/longformer-secret-masker) | 592MB | 2048 | Long documents, config files |

### How It Works

```python
# 1. Extract features
features = extract_features_tensor(text)

# 2. Get routing weights from the gating network
weights = gate(features)  # [w_fast, w_long]

# 3. Run both expert models
fast_output = fast_expert(text)
long_output = long_expert(text)

# 4. Combine outputs using the learned weights
final_output = weights[0] * fast_output + weights[1] * long_output
```
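
Step 4 above is a plain convex combination of the two experts' aligned per-token scores. A dependency-free sketch (the function name and list-based scores are hypothetical; the real experts emit token-level logits):

```python
def combine_expert_outputs(w_fast: float, w_long: float,
                           fast_scores: list[float],
                           long_scores: list[float]) -> list[float]:
    """Weighted (convex) combination of aligned per-token scores."""
    assert abs(w_fast + w_long - 1.0) < 1e-6, "gate weights must sum to 1"
    return [w_fast * f + w_long * l for f, l in zip(fast_scores, long_scores)]

# With a near-1.0 fast weight, the combined output tracks the fast expert closely
combined = combine_expert_outputs(0.95, 0.05, [0.9, 0.1], [0.7, 0.3])
```

Because the weights sum to 1.0, each combined score stays within the range spanned by the two experts' scores.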

---

## 📦 Files in This Repository

- **`best_gate.pt`** - Trained gating network (12KB)
- **`final_gate.pt`** - Final checkpoint (12KB)
- **`history.json`** - Training history (3.2KB)
- **`README.md`** - This file

---

## 🔬 Technical Details

### Load Balancing

The model was trained with a load-balancing loss to encourage uniform expert usage:

```python
import torch
import torch.nn.functional as F

target_distribution = torch.tensor([0.5, 0.5])  # 50% fast, 50% long
actual_distribution = weights.mean(dim=0)       # batch-average gate weights
load_balance_loss = 0.01 * F.mse_loss(actual_distribution, target_distribution)
```

Despite this, the model learned to route 90.5% of traffic to the fast expert and 9.5% to the long expert, matching the natural data distribution.

### Routing Metrics

```python
from moe_gate import compute_routing_metrics

weights = gate(features)
metrics = compute_routing_metrics(weights)

# Returns:
# {
#     'fast_expert_pct': 92.7,
#     'long_expert_pct': 7.3,
#     'avg_fast_weight': 0.924,
#     'avg_long_weight': 0.076,
#     'entropy': 0.031
# }
```

Low entropy (0.031) indicates confident routing decisions.
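
The entropy of a single routing decision follows the standard Shannon formula over the gate's output distribution. A sketch (whether `compute_routing_metrics` uses natural log and how it averages over a batch are assumptions):

```python
import math

def routing_entropy(weights: list[float]) -> float:
    """Shannon entropy (natural log) of one routing distribution; 0 = fully confident."""
    return -sum(w * math.log(w) for w in weights if w > 0)

# Even a 95/5 split is already quite confident
print(round(routing_entropy([0.95, 0.05]), 3))  # 0.199
```

A one-hot decision like `[1.0, 0.0]` gives entropy 0, and the maximum for two experts is `ln(2) ≈ 0.693` at a 50/50 split, so 0.031 is very close to fully confident.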

---

## 🆚 Heuristic vs Learned Routing

| Metric | Heuristic | Learned MoE |
| --------------------- | -------------------- | ------------------------ |
| **Routing Accuracy** | 92.7% | 92.7% |
| **Model Size** | 0KB (rules only) | 12KB |
| **Latency** | 0.065ms | 0.256ms |
| **Training Required** | No | Yes (10 epochs) |
| **Explainability** | High (if-else rules) | Medium (learned weights) |
| **Adaptability** | Manual updates | Data-driven |

**Recommendation:** Use heuristic routing for simplicity and explainability. Use learned routing when you want to fine-tune on your specific data distribution.
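
For contrast, a rule-based router can be a couple of if-statements. A hypothetical sketch (the thresholds and rules here are illustrative assumptions, not the repository's actual heuristic):

```python
def heuristic_route(token_count: int, has_pem: bool, has_k8s: bool) -> str:
    """Illustrative if-else router between the fast and long experts."""
    # Long documents and multi-line secret material go to the Longformer expert;
    # everything else takes the cheaper DistilBERT path.
    if token_count > 512 or has_pem or has_k8s:
        return "long"
    return "fast"

print(heuristic_route(token_count=12, has_pem=False, has_k8s=False))   # fast
print(heuristic_route(token_count=1500, has_pem=True, has_k8s=False))  # long
```

This is why the heuristic column scores "High" on explainability: every routing decision can be traced to a named rule.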

---

## 📚 Citation

If you use this model, please cite:

```bibtex
@misc{secretmask-gate,
  author    = {Anders Andersson},
  title     = {SecretMask MoE Gating Network},
  year      = {2025},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/andrewandrewsen/secretmask-gate}
}
```

---

## 📄 License

MIT License - see the [LICENSE](LICENSE) file.

**Note:** This model is trained to work with the SecretMask expert models, which use Apache-2.0-licensed base models (DistilBERT, Longformer). See the expert model repositories for full licensing details.

---

## 🔗 Related Resources

- **SecretMask MoE Repository:** [GitHub](https://github.com/andrewandrewsen/secmask)
- **Fast Expert Model:** [andrewandrewsen/distilbert-secret-masker](https://huggingface.co/andrewandrewsen/distilbert-secret-masker)
- **Long Expert Model:** [andrewandrewsen/longformer-secret-masker](https://huggingface.co/andrewandrewsen/longformer-secret-masker)
- **Documentation:** See the repository for BENCHMARKS.md, USE_CASES.md, etc.

---

## 🤝 Contributing

Issues and pull requests welcome at [GitHub](https://github.com/andrewandrewsen/secmask).

---

**Built with ❤️ for the open source community**