FriedGil committed on
Commit d58040e · verified · 1 Parent(s): e640578

Upload 12 files

README.md ADDED
@@ -0,0 +1,270 @@
+ ---
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ widget:
+ - text: Tennengebirge Reef
+ - text: Outcrop next to I-84 East
+ - text: scenic overview
+ - text: Ruby Star for sale now Please contact us for more details. Regards
+ - text: torre rocosa de grans dimensions. 3 blocs partits
+ metrics:
+ - accuracy
+ pipeline_tag: text-classification
+ library_name: setfit
+ inference: true
+ base_model: BAAI/bge-small-en-v1.5
+ ---
+
+ # SetFit with BAAI/bge-small-en-v1.5
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model for text classification. It uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model, with a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance as the classification head.
+
+ The model was trained with an efficient few-shot learning technique that involves two steps (a brief sketch follows the list):
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head on features from the fine-tuned Sentence Transformer.
+
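+ As a minimal sketch of those two phases, assuming the setfit 1.x `Trainer` API (the Framework Versions section below lists SetFit 1.1.2); the dataset contents here are placeholders, not the actual training data:
+
+ ```python
+ from datasets import Dataset
+ from setfit import SetFitModel, Trainer, TrainingArguments
+
+ # Trainer drives both phases: it first fine-tunes the embedding body with
+ # contrastive pairs, then fits the LogisticRegression head on the embeddings.
+ model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")
+ train_dataset = Dataset.from_dict({
+     "text": ["Calcite", "Vhfgv"],  # placeholder few-shot examples
+     "label": [0, 1],
+ })
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(batch_size=32, num_epochs=1),
+     train_dataset=train_dataset,
+ )
+ trainer.train()
+ ```
+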
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 512 tokens
+ - **Number of Classes:** 2 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ### Model Labels
+ | Label | Examples |
+ |:------|:----------------------------------------------------------------------------------------------------------------------------------|
+ | 0 | <ul><li>'Calcite'</li><li>'biotite. Contact metamorphosis'</li><li>'rail trail'</li></ul> |
+ | 1 | <ul><li>'Geafitti on tree and burn scar on ground'</li><li>'another beautiful rock from the same place'</li><li>'Vhfgv'</li></ul> |
+
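+ `config_setfit.json` in this upload stores `labels: null`, so predictions come back as the raw integers 0 and 1. If readable names are preferred, setfit's `from_pretrained` accepts a `labels` list; a sketch (the names below are hypothetical, not part of the model):
+
+ ```python
+ from setfit import SetFitModel
+
+ # Hypothetical label names for the two integer classes; "setfit_model_id"
+ # is the placeholder used throughout this card.
+ model = SetFitModel.from_pretrained("setfit_model_id", labels=["class_0", "class_1"])
+ model.predict(["Calcite"])  # -> ["class_0"] rather than [0]
+ ```
+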
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("setfit_model_id")
+ # Run inference
+ preds = model("scenic overview")
+ ```
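+
+ `SetFitModel` also accepts batched inputs, and the LogisticRegression head exposes class probabilities; a small sketch (two probability columns because this is a two-class model):
+
+ ```python
+ # One 0/1 prediction per input text
+ preds = model.predict(["Calcite", "Ruby Star for sale now"])
+
+ # Per-class probabilities from the classification head, shape (n_texts, 2)
+ probs = model.predict_proba(["scenic overview"])
+ ```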
+
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median | Max |
+ |:-------------|:----|:-------|:-----|
+ | Word count | 1 | 7.2788 | 1899 |
+
+ | Label | Training Sample Count |
+ |:------|:----------------------|
+ | 0 | 2997 |
+ | 1 | 783 |
+
+ ### Training Hyperparameters
+ - batch_size: (32, 32)
+ - num_epochs: (1, 1)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - num_iterations: 20
+ - body_learning_rate: (2e-05, 1e-05)
+ - head_learning_rate: 0.01
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - l2_weight: 0.01
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+
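+ These names map one-to-one onto setfit's `TrainingArguments`, which is how auto-generated SetFit cards record their configuration; a hedged reconstruction (arguments not shown, such as `distance_metric` and `eval_max_steps`, keep the defaults listed above):
+
+ ```python
+ from sentence_transformers.losses import CosineSimilarityLoss
+ from setfit import TrainingArguments
+
+ args = TrainingArguments(
+     batch_size=(32, 32),               # (embedding phase, classifier phase)
+     num_epochs=(1, 1),
+     max_steps=-1,
+     sampling_strategy="oversampling",
+     num_iterations=20,
+     body_learning_rate=(2e-05, 1e-05),
+     head_learning_rate=0.01,
+     loss=CosineSimilarityLoss,
+     margin=0.25,                       # only used by margin-based losses
+     end_to_end=False,
+     use_amp=False,
+     warmup_proportion=0.1,
+     l2_weight=0.01,
+     seed=42,
+ )
+ ```
+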
+ ### Training Results
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0002 | 1 | 0.2331 | - |
+ | 0.0106 | 50 | 0.2391 | - |
+ | 0.0212 | 100 | 0.238 | - |
+ | 0.0317 | 150 | 0.2309 | - |
+ | 0.0423 | 200 | 0.2117 | - |
+ | 0.0529 | 250 | 0.1879 | - |
+ | 0.0635 | 300 | 0.1745 | - |
+ | 0.0741 | 350 | 0.1708 | - |
+ | 0.0847 | 400 | 0.1402 | - |
+ | 0.0952 | 450 | 0.1349 | - |
+ | 0.1058 | 500 | 0.1092 | - |
+ | 0.1164 | 550 | 0.1031 | - |
+ | 0.1270 | 600 | 0.0828 | - |
+ | 0.1376 | 650 | 0.0756 | - |
+ | 0.1481 | 700 | 0.0587 | - |
+ | 0.1587 | 750 | 0.0487 | - |
+ | 0.1693 | 800 | 0.0557 | - |
+ | 0.1799 | 850 | 0.0456 | - |
+ | 0.1905 | 900 | 0.0371 | - |
+ | 0.2011 | 950 | 0.0412 | - |
+ | 0.2116 | 1000 | 0.0382 | - |
+ | 0.2222 | 1050 | 0.0376 | - |
+ | 0.2328 | 1100 | 0.0353 | - |
+ | 0.2434 | 1150 | 0.0346 | - |
+ | 0.2540 | 1200 | 0.0364 | - |
+ | 0.2646 | 1250 | 0.0317 | - |
+ | 0.2751 | 1300 | 0.0374 | - |
+ | 0.2857 | 1350 | 0.0282 | - |
+ | 0.2963 | 1400 | 0.0255 | - |
+ | 0.3069 | 1450 | 0.023 | - |
+ | 0.3175 | 1500 | 0.0287 | - |
+ | 0.3280 | 1550 | 0.025 | - |
+ | 0.3386 | 1600 | 0.0216 | - |
+ | 0.3492 | 1650 | 0.0241 | - |
+ | 0.3598 | 1700 | 0.0234 | - |
+ | 0.3704 | 1750 | 0.0279 | - |
+ | 0.3810 | 1800 | 0.0239 | - |
+ | 0.3915 | 1850 | 0.0199 | - |
+ | 0.4021 | 1900 | 0.0252 | - |
+ | 0.4127 | 1950 | 0.0219 | - |
+ | 0.4233 | 2000 | 0.0228 | - |
+ | 0.4339 | 2050 | 0.0204 | - |
+ | 0.4444 | 2100 | 0.0231 | - |
+ | 0.4550 | 2150 | 0.0144 | - |
+ | 0.4656 | 2200 | 0.0229 | - |
+ | 0.4762 | 2250 | 0.0129 | - |
+ | 0.4868 | 2300 | 0.0219 | - |
+ | 0.4974 | 2350 | 0.0194 | - |
+ | 0.5079 | 2400 | 0.0172 | - |
+ | 0.5185 | 2450 | 0.0177 | - |
+ | 0.5291 | 2500 | 0.0252 | - |
+ | 0.5397 | 2550 | 0.0251 | - |
+ | 0.5503 | 2600 | 0.014 | - |
+ | 0.5608 | 2650 | 0.0204 | - |
+ | 0.5714 | 2700 | 0.0248 | - |
+ | 0.5820 | 2750 | 0.0146 | - |
+ | 0.5926 | 2800 | 0.0191 | - |
+ | 0.6032 | 2850 | 0.0223 | - |
+ | 0.6138 | 2900 | 0.0206 | - |
+ | 0.6243 | 2950 | 0.0163 | - |
+ | 0.6349 | 3000 | 0.0235 | - |
+ | 0.6455 | 3050 | 0.0245 | - |
+ | 0.6561 | 3100 | 0.0199 | - |
+ | 0.6667 | 3150 | 0.0145 | - |
+ | 0.6772 | 3200 | 0.016 | - |
+ | 0.6878 | 3250 | 0.0143 | - |
+ | 0.6984 | 3300 | 0.0206 | - |
+ | 0.7090 | 3350 | 0.0187 | - |
+ | 0.7196 | 3400 | 0.0168 | - |
+ | 0.7302 | 3450 | 0.0148 | - |
+ | 0.7407 | 3500 | 0.0212 | - |
+ | 0.7513 | 3550 | 0.0185 | - |
+ | 0.7619 | 3600 | 0.015 | - |
+ | 0.7725 | 3650 | 0.0187 | - |
+ | 0.7831 | 3700 | 0.0161 | - |
+ | 0.7937 | 3750 | 0.0204 | - |
+ | 0.8042 | 3800 | 0.0182 | - |
+ | 0.8148 | 3850 | 0.0157 | - |
+ | 0.8254 | 3900 | 0.0197 | - |
+ | 0.8360 | 3950 | 0.0133 | - |
+ | 0.8466 | 4000 | 0.0211 | - |
+ | 0.8571 | 4050 | 0.0155 | - |
+ | 0.8677 | 4100 | 0.0197 | - |
+ | 0.8783 | 4150 | 0.0168 | - |
+ | 0.8889 | 4200 | 0.0139 | - |
+ | 0.8995 | 4250 | 0.0132 | - |
+ | 0.9101 | 4300 | 0.018 | - |
+ | 0.9206 | 4350 | 0.014 | - |
+ | 0.9312 | 4400 | 0.017 | - |
+ | 0.9418 | 4450 | 0.0173 | - |
+ | 0.9524 | 4500 | 0.0163 | - |
+ | 0.9630 | 4550 | 0.0178 | - |
+ | 0.9735 | 4600 | 0.0176 | - |
+ | 0.9841 | 4650 | 0.0126 | - |
+ | 0.9947 | 4700 | 0.0194 | - |
+
+ ### Framework Versions
+ - Python: 3.12.9
+ - SetFit: 1.1.2
+ - Sentence Transformers: 4.1.0
+ - Transformers: 4.52.4
+ - PyTorch: 2.7.1
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.1
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+ doi = {10.48550/ARXIV.2209.11055},
+ url = {https://arxiv.org/abs/2209.11055},
+ author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+ keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
+ title = {Efficient Few-Shot Learning Without Prompts},
+ publisher = {arXiv},
+ year = {2022},
+ copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.52.4",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.1.0",
+     "transformers": "4.52.4",
+     "pytorch": "2.7.1"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
config_setfit.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "normalize_embeddings": false,
+   "labels": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a9d9810c31b1f55b12d2315400edac2e83f469cfaa3cbaf7d54b3050901532f
+ size 133462128
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:447834c9f06255f074956e5ff18960ecd79ab2a44034097b53cc0c8058528155
+ size 3935
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": true
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff