akhooli committed
Commit e75eed0 · verified · 1 Parent(s): 9e28dae

Push model using huggingface_hub.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
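This pooling configuration selects plain mean pooling over token embeddings (CLS, max, weighted-mean, and last-token pooling are all disabled). A minimal sketch of that computation, with hypothetical tensor names rather than the sentence-transformers source:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len), 1 for real tokens
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)                   # sum over real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # number of real tokens per text
    return summed / counts                                          # (batch, 768) sentence embedding
```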
README.md CHANGED
@@ -1,3 +1,242 @@
- ---
- license: mit
- ---
+ ---
+ base_model: akhooli/sbert_ar_nli_500k_norm
+ library_name: setfit
+ metrics:
+ - accuracy
+ pipeline_tag: text-classification
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ widget:
+ - text: 'دغري بدكم تفوتو بخصوصيات الناس طيب ما اموال كتار معروفة و مش معروفة منوين
+     جابتهن بتفتح... '
+ - text: ايها السادة العرب الوزير جبران باسيل يتكلم باسمه الشخصي
+ - text: 'وكل مين بدو يشد على مشدو '
+ - text: لازم جائزة نوبل للكيميا ياخدها دكتاتور البعث الفاشي
+ - text: 'زرع شعراته ولوووووو فيهن '
+ inference: true
+ model-index:
+ - name: SetFit with akhooli/sbert_ar_nli_500k_norm
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: Unknown
+       type: unknown
+       split: test
+     metrics:
+     - type: accuracy
+       value: 0.8506944444444444
+       name: Accuracy
+ ---
+
+ # SetFit with akhooli/sbert_ar_nli_500k_norm
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [akhooli/sbert_ar_nli_500k_norm](https://huggingface.co/akhooli/sbert_ar_nli_500k_norm) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
+
+ The model has been trained using an efficient few-shot learning technique that involves:
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer.
+
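As a rough illustration of that two-step recipe, the sketch below trains a comparable model with the `setfit` library. The tiny inline dataset and its texts are placeholders, not the data this checkpoint was trained on.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot examples; the real training set is not part of this card.
train_dataset = Dataset.from_dict({
    "text": ["نص سلبي 1", "نص سلبي 2", "نص إيجابي 1", "نص إيجابي 2"],
    "label": ["negative", "negative", "positive", "positive"],
})

model = SetFitModel.from_pretrained(
    "akhooli/sbert_ar_nli_500k_norm",   # the Sentence Transformer body named above
    labels=["negative", "positive"],
)

args = TrainingArguments(batch_size=32, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

trainer.train()   # step 1: contrastive fine-tuning of the body; step 2: fitting the LogisticRegression head
model.save_pretrained("setfit_ar_hs_local")
```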
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ - **Sentence Transformer body:** [akhooli/sbert_ar_nli_500k_norm](https://huggingface.co/akhooli/sbert_ar_nli_500k_norm)
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 512 tokens
+ - **Number of Classes:** 3 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ## Evaluation
+
+ ### Metrics
+ | Label   | Accuracy |
+ |:--------|:---------|
+ | **all** | 0.8507   |
+
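The 0.8507 figure is accuracy on the test split. A minimal sketch of how such a number can be reproduced, assuming you have a labeled test set (the two lists below are placeholders):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("akhooli/setfit_ar_hs")

test_texts = ["...", "..."]               # held-out sentences (placeholder)
test_labels = ["negative", "positive"]    # gold labels (placeholder)

preds = model.predict(test_texts)         # returns label strings
accuracy = sum(p == g for p, g in zip(preds, test_labels)) / len(test_labels)
print(f"accuracy: {accuracy:.4f}")
```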
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("akhooli/setfit_ar_hs")
+ # Run inference
+ preds = model("وكل مين بدو يشد على مشدو ")
+ ```
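If class probabilities are needed rather than a single label, `predict_proba` can be used as well; a small sketch continuing from the snippet above, with the label order following `model.labels`:

```python
probs = model.predict_proba(["وكل مين بدو يشد على مشدو "])
print(model.labels)   # ['negative', 'positive']
print(probs)          # one probability per label for each input
```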
+
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median  | Max |
+ |:-------------|:----|:--------|:----|
+ | Word count   | 1   | 12.7668 | 52  |
+
+ | Label    | Training Sample Count |
+ |:---------|:----------------------|
+ | negative | 2000                  |
+ | positive | 2000                  |
+
+ ### Training Hyperparameters
+ - batch_size: (32, 32)
+ - num_epochs: (1, 1)
+ - max_steps: 5000
+ - sampling_strategy: undersampling
+ - body_learning_rate: (2e-05, 1e-05)
+ - head_learning_rate: 0.01
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - l2_weight: 0.01
+ - seed: 42
+ - run_name: setfit_hate_2k
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+
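These values correspond to fields of `setfit.TrainingArguments`. A rough sketch of the same configuration in code (only the fields that map one-to-one are shown; trainer and dataset wiring are as in the earlier training sketch):

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(32, 32),              # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=5000,
    sampling_strategy="undersampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    run_name="setfit_hate_2k",
    load_best_model_at_end=False,
)
```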
+ ### Training Results
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0004 | 1 | 0.3158 | - |
+ | 0.04 | 100 | 0.2783 | - |
+ | 0.08 | 200 | 0.2427 | - |
+ | 0.12 | 300 | 0.1803 | - |
+ | 0.16 | 400 | 0.1334 | - |
+ | 0.2 | 500 | 0.0846 | - |
+ | 0.24 | 600 | 0.0638 | - |
+ | 0.28 | 700 | 0.05 | - |
+ | 0.32 | 800 | 0.0412 | - |
+ | 0.36 | 900 | 0.0345 | - |
+ | 0.4 | 1000 | 0.0291 | - |
+ | 0.44 | 1100 | 0.0232 | - |
+ | 0.48 | 1200 | 0.0207 | - |
+ | 0.52 | 1300 | 0.0177 | - |
+ | 0.56 | 1400 | 0.018 | - |
+ | 0.6 | 1500 | 0.0141 | - |
+ | 0.64 | 1600 | 0.017 | - |
+ | 0.68 | 1700 | 0.0133 | - |
+ | 0.72 | 1800 | 0.014 | - |
+ | 0.76 | 1900 | 0.0128 | - |
+ | 0.8 | 2000 | 0.013 | - |
+ | 0.84 | 2100 | 0.0139 | - |
+ | 0.88 | 2200 | 0.0132 | - |
+ | 0.92 | 2300 | 0.0105 | - |
+ | 0.96 | 2400 | 0.008 | - |
+ | 1.0 | 2500 | 0.0068 | - |
+ | 1.04 | 2600 | 0.0056 | - |
+ | 1.08 | 2700 | 0.0072 | - |
+ | 1.12 | 2800 | 0.0038 | - |
+ | 1.16 | 2900 | 0.005 | - |
+ | 1.2 | 3000 | 0.0039 | - |
+ | 1.24 | 3100 | 0.0034 | - |
+ | 1.28 | 3200 | 0.0035 | - |
+ | 1.32 | 3300 | 0.0038 | - |
+ | 1.3600 | 3400 | 0.0038 | - |
+ | 1.4 | 3500 | 0.0025 | - |
+ | 1.44 | 3600 | 0.0045 | - |
+ | 1.48 | 3700 | 0.003 | - |
+ | 1.52 | 3800 | 0.0025 | - |
+ | 1.56 | 3900 | 0.003 | - |
+ | 1.6 | 4000 | 0.0026 | - |
+ | 1.6400 | 4100 | 0.0029 | - |
+ | 1.6800 | 4200 | 0.0021 | - |
+ | 1.72 | 4300 | 0.003 | - |
+ | 1.76 | 4400 | 0.0025 | - |
+ | 1.8 | 4500 | 0.0032 | - |
+ | 1.8400 | 4600 | 0.002 | - |
+ | 1.88 | 4700 | 0.0024 | - |
+ | 1.92 | 4800 | 0.0022 | - |
+ | 1.96 | 4900 | 0.0024 | - |
+ | 2.0 | 5000 | 0.0027 | - |
+
+ ### Framework Versions
+ - Python: 3.10.14
+ - SetFit: 1.2.0.dev0
+ - Sentence Transformers: 3.1.1
+ - Transformers: 4.45.1
+ - PyTorch: 2.4.0
+ - Datasets: 3.0.1
+ - Tokenizers: 0.20.0
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+     doi = {10.48550/ARXIV.2209.11055},
+     url = {https://arxiv.org/abs/2209.11055},
+     author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+     keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
+     title = {Efficient Few-Shot Learning Without Prompts},
+     publisher = {arXiv},
+     year = {2022},
+     copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "akhooli/sbert_ar_nli_500k_norm",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 64000
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.1.1",
+     "transformers": "4.45.1",
+     "pytorch": "2.4.0"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
config_setfit.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "normalize_embeddings": false,
+   "labels": [
+     "negative",
+     "positive"
+   ]
+ }
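Because `labels` is set here, `predict` returns these strings rather than integer class ids. A quick illustrative check (the Arabic input is a placeholder meaning "test text"):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("akhooli/setfit_ar_hs")
print(model.labels)                  # ['negative', 'positive'], as in config_setfit.json
print(model.predict(["نص تجريبي"]))   # label strings, not 0/1
```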
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d2ebdcd4940d5fd3e47d78fc0ab371baa15d3c351cb253ce4aa9ac613e917da
+ size 540795752
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39692e6033811b7b9a9fd4c86cdf8015f4ce4af1b7b9f4c901c285fd8465a904
+ size 19327
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
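The two entries describe the Sentence Transformer body inside the SetFit model: a Transformer encoder followed by the pooling module configured in `1_Pooling/config.json`. One illustrative way to inspect this at runtime:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("akhooli/setfit_ar_hs")
print(model.model_body)   # SentenceTransformer with (0) Transformer and (1) Pooling, per modules.json
```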
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,93 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "4": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "5": {
+       "content": "[رابط]",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": true,
+       "special": true
+     },
+     "6": {
+       "content": "[بريد]",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": true,
+       "special": true
+     },
+     "7": {
+       "content": "[مستخدم]",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": true,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "mask_token": "[MASK]",
+   "max_len": 512,
+   "max_length": 512,
+   "model_max_length": 512,
+   "never_split": [
+     "[بريد]",
+     "[مستخدم]",
+     "[رابط]"
+   ],
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
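The tokenizer reserves three extra special tokens, [رابط] (link), [بريد] (email), and [مستخدم] (user), which suggests that URLs, email addresses, and user mentions were replaced by placeholders before tokenization. A hedged sketch of that kind of normalization; the exact preprocessing used for this model is not documented here:

```python
import re

def normalize(text: str) -> str:
    # Replace URLs, emails, and @mentions with the tokenizer's placeholder tokens.
    text = re.sub(r"https?://\S+", " [رابط] ", text)      # links first, so their "@" parts are gone
    text = re.sub(r"\S+@\S+\.\S+", " [بريد] ", text)       # email addresses
    text = re.sub(r"@\w+", " [مستخدم] ", text)             # remaining @mentions
    return " ".join(text.split())

print(normalize("تواصلوا معي على test@example.com أو @user1 عبر https://example.com"))
```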
vocab.txt ADDED
The diff for this file is too large to render. See raw diff