fede-m committed
Commit cd2eacb · verified · Parent: 994757a

Push model using huggingface_hub.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
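The pooling config above enables mean-token pooling only: the 768-dim token embeddings from the transformer are averaged, ignoring padding positions, into a single sentence vector. A minimal NumPy sketch of that operation, using toy inputs (the function name and data here are illustrative, not part of the repository):

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings over real (unmasked) tokens.

    token_embeddings: (seq_len, dim) array from the transformer
    attention_mask:   (seq_len,) array of 1s for real tokens, 0s for padding
    """
    mask = attention_mask[:, None].astype(token_embeddings.dtype)  # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)
    counts = mask.sum()
    return summed / np.clip(counts, 1e-9, None)  # guard against all-padding input

# Toy example: 4 tokens, last one is padding; dim matches word_embedding_dimension=768
emb = np.random.default_rng(0).normal(size=(4, 768))
mask = np.array([1, 1, 1, 0])
pooled = mean_pool(emb, mask)  # shape (768,)
```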
2_Dense/config.json ADDED
@@ -0,0 +1 @@
+ {"in_features": 768, "out_features": 512, "bias": true, "activation_function": "torch.nn.modules.activation.Tanh"}
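This dense head projects the 768-dim pooled embedding down to 512 dims through a `Tanh` activation. A rough NumPy sketch with stand-in weights (the real parameters live in `2_Dense/model.safetensors`; the values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in parameters shaped like the config: out_features x in_features, plus bias
W = rng.normal(scale=0.02, size=(512, 768))
b = np.zeros(512)  # bias: true

def dense_tanh(x: np.ndarray) -> np.ndarray:
    """Project a 768-dim sentence embedding to 512 dims with a tanh activation."""
    return np.tanh(W @ x + b)

sentence_embedding = rng.normal(size=768)
projected = dense_tanh(sentence_embedding)  # shape (512,), values bounded by tanh
```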
2_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5639cea99f3b06e8737862b3f0c18856acd87e254b46bb34eaa95073cae91dbe
+ size 1575072
README.md ADDED
@@ -0,0 +1,210 @@
+ ---
+ tags:
+ - setfit
+ - sentence-transformers
+ - text-classification
+ - generated_from_setfit_trainer
+ widget:
+ - text: Più di uno, direi.
+ - text: Al centro del dibattito la «stepchild adoption», ossia il caso in cui in una
+     coppia dello stesso sesso, anche al partner non genitore vengano concessi diritti
+     e doveri.
+ - text: Che intenzioni ha Ignazio Marino?
+ - text: 'IL PROGRAMMA COMPLETO
+
+     La due giorni di Pescara cercherà di andare a fondo del fenomeno e di individuare
+     le contromisure più adatte a sconfiggerlo.'
+ - text: Fabrizio Ghera, capogruppo di FdI, va all'attacco e denuncia quel plafond
+     della carta di credito portato da 10mila a 50 mila euro.
+ metrics:
+ - accuracy
+ pipeline_tag: text-classification
+ library_name: setfit
+ inference: true
+ base_model: sentence-transformers/distiluse-base-multilingual-cased-v1
+ model-index:
+ - name: SetFit with sentence-transformers/distiluse-base-multilingual-cased-v1
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     dataset:
+       name: Unknown
+       type: unknown
+       split: test
+     metrics:
+     - type: accuracy
+       value: 0.48758620689655174
+       name: Accuracy
+ ---
+
+ # SetFit with sentence-transformers/distiluse-base-multilingual-cased-v1
+
+ This is a [SetFit](https://github.com/huggingface/setfit) model for text classification. It uses [sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1) as the Sentence Transformer embedding model and a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance for classification.
+
+ The model was trained with an efficient few-shot learning technique that involves:
+
+ 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head on features from the fine-tuned Sentence Transformer.
+
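The two steps above can be sketched end to end. Step 1 is replaced here by stand-in Gaussian "embeddings" (producing real ones requires the fine-tuned transformer body); step 2 fits the same kind of `LogisticRegression` head this model uses. All data and dimensions below are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Stand-in for step 1: pretend these are sentence embeddings from the
# contrastively fine-tuned body (two Gaussian blobs, dim=512 to match
# this model's 512-dim output embedding).
X_train = np.vstack([rng.normal(-1, 1, size=(20, 512)),
                     rng.normal(+1, 1, size=(20, 512))])
y_train = np.array([0] * 20 + [1] * 20)

# Step 2: fit the classification head on the (frozen) embeddings.
head = LogisticRegression()
head.fit(X_train, y_train)

# New points drawn from the class-0 blob should be labeled 0.
preds = head.predict(rng.normal(-1, 1, size=(5, 512)))
```

Keeping the head a plain scikit-learn estimator is what makes SetFit cheap to train: only the pair-based fine-tuning touches the transformer weights.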
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SetFit
+ - **Sentence Transformer body:** [sentence-transformers/distiluse-base-multilingual-cased-v1](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v1)
+ - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
+ - **Maximum Sequence Length:** 128 tokens
+ - **Number of Classes:** 2 classes
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
+ - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
+ - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
+
+ ### Model Labels
+ | Label | Examples |
+ |:------|:---------|
+ | 0     | <ul><li>'Molti i pezzi pregiati: il busto in terracotta di Ettore Ximenes (qui sopra), il ritratto di Vittore Tasca vestito da Garibaldino di Giovanni Carnovali, detto il Piccio (foto in alto).'</li><li>'«Eliminare la prescrizione è un concetto fondamentale per credere in uno stato di diritto: è la cosa più ingiusta che un cittadino che possa subire.'</li><li>"I centri anziani sono, infatti, del tutto autonomi e autogestiti, anche dal punto di vista economico: il contributo dei cittadini rimane quindi l'unico modo per provvedere al loro sostentamento."</li></ul> |
+ | 1     | <ul><li>'«Attendiamo i risultati degli accertamenti - ha spiegato l’avvocato Pirro [Antonella Pirro] - il mio assistito e i suoi familiari sono ancora molto scossi per quello che è accaduto».'</li><li>"La prima tappa di un lungo giro di interrogatori è prevista però per lunedìì comparirà davanti al pm Polizzi [Giovanni Polizzi] l'ex vice di Maroni [Roberto Maroni], che resta a San Vittore in attesa della decisione del gip sulla richiesta di scarcerazione."</li><li>"[Marianna Madia] Fu la prima a denunciare il marcio del Pd romano ma oggi fa il ministro della Pubblica amministrazione e come altri non sembra felice di buttarsi in un'impresa abbastanza disperata."</li></ul> |
+
+ ## Evaluation
+
+ ### Metrics
+ | Label   | Accuracy |
+ |:--------|:---------|
+ | **all** | 0.4876   |
+
+ ## Uses
+
+ ### Direct Use for Inference
+
+ First install the SetFit library:
+
+ ```bash
+ pip install setfit
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from setfit import SetFitModel
+
+ # Download from the 🤗 Hub
+ model = SetFitModel.from_pretrained("fede-m/FGSDI_final_setfit_fold_3")
+ # Run inference
+ preds = model("Più di uno, direi.")
+ ```
+
+ <!--
+ ### Downstream Use
+
+ *List how someone could finetune this model on their own dataset.*
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Set Metrics
+ | Training set | Min | Median  | Max |
+ |:-------------|:----|:--------|:----|
+ | Word count   | 3   | 39.7224 | 121 |
+
+ | Label | Training Sample Count |
+ |:------|:----------------------|
+ | 0     | 45                    |
+ | 1     | 254                   |
+
+ ### Training Hyperparameters
+ - batch_size: (16, 16)
+ - num_epochs: (1, 1)
+ - max_steps: -1
+ - sampling_strategy: oversampling
+ - num_iterations: 10
+ - body_learning_rate: (2e-05, 2e-05)
+ - head_learning_rate: 2e-05
+ - loss: CosineSimilarityLoss
+ - distance_metric: cosine_distance
+ - margin: 0.25
+ - end_to_end: False
+ - use_amp: False
+ - warmup_proportion: 0.1
+ - l2_weight: 0.01
+ - seed: 42
+ - eval_max_steps: -1
+ - load_best_model_at_end: False
+
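With `loss: CosineSimilarityLoss`, the body is fine-tuned on labeled sentence pairs whose target is a cosine similarity: 1.0 for same-class pairs, 0.0 otherwise. A simplified sketch of that pair construction (SetFit's actual samplers, e.g. the `oversampling` strategy above, are more involved; the function name is hypothetical):

```python
import random
from itertools import combinations

def make_pairs(texts, labels, seed=42):
    """Build (text_a, text_b, target) triples for a cosine-similarity loss:
    target 1.0 when the two texts share a class label, 0.0 otherwise."""
    rng = random.Random(seed)
    pairs = [(ta, tb, 1.0 if la == lb else 0.0)
             for (ta, la), (tb, lb) in combinations(zip(texts, labels), 2)]
    rng.shuffle(pairs)
    return pairs

# 4 texts -> C(4, 2) = 6 pairs, 2 of them positive
pairs = make_pairs(["a", "b", "c", "d"], [0, 0, 1, 1])
```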
+ ### Training Results
+ | Epoch  | Step | Training Loss | Validation Loss |
+ |:------:|:----:|:-------------:|:---------------:|
+ | 0.0027 | 1    | 0.6338        | -               |
+ | 0.1337 | 50   | 0.2115        | -               |
+ | 0.2674 | 100  | 0.0385        | -               |
+ | 0.4011 | 150  | 0.0039        | -               |
+ | 0.5348 | 200  | 0.0012        | -               |
+ | 0.6684 | 250  | 0.0007        | -               |
+ | 0.8021 | 300  | 0.0003        | -               |
+ | 0.9358 | 350  | 0.0002        | -               |
+
+ ### Framework Versions
+ - Python: 3.11.13
+ - SetFit: 1.1.2
+ - Sentence Transformers: 4.1.0
+ - Transformers: 4.52.4
+ - PyTorch: 2.6.0+cu124
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.2
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @article{https://doi.org/10.48550/arxiv.2209.11055,
+   doi = {10.48550/ARXIV.2209.11055},
+   url = {https://arxiv.org/abs/2209.11055},
+   author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
+   keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
+   title = {Efficient Few-Shot Learning Without Prompts},
+   publisher = {arXiv},
+   year = {2022},
+   copyright = {Creative Commons Attribution 4.0 International}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "activation": "gelu",
+   "architectures": [
+     "DistilBertModel"
+   ],
+   "attention_dropout": 0.1,
+   "dim": 768,
+   "dropout": 0.1,
+   "hidden_dim": 3072,
+   "initializer_range": 0.02,
+   "max_position_embeddings": 512,
+   "model_type": "distilbert",
+   "n_heads": 12,
+   "n_layers": 6,
+   "pad_token_id": 0,
+   "qa_dropout": 0.1,
+   "seq_classif_dropout": 0.2,
+   "sinusoidal_pos_embds": false,
+   "tie_weights_": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.52.4",
+   "vocab_size": 119547
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.1.0",
+     "transformers": "4.52.4",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
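`"similarity_fn_name": "cosine"` means embeddings are compared by cosine similarity, i.e. the dot product of L2-normalized vectors. A one-function sketch (toy vectors only):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity: dot product divided by the product of the norms."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# (1, 0) vs (1, 1): angle is 45 degrees, so similarity is 1/sqrt(2)
s = cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0]))
```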
config_setfit.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "labels": [
+     "0",
+     "1"
+   ],
+   "normalize_embeddings": false
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db0394311e3e140bf5a539ceda0cf7427e5e26a4e0d232174c4e5a53fa680479
+ size 538947416
model_head.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:810ad30c83cead0ef159ecb9d2ad7f5fe98d29757e2e99221e8c8a9db671bc60
+ size 4959
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Dense",
+     "type": "sentence_transformers.models.Dense"
+   }
+ ]
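These three modules run in sequence: Transformer → Pooling → Dense. A NumPy sketch of that data flow, with the transformer stubbed out by random token embeddings and hypothetical dense weights (only the shapes match the configs in this repository):

```python
import numpy as np

rng = np.random.default_rng(0)

# Module 0 (Transformer), stubbed: (seq_len, 768) token embeddings plus a mask.
token_embs = rng.normal(size=(10, 768))
mask = np.ones(10)

# Module 1 (Pooling): mean over unmasked tokens -> (768,)
pooled = (token_embs * mask[:, None]).sum(axis=0) / mask.sum()

# Module 2 (Dense): 768 -> 512 with tanh, stand-in weights -> (512,)
W, b = rng.normal(scale=0.02, size=(512, 768)), np.zeros(512)
sentence_embedding = np.tanh(W @ pooled + b)
```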
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 128,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_len": 512,
+   "model_max_length": 128,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "DistilBertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff