---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: >-
    Etunimi Sukunimi Ruotsin kansasta yli puolet reilusti kannattaa Natoon
    hakemista. Ainoana esteenä näkisin, että Ruotsin asevoimat eivät ole
    läheskään niin hyvällä tolalla kuin Suomen mitä tulee varusmiehiin,
    reserviläisiin tai edes kalustoonkaan. Mutta yhteisen hakemuksen kohdalla
    se tuskin olisi ongelma muille Nato-maille hyväksynnän suhteen. Toinen
    ongelma on, että hyväksynnän tulisi olla sataprosenttinen ja ei ole
    poissujettua, että Venäjä esmes vaikuttaisi yksittäiseen maahan niin, että
    juuri se ei hyväksyisikään hakemusta.
- text: Etunimi Pugh nyt ymmärrän sun puolustelut asut jenkeissä.....
- text: Etunimi Sukunimi ei varmasti moni uskalla
- text: >-
    Etunimi Sukunimi Voisitko laittaa tuohon lastentappoväitteeseen mukaan
    jonkinlaista faktaa. Jää muuten melko irralliseksi heitoksi. Ja etkös
    aiemmin korostanut, että maa ei ole sama kuin ihmiset? No mikä on maa tai
    valtio, se on jäsentensä muodostama. Nyt sitten väität, että Ukraina on
    maana tappanut lapsia 8 vuotta.
- text: >-
    Etunimi Sukunimi Historiaa kirjoitetaan vielä maaliskuun 2020
    tapahtumista.
metrics:
- metric
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: TurkuNLP/bert-base-finnish-cased-v1
model-index:
- name: SetFit with TurkuNLP/bert-base-finnish-cased-v1
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: metric
      value: 0.8718253349471188
      name: Metric
---
# SetFit with TurkuNLP/bert-base-finnish-cased-v1

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [TurkuNLP/bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
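In practice this means inference also runs in two stages: the fine-tuned body embeds the text, then the head classifies the embedding. Below is a minimal sketch of that pipeline, assuming the standard SetFit attribute names `model_body` (the Sentence Transformer) and `model_head` (the scikit-learn LogisticRegression); calling the model directly, as shown under Uses, is equivalent.

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("Finnish-actions/SetFit-FinBERT1-A3")

# Stage 1: the fine-tuned Sentence Transformer body maps texts to embeddings.
embeddings = model.model_body.encode(["Etunimi Sukunimi ei varmasti moni uskalla"])

# Stage 2: the LogisticRegression head classifies the embeddings.
preds = model.model_head.predict(embeddings)
```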
## Model Details

### Model Description
- Model Type: SetFit
- Sentence Transformer body: TurkuNLP/bert-base-finnish-cased-v1
- Classification head: a LogisticRegression instance
- Maximum Sequence Length: 512 tokens
- Number of Classes: 2 classes
### Model Sources

- Repository: [SetFit on GitHub](https://github.com/huggingface/setfit)
- Paper: [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- Blogpost: [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels

| Label | Examples |
|:------|:---------|
| 1 | |
| 0 | |
## Evaluation

### Metrics
| Label | Metric |
|---|---|
| all | 0.8718 |
## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Finnish-actions/SetFit-FinBERT1-A3")
# Run inference
preds = model("Etunimi Sukunimi ei varmasti moni uskalla")
```
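The model also accepts a batch of texts, and the LogisticRegression head exposes class probabilities via the standard SetFit `predict_proba` API. A short sketch, reusing the widget samples above:

```python
texts = [
    "Etunimi Sukunimi ei varmasti moni uskalla",
    "Etunimi Sukunimi Historiaa kirjoitetaan vielä maaliskuun 2020 tapahtumista.",
]

# One predicted label per input text.
preds = model(texts)

# Class probabilities: one row per text, one column per label (0 and 1).
probs = model.predict_proba(texts)
```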
## Training Details

### Training Set Metrics

| Training set | Min | Median | Max |
|:-------------|:--------|:--------|:----|
| Word count | 1 | 19.6854 | 213 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 263 |
| 1 | 700 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 6
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- evaluation_strategy: epoch
- eval_max_steps: -1
- load_best_model_at_end: False
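For orientation, here is a minimal sketch of a training run with these hyperparameters, assuming the setfit 1.x `Trainer` API. The two-example dataset is a placeholder: the actual 963-sample training set is not published with this card.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder dataset: the real training data is not public.
train_dataset = Dataset.from_dict({
    "text": ["esimerkkilause yksi", "esimerkkilause kaksi"],
    "label": [0, 1],
})

# Start from the base Sentence Transformer body named in Model Details.
model = SetFitModel.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")

args = TrainingArguments(
    batch_size=16,
    num_epochs=4,
    num_iterations=6,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```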
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|---|---|---|---|
| 0.0014 | 1 | 0.2224 | - |
| 0.0692 | 50 | 0.2676 | - |
| 0.1383 | 100 | 0.2486 | - |
| 0.2075 | 150 | 0.2208 | - |
| 0.2766 | 200 | 0.1892 | - |
| 0.3458 | 250 | 0.1509 | - |
| 0.4149 | 300 | 0.1194 | - |
| 0.4841 | 350 | 0.0745 | - |
| 0.5533 | 400 | 0.039 | - |
| 0.6224 | 450 | 0.0298 | - |
| 0.6916 | 500 | 0.01 | - |
| 0.7607 | 550 | 0.006 | - |
| 0.8299 | 600 | 0.0021 | - |
| 0.8990 | 650 | 0.0017 | - |
| 0.9682 | 700 | 0.0038 | - |
| 1.0 | 723 | - | 0.2008 |
| 1.0373 | 750 | 0.0088 | - |
| 1.1065 | 800 | 0.0041 | - |
| 1.1757 | 850 | 0.0067 | - |
| 1.2448 | 900 | 0.0041 | - |
| 1.3140 | 950 | 0.0021 | - |
| 1.3831 | 1000 | 0.0036 | - |
| 1.4523 | 1050 | 0.0036 | - |
| 1.5214 | 1100 | 0.0011 | - |
| 1.5906 | 1150 | 0.0035 | - |
| 1.6598 | 1200 | 0.0047 | - |
| 1.7289 | 1250 | 0.0005 | - |
| 1.7981 | 1300 | 0.0002 | - |
| 1.8672 | 1350 | 0.0029 | - |
| 1.9364 | 1400 | 0.0029 | - |
| 2.0 | 1446 | - | 0.2342 |
| 2.0055 | 1450 | 0.0014 | - |
| 2.0747 | 1500 | 0.0023 | - |
| 2.1438 | 1550 | 0.0022 | - |
| 2.2130 | 1600 | 0.0014 | - |
| 2.2822 | 1650 | 0.0024 | - |
| 2.3513 | 1700 | 0.0035 | - |
| 2.4205 | 1750 | 0.0014 | - |
| 2.4896 | 1800 | 0.0022 | - |
| 2.5588 | 1850 | 0.0025 | - |
| 2.6279 | 1900 | 0.0003 | - |
| 2.6971 | 1950 | 0.0042 | - |
| 2.7663 | 2000 | 0.0014 | - |
| 2.8354 | 2050 | 0.0003 | - |
| 2.9046 | 2100 | 0.0022 | - |
| 2.9737 | 2150 | 0.0031 | - |
| 3.0 | 2169 | - | 0.2224 |
| 3.0429 | 2200 | 0.0016 | - |
| 3.1120 | 2250 | 0.0014 | - |
| 3.1812 | 2300 | 0.005 | - |
| 3.2503 | 2350 | 0.0045 | - |
| 3.3195 | 2400 | 0.001 | - |
| 3.3887 | 2450 | 0.0012 | - |
| 3.4578 | 2500 | 0.0004 | - |
| 3.5270 | 2550 | 0.0013 | - |
| 3.5961 | 2600 | 0.0022 | - |
| 3.6653 | 2650 | 0.0009 | - |
| 3.7344 | 2700 | 0.0018 | - |
| 3.8036 | 2750 | 0.0015 | - |
| 3.8728 | 2800 | 0.0019 | - |
| 3.9419 | 2850 | 0.0025 | - |
| 4.0 | 2892 | - | 0.2222 |
### Framework Versions
- Python: 3.11.9
- SetFit: 1.1.3
- Sentence Transformers: 3.2.0
- Transformers: 4.44.0
- PyTorch: 2.4.0+cu124
- Datasets: 2.21.0
- Tokenizers: 0.19.1
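To approximate this environment, the listed versions can be pinned directly. A sketch assuming pip; note that the PyTorch build 2.4.0+cu124 is a CUDA wheel, so the exact torch install command depends on your platform:

```bash
pip install setfit==1.1.3 sentence-transformers==3.2.0 transformers==4.44.0 datasets==2.21.0 tokenizers==0.19.1
```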
## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```