Paper: [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) (arXiv:2209.11055)
This is a SetFit model trained on the tmp-org/edeka-dataset-ctx-1 dataset that can be used for Text Classification. This SetFit model uses Alibaba-NLP/gte-multilingual-base as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a Sentence Transformer with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
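For orientation, here is a minimal sketch of that training procedure using the setfit `Trainer` API. The hyperparameters are hypothetical (this card does not report them), and the `"text"`/`"label"` column names are assumptions about the dataset layout; depending on your setfit/transformers versions, loading this encoder may also require `trust_remote_code=True`.

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Dataset named in this card; "text"/"label" column names are assumptions.
dataset = load_dataset("tmp-org/edeka-dataset-ctx-1")

# The default SetFit head is a scikit-learn LogisticRegression,
# matching the head used by this model.
model = SetFitModel.from_pretrained("Alibaba-NLP/gte-multilingual-base")

# Hypothetical hyperparameters; not reported in this card.
args = TrainingArguments(batch_size=16, num_epochs=1)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
)
trainer.train()  # contrastive fine-tuning, then fits the classification head
```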
The model predicts one of the following 23 labels:

| Label |
|---|
| Other_Prospekt |
| Start_Start |
| Other_Code einlösen |
| Prämien_Prämien |
| Other_Loading |
| Other_Treueaktionen |
| Other_Neuigkeiten |
| Other_Produktherkunft |
| Other_Marktsuche |
| Other_Menu |
| Kasse_Mobil bezahlen |
| Other_Kassenbons |
| Sparen_Angebote |
| Sparen_Coupons |
| Other_Coupon details |
| Kasse_Kasse |
| Kasse_Aktivierte Coupons |
| Einkaufsliste_Einkaufsliste |
| Other_Unknown |
| Start_Loading |
| Sparen_Loading |
| Other_Other |
| Kasse_Loading |
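The label names appear to combine two parts separated by an underscore (e.g. `Kasse_Mobil bezahlen`), seemingly an app section plus a screen name. This reading is inferred from the names alone and is not documented in the card; under that assumption, a prediction can be split back into its parts:

```python
# Assumed "<section>_<screen>" layout; split on the first underscore only,
# since screen names may contain spaces but section names do not.
section, screen = "Kasse_Mobil bezahlen".split("_", 1)
# -> ("Kasse", "Mobil bezahlen")
```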
First install the SetFit library:

```bash
pip install setfit
```
Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("tmp-org/tmp_cv_model_2025_10_06_0")
# Run inference (the example input spans multiple lines,
# so it is passed as a triple-quoted string)
preds = model("""Prospekt () [TextView|SubActivity] | 1 / 12 () [TextView|SubActivity]
[SELECTED START]
Prospekt () [TextView|SubActivity] | 1 / 12 () [TextView|SubActivity]
[CONTEXT SEPARATOR]
Prospekt () [TextView|SubActivity] | 1 / 12 () [TextView|SubActivity]""")
```
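Calling the model directly returns predicted labels. If you want class probabilities instead, `SetFitModel` also provides a `predict_proba` method as part of the standard setfit API:

```python
# Returns one probability row per input, one column per class.
probs = model.predict_proba([
    "Prospekt () [TextView|SubActivity] | 1 / 12 () [TextView|SubActivity]"
])
```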
Training set metrics (word counts per training example):

| Training set | Min | Median | Max |
|---|---|---|---|
| Word count | 9 | 273.4604 | 709 |
Training sample counts per label:

| Label | Training Sample Count |
|---|---|
| Einkaufsliste_Einkaufsliste | 31 |
| Kasse_Aktivierte Coupons | 40 |
| Kasse_Kasse | 20 |
| Kasse_Loading | 2 |
| Kasse_Mobil bezahlen | 6 |
| Other_Code einlösen | 2 |
| Other_Coupon details | 32 |
| Other_Kassenbons | 8 |
| Other_Loading | 1 |
| Other_Marktsuche | 6 |
| Other_Menu | 40 |
| Other_Neuigkeiten | 8 |
| Other_Other | 1 |
| Other_Produktherkunft | 12 |
| Other_Prospekt | 33 |
| Other_Treueaktionen | 36 |
| Other_Unknown | 10 |
| Prämien_Prämien | 38 |
| Sparen_Angebote | 40 |
| Sparen_Coupons | 40 |
| Sparen_Loading | 3 |
| Start_Loading | 5 |
| Start_Start | 40 |
Training losses logged during fine-tuning (no validation loss was recorded):

| Epoch | Step | Training Loss | Validation Loss |
|---|---|---|---|
| 0.0003 | 1 | 0.1538 | - |
| 0.0132 | 50 | 0.2586 | - |
| 0.0264 | 100 | 0.1891 | - |
| 0.0396 | 150 | 0.1552 | - |
| 0.0528 | 200 | 0.1651 | - |
| 0.0660 | 250 | 0.1258 | - |
| 0.0792 | 300 | 0.1179 | - |
| 0.0924 | 350 | 0.0903 | - |
| 0.1056 | 400 | 0.0894 | - |
| 0.1188 | 450 | 0.1039 | - |
| 0.1320 | 500 | 0.0899 | - |
| 0.1452 | 550 | 0.0983 | - |
| 0.1584 | 600 | 0.0573 | - |
| 0.1715 | 650 | 0.0735 | - |
| 0.1847 | 700 | 0.0643 | - |
| 0.1979 | 750 | 0.0616 | - |
| 0.2111 | 800 | 0.0754 | - |
| 0.2243 | 850 | 0.0516 | - |
| 0.2375 | 900 | 0.0620 | - |
| 0.2507 | 950 | 0.0675 | - |
| 0.2639 | 1000 | 0.0657 | - |
| 0.2771 | 1050 | 0.0471 | - |
| 0.2903 | 1100 | 0.0428 | - |
| 0.3035 | 1150 | 0.0266 | - |
| 0.3167 | 1200 | 0.0306 | - |
| 0.3299 | 1250 | 0.0446 | - |
| 0.3431 | 1300 | 0.0429 | - |
| 0.3563 | 1350 | 0.0359 | - |
| 0.3695 | 1400 | 0.0391 | - |
| 0.3827 | 1450 | 0.0569 | - |
| 0.3959 | 1500 | 0.0446 | - |
| 0.4091 | 1550 | 0.0311 | - |
| 0.4223 | 1600 | 0.0383 | - |
| 0.4355 | 1650 | 0.0358 | - |
| 0.4487 | 1700 | 0.0454 | - |
| 0.4619 | 1750 | 0.0319 | - |
| 0.4751 | 1800 | 0.0481 | - |
| 0.4883 | 1850 | 0.0459 | - |
| 0.5015 | 1900 | 0.0470 | - |
| 0.5146 | 1950 | 0.0348 | - |
| 0.5278 | 2000 | 0.0352 | - |
| 0.5410 | 2050 | 0.0294 | - |
| 0.5542 | 2100 | 0.0385 | - |
| 0.5674 | 2150 | 0.0343 | - |
| 0.5806 | 2200 | 0.0369 | - |
| 0.5938 | 2250 | 0.0350 | - |
| 0.6070 | 2300 | 0.0188 | - |
| 0.6202 | 2350 | 0.0301 | - |
| 0.6334 | 2400 | 0.0438 | - |
| 0.6466 | 2450 | 0.0295 | - |
| 0.6598 | 2500 | 0.0292 | - |
| 0.6730 | 2550 | 0.0208 | - |
| 0.6862 | 2600 | 0.0195 | - |
| 0.6994 | 2650 | 0.0263 | - |
| 0.7126 | 2700 | 0.0391 | - |
| 0.7258 | 2750 | 0.0252 | - |
| 0.7390 | 2800 | 0.0320 | - |
| 0.7522 | 2850 | 0.0194 | - |
| 0.7654 | 2900 | 0.0331 | - |
| 0.7786 | 2950 | 0.0171 | - |
| 0.7918 | 3000 | 0.0307 | - |
| 0.8050 | 3050 | 0.0236 | - |
| 0.8182 | 3100 | 0.0361 | - |
| 0.8314 | 3150 | 0.0096 | - |
| 0.8446 | 3200 | 0.0265 | - |
| 0.8577 | 3250 | 0.0251 | - |
| 0.8709 | 3300 | 0.0384 | - |
| 0.8841 | 3350 | 0.0196 | - |
| 0.8973 | 3400 | 0.0157 | - |
| 0.9105 | 3450 | 0.0271 | - |
| 0.9237 | 3500 | 0.0206 | - |
| 0.9369 | 3550 | 0.0128 | - |
| 0.9501 | 3600 | 0.0160 | - |
| 0.9633 | 3650 | 0.0115 | - |
| 0.9765 | 3700 | 0.0162 | - |
| 0.9897 | 3750 | 0.0175 | - |
To cite the SetFit paper:

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
Base model: Alibaba-NLP/gte-multilingual-base