SetFit with sentence-transformers/all-mpnet-base-v2
This is a SetFit model trained on the hojzas/proj4-all-labs dataset that can be used for text classification. It uses sentence-transformers/all-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.
The model has been trained using an efficient few-shot learning technique that involves:
- Fine-tuning a Sentence Transformer with contrastive learning.
- Training a classification head with features from the fine-tuned Sentence Transformer.
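These two stages correspond to the two components of the saved model. As a minimal sketch (assuming the attribute names used by recent SetFit releases), the fine-tuned embedding body and the logistic-regression head can be inspected after loading:

```python
from setfit import SetFitModel

# Load the trained model from the Hub; the body is the fine-tuned
# Sentence Transformer, the head is a scikit-learn LogisticRegression.
model = SetFitModel.from_pretrained("hojzas/proj4-all-labs")

print(type(model.model_body))  # sentence_transformers.SentenceTransformer
print(type(model.model_head))  # sklearn.linear_model.LogisticRegression
```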
Model Details
Model Description
Model Sources
Model Labels
| Label | Examples |
|:------|:---------|
| 0 | <ul><li>" perms = all_permutations_substrings(string)\n return set(''.join(perm) for word in words for perm in perms if word == perm)"</li><li>' perms = all_permutations_substrings(string)\n out = set()\n for w in words:\n for s in perms:\n if w == s:\n out.add(w)\n return out'</li><li>' perms = all_permutations_substrings(string)\n return set(word for word in words if word in perms)'</li></ul> |
| 1 | <ul><li>' perms = all_permutations_substrings(string)\n return perms.intersection(words)'</li><li>' perms = all_permutations_substrings(string)\n return set.intersection(perms,words)'</li><li>' perms = all_permutations_substrings(string)\n return set(perms).intersection(words)'</li></ul> |
| 3 | <ul><li>' it = list(dict.fromkeys(it))\n it.sort()\n return it'</li><li>' sequence = []\n for i in it:\n if i in sequence:\n pass\n else:\n sequence.append(i)\n sequence.sort()\n return sequence'</li><li>' unique = list(set(it))\n unique.sort()\n return unique'</li></ul> |
| 2 | <ul><li>'return sorted(list({word : it.count(word) for (word) in set(it)}.keys())) '</li><li>'return list(dict.fromkeys(sorted(it)))'</li><li>'return sorted((list(dict.fromkeys(it)))) '</li></ul> |
| 4 | <ul><li>' unique_items = set(it)\n return sorted(list(unique_items))'</li><li>' letters = set(it)\n sorted_letters = sorted(letters)\n return sorted_letters'</li><li>'return list(sorted(set(it)))'</li></ul> |
| 5 | <ul><li>' outputSequence = []\n for input in it:\n found = 0\n for output in outputSequence:\n if output == input:\n found = 1\n break\n if not found:\n outputSequence.append(input)\n return outputSequence'</li><li>' uniq = []\n for char in it:\n if not char in uniq:\n uniq.append(char)\n return uniq'</li><li>'return sorted(set(it), key=lambda y: it.index(y)) '</li></ul> |
| 6 | <ul><li>'return [tmp for tmp in dict.fromkeys(it).keys()]'</li><li>'return [i for i in dict.fromkeys(it)]'</li><li>'return list(dict.fromkeys(it))'</li></ul> |
Uses
Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel

# Download the model from the 🤗 Hub
model = SetFitModel.from_pretrained("hojzas/proj4-all-labs")
# Run inference on a single code submission
preds = model("return list(dict.fromkeys(sorted(it)))")
```
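The call above returns the predicted label for a single snippet. Batch prediction works the same way; the snippets and the printed output below are illustrative only:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("hojzas/proj4-all-labs")

# Predict labels for several submissions at once; the returned label ids
# correspond to the rows of the Model Labels table above.
snippets = [
    " perms = all_permutations_substrings(string)\n return perms.intersection(words)",
    "return list(dict.fromkeys(it))",
]
preds = model.predict(snippets)
print(preds)  # e.g. [1, 6] (illustrative)
```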
Training Details
Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 25.0515 | 140 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 35 |
| 1 | 14 |
| 2 | 8 |
| 3 | 10 |
| 4 | 9 |
| 5 | 13 |
| 6 | 8 |
Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
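Taken together, these settings correspond roughly to the following training script. This is a sketch using the SetFit 1.x Trainer API; the dataset split and column names ("train", "text", "label") are assumptions, not taken from the original run:

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Assumed: a "train" split with "text" and "label" columns.
train_dataset = load_dataset("hojzas/proj4-all-labs", split="train")

model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```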
Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------|:-----|:--------------|:----------------|
| 0.0041 | 1 | 0.1745 | - |
| 0.2058 | 50 | 0.0355 | - |
| 0.4115 | 100 | 0.0168 | - |
| 0.6173 | 150 | 0.0042 | - |
| 0.8230 | 200 | 0.0075 | - |
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Carbon Emitted: 0.006 kg of CO2
- Hours Used: 0.019 hours
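For context, figures like these are typically collected by wrapping the training run in a CodeCarbon emissions tracker; the snippet below is a minimal sketch, not the exact measurement script used for this model:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()   # estimates energy use of the local machine
tracker.start()
trainer.train()                # assumes the trainer from the sketch above
emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-equivalent
print(f"Carbon emitted: {emissions_kg:.3f} kg")
```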
Training Hardware
- On Cloud: No
- GPU Model: 4 x NVIDIA RTX A5000
- CPU Model: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
- RAM Size: 251.49 GB
Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.1
- PyTorch: 2.1.2+cu121
- Datasets: 2.14.7
- Tokenizers: 0.15.1
Citation
BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```