SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a SetFit model that can be used for Text Classification. It uses sentence-transformers/paraphrase-mpnet-base-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves two steps (illustrated in the code sketch after this list):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
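
As a rough illustration of this two-step procedure, the sketch below fine-tunes the same embedding model with setfit's Trainer. The toy dataset is hypothetical; the texts and labels are placeholders, not the actual training data:

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the Sentence Transformer body; SetFit attaches a
# LogisticRegression classification head by default.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Hypothetical few-shot dataset (the real run used two labeled sentences per class).
train_dataset = Dataset.from_dict({
    "text": [
        "Steam locomotives carried mail across the countryside.",
        "Creators blew up on TikTok and AI tools like ChatGPT became common.",
    ],
    "label": [0, 7],
})

args = TrainingArguments(batch_size=16, num_epochs=4)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fit the head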

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/paraphrase-mpnet-base-v2
  • Classification head: a LogisticRegression instance
  • Number of Classes: 8

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055
  • Blogpost: https://huggingface.co/blog/setfit

Model Labels

Label Examples

Label 0
  • 'Steam locomotives carried mail across the countryside.'
  • 'Horse-drawn carriages lined unpaved city streets.'
Label 1
  • 'The Manhattan Project developed atomic weapons.'
  • 'Rationing and blackouts were common during the Blitz.'
Label 2
  • 'Sputnik launched and kicked off the space race.'
  • 'The Berlin Wall symbolized Cold War tensions.'
Label 3
  • 'MySpace was the most popular social network for a while.'
  • 'The Iraq War dominated headlines; people used early Facebook.'
Label 4
  • 'The first iPads changed tablet computing.'
  • 'App stores exploded; people queued for new iPhones.'
Label 5
  • 'Fidget spinners were everywhere in middle schools.'
  • 'Fortnite and Instagram Stories became daily habits.'
Label 6
  • 'Schools went remote during the pandemic; everyone wore N95s and used Zoom.'
  • 'Lockdowns, PCR tests, and Zoom school were widespread.'
Label 7
  • 'Creators blew up on TikTok and AI tools like ChatGPT became common.'
  • 'Hybrid work continues; large language models entered workplaces.'

Evaluation

Metrics

Label   Accuracy
all     1.0
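
A held-out accuracy like this can be computed with the Trainer's evaluate method; a minimal sketch, assuming the trainer from the training sketch above and a hypothetical labeled test split:

from datasets import Dataset

# Hypothetical held-out examples; `trainer` comes from the training sketch above.
test_dataset = Dataset.from_dict({
    "text": ["Sputnik launched and kicked off the space race."],
    "label": [2],
})
metrics = trainer.evaluate(test_dataset)  # accuracy is the default metric
print(metrics)  # e.g. {'accuracy': 1.0}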

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("DelaliScratchwerk/text-period-setfit")
# Run inference
preds = model("TikTok creators exploded in popularity.")
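
The model also accepts a batch of texts, and the LogisticRegression head exposes class probabilities; a small sketch (the input sentences are illustrative):

# Batch inference: one predicted label per input text
preds = model.predict([
    "Steam locomotives carried mail across the countryside.",
    "Lockdowns, PCR tests, and Zoom school were widespread.",
])
# Per-class probabilities from the LogisticRegression head
probs = model.predict_proba(["TikTok creators exploded in popularity."])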

Training Details

Training Set Metrics

Training set   Min   Median   Max
Word count     6     8.0625   12

Label   Training Sample Count
0       2
1       2
2       2
3       2
4       2
5       2
6       2
7       2

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (4, 4)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
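
These fields mirror setfit's TrainingArguments; a hedged reconstruction of the configuration is below (tuples give separate values for the embedding and classifier phases; distance_metric and the evaluation options are left at their defaults here):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),              # (embedding phase, classifier phase)
    num_epochs=(4, 4),
    max_steps=-1,                     # no step cap
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,                      # only used by margin-based losses
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)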

Training Results

Epoch   Step   Training Loss   Validation Loss
0.025   1      0.0637          -
1.25    50     0.0372          -
2.5     100    0.0035          -
3.75    150    0.0017          -

Framework Versions

  • Python: 3.10.7
  • SetFit: 1.1.3
  • Sentence Transformers: 5.1.0
  • Transformers: 4.56.2
  • PyTorch: 2.2.2
  • Datasets: 4.1.1
  • Tokenizers: 0.22.1
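
To approximate this environment, the versions above can be pinned at install time (a sketch; it assumes Python 3.10 and that compatible wheels are available for your platform):

pip install "setfit==1.1.3" "sentence-transformers==5.1.0" "transformers==4.56.2" "torch==2.2.2" "datasets==4.1.1" "tokenizers==0.22.1"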

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}