SetFit with google/embeddinggemma-300M

This is a SetFit model for text classification. It uses google/embeddinggemma-300M as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
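
As a minimal sketch of these two phases (the dataset here is illustrative, not the actual training data; see Model Labels below for real examples), SetFit's Trainer runs both inside a single train() call:

from datasets import Dataset
from setfit import SetFitModel, Trainer

# Illustrative two-class training set; SetFit's Trainer expects
# "text" and "label" columns by default.
train_dataset = Dataset.from_dict({
    "text": ["дай лапу", "дать лапу", "лежать", "лечь"],  # "give paw" x2, "lie down" x2
    "label": ["give_paw", "give_paw", "lie_down", "lie_down"],
})

# Loading a plain Sentence Transformer attaches SetFit's default
# LogisticRegression classification head.
model = SetFitModel.from_pretrained("google/embeddinggemma-300M")

trainer = Trainer(model=model, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning; step 2: head fitting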

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: google/embeddinggemma-300M
  • Classification head: a LogisticRegression instance
  • Number of Classes: 14

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label Examples
The training utterances are Russian voice commands; English glosses are shown in parentheses.

help
  • 'помощь' ("help")
  • 'помоги' ("help", informal imperative)
  • 'помогите' ("help", formal imperative)
silence
  • 'тишина' ("silence")
  • 'молчи' ("be quiet", informal)
  • 'молчите' ("be quiet", formal)
bind
  • 'привяжи робота' ("bind the robot")
  • 'привяжи панду' ("bind the panda")
  • 'привяжи робота 1' ("bind robot 1")
unbind
  • 'отвяжи робота' ("unbind the robot")
  • 'отвяжи панду' ("unbind the panda")
  • 'отвяжите робота' ("unbind the robot", formal)
report_command
  • 'исправить команду' ("fix the command", infinitive)
  • 'исправь команду' ("fix the command", informal)
  • 'исправьте команду' ("fix the command", formal)
give_paw
  • 'лапу' ("paw")
  • 'дай лапу' ("give paw")
  • 'дать лапу' ("give paw", infinitive)
stand_at_attention
  • 'равняйсь' ("attention!", drill command)
  • 'равняйся' ("stand at attention", informal)
  • 'равняться' ("stand at attention", infinitive)
dismiss
  • 'отставить' ("as you were")
  • 'отставь' ("stand down", informal)
  • 'встать' ("stand up")
lie_down
  • 'лежать' ("lie down")
  • 'лечь' ("lie down", perfective)
  • 'ложиться' ("lie down", imperfective)
rotate
  • 'кувыркнуться' ("do a somersault", infinitive)
  • 'кувыркнись' ("do a somersault", imperative)
  • 'кувыркаться' ("somersault", imperfective)
run
  • 'бежать' ("run", infinitive)
  • 'беги' ("run", imperative)
  • 'бегать' ("run", imperfective)
stop_running
  • 'остановиться' ("stop", infinitive)
  • 'остановись' ("stop", imperative)
  • 'останавливаться' ("stop", imperfective)
reconnect_joystick
  • 'подключить джойстик' ("connect the joystick", infinitive)
  • 'подключи джойстик' ("connect the joystick", imperative)
  • 'подключать джойстик' ("connect the joystick", imperfective)
unknown
  • 'привет' ("hi")
  • 'как дела' ("how are you")
  • 'что происходит' ("what's going on")

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ShiWarai/CVC-Panda")
# Run inference ("часто вращается" ≈ "it spins frequently")
preds = model("часто вращается")
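
The model also accepts a list of utterances, and SetFitModel.predict_proba exposes the LogisticRegression head's per-class probabilities, which can be useful for routing low-confidence inputs to the unknown label. The utterances below are illustrative:

texts = ["дай лапу", "молчи"]        # "give paw", "be quiet"
print(model(texts))                  # e.g. ['give_paw', 'silence']
print(model.predict_proba(texts))    # one row of class probabilities per utterance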

Training Details

Training Set Metrics

Training set  Min  Median  Max
Word count    1    2.3808  7

Label               Training Sample Count
bind                55
dismiss             160
give_paw            104
help                22
lie_down            172
reconnect_joystick  135
report_command      50
rotate              137
run                 106
silence             27
stand_at_attention  88
stop_running        135
unbind              37
unknown             479

Training Hyperparameters

  • batch_size: (256, 256)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
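
These bullets correspond one-to-one to the fields of SetFit's TrainingArguments. As a sketch of reproducing the configuration (wiring into the Trainer from the earlier example):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Tuples give (embedding phase, classifier phase) values separately.
args = TrainingArguments(
    batch_size=(256, 256),
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)
# distance_metric defaults to cosine_distance, matching the list above.

Passing args=args to the Trainer in the earlier sketch applies this configuration.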

Training Results

Epoch   Step  Training Loss  Validation Loss
0.0037  1     0.2375         -
0.1873  50    0.0728         -
0.3745  100   0.009          -
0.5618  150   0.005          -
0.7491  200   0.0038         -
0.9363  250   0.0028         -

Framework Versions

  • Python: 3.11.14
  • SetFit: 1.1.3
  • Sentence Transformers: 5.2.2
  • Transformers: 4.57.6
  • PyTorch: 2.9.1+cu128
  • Datasets: 4.5.0
  • Tokenizers: 0.22.2

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}