SetFit with mini1013/master_domain

This is a SetFit model that can be used for text classification. It uses mini1013/master_domain as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
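The sketch below shows what this two-phase recipe looks like with the setfit library. The toy train_dataset here is a hypothetical placeholder with two examples per label, not the data this model was actually trained on.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot data; the real training set used 9-70 examples
# per label (see Training Set Metrics below).
train_dataset = Dataset.from_dict({
    "text": [
        "제맥스 박서 야외용 탁구대 스포츠/레저>탁구>탁구대",
        "휠러스 미니 탁구대 스포츠/레저>탁구>탁구대",
        "비코트 슈퍼 ABS 40+ 탁구공 스포츠/레저>탁구>탁구공",
        "안드로 스피드볼 Mi 1 탁구공 스포츠/레저>탁구>탁구공",
    ],
    "label": [2.0, 2.0, 1.0, 1.0],
})

# Phase 1 (contrastive fine-tuning of the body) and phase 2 (fitting the
# LogisticRegression head on the resulting embeddings) are both driven
# by Trainer.train().
model = SetFitModel.from_pretrained("mini1013/master_domain")
args = TrainingArguments(batch_size=256, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()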

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: mini1013/master_domain (fine-tuned from klue/roberta-base)
  • Classification head: a LogisticRegression instance
  • Number of Classes: 7
  • Model size: ~0.1B parameters (F32)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label 2.0
  • '제맥스 박서 야외용 탁구대 스포츠/레저>탁구>탁구대'
  • '챔피온 H-50 탁구대 스포츠/레저>탁구>탁구대'
  • '휠러스 미니 탁구대 스포츠/레저>탁구>탁구대'
Label 3.0
  • '버터플라이 RDJ S1 탁구라켓 스포츠/레저>탁구>탁구라켓'
  • '엑시옴 하야부사 ZL PRO 탁구라켓 FL 스포츠/레저>탁구>탁구라켓'
  • '고집통 나노100 탁구라켓 스포츠/레저>탁구>탁구라켓'
Label 5.0
  • '미즈노 탁구복 게임 팬츠 탁구 유니섹스 Mizuno 82JBA10009 스포츠/레저>탁구>탁구의류'
  • '버터플라이 윈로고 티셔츠 스포츠/레저>탁구>탁구의류'
  • '미즈노 탁구복 게임 바지 탁구웨어 426388 82JB900109 스포츠/레저>탁구>탁구의류'
Label 6.0
  • '리닝 탁구화 23 아틀란티스 보아 라임 - 바운스플러스 슈퍼라이트 시리즈 스포츠/레저>탁구>탁구화'
  • '아식스 탁구화 ATTACK DOMINATE FF 2 1073A010-003 스포츠/레저>탁구>탁구화'
  • '미즈노 탁구화 웨이브 드라이브 EL 경량성 쿠션성 81GA2001 스포츠/레저>탁구>탁구화'
Label 0.0
  • '엑시옴 솔라이트 SOLITE 백팩 스포츠/레저>탁구>기타탁구용품'
  • 'Dawei 다웨이 미디엄 핌플러버 스폰지버전 - 탁구러버 388C-1 스포츠/레저>탁구>기타탁구용품'
  • '도닉 탁구 러버 블루스톰 프로 스포츠/레저>탁구>기타탁구용품'
Label 1.0
  • '비코트 슈퍼 ABS 40+ 탁구공 스포츠/레저>탁구>탁구공'
  • '안드로 스피드볼 Mi 1 탁구공 스포츠/레저>탁구>탁구공'
  • '오로라 AURORA 3성 BST 시합구 스포츠/레저>탁구>탁구공'
Label 4.0
  • '탁구러버 티바 에볼루션 MX-P 스포츠/레저>탁구>탁구러버'
  • '티바 하이브리드 K3 HYBRID K3 평면 탁구러버 스포츠/레저>탁구>탁구러버'
  • '엑시옴 탁구러버 베가 프로 스포츠/레저>탁구>탁구러버'
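Each label id corresponds to the trailing category in the example texts above. A small lookup table like the following (the English glosses in parentheses are ours, not part of the dataset) can turn raw predictions into readable category names:

# Mapping derived from the category paths in the examples above;
# the English glosses are informal translations.
LABEL2CATEGORY = {
    0.0: "기타탁구용품 (other table tennis goods)",
    1.0: "탁구공 (balls)",
    2.0: "탁구대 (tables)",
    3.0: "탁구라켓 (rackets)",
    4.0: "탁구러버 (rubbers)",
    5.0: "탁구의류 (apparel)",
    6.0: "탁구화 (shoes)",
}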

Evaluation

Metrics

Label  Accuracy
all    1.0
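A minimal sketch of how an accuracy figure like this can be recomputed with scikit-learn; eval_texts and eval_labels below are hypothetical placeholders for the real held-out split.

from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("mini1013/master_cate_sl29")

# Hypothetical held-out examples; substitute the actual evaluation split.
eval_texts = ["버터플라이 윈로고 티셔츠 스포츠/레저>탁구>탁구의류"]
eval_labels = [5.0]

preds = model.predict(eval_texts)
print(accuracy_score(eval_labels, [float(p) for p in preds]))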

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_sl29")
# Run inference
preds = model("엑시옴 탁구상의 토마스 탁구유니폼 티셔츠 스포츠/레저>탁구>탁구의류")
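Inference also accepts batches, and because the head is a LogisticRegression instance the model can expose per-class probabilities. A short sketch, where the input strings are just illustrative examples drawn from the label table above:

# Batch inference: one predicted label per input text
preds = model.predict([
    "비코트 슈퍼 ABS 40+ 탁구공 스포츠/레저>탁구>탁구공",
    "아식스 탁구화 ATTACK DOMINATE FF 2 1073A010-003 스포츠/레저>탁구>탁구화",
])

# Per-class probabilities from the LogisticRegression head
probs = model.predict_proba(["버터플라이 윈로고 티셔츠 스포츠/레저>탁구>탁구의류"])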

Training Details

Training Set Metrics

Training set  Min  Median  Max
Word count    3    6.8776  14

Label  Training Sample Count
0.0    70
1.0    70
2.0    25
3.0    70
4.0    9
5.0    70
6.0    70

Training Hyperparameters

  • batch_size: (256, 256)
  • num_epochs: (30, 30)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 50
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
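These settings map one-to-one onto setfit's TrainingArguments; a sketch of how the configuration above could be reconstructed (tuple values are (embedding phase, classifier phase) pairs):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(256, 256),              # (embedding, classifier) batch sizes
    num_epochs=(30, 30),
    body_learning_rate=(2e-05, 1e-05),  # body LR in each phase
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    sampling_strategy="oversampling",
    num_iterations=50,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    end_to_end=False,
    use_amp=False,
)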

Training Results

A "-" in the Validation Loss column means no value was recorded at that step.

Epoch Step Training Loss Validation Loss
0.0133 1 0.4688 -
0.6667 50 0.5004 -
1.3333 100 0.1817 -
2.0 150 0.0186 -
2.6667 200 0.0024 -
3.3333 250 0.0009 -
4.0 300 0.0001 -
4.6667 350 0.0 -
5.3333 400 0.0 -
6.0 450 0.0 -
6.6667 500 0.0 -
7.3333 550 0.0 -
8.0 600 0.0 -
8.6667 650 0.0 -
9.3333 700 0.0 -
10.0 750 0.0 -
10.6667 800 0.0 -
11.3333 850 0.0 -
12.0 900 0.0 -
12.6667 950 0.0 -
13.3333 1000 0.0 -
14.0 1050 0.0 -
14.6667 1100 0.0 -
15.3333 1150 0.0 -
16.0 1200 0.0 -
16.6667 1250 0.0 -
17.3333 1300 0.0 -
18.0 1350 0.0 -
18.6667 1400 0.0 -
19.3333 1450 0.0 -
20.0 1500 0.0 -
20.6667 1550 0.0 -
21.3333 1600 0.0 -
22.0 1650 0.0 -
22.6667 1700 0.0 -
23.3333 1750 0.0 -
24.0 1800 0.0 -
24.6667 1850 0.0 -
25.3333 1900 0.0 -
26.0 1950 0.0 -
26.6667 2000 0.0 -
27.3333 2050 0.0 -
28.0 2100 0.0 -
28.6667 2150 0.0 -
29.3333 2200 0.0 -
30.0 2250 0.0 -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.1.0
  • Sentence Transformers: 3.3.1
  • Transformers: 4.44.2
  • PyTorch: 2.2.0a0+81ea7a4
  • Datasets: 3.2.0
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}