---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '[도착보장] 엘프레리 에어윙 팬티 밤 기저귀 4팩 M사이즈 (팩당 32개입) 출산/육아 > 기저귀 > 일회용기저귀'
- text: 마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀
- text: 플라팜 뉴코코맘 아기 천기저귀 5매 출산/육아 > 기저귀 > 천기저귀
- text: 프리미엄 친환경 아기 팬티기저귀 XL 18매 출산/육아 > 기저귀 > 일회용기저귀
- text: 팸퍼스 2025 통잠팬티 팬티형 밤기저귀 4단계 4팩+4팩(총 240매) 출산/육아 > 기저귀 > 일회용기저귀
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: mini1013/master_domain
---
# SetFit with mini1013/master_domain
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (see the training sketch below).
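A minimal sketch of this two-step procedure with the `setfit` `Trainer` (the tiny dataset below is a hypothetical few-shot sample drawn from the label examples further down; the real run used 20 examples per class and the hyperparameters listed under Training Details):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot sample; "text"/"label" columns follow the label table below.
train_dataset = Dataset.from_dict({
    "text": [
        "마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀",
        "아가방 새싹오가닉 기저귀 5매 출산/육아 > 기저귀 > 천기저귀",
    ],
    "label": [1.0, 3.0],
})

# Step 1: contrastive fine-tuning of the Sentence Transformer body.
# Step 2: fitting the LogisticRegression head on the fine-tuned embeddings.
model = SetFitModel.from_pretrained("mini1013/master_domain")
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=256, num_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()
```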
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'아라칸 아기 물놀이 방수 기저귀 3개입 2세트 총 6매 출산/육아 > 기저귀 > 수영장기저귀'</li><li>'마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀'</li><li>'밤보 물놀이 수영팬티 스몰 1팩(12P) 출산/육아 > 기저귀 > 수영장기저귀'</li></ul> |
| 2.0 | <ul><li>'나비잠 나비잠 울트라씬듀얼핏 팬티 6팩 출산/육아 > 기저귀 > 일회용기저귀'</li><li>'르소메 프리미엄 통잠 밤 아기 신생아 발진없는 밴드형 기저귀 2팩 출산/육아 > 기저귀 > 일회용기저귀'</li><li>'애플크럼비 [보리보리/애플크럼비]애플크럼비 NEW 오리지널 테이프 XL 6팩(108매) 출산/육아 > 기저귀 > 일회용기저귀'</li></ul> |
| 3.0 | <ul><li>'아가방 새싹오가닉 기저귀 5매 출산/육아 > 기저귀 > 천기저귀'</li><li>'베베라온 신생아 밤부 천기저귀 선물 체험 출산/육아 > 기저귀 > 천기저귀'</li><li>'투유모유 무형광 무나염 순면 국산 아기 천기저귀 2박스 구매시 파우치 증정 출산/육아 > 기저귀 > 천기저귀'</li></ul> |
| 0.0 | <ul><li>'[베이비앙] 국내산 무형광 사이즈 상관없이 벨크로 탈부착으로 사용 가능 기저귀 고정을 위한 천 기저귀밴드 출산/육아 > 기저귀 > 기저귀커버/기저귀밴드'</li><li>'처비체리 천기저귀 커버 쁘띠코숑 P tit Cochon 1개 출산/육아 > 기저귀 > 기저귀커버/기저귀밴드'</li><li>'포켓식 원사이즈 기저귀커버 3장세트(잠금장치&색상선택) 출산/육아 > 기저귀 > 기저귀커버/기저귀밴드'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_bc2")
# Run inference
preds = model("플라팜 뉴코코맘 아기 천기저귀 5매 출산/육아 > 기저귀 > 천기저귀")
```
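Batch prediction and class probabilities are also available via `model.predict` and `model.predict_proba` (SetFit 1.x). The label-to-category mapping below is only an illustration inferred from the label examples table above:
```python
texts = [
    "마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀",
    "프리미엄 친환경 아기 팬티기저귀 XL 18매 출산/육아 > 기저귀 > 일회용기저귀",
]
preds = model.predict(texts)        # label ids, e.g. [1.0, 2.0]
probs = model.predict_proba(texts)  # per-class probabilities from the LogisticRegression head

# Illustrative mapping inferred from the "Model Labels" table; not shipped with the model.
id2category = {
    0.0: "기저귀커버/기저귀밴드",
    1.0: "수영장기저귀",
    2.0: "일회용기저귀",
    3.0: "천기저귀",
}
print([id2category[float(p)] for p in preds])
```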
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 9 | 12.95 | 20 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 20 |
| 1.0 | 20 |
| 2.0 | 20 |
| 3.0 | 20 |
### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 50
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
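For reference, a sketch of how these values map onto `setfit.TrainingArguments` (parameter names are assumed to match the SetFit 1.1 API; `CosineSimilarityLoss` comes from `sentence_transformers.losses`):
```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
# distance_metric, margin, and eval_max_steps are left at their defaults,
# which already match the listed values (cosine_distance, 0.25, -1).
args = TrainingArguments(
    batch_size=(256, 256),
    num_epochs=(30, 30),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=50,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    load_best_model_at_end=False,
)
```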
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0625 | 1 | 0.476 | - |
| 3.125 | 50 | 0.3608 | - |
| 6.25 | 100 | 0.0472 | - |
| 9.375 | 150 | 0.0 | - |
| 12.5 | 200 | 0.0 | - |
| 15.625 | 250 | 0.0 | - |
| 18.75 | 300 | 0.0 | - |
| 21.875 | 350 | 0.0 | - |
| 25.0 | 400 | 0.0 | - |
| 28.125 | 450 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->