---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '[도착보장] 엘프레리 에어윙 팬티 밤 기저귀 4팩 M사이즈 (팩당 32개입) 출산/육아 > 기저귀 > 일회용기저귀'
- text: 마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀
- text: 플라팜 뉴코코맘 아기 천기저귀 5매 출산/육아 > 기저귀 > 천기저귀
- text: 프리미엄 친환경 아기 팬티기저귀 XL 18매 출산/육아 > 기저귀 > 일회용기저귀
- text: 팸퍼스 2025 통잠팬티 팬티형 밤기저귀 4단계 4팩+4팩(총 240매) 출산/육아 > 기저귀 > 일회용기저귀
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: mini1013/master_domain
---

# SetFit with mini1013/master_domain

This is a [SetFit](https://github.com/huggingface/setfit) model for text classification. It uses [mini1013/master_domain](https://huggingface.co/mini1013/master_domain) as the Sentence Transformer embedding model and a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance as the classification head. Judging from the label examples below, it assigns Korean product titles to four diaper-related categories (diaper covers/bands, swim diapers, disposable diapers, and cloth diapers).

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

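
The second step amounts to a standard scikit-learn workflow: embed each text with the fine-tuned body and fit a logistic-regression head on the embeddings. The snippet below is a minimal, illustrative sketch of that idea, not the actual training code; the texts and labels are placeholders taken from the examples in this card, and in practice SetFit's trainer handles this step internally.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Placeholder few-shot data: Korean product titles with their numeric labels.
texts = [
    "마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀",
    "플라팜 뉴코코맘 아기 천기저귀 5매 출산/육아 > 기저귀 > 천기저귀",
]
labels = [1.0, 3.0]

# Step 1 (contrastive fine-tuning) is assumed to have produced this embedding model.
body = SentenceTransformer("mini1013/master_domain")

# Step 2: encode the texts and fit a LogisticRegression head on the embeddings.
embeddings = body.encode(texts)
head = LogisticRegression().fit(embeddings, labels)
```
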

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [mini1013/master_domain](https://huggingface.co/mini1013/master_domain)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
- **Language:** Korean
<!-- - **License:** Unknown -->

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels

| Label | Examples |
|:------|:---------|
| 1.0   | <ul><li>'아라칸 아기 물놀이 방수 기저귀 3개입 2세트 총 6매 출산/육아 > 기저귀 > 수영장기저귀'</li><li>'마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀'</li><li>'밤보 물놀이 수영팬티 스몰 1팩(12P) 출산/육아 > 기저귀 > 수영장기저귀'</li></ul> |
| 2.0   | <ul><li>'나비잠 나비잠 울트라씬듀얼핏 팬티 6팩 출산/육아 > 기저귀 > 일회용기저귀'</li><li>'르소메 프리미엄 통잠 밤 아기 신생아 발진없는 밴드형 기저귀 2팩 출산/육아 > 기저귀 > 일회용기저귀'</li><li>'애플크럼비 [보리보리/애플크럼비]애플크럼비 NEW 오리지널 테이프 XL 6팩(108매) 출산/육아 > 기저귀 > 일회용기저귀'</li></ul> |
| 3.0   | <ul><li>'아가방 새싹오가닉 기저귀 5매 출산/육아 > 기저귀 > 천기저귀'</li><li>'베베라온 신생아 밤부 천기저귀 선물 체험 출산/육아 > 기저귀 > 천기저귀'</li><li>'투유모유 무형광 무나염 순면 국산 아기 천기저귀 2박스 구매시 파우치 증정 출산/육아 > 기저귀 > 천기저귀'</li></ul> |
| 0.0   | <ul><li>'[베이비앙] 국내산 무형광 사이즈 상관없이 벨크로 탈부착으로 사용 가능 기저귀 고정을 위한 천 기저귀밴드 출산/육아 > 기저귀 > 기저귀커버/기저귀밴드'</li><li>'처비체리 천기저귀 커버 쁘띠코숑 P tit Cochon 1개 출산/육아 > 기저귀 > 기저귀커버/기저귀밴드'</li><li>'포켓식 원사이즈 기저귀커버 3장세트(잠금장치&색상선택) 출산/육아 > 기저귀 > 기저귀커버/기저귀밴드'</li></ul> |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mini1013/master_cate_bc2")
# Run inference
preds = model("플라팜 뉴코코맘 아기 천기저귀 5매 출산/육아 > 기저귀 > 천기저귀")
```

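
The returned `preds` are the numeric labels from the Model Labels table above. As a hedged follow-up to the snippet, SetFit models also expose a `predict_proba` method for per-class probabilities; note that the `label_names` dictionary below is simply our own reading of the label examples in this card, not metadata stored with the model.

```python
# Continuing from the snippet above (reuses the loaded `model`).
# Hypothetical mapping from numeric labels to the categories seen in the Model Labels table.
label_names = {
    0.0: "기저귀커버/기저귀밴드 (diaper covers/bands)",
    1.0: "수영장기저귀 (swim diapers)",
    2.0: "일회용기저귀 (disposable diapers)",
    3.0: "천기저귀 (cloth diapers)",
}

texts = ["마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀"]
preds = model.predict(texts)        # numeric labels, e.g. [1.0]
probs = model.predict_proba(texts)  # per-class probabilities from the LogisticRegression head
print(label_names[float(preds[0])], probs[0])
```
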

<!--
### Downstream Use

*List how someone could finetune this model on their own dataset.*
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Set Metrics

| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 9   | 12.95  | 20  |

| Label | Training Sample Count |
|:------|:----------------------|
| 0.0   | 20                    |
| 1.0   | 20                    |
| 2.0   | 20                    |
| 3.0   | 20                    |

### Training Hyperparameters

- batch_size: (256, 256)
- num_epochs: (30, 30)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 50
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

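
These values correspond to fields of SetFit's `TrainingArguments`. The sketch below shows, under the assumption of the SetFit 1.1 `Trainer`/`TrainingArguments` API and a hypothetical `train_dataset` with `text` and `label` columns (placeholder rows taken from this card), how a comparable run could be configured; it is illustrative rather than the exact script used to train this model.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot dataset with "text" and "label" columns.
train_dataset = Dataset.from_dict({
    "text": [
        "아가방 새싹오가닉 기저귀 5매 출산/육아 > 기저귀 > 천기저귀",
        "베베라온 신생아 밤부 천기저귀 선물 체험 출산/육아 > 기저귀 > 천기저귀",
        "마미포코 물놀이팬티 4-5단계 (남녀선택) 12매 출산/육아 > 기저귀 > 수영장기저귀",
        "밤보 물놀이 수영팬티 스몰 1팩(12P) 출산/육아 > 기저귀 > 수영장기저귀",
    ],
    "label": [3.0, 3.0, 1.0, 1.0],
})

model = SetFitModel.from_pretrained("mini1013/master_domain")

args = TrainingArguments(
    batch_size=(256, 256),              # (embedding phase, classifier phase)
    num_epochs=(30, 30),                # (embedding phase, classifier phase)
    sampling_strategy="oversampling",
    num_iterations=50,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```
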

### Training Results

| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0625 | 1    | 0.476         | -               |
| 3.125  | 50   | 0.3608        | -               |
| 6.25   | 100  | 0.0472        | -               |
| 9.375  | 150  | 0.0           | -               |
| 12.5   | 200  | 0.0           | -               |
| 15.625 | 250  | 0.0           | -               |
| 18.75  | 300  | 0.0           | -               |
| 21.875 | 350  | 0.0           | -               |
| 25.0   | 400  | 0.0           | -               |
| 28.125 | 450  | 0.0           | -               |

### Framework Versions

- Python: 3.10.12
- SetFit: 1.1.0
- Sentence Transformers: 3.3.1
- Transformers: 4.44.2
- PyTorch: 2.2.0a0+81ea7a4
- Datasets: 3.2.0
- Tokenizers: 0.19.1

## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->