---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Die elektronische Patientenakte ist ein großer Schritt nach vorn für das deutsche Gesundheitswesen. Sie ermöglicht eine bessere Koordination zwischen Ärzten und sorgt dafür, dass Patienten immer die richtigen Informationen erhalten.
- text: Die ePA könnte das Gesundheitssystem verbessern, aber es ist noch unklar, wie sie in der Praxis funktioniert.
- text: Die Möglichkeit, meine Daten selbst zu verwalten und zu entscheiden, wer darauf zugreifen kann, macht die ePA für mich sehr attraktiv.
- text: Die ePA ist ein komplexes Thema, bei dem ich noch nicht weiß, ob ich dafür oder dagegen bin.
- text: Die ePA wird uns als Fortschritt verkauft, aber in Wirklichkeit eröffnet sie nur neue Möglichkeiten für Missbrauch und Datenlecks.
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
---

# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label        | Examples |
|:-------------|:---------|
| neutral      |          |
| ablehnend    |          |
| befürwortend |          |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```
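Once installed, you can load the model from the Hub and, if you wish, confirm the two components described under Model Details. This is an illustrative sketch rather than part of the original card; `"setfit_model_id"` is the same placeholder used in the inference example below.

```python
from setfit import SetFitModel

# "setfit_model_id" is a placeholder for this model's Hub repository ID
model = SetFitModel.from_pretrained("setfit_model_id")

# The body is the fine-tuned Sentence Transformer embedding model
print(model.model_body)

# The head is the scikit-learn LogisticRegression classifier trained on its embeddings
print(model.model_head)

# The string labels the head predicts (expected: ablehnend, befürwortend, neutral)
print(model.labels)
```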
Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("Die ePA ist ein komplexes Thema, bei dem ich noch nicht weiß, ob ich dafür oder dagegen bin.")
```

## Training Details

### Training Set Metrics
| Training set | Min | Median  | Max |
|:-------------|:----|:--------|:----|
| Word count   | 11  | 16.9365 | 23  |

| Label        | Training Sample Count |
|:-------------|:----------------------|
| ablehnend    | 21                    |
| neutral      | 21                    |
| befürwortend | 21                    |

### Training Hyperparameters
- batch_size: (3, 3)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 63
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0011 | 1    | 0.151         | -               |
| 0.0567 | 50   | 0.184         | -               |
| 0.1134 | 100  | 0.1252        | -               |
| 0.1701 | 150  | 0.0585        | -               |
| 0.2268 | 200  | 0.0116        | -               |
| 0.2834 | 250  | 0.0039        | -               |
| 0.3401 | 300  | 0.002         | -               |
| 0.3968 | 350  | 0.0013        | -               |
| 0.4535 | 400  | 0.0007        | -               |
| 0.5102 | 450  | 0.0008        | -               |
| 0.5669 | 500  | 0.0005        | -               |
| 0.6236 | 550  | 0.0005        | -               |
| 0.6803 | 600  | 0.0004        | -               |
| 0.7370 | 650  | 0.0004        | -               |
| 0.7937 | 700  | 0.0003        | -               |
| 0.8503 | 750  | 0.0003        | -               |
| 0.9070 | 800  | 0.0003        | -               |
| 0.9637 | 850  | 0.0002        | -               |
| 1.0204 | 900  | 0.0002        | -               |
| 1.0771 | 950  | 0.0001        | -               |
| 1.1338 | 1000 | 0.0002        | -               |
| 1.1905 | 1050 | 0.0001        | -               |
| 1.2472 | 1100 | 0.0002        | -               |
| 1.3039 | 1150 | 0.0002        | -               |
| 1.3605 | 1200 | 0.0002        | -               |
| 1.4172 | 1250 | 0.0001        | -               |
| 1.4739 | 1300 | 0.0001        | -               |
| 1.5306 | 1350 | 0.0001        | -               |
| 1.5873 | 1400 | 0.0001        | -               |
| 1.6440 | 1450 | 0.0001        | -               |
| 1.7007 | 1500 | 0.0001        | -               |
| 1.7574 | 1550 | 0.0001        | -               |
| 1.8141 | 1600 | 0.0001        | -               |
| 1.8707 | 1650 | 0.0001        | -               |
| 1.9274 | 1700 | 0.0001        | -               |
| 1.9841 | 1750 | 0.0001        | -               |

### Framework Versions
- Python: 3.12.12
- SetFit: 1.1.3
- Sentence Transformers: 5.2.3
- Transformers: 4.44.2
- PyTorch: 2.10.0
- Datasets: 4.6.0
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```
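For readers who want to train a comparable model, the following minimal sketch ties together the base Sentence Transformer, the three labels, and the hyperparameters listed under Training Hyperparameters above. It is not part of the original card: the example sentences and the output directory name are hypothetical stand-ins, since the actual training data (21 labelled German sentences per class) is not published here.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical stand-in data; replace with your own labelled examples.
train_dataset = Dataset.from_dict({
    "text": [
        "Die ePA erleichtert den Austausch zwischen meinen Ärzten.",
        "Ich befürchte, dass meine Gesundheitsdaten nicht sicher sind.",
        "Ich weiß noch nicht, ob ich die ePA nutzen werde.",
    ],
    "label": ["befürwortend", "ablehnend", "neutral"],
})

# Start from the same multilingual Sentence Transformer body used by this model;
# the default head is a scikit-learn LogisticRegression, as described above.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    labels=["ablehnend", "befürwortend", "neutral"],
)

# Hyperparameters copied from the table above; anything not listed stays at its default.
args = TrainingArguments(
    batch_size=(3, 3),                  # (embedding phase, classifier phase)
    num_epochs=(2, 2),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    l2_weight=0.01,
    warmup_proportion=0.1,
    sampling_strategy="oversampling",
    seed=63,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

model.save_pretrained("setfit-epa-stance")  # hypothetical local output directory
```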