---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: dataright np^sin 2 np^pi 224 t | Audio
- text: robust way to ask the database for its current transaction state. | AtomicTests
- text: the string marking the beginning of a print statement. | Environment
- text: handled otherwise by a particular method. | StringMethods
- text: table. | PlotAccessor
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
---

# SetFit

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A MultiOutputClassifier instance is used as the classification head. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description

- **Model Type:** SetFit
- **Classification head:** a MultiOutputClassifier instance
- **Maximum Sequence Length:** 128 tokens

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference:

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("table. | PlotAccessor")
```

Because the head is a MultiOutputClassifier, predictions are one 0/1 indicator per label column rather than a single class id; see "Example Sketches" at the end of this card.

## Training Details

### Training Set Metrics

| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count   | 3   | 8.9868 | 28  |

### Training Hyperparameters

- batch_size: (32, 32)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

These values map one-to-one onto SetFit's `TrainingArguments`; a hedged training sketch that uses them appears under "Example Sketches" below.

### Framework Versions

- Python: 3.10.8
- SetFit: 1.1.2
- Sentence Transformers: 5.0.0
- Transformers: 4.54.1
- PyTorch: 2.7.1+cu126
- Datasets: 3.6.0
- Tokenizers: 0.21.4

## Citation

### BibTeX

```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
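## Example Sketches

### Reading Multi-Output Predictions

A minimal sketch of how the multi-label output can be read. The model id `setfit_model_id` is the same placeholder used above, and this card does not record the label names or count, so the output shape is described generically.

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("setfit_model_id")  # placeholder id, as above

preds = model.predict([
    "table. | PlotAccessor",
    "robust way to ask the database for its current transaction state. | AtomicTests",
])
# With a MultiOutputClassifier head, `preds` has shape (2, num_labels):
# each column is an independent 0/1 decision for one label.
print(preds)
```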
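### Classification Head

With SetFit's "multi-output" strategy, the head is a scikit-learn `MultiOutputClassifier` that fits one logistic-regression estimator per label column on the sentence embeddings. A standalone sketch of that behaviour, with made-up embedding size and label count (768 dimensions and 5 labels are assumptions, not values from this card):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(16, 768))    # stand-in for 16 sentence embeddings
y = (X[:, :5] > 0).astype(int)    # 5 toy binary label columns

# One LogisticRegression per label column, mirroring SetFit's default head
# for multi_target_strategy="multi-output".
head = MultiOutputClassifier(LogisticRegression())
head.fit(X, y)

print(head.predict(X[:2]))  # shape (2, 5): one 0/1 prediction per column
```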
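### Training Sketch

The hyperparameters listed above correspond to fields on SetFit's `TrainingArguments`. The following hedged sketch reproduces those settings; the base checkpoint `sentence-transformers/paraphrase-mpnet-base-v2` and the four toy training rows are placeholders, not values recorded in this card, and parameters left at their listed defaults (such as `max_steps: -1`) are omitted.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder body; the actual base Sentence Transformer is not recorded here.
# "multi-output" wraps the head in a scikit-learn MultiOutputClassifier.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="multi-output",
)

# Values copied from the "Training Hyperparameters" section.
args = TrainingArguments(
    batch_size=(32, 32),
    num_epochs=(10, 10),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    load_best_model_at_end=False,
)

# Toy multi-label data (hypothetical): one list of 0/1 indicators per example.
train_dataset = Dataset.from_dict({
    "text": [
        "table. | PlotAccessor",
        "handled otherwise by a particular method. | StringMethods",
        "the string marking the beginning of a print statement. | Environment",
        "robust way to ask the database for its current transaction state. | AtomicTests",
    ],
    "label": [[1, 0], [0, 1], [0, 1], [1, 0]],
})

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```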