---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: '- Barth BM, Shanmugavelandy SS, Tacelosky DM, Kester M, Morad SA, Cabot MC (2013). "Gaucher''s disease and cancer: a sphingolipid perspective". Crit Rev Oncog 18 (3): 221–34. doi:10.1615/critrevoncog.2013005814. PMC 3604879.'
- text: '"The intersection of attention-deficit/hyperactivity disorder and substance abuse". Curr Opin Psychiatry. 24 (4): 280–285. doi:10.1097/YCO.0b013e328345c956. PMC .'
- text: 'Parrilla-Rodriguez AM, Gorrin-Peralta JJ. La Lactancia Materna en Puerto Rico: Patrones Tradicionales, Tendencias Nacionales y Estrategias para el Futuro. P R Health Sci J 1999;18:223-228. (42.) Ni H, Simile C, Hardy AM.'
- text: 'For cases where there is an actual exposure to someone who is confirmed to have COVID-19, report code Z20.828, Contact with and (suspected) exposure to other viral communicable diseases. This code is not necessary if the exposed patient is confirmed to have COVID-19. - Signs and symptoms: For patients presenting with any signs/symptoms and where a definitive diagnosis has not been established, assign the appropriate code(s) for each of the presenting signs and symptoms such as: Cough (R05); Shortness of breath (R06.02) or Fever unspecified (R50.9). Do not report “suspected” cases of COVID-19 with B97.29. In addition, diagnosis code B34.2, Coronavirus infection, unspecified, typically is not appropriate.'
- text: '- HCPCS codes: what the provider used. - ICD-10-CM: why the provider ''did'' and ''used''. For example, if a urologist diagnoses a patient with bladder cancer and performs a bladder instillation of 1 mg of Bacillus Calmette-Guerin (BCG) to treat the tumor, the medical coder might assign: - CPT® codes (did): 51720 (Bladder instillation of anticarcinogenic agent (including retention time)) - HCPCS code (used): J9030 (BCG live intravesical instillation, 1mg) - ICD-10 code (why): C67.9 (Malignant neoplasm of bladder, unspecified) As mentioned above, though, there are some exceptions to these general code set concepts. WHEN TO CHOOSE CPT® Vs HCPCS First, not all payers accept HCPCS Level II codes. Initially intended for Medicare claims, many private payers have since adopted the HCPCS Level II code set.'
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.8571428571428571
      name: Accuracy
---

# SetFit with sentence-transformers/paraphrase-mpnet-base-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
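Both steps can be reproduced with the `setfit` trainer API. Below is a minimal sketch, assuming a hypothetical few-shot dataset: the texts and labels are illustrative placeholders, not this model's actual training data, while the hyperparameter values mirror those listed under Training Hyperparameters further down.

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot data; this model was trained on 8 examples per class.
train_dataset = Dataset.from_dict({
    "text": [
        '"An example citation". J Example. 1 (1): 1-2. doi:10.0000/example. PMC 1234567.',
        '"An example citation with a missing identifier". J Example. 1 (1): 1-2. PMC .',
    ],
    "label": [1, 0],
})

# Integer labels are mapped to these names in prediction output.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    labels=["negative", "positive"],
)

args = TrainingArguments(
    batch_size=16,
    num_epochs=4,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    seed=42,
)

# Trainer.train() performs step 1 (contrastive fine-tuning of the body) and
# step 2 (fitting the LogisticRegression head on the resulting embeddings).
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```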
## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label    | Examples |
|:---------|:---------|
| negative |          |
| positive |          |

## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.8571   |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ashercn97/code-y-v3")
# Run inference
preds = model("\"The intersection of attention-deficit/hyperactivity disorder and substance abuse\". Curr Opin Psychiatry. 24 (4): 280–285. doi:10.1097/YCO.0b013e328345c956. PMC .")
```
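The model also accepts a batch of texts, and the LogisticRegression head exposes class probabilities. A short sketch, with hypothetical input texts:

```python
texts = [
    '"An example citation". J Example. 1 (1): 1-2. doi:10.0000/example. PMC 1234567.',
    '"An example citation with a missing identifier". PMC .',
]
preds = model.predict(texts)        # one label per input text
probs = model.predict_proba(texts)  # per-class probabilities from the head
```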
PMC .") ``` ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:---------|:----| | Word count | 21 | 101.3125 | 172 | | Label | Training Sample Count | |:---------|:----------------------| | negative | 8 | | positive | 8 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (4, 4) - max_steps: -1 - sampling_strategy: oversampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.1111 | 1 | 0.4011 | - | | 1.0 | 9 | - | 0.1458 | | 2.0 | 18 | - | 0.0775 | | 3.0 | 27 | - | 0.0748 | | 4.0 | 36 | - | 0.0664 | ### Framework Versions - Python: 3.10.12 - SetFit: 1.1.2 - Sentence Transformers: 4.0.2 - Transformers: 4.51.3 - PyTorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```