fabiancpl committed · Commit b71e0f4 · verified · 1 Parent(s): cc3d0c6

Update README.md

Files changed (1):
  1. README.md +34 -91

README.md CHANGED
@@ -7,50 +7,50 @@ tags:
  widget: []
  metrics:
  - accuracy
  pipeline_tag: text-classification
  library_name: setfit
  inference: true
- base_model: NLBSE/nlbse25_pharo
  ---

- # SetFit with NLBSE/nlbse25_pharo

- This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [NLBSE/nlbse25_pharo](https://huggingface.co/NLBSE/nlbse25_pharo) as the Sentence Transformer embedding model. A RandomForestClassifier instance is used for classification.

- The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
- 2. Training a classification head with features from the fine-tuned Sentence Transformer.

- ## Model Details

- ### Model Description
  - **Model Type:** SetFit
- - **Sentence Transformer body:** [NLBSE/nlbse25_pharo](https://huggingface.co/NLBSE/nlbse25_pharo)
- - **Classification head:** a RandomForestClassifier instance
- - **Maximum Sequence Length:** 128 tokens
- <!-- - **Number of Classes:** Unknown -->
- <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->

- ### Model Sources

- - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

- ## Uses

- ### Direct Use for Inference
-
- First install the SetFit library:

  ```bash
- pip install setfit
  ```

- Then you can load this model and run inference.

  ```python
  from setfit import SetFitModel
@@ -58,74 +58,17 @@ from setfit import SetFitModel
  # Download from the 🤗 Hub
  model = SetFitModel.from_pretrained("fabiancpl/nlbse25_pharo")
  # Run inference
- preds = model("I loved the spiderman movie!")
  ```

- <!--
- ### Downstream Use
-
- *List how someone could finetune this model on their own dataset.*
- -->
-
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
-
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
-
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->
-
- ## Training Details

- ### Framework Versions
- - Python: 3.12.4
- - SetFit: 1.1.0
- - Sentence Transformers: 3.3.0
- - Transformers: 4.42.2
- - PyTorch: 2.5.1+cu124
- - Datasets: 3.1.0
- - Tokenizers: 0.19.1
-
- ## Citation
-
- ### BibTeX
  ```bibtex
- @article{https://doi.org/10.48550/arxiv.2209.11055,
- doi = {10.48550/ARXIV.2209.11055},
- url = {https://arxiv.org/abs/2209.11055},
- author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
- keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
- title = {Efficient Few-Shot Learning Without Prompts},
- publisher = {arXiv},
- year = {2022},
- copyright = {Creative Commons Attribution 4.0 International}
- }
- ```
-
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
 
  widget: []
  metrics:
  - accuracy
+ - f1
+ - precision
+ - recall
  pipeline_tag: text-classification
  library_name: setfit
  inference: true
+ license: mit
+ datasets:
+ - NLBSE/nlbse25-code-comment-classification
+ language:
+ - en
+ base_model:
+ - sentence-transformers/paraphrase-MiniLM-L3-v2
  ---

+ # Pharo comment classifier

+ This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Pharo code comment classification.

+ The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned model.

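The second step can be sketched in isolation: once the fine-tuned Sentence Transformer has embedded each comment, a scikit-learn classifier is fit on those vectors as the head. A minimal sketch with random vectors standing in for real embeddings (illustrative data only, not the actual training setup of this model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Step 2 in isolation: fit a RandomForestClassifier head on sentence embeddings.
# Random vectors stand in for embeddings from the fine-tuned Sentence Transformer.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(40, 384))  # paraphrase-MiniLM-L3-v2 produces 384-dim vectors
labels = rng.integers(0, 3, size=40)     # three illustrative comment categories

head = RandomForestClassifier(random_state=0)
head.fit(embeddings, labels)
print(head.predict(embeddings[:2]))  # predicted category ids for two comments
```

At inference time, SetFit applies exactly this pipeline: embed the comment with the fine-tuned body, then let the head predict the category.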

+ ## Model Description

  - **Model Type:** SetFit
+ - **Classification head:** [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)

+ ## Sources

+ - **Repository:** [GitHub](https://github.com/fabiancpl/sbert-comment-classification/)
+ - **Paper:** [Evaluating the Performance and Efficiency of Sentence-BERT for Code Comment Classification](https://ieeexplore.ieee.org/document/11029440)
+ - **Dataset:** [HF Dataset](https://huggingface.co/datasets/NLBSE/nlbse25-code-comment-classification)

+ ## How to use it

+ First, install the dependencies:

  ```bash
+ pip install setfit scikit-learn
  ```

+ Then, load the model and run inference:

  ```python
  from setfit import SetFitModel

  # Download from the 🤗 Hub
  model = SetFitModel.from_pretrained("fabiancpl/nlbse25_pharo")
  # Run inference
+ preds = model("This function sorts a list of numbers.")
  ```
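The metrics listed in the card header (accuracy, f1, precision, recall) can be computed with scikit-learn once gold labels are available. A minimal sketch with made-up labels and predictions (the category names here are illustrative, not the model's actual label set):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Made-up gold labels and predictions for four comments (illustrative only)
y_true = ["intent", "example", "responsibilities", "example"]
y_pred = ["intent", "example", "example", "example"]

print(accuracy_score(y_true, y_pred))  # 0.75
print(f1_score(y_true, y_pred, average="macro", zero_division=0))
print(precision_score(y_true, y_pred, average="macro", zero_division=0))
print(recall_score(y_true, y_pred, average="macro", zero_division=0))
```

Macro averaging weights each comment category equally, which matters when categories are imbalanced, as is common in code comment datasets.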

+ ## Cite as

  ```bibtex
+ @inproceedings{11029440,
+   author={Peña, Fabian C. and Herbold, Steffen},
+   booktitle={2025 IEEE/ACM International Workshop on Natural Language-Based Software Engineering (NLBSE)},
+   title={Evaluating the Performance and Efficiency of Sentence-BERT for Code Comment Classification},
+   year={2025},
+   pages={21-24},
+   doi={10.1109/NLBSE66842.2025.00010}}
+ ```