How to use aieng-lab/t5-3b_comment-type-python with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="aieng-lab/t5-3b_comment-type-python")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("aieng-lab/t5-3b_comment-type-python")
model = AutoModelForSequenceClassification.from_pretrained("aieng-lab/t5-3b_comment-type-python")
```
```yaml
library_name: transformers
license: mit
language:
- en
metrics:
- f1
- precision
- recall
base_model:
- t5-3b
pipeline_tag: text-classification
```
# T5 3b for classifying code comments (multi-label)

This model classifies comments in Python code into one or more of the categories 'usage', 'parameters', 'developmentNotes', 'expand', and 'summary'.

- **Developed by:** Fabian C. Peña, Steffen Herbold
- **Finetuned from:** [t5-3b](https://huggingface.co/t5-3b)
- **Replication kit:** [https://github.com/aieng-lab/senlp-benchmark](https://github.com/aieng-lab/senlp-benchmark)
- **Language:** English
- **License:** MIT
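Because the card describes the task as multi-label, per-class scores are typically passed through a sigmoid and thresholded independently, rather than taking a single argmax over the classes. A minimal sketch of that decision step on made-up logits (the label order, the 0.5 threshold, and the logits themselves are illustrative assumptions, not taken from the model):

```python
import math

# Label set from the model card; this index order is an assumption.
LABELS = ["usage", "parameters", "developmentNotes", "expand", "summary"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold=0.5):
    """Multi-label decision: keep every class whose sigmoid score reaches the threshold."""
    return [label for label, z in zip(LABELS, logits) if sigmoid(z) >= threshold]

# Made-up logits for illustration (not real model output):
print(predict_labels([2.1, -0.3, -1.7, 0.4, 1.2]))  # → ['usage', 'expand', 'summary']
```

With real model output, the same thresholding would be applied to `model(**tokenizer(comment, return_tensors="pt")).logits` after a sigmoid.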
## Citation

```
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
  title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
  year = {2025}
}
```