How to use aieng-lab/Llama-3.2-3B_issue-type with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="aieng-lab/Llama-3.2-3B_issue-type")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("aieng-lab/Llama-3.2-3B_issue-type")
model = AutoModelForSequenceClassification.from_pretrained("aieng-lab/Llama-3.2-3B_issue-type")
```
Llama 3.2 3B for classifying issues
This model classifies GitHub issues as 'bug', 'enhancement', or 'question'.
- Developed by: Fabian C. Peña, Steffen Herbold
- Finetuned from: meta-llama/Llama-3.2-3B
- Replication kit: https://github.com/aieng-lab/senlp-benchmark
- Language: English
- License: Llama 3.2 Community License Agreement
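A minimal sketch of how predictions from this model might be post-processed. The `top_label` and `classify_issue` helpers and the example issue title are illustrative, not part of the model's API; the label set is the one stated above.

```python
def top_label(scores):
    """Pick (label, score) with the highest score from the list of
    {'label': ..., 'score': ...} dicts the pipeline returns with top_k=None."""
    best = max(scores, key=lambda s: s["score"])
    return best["label"], best["score"]

def classify_issue(text):
    # Deferred import so top_label stays usable without transformers installed
    from transformers import pipeline

    # Downloads the fine-tuned checkpoint on first use
    pipe = pipeline("text-classification", model="aieng-lab/Llama-3.2-3B_issue-type")
    return top_label(pipe(text, top_k=None))

# Hypothetical issue title; the expected classes are 'bug', 'enhancement', 'question':
# classify_issue("App crashes when clicking the save button")
```

Passing `top_k=None` makes the pipeline return the score for every class instead of only the top one, which is useful if you want to apply a confidence threshold before trusting a label.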
Citation
```bibtex
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
  title = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
  year = {2025}
}
```