```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("aieng-lab/bert-base-cased_comment-type-java")
model = AutoModelForSequenceClassification.from_pretrained("aieng-lab/bert-base-cased_comment-type-java")
```
# BERT base for classifying code comments (multi-label)
This model classifies comments in Java code as 'summary', 'ownership', 'expand', 'usage', 'pointer', 'deprecation', or 'rational'.
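Because the task is multi-label, each comment can belong to several categories at once, so the model's raw logits should be decoded with an independent per-label sigmoid and threshold rather than a softmax. A minimal sketch of that decoding step, using hypothetical logits and assuming the label order above matches the model's `id2label` mapping (check `model.config.id2label` to be sure):

```python
import math

# Assumed label order -- verify against model.config.id2label.
LABELS = ["summary", "ownership", "expand", "usage", "pointer", "deprecation", "rational"]

def decode_multilabel(logits, threshold=0.5):
    """Apply a per-label sigmoid and keep every label scoring above the threshold."""
    probs = [1 / (1 + math.exp(-x)) for x in logits]
    return [(label, p) for label, p in zip(LABELS, probs) if p >= threshold]

# Hypothetical logits for one comment (illustrative values only).
example_logits = [2.1, -3.0, -1.5, 0.8, -2.2, -4.0, -1.0]
print(decode_multilabel(example_logits))  # both 'summary' and 'usage' pass the threshold
```

With real model output, the same decoding would be applied to `model(**tokenizer(comment, return_tensors="pt")).logits`.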
- Developed by: Fabian C. Peña, Steffen Herbold
- Finetuned from: bert-base-cased
- Replication kit: https://github.com/aieng-lab/senlp-benchmark
- Language: English
- License: MIT
## Citation
```bibtex
@misc{pena2025benchmark,
  author = {Fabian Peña and Steffen Herbold},
  title  = {Evaluating Large Language Models on Non-Code Software Engineering Tasks},
  year   = {2025}
}
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="aieng-lab/bert-base-cased_comment-type-java")
```