Model Card for LoRA-finetuned BERT
This is a bert-base-uncased model fine-tuned with LoRA (Low-Rank Adaptation) via the PEFT library. LoRA trains a small set of low-rank adapter weights while keeping the base model frozen, enabling efficient adaptation to NLP tasks such as text classification and named entity recognition with minimal additional parameters.
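The adapter was trained with PEFT's LoRA integration. The sketch below shows how a LoRA adapter of this kind can be attached to the base model for training; the rank, alpha, dropout, target modules, and label count are illustrative assumptions, not the configuration actually used for this checkpoint.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model
# Illustrative LoRA setup: r, lora_alpha, target_modules and num_labels are assumed values
base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=20)
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,         # sequence classification task
    r=8,                                # low-rank dimension (assumed)
    lora_alpha=16,                      # scaling factor (assumed)
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT attention projections (assumed)
)
peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()  # only a small fraction of weights are trainable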
Model Details
- Developed by: Ali Assi
- Language(s): English
- Finetuned from: bert-base-uncased
Uses
- Direct Use: News/topic text classification (e.g., newsgroup posts)
- Downstream Use: Transfer learning, NLP pipelines, domain adaptation (see the merge sketch below)
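For deployment in plain Transformers pipelines, the LoRA weights can be merged into the base model so that serving does not require PEFT. A minimal sketch, assuming the adapter loads as in the Getting Started example below; the output directory name is illustrative.
from transformers import AutoModelForSequenceClassification
from peft import PeftModel
# Merge the LoRA adapter into the base weights to get a standalone checkpoint
# (num_labels of the base model should match the adapter's training setup)
base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
lora = PeftModel.from_pretrained(base, "ALI-USER/bert-lora-newsgroups")
merged = lora.merge_and_unload()                       # plain transformers model, LoRA folded in
merged.save_pretrained("bert-lora-newsgroups-merged")  # output directory is illustrative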
Getting Started
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel
# Load the base model and tokenizer
# (num_labels should match the label count the adapter was trained with)
base_model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForSequenceClassification.from_pretrained(base_model_name)
# Attach the LoRA adapter to the base model
lora_model = PeftModel.from_pretrained(model, "ALI-USER/bert-lora-newsgroups")
lora_model.eval()
# Inference
text = "Hello world!"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = lora_model(**inputs)
logits = outputs.logits
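To turn the logits into a prediction, take the argmax over the class dimension; mapping the index to a human-readable name relies on an id2label entry being present in the loaded config, which is an assumption about this checkpoint.
# Pick the highest-scoring class index
predicted_id = logits.argmax(dim=-1).item()
# id2label mapping is assumed to be saved with the model config
print(predicted_id, lora_model.config.id2label.get(predicted_id, "unknown"))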