
# BERT Emotion Classifier with LoRA Fine-Tuning

This is a BERT-based sequence classification model fine-tuned on the SetFit/emotion dataset using LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning.

## Model Details

  • Base model: bert-base-uncased
  • Fine-tuning method: PEFT with LoRA
  • Quantization: Optional (k-bit preparation included)
  • Number of labels: 6 (emotion categories)

## Dataset

The model was fine-tuned on the SetFit/emotion dataset, which includes 6 emotions:

  • sadness
  • joy
  • love
  • anger
  • fear
  • surprise
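The mapping from class indices to these emotion names follows the underlying dair-ai/emotion dataset convention. A minimal sketch of that mapping (an assumption — verify it against `model.config.id2label` after loading the model):

```python
# Assumed label ordering from the dair-ai/emotion dataset that
# SetFit/emotion is derived from; check model.config.id2label to confirm.
id2label = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}
label2id = {name: idx for idx, name in id2label.items()}

print(id2label[1])   # "joy"
print(label2id["fear"])  # 4
```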

## Training

Key training arguments:

```python
num_train_epochs = 1
per_device_train_batch_size = 16
evaluation_strategy = "epoch"
fp16 = True
```


## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import torch.nn.functional as F

model_name = "RiyaSirohi/bert-base-lora-6class"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def predict(text):
    """Return the predicted label id and the per-class probabilities."""
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
        probs = F.softmax(logits, dim=-1)
    return torch.argmax(probs, dim=-1).item(), probs.squeeze().tolist()

label, probs = predict("i didnt feel humiliated")
print(label, probs)
```

## Limitations

- The model may misclassify subtle or sarcastic inputs.
- Trained for only one epoch — additional training may improve performance.