# Multilingual Tweet Sentiment Model (LoRA Fine-Tuned)
This is a LoRA fine-tuned version of FacebookAI/xlm-roberta-base for multilingual sentiment classification with three labels (Negative, Neutral, Positive), trained on the cardiffnlp/tweet_sentiment_multilingual dataset ("all" split).
- Task: Sequence classification (3 labels)
- Training: 3 epochs, LoRA (r=16, alpha=32), fp16 on T4 GPU
- Test accuracy: ~0.6519 (early checkpoint; better results are expected after full training)
- Languages supported: Arabic, English, French, German, Hindi, Italian, Portuguese, and Spanish (the languages covered by the training dataset)
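The LoRA setup listed above can be sketched with `peft` as follows. Only `r=16` and `alpha=32` come from this card; the task type, dropout, and target modules are illustrative assumptions, not values confirmed by the training run:

```python
from peft import LoraConfig, TaskType

# r and lora_alpha are taken from the card; the rest are assumed choices
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,        # sequence classification head
    r=16,                              # LoRA rank (from the card)
    lora_alpha=32,                     # LoRA scaling (from the card)
    lora_dropout=0.1,                  # assumption: a common default
    target_modules=["query", "value"], # assumption: typical attention targets for RoBERTa-style models
)
```

This config would be passed to `peft.get_peft_model(base_model, lora_config)` before training.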
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_model = "FacebookAI/xlm-roberta-base"
repo = "kanika103/xlm-roberta-multilingual-sentiment"

# Load the tokenizer from the adapter repo and the base model with a 3-label head
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

# Attach the LoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, repo)
model.eval()
```
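Once the model is loaded, raw logits need to be mapped back to label names. A minimal post-processing sketch is shown below; the label order (0=Negative, 1=Neutral, 2=Positive) is an assumption based on the label set above, so verify it against the model's `config.id2label` before relying on it:

```python
import torch

# Assumed label order -- confirm against model.config.id2label
ID2LABEL = {0: "Negative", 1: "Neutral", 2: "Positive"}

def postprocess(logits: torch.Tensor) -> list[tuple[str, float]]:
    """Map raw classifier logits (batch, 3) to (label, confidence) pairs."""
    probs = torch.softmax(logits, dim=-1)
    conf, idx = probs.max(dim=-1)
    return [(ID2LABEL[i.item()], c.item()) for i, c in zip(idx, conf)]
```

Usage with the loaded model would look like:

```python
inputs = tokenizer(["I love this!", "C'est terrible."], return_tensors="pt",
                   padding=True, truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(postprocess(logits))
```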