---
license: mit
---

# XLM-R-Large-Tweet

XLM-R-Large-Tweet is a version of XLM-R-Large-Tweet-Base, fine-tuned for sentiment analysis on 5,610 annotated Serbian tweets about COVID-19 vaccination. It is tailored for five-class sentiment analysis, capturing finer sentiment nuances in the social-media domain with the following scale: very negative, negative, neutral, positive, and very positive.

## How to Use

To use the model, load it with the following code:

```python
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

model_name = "DarijaM/XLM-R-Large-Tweet"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)
```
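Once the model returns logits, they can be converted into one of the five sentiment labels with a softmax and an argmax. The sketch below uses only the standard library and a hypothetical logits vector; the label order is an assumption and should be checked against the `id2label` mapping in the model's `config.json`:

```python
# Sketch: mapping classifier logits to the five-point sentiment scale.
# The label order below is an assumption; verify it against the model's
# config.json (id2label) before relying on it.
import math

LABELS = ["very negative", "negative", "neutral", "positive", "very positive"]

def softmax(logits):
    # Numerically stable softmax over a plain list of floats.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    # Pick the label with the highest probability (argmax).
    probs = softmax(logits)
    return LABELS[max(range(len(probs)), key=probs.__getitem__)]

example_logits = [-1.2, 0.3, 2.1, 0.4, -0.8]  # hypothetical model output
print(predict_label(example_logits))  # highest logit is index 2 -> "neutral"
```

In practice the logits come from `model(**tokenizer(text, return_tensors="pt")).logits`; the post-processing above is the same either way.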