---
license: mit
---

# **XLM-R-Large-Tweet**

**XLM-R-Large-Tweet** is a version of [XLM-R-Large-Tweet-Base](https://huggingface.co/DarijaM/XLM-R-Large-Tweet-base)*, fine-tuned for sentiment analysis on 5,610 annotated Serbian tweets about COVID-19 vaccination.

Specifically, it is tailored for **five-class sentiment analysis**, capturing finer sentiment nuances in the social media domain using the following scale: very negative, negative, neutral, positive, and very positive.
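As a sketch, the five-point scale above can be represented as an index-to-label mapping. The index order below is an illustrative assumption; the authoritative mapping ships with the fine-tuned model itself in `model.config.id2label`:

```python
# Hypothetical index-to-label mapping for the five-point scale;
# the model's own config (id2label) is authoritative.
SENTIMENT_SCALE = {
    0: "very negative",
    1: "negative",
    2: "neutral",
    3: "positive",
    4: "very positive",
}

def label_for(class_id: int) -> str:
    """Return the sentiment label for a predicted class index."""
    return SENTIMENT_SCALE[class_id]

print(label_for(2))  # neutral
```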

**XLM-R-Large-Tweet-Base is an additionally pretrained version of the [XLM-RoBERTa large-sized model](https://huggingface.co/FacebookAI/xlm-roberta-large).*

## How to Use

To use the model, you can load it with the following code:

```python
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

model_name = "DarijaM/XLM-R-Large-Tweet"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)
```
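Once loaded, the model can classify a single tweet. A minimal inference sketch follows; the example sentence is illustrative, and the use of `model.config.id2label` assumes the standard Hugging Face sequence-classification configuration:

```python
import torch
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

model_name = "DarijaM/XLM-R-Large-Tweet"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)

# Tokenize a single tweet (illustrative Serbian sentence).
inputs = tokenizer("Vakcina je stigla u Srbiju.", return_tensors="pt")

# Forward pass without gradient tracking, then take the
# highest-scoring class index and map it to its label.
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```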