---
license: mit
---
# **XLM-R-Large-Tweet**

**XLM-R-Large-Tweet** is a version of [XLM-R-Large-Tweet-Base](https://huggingface.co/DarijaM/XLM-R-Large-Tweet-base)\*, fine-tuned for sentiment analysis on 5,610 annotated Serbian tweets about COVID-19 vaccination.
Specifically, it is tailored for **five-class sentiment analysis**, capturing finer sentiment nuances in the social-media domain on the following scale: very negative, negative, neutral, positive, and very positive.

\**XLM-R-Large-Tweet-Base is an additionally pretrained version of the [XLM-RoBERTa large-sized model](https://huggingface.co/FacebookAI/xlm-roberta-large).*

## How to Use
To use the model, you can load it with the following code:

```python
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

model_name = "DarijaM/XLM-R-Large-Tweet"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)
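
# A minimal inference sketch (not part of the original snippet): classify one
# tweet and map the predicted class id to its sentiment label. The label names
# are read from model.config.id2label, so no particular id ordering is assumed;
# the example tweet below is purely illustrative.
import torch

text = "Vakcina je bezbedna i efikasna."  # hypothetical example tweet
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```

Running the snippet requires `torch` installed alongside `transformers`, and the first call downloads the model weights from the Hugging Face Hub.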