Model description
A fine-tuned version of xlm-roberta-base, trained on 19,101 tweets about cancel culture controversies involving Christopher Columbus, Winston Churchill, J. K. Rowling, Dr. Seuss and Indro Montanelli. The model performs text classification, labelling a tweet as pro, neutral, or against the cancellation of a particular figure.
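For a quick stance check, the model can also be loaded through the Transformers pipeline API (a minimal sketch; the label strings returned depend on the id2label mapping stored in the model config, and the example tweet is illustrative only):

from transformers import pipeline

# Load the fine-tuned stance classifier as a text-classification pipeline
classifier = pipeline("text-classification", model="MikCil/cancel-culture-stance-classification")

# Returns a list of dicts with the predicted label and its score
print(classifier("Columbus statues should stay where they are"))

The Usage section below shows the equivalent lower-level call with the model and tokenizer loaded explicitly.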
Training procedure
Key training parameters (a training sketch follows the list):
- Number of epochs: 4
- Batch size: 16
- Learning rate: 2e-5
- Maximum sequence length: 256
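The original training script is not included in this card. The sketch below only illustrates how the parameters above map onto the Transformers Trainer API; the tiny in-memory dataset is a placeholder for the 19,101 annotated tweets, which are not distributed here, and the optimizer and preprocessing details are assumptions.

from datasets import Dataset
from transformers import (Trainer, TrainingArguments,
                          XLMRobertaForSequenceClassification, XLMRobertaTokenizer)

# Base model and tokenizer; num_labels=3 for the pro / neutral / against classes
tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)

# Tweets are truncated / padded to the maximum sequence length of 256 tokens
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

# Toy stand-in for the annotated tweet corpus
toy = Dataset.from_dict({"text": ["example tweet one", "example tweet two"], "label": [0, 2]})
toy = toy.map(tokenize, batched=True)

# The hyperparameters listed above, expressed as TrainingArguments
args = TrainingArguments(
    output_dir="cancel-culture-stance",
    num_train_epochs=4,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=toy, eval_dataset=toy)
trainer.train()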
Evaluation results
Evaluation metrics (a scoring sketch follows the list):
- Accuracy: 0.774171
- Macro F1 Score: 0.765934
- Weighted F1 score: 0.773198
- Precision: 0.769870
- Recall: 0.763283
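The card does not state how precision and recall were averaged; the sketch below assumes macro averaging and uses placeholder label lists purely to show how each reported metric can be computed with scikit-learn.

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Placeholder gold and predicted stance labels (0 = pro, 1 = neutral, 2 = against is an assumed encoding)
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 2, 2, 0]

print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Macro F1:   ", f1_score(y_true, y_pred, average="macro"))
print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))
print("Precision:  ", precision_score(y_true, y_pred, average="macro"))
print("Recall:     ", recall_score(y_true, y_pred, average="macro"))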
Usage
import torch
from transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer

# Load the fine-tuned model and its tokenizer
model = XLMRobertaForSequenceClassification.from_pretrained("MikCil/cancel-culture-stance-classification")
tokenizer = XLMRobertaTokenizer.from_pretrained("MikCil/cancel-culture-stance-classification")
model.eval()

# Prepare your text
text = "Your text here"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256)

# Get predictions (index of the class with the highest logit)
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.argmax(-1)
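To turn the predicted class index into a stance label, the id2label mapping stored in the model config can be used; which index corresponds to pro, neutral, or against depends on how the labels were configured during training.

# Map the predicted class index to its label string
predicted_label = model.config.id2label[predictions.item()]
print(predicted_label)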
Training infrastructure
Trained on Google Colab using a single GPU.
Cite
The creation of this model is detailed in the paper "From pedestal to ostracism: a quantitative social media analysis and conceptual framework on historical memory and pedagogical implications of Cancel Culture".