Model description

A fine-tuned version of xlm-roberta-base, trained on 19,101 tweets about cancel-culture controversies involving Christopher Columbus, Winston Churchill, J. K. Rowling, Dr. Seuss, and Indro Montanelli. The model performs text classification, labelling a tweet as pro, neutral, or against the cancellation of a particular figure.

Training procedure

Key training parameters:

  • Number of epochs: 4
  • Batch size: 16
  • Learning rate: 2e-5
  • Maximum sequence length: 256
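The hyperparameters above imply a straightforward optimizer setup and step budget. The sketch below is an assumption, not the card's actual training script: the optimizer choice (AdamW) is not stated on the card, the stand-in linear model only keeps the snippet self-contained, and the step count assumes all 19,101 tweets were used for training (the card does not report a train/validation split).

```python
import math

import torch

# Hyperparameters as reported above
EPOCHS = 4
BATCH_SIZE = 16
LEARNING_RATE = 2e-5
MAX_LENGTH = 256  # applied at tokenization time, not to the optimizer

# Stand-in classifier head so the sketch is runnable on its own
# (768 hidden size, 3 stance classes: pro / neutral / against)
model = torch.nn.Linear(768, 3)
optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)

# Step budget if all 19,101 tweets were in the training set (an assumption)
steps_per_epoch = math.ceil(19101 / BATCH_SIZE)  # 1194 batches per epoch
total_steps = EPOCHS * steps_per_epoch           # 4776 optimizer steps
```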

Evaluation results

Evaluation metrics:

  • Accuracy: 0.774171
  • Macro F1 Score: 0.765934
  • Weighted F1 score: 0.773198
  • Precision: 0.769870
  • Recall: 0.763283
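Metrics of this kind are typically computed with scikit-learn; the snippet below shows the distinction between macro and weighted F1 on a small hypothetical set of gold labels and predictions (the label encoding 0 = against, 1 = neutral, 2 = pro is an assumption for illustration only).

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical gold labels and predictions (0 = against, 1 = neutral, 2 = pro)
y_true = [0, 1, 2, 2, 0, 1]
y_pred = [0, 1, 2, 1, 0, 1]

accuracy = accuracy_score(y_true, y_pred)
# Macro F1: unweighted mean of per-class F1 scores
macro_f1 = f1_score(y_true, y_pred, average="macro")
# Weighted F1: per-class F1 weighted by class support
weighted_f1 = f1_score(y_true, y_pred, average="weighted")
precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
```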

Usage

import torch
from transformers import XLMRobertaForSequenceClassification, XLMRobertaTokenizer

# Load the fine-tuned model and its tokenizer
model = XLMRobertaForSequenceClassification.from_pretrained("MikCil/cancel-culture-stance-classification")
tokenizer = XLMRobertaTokenizer.from_pretrained("MikCil/cancel-culture-stance-classification")

# Prepare your text (max_length matches the 256-token training setting)
text = "Your text here"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=256)

# Get predictions (no gradients needed at inference time)
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.argmax(-1)
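To turn the predicted class id into a stance label, map it through an id-to-label dictionary. The mapping below is an assumption for illustration; check `model.config.id2label` on the downloaded checkpoint for the authoritative order. The dummy logits keep the snippet runnable without downloading the model.

```python
import torch

# Assumed id -> label mapping; verify against model.config.id2label
id2label = {0: "against", 1: "neutral", 2: "pro"}

logits = torch.tensor([[1.2, 0.3, 2.5]])  # example logits for one tweet
probs = torch.softmax(logits, dim=-1)     # convert logits to probabilities
pred = int(logits.argmax(-1))             # index of the highest logit
print(id2label[pred], float(probs[0, pred]))
```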

Training Infrastructure

Trained on Google Colab using a single GPU.

Cite

The creation of this model is detailed in the paper "From pedestal to ostracism: a quantitative social media analysis and conceptual framework on historical memory and pedagogical implications of Cancel Culture".
