# BEpaRTy
This model is a fine-tuned version of bert-base-uncased, trained on 2.5 million tweets from 825 U.S. congressional politicians to predict the political orientation of tweets. It reaches a test accuracy of 89.54% and an F1 score of 0.8939. The model constitutes the first step of the method described in Hu (2024), "A Two-Step Method for Classifying Political Partisanship Using Deep Learning Models."
## Model description
## Intended uses & limitations
The model predicts the political orientation of a tweet, classifying it as 0 (Democratic) or 1 (Republican).
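A minimal sketch of decoding a classifier's raw outputs into the 0/1 label scheme above; the function name and softmax helper are illustrative, not part of the model's API:

```python
import numpy as np

# Label scheme from the card: 0 = Democratic, 1 = Republican.
ID2LABEL = {0: "Democratic", 1: "Republican"}

def label_from_logits(logits):
    """Map a sequence-classifier's two raw logits to (class id, label, probability)."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = int(probs.argmax())
    return idx, ID2LABEL[idx], float(probs[idx])

# e.g. logits leaning toward class 1 (Republican)
idx, label, prob = label_from_logits([-0.5, 1.5])
```

In practice the logits would come from the fine-tuned checkpoint via `transformers` (e.g. a `TFAutoModelForSequenceClassification` forward pass); the helper above only covers the final decoding step.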
## Training and evaluation data
2.5 million tweets from 825 U.S. congressional politicians, split 80%/20% into training and test sets.
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: default
- training_precision: default
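Since the card lists only framework defaults, the exact setup is unspecified. The following is a hedged sketch of what a default TensorFlow fine-tuning configuration for this checkpoint could look like; the learning rate, loss choice, and epoch count are illustrative assumptions, not values from the card:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Assumption: a standard two-label sequence-classification head on bert-base-uncased.
model = TFAutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Keras defaults in the spirit of "optimizer: default"; the lr is an assumption.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(train_dataset, validation_data=val_dataset, epochs=3)
```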
### Training results
On the training set (80% of the data):

| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 (Democratic) | 0.9655 | 0.9782 | 0.9718 | 1001625 |
| 1 (Republican) | 0.9779 | 0.9651 | 0.9714 | 1002311 |
| accuracy | | | 0.9716 | 2003936 |
| macro avg | 0.9717 | 0.9716 | 0.9716 | 2003936 |
| weighted avg | 0.9717 | 0.9716 | 0.9716 | 2003936 |
On the test set (20% of the data):

| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 (Democratic) | 0.8852 | 0.9091 | 0.8970 | 250835 |
| 1 (Republican) | 0.9063 | 0.8818 | 0.8939 | 250149 |
| accuracy | | | 0.8954 | 500984 |
| macro avg | 0.8957 | 0.8954 | 0.8954 | 500984 |
| weighted avg | 0.8957 | 0.8954 | 0.8954 | 500984 |
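As a sanity check, the macro and weighted averages in the test-set results follow directly from the per-class F1 scores and supports; this small script (illustrative, not part of the release) recomputes them:

```python
# Per-class F1 and support on the test set, copied from the card.
f1 = {0: 0.8970, 1: 0.8939}
support = {0: 250835, 1: 250149}

# Macro average: unweighted mean over classes.
macro_f1 = sum(f1.values()) / len(f1)

# Weighted average: mean weighted by each class's support.
total = sum(support.values())
weighted_f1 = sum(f1[c] * support[c] for c in f1) / total

# Both agree with the reported 0.8954 to rounding, since the two
# classes have nearly equal support.
```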
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.6.5
- Tokenizers 0.13.2
## Citation
Please cite the following work: Hu, L. (2024). A Two-Step Method for Classifying Political Partisanship Using Deep Learning Models. Social Science Computer Review, 42(4), 961-976.