---
base_model: google-bert/bert-base-uncased
datasets:
- stanfordnlp/sentiment140
pipeline_tag: text-classification
---
# sentiment-bert-base
Fine-tuned BERT-base for binary sentiment classification on the Sentiment140 dataset (1.6M tweets).
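A minimal inference sketch using the `pipeline` API. The repo id `your-org/sentiment-bert-base` is a placeholder assumption; substitute the actual Hub path of the fine-tuned weights.

```python
from transformers import pipeline

# "your-org/sentiment-bert-base" is a placeholder repo id; substitute the
# actual Hub path where the fine-tuned weights are hosted.
classifier = pipeline("text-classification", model="your-org/sentiment-bert-base")

print(classifier("just watched the best movie of the year!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]
# Labels default to LABEL_0 (negative) / LABEL_1 (positive) unless
# id2label is set in the model config.
```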
## Base model
google-bert/bert-base-uncased — the original BERT-base-uncased from Devlin et al. (2019), 110M parameters.
## Training
- Dataset: Sentiment140 (1.6M tweets, 80/20 split, seed 42)
- Hyperparameters: learning rate 2e-5, batch size 16, 3 epochs
- Hardware: NVIDIA A10G, AWS SageMaker (g5.2xlarge)
- Training time: 7.3 hours
- Trainer: Hugging Face Transformers + Trainer API; load_best_model_at_end=True (see the fine-tuning sketch below)
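A fine-tuning sketch under the hyperparameters above, assuming a recent Transformers release (≥4.46, for `eval_strategy` and `processing_class`). Column names (`text`, `sentiment`) follow the current `stanfordnlp/sentiment140` Hub schema; the output directory and `max_length` are illustrative assumptions not stated in this card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Load the 1.6M-tweet training split and remap Sentiment140's
# 0 (negative) / 4 (positive) labels to 0 / 1.
dataset = load_dataset("stanfordnlp/sentiment140", split="train")
dataset = dataset.map(lambda ex: {"label": 0 if ex["sentiment"] == 0 else 1})

# 80/20 split with the seed reported above.
dataset = dataset.train_test_split(test_size=0.2, seed=42)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

def tokenize(batch):
    # max_length=128 is an assumption; the card does not state it.
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="sentiment-bert-base",  # illustrative path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=3,
    eval_strategy="epoch",  # evaluate and save each epoch so the
    save_strategy="epoch",  # best checkpoint can be restored at the end
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    processing_class=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```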
## Test set performance
| Metric | Value |
|---|---|
| Accuracy | 0.8746 |
| Precision | 0.880 |
| Recall | 0.869 |
| F1 | 0.874 |
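A sketch of how these figures can be reproduced with scikit-learn, reusing the `trainer` and tokenized `dataset` from the training sketch above:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Predict on the held-out 20% split.
pred_output = trainer.predict(dataset["test"])
preds = np.argmax(pred_output.predictions, axis=-1)
labels = pred_output.label_ids

precision, recall, f1, _ = precision_recall_fscore_support(
    labels, preds, average="binary"
)
print(f"Accuracy  {accuracy_score(labels, preds):.4f}")
print(f"Precision {precision:.3f}  Recall {recall:.3f}  F1 {f1:.3f}")
```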
## Intended use
Demonstration model for academic purposes.
## Limitations
- English only, binary sentiment, 2009-era Twitter language.
- Sentiment140 labels generated automatically using emoticons (distant supervision), introducing systematic noise.
- Does not handle sarcasm reliably; the dataset does not annotate sarcasm as a separate phenomenon.