sentiment-bert-base

Fine-tuned BERT-base for binary sentiment classification on the Sentiment140 dataset (1.6M tweets).

Base model

google-bert/bert-base-uncased: the original BERT-base-uncased from Devlin et al. (2019), 110M parameters.

Training

  • Dataset: Sentiment140 (1.6M tweets, 80/20 split, seed 42)
  • Hyperparameters: learning rate 2e-5, batch size 16, 3 epochs
  • Hardware: NVIDIA A10G, AWS SageMaker (g5.2xlarge)
  • Training time: 7.3 hours
  • Trainer: Hugging Face Transformers + Trainer API; load_best_model_at_end=True
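The setup above can be sketched with the standard Trainer API. This is a hedged reconstruction, not the released training script: the dataset hub id, column names, and preprocessing are assumptions; the hyperparameters match the list above.

```python
# Minimal fine-tuning sketch consistent with the card's setup.
# Assumed: hub dataset id "stanfordnlp/sentiment140" and its "sentiment"/"text"
# columns; the actual training script is not published.
HPARAMS = {
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 16,
    "num_train_epochs": 3,
    "seed": 42,
}

def main():
    from datasets import load_dataset
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    tok = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "google-bert/bert-base-uncased", num_labels=2
    )

    ds = load_dataset("stanfordnlp/sentiment140")  # assumed hub id
    # Sentiment140 encodes negative as 0 and positive as 4; remap to {0, 1}.
    ds = ds.map(lambda ex: {"labels": 1 if ex["sentiment"] == 4 else 0})
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True))
    split = ds["train"].train_test_split(test_size=0.2, seed=HPARAMS["seed"])

    args = TrainingArguments(
        output_dir="sentiment-bert-base",
        eval_strategy="epoch",
        save_strategy="epoch",
        load_best_model_at_end=True,
        **HPARAMS,
    )
    Trainer(
        model=model,
        args=args,
        train_dataset=split["train"],
        eval_dataset=split["test"],
    ).train()

if __name__ == "__main__":
    main()
```

`load_best_model_at_end=True` requires matching evaluation and save strategies, hence both are set to "epoch".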

Test set performance

Metric     Value
Accuracy   87.46%
Precision  0.880
Recall     0.869
F1         0.874
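For reference, precision, recall, and F1 are derived from the confusion counts in the usual way. The sketch below shows the arithmetic with made-up counts (the tp/fp/fn values here are illustrative, not the model's actual test-set counts):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only, not the model's real confusion matrix:
p, r, f = precision_recall_f1(tp=80, fp=20, fn=10)
```

On the actual held-out split, these formulas yield the values reported in the table above.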

Intended use

A demonstration model for academic purposes.
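A hedged usage sketch with the transformers pipeline API. The id2label mapping below is an assumption (BERT classification heads default to generic LABEL_0/LABEL_1 names); check the model's config for the actual mapping.

```python
# Assumed label mapping; verify against the model's config.json.
LABEL_MAP = {"LABEL_0": "negative", "LABEL_1": "positive"}

def readable(pred: dict) -> tuple[str, float]:
    """Map a raw pipeline prediction to a (sentiment, score) pair."""
    return LABEL_MAP.get(pred["label"], pred["label"]), pred["score"]

if __name__ == "__main__":
    from transformers import pipeline

    clf = pipeline("text-classification", model="heican/sentiment-bert-base")
    for pred in clf(["great movie!", "worst service ever"]):
        print(readable(pred))
```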

Limitations

  • English only, binary sentiment, 2009-era Twitter language.
  • Sentiment140 labels generated automatically using emoticons (distant supervision), introducing systematic noise.
  • Does not handle sarcasm reliably (the dataset does not separate it as a phenomenon).