# modernbert-sentiment-strength
Fine-tuned ModernBERT-large for sentiment regression using X (Twitter) data.
## Model Details
- Base model: answerdotai/ModernBERT-large
- Framework: PyTorch / transformers
- Precision: fp16/bf16 capable
- Training data: Twitter Sentiment Meta-Analysis Dataset, SemEval-2017 Task 4E, and SemEval-2018 Task 1 V-Reg
## Intended Uses & Limitations
- Task: text-classification
- Language(s): en
- License: apache-2.0
## Training Summary
- Epochs: 15
- Learning rate: 3e-5
- Batch size: 128
- Max sequence length: 40
## Metrics
- MSE: 0.029392
- Correlation: 0.90634276
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

repo = "LingshuHu/modernbert-sentiment-strength"
tok = AutoTokenizer.from_pretrained(repo)
mdl = AutoModelForSequenceClassification.from_pretrained(repo)

# function_to_apply="none" returns the raw regression score instead of
# passing it through the pipeline's default sigmoid/softmax post-processing.
clf = pipeline("text-classification", model=mdl, tokenizer=tok, function_to_apply="none")
print(clf("This is absolutely wonderful!"))
```
The sentiment score ranges from -1 (very negative) to +1 (very positive).
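For finer control than the pipeline offers, the score can also be read from a direct forward pass, which makes it easy to apply the training-time maximum sequence length of 40. A minimal sketch, assuming the model exposes a single-logit regression head whose raw output is already on the -1..+1 scale (the `sentiment_score` helper is illustrative, not part of the repository):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "LingshuHu/modernbert-sentiment-strength"
tok = AutoTokenizer.from_pretrained(repo)
mdl = AutoModelForSequenceClassification.from_pretrained(repo)
mdl.eval()

def sentiment_score(text: str) -> float:
    # Truncate to the max sequence length used during training (40 tokens)
    inputs = tok(text, truncation=True, max_length=40, return_tensors="pt")
    with torch.no_grad():
        logits = mdl(**inputs).logits  # shape (1, 1) for a regression head
    return logits.squeeze().item()

print(sentiment_score("This is absolutely wonderful!"))
```

Batching several texts through the tokenizer (with `padding=True`) follows the same pattern if you need to score many tweets at once.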
## Citation
Hu, L., Sun, D. R., & Sheldon, D. K. M. (2025). Navigating Sentiment Complexity: Exploring the Rational-Emotional Spectrum and Intergroup Dynamics in Social Media Engagement. AMCIS 2025 Proceedings. 11. https://aisel.aisnet.org/amcis2025/data_science/sig_dsa/11