finbert-finetuned-sentimentpulse

Fine-tuned ProsusAI/finbert for 3-class financial news sentiment classification (positive / negative / neutral).

Part of the SentimentPulse pipeline — an open-source replica of Bloomberg BQuant Textual Analytics.

Benchmark

Evaluated on 349 held-out examples from FinancialPhraseBank + FiQA-SA:

| Model | Precision | Recall | F1 |
|---|---|---|---|
| VADER | 0.52 | 0.48 | 0.51 |
| Loughran-McDonald | 0.64 | 0.47 | 0.47 |
| FinBERT zero-shot | 0.79 | 0.75 | 0.76 |
| FinBERT fine-tuned (this model) | 0.92 | 0.92 | 0.92 |

Per-class (fine-tuned):

| Class | Precision | Recall | F1 | Support |
|---|---|---|---|---|
| negative | 0.83 | 0.86 | 0.85 | 74 |
| neutral | 0.99 | 0.98 | 0.99 | 140 |
| positive | 0.90 | 0.90 | 0.90 | 135 |
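A per-class table like the one above can be produced with scikit-learn's `classification_report` (a sketch with illustrative labels, not the actual held-out set):

```python
# Sketch: per-class precision/recall/F1, as in the table above.
# y_true / y_pred are made-up examples, not the real eval data.
from sklearn.metrics import classification_report

labels = ["negative", "neutral", "positive"]
y_true = ["negative", "neutral", "positive", "neutral", "positive"]
y_pred = ["negative", "neutral", "positive", "positive", "positive"]

report = classification_report(y_true, y_pred, labels=labels, output_dict=True)
for cls in labels:
    row = report[cls]
    print(f"{cls}: P={row['precision']:.2f} R={row['recall']:.2f} "
          f"F1={row['f1-score']:.2f} support={int(row['support'])}")
```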

Training

  • Base model: ProsusAI/finbert
  • Training data: FinancialPhraseBank (sentences_allagree, 2,264 sentences) + FiQA-SA (pauri32/fiqa-2018, 1,173 sentences) — 80/10/10 stratified split
  • Epochs: 5 · Batch size: 16 · LR: 2e-5 · WeightedTrainer (inverse class frequency)
  • Hardware: Apple MPS (~4.5 min)
  • Framework: HuggingFace Transformers 4.41
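The repo's WeightedTrainer isn't reproduced here; the following is a minimal sketch of what "inverse class frequency" weighting typically means, assuming the standard pattern of passing per-class weights to CrossEntropyLoss inside an overridden `compute_loss` (all names below are illustrative):

```python
# Hedged sketch of inverse-class-frequency loss weighting.
# `inverse_frequency_weights` is an illustrative helper, not the repo's code.
import torch
from torch import nn
from collections import Counter

def inverse_frequency_weights(labels, num_classes=3):
    """weight_c = N / (num_classes * count_c): rarer classes get larger weights."""
    counts = Counter(labels)
    n = len(labels)
    return torch.tensor(
        [n / (num_classes * counts[c]) for c in range(num_classes)],
        dtype=torch.float,
    )

# Example: the two classes seen half as often get twice the weight.
weights = inverse_frequency_weights([0] * 100 + [1] * 200 + [2] * 100)

# Inside a Trainer subclass, compute_loss would apply the weights like this:
logits = torch.randn(8, 3)           # model outputs for a batch
gold = torch.randint(0, 3, (8,))     # gold labels
loss = nn.CrossEntropyLoss(weight=weights)(logits, gold)
```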

Usage

from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="EomaxlSam/finbert-finetuned-sentimentpulse",
)

pipe("Apple beats Q1 earnings estimates by 12%")
# [{'label': 'positive', 'score': 0.94}]

pipe("Goldman Sachs fell sharply after issuing a guidance cut.")
# [{'label': 'negative', 'score': 0.91}]

Labels

| ID | Label |
|---|---|
| 0 | negative |
| 1 | neutral |
| 2 | positive |
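If you work with raw model logits instead of the pipeline, the ID-to-label mapping above is applied after a softmax (the logit values below are illustrative):

```python
# Sketch: mapping raw logits to the label IDs listed above.
import torch

id2label = {0: "negative", 1: "neutral", 2: "positive"}

logits = torch.tensor([[-1.2, 0.3, 2.1]])   # illustrative model output
probs = torch.softmax(logits, dim=-1)       # probabilities summing to 1
pred_id = int(probs.argmax(dim=-1))
print(id2label[pred_id])                    # "positive" for these logits
```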

Repository

github.com/Eomaxl/SentimentPulse
