---
language: [en]
license: apache-2.0
library_name: transformers
pipeline_tag: text-classification
datasets: [financial_phrase_bank]
base_model: bert-base-uncased
tags:
- sentiment-analysis
- finance
- text-classification
---
# Financial Sentiment BERT (fine-tuned `bert-base-uncased`)

Fine-tuned on the Financial PhraseBank dataset for three-way sentiment classification (positive, negative, neutral).

| Item | Value |
|------|-------|
| **Base model** | `bert-base-uncased` |
| **Dataset** | Financial PhraseBank |
| **Labels** | positive (0) · negative (1) · neutral (2) |
| **Epochs** | 4 |
| **Hardware** | CPU-only training |

## Evaluation Results (Validation + Test)

**Validation Accuracy** (best): **81.32%**

**Test Performance**:
```
              precision    recall  f1-score   support

    positive       0.71      0.75      0.73       204
    negative       0.67      0.81      0.74        91
     neutral       0.88      0.82      0.85       432

    accuracy                           0.80       727
   macro avg       0.75      0.79      0.77       727
weighted avg       0.81      0.80      0.80       727
```
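The macro and weighted averages in the report follow directly from the per-class F1 scores and supports. As a sanity check, the aggregation can be recomputed from the table values alone (figures below are copied from the report above):

```python
# Recompute macro and weighted F1 from the per-class test figures above.
f1 = {"positive": 0.73, "negative": 0.74, "neutral": 0.85}
support = {"positive": 204, "negative": 91, "neutral": 432}

# Macro average: unweighted mean over classes.
macro_f1 = sum(f1.values()) / len(f1)

# Weighted average: mean weighted by class support.
total = sum(support.values())
weighted_f1 = sum(f1[c] * support[c] for c in f1) / total

print(round(macro_f1, 2), round(weighted_f1, 2))  # 0.77 0.80
```

Both values match the `macro avg` and `weighted avg` rows of the report.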

Training completed in 17m 9s. Logs are available in `training_logs.csv`, and the training curve is plotted in `training_metrics.png`.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "Kroalist/financial-sentiment-bert-base"
tok = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
```
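After loading, the model's raw logits still need to be mapped to the label names listed in the table above. A minimal, model-free sketch of that mapping (the logit values below are illustrative placeholders, and the id-to-label order is assumed from the Labels row):

```python
import math

# Label order assumed from the table: positive (0), negative (1), neutral (2).
LABELS = ["positive", "negative", "neutral"]

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def predict_label(logits):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs[idx]

# Placeholder logits, as would come from model(**inputs).logits[0].tolist()
label, prob = predict_label([2.1, -0.3, 0.4])
print(label)  # positive
```

With real inputs, pass the tokenizer's output through the model and feed `logits[0].tolist()` to `predict_label`.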

_Last updated: 2025-04-23_