---
language:
- en
base_model:
- answerdotai/ModernBERT-base
pipeline_tag: text-classification
tags:
- central_bank_communication
- central_bank
- economics
- hawkish
- dovish
library_name: transformers
---

# CBSI-ModernBERT Models

This repository hosts **CBSI-ModernBERT** models fine-tuned on the replication data of [Nițoi et al. (2023)](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/40JFEK).  
Check out their [paper](https://www.sciencedirect.com/science/article/abs/pii/S2214635023000230) and [website](https://sites.google.com/view/bert-cbsi/) for more information.  

The models are based on [ModernBERT (Warner et al., 2024)](https://arxiv.org/abs/2412.13663), which allows for longer context handling compared to vanilla BERT.  
The models were fine-tuned with the same training data and methodology as [Nițoi et al. (2023)](https://doi.org/10.1016/j.jbef.2023.100809), with ModernBERT providing improved sequence length support.  
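To illustrate why the longer context window matters (this sketch is not from the model card): vanilla BERT truncates inputs at 512 tokens, while ModernBERT supports sequences up to 8,192 tokens, so a long central bank speech fits in far fewer windows. The 6,000-token speech length below is an assumed figure for illustration.

```python
def n_chunks(n_tokens: int, max_len: int) -> int:
    """Number of fixed-size windows needed to cover a document of n_tokens tokens."""
    return -(-n_tokens // max_len)  # ceiling division

speech_tokens = 6_000  # assumed length of a long central bank speech

print(n_chunks(speech_tokens, 512))    # vanilla BERT: 12 chunks
print(n_chunks(speech_tokens, 8192))   # ModernBERT: 1 chunk
```

With a single window, no sentence-level splitting or chunk-score aggregation is needed at inference time.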

---

## Results

| Model                                                                            | F1 Score | Accuracy | Loss |
|----------------------------------------------------------------------------------|----------|----------|------|
| [CBSI-bert-base-uncased](https://huggingface.co/brjoey/CBSI-bert-base-uncased)   | 0.88     | 0.88     | 0.49 |
| [CBSI-bert-large-uncased](https://huggingface.co/brjoey/CBSI-bert-large-uncased) | 0.92     | 0.92     | 0.45 |
| [CBSI-ModernBERT-base](https://huggingface.co/brjoey/CBSI-ModernBERT-base)       | 0.93     | 0.93     | 0.40 |
| [CBSI-ModernBERT-large](https://huggingface.co/brjoey/CBSI-ModernBERT-large)     | 0.91     | 0.91     | 0.53 |
| [CBSI-CentralBank-BERT](https://huggingface.co/brjoey/CBSI-CentralBank-BERT)     | 0.92     | 0.92     | 0.36 |

---

## How to use

```python
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Load model and tokenizer
model_name = "brjoey/CBSI-ModernBERT-base"
classifier = pipeline(
    "text-classification",
    model=model_name,
    tokenizer=model_name
)

# Define label mapping
cbsi_label_map = {
    0: "neutral",
    1: "dovish",
    2: "hawkish"
}

# Example texts
texts = [
    "The Governing Council decided to lower interest rates.",
    "The central bank will maintain its current policy stance."
]
df = pd.DataFrame({"text": texts})

# Run classification
predictions = classifier(df["text"].tolist())

# Store the results; the pipeline returns labels like "LABEL_2",
# so the trailing index is mapped back to a sentiment name
df["label"], df["score"] = zip(*[
    (cbsi_label_map[int(pred["label"].split("_")[-1])], pred["score"])
    for pred in predictions
])

print("\n=== Results ===\n")
print(df[["text", "label", "score"]])
```
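Sentence-level predictions can also be aggregated into a document-level score. The sketch below is illustrative only: the `net_hawkishness` helper and its hawkish-minus-dovish share are assumptions for demonstration, not the CBSI index construction of Nițoi et al. (2023); see their paper for the actual formula.

```python
def net_hawkishness(labels: list[str]) -> float:
    """Illustrative aggregate (NOT the CBSI formula): share of hawkish
    minus share of dovish sentences; neutral sentences dilute the score."""
    if not labels:
        return 0.0
    hawkish = labels.count("hawkish")
    dovish = labels.count("dovish")
    return (hawkish - dovish) / len(labels)

# e.g. the mapped labels produced by the classification step above
labels = ["hawkish", "hawkish", "dovish", "neutral"]
print(net_hawkishness(labels))  # 0.25
```

Applied per document, a helper like this yields a time series of document scores that can be compared against the published index.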

## Citation

If you use this model, please cite:

Data: \
Nițoi Mihai; Pochea Maria-Miruna; Radu Ștefan-Constantin, 2023, \
"Replication Data for: Unveiling the sentiment behind central bank narratives: A novel deep learning index", \
https://doi.org/10.7910/DVN/40JFEK, Harvard Dataverse, V1

Paper: \
Mihai Nițoi, Maria-Miruna Pochea, Ștefan-Constantin Radu, \
"Unveiling the sentiment behind central bank narratives: A novel deep learning index", \
Journal of Behavioral and Experimental Finance, Volume 38, 2023, 100809, ISSN 2214-6350. \
https://doi.org/10.1016/j.jbef.2023.100809

ModernBERT: \
Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard, Iacopo Poli, \
"Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference", \
arXiv preprint arXiv:2412.13663, 2024. \
https://arxiv.org/abs/2412.13663