Update README.md

The **CBDC-BERT-Stance** model classifies Central Bank Digital Currency (CBDC)–related text into three stance categories:

- **Pro-CBDC** — supportive of CBDC adoption (e.g., highlighting benefits, efficiency, innovation).
- **Wait-and-See** — neutral or cautious, expressing neither strong support nor strong opposition, often highlighting the need for further study.
- **Anti-CBDC** — critical of CBDC adoption (e.g., highlighting risks, concerns, opposition).

**Base Model:** [`bilalzafar/CentralBank-BERT`](https://huggingface.co/bilalzafar/CentralBank-BERT) — **CentralBank-BERT** is a domain-adapted **BERT base (uncased)**, pretrained on **66M+ tokens** across **2M+ sentences** from central-bank speeches published via the **Bank for International Settlements (1996–2024)**. It is optimized for *masked-token prediction* within the specialized domains of **monetary policy, financial regulation, and macroeconomic communication**, enabling better contextual understanding of central-bank discourse and financial narratives.
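
As a quick illustration of that pretraining objective, the base model can be queried with the standard `fill-mask` pipeline. This is a minimal sketch; the example sentence is invented, not taken from the training data:

```python
from transformers import pipeline

# Masked-token prediction with the domain-adapted base model.
fill = pipeline("fill-mask", model="bilalzafar/CentralBank-BERT")

# Hypothetical central-bank-style sentence; [MASK] is BERT's mask token.
for pred in fill("The central bank decided to raise the policy [MASK] by 25 basis points."):
    print(pred["token_str"], round(pred["score"], 3))
```
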
**Training Data:** The training dataset consisted of 1,647 CBDC-related sentences from BIS speeches, manually annotated into three sentiment categories: Pro-CBDC (742 sentences), Wait-and-See (694 sentences), and Anti-CBDC (211 sentences).
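
Anti-CBDC is heavily under-represented (211 of 1,647 sentences), which is what the square-root inverse-frequency sampling described under Training Details below compensates for. A minimal sketch of those per-class weights, using the counts above (the exact normalization is an assumption):

```python
import math

# Annotated class counts from the BIS training data.
counts = {"Pro-CBDC": 742, "Wait-and-See": 694, "Anti-CBDC": 211}
total = sum(counts.values())

# Square root of inverse frequency: rare classes get larger sampling weights.
weights = {label: math.sqrt(total / n) for label, n in counts.items()}
print(weights)  # Anti-CBDC ends up weighted roughly twice as high as Pro-CBDC
```
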
## Training Details
The model was trained starting from the [`bilalzafar/CentralBank-BERT`](https://huggingface.co/bilalzafar/CentralBank-BERT) checkpoint, using a BERT-base architecture with a new three-way softmax classification head.

- **Max sequence length:** 320 tokens
- **Epochs:** up to 8, with early stopping at epoch 6
- **Batch size:** 16
- **Optimizer:** AdamW with a learning rate of 2e-5, weight decay of 0.01, and a warmup ratio of 0.06
- **Loss:** Focal Loss (γ = 1.0, soft focal, no extra class weights)
- **Imbalance handling:** WeightedRandomSampler based on the square root of inverse frequency
- **Precision:** FP16 for efficiency
- **Model selection:** best checkpoint by Macro-F1 score
- **Split:** 80% training / 10% validation / 10% test, stratified by label with class balance applied

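
As a rough reconstruction (an assumption, not the authors' published code), the focal loss can be implemented by overriding `Trainer.compute_loss`; the per-class weights sketched under Training Data would then feed `torch.utils.data.WeightedRandomSampler` for the training dataloader:

```python
import torch.nn.functional as F
from transformers import Trainer

class FocalLossTrainer(Trainer):
    """Trainer variant using soft focal loss (gamma = 1.0, no class weights)."""

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        log_p = F.log_softmax(outputs.logits, dim=-1)
        # Log-probability assigned to each example's true class.
        log_pt = log_p.gather(1, labels.unsqueeze(1)).squeeze(1)
        # Focal term (1 - p_t)^gamma with gamma = 1.0 down-weights easy examples.
        loss = -(((1.0 - log_pt.exp()) ** 1.0) * log_pt).mean()
        return (loss, outputs) if return_outputs else loss
```
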
|
| 40 |
|
| 41 |
---
On the test set, the model achieved an **accuracy** of 0.8485, a **macro F1-scor…

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "bilalzafar/cbdc-stance"

# Load the fine-tuned stance classifier and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
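
Continuing from the snippet above, the imported `pipeline` can wrap the model for inference; the example sentence is invented, and the exact label strings depend on the model's `id2label` config:

```python
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

# Hypothetical CBDC-related sentence.
print(classifier("A digital currency could make cross-border payments faster and cheaper."))
# Expected shape: [{'label': ..., 'score': ...}] with one of the three stance labels.
```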