---
language: en
license: mit
tags:
- federated-learning
- finance
- sentiment-analysis
- bert
- finbert
library_name: transformers
pipeline_tag: text-classification
authors:
- Harsh Prasad
- Sai Dhole
---
## FinBERT–AdaptiveFedAvg: Adaptive Federated Aggregation for Financial Sentiment Analysis
---
### 📌 Model Summary
This model is a **federated version of FinBERT** fine-tuned for
**financial sentiment classification (Positive / Negative / Neutral)**.
Training is performed across **three clients**:
* Financial Twitter posts
* Financial news headlines
* Financial reports & statements
Unlike standard FedAvg, which weights clients by dataset size, this model uses an
**adaptive aggregation strategy**: client contributions are **weighted dynamically
by validation performance**, so better-performing clients have more influence on the global model.
This model is part of a research project comparing:
* FedAvg
* FedProx
* Adaptive Aggregation
for federated financial NLP.
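The exact weighting scheme is not spelled out in this card; a minimal sketch of performance-based aggregation, assuming each client reports a validation score (e.g. F1) that is normalized into its aggregation weight, could look like this (plain floats stand in for the tensors of a real `state_dict`):

```python
def adaptive_weights(val_scores):
    """Normalize per-client validation scores (e.g. F1) into aggregation weights."""
    total = sum(val_scores)
    return [s / total for s in val_scores]

def aggregate(client_states, weights):
    """Weighted average of client parameters (dicts of floats here;
    a real implementation averages torch state_dict tensors the same way)."""
    return {
        key: sum(w * state[key] for w, state in zip(weights, client_states))
        for key in client_states[0]
    }

# Hypothetical round: three clients with different validation F1
weights = adaptive_weights([0.85, 0.78, 0.80])   # stronger client -> larger weight
states = [{"layer.weight": 1.0}, {"layer.weight": 2.0}, {"layer.weight": 3.0}]
merged = aggregate(states, weights)
```

With this normalization the weights always sum to one, so the aggregated model stays on the same scale as the clients' models.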
---
### 🧠 Intended Use
Designed for:
* Financial sentiment research
* Risk & market analytics
* Academic exploration of federated learning
Not intended for automated trading without expert oversight.
---
### 🏗 Model Architecture
Base Model:
```
ProsusAI/finbert
```
Task:
```
Sequence classification — 3 classes
```
Training Setup:
```
3 federation clients
10 global rounds
3 local epochs
Adaptive weighted aggregation
```
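The card does not reproduce the actual training loop; the following is a hedged sketch of the setup above (broadcast, local training, validation-weighted aggregation), with `DummyClient` as a hypothetical stand-in for a real FinBERT client:

```python
class DummyClient:
    """Hypothetical client: stands in for a real FinBERT client that
    trains locally and reports a validation score (e.g. F1)."""
    def __init__(self, target, val_f1):
        self.target, self.val_f1 = target, val_f1

    def local_train(self, global_state, epochs):
        w = global_state["w"]
        for _ in range(epochs):           # simulated local epochs
            w += 0.5 * (self.target - w)
        return {"w": w}, self.val_f1

def run_federated_training(global_state, clients, rounds=10, local_epochs=3):
    """Adaptive-FedAvg style loop: broadcast, train locally,
    weight each client by its validation score, aggregate."""
    for _ in range(rounds):
        states, scores = [], []
        for client in clients:
            state, score = client.local_train(global_state, epochs=local_epochs)
            states.append(state)
            scores.append(score)
        weights = [s / sum(scores) for s in scores]   # adaptive weights
        global_state = {
            k: sum(w * s[k] for w, s in zip(weights, states))
            for k in states[0]
        }
    return global_state

clients = [DummyClient(1.0, 0.85), DummyClient(2.0, 0.78), DummyClient(3.0, 0.80)]
final = run_federated_training({"w": 0.0}, clients)
```

Only `state` and `score` cross the client boundary in each round, which is also the mechanism behind the privacy advantage described below.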
---
### 📊 Client Data Sources
| Client | Data Type |
| -------- | ----------------- |
| Client-1 | Financial Twitter |
| Client-2 | Financial News |
| Client-3 | Financial Reports |
No raw data is shared between clients.
---
### 🔐 Privacy Advantage
Only model updates are exchanged — not text data.
This supports data governance and privacy-aware ML.
---
### 📈 Performance (Validation)
| Method | Final Avg F1-Score |
| --------------- | ------------------ |
| Adaptive FedAvg | **0.823** |
Adaptive aggregation showed **smooth convergence and stable performance**
while preserving privacy.
---
### 🚀 Example Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained(
    "harshprasad03/FinBERT-Adaptive"
)
tokenizer = AutoTokenizer.from_pretrained(
    "harshprasad03/FinBERT-Adaptive"
)

text = "Global markets improved after positive earnings reports."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():          # inference only, no gradients needed
    outputs = model(**inputs)

probs = torch.softmax(outputs.logits, dim=-1)
label = model.config.id2label[probs.argmax(dim=-1).item()]
print(probs, label)
```
---
### ⚠️ Limitations
* Trained only on finance-domain text
* Sentiment ≠ market prediction
* Model may inherit dataset biases
* Designed for research use
---
### 📚 Citation
```
Harsh Prasad, Sai Dhole (2025).
Adaptive Federated FinBERT for Financial Sentiment Analysis.
```
---
### 👨‍💻 Authors
**Harsh Prasad**
AI and ML Research
**Sai Dhole**
AI and ML Research
---