# supplychain-finbert

Fine-tuned ProsusAI/finbert for supply chain geopolitical risk sentiment analysis. Built for SupplyGuard AI, a production-grade supply chain risk intelligence platform.
## Model Details
| Property | Value |
|---|---|
| Base model | ProsusAI/finbert (BERT-base fine-tuned on Reuters/Bloomberg) |
| Task | 3-class sentiment: negative / neutral / positive |
| Fine-tuning strategy | Frozen layers 0–9, trainable layers 10–11 + pooler + head |
| Training data | ~40,600 samples (FinGPT financial sentiment + Twitter Financial News + ~70 synthetic geopolitical headlines) |
| Class balancing | Undersampling + weighted CrossEntropyLoss (neg=1.459, neu=1.060, pos=0.729) |
| Test accuracy | 0.6393 |
| Best val accuracy | 0.6454 |
## Performance
| Class | Precision | Recall | F1 |
|---|---|---|---|
| negative | 0.73 | 0.86 | 0.79 |
| neutral | 0.52 | 0.75 | 0.62 |
| positive | 0.74 | 0.45 | 0.56 |
| overall | 0.67 | 0.64 | 0.63 |
## Labels
| ID | Label | Meaning |
|---|---|---|
| 0 | negative | Risk increasing: conflict, sanctions, disaster, supplier failure |
| 1 | neutral | Routine updates, mixed signals, uncertainty |
| 2 | positive | Risk decreasing: stability, trade agreements, recovery |
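The table above corresponds to the `id2label`/`label2id` mappings a Hugging Face model config carries; a minimal sketch (the helper name is illustrative):

```python
# Label mapping for the 3-class head (ID order matches the table above).
id2label = {0: "negative", 1: "neutral", 2: "positive"}
label2id = {label: i for i, label in id2label.items()}

def label_for(class_id: int) -> str:
    """Map a predicted class ID to its risk-direction label."""
    return id2label[class_id]
```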
## Usage

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arunabhachanda/supplychain-finbert",
    return_all_scores=True,  # newer transformers versions: top_k=None
)

# For a single string the pipeline returns a list with one entry per input,
# each entry holding the scores for all three classes:
result = classifier("Ceasefire in the region reopens key supply corridors")
# → [[{'label': 'negative', 'score': 0.04},
#     {'label': 'neutral',  'score': 0.11},
#     {'label': 'positive', 'score': 0.85}]]

# Polarity score used by SupplyGuard AI: P(positive) - P(negative)
scores = {d["label"]: d["score"] for d in result[0]}
polarity = scores["positive"] - scores["negative"]
# → float in [-1.0, +1.0], used as the region_news_sentiment feature
```
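For scoring many headlines at once, the per-headline polarity computation can be factored into a small helper; a minimal sketch (the function name and the `headlines` list are illustrative; the score-list shape matches the pipeline output above):

```python
def polarity(scores):
    """P(positive) - P(negative) for one headline's all-class score list."""
    by_label = {d["label"]: d["score"] for d in scores}
    return by_label["positive"] - by_label["negative"]

# With the pipeline above, a batch call would look like:
# features = [polarity(scores) for scores in classifier(headlines)]
```

Indexing by label name rather than by position keeps the computation correct even if a transformers version reorders the scores.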
## Transfer Learning Architecture

```
ProsusAI/finbert (pre-trained on financial news corpus)
├── BERT Embeddings           [FROZEN]     vocabulary + positional encoding
├── Transformer Layers 0–9    [FROZEN]     general language + financial knowledge
├── Transformer Layers 10–11  [TRAINABLE]  adapted to supply-chain language
├── Pooler                    [TRAINABLE]  [CLS] token representation
└── Classifier Head (768→3)   [TRAINABLE]  new head for 3-class sentiment
```

- Trainable parameters: 14,768,643 (13.5% of total)
- Frozen parameters: 94,715,904 (86.5% of total)
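The freeze pattern above reduces to a predicate over parameter names; a minimal sketch, assuming the standard `bert.*` parameter naming of `BertForSequenceClassification` (the `model` variable in the comment is hypothetical):

```python
import re

def is_trainable(param_name: str) -> bool:
    """True for the parameters left trainable in this fine-tune:
    encoder layers 10-11, the pooler, and the classifier head."""
    if re.match(r"bert\.encoder\.layer\.(10|11)\.", param_name):
        return True
    return param_name.startswith(("bert.pooler.", "classifier."))

# Applied to a loaded model, this would freeze everything else:
# for name, p in model.named_parameters():
#     p.requires_grad = is_trainable(name)
```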
## Training Details
- Optimizer: AdamW (lr=2e-5, weight_decay=0.01)
- Scheduler: Linear warmup (10% steps) + linear decay
- Epochs: 4
- Batch size: 16
- Gradient clipping: max_norm=1.0
- Class weights: neg=1.459, neu=1.060, pos=0.729 (weighted CrossEntropyLoss)
- Split: 80% train / 10% val / 10% test (stratified)
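The warmup-plus-decay schedule above reduces to a per-step multiplier on the base learning rate of 2e-5; a minimal sketch (the function name and `total_steps` value are illustrative):

```python
def lr_factor(step: int, total_steps: int, warmup_frac: float = 0.1) -> float:
    """Linear warmup over the first warmup_frac of steps,
    then linear decay to zero."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return step / warmup_steps
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

In a PyTorch training loop this factor would typically be wrapped in `torch.optim.lr_scheduler.LambdaLR` around `AdamW(lr=2e-5, weight_decay=0.01)`, with `torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)` applied before each optimizer step.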
## Built By

Arunabha Kumar Chanda, M.Sc. Business Intelligence & Data Science, ISM Munich
GitHub: arunabhachanda