---
library_name: transformers
tags: []
---
**Repo:** `learn-abc/banking77-intent-classifier`
# Banking77 Intent Classifier (12-Intent)
## Overview
This model is a **fine-tuned BERT-based intent classifier** designed for **banking and financial customer queries**.
It is trained by **mapping the original 77 Banking77 intents into a smaller, production-friendly set of custom intents**, making it suitable for real-world conversational systems where simpler intent routing is required.
The model performs **single-label text classification** and is intended to be used as an **intent detection component**, not as a conversational or generative model.
---
## Model Details
* **Base model:** `bert-base-uncased`
* **Task:** Text Classification (Intent Classification)
* **Architecture:** `BertForSequenceClassification`
* **Languages:** English (robust to informal and conversational phrasing)
* **Max sequence length:** 64 tokens
* **Output:** One intent label with confidence score
---
## Custom Intent Schema
The original **77 Banking77 intents** were **mapped and consolidated** into the following **12 production intents**:
* `ACCOUNT_INFO`
* `ATM_SUPPORT`
* `CARD_ISSUE`
* `CARD_MANAGEMENT`
* `CARD_REPLACEMENT`
* `CHECK_BALANCE`
* `EDIT_PERSONAL_DETAILS`
* `FAILED_TRANSFER`
* `FEES`
* `LOST_OR_STOLEN_CARD`
* `MINI_STATEMENT`
* `FALLBACK`
Any user query that does not clearly belong to one of the supported categories is mapped to **FALLBACK**.
This design simplifies downstream business logic while retaining strong intent separation.
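For reference, the 12 labels above can be held in code like this. Note that the authoritative id-to-label mapping ships with the checkpoint (`model.config.id2label`); the numeric ordering below is an illustrative assumption, not guaranteed to match the model's:

```python
# The 12 production intents from the schema above.
# Illustrative ordering only; the real mapping is model.config.id2label.
INTENTS = [
    "ACCOUNT_INFO", "ATM_SUPPORT", "CARD_ISSUE", "CARD_MANAGEMENT",
    "CARD_REPLACEMENT", "CHECK_BALANCE", "EDIT_PERSONAL_DETAILS",
    "FAILED_TRANSFER", "FEES", "LOST_OR_STOLEN_CARD", "MINI_STATEMENT",
    "FALLBACK",
]

id2label = dict(enumerate(INTENTS))
label2id = {label: i for i, label in id2label.items()}
```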
---
## Training Data
* **Primary dataset:** [PolyAI Banking77](https://huggingface.co/datasets/PolyAI/banking77)
* **Original training samples:** 10,003
* **Test samples:** 3,080
* **After intent mapping and augmentation:**
  * **Training samples:** 19,846
  * **Includes:** 280 explicitly added `FALLBACK` examples
### Training Intent Distribution (Post-Mapping)
| Intent | Samples |
| --------------------- | ------- |
| ACCOUNT_INFO | 1,983 |
| MINI_STATEMENT | 1,809 |
| FEES | 1,490 |
| FAILED_TRANSFER | 1,045 |
| CARD_MANAGEMENT | 1,026 |
| CARD_REPLACEMENT | 749 |
| ATM_SUPPORT | 743 |
| CARD_ISSUE | 456 |
| CHECK_BALANCE | 352 |
| LOST_OR_STOLEN_CARD | 229 |
| EDIT_PERSONAL_DETAILS | 121 |
| FALLBACK | 280 |
Class imbalance was handled using **class weighting** during training.
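The exact weighting scheme used in training is not documented here, but inverse-frequency weighting over the counts in the table above is a common recipe; a minimal sketch (the resulting weights would typically be passed to `torch.nn.CrossEntropyLoss(weight=...)`):

```python
# Per-intent sample counts from the distribution table above.
counts = {
    "ACCOUNT_INFO": 1983, "MINI_STATEMENT": 1809, "FEES": 1490,
    "FAILED_TRANSFER": 1045, "CARD_MANAGEMENT": 1026, "CARD_REPLACEMENT": 749,
    "ATM_SUPPORT": 743, "CARD_ISSUE": 456, "CHECK_BALANCE": 352,
    "FALLBACK": 280, "LOST_OR_STOLEN_CARD": 229, "EDIT_PERSONAL_DETAILS": 121,
}

total = sum(counts.values())
num_classes = len(counts)

# Inverse-frequency ("balanced") weights: rare intents get larger weights.
weights = {label: total / (num_classes * n) for label, n in counts.items()}

# In training these would be ordered by label id and passed to the loss, e.g.:
# loss_fn = torch.nn.CrossEntropyLoss(weight=torch.tensor(ordered_weights))
```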
---
## Evaluation Results
Final evaluation on the Banking77 test set:
* **Accuracy:** 96.04%
* **F1 (Micro):** 0.960
* **F1 (Macro):** 0.956
These results indicate strong overall performance with good balance across both high-frequency and low-frequency intents.
---
## Usage
### Load the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "learn-abc/banking77-intent-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()  # disable dropout for inference

def predict_intent(text):
    # Tokenize with the same max length used in training (64 tokens).
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=64)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = torch.softmax(outputs.logits, dim=-1)
    pred_id = probs.argmax(dim=-1).item()
    confidence = probs[0][pred_id].item()
    return model.config.id2label[pred_id], confidence

# Example usage:
if __name__ == "__main__":
    test_texts = [
        "What is my account balance?",
        "Show me my last 10 transactions.",
        "I want to update my address.",
        "How do I apply for a loan?",
    ]
    for text in test_texts:
        intent, confidence = predict_intent(text)
        print(f"Input: {text}\nPredicted Intent: {intent} (Confidence: {confidence:.2f})\n")
```
---
## Intended Use
This model is suitable for:
* Banking chatbots
* Voice assistant intent routing
* Customer support automation
* FAQ classification systems
It is designed to be used **together with business rules**, confirmation flows, and fallback handling.
---
## Limitations and Safety Notes
* The model **does not perform authentication or authorization**
* It **must not directly trigger financial actions**
* High-risk intents (e.g. lost or stolen card) should always require explicit user confirmation
* Predictions should be validated with confidence thresholds and fallback logic
This model is **not a replacement for human review** in sensitive workflows.
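The recommendations above can be sketched as a small routing layer. This is a minimal sketch, not part of the model: the 0.70 threshold and the set of high-risk intents are illustrative assumptions to be tuned per deployment:

```python
FALLBACK = "FALLBACK"
# Illustrative example; choose the real set per your risk policy.
HIGH_RISK_INTENTS = {"LOST_OR_STOLEN_CARD", "CARD_REPLACEMENT"}

def route(intent: str, confidence: float, threshold: float = 0.70):
    """Map a raw (intent, confidence) prediction to a routing decision.

    Returns (routed_intent, needs_confirmation). Low-confidence
    predictions fall back; high-risk intents always require an
    explicit user confirmation step before any action is taken.
    """
    if confidence < threshold:
        return FALLBACK, False
    return intent, intent in HIGH_RISK_INTENTS

print(route("CHECK_BALANCE", 0.95))        # ('CHECK_BALANCE', False)
print(route("LOST_OR_STOLEN_CARD", 0.91))  # ('LOST_OR_STOLEN_CARD', True)
print(route("CHECK_BALANCE", 0.40))        # ('FALLBACK', False)
```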
---
## Notes on Model Warnings
During training, warnings related to missing or unexpected keys were observed.
These are expected when fine-tuning a pre-trained BERT checkpoint for a downstream classification task and **do not impact inference correctness**.
---
## Citation
If you use this model, please cite:
* Devlin et al., *BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding*
* PolyAI Banking77 Dataset
---
## Maintainer
Developed and fine-tuned for production-oriented banking intent classification.
---
## Model Card Authors
* **Author:** [Abhishek Singh](https://github.com/SinghIsWriting/)
* **LinkedIn:** [My LinkedIn Profile](https://www.linkedin.com/in/abhishek-singh-bba2662a9)
* **Portfolio:** [Abhishek Singh Portfolio](https://portfolio-abhishek-singh-nine.vercel.app/)