learn-abc/banking14-intents-en-bn-banglish
How to use learn-abc/halsa-bankingdemo-multilingual-intent-classifier with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="learn-abc/halsa-bankingdemo-multilingual-intent-classifier")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("learn-abc/halsa-bankingdemo-multilingual-intent-classifier")
model = AutoModelForSequenceClassification.from_pretrained("learn-abc/halsa-bankingdemo-multilingual-intent-classifier")
```

This model is a fine-tuned MuRIL-based multilingual intent classifier designed for production-grade banking chatbots.
The model performs 14-way intent classification for banking conversational systems.
Base model: google/muril-base-cased (MuRIL), selected for its pretraining on Indian languages, including Bengali and its romanized (transliterated) forms.

The model classifies user messages into the following 14 intents:
- ACCOUNT_INFO
- ATM_SUPPORT
- CARD_ISSUE
- CARD_MANAGEMENT
- CARD_REPLACEMENT
- CHECK_BALANCE
- EDIT_PERSONAL_DETAILS
- FAILED_TRANSFER
- FALLBACK
- FEES
- GREETING
- LOST_OR_STOLEN_CARD
- MINI_STATEMENT
- TRANSFER
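At inference time these intent names come out of `model.config.id2label`. A minimal sketch of what such a mapping looks like (the actual integer ordering is defined by the released model's config; the alphabetical ordering below is an assumption for illustration):

```python
# Hypothetical id2label mapping for the 14 intents; the real index order
# is whatever the model's config.json specifies and may differ.
INTENTS = [
    "ACCOUNT_INFO", "ATM_SUPPORT", "CARD_ISSUE", "CARD_MANAGEMENT",
    "CARD_REPLACEMENT", "CHECK_BALANCE", "EDIT_PERSONAL_DETAILS",
    "FAILED_TRANSFER", "FALLBACK", "FEES", "GREETING",
    "LOST_OR_STOLEN_CARD", "MINI_STATEMENT", "TRANSFER",
]
id2label = {i: name for i, name in enumerate(INTENTS)}
label2id = {name: i for i, name in id2label.items()}

print(len(id2label))   # 14
print(id2label[5])     # CHECK_BALANCE (under this illustrative ordering)
```

In practice, always read the mapping from `model.config.id2label` rather than hard-coding it.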
| Language | Count |
|---|---|
| English (en) | 33,657 |
| Bangla (bn) | 33,657 |
| Banglish (bn-latn) | 33,657 |
An additional 500 code-mixed examples are included.
| Split | Samples |
|---|---|
| Train | 91,051 |
| Test | 20,295 |
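The split above works out to roughly 80/20 train/test, which can be verified directly from the sample counts in the table:

```python
# Train/test counts from the split table above
train, test = 91_051, 20_295
total = train + test
train_frac = train / total

print(f"{total} samples, {train_frac:.1%} train / {1 - train_frac:.1%} test")
# 111346 samples, 81.8% train / 18.2% test
```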
| Intent | Accuracy |
|---|---|
| ACCOUNT_INFO | 99.14% |
| ATM_SUPPORT | 99.70% |
| CARD_ISSUE | 99.25% |
| CARD_MANAGEMENT | 99.43% |
| CARD_REPLACEMENT | 99.08% |
| CHECK_BALANCE | 99.05% |
| EDIT_PERSONAL_DETAILS | 100.00% |
| FAILED_TRANSFER | 98.75% |
| FALLBACK | 97.86% |
| FEES | 99.76% |
| GREETING | 97.41% |
| LOST_OR_STOLEN_CARD | 99.59% |
| MINI_STATEMENT | 98.80% |
| TRANSFER | 99.78% |
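From the per-intent numbers above, the unweighted macro average over the 14 intents is about 99.11%. Note this is computed here from the table, not an officially reported overall accuracy, and it ignores per-intent sample counts:

```python
# Per-intent test accuracies (%), copied from the table above
per_intent_acc = {
    "ACCOUNT_INFO": 99.14, "ATM_SUPPORT": 99.70, "CARD_ISSUE": 99.25,
    "CARD_MANAGEMENT": 99.43, "CARD_REPLACEMENT": 99.08,
    "CHECK_BALANCE": 99.05, "EDIT_PERSONAL_DETAILS": 100.00,
    "FAILED_TRANSFER": 98.75, "FALLBACK": 97.86, "FEES": 99.76,
    "GREETING": 97.41, "LOST_OR_STOLEN_CARD": 99.59,
    "MINI_STATEMENT": 98.80, "TRANSFER": 99.78,
}

macro_avg = sum(per_intent_acc.values()) / len(per_intent_acc)
print(f"Unweighted macro-average accuracy: {macro_avg:.2f}%")
# Unweighted macro-average accuracy: 99.11%
```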
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "learn-abc/banking-multilingual-intent-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

# Prediction function
def predict_intent(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=64)
    inputs = {k: v.to(device) for k, v in inputs.items()}
    with torch.no_grad():
        outputs = model(**inputs)
    prediction = torch.argmax(outputs.logits, dim=-1).item()
    confidence = torch.softmax(outputs.logits, dim=-1)[0][prediction].item()
    predicted_intent = model.config.id2label[prediction]
    return {
        "intent": predicted_intent,
        "confidence": confidence,
    }

# Example usage - English
result = predict_intent("what is my balance")
print(f"Intent: {result['intent']}, Confidence: {result['confidence']:.2f}")
# Output: Intent: CHECK_BALANCE, Confidence: 0.99

# Example usage - Bangla ("What is my balance?")
result = predict_intent("আমার ব্যালেন্স কত")
print(f"Intent: {result['intent']}, Confidence: {result['confidence']:.2f}")
# Output: Intent: CHECK_BALANCE, Confidence: 0.98

# Example usage - Banglish / Romanized ("How much balance do I have?")
result = predict_intent("amar balance koto ache")
print(f"Intent: {result['intent']}, Confidence: {result['confidence']:.2f}")
# Output: Intent: CHECK_BALANCE, Confidence: 0.97

# Example usage - Code-mixed ("Show my last 10 transactions")
result = predict_intent("আমার last 10 transaction দেখাও")
print(f"Intent: {result['intent']}, Confidence: {result['confidence']:.2f}")
# Output: Intent: MINI_STATEMENT, Confidence: 0.98
```
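In production, chatbots typically route low-confidence predictions to the FALLBACK intent rather than acting on them. A minimal sketch of such a wrapper around the `predict_intent()` result above (the 0.7 threshold and the `route_intent` helper are illustrative choices, not part of the released model):

```python
def route_intent(prediction, threshold=0.7):
    """Route a {'intent', 'confidence'} result, falling back when unsure.

    `prediction` is the dict returned by predict_intent(); the threshold
    value is an application-level tuning knob, not a model property.
    """
    if prediction["confidence"] < threshold:
        return "FALLBACK"
    return prediction["intent"]

print(route_intent({"intent": "TRANSFER", "confidence": 0.98}))  # TRANSFER
print(route_intent({"intent": "TRANSFER", "confidence": 0.41}))  # FALLBACK
```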
This project is licensed under the Apache 2.0 License.
For any inquiries or support, please reach out to: