Tags: Text Classification · Transformers · Safetensors · English · distilbert · call-center · mva · legal-intake · text-embeddings-inference
How to use a1hmad23/mva-call-classifier-v5 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="a1hmad23/mva-call-classifier-v5")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("a1hmad23/mva-call-classifier-v5")
model = AutoModelForSequenceClassification.from_pretrained("a1hmad23/mva-call-classifier-v5")
```
MVA Call Classifier (v5)
Inputs
The model expects lowercase, ASR-style transcripts; inputs are truncated to 128 tokens.
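Raw call transcripts may contain casing and punctuation the model never saw during training, so it can help to normalize text first. The helper below is an illustrative guess at ASR-style normalization, not the repo's actual preprocessing:

```python
import re

def normalize_transcript(text: str) -> str:
    """Approximate ASR-style input: lowercase, strip punctuation
    (keeping apostrophes), and collapse whitespace.

    Hypothetical helper — the model card does not publish the exact
    training-time preprocessing."""
    text = text.lower()
    text = re.sub(r"[^\w\s']", " ", text)  # drop punctuation except apostrophes
    return re.sub(r"\s+", " ", text).strip()

print(normalize_transcript("Yes, I was in an ACCIDENT last month!"))
# -> "yes i was in an accident last month"
```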
```python
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification
import torch

model_id = "a1hmad23/mva-call-classifier-v5"
tokenizer = DistilBertTokenizerFast.from_pretrained(model_id)
model = DistilBertForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "yes i was in an accident last month"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(-1).item()
print(model.config.id2label[pred_id])
```
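With 39 classes, a bare argmax hides how close the runner-up labels are. One option is to softmax the logits and inspect the top-k labels with their probabilities. The helper below is a sketch, not part of the model repo; the dummy logits and label names are stand-ins for `model(**inputs).logits` and `model.config.id2label`:

```python
import torch
import torch.nn.functional as F

def top_k_labels(logits: torch.Tensor, id2label: dict, k: int = 3):
    """Return the k most probable (label, probability) pairs for one example."""
    probs = F.softmax(logits, dim=-1).squeeze(0)
    top = torch.topk(probs, k=min(k, probs.numel()))
    return [(id2label[i.item()], p.item()) for i, p in zip(top.indices, top.values)]

# Dummy 4-class example; with the real model, pass model(**inputs).logits
# and model.config.id2label instead.
logits = torch.tensor([[2.0, 0.5, -1.0, 0.1]])
id2label = {0: "accident", 1: "injury", 2: "other", 3: "billing"}
print(top_k_labels(logits, id2label, k=2))
```

A low top-1 probability or a tight top-1/top-2 gap is a reasonable trigger for routing the call to human review.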
Labels
39 classes. The full mapping is in label2id.json and embedded in config.json.
Label semantics, precedence rules, and confusable-neighbor decision rules are
documented internally and are not redistributed with this model.
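Because the id2label mapping is embedded in config.json, you can inspect the label set without downloading the model weights. The sketch below parses a config-shaped JSON payload directly; the three label names are hypothetical stand-ins for the real 39-class mapping (in practice, `AutoConfig.from_pretrained(model_id).id2label` gives the same result):

```python
import json

# Hypothetical excerpt of config.json; the real file defines all 39 classes.
config_json = '''
{
  "id2label": {"0": "accident", "1": "injury", "2": "other"},
  "label2id": {"accident": 0, "injury": 1, "other": 2}
}
'''

def load_label_maps(raw: str):
    """Parse id2label/label2id from a config.json payload.

    JSON object keys are always strings, so the id2label indices
    are cast back to int here."""
    cfg = json.loads(raw)
    id2label = {int(k): v for k, v in cfg["id2label"].items()}
    label2id = {k: int(v) for k, v in cfg["label2id"].items()}
    # Sanity check: the two maps must be inverses of each other.
    assert all(label2id[name] == idx for idx, name in id2label.items())
    return id2label, label2id

id2label, label2id = load_label_maps(config_json)
print(sorted(label2id))
```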