How to use sharmax-vikas/bert-base-banking77-pt2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="sharmax-vikas/bert-base-banking77-pt2")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sharmax-vikas/bert-base-banking77-pt2")
model = AutoModelForSequenceClassification.from_pretrained("sharmax-vikas/bert-base-banking77-pt2")
```

This model is a fine-tuned version of bert-base-uncased on the PolyAI/banking77 dataset. It achieves the following results on the evaluation set (final epoch in the table below):

- Validation loss: 0.3089
- F1: 0.9362
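Building on the direct-load snippet above, here is a minimal sketch of running inference without the pipeline wrapper, mapping the top logit back to one of the 77 banking intents (the example query is made up for illustration):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "sharmax-vikas/bert-base-banking77-pt2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

# Tokenize a single banking query and run a forward pass
inputs = tokenizer("How do I activate my new card?", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to one of the 77 intent labels
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```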
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following results were recorded per epoch during training:
| Training Loss | Epoch | Step | Validation Loss | F1 |
|---|---|---|---|---|
| 3.261 | 1.0 | 313 | 1.0894 | 0.7969 |
| 0.5499 | 2.0 | 626 | 0.4196 | 0.9103 |
| 0.305 | 3.0 | 939 | 0.3403 | 0.9157 |
| 0.1277 | 4.0 | 1252 | 0.3020 | 0.9251 |
| 0.0857 | 5.0 | 1565 | 0.2911 | 0.9306 |
| 0.0347 | 6.0 | 1878 | 0.2865 | 0.9333 |
| 0.0251 | 7.0 | 2191 | 0.2994 | 0.9362 |
| 0.0111 | 8.0 | 2504 | 0.2970 | 0.9365 |
| 0.0075 | 9.0 | 2817 | 0.3102 | 0.9364 |
| 0.0058 | 10.0 | 3130 | 0.3089 | 0.9362 |
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

ckpt = "sharmax-vikas/bert-base-banking77-pt2"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
classifier("What is the base of the exchange rates?")
# Output: [{'label': 'exchange_rate', 'score': 0.9961327314376831}]
```
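The pipeline can also return several candidate intents at once; `top_k` is a standard argument of the text-classification pipeline (the query is reused from above):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="sharmax-vikas/bert-base-banking77-pt2")

# Return the three highest-scoring intents instead of only the best one
results = classifier("What is the base of the exchange rates?", top_k=3)
for r in results:
    print(f"{r['label']}: {r['score']:.4f}")
```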
Base model: google-bert/bert-base-uncased