---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-banking-intent
results: []
datasets:
- hf-tuner/banking-intent
language:
- en
pipeline_tag: text-classification
---
# bert-banking-intent
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the
[hf-tuner/banking-intent](https://huggingface.co/datasets/hf-tuner/banking-intent) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0079
- Accuracy: 0.9993
### How to Get Started with the Model
```py
from transformers import pipeline

classifier = pipeline("text-classification", model="hf-tuner/bert-banking-intent")
classifier("Please help me get a new card, I reside in the United States.")
# [{'label': 'country_support', 'score': 0.997}]
```
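For more control over tokenization and inference, the checkpoint can also be loaded with the Auto classes. A minimal sketch, assuming the exported config carries the usual `id2label` mapping (standard for Trainer-exported classifiers):

```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hf-tuner/bert-banking-intent"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "Please help me get a new card, I reside in the United States.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label name
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```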
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
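For reference, a minimal sketch of how these hyperparameters map onto `TrainingArguments`/`Trainer`. The dataset column names (`text`, `label`), the split names, and the output directory are assumptions for illustration, not taken from the original training script:

```py
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("hf-tuner/banking-intent")  # assumes 'text' and 'label' columns
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

num_labels = dataset["train"].features["label"].num_classes  # assumes a ClassLabel feature
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=num_labels)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

args = TrainingArguments(
    output_dir="bert-banking-intent",   # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],     # assumed evaluation split name
    processing_class=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```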
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9901 | 1.0 | 626 | 1.5437 | 0.8104 |
| 0.8228 | 2.0 | 1252 | 0.5328 | 0.9335 |
| 0.3901 | 3.0 | 1878 | 0.2214 | 0.9678 |
| 0.1889 | 4.0 | 2504 | 0.1041 | 0.9830 |
| 0.0973 | 5.0 | 3130 | 0.0518 | 0.9920 |
| 0.0733 | 6.0 | 3756 | 0.0322 | 0.9944 |
| 0.0405 | 7.0 | 4382 | 0.0167 | 0.9976 |
| 0.0214 | 8.0 | 5008 | 0.0114 | 0.9988 |
| 0.0175 | 9.0 | 5634 | 0.0091 | 0.9993 |
| 0.0138 | 10.0 | 6260 | 0.0079 | 0.9993 |
### Framework versions
- Transformers 4.57.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1