---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-model-intent-classification
results: []
---
# bert-model-intent-classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an intent-detection dataset.
## Model description
This is the base BERT model fine-tuned for intent classification (a 21-class text-classification task) on an intent-detection dataset.
## Intended uses & limitations
More information needed
## How to use
Load the fine-tuned model and tokenizer as follows (define `id_to_label` and `label_to_id` first, using the maps listed under "Label Maps used" below):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "ArunAIML/bert-model-intent-classification",
    num_labels=21,
    id2label=id_to_label,
    label2id=label_to_id,
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```
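At inference time, the predicted intent is the argmax over the model's 21 logits, mapped back to a name through the id-to-label map. A minimal sketch of that last step, using made-up logits in place of real model output (in practice they would come from `model(**tokenizer(text, return_tensors="pt")).logits[0].tolist()`):

```python
# Hypothetical logits for the 21-class head; index 8 ('EMI') is made the largest.
logits = [0.1] * 21
logits[8] = 4.2

# Subset of the full id_to_label map listed in this card.
id_to_label = {8: 'EMI'}

pred_id = max(range(len(logits)), key=logits.__getitem__)
print(id_to_label[pred_id])  # EMI
```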
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 10
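The hyperparameters above correspond roughly to the following `TrainingArguments`; this is a reconstruction from the list, not the original training script, and `output_dir` is an assumption:

```python
from transformers import TrainingArguments

# Approximate reconstruction of the training configuration listed above.
training_args = TrainingArguments(
    output_dir="bert-model-intent-classification",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```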
### Label Maps used

```python
id_to_label = {0: '100_NIGHT_TRIAL_OFFER', 1: 'ABOUT_SOF_MATTRESS', 2: 'CANCEL_ORDER', 3: 'CHECK_PINCODE', 4: 'COD', 5: 'COMPARISON', 6: 'DELAY_IN_DELIVERY', 7: 'DISTRIBUTORS', 8: 'EMI', 9: 'ERGO_FEATURES', 10: 'LEAD_GEN', 11: 'MATTRESS_COST', 12: 'OFFERS', 13: 'ORDER_STATUS', 14: 'ORTHO_FEATURES', 15: 'PILLOWS', 16: 'PRODUCT_VARIANTS', 17: 'RETURN_EXCHANGE', 18: 'SIZE_CUSTOMIZATION', 19: 'WARRANTY', 20: 'WHAT_SIZE_TO_ORDER'}

label_to_id = {'100_NIGHT_TRIAL_OFFER': 0, 'ABOUT_SOF_MATTRESS': 1, 'CANCEL_ORDER': 2, 'CHECK_PINCODE': 3, 'COD': 4, 'COMPARISON': 5, 'DELAY_IN_DELIVERY': 6, 'DISTRIBUTORS': 7, 'EMI': 8, 'ERGO_FEATURES': 9, 'LEAD_GEN': 10, 'MATTRESS_COST': 11, 'OFFERS': 12, 'ORDER_STATUS': 13, 'ORTHO_FEATURES': 14, 'PILLOWS': 15, 'PRODUCT_VARIANTS': 16, 'RETURN_EXCHANGE': 17, 'SIZE_CUSTOMIZATION': 18, 'WARRANTY': 19, 'WHAT_SIZE_TO_ORDER': 20}
```

Use these exact label maps to reproduce the results.
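Rather than copying the long dict literals, the two maps can be rebuilt from the ordered label list, which also makes it easy to check that they are exact inverses (a plain-Python sketch):

```python
# Ordered class names; index i corresponds to label id i.
labels = [
    '100_NIGHT_TRIAL_OFFER', 'ABOUT_SOF_MATTRESS', 'CANCEL_ORDER', 'CHECK_PINCODE',
    'COD', 'COMPARISON', 'DELAY_IN_DELIVERY', 'DISTRIBUTORS', 'EMI', 'ERGO_FEATURES',
    'LEAD_GEN', 'MATTRESS_COST', 'OFFERS', 'ORDER_STATUS', 'ORTHO_FEATURES', 'PILLOWS',
    'PRODUCT_VARIANTS', 'RETURN_EXCHANGE', 'SIZE_CUSTOMIZATION', 'WARRANTY',
    'WHAT_SIZE_TO_ORDER',
]

id_to_label = dict(enumerate(labels))
label_to_id = {label: i for i, label in id_to_label.items()}

assert len(id_to_label) == 21
assert all(label_to_id[name] == i for i, name in id_to_label.items())
```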
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
### Results
The model was evaluated on a validation set; the per-class classification report is below:
| Label | Precision | Recall | F1-Score | Support |
| :------------------------ | :-------- | :----- | :------- | :------ |
| `100_NIGHT_TRIAL_OFFER` | 1.00 | 1.00 | 1.00 | 4 |
| `ABOUT_SOF_MATTRESS` | 1.00 | 1.00 | 1.00 | 2 |
| `CANCEL_ORDER` | 1.00 | 1.00 | 1.00 | 2 |
| `CHECK_PINCODE` | 1.00 | 1.00 | 1.00 | 2 |
| `COD` | 1.00 | 1.00 | 1.00 | 2 |
| `COMPARISON` | 0.33 | 0.50 | 0.40 | 2 |
| `DELAY_IN_DELIVERY` | 1.00 | 1.00 | 1.00 | 2 |
| `DISTRIBUTORS` | 1.00 | 1.00 | 1.00 | 7 |
| `EMI` | 0.89 | 1.00 | 0.94 | 8 |
| `ERGO_FEATURES` | 1.00 | 1.00 | 1.00 | 2 |
| `LEAD_GEN` | 1.00 | 1.00 | 1.00 | 4 |
| `MATTRESS_COST` | 1.00 | 0.80 | 0.89 | 5 |
| `OFFERS` | 1.00 | 1.00 | 1.00 | 2 |
| `ORDER_STATUS` | 1.00 | 0.75 | 0.86 | 4 |
| `ORTHO_FEATURES` | 1.00 | 1.00 | 1.00 | 4 |
| `PILLOWS` | 1.00 | 1.00 | 1.00 | 2 |
| `PRODUCT_VARIANTS` | 0.50 | 0.50 | 0.50 | 4 |
| `RETURN_EXCHANGE` | 1.00 | 0.67 | 0.80 | 3 |
| `SIZE_CUSTOMIZATION` | 0.50 | 0.50 | 0.50 | 2 |
| `WARRANTY` | 0.67 | 1.00 | 0.80 | 2 |
| `WHAT_SIZE_TO_ORDER` | 0.80 | 1.00 | 0.89 | 4 |
| **Accuracy** | | | **0.89** | **66** |
| **Macro Avg** | 0.90 | 0.89 | 0.89 | 66 |
| **Weighted Avg**          | 0.91      | 0.89   | 0.90     | 66      |