---
base_model:
- distilbert/distilbert-base-uncased
datasets:
- SantmanKT/hr-intent-dataset
---
# DistilBERT for Intent Classification

## Overview
- **Architecture:** DistilBERT (distilbert-base-uncased) for sequence classification
- **Task:** Single-label intent classification of HR queries, with the user query and its context merged into a single input string
- **Dataset:** ~133 samples, 12 intent classes, 80/20 train/validation split
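
The dataset referenced above can be pulled from the Hub and split with the `datasets` library; a minimal sketch (the default `train` split, the split seed, and the column layout are assumptions not stated in this card):

```python
from datasets import load_dataset

# Load the intent dataset and carve out a 20% validation split
ds = load_dataset("SantmanKT/hr-intent-dataset")["train"]
splits = ds.train_test_split(test_size=0.2, seed=42)  # 80/20; seed is an assumed value
train_raw, val_raw = splits["train"], splits["test"]
```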

## Training Details
- **Epochs:** 5
- **Batch Size:** 8
- **Learning Rate:** 5e-5
- **Optimizer:** AdamW
- **Loss:** CrossEntropyLoss
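
A minimal fine-tuning sketch matching these settings, continuing from the split above (the `text`/`label` column names and the tokenization step are assumptions; `Trainer` defaults to AdamW and cross-entropy loss for classification, as listed):

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert/distilbert-base-uncased", num_labels=12  # 12 intent classes
)

def tokenize(batch):
    # Fixed-length padding keeps the default data collator happy
    return tokenizer(batch["text"], truncation=True, padding="max_length")

train_ds = train_raw.map(tokenize, batched=True)
val_ds = val_raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="distilbert-hr-intent",  # placeholder output directory
    num_train_epochs=5,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=val_ds)
trainer.train()
trainer.evaluate()
```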

## Evaluation Metrics (Validation Set)
| Metric     | Value    |
|------------|----------|
| Accuracy   | 88.89%   |
| Precision  | 100%     |
| Recall     | 88.89%   |
| Loss       | 1.4586   |
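
Metrics like these are typically computed with a `compute_metrics` hook passed to `Trainer`; a sketch (the averaging mode behind the reported precision/recall is not stated, so weighted averaging is assumed):

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision_score(labels, preds, average="weighted", zero_division=0),
        "recall": recall_score(labels, preds, average="weighted"),
    }
```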

## Usage Example

The snippet below assumes the fine-tuned model and tokenizer have been saved to `model_dir` (a placeholder for your checkpoint path or Hub repo id):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_dir = "path/to/fine-tuned-checkpoint"  # placeholder: local dir or Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir)

text = "Share offer with Santhosh [context: {domain: HR, topic: onboarding, subject: offer letter}]"
inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=1).item()  # index of the predicted intent class
```
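
To map the predicted id back to an intent name, the `id2label` mapping in the model config can be used (assuming label names were stored at training time):

```python
intent = model.config.id2label[pred_id]  # e.g. "share_offer_letter" (hypothetical label)
```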

## Comments
- Results on the validation set are consistently strong, though the validation split is small (~27 examples given the 80/20 split of ~133 samples).
- The model is a good fit for HR chatbot/automation intent routing.
- More training data or further hyperparameter tuning may yield additional improvement.

---

*For best results, ensure your production inference pipeline preprocesses and tokenizes input exactly as done for the training data.*
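
As an illustration, a hypothetical helper that reproduces the merged query/context format from the usage example (the exact training-time template is an assumption):

```python
def build_input(query: str, domain: str, topic: str, subject: str) -> str:
    """Merge a user query with its context in the '[context: {...}]' format used above."""
    return f"{query} [context: {{domain: {domain}, topic: {topic}, subject: {subject}}}]"

text = build_input("Share offer with Santhosh", "HR", "onboarding", "offer letter")
```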

---

**In summary:**  
This model follows the standard DistilBERT fine-tuning recipe for intent classification, and the evaluation and usage details above should make it straightforward to adopt in an HR chatbot or automation pipeline.