---
{}
---
# Test-Model
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B), specialized for document classification. It achieves the following results on the evaluation set:
- **Accuracy**: 0.924647
- **Hit rate**: 0.977464
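The card does not define how these two headline metrics are computed. A minimal sketch of one plausible reading, assuming accuracy is top-1 correctness and hit rate counts the true label anywhere in the top-2 ranked predictions (the top-2 definition is an assumption, not stated in this card):

```python
# Sketch of the two headline metrics. The hit-rate definition (top-2
# membership) is an assumption; the card does not specify it.

def accuracy(ranked_preds, labels):
    """Top-1 accuracy: fraction of documents whose best guess is correct."""
    return sum(p[0] == y for p, y in zip(ranked_preds, labels)) / len(labels)

def hit_rate(ranked_preds, labels, k=2):
    """Assumed definition: true label appears among the top-k guesses."""
    return sum(y in p[:k] for p, y in zip(ranked_preds, labels)) / len(labels)

# Toy data: each entry is the model's labels ranked by confidence.
ranked = [[0, 3], [1, 0], [2, 1], [3, 5]]
truth = [0, 1, 1, 3]

print(accuracy(ranked, truth))  # 0.75: three of four top-1 guesses correct
print(hit_rate(ranked, truth))  # 1.0: every true label is in the top 2
```

Under this reading, hit rate is always at least as high as accuracy, which matches the pattern in the reported numbers.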
## Training Parameters
- **Epochs**: 5
- **Batch Size**: 20
- **Threshold**: 0.975
- **Token Length**: 512
- **Dataset Size**: 10000
- **Evenly Distributed**: True
- **Municipalities**: Herøy, Kongsberg, Bømlo, Luster, Målselv
- **Learning Rate**: 0.0001
- **Weight Decay**: 0.01
- **Eval Accumulation Steps**: 4
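The card lists a **Threshold** of 0.975 but does not say how it is applied. A minimal sketch of one plausible interpretation, assuming the classifier only accepts a prediction when the top softmax probability clears the threshold (a confident-or-abstain gate; this behaviour is an assumption, not documented here):

```python
import math

THRESHOLD = 0.975  # from the training parameters above

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, threshold=THRESHOLD):
    """Return the predicted label, or None when confidence is too low.

    The confident-or-abstain behaviour is an assumption about what the
    threshold controls; the card itself does not define it.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return best if probs[best] >= threshold else None

print(classify([12.0, 0.5, 0.1]))  # confident: label 0
print(classify([1.2, 1.0, 0.9]))   # too close to call: None
```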
## Evaluation Results
| Label | Accuracy | Accuracy Accumulation | Hit Rate | Hit Rate Accumulation |
|:-----:|:--------:|:---------------------:|:--------:|:---------------------:|
| 0 | 0.93 | 0.93 | 0.98 | 0.98 |
| 1 | 0.95 | 0.95 | 0.97 | 0.97 |
| 2 | 0.91 | 0.91 | 0.98 | 0.98 |
| 3 | 0.40 | 0.40 | 0.57 | 0.57 |
| 4 | 0.25 | 0.25 | 0.63 | 0.63 |
| 5 | 0.86 | 0.86 | 0.94 | 0.94 |
| 6 | 0.99 | 0.99 | 1.04 | 1.04 |
| 7 | 0.94 | 0.94 | 0.98 | 0.98 |
| 8 | 0.67 | 0.67 | 0.86 | 0.86 |
| 9 | 0.99 | 0.99 | 0.99 | 0.99 |
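The per-label figures above can be derived by grouping predictions by their true label. A minimal sketch, assuming per-label accuracy is top-1 correctness within each label group (the hit-rate columns would follow the same grouping, with whatever hit-rate definition the card actually uses):

```python
from collections import defaultdict

def per_label_accuracy(preds, labels):
    """Top-1 accuracy computed separately for each true label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for p, y in zip(preds, labels):
        total[y] += 1
        correct[y] += int(p == y)
    return {y: correct[y] / total[y] for y in sorted(total)}

# Toy predictions over three labels.
preds = [0, 0, 1, 2, 1, 1]
labels = [0, 0, 1, 1, 1, 2]

print(per_label_accuracy(preds, labels))  # label 0: 1.0, label 1: 2/3, label 2: 0.0
```

A per-label breakdown like this is what exposes the weak classes (labels 3 and 4 in the table), which an aggregate accuracy of 0.92 would otherwise hide.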