---
license: apache-2.0
base_model: bert-base-multilingual-uncased
tags:
- generated_from_trainer
metrics:
- recall
- accuracy
model-index:
- name: multibert_testrun
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# multibert_testrun

This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4300
- Precision: 0.8488
- Recall: 0.7908
- F-measure: 0.8172
- Accuracy: 0.9404
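
A quick way to try the model, assuming it exposes a token classification head (the precision/recall/F-measure/accuracy metrics suggest NER-style token classification, though the task is not stated in this card). `your-username/multibert_testrun` is a placeholder repository id:

```python
# Hedged usage sketch: assumes a token classification head and a placeholder
# repository id; adjust both if the actual task or repo path differs.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "your-username/multibert_testrun"  # placeholder, not a confirmed repo path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

nlp = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge sub-word pieces into word-level entities
)
print(nlp("Angela Merkel visited Paris in 2019."))
```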

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
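
For reference, the hyperparameters above map roughly onto the following `TrainingArguments`; this is a reconstruction for illustration, not the original training script, and the output directory is a placeholder.

```python
# Hedged reconstruction of the training configuration from the hyperparameters
# listed above; paths and dataset handling are placeholders, not from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="multibert_testrun",   # placeholder output directory
    learning_rate=7.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=14,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",      # assumption: the results table shows one eval per epoch
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default Trainer optimizer,
# so no explicit optimizer configuration is needed to match the card.
```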

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision  | Recall | F-measure | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:----------:|:------:|:---------:|:--------:|
| 0.4196        | 1.0   | 269  | 0.3190          | 0.8426     | 0.7090 | 0.7230    | 0.9078   |
| 0.2111        | 2.0   | 538  | 0.2981          | 0.7730     | 0.7491 | 0.7551    | 0.9190   |
| 0.1275        | 3.0   | 807  | 0.2666          | 0.8158     | 0.7744 | 0.7915    | 0.9346   |
| 0.0868        | 4.0   | 1076 | 0.2929          | 0.8276     | 0.7891 | 0.8050    | 0.9349   |
| 0.0608        | 5.0   | 1345 | 0.3253          | 0.8370     | 0.7803 | 0.8043    | 0.9353   |
| 0.0353        | 6.0   | 1614 | 0.3723          | 0.8153     | 0.7999 | 0.8051    | 0.9360   |
| 0.0254        | 7.0   | 1883 | 0.4149          | 0.8266     | 0.7688 | 0.7934    | 0.9339   |
| 0.0203        | 8.0   | 2152 | 0.4399          | 0.8356     | 0.7755 | 0.8028    | 0.9357   |
| 0.0146        | 9.0   | 2421 | 0.4413          | 0.8295     | 0.7845 | 0.8045    | 0.9349   |
| 0.0108        | 10.0  | 2690 | 0.4300          | 0.8488     | 0.7908 | 0.8172    | 0.9404   |
| 0.0054        | 11.0  | 2959 | 0.4428          | 0.8317     | 0.7858 | 0.8062    | 0.9357   |
| 0.004         | 12.0  | 3228 | 0.4681          | 0.8403     | 0.7861 | 0.8095    | 0.9375   |
| 0.0019        | 13.0  | 3497 | 0.4725          | 0.8409     | 0.7901 | 0.8123    | 0.9386   |
| 0.0013        | 14.0  | 3766 | 0.4839          | 0.8437     | 0.7895 | 0.8137    | 0.9404   |


### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1