---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: NoDuplicates
  results: []
---

# NoDuplicates

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4279
- Accuracy: 0.9128
- F1 Macro: 0.8384
- F1 Class 0: 0.9406
- F1 Class 1: 0.3333
- F1 Class 2: 0.9127
- F1 Class 3: 0.6471
- F1 Class 4: 0.8254
- F1 Class 5: 0.8293
- F1 Class 6: 0.8767
- F1 Class 7: 0.7606
- F1 Class 8: 0.7500
- F1 Class 9: 0.9878
- F1 Class 10: 0.9444
- F1 Class 11: 0.9630
- F1 Class 12: 0.9265
- F1 Class 13: 0.8980
- F1 Class 14: 0.8444
- F1 Class 15: 0.8132
- F1 Class 16: 0.7778
- F1 Class 17: 0.9651
- F1 Class 18: 0.9574
- F1 Class 19: 0.8148

## Model description

NoDuplicates is a [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) sequence classifier fine-tuned for a 20-class text classification task (class ids 0 to 19 in the metrics above). DistilBERT is a distilled, uncased variant of BERT that retains most of its accuracy with roughly 40% fewer parameters and faster inference. The broader training context for this checkpoint is not documented.

## Intended uses & limitations

The model classifies English text into one of 20 categories whose label names are not documented. Per-class performance varies widely: class 1 never exceeds an F1 of 0.3333 and class 8 only reaches 0.75 late in training, which typically points to very few examples for those labels. Validate per-class behaviour on held-out data before relying on predictions for the minority classes.
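
As a minimal usage sketch: the full Hub repo id is not stated in this card, so `your-username/NoDuplicates` below is a placeholder, and an unmodified config will return generic `LABEL_0` through `LABEL_19` ids rather than human-readable class names.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path of this model.
classifier = pipeline("text-classification", model="your-username/NoDuplicates")

# Without an id2label mapping in the config, labels come back as
# "LABEL_0" ... "LABEL_19".
print(classifier("Example input text to classify."))
# e.g. [{'label': 'LABEL_3', 'score': 0.97}]
```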

## Training and evaluation data

The training and evaluation datasets are not documented. From the reported metrics, the task has 20 classes, and the per-class F1 spread suggests a noticeably imbalanced label distribution.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
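
As a reproduction sketch, these settings map onto `transformers.TrainingArguments` as follows. The dataset below is a dummy placeholder (the real corpus is not documented), and `num_labels=20` is inferred from the per-class metrics; the Adam betas/epsilon and linear scheduler listed above are already the `Trainer` defaults.

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=20  # inferred from F1 Class 0-19
)

# Dummy placeholder data -- the real training corpus is not documented.
raw = Dataset.from_dict({"text": ["first example", "second example"], "label": [0, 1]})
dataset = raw.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="NoDuplicates",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    seed=42,
    lr_scheduler_type="linear",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    eval_dataset=dataset,  # placeholder: use a proper held-out split
    tokenizer=tokenizer,   # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```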

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Class 0 | F1 Class 1 | F1 Class 2 | F1 Class 3 | F1 Class 4 | F1 Class 5 | F1 Class 6 | F1 Class 7 | F1 Class 8 | F1 Class 9 | F1 Class 10 | F1 Class 11 | F1 Class 12 | F1 Class 13 | F1 Class 14 | F1 Class 15 | F1 Class 16 | F1 Class 17 | F1 Class 18 | F1 Class 19 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|
| 1.4862        | 0.27  | 300  | 0.8201          | 0.7845   | 0.4484   | 0.8675     | 0.0        | 0.8627     | 0.0        | 0.6733     | 0.0        | 0.6627     | 0.0        | 0.0        | 0.9862     | 0.1935      | 0.9600      | 0.8299      | 0.0833      | 0.2353      | 0.24        | 0.0400      | 0.8852      | 0.9451      | 0.5033      |
| 0.7269        | 0.53  | 600  | 0.5951          | 0.8491   | 0.6504   | 0.9048     | 0.0        | 0.8567     | 0.0        | 0.7596     | 0.6111     | 0.6887     | 0.0        | 0.0        | 0.9877     | 0.8033      | 0.9286      | 0.8798      | 0.9167      | 0.74        | 0.6857      | 0.5823      | 0.9506      | 0.9485      | 0.7640      |
| 0.5429        | 0.8   | 900  | 0.5375          | 0.8637   | 0.7086   | 0.8904     | 0.0        | 0.8589     | 0.0        | 0.7254     | 0.7805     | 0.8215     | 0.6769     | 0.0        | 0.9877     | 0.7833      | 1.0         | 0.9022      | 0.9130      | 0.7912      | 0.7733      | 0.7048      | 0.9032      | 0.9474      | 0.7119      |
| 0.4594        | 1.06  | 1200 | 0.5110          | 0.8805   | 0.7113   | 0.9099     | 0.0        | 0.8925     | 0.0        | 0.7706     | 0.7391     | 0.8139     | 0.4091     | 0.0        | 0.9908     | 0.8785      | 1.0         | 0.8983      | 0.8936      | 0.8090      | 0.7556      | 0.7907      | 0.9529      | 0.9574      | 0.7647      |
| 0.3484        | 1.33  | 1500 | 0.4679          | 0.8951   | 0.7667   | 0.9180     | 0.0        | 0.9080     | 0.6957     | 0.8        | 0.7619     | 0.8299     | 0.6875     | 0.0        | 0.9908     | 0.8909      | 1.0         | 0.9196      | 0.9130      | 0.8172      | 0.7865      | 0.7527      | 0.9398      | 0.9474      | 0.7755      |
| 0.3744        | 1.59  | 1800 | 0.4359          | 0.8951   | 0.7774   | 0.9290     | 0.0        | 0.8815     | 0.8462     | 0.8049     | 0.7805     | 0.8449     | 0.7059     | 0.0        | 0.9908     | 0.9346      | 1.0         | 0.9143      | 0.8980      | 0.8387      | 0.7475      | 0.7179      | 0.9647      | 0.9583      | 0.7895      |
| 0.3514        | 1.86  | 2100 | 0.5161          | 0.8903   | 0.7592   | 0.9109     | 0.0        | 0.8973     | 0.6429     | 0.7603     | 0.7907     | 0.8571     | 0.7077     | 0.0        | 0.9908     | 0.9346      | 1.0         | 0.8971      | 0.8936      | 0.7042      | 0.7324      | 0.7857      | 0.9595      | 0.9574      | 0.7609      |
| 0.3111        | 2.12  | 2400 | 0.4327          | 0.9080   | 0.8027   | 0.9283     | 0.3333     | 0.9141     | 0.7407     | 0.8207     | 0.8095     | 0.8622     | 0.7606     | 0.0        | 0.9908     | 0.9298      | 0.9630      | 0.9215      | 0.9167      | 0.8041      | 0.8         | 0.8132      | 0.9651      | 0.9574      | 0.8224      |
| 0.2088        | 2.39  | 2700 | 0.4356          | 0.9128   | 0.8452   | 0.9386     | 0.3333     | 0.9058     | 0.8462     | 0.8265     | 0.8        | 0.8562     | 0.7429     | 0.7500     | 0.9893     | 0.9346      | 0.9630      | 0.9322      | 0.8936      | 0.8205      | 0.8372      | 0.7765      | 0.9651      | 0.9574      | 0.8350      |
| 0.2317        | 2.65  | 3000 | 0.4294          | 0.9137   | 0.8217   | 0.9365     | 0.3333     | 0.9102     | 0.625      | 0.8243     | 0.8293     | 0.875      | 0.8056     | 0.3333     | 0.9893     | 0.9444      | 0.9630      | 0.9284      | 0.8980      | 0.8478      | 0.8471      | 0.7816      | 0.9651      | 0.9574      | 0.8400      |
| 0.1816        | 2.92  | 3300 | 0.4279          | 0.9128   | 0.8384   | 0.9406     | 0.3333     | 0.9127     | 0.6471     | 0.8254     | 0.8293     | 0.8767     | 0.7606     | 0.7500     | 0.9878     | 0.9444      | 0.9630      | 0.9265      | 0.8980      | 0.8444      | 0.8132      | 0.7778      | 0.9651      | 0.9574      | 0.8148      |
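
The metric code is not included in the card, but a `compute_metrics` function along these lines, using scikit-learn, would produce the accuracy, macro-F1, and per-class F1 columns reported above (the metric key names are assumptions):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Accuracy, macro F1, and one F1 score per class (20 classes assumed)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # labels=range(20) keeps 20 entries even if a class is absent from a batch
    per_class = f1_score(labels, preds, average=None, labels=list(range(20)))
    metrics = {
        "accuracy": accuracy_score(labels, preds),
        "f1_macro": f1_score(labels, preds, average="macro"),
    }
    metrics.update({f"f1_class_{i}": s for i, s in enumerate(per_class)})
    return metrics
```

Passing this as `compute_metrics=compute_metrics` to the `Trainer` sketched above would log these values at each evaluation step.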


### Framework versions

- Transformers 4.32.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3