---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: deberta_toxic_cls
  results: []
---

# deberta_toxic_cls

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3694
- Accuracy: 0.8054
- Precision: 0.7440
- Recall: 0.9942
- F1: 0.8511
- AUC: 0.8908
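
F1 here is the harmonic mean of precision and recall, and the reported values are consistent: 2 · (0.7440 · 0.9942) / (0.7440 + 0.9942) ≈ 0.8511.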

## Model description

Judging from the model name and the reported metrics, this is DeBERTa-v3-base with a binary sequence-classification head, fine-tuned to flag toxic text. Details beyond what the Trainer recorded (label definitions, language coverage) are not available in this card.

## Intended uses & limitations

The model appears intended for binary toxicity classification. Note the precision/recall balance on the evaluation set: recall is very high (0.9942) while precision is lower (0.7440), so the model catches nearly all toxic examples but also over-flags benign ones. Applications sensitive to false positives may want to tune the decision threshold rather than rely on the argmax label.
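
A minimal inference sketch using the `transformers` pipeline API; the repo id below is a hypothetical placeholder, since the checkpoint's published location is not recorded in this card:

```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual Hub id or a local path.
clf = pipeline("text-classification", model="your-username/deberta_toxic_cls")

print(clf("You are a wonderful person."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label names depend on the
# model's config (id2label), which is not recorded in this card.
```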

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 8
- total_train_batch_size: 256 (32 × 8 accumulation steps)
- optimizer: fused AdamW (`adamw_torch_fused`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
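
A sketch of the corresponding `TrainingArguments`; the values come from the list above, while `output_dir` and the evaluation cadence are assumptions not recorded in this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta_toxic_cls",   # assumed; not recorded in the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,    # effective train batch size: 32 * 8 = 256
    num_train_epochs=8,
    seed=13,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    eval_strategy="epoch",            # the results table reports one eval per epoch
)
```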

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | AUC    |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| No log        | 1.0   | 141  | 0.4441          | 0.8012   | 0.7428    | 0.9861 | 0.8473 | 0.8880 |
| No log        | 2.0   | 282  | 0.3568          | 0.8042   | 0.7453    | 0.9875 | 0.8495 | 0.8905 |
| No log        | 3.0   | 423  | 0.3691          | 0.8052   | 0.7444    | 0.9926 | 0.8508 | 0.8922 |
| 0.4062        | 4.0   | 564  | 0.3701          | 0.8054   | 0.7440    | 0.9942 | 0.8511 | 0.8908 |
| 0.4062        | 5.0   | 705  | 0.3925          | 0.8051   | 0.7436    | 0.9944 | 0.8509 | 0.8915 |
| 0.4062        | 6.0   | 846  | 0.3891          | 0.8056   | 0.7498    | 0.9793 | 0.8493 | 0.8921 |
| 0.4062        | 7.0   | 987  | 0.3860          | 0.8070   | 0.7573    | 0.9638 | 0.8482 | 0.8943 |
| 0.3208        | 8.0   | 1128 | 0.3909          | 0.8073   | 0.7603    | 0.9575 | 0.8475 | 0.8939 |
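
The metric names match what a scikit-learn-based `compute_metrics` callback would report. A plausible sketch of such a callback, assuming binary labels with the toxic class at index 1 (the actual implementation is not included in this card):

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    precision_recall_fscore_support,
    roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary"
    )
    # AUC needs the positive-class probability, not the hard prediction;
    # shift logits before exponentiating for numerical stability.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "auc": roc_auc_score(labels, probs[:, 1]),
    }
```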


### Framework versions

- Transformers 4.57.1
- PyTorch 2.8.0+cu129
- Datasets 4.4.1
- Tokenizers 0.22.1