---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-Reflections-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
  results: []
---

# roberta-Reflections-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current

This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4328
- Accuracy: 0.8678
- Precision: 0.4273
- Recall: 0.5402
- F1: 0.4772
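
For reference, a minimal inference sketch using the 🤗 Transformers `pipeline` API is shown below. The repository ID and the label names are assumptions (the card does not state where the checkpoint is hosted or how the classification head is labeled), so adjust them to the actual deployment:

```python
from transformers import pipeline

# Hypothetical repo ID; replace with the actual location of this checkpoint.
model_id = "roberta-Reflections-goodareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current"

# Load the fine-tuned classifier: a sequence-classification head on roberta-large.
classifier = pipeline("text-classification", model=model_id)

# Score a single utterance; the returned label names depend on the head's config.
print(classifier("It sounds like you felt really unsupported at work."))
```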

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 9.49118803819061e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
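
Expressed as 🤗 Transformers `TrainingArguments`, these settings correspond roughly to the sketch below. The `output_dir` and the per-epoch evaluation cadence are assumptions (the results table reports one validation row per epoch), not values taken from the card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-reflections-goodareas",  # hypothetical path
    learning_rate=9.49118803819061e-06,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    eval_strategy="epoch",  # assumption: the table shows per-epoch validation
)
```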

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3453        | 1.0   | 80   | 0.2368          | 0.8845   | 0.4444    | 0.1379 | 0.2105 |
| 0.2686        | 2.0   | 160  | 0.1995          | 0.8883   | 0.0       | 0.0    | 0.0    |
| 0.2467        | 3.0   | 240  | 0.2582          | 0.8755   | 0.4561    | 0.5977 | 0.5174 |
| 0.2346        | 4.0   | 320  | 0.1663          | 0.9012   | 0.6923    | 0.2069 | 0.3186 |
| 0.2153        | 5.0   | 400  | 0.1441          | 0.9037   | 0.8       | 0.1839 | 0.2991 |
| 0.2003        | 6.0   | 480  | 0.2784          | 0.8267   | 0.3571    | 0.6897 | 0.4706 |
| 0.1806        | 7.0   | 560  | 0.1637          | 0.8999   | 0.5495    | 0.5747 | 0.5618 |
| 0.1477        | 8.0   | 640  | 0.2062          | 0.8639   | 0.4275    | 0.6437 | 0.5138 |
| 0.1234        | 9.0   | 720  | 0.2175          | 0.8626   | 0.4167    | 0.5747 | 0.4831 |
| 0.1116        | 10.0  | 800  | 0.1914          | 0.8845   | 0.4810    | 0.4368 | 0.4578 |
| 0.0959        | 11.0  | 880  | 0.3313          | 0.8485   | 0.3916    | 0.6437 | 0.4870 |
| 0.0933        | 12.0  | 960  | 0.3027          | 0.8575   | 0.4048    | 0.5862 | 0.4789 |
| 0.0796        | 13.0  | 1040 | 0.3267          | 0.8575   | 0.4032    | 0.5747 | 0.4739 |
| 0.0688        | 14.0  | 1120 | 0.2958          | 0.8819   | 0.4731    | 0.5057 | 0.4889 |
| 0.0723        | 15.0  | 1200 | 0.4122          | 0.8575   | 0.4032    | 0.5747 | 0.4739 |
| 0.048         | 16.0  | 1280 | 0.5274          | 0.8447   | 0.3851    | 0.6552 | 0.4851 |
| 0.0504        | 17.0  | 1360 | 0.5241          | 0.8562   | 0.4031    | 0.5977 | 0.4815 |
| 0.0353        | 18.0  | 1440 | 0.4845          | 0.8601   | 0.4098    | 0.5747 | 0.4785 |
| 0.0485        | 19.0  | 1520 | 0.5141          | 0.8562   | 0.4031    | 0.5977 | 0.4815 |
| 0.0481        | 20.0  | 1600 | 0.4328          | 0.8678   | 0.4273    | 0.5402 | 0.4772 |
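
The accuracy, precision, recall, and F1 columns above are characteristic of a binary-classification `compute_metrics` hook. A sketch of such a hook follows, assuming scikit-learn and a binary positive class (the epoch-2 row of all zeros is consistent with `average="binary"` and `zero_division=0`); this is illustrative, not the exact function used:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # average="binary" scores only the positive class, matching the columns above.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```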


### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0