Dc-4nderson committed
Commit d40e34a · verified · 1 parent: 60815b3

V1.2 model card update

Files changed (1):
  1. README.md +19 -27
README.md CHANGED
@@ -2,7 +2,6 @@
  library_name: transformers
  tags:
  - tone
- license: mit
  datasets:
  - Dc-4nderson/tone_dataset
  language:
@@ -33,9 +32,7 @@ motivational

  informative

- neutral
-
- negative

  📊 Dataset

@@ -48,37 +45,36 @@ Data includes first-person and third-person statements, anecdotes, factual notes

  Base model: distilbert-base-uncased

- Optimizer: AdamW (lr=5e-5)

  Batch size: 16

- Epochs: 8

  Loss: CrossEntropy

  Metrics: Accuracy + Weighted F1

  📈 Validation Metrics
- Epoch  Training Loss  Validation Loss  Accuracy  F1
- 1      No log         0.484719        0.894161  0.895220
- 2      No log         0.264668        0.923358  0.923200
- 3      No log         0.243101        0.930657  0.930599
- 4      No log         0.302434        0.916058  0.918166
- 5      No log         0.305320        0.923358  0.923836
- 6      No log         0.294621        0.916058  0.916176
- 7      No log         0.303021        0.919708  0.919583
- 8      0.215900       0.298230        0.916058  0.915722

  Final Training Summary:

- TrainOutput(global_step=552, training_loss=0.1959800598198089,
- metrics={
-   'train_runtime': 39.2397,
-   'train_samples_per_second': 223.244,
-   'train_steps_per_second': 14.067,
-   'total_flos': 290134644572160.0,
-   'train_loss': 0.1959800598198089,
-   'epoch': 8.0
  })

  💻 Usage
@@ -94,10 +90,6 @@ Output:

  [{'label': 'uplifting'}]

- 🔖 License
-
- Apache-2.0
-
  👥 Maintainer

  Dequan Anderson/ Dc-4nderson
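Both the old and the new metrics tables score the model with "Accuracy + Weighted F1". A minimal plain-Python sketch of what those two numbers mean (the helper names are illustrative; the card's actual evaluation code is not shown in this diff):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the reference labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for label, count in support.items():
        tp = sum(t == p == label for t, p in zip(y_true, y_pred))
        pred_pos = sum(p == label for p in y_pred)
        precision = tp / pred_pos if pred_pos else 0.0
        recall = tp / count
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (count / total) * f1
    return score

# Toy labels using tone names from the card (not real evaluation data)
y_true = ["uplifting", "informative", "optimistic", "informative"]
y_pred = ["uplifting", "informative", "informative", "informative"]
print(accuracy(y_true, y_pred))            # 0.75
print(round(weighted_f1(y_true, y_pred), 4))  # 0.65
```

Because the weighting follows class support, a model that ignores a rare tone class is penalized less in weighted F1 than in macro F1, which is why the card's accuracy and F1 columns track each other closely.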
 
  library_name: transformers
  tags:
  - tone
  datasets:
  - Dc-4nderson/tone_dataset
  language:

  informative

+ optimistic

  📊 Dataset

  Base model: distilbert-base-uncased

+ Optimizer: AdamW (lr=2e-5)

  Batch size: 16

+ Epochs: 5

  Loss: CrossEntropy

  Metrics: Accuracy + Weighted F1

  📈 Validation Metrics
+ Epoch  Training Loss  Validation Loss  Accuracy  F1
+ 1      No log         1.260710        0.801242  0.784157
+ 2      No log         0.777540        0.869565  0.869093
+ 3      No log         0.577972        0.869565  0.868584
+ 4      No log         0.481008        0.900621  0.900356
+ 5      No log         0.452635        0.900621  0.900356
+

  Final Training Summary:

+ TrainOutput(
+   global_step=205,
+   training_loss=0.8436699843988186,
+   metrics={'train_runtime': 17.74,
+   'train_samples_per_second': 181.229,
+   'train_steps_per_second': 11.556,
+   'total_flos': 106480165436160.0,
+   'train_loss': 0.8436699843988186,
+   'epoch': 5.0
  })

  💻 Usage

  [{'label': 'uplifting'}]

  👥 Maintainer

  Dequan Anderson/ Dc-4nderson
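The v1.2 TrainOutput figures are internally consistent, which is a quick way to sanity-check a model card after an edit like this. The arithmetic below uses only numbers stated in the card, except the batch size (16, taken from the training setup) and the derived dataset size of roughly 643 examples, which is an inference rather than a stated fact:

```python
import math

# Figures copied from the card's v1.2 Final Training Summary
global_step = 205
train_runtime = 17.74        # seconds
steps_per_second = 11.556
samples_per_second = 181.229
epochs = 5.0
batch_size = 16              # from the card's training setup

# steps/sec is total optimizer steps over wall-clock time
assert abs(global_step / train_runtime - steps_per_second) < 0.01

# Recover the approximate per-epoch training-set size from sample throughput
samples_seen = samples_per_second * train_runtime
train_set_size = samples_seen / epochs
print(round(train_set_size))  # 643

# ~643 examples at batch size 16 -> ceil(643/16) = 41 steps/epoch, 205 total
assert math.ceil(train_set_size / batch_size) * epochs == global_step
```

The same check applied to the removed v1.1 numbers (552 steps over 39.24 s at 14.067 steps/s) also balances, so the diff swaps one self-consistent run for another rather than mixing figures from different runs.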