hanifnoerr committed
Commit ee44763 · 1 Parent(s): b97f278

update model card README.md

Files changed (1): README.md (+34 −11)
README.md CHANGED

```diff
@@ -10,27 +10,50 @@ model-index:
 results: []
 ---
 
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
 # Kemenkeu-Sentiment-Classifier
 
-This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the MoF-DAC Mini Challenge#1 dataset.
+This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.8224
-- Accuracy: 0.6857
-- F1: 0.6429
+- Loss: 0.8733
+- Accuracy: 0.64
+- F1: 0.5936
+
+## Model description
+
+More information needed
 
 ## Intended uses & limitations
 
-- This model can be used to classify text with four possible outputs [netral, tdk-relevan, negatif, and positif]
-- Only for specific cases related to the Ministry Of Finance Indonesia
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 1e-05
+- train_batch_size: 6
+- eval_batch_size: 6
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 4
 
-## Training results
+### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
-| No log | 1.0 | 394 | 0.8581 | 0.6857 | 0.6268 |
-| 0.9155 | 2.0 | 788 | 0.8224 | 0.6857 | 0.6429 |
-| 0.5238 | 3.0 | 1182 | 0.9928 | 0.6771 | 0.6459 |
-| 0.2662 | 4.0 | 1576 | 1.1901 | 0.6829 | 0.6572 |
+| 1.0146 | 1.0 | 500 | 0.8733 | 0.64 | 0.5936 |
+| 0.7047 | 2.0 | 1000 | 0.8814 | 0.634 | 0.6008 |
+| 0.5002 | 3.0 | 1500 | 0.9076 | 0.668 | 0.6446 |
+| 0.3531 | 4.0 | 2000 | 0.9730 | 0.664 | 0.6374 |
 
 
 ### Framework versions
```
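The bullets removed in this commit describe a four-way classifier over `[netral, tdk-relevan, negatif, positif]`. Turning the model's raw logits into one of those labels can be sketched as follows — the label order here is an assumption; the authoritative `id2label` mapping lives in the repo's `config.json`:

```python
import math

# Assumed label order; check id2label in the model's config.json before relying on it.
LABELS = ["netral", "tdk-relevan", "negatif", "positif"]

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Return the label whose logit (and hence probability) is highest."""
    probs = softmax(logits)
    return LABELS[max(range(len(probs)), key=probs.__getitem__)]

print(predict_label([0.1, -1.2, 2.3, 0.4]))  # → negatif (index 2 has the largest logit)
```

In practice the same post-processing is what `transformers`' `text-classification` pipeline does after the forward pass; this sketch only makes the label mapping explicit.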
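The hyperparameters added by this commit map one-to-one onto `transformers.TrainingArguments` fields. Collected as a plain dict using TrainingArguments-style names (a hypothetical reconstruction, not the author's actual training script):

```python
# Hyperparameters from the updated card, keyed by transformers.TrainingArguments
# argument names (a sketch; the author's training script is not shown in the diff).
training_args = {
    "learning_rate": 1e-05,
    "per_device_train_batch_size": 6,
    "per_device_eval_batch_size": 6,
    "seed": 42,
    "adam_beta1": 0.9,            # optimizer: Adam with betas=(0.9, 0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,        # and epsilon=1e-08
    "lr_scheduler_type": "linear",
    "num_train_epochs": 4,
}
```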