jamie613 committed (verified)
Commit 8e194f3 · Parent(s): 34901f6

Update README.md


correct the evaluation scores.

Files changed (1): README.md (+20 −18)
README.md CHANGED

@@ -11,6 +11,8 @@ metrics:
 model-index:
 - name: custom_BERT_NER
   results: []
+datasets:
+- jamie613/custom_NER
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -20,29 +22,25 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2578
-- Perf P: 0.8161
-- Perf R: 0.9861
-- Inst P: 0.8923
-- Inst R: 0.9062
-- Comp P: 0.9537
-- Comp R: 0.8655
-- Precision: 0.8495
-- Recall: 0.8432
-- F1: 0.8463
-- Accuracy: 0.9470
+- Loss: 0.207071
+- Perf P: 0.829268
+- Perf R: 0.944444
+- Inst P: 0.933333
+- Inst R: 0.875000
+- Comp P: 0.962617
+- Comp R: 0.865546
+- Precision: 0.862745
+- Recall: 0.846154
+- F1: 0.854369
+- Accuracy: 0.952260
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
+This model identifies the performers, instrumentation, and composers of the music played in a concert.
 
 ## Training and evaluation data
 
-More information needed
+This model is trained and evaluated on a custom dataset.
 
 ## Training procedure
 
@@ -56,6 +54,10 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 50
+- metric_for_best_model = 'eval_f1'
+- greater_is_better = True
+- load_best_model_at_end = True
+- early_stopping_patience = 3
 
 ### Training results
 
@@ -78,4 +80,4 @@ The following hyperparameters were used during training:
 - Transformers 4.40.0
 - Pytorch 2.2.1+cu121
 - Datasets 2.19.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
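As a sanity check on the corrected scores: the updated F1 should be the harmonic mean of the updated overall precision and recall, which it is to six decimal places. A minimal stdlib sketch, using the values from the new card:

```python
# Check that the corrected F1 is the harmonic mean of the
# corrected overall precision and recall from the model card.
precision = 0.862745
recall = 0.846154

# F1 = 2PR / (P + R)
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 6))  # → 0.854369, matching the card's reported F1
```

The same identity does not tie the per-entity pairs (Perf, Inst, Comp) to the overall scores, since those are computed per label before micro-averaging.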