raulgdp committed
Commit a6c647b · verified · 1 Parent(s): ff38fe6

Update README.md

Files changed (1)
  1. README.md +93 -93
README.md CHANGED
@@ -1,93 +1,93 @@
- ---
- library_name: transformers
- license: apache-2.0
- base_model: BSC-LT/roberta-base-bne-capitel-ner
- tags:
- - generated_from_trainer
- datasets:
- - conll2002
- metrics:
- - precision
- - recall
- - f1
- - accuracy
- model-index:
- - name: bert-finetuned-ner
-   results:
-   - task:
-       name: Token Classification
-       type: token-classification
-     dataset:
-       name: conll2002
-       type: conll2002
-       config: es
-       split: validation
-       args: es
-     metrics:
-     - name: Precision
-       type: precision
-       value: 0.8599099099099099
-     - name: Recall
-       type: recall
-       value: 0.8772977941176471
-     - name: F1
-       type: f1
-       value: 0.8685168334849864
-     - name: Accuracy
-       type: accuracy
-       value: 0.978701639744725
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # bert-finetuned-ner
-
- This model is a fine-tuned version of [BSC-LT/roberta-base-bne-capitel-ner](https://huggingface.co/BSC-LT/roberta-base-bne-capitel-ner) on the conll2002 dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0950
- - Precision: 0.8599
- - Recall: 0.8773
- - F1: 0.8685
- - Accuracy: 0.9787
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 3
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | 0.1045 | 1.0 | 521 | 0.0932 | 0.8593 | 0.8704 | 0.8648 | 0.9764 |
- | 0.0343 | 2.0 | 1042 | 0.0870 | 0.8616 | 0.8757 | 0.8686 | 0.9781 |
- | 0.019 | 3.0 | 1563 | 0.0950 | 0.8599 | 0.8773 | 0.8685 | 0.9787 |
-
-
- ### Framework versions
-
- - Transformers 4.45.1
- - Pytorch 2.4.0
- - Datasets 2.20.0
- - Tokenizers 0.20.0
 
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: BSC-LT/roberta-base-bne-capitel-ner
+ tags:
+ - generated_from_trainer
+ datasets:
+ - conll2002
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: bert-finetuned-ner
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: conll2002
+       type: conll2002
+       config: es
+       split: validation
+       args: es
+     metrics:
+     - name: Precision
+       type: precision
+       value: 0.8599099099099099
+     - name: Recall
+       type: recall
+       value: 0.8772977941176471
+     - name: F1
+       type: f1
+       value: 0.8685168334849864
+     - name: Accuracy
+       type: accuracy
+       value: 0.978701639744725
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-finetuned-ner
+
+ This is a model fine-tuned from [BSC-LT/roberta-base-bne-capitel-ner](https://huggingface.co/BSC-LT/roberta-base-bne-capitel-ner) on the conll2002 dataset.
+ It achieves excellent performance because the base model was pre-trained on Spanish text, and it reaches the following results on the evaluation set:
+ - Loss: 0.0950
+ - Precision: 0.8599
+ - Recall: 0.8773
+ - F1: 0.8685
+ - Accuracy: 0.9787
+
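A minimal inference sketch with the `transformers` token-classification pipeline, assuming the fine-tuned checkpoint is published as `raulgdp/bert-finetuned-ner` (a repository id inferred from the commit author and the model-index name, not stated in the card):

```python
from transformers import pipeline

# Assumed repo id; replace it with the actual model id if it differs.
ner = pipeline(
    "token-classification",
    model="raulgdp/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("El Museo del Prado está en Madrid."))
# Output: a list of dicts with entity_group, score, word, start, end
```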
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
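A short sketch of loading the conll2002 Spanish data referenced in the metadata above with the `datasets` library; depending on the `datasets` version, this script-based dataset may additionally require `trust_remote_code=True`:

```python
from datasets import load_dataset

# conll2002, Spanish configuration, as referenced in the model-index metadata.
raw_datasets = load_dataset("conll2002", "es")

print(raw_datasets)  # DatasetDict with train / validation / test splits
print(raw_datasets["train"].features["ner_tags"].feature.names)  # IOB2 tag names
```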
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+
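A minimal sketch of how the hyperparameters above could map onto `TrainingArguments`; the output directory and the per-epoch evaluation strategy are assumptions chosen to match the results table below, not values stated in this card:

```python
from transformers import TrainingArguments

# The Adam betas and epsilon listed above are the AdamW defaults in transformers;
# they are set explicitly here only to mirror the card.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",  # assumed output directory name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="epoch",  # assumed, to produce the per-epoch results below
)
```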
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | 0.1045 | 1.0 | 521 | 0.0932 | 0.8593 | 0.8704 | 0.8648 | 0.9764 |
+ | 0.0343 | 2.0 | 1042 | 0.0870 | 0.8616 | 0.8757 | 0.8686 | 0.9781 |
+ | 0.019 | 3.0 | 1563 | 0.0950 | 0.8599 | 0.8773 | 0.8685 | 0.9787 |
+
+
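The entity-level precision, recall and F1 in this table are the kind of scores produced by seqeval; a hedged sketch of a `compute_metrics` function in that style follows, with the conll2002 IOB2 tag list written out as an assumption:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

# Assumed IOB2 tag order for conll2002; verify against the dataset's ClassLabel names.
label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def compute_metrics(eval_preds):
    """Entity-level precision/recall/F1 plus token accuracy, as reported above."""
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    # Drop positions labelled -100 (special tokens and continuation sub-words).
    true_labels = [
        [label_list[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_predictions = [
        [label_list[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```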
+ ### Framework versions
+
+ - Transformers 4.45.1
+ - Pytorch 2.4.0
+ - Datasets 2.20.0
+ - Tokenizers 0.20.0