Modelos de NER-conll2002
A collection of the most recent NER models, built with current techniques.
This model is a fine-tuned version of google-bert/bert-base-multilingual-cased on the conll2002 dataset. Its per-epoch results on the evaluation set are shown in the training results table below.
Model description: More information needed
Intended uses & limitations: More information needed
Training and evaluation data: More information needed
The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.0855 | 1.0 | 1041 | 0.1281 | 0.8272 | 0.8371 | 0.8321 | 0.9688 |
| 0.0536 | 2.0 | 2082 | 0.1357 | 0.8134 | 0.8465 | 0.8296 | 0.9686 |
| 0.0333 | 3.0 | 3123 | 0.1227 | 0.8593 | 0.8713 | 0.8653 | 0.9740 |
| 0.0221 | 4.0 | 4164 | 0.1482 | 0.8474 | 0.8564 | 0.8519 | 0.9710 |
| 0.0163 | 5.0 | 5205 | 0.1521 | 0.8352 | 0.8605 | 0.8477 | 0.9715 |
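The F1 column in the table above is the harmonic mean of the Precision and Recall columns. A quick check against the epoch-3 row:

```python
def f1(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Epoch-3 row from the table: precision 0.8593, recall 0.8713
print(round(f1(0.8593, 0.8713), 4))  # 0.8653, matching the reported F1
```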
Base model: google-bert/bert-base-multilingual-cased
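CoNLL-style NER is typically scored at the entity level: a predicted span counts as correct only when both its boundaries and its type match a gold span (this is what libraries such as seqeval compute, and how precision/recall/F1 figures like those in the table are usually produced). A minimal pure-Python sketch of that scoring, using made-up toy tag sequences for illustration:

```python
def extract_spans(tags):
    """Collect (type, start, end) entity spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:          # close any open span
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and etype == tag[2:]:
            continue                        # span continues
        else:
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:                   # span running to end of sequence
        spans.append((etype, start, len(tags)))
    return spans

def entity_prf(gold_tags, pred_tags):
    """Entity-level precision, recall, F1 over exact (type, start, end) matches."""
    gold, pred = set(extract_spans(gold_tags)), set(extract_spans(pred_tags))
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: the PER span matches exactly; the last entity has the wrong type.
gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "I-PER", "O", "B-ORG"]
print(entity_prf(gold, pred))  # (0.5, 0.5, 0.5)
```

Note that a correct span with the wrong type scores zero, which is why entity-level F1 is stricter than token accuracy (the Accuracy column in the table is much higher than F1 for the same epochs).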