How to use developer-lunark/kaidol-ner-multilingual with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="developer-lunark/kaidol-ner-multilingual")
```
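A quick end-to-end call through the pipeline helper; the example sentence and the `aggregation_strategy` value are illustrative, not from the model card:

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword pieces into whole entities
pipe = pipeline(
    "token-classification",
    model="developer-lunark/kaidol-ner-multilingual",
    aggregation_strategy="simple",
)

for ent in pipe("Barack Obama was born in Hawaii."):
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```

Each returned dict also carries `start`/`end` character offsets into the input string.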
Or load the model directly:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("developer-lunark/kaidol-ner-multilingual")
model = AutoModelForTokenClassification.from_pretrained("developer-lunark/kaidol-ner-multilingual")
```

This is a multilingual NER (Named Entity Recognition) model developed as part of the KAIdol Project.
It is based on Davlan/xlm-roberta-base-ner-hrl, fine-tuned on the WikiAnn dataset for Korean (ko), English (en), Spanish (es), and Portuguese (pt).
Entity types:

- **PER**: Person
- **ORG**: Organization
- **LOC**: Location

Training configuration:

| Parameter | Value |
|---|---|
| Epochs | 5 |
| Batch Size | 16 |
| Optimizer | AdamW |
| Learning Rate | 5e-5 |
| Loss | CrossEntropy with class weights |
| Dataset | WikiAnn (en, ko, es, pt) |
Evaluation results:

| Language | F1-macro | PER F1 | ORG F1 | LOC F1 |
|---|---|---|---|---|
| English | 0.74 | 0.84 | 0.63 | 0.76 |
| Korean | 0.43 | 0.46 | 0.30 | 0.52 |
| Spanish | TBD | TBD | TBD | TBD |
| Portuguese | TBD | TBD | TBD | TBD |
Performance on `es` and `pt` will be updated after evaluation. Korean performance is limited due to tokenization issues in WikiAnn.
Example inference:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("developer-lunark/kaidol-ner-multilingual")
tokenizer = AutoTokenizer.from_pretrained("developer-lunark/kaidol-ner-multilingual")

tokens = tokenizer("Barack Obama nació en Hawái.", return_tensors="pt")
output = model(**tokens)
```
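The model output holds per-token logits rather than label strings. A minimal sketch of decoding them with `argmax` and the model's `id2label` mapping (the example sentence is an assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("developer-lunark/kaidol-ner-multilingual")
tokenizer = AutoTokenizer.from_pretrained("developer-lunark/kaidol-ner-multilingual")

tokens = tokenizer("Barack Obama nació en Hawái.", return_tensors="pt")
with torch.no_grad():
    logits = model(**tokens).logits  # shape: (1, seq_len, num_labels)

# Pick the highest-scoring label id per token, then map ids to BIO tags
pred_ids = logits.argmax(dim=-1)[0]
for tok, pid in zip(tokenizer.convert_ids_to_tokens(tokens["input_ids"][0]), pred_ids):
    print(tok, model.config.id2label[pid.item()])
```

Note that the printed tokens are XLM-R subword pieces (including special tokens like `<s>`), so adjacent pieces of one word may repeat the same tag.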
Label mapping:

```python
{
    'O': 0,
    'B-PER': 1,
    'I-PER': 2,
    'B-ORG': 3,
    'I-ORG': 4,
    'B-LOC': 5,
    'I-LOC': 6
}
```
MIT License
Developed by the KAIdol Project Team.
For questions or collaborations, contact: developer-lunark
Base model: Davlan/xlm-roberta-base-ner-hrl