How to use nlpso/m2_joint_label_ocr_cmbert_iob2 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="nlpso/m2_joint_label_ocr_cmbert_iob2")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("nlpso/m2_joint_label_ocr_cmbert_iob2")
model = AutoModelForTokenClassification.from_pretrained("nlpso/m2_joint_label_ocr_cmbert_iob2")
```

This model is a version of HueyNemud/das22-10-camembert_pretrained fine-tuned for a nested NER task on a dataset of Paris trade directories.
| Abbreviation | Entity group (level) | Description |
|---|---|---|
| O | 1 & 2 | Outside of a named entity |
| PER | 1 | Person or company name |
| ACT | 1 & 2 | Person or company professional activity |
| TITREH | 2 | Military or civil distinction |
| DESC | 1 | Entry full description |
| TITREP | 2 | Professional reward |
| SPAT | 1 | Address |
| LOC | 2 | Street name |
| CARDINAL | 2 | Street number |
| FT | 2 | Geographical feature |
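Since the model emits one IOB2 tag per token, downstream code typically has to merge `B-`/`I-` tags back into entity spans. The sketch below shows one way to do that; the token and tag sequence is a hypothetical directory entry, not actual model output, and the helper name `iob2_to_spans` is invented for illustration.

```python
# Minimal sketch: grouping IOB2 tags into entity spans.
# The token/tag sequence below is a hypothetical example of a
# tagger's output on a directory entry, not real model output.
def iob2_to_spans(tokens, tags):
    """Collect (entity_type, text) spans from an IOB2 tag sequence."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always starts a new span, closing any open one.
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and tag[2:] == current[0]:
            # An I- tag continues the open span of the same entity type.
            current[1].append(token)
        else:
            # "O" (or an inconsistent I- tag) closes the open span.
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

tokens = ["Dupont", "tailleur", "rue", "Vivienne", "12"]
tags = ["B-PER", "B-ACT", "B-SPAT", "I-SPAT", "I-SPAT"]
print(iob2_to_spans(tokens, tags))
# [('PER', 'Dupont'), ('ACT', 'tailleur'), ('SPAT', 'rue Vivienne 12')]
```

For the nested (level 1 and 2) annotations, the same decoding would be applied once per level. Alternatively, the `pipeline` helper can do this grouping itself via its `aggregation_strategy` argument.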