---
library_name: transformers
tags:
- ner
- named-entity-recognition
- token-classification
- pytorch
- transformers
- bert
- conll2003
- nlp
- fine-tuning
datasets:
- eriktks/conll2003
language:
- en
metrics:
- seqeval
base_model:
- google-bert/bert-base-uncased
pipeline_tag: token-classification
---
# BERT NER – Fine-tuned Named Entity Recognition Model

**Model:** `ELHACHYMI/bert-ner`
**Base model:** `bert-base-uncased`
**Task:** Token Classification – Named Entity Recognition (NER)
**Dataset:** CoNLL-2003 (English)

---
## Model Overview

This model is a fine-tuned version of **BERT Base Uncased** on the **CoNLL-2003 Named Entity Recognition (NER)** dataset.
It predicts the following entity types:

- **PER** – Person
- **ORG** – Organization
- **LOC** – Location
- **MISC** – Miscellaneous
- **O** – Outside any entity

The model is suitable for **information extraction**, **document understanding**, **chatbot entity detection**, and **structured text processing**.

---

## Labels

The model uses the standard **IOB2** tagging scheme:

| ID | Label  |
|----|--------|
| 0  | O      |
| 1  | B-PER  |
| 2  | I-PER  |
| 3  | B-ORG  |
| 4  | I-ORG  |
| 5  | B-LOC  |
| 6  | I-LOC  |
| 7  | B-MISC |
| 8  | I-MISC |
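
The same mapping is stored with the checkpoint, so it can be inspected programmatically. The snippet below is a minimal sketch, assuming the checkpoint exposes the standard `id2label` / `label2id` fields that `transformers` token-classification models normally ship in `config.json`:

```python
from transformers import AutoConfig

# Inspect the label mapping bundled with the checkpoint
# (assumes the standard id2label / label2id fields are present).
config = AutoConfig.from_pretrained("ELHACHYMI/bert-ner")
print(config.id2label)   # expected: {0: "O", 1: "B-PER", 2: "I-PER", ...}
print(config.label2id)
```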

---

## How to Load the Model

### Using Hugging Face Pipeline

```python
from transformers import pipeline

ner = pipeline("ner", model="ELHACHYMI/bert-ner", aggregation_strategy="simple")

text = "Bill Gates founded Microsoft in the United States."
print(ner(text))
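# Approximate expected output with aggregation_strategy="simple": a list of
# dicts with keys like "entity_group", "score", "word", "start", "end", e.g.
# [{"entity_group": "PER", "word": "bill gates", ...},
#  {"entity_group": "ORG", "word": "microsoft", ...},
#  {"entity_group": "LOC", "word": "united states", ...}]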