Use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="x4n4/bert_conll2003_ner")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("x4n4/bert_conll2003_ner")
model = AutoModelForTokenClassification.from_pretrained("x4n4/bert_conll2003_ner")
BERT NER Model (CoNLL-2003)

📌 Overview

This model is a fine-tuned version of bert-base-cased for Named Entity Recognition (NER), trained on the English CoNLL-2003 dataset.

🎯 Task

Token Classification (Named Entity Recognition)

The model identifies the following entity types (a label-inspection sketch follows the list):

  • PER (Person)
  • ORG (Organization)
  • LOC (Location)
  • MISC (Miscellaneous)
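To see the exact BIO tag set the checkpoint uses, you can read it straight off the model config. A minimal sketch, assuming the uploaded config carries the usual id2label mapping, as Transformers token-classification checkpoints normally do:

from transformers import AutoConfig

# Inspect the checkpoint's label mapping; the exact dict contents are an
# assumption until printed, but BIO tags over the four types above are
# what the card describes.
config = AutoConfig.from_pretrained("x4n4/bert_conll2003_ner")
print(config.id2label)  # e.g. {0: "O", 1: "B-PER", 2: "I-PER", ...}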

📊 Dataset

  • CoNLL-2003 (English news dataset)
  • Only the NER tags were used (BIO format); see the loading sketch after this list
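
For reference, the corpus can be pulled from the Hugging Face Hub. A minimal loading sketch, assuming the datasets library and the public conll2003 dataset (which also ships POS and chunk tags that this model ignores):

from datasets import load_dataset

ds = load_dataset("conll2003")  # splits: train / validation / test
sample = ds["train"][0]
print(sample["tokens"])    # whitespace-split words of one sentence
print(sample["ner_tags"])  # integer ids of the word-level BIO NER tags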

🧠 Model Details

  • Base model: bert-base-cased
  • Architecture: Transformer Encoder (BERT)
  • Fine-tuning: Hugging Face Transformers (a training sketch follows the list)
  • Training epochs: 3
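
The card does not include the training script, so the following is a sketch of a typical Trainer-based fine-tune rather than the author's actual code: only the epoch count comes from the details above; the batch size and learning rate are illustrative assumptions.

from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

ds = load_dataset("conll2003")
label_list = ds["train"].features["ner_tags"].feature.names  # ["O", "B-PER", ...]

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize_and_align(batch):
    # Tokenize pre-split words and align word-level BIO tags to subwords,
    # masking special tokens and subword continuations with -100.
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        labels, prev = [], None
        for wid in word_ids:
            if wid is None or wid == prev:
                labels.append(-100)       # ignored by the loss
            else:
                labels.append(tags[wid])  # first subword carries the word's tag
            prev = wid
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_ds = ds.map(tokenize_and_align, batched=True)

model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(label_list)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert_conll2003_ner",
        num_train_epochs=3,              # stated in the details above
        per_device_train_batch_size=16,  # illustrative assumption
        learning_rate=5e-5,              # illustrative assumption
    ),
    train_dataset=tokenized_ds["train"],
    eval_dataset=tokenized_ds["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()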

📈 Performance

Test set results (see the evaluation sketch after the list):

  • Precision: ~0.90
  • Recall: ~0.91
  • F1-score: ~0.91
  • Accuracy: ~0.98
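
The card does not say how these numbers were computed, but CoNLL-style NER results are conventionally entity-level metrics from seqeval; a minimal sketch of that computation on toy predictions, assuming seqeval is installed:

from seqeval.metrics import classification_report, f1_score

# One inner list of BIO tags per sentence; here the prediction misses
# the final LOC entity, so entity-level F1 drops below 1.0.
y_true = [["B-PER", "I-PER", "O", "B-ORG", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG", "O", "O"]]

print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))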

⚙️ Usage Example

from transformers import pipeline

ner = pipeline(
    "ner",
    model="x4n4/bert-conll2003-ner",
    aggregation_strategy="simple"
)

text = "Barack Obama visited Google in New York"
print(ner(text))
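
With aggregation_strategy="simple", the pipeline merges subword pieces back into whole entity spans, so each result is a dict with entity_group, score, word, and character start/end offsets; for the sentence above you would expect spans for "Barack Obama" (PER), "Google" (ORG), and "New York" (LOC).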