---
library_name: transformers
tags: []
---
# BERT NER Model (CoNLL-2003)

## Overview

This model is a fine-tuned version of `bert-base-cased` for Named Entity Recognition (NER).
## Task

Token Classification (Named Entity Recognition)

The model identifies the following entity types (see the label-map snippet below):

- PER (Person)
- ORG (Organization)
- LOC (Location)
- MISC (Miscellaneous)
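On disk these classes are stored as BIO tags. A quick way to confirm the exact label map is to read it from the checkpoint config; the mapping shown in the comment is an assumption about label order, not taken from this repo:

```python
from transformers import AutoConfig

# Download just the config, not the model weights
config = AutoConfig.from_pretrained("x4n4/bert_conll2003_ner")

# id2label maps class indices to BIO tags,
# e.g. {0: "O", 1: "B-PER", 2: "I-PER", ...} (exact order may differ)
print(config.id2label)
```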
## Dataset

- CoNLL-2003 (English news dataset); see the loading sketch below
- Only the NER tags were used (BIO format)
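For reference, the dataset can be pulled from the Hugging Face Hub with the `datasets` library. This is a minimal sketch: the `conll2003` dataset ID and the `tokens`/`ner_tags` column names refer to the standard Hub copy, not to anything shipped with this model.

```python
from datasets import load_dataset

# CoNLL-2003 as hosted on the Hugging Face Hub
dataset = load_dataset("conll2003")

# Each example has word-level tokens and integer-encoded BIO labels
print(dataset["train"][0]["tokens"])
print(dataset["train"][0]["ner_tags"])
```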
## Model Details

- Base model: `bert-base-cased`
- Architecture: Transformer encoder (BERT)
- Fine-tuning: Hugging Face Transformers (see the training sketch below)
- Training epochs: 3
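The training code is not part of this repository; the sketch below shows how such a fine-tune is typically wired up with `Trainer`. Everything other than the base model and the 3 epochs listed above (batch size, learning rate, dataset preparation) is an illustrative assumption.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# 9 classes: O plus B-/I- variants of PER, ORG, LOC, MISC
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=9
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

args = TrainingArguments(
    output_dir="bert_conll2003_ner",
    num_train_epochs=3,               # matches the card
    per_device_train_batch_size=16,   # assumption
    learning_rate=2e-5,               # assumption
)

# In practice, pass tokenized CoNLL-2003 splits with labels aligned to
# sub-word tokens (e.g. via tokenizer(..., is_split_into_words=True))
trainer = Trainer(model=model, args=args)  # plus train_dataset=..., eval_dataset=...
# trainer.train()
```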
## Performance

Test set results (see the evaluation sketch below):

- Precision: ~0.90
- Recall: ~0.91
- F1-score: ~0.91
- Accuracy: ~0.98
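Precision/recall/F1 for CoNLL-style NER are conventionally entity-level scores, commonly computed with the `seqeval` package. The card does not ship an evaluation script, so the following is only a sketch of that methodology, on toy tag sequences:

```python
from seqeval.metrics import classification_report, f1_score

# Gold and predicted BIO tag sequences, one list per sentence (toy data)
y_true = [["B-PER", "I-PER", "O", "B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG", "O"]]

print(f1_score(y_true, y_pred))  # entity-level F1
print(classification_report(y_true, y_pred))
```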
## Usage Example

```python
from transformers import pipeline

# "ner" is an alias for the "token-classification" pipeline task
ner = pipeline(
    "ner",
    model="x4n4/bert_conll2003_ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

text = "Barack Obama visited Google in New York"
print(ner(text))
```
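When the pipeline abstraction is too coarse (custom batching, access to raw logits), the tokenizer and model can also be loaded directly:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("x4n4/bert_conll2003_ner")
model = AutoModelForTokenClassification.from_pretrained("x4n4/bert_conll2003_ner")
```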