Use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="NbAiLab/nb-bert-base-pos")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base-pos")
model = AutoModelForTokenClassification.from_pretrained("NbAiLab/nb-bert-base-pos")
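With the tokenizer and model loaded directly, inference can also be run by hand. A minimal sketch, assuming PyTorch is installed; the argmax-over-logits decoding below is one common way to pick a tag per token, not the only option:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base-pos")
model = AutoModelForTokenClassification.from_pretrained("NbAiLab/nb-bert-base-pos")

# Tokenize the example sentence and return PyTorch tensors
inputs = tokenizer("Jeg heter Kjell og bor i Oslo.", return_tensors="pt")

# Forward pass without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Highest-scoring tag id for each token, mapped to its label name
predicted_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
tagged = [(tok, model.config.id2label[i.item()]) for tok, i in zip(tokens, predicted_ids)]
print(tagged)
```

Note that this prints one tag per subword token, including the special [CLS] and [SEP] tokens; the pipeline helper above hides that detail.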
Release 1.0 (November 6, 2024)

nb-bert-base-pos

Description

NB-BERT base model fine-tuned for part-of-speech (POS) tagging on the NorNE dataset.

Usage

from transformers import pipeline

pos = pipeline("token-classification", "NbAiLab/nb-bert-base-pos")
example = "Jeg heter Kjell og bor i Oslo."

pos_results = pos(example)
print(pos_results)

More details in the paper: https://arxiv.org/abs/2104.09617
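The pipeline returns one entry per subword token, so words the tokenizer splits (e.g. into WordPiece pieces marked with a leading "##") appear as several entries. A small sketch of regrouping subwords into whole words; the sample entries below are hypothetical, simplified versions of the pipeline's output dicts (real entries also carry score, index, start, and end fields), not actual model output:

```python
# Hypothetical, simplified pipeline entries for illustration only
entries = [
    {"word": "Jeg", "entity": "PRON"},
    {"word": "heter", "entity": "VERB"},
    {"word": "Kje", "entity": "PROPN"},
    {"word": "##ll", "entity": "PROPN"},
    {"word": "og", "entity": "CCONJ"},
]

def merge_subwords(entries):
    """Merge WordPiece continuations ('##...') into the preceding word."""
    merged = []
    for e in entries:
        if e["word"].startswith("##") and merged:
            word, label = merged[-1]
            merged[-1] = (word + e["word"][2:], label)
        else:
            merged.append((e["word"], e["entity"]))
    return merged

print(merge_subwords(entries))
# → [('Jeg', 'PRON'), ('heter', 'VERB'), ('Kjell', 'PROPN'), ('og', 'CCONJ')]
```

In practice the pipeline can do similar grouping itself via its aggregation_strategy argument (e.g. aggregation_strategy="simple"); the sketch above just makes the subword handling explicit.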
