NorT5 base
The official release of a new generation of NorT5 language models, described in the paper NorBench — A Benchmark for Norwegian Language Models. Please read the paper to learn more details about the model.
Other sizes:
Encoder-only NorBERT siblings:
Example usage
This model currently requires a custom wrapper from modeling_nort5.py, so you need to load it with trust_remote_code=True.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ltg/nort5-base", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ltg/nort5-base", trust_remote_code=True)
# MASKED LANGUAGE MODELING
sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er: å[MASK_0]."
encoding = tokenizer(sentence)
input_tensor = torch.tensor([encoding.input_ids])
output_tensor = model.generate(input_tensor, decoder_start_token_id=7, eos_token_id=8)
tokenizer.decode(output_tensor.squeeze(), skip_special_tokens=True)
# should output: ' varme opp et rom.'
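The [MASK_0] sentinel in the example above follows the T5-style span-corruption format: each masked span is replaced by a numbered sentinel in the encoder input, and the decoder reproduces the sentinels followed by the missing text. A minimal plain-Python sketch of how such input/target pairs are formed (an illustration of the general T5 convention; the corrupt_spans helper is hypothetical and not NorT5's exact preprocessing):

```python
def corrupt_spans(tokens, spans):
    """Replace the given spans with sentinel tokens.

    tokens: list of words; spans: sorted, non-overlapping (start, end) pairs.
    Returns (source, target) strings in the T5 denoising format.
    Illustrative only -- not the official NorT5 preprocessing.
    """
    source, target = [], []
    prev_end = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"[MASK_{i}]"
        source.extend(tokens[prev_end:start])  # keep text before the span
        source.append(sentinel)                # span replaced by sentinel
        target.append(sentinel)                # decoder emits sentinel...
        target.extend(tokens[start:end])       # ...then the removed words
        prev_end = end
    source.extend(tokens[prev_end:])
    return " ".join(source), " ".join(target)

src, tgt = corrupt_spans("å varme opp et rom".split(), [(1, 5)])
print(src)  # å [MASK_0]
print(tgt)  # [MASK_0] varme opp et rom
```

This is why the generation in the example starts from a sentinel and stops at the end-of-span token: the decoder output is the target side of such a pair.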
# PREFIX LANGUAGE MODELING
# you need to fine-tune this model or use a `nort5-{size}-lm` model, which is fine-tuned on prefix language modeling
sentence = "Brukseksempel: Elektrisk oppvarming. Definisjonen på ordet oppvarming er (Wikipedia) "
encoding = tokenizer(sentence)
input_tensor = torch.tensor([encoding.input_ids])
output_tensor = model.generate(input_tensor, max_new_tokens=50, num_beams=4, do_sample=False)
tokenizer.decode(output_tensor.squeeze())
# should output: [BOS]ˈoppvarming, det vil si at det skjer en endring i temperaturen i et medium, f.eks. en ovn eller en radiator, slik at den blir varmere eller kaldere, eller at den blir varmere eller kaldere, eller at den blir
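As the comment above notes, prefix language modeling requires fine-tuning. A common way to build training data for it is to cut each document into a (prefix, continuation) pair and train the model to generate the continuation from the encoded prefix. A rough sketch of the data preparation (the make_prefix_pair helper and random split point are illustrative assumptions, not the recipe used for the `nort5-{size}-lm` models):

```python
import random

def make_prefix_pair(words, rng):
    """Split a tokenized document into (prefix, continuation) at a random point.

    words: list of tokens; the cut is sampled so both sides are non-empty.
    Illustrative only -- not the official NorT5 fine-tuning recipe.
    """
    cut = rng.randint(1, len(words) - 1)
    return " ".join(words[:cut]), " ".join(words[cut:])

rng = random.Random(0)
prefix, continuation = make_prefix_pair(
    "Elektrisk oppvarming er oppvarming med elektrisitet".split(), rng
)
# `prefix` is tokenized as the encoder input; `continuation` becomes the
# decoder labels for the standard seq2seq cross-entropy loss.
```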
The following classes are currently implemented: AutoModel, AutoModelForSeq2SeqLM.
Cite us
@inproceedings{samuel-etal-2023-norbench,
title = "{N}or{B}ench {--} A Benchmark for {N}orwegian Language Models",
author = "Samuel, David and
Kutuzov, Andrey and
Touileb, Samia and
Velldal, Erik and
{\O}vrelid, Lilja and
R{\o}nningstad, Egil and
Sigdel, Elina and
Palatkina, Anna",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = may,
year = "2023",
address = "T{\'o}rshavn, Faroe Islands",
publisher = "University of Tartu Library",
url = "https://aclanthology.org/2023.nodalida-1.61",
pages = "618--633",
abstract = "We present NorBench: a streamlined suite of NLP tasks and probes for evaluating Norwegian language models (LMs) on standardized data splits and evaluation metrics. We also introduce a range of new Norwegian language models (both encoder and encoder-decoder based). Finally, we compare and analyze their performance, along with other existing LMs, across the different benchmark tests of NorBench.",
}