---
language:
- 'no'
- nb
- nn
- se
inference: false
tags:
- BERT
- GPT-BERT
- NorBERT
- Norwegian
- encoder
- decoder
license: apache-2.0
---
<img src="https://huggingface.co/ltg/norbert3-base/resolve/main/norbert.png" width=12.5%>
|
|
# NorBERT 4 small
|
The fourth generation of NorBERT models mainly improves efficiency, but also brings gains in performance and flexibility.
|
<img src="https://huggingface.co/ltg/norbert4-small/resolve/main/model_performance.png" width=100%>
|
- **Made to encode long texts**: these models were trained on 16384-token-long texts, and their sliding-window attention can generalize to even longer sequences (a short encoding sketch follows this list).
- **Fast and memory-efficient training and inference**: using FlashAttention 2 with unpadding, the new generation of NorBERT models can process long texts with ease.
- **Better performance**: higher-quality training corpora and carefully tuned training settings lead to improved performance over NorBERT 3.
- **BERT as well as GPT**: the models can function both as bidirectional encoders (BERT) and as unidirectional decoders (GPT), which makes them suitable for any downstream use.
- **Trained from scratch**: the model is trained from scratch on 600B tokens of Norwegian Bokmål, Nynorsk and Northern Sámi; we used the HPLT 2.0 corpus, FineWeb2 and Mímir Core.
- **Permissive license**: the checkpoints are distributed freely under Apache 2.0; anyone can use our models.
|
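As a minimal sketch of long-document encoding (the repeated input text and the mean-pooling step are only illustrations, and it is assumed that the custom `AutoModel` wrapper returns a standard `last_hidden_state`):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ltg/norbert4-small")
model = AutoModel.from_pretrained("ltg/norbert4-small", trust_remote_code=True)

# Any long Norwegian document; the model was trained on sequences of up to 16384 tokens
document = "Dette er et langt dokument. " * 500

encoding = tokenizer(document, return_tensors="pt")

with torch.inference_mode():
    # Assumption: the custom wrapper returns last_hidden_state of shape (1, sequence_length, hidden_size)
    hidden_states = model(**encoding).last_hidden_state

# A simple mean-pooled document embedding
document_embedding = hidden_states.mean(dim=1)
print(document_embedding.shape)
```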
> [!TIP]
> We recommend installing Flash Attention 2 and `torch.compile`-ing your models to get the highest training and inference efficiency.
|
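A minimal sketch of that setup (this assumes the custom NorBERT modeling code switches to FlashAttention 2 automatically once the `flash-attn` package is installed):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Assumption: FlashAttention 2 is picked up automatically when `flash-attn` is installed
# (pip install flash-attn); it also requires a GPU and half-precision weights
tokenizer = AutoTokenizer.from_pretrained("ltg/norbert4-small")
model = AutoModelForMaskedLM.from_pretrained(
    "ltg/norbert4-small",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
).cuda()

# Compile once; subsequent forward passes reuse the optimized kernels
model = torch.compile(model)
```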
## All sizes of the NorBERT 4 family:
- [NorBERT 4 xsmall (17M)](https://huggingface.co/ltg/norbert4-xsmall)
- [NorBERT 4 small (40M)](https://huggingface.co/ltg/norbert4-small)
- [NorBERT 4 base (149M)](https://huggingface.co/ltg/norbert4-base)
- [NorBERT 4 large (360M)](https://huggingface.co/ltg/norbert4-large)
- [NorBERT 4 xlarge (987M)](https://huggingface.co/ltg/norbert4-xlarge)
|
## Example usage (bidirectional encoding)
|
This model currently needs a custom wrapper from `modeling_norbert.py`, so you should load it with `trust_remote_code=True`.
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and the masked language model
tokenizer = AutoTokenizer.from_pretrained(
    "ltg/norbert4-small"
)
model = AutoModelForMaskedLM.from_pretrained(
    "ltg/norbert4-small",
    trust_remote_code=True
)

# Tokenize text (with a mask token inside)
input_text = tokenizer(
    f"Nå ønsker de seg en{tokenizer.mask_token} bolig.",
    return_tensors="pt"
)

# Inference
with torch.inference_mode():
    output_p = model(**input_text)

# Unmask the text
output_text = torch.where(
    input_text.input_ids == tokenizer.mask_token_id,
    output_p.logits.argmax(-1),
    input_text.input_ids
)

# Decoding; should output: '<s>Nå ønsker de seg en ny bolig.'
print(tokenizer.decode(output_text[0].tolist()))
```
|
## Example usage (text generation)
|
NorBERT now also supports unidirectional text decoding, so it can generate text like any other GPT model:
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the causal (GPT-style) language model
tokenizer = AutoTokenizer.from_pretrained(
    "ltg/norbert4-small"
)
model = AutoModelForCausalLM.from_pretrained(
    "ltg/norbert4-small",
    trust_remote_code=True
)

# Define a zero-shot translation prompt template
prompt = """Engelsk: {0}
Bokmål:"""

# Define the tokens that should end the generation (any token containing a newline)
eos_token_ids = [
    token_id
    for token_id in range(tokenizer.vocab_size)
    if '\n' in tokenizer.decode([token_id])
]

# Greedy generation function
@torch.inference_mode()
def generate(text):
    text = prompt.format(text)
    input_ids = tokenizer(text, return_tensors='pt').input_ids
    prediction = model.generate(
        input_ids,
        max_new_tokens=64,
        do_sample=False,
        eos_token_id=eos_token_ids
    )
    return tokenizer.decode(prediction[0, input_ids.size(1):]).strip()

# Example usage
print(generate("I'm a model that can generate text!"))
```
|
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForCausalLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
|
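For example, loading the model as a sequence classifier works the same way as above (a minimal sketch; the three-label setup is only an illustration, and it is assumed that the custom configuration accepts `num_labels` like the standard Hugging Face heads):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ltg/norbert4-small")
model = AutoModelForSequenceClassification.from_pretrained(
    "ltg/norbert4-small",
    trust_remote_code=True,
    num_labels=3  # hypothetical label set; the classification head is randomly initialized
)

inputs = tokenizer("Dette var en veldig fin film!", return_tensors="pt")
with torch.inference_mode():
    logits = model(**inputs).logits  # shape: (1, num_labels)
print(logits)
```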
## Contact
|
David Samuel: `davisamu@uio.no`
|
## Cite us
|
```bibtex
@inproceedings{charpentier-samuel-2024-bert,
    title = "{GPT} or {BERT}: why not both?",
    author = "Charpentier, Lucas Georges Gabriel  and
      Samuel, David",
    booktitle = "The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning",
    month = nov,
    year = "2024",
    address = "Miami, FL, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.conll-babylm.24/",
    pages = "262--283"
}
```
|
```bibtex
@inproceedings{samuel-etal-2023-norbench,
    title = "{N}or{B}ench {--} A Benchmark for {N}orwegian Language Models",
    author = "Samuel, David  and
      Kutuzov, Andrey  and
      Touileb, Samia  and
      Velldal, Erik  and
      {\O}vrelid, Lilja  and
      R{\o}nningstad, Egil  and
      Sigdel, Elina  and
      Palatkina, Anna",
    booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
    month = may,
    year = "2023",
    address = "T{\'o}rshavn, Faroe Islands",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2023.nodalida-1.61",
    pages = "618--633"
}
```