SentenceTransformer based on NorBERT4-large
NorSBERT4-Large is a Sentence Transformer model fine-tuned from ltg/norbert4-large.
The model maps sentences (and paragraphs) to a 960-dimensional dense vector space and can be used for semantic textual similarity, semantic search,
text classification, clustering, and other tasks.
Note: the fine-tuned sentence-transformer model is configured with a max_seq_length of 75 tokens, but this limit does not come from the base model.
The sequence length can therefore be increased up to 16384 tokens, the maximum supported by the base model.
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference. Note that you should load the model with trust_remote_code=True because it needs a custom wrapper (see the base model for more details).
from sentence_transformers import SentenceTransformer
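# Load the model from the Hub; trust_remote_code=True is needed for the base model's custom wrapper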
model = SentenceTransformer("Fremtind/norsbert4-large", trust_remote_code=True)
sentences = [
    'To personer, en i lyse jeans og en stripete skjorte, spiller biljard.',
    'Folk spiller biljard',
    'folk løper',
]
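# Encode the sentences into 960-dimensional embeddings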
embeddings = model.encode(sentences)
print(embeddings.shape)
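# Compute the 3x3 matrix of pairwise similarity scores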
similarities = model.similarity(embeddings, embeddings)
print(similarities)
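If longer inputs are needed, the 75-token limit mentioned in the note above can be raised on the loaded model. A minimal sketch, assuming the model object from the snippet above (16384 is the maximum reported for the base model):
# Raise the sequence length from the fine-tuning default (75) to the base model's maximum
model.max_seq_length = 16384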
Evaluation
To verify the utility of our models, we evaluated them on a selection of classification and clustering tasks for Norwegian from MTEBv2.
The heatmap below shows the results of evaluating five sentence-transformer models on ten different tasks:
three are models we have fine-tuned
(Fremtind/norsbert4-large, Fremtind/norsbert4-base, Fremtind/mmBERT-base-norwegian),
and the other two are relatively popular (and comparable) sentence-similarity models (FFI/SimCSE-NB-BERT-large and NbAiLab/nb-sbert-base).

We ranked the models using a Borda count (as used in MTEB): on each task a model receives points according to how many of the other models it outperforms, and these points are summed across all evaluated tasks.
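As an illustration of the scheme (the scores below are made up, not our actual results), each model earns one point per task for every model it beats on that task, and the points are summed:
# Hypothetical per-task scores for three models; the real evaluation covered five models and ten tasks.
scores = {
    "model_a": [0.71, 0.64],
    "model_b": [0.69, 0.66],
    "model_c": [0.60, 0.58],
}

borda = {name: 0 for name in scores}
num_tasks = len(next(iter(scores.values())))
for task in range(num_tasks):
    for name, vals in scores.items():
        # One point for every other model this model outperforms on the task.
        borda[name] += sum(vals[task] > other[task] for other in scores.values())

print(borda)  # {'model_a': 3, 'model_b': 3, 'model_c': 0}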
Training Details
The model was fine-tuned in two stages.
In the first stage, it was trained in an unsupervised manner following the SimCSE method (Gao et al., 2021). In this setup, the same sentence is encoded twice, and due to dropout (in training mode), the model produces two slightly different embeddings. The training objective is to minimize the distance between these embeddings while maximizing the distance to embeddings of other sentences in the same batch.
For this stage, we created sentence pairs in three categories from the NDLA Parallel Paragraphs dataset: (Bokmål, Bokmål), (Nynorsk, Nynorsk), and (Bokmål, Nynorsk). In the (Bokmål, Bokmål) and (Nynorsk, Nynorsk) pairs, each sentence was paired with itself, leveraging dropout to create embedding variation. In the (Bokmål, Nynorsk) category, cross-lingual sentence pairs were used to align the model’s semantic representations across the two language varieties.
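A minimal sketch of how such pairs can be assembled and trained with the Sentence Transformers trainer; the field names and the use of MultipleNegativesRankingLoss with in-batch negatives are assumptions for illustration, not the exact training code:
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Hypothetical parallel paragraphs; the real pairs come from the NDLA Parallel Paragraphs dataset.
parallel = [
    {"bokmaal": "Folk spiller biljard.", "nynorsk": "Folk spelar biljard."},
]

pairs = []
for row in parallel:
    pairs.append({"anchor": row["bokmaal"], "positive": row["bokmaal"]})  # (Bokmål, Bokmål): same text twice, dropout adds variation
    pairs.append({"anchor": row["nynorsk"], "positive": row["nynorsk"]})  # (Nynorsk, Nynorsk)
    pairs.append({"anchor": row["bokmaal"], "positive": row["nynorsk"]})  # (Bokmål, Nynorsk): cross-variety alignment

train_dataset = Dataset.from_list(pairs)
model = SentenceTransformer("ltg/norbert4-large", trust_remote_code=True)
loss = MultipleNegativesRankingLoss(model)  # pulls each pair together, uses the rest of the batch as negatives

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()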
In the second stage, the model was further fine-tuned on a natural language inference dataset, namely Fremtind/all-nli-norwegian. The dataset is formatted as triplets (anchor, positive, negative), where the anchor is the premise, the positive is an entailment hypothesis, and the negative is a contradiction hypothesis. The objective is to minimize the distance between the anchor and positive while maximizing it between the anchor and negative. This fine-tuning stage follows the 'standard' supervised fine-tuning strategy introduced in Sentence-BERT.
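A corresponding sketch of the second stage, reusing the non-default hyperparameters listed below; the dataset split name and the checkpoint path are assumptions:
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# (anchor, positive, negative) triplets; the split name is assumed here.
train_dataset = load_dataset("Fremtind/all-nli-norwegian", split="train")

# Start from the model produced by the unsupervised SimCSE stage (path is a placeholder).
model = SentenceTransformer("path/to/stage1-checkpoint", trust_remote_code=True)
loss = MultipleNegativesRankingLoss(model)  # anchor and positive pulled together; the explicit negative and in-batch sentences pushed apart

args = SentenceTransformerTrainingArguments(
    output_dir="norsbert4-large-nli",
    num_train_epochs=1,
    per_device_train_batch_size=512,
    warmup_ratio=0.1,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate texts becoming false in-batch negatives
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()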
Training Hyperparameters
Non-Default Hyperparameters
eval_strategy: steps
per_device_train_batch_size: 512
per_device_eval_batch_size: 256
num_train_epochs: 1
warmup_ratio: 0.1
batch_sampler: no_duplicates
All Hyperparameters
overwrite_output_dir: False
do_predict: False
eval_strategy: steps
prediction_loss_only: True
per_device_train_batch_size: 512
per_device_eval_batch_size: 256
per_gpu_train_batch_size: None
per_gpu_eval_batch_size: None
gradient_accumulation_steps: 1
eval_accumulation_steps: None
torch_empty_cache_steps: None
learning_rate: 5e-05
weight_decay: 0.0
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
max_grad_norm: 1.0
num_train_epochs: 1
max_steps: -1
lr_scheduler_type: linear
lr_scheduler_kwargs: {}
warmup_ratio: 0.1
warmup_steps: 0
log_level: passive
log_level_replica: warning
log_on_each_node: True
logging_nan_inf_filter: True
save_safetensors: True
save_on_each_node: False
save_only_model: False
restore_callback_states_from_checkpoint: False
no_cuda: False
use_cpu: False
use_mps_device: False
seed: 42
data_seed: None
jit_mode_eval: False
use_ipex: False
bf16: False
fp16: False
fp16_opt_level: O1
half_precision_backend: auto
bf16_full_eval: False
fp16_full_eval: False
tf32: None
local_rank: 1
ddp_backend: None
tpu_num_cores: None
tpu_metrics_debug: False
debug: []
dataloader_drop_last: True
dataloader_num_workers: 0
dataloader_prefetch_factor: None
past_index: -1
disable_tqdm: False
remove_unused_columns: True
label_names: None
load_best_model_at_end: False
ignore_data_skip: False
fsdp: []
fsdp_min_num_params: 0
fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
fsdp_transformer_layer_cls_to_wrap: None
accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
parallelism_config: None
deepspeed: None
label_smoothing_factor: 0.0
optim: adamw_torch
optim_args: None
adafactor: False
group_by_length: False
length_column_name: length
ddp_find_unused_parameters: None
ddp_bucket_cap_mb: None
ddp_broadcast_buffers: False
dataloader_pin_memory: True
dataloader_persistent_workers: False
skip_memory_metrics: True
use_legacy_prediction_loop: False
push_to_hub: False
resume_from_checkpoint: None
hub_model_id: None
hub_strategy: every_save
hub_private_repo: None
hub_always_push: False
hub_revision: None
gradient_checkpointing: False
gradient_checkpointing_kwargs: None
include_inputs_for_metrics: False
include_for_metrics: []
eval_do_concat_batches: True
fp16_backend: auto
push_to_hub_model_id: None
push_to_hub_organization: None
mp_parameters:
auto_find_batch_size: False
full_determinism: False
torchdynamo: None
ray_scope: last
ddp_timeout: 1800
torch_compile: False
torch_compile_backend: None
torch_compile_mode: None
include_tokens_per_second: False
include_num_input_tokens_seen: False
neftune_noise_alpha: None
optim_target_modules: None
batch_eval_metrics: False
eval_on_start: False
use_liger_kernel: False
liger_kernel_config: None
eval_use_gather_object: False
average_tokens_across_devices: True
prompts: None
batch_sampler: no_duplicates
multi_dataset_batch_sampler: proportional
router_mapping: {}
learning_rate_mapping: {}
Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1