---
language:
- 'no'
- nn
- nb
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
base_model:
- ltg/norbert4-large
widget:
- source_sentence: En gruppe barn leker og har det gøy.
  sentences:
  - Barn leker på gresset omgitt av sterke farger.
  - Barna er sammen.
  - Barna leser bøker.
- source_sentence: En mann som kjører en rød motorsykkel nær en stor folkemengde ved noen telt.
  sentences:
  - En mann syr.
  - En mann er på motorsykkel.
  - En mann er på et skateboard.
- source_sentence: Et dukketeater bestående av mennesker som står på høye pinner.
  sentences:
  - Det er en baseballkamp på gang.
  - Hvordan dukketeater fungerer.
  - Dukkene spiser mennesker.
- source_sentence: >-
    To barn på en båt, en med en åre, og den andre på kanten med en redningsvest.
  sentences:
  - Et barn har på seg en redningsvest.
  - Voksne menn står foran en mursteinsvegg nær noe laget av metall.
  - To barn sover i sengen.
- source_sentence: To personer, en i lyse jeans og en stripete skjorte, spiller biljard.
  sentences:
  - Folk spiller biljard
  - Jentene er utendørs.
  - folk løper
datasets:
- Fremtind/all-nli-norwegian
- NbAiLab/ndla_parallel_paragraphs
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on NorBERT4-large
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: nob all nli test
      type: nob_all_nli_test
    metrics:
    - type: cosine_accuracy
      value: 0.9549999833106995
      name: Cosine Accuracy
license: apache-2.0
---

# SentenceTransformer based on NorBERT4-large

NorSBERT4-Large is a [Sentence Transformer](https://www.SBERT.net) model fine-tuned from [ltg/norbert4-large](https://huggingface.co/ltg/norbert4-large). The model maps sentences and paragraphs to a 960-dimensional dense vector space and can be used for tasks such as semantic textual similarity, semantic search, text classification, and clustering.

Note: While the fine-tuned sentence-transformer model ships with a `max_seq_length` of 75 tokens, the base model supports sequences of up to 16384 tokens, so the sequence length can be increased up to that limit (see the sketch at the end of the Usage section).

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference. Note that you should load the model with `trust_remote_code=True` because it needs a custom wrapper (see the [base model](https://huggingface.co/ltg/norbert4-large/blob/main/modeling_gptbert.py) for more details).

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Fremtind/norsbert4-large", trust_remote_code=True)

# Run inference
sentences = [
    'To personer, en i lyse jeans og en stripete skjorte, spiller biljard.',
    'Folk spiller biljard',
    'folk løper',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 960]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.7294, 0.1690],
#         [0.7294, 1.0000, 0.2412],
#         [0.1690, 0.2412, 1.0000]])
```
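For longer inputs, the truncation limit can be raised after loading via the standard Sentence Transformers `max_seq_length` attribute. A minimal sketch; the value 512 is an arbitrary example, and since the model was fine-tuned on short sentences, embedding quality on much longer texts is not guaranteed:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Fremtind/norsbert4-large", trust_remote_code=True)
print(model.max_seq_length)  # 75, inherited from fine-tuning

# Raise the truncation limit; the base model accepts up to 16384 tokens.
# Longer sequences cost more memory and compute per text.
model.max_seq_length = 512

embeddings = model.encode(["En lengre tekst som ellers ville blitt kuttet ved 75 tokens."])
print(embeddings.shape)  # [1, 960]
```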
## Evaluation

To verify the utility of our models, we evaluated them on a selection of classification and clustering tasks for Norwegian from [MTEBv2](https://embeddings-benchmark.github.io/mteb/).

The heatmap below shows the results of evaluating five sentence transformers on ten different tasks: three are sentence-transformer models we fine-tuned ([Fremtind/norsbert4-large](https://huggingface.co/Fremtind/norsbert4-large), [Fremtind/norsbert4-base](https://huggingface.co/Fremtind/norsbert4-base), [Fremtind/mmBERT-base-norwegian](https://huggingface.co/Fremtind/mmBERT-base-norwegian)), and the other two are relatively popular (and comparable) sentence-similarity models ([FFI/SimCSE-NB-BERT-large](https://huggingface.co/FFI/SimCSE-NB-BERT-large) and [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base)).

![newplot](https://cdn-uploads.huggingface.co/production/uploads/6179579cf08e328ce6c12c26/5N4zMC8AeETTRRig_UWsh.png)

We ranked the models using **Borda count** (which is used in MTEB): each task ranks the models, a model earns points according to how many of the other models it beats on that task, and the points are summed across all evaluated tasks.

| Rank | Model | Borda Points |
|:----:|:--------------------------------|:-------------:|
| 1 | **Fremtind/norsbert4-large** | **44** |
| 2 | [FFI/SimCSE-NB-BERT-large](https://huggingface.co/FFI/SimCSE-NB-BERT-large) | 40 |
| 3 | [Fremtind/norsbert4-base](https://huggingface.co/Fremtind/norsbert4-base) | 24 |
| 4 | [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base) | 15 |
| 5 | [Fremtind/mmBERT-base-norwegian](https://huggingface.co/Fremtind/mmBERT-base-norwegian) | 7 |
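As a concrete illustration of Borda counting, the toy sketch below ranks three hypothetical models on two hypothetical tasks; it uses the simple scheme where the best of n models on a task receives n-1 points, the runner-up n-2, and so on. MTEB's actual implementation differs in details such as tie handling, so the numbers here are purely illustrative:

```python
# Toy Borda count: hypothetical scores for three models on two tasks.
scores = {
    "task_a": {"model_1": 0.81, "model_2": 0.78, "model_3": 0.70},
    "task_b": {"model_1": 0.61, "model_2": 0.69, "model_3": 0.66},
}

points = {model: 0 for model in next(iter(scores.values()))}
for task_scores in scores.values():
    # Best model on the task gets n-1 points, next gets n-2, ..., worst gets 0.
    ranked = sorted(task_scores, key=task_scores.get, reverse=True)
    for rank, model in enumerate(ranked):
        points[model] += len(ranked) - 1 - rank

print(sorted(points.items(), key=lambda kv: kv[1], reverse=True))
# [('model_2', 3), ('model_1', 2), ('model_3', 1)]
```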
## Training Details

The model was fine-tuned in two stages.

In the **first stage**, it was trained in an unsupervised manner following the SimCSE method (Gao et al., 2021). In this setup, the same sentence is encoded twice; because dropout is active in training mode, the model produces two slightly different embeddings. The training objective is to minimize the distance between these embeddings while maximizing the distance to the embeddings of the other sentences in the same batch.

For this stage, we created sentence pairs in three categories from the [NDLA Parallel Paragraphs dataset](https://huggingface.co/datasets/NbAiLab/ndla_parallel_paragraphs): (Bokmål, Bokmål), (Nynorsk, Nynorsk), and (Bokmål, Nynorsk). In the (Bokmål, Bokmål) and (Nynorsk, Nynorsk) pairs, each sentence was paired with itself, leveraging dropout to create embedding variation. In the (Bokmål, Nynorsk) category, cross-lingual sentence pairs were used to align the model’s semantic representations across the two language varieties.

In the **second stage**, the model was further fine-tuned on a natural language inference dataset, namely [Fremtind/all-nli-norwegian](https://huggingface.co/datasets/Fremtind/all-nli-norwegian). The dataset is formatted as triplets (anchor, positive, negative), where the _anchor_ is the premise, the _positive_ is an entailment hypothesis, and the _negative_ is a contradiction hypothesis. The objective is to minimize the distance between the anchor and the positive while maximizing it between the anchor and the negative. This fine-tuning stage follows the 'standard' supervised fine-tuning strategy introduced in Sentence-BERT, and both stages are sketched below.
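The sketch below outlines both stages with the Sentence Transformers trainer API. The toy datasets, the column names, and the choice of `MultipleNegativesRankingLoss` for both stages are illustrative assumptions, not a record of the exact training script; the hyperparameters actually used are listed in the next section.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    losses,
)

model = SentenceTransformer("ltg/norbert4-large", trust_remote_code=True)

# Stage 1 (SimCSE-style): (anchor, positive) pairs. For (Bokmål, Bokmål) and
# (Nynorsk, Nynorsk) pairs both sides are the same sentence, so dropout alone
# differentiates the two embeddings; for (Bokmål, Nynorsk) pairs the sides are
# parallel sentences. Toy rows stand in for NbAiLab/ndla_parallel_paragraphs.
stage1 = Dataset.from_dict({
    "anchor":   ["Barna leker ute.", "Jeg liker å lese."],
    "positive": ["Barna leker ute.", "Eg likar å lese."],
})

# Stage 2: NLI triplets (anchor = premise, positive = entailment,
# negative = contradiction). Toy rows stand in for Fremtind/all-nli-norwegian.
stage2 = Dataset.from_dict({
    "anchor":   ["En mann spiller gitar."],
    "positive": ["En mann spiller et instrument."],
    "negative": ["En mann spiser middag."],
})

# In-batch negatives loss; with triplet columns the explicit negative is used
# as an additional hard negative.
loss = losses.MultipleNegativesRankingLoss(model)

for train_dataset in (stage1, stage2):
    trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
    trainer.train()
```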
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 256
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
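For completeness, a minimal sketch of how these non-default values map onto `SentenceTransformerTrainingArguments` (the output directory is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/norsbert4-large",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=256,
    num_train_epochs=1,
    warmup_ratio=0.1,
    # Avoid duplicate sentences within a batch: duplicates would act as
    # false negatives for an in-batch negatives loss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```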
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 256
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 1
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: True
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1