---
language:
- 'no'
- nb
- nn
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- loss:CachedMultipleNegativesRankingLoss
base_model:
- jhu-clsp/mmBERT-base
widget:
- source_sentence: Inne i igloen gjør den unge mannen seg klar for sitt overnattingsopphold.
  sentences:
  - Folk danser i gaten.
  - Den unge mannen gjør seg klar for sitt overnattingsopphold.
  - Den unge mannen gjør seg klar til å dra.
- source_sentence: >-
    En kvinne i rullestol snakker med vennen sin mens hun er omgitt av andre
    mennesker som går i parken.
  sentences:
  - Barna blir fotografert.
  - Kvinnen er utendørs.
  - Kvinnen spiser en pølse midt i soverommet sitt.
- source_sentence: En kvinne løper langs en steinete strand.
  sentences:
  - En mann og en kvinne ser på frukt og grønnsaker.
  - En kvinne løper.
  - En kvinne sitter ved et piknikbord nær den steinete kysten.
- source_sentence: >-
    To basketballspillere i svart og hvitt antrekk står på en basketballbane og
    snakker.
  sentences:
  - De to basketballspillerne snakker sammen.
  - Den unge gutten multitasker.
  - De to basketballspillerne sitter på benken.
- source_sentence: En mann lager et sandmaleri på gulvet.
  sentences:
  - En mann lager kunst.
  - På fornøyelsesturen var det to jenter som smilte og lo
  - En kvinne ødelegger et sandmaleri.
datasets:
- Fremtind/all-nli-norwegian
- NbAiLab/ndla_parallel_paragraphs
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on jhu-clsp/mmBERT-base
  results:
  - task:
      type: triplet
      name: Triplet
    dataset:
      name: nob all nli test
      type: nob_all_nli_test
    metrics:
    - type: cosine_accuracy
      value: 0.9509999752044678
      name: Cosine Accuracy
license: mit
---

# SentenceTransformer based on jhu-clsp/mmBERT-base

mmBERT-base-norwegian is a [Sentence Transformer](https://www.SBERT.net) model finetuned from the multilingual ModernBERT model [jhu-clsp/mmBERT-base](https://huggingface.co/jhu-clsp/mmBERT-base). It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, text classification, clustering, and other tasks.

Note: while the fine-tuned sentence-transformer model has a `max_seq_length` of 75 tokens, the base model supports sequences of up to 8,192 tokens, so the sequence length can be increased up to that limit.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [jhu-clsp/mmBERT-base](https://huggingface.co/jhu-clsp/mmBERT-base)
- **Maximum Sequence Length:** 75 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
  - [all-nli-norwegian](https://huggingface.co/datasets/Fremtind/all-nli-norwegian)
- **Language:** no
- **License:** MIT

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

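As the note above says, the shipped configuration truncates inputs to 75 tokens while the ModernBERT backbone supports up to 8,192. A minimal sketch of widening the window at load time (longer inputs were not seen during fine-tuning, so embedding quality on them is untested):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Fremtind/mmBERT-base-norwegian")
print(model.max_seq_length)  # 75, the window used during fine-tuning

# The backbone was pretrained with a much longer context, so the
# window can be widened; inference on long inputs will be slower.
model.max_seq_length = 8192
```
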
## Usage

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Fremtind/mmBERT-base-norwegian")
# Run inference
sentences = [
    'En mann lager et sandmaleri på gulvet.',
    'En mann lager kunst.',
    'En kvinne ødelegger et sandmaleri.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000,    ..., 0.3718],
#         [   ..., 1.0000, 0.1509],
#         [0.3718, 0.1509, 1.0000]])
```
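
Beyond pairwise similarity, the embeddings can back a small retrieval loop. A sketch of semantic search over a toy corpus (the corpus and query below are illustrative, not from the datasets):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Fremtind/mmBERT-base-norwegian")

# Toy corpus; in practice this would be your document collection.
corpus = [
    "En kvinne løper langs en steinete strand.",
    "En mann selger donuts til en kunde.",
    "To barn vasker hendene i en vask.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("Hvem løper på stranden?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])
```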
## Evaluation

To verify the utility of our models, we evaluated them on a selection of classification and clustering tasks for Norwegian from [MTEB v2](https://embeddings-benchmark.github.io/mteb/).

The heatmap below shows the results of evaluating five sentence transformers on ten tasks. Three are models we have fine-tuned ([Fremtind/norsbert4-large](https://huggingface.co/Fremtind/norsbert4-large), [Fremtind/norsbert4-base](https://huggingface.co/Fremtind/norsbert4-base), and [Fremtind/mmBERT-base-norwegian](https://huggingface.co/Fremtind/mmBERT-base-norwegian)); the other two are popular, comparable sentence-similarity models ([FFI/SimCSE-NB-BERT-large](https://huggingface.co/FFI/SimCSE-NB-BERT-large) and [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base)).

![Evaluation of 5 sentence transformers on 10 MTEB tasks for Norwegian](https://cdn-uploads.huggingface.co/production/uploads/63c1f9d61ed6506e64a44410/tu9GL8Gq1Yc9uXsf5c6v9.png)

We ranked the models using a **Borda count**, as done in MTEB: on each task every model earns points according to its rank relative to the other models, and the points are summed across all evaluated tasks (a toy tally of the idea is sketched after the table).

| Rank | Model                                                                           | Borda points |
|:----:|:--------------------------------------------------------------------------------|:------------:|
| 1    | **[Fremtind/norsbert4-large](https://huggingface.co/Fremtind/norsbert4-large)**  | **44**       |
| 2    | [FFI/SimCSE-NB-BERT-large](https://huggingface.co/FFI/SimCSE-NB-BERT-large)      | 40           |
| 3    | [Fremtind/norsbert4-base](https://huggingface.co/Fremtind/norsbert4-base)        | 24           |
| 4    | [NbAiLab/nb-sbert-base](https://huggingface.co/NbAiLab/nb-sbert-base)            | 15           |
| 5    | Fremtind/mmBERT-base-norwegian                                                   | 7            |
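
For intuition, here is a minimal sketch of a classic Borda tally. It is illustrative only: the model and task names are made up, and MTEB's exact point scheme and tie handling may differ.

```python
# Toy Borda tally: on each task, the worst model earns 0 points
# and each better-ranked model earns one point per model it beats.
scores = {
    "task_a": {"model_x": 0.71, "model_y": 0.65, "model_z": 0.59},
    "task_b": {"model_x": 0.48, "model_y": 0.52, "model_z": 0.40},
}

borda: dict[str, int] = {}
for task_scores in scores.values():
    for points, model in enumerate(sorted(task_scores, key=task_scores.get)):
        borda[model] = borda.get(model, 0) + points

for model, points in sorted(borda.items(), key=lambda kv: -kv[1]):
    print(model, points)
```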

### Triplet Evaluation

The fine-tuned model was also evaluated on a held-out Norwegian NLI triplet split:

* Dataset: `nob_all_nli_test`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)

| Metric              | Value     |
|:--------------------|:----------|
| **cosine_accuracy** | **0.951** |
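
Cosine accuracy is the fraction of triplets for which the anchor lies closer (by cosine similarity) to the positive than to the negative. A minimal sketch of reproducing such an evaluation; the split name and column layout are assumptions about the dataset, not verified details:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("Fremtind/mmBERT-base-norwegian")

# Assumes the test split exposes anchor/positive/negative columns.
test = load_dataset("Fremtind/all-nli-norwegian", split="test")
evaluator = TripletEvaluator(
    anchors=test["anchor"],
    positives=test["positive"],
    negatives=test["negative"],
    name="nob_all_nli_test",
)
print(evaluator(model))  # e.g. {'nob_all_nli_test_cosine_accuracy': 0.951}
```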

## Training Details

The model was fine-tuned in two stages.

In the **first stage**, it was trained in an unsupervised manner following the SimCSE method (Gao et al., 2021): the same sentence is encoded twice, and because dropout is active in training mode, the model produces two slightly different embeddings. The training objective is to minimize the distance between these two embeddings while maximizing the distance to the embeddings of the other sentences in the batch. For this stage, we created sentence pairs in three categories from the [NDLA Parallel Paragraphs dataset](https://huggingface.co/datasets/NbAiLab/ndla_parallel_paragraphs): (Bokmål, Bokmål), (Nynorsk, Nynorsk), and (Bokmål, Nynorsk). In the (Bokmål, Bokmål) and (Nynorsk, Nynorsk) categories, each sentence was paired with itself, leveraging dropout to create embedding variation; in the (Bokmål, Nynorsk) category, cross-lingual sentence pairs align the model's semantic representations across the two written standards. A sketch of this setup follows below.

In the **second stage**, the model was further fine-tuned on a natural language inference dataset, [Fremtind/all-nli-norwegian](https://huggingface.co/datasets/Fremtind/all-nli-norwegian). The dataset is formatted as triplets (anchor, positive, negative), where the _anchor_ is the premise, the _positive_ is an entailment hypothesis, and the _negative_ is a contradiction hypothesis. The objective is to minimize the distance between anchor and positive while maximizing it between anchor and negative. This stage follows the standard supervised fine-tuning strategy introduced in Sentence-BERT.
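
A minimal sketch of the first stage under the in-batch-negatives formulation. The sentences, column names, and training loop below are assumptions for illustration, not the project's actual training script:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("jhu-clsp/mmBERT-base")

# Identity pairs: dropout noise makes the two encodings of the same
# sentence differ, which is exactly the SimCSE positive pair.
bokmal = ["Setning på bokmål.", "En annen setning."]  # illustrative sentences
pairs = Dataset.from_dict({
    "sentence1": bokmal,
    "sentence2": bokmal,  # each sentence paired with itself
})

# The other examples in each batch act as negatives.
loss = MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=pairs, loss=loss)
trainer.train()
```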

### Training Dataset

#### all-nli-norwegian

* Dataset: [all-nli-norwegian](https://huggingface.co/datasets/Murhaf/all-nli-norwegian) at [98cabde](https://huggingface.co/datasets/Murhaf/all-nli-norwegian/tree/98cabded09bfe5f505757840026ecdf6a357a04c)
* Size: 556,367 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details | <ul><li>min: 6 tokens</li><li>mean: 11.7 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.03 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.51 tokens</li><li>max: 55 tokens</li></ul> |
* Samples:
  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | <code>En person på en hest hopper over et havarert fly.</code> | <code>En person er utendørs, på en hest.</code> | <code>En person er på en diner og bestiller en omelett.</code> |
  | <code>Barn smiler og vinker til kameraet</code> | <code>Det er barn til stede</code> | <code>Barna rynker pannen</code> |
  | <code>En gutt hopper på skateboard midt på en rød bro.</code> | <code>Gutten gjør et skateboardtriks.</code> | <code>Gutten skater nedover fortauet.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "mini_batch_size": 32,
      "gather_across_devices": false
  }
  ```

### Evaluation Dataset

#### all-nli-norwegian

* Dataset: [all-nli-norwegian](https://huggingface.co/datasets/Murhaf/all-nli-norwegian) at [98cabde](https://huggingface.co/datasets/Murhaf/all-nli-norwegian/tree/98cabded09bfe5f505757840026ecdf6a357a04c)
* Size: 6,561 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
  |         | anchor | positive | negative |
  |:--------|:-------|:---------|:---------|
  | type    | string | string   | string   |
  | details | <ul><li>min: 6 tokens</li><li>mean: 21.73 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.14 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.75 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
  | anchor | positive | negative |
  |:-------|:---------|:---------|
  | <code>To kvinner klemmer mens de holder take-away pakker.</code> | <code>To kvinner holder pakker.</code> | <code>Mennene slåss utenfor en deli.</code> |
  | <code>To små barn i blå drakter, en med nummer 9 og en med nummer 2, står på trinn i et bad og vasker hendene i en vask.</code> | <code>To barn i nummererte drakter vasker hendene.</code> | <code>To barn i jakker går til skolen.</code> |
  | <code>En mann selger donuts til en kunde under et verdensutstillingsarrangement holdt i byen Angeles</code> | <code>En mann selger donuts til en kunde.</code> | <code>En kvinne drikker kaffen sin på en liten kafé.</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim",
      "mini_batch_size": 32,
      "gather_across_devices": false
  }
  ```

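Putting the pieces together, a minimal sketch of the second-stage run with this loss configuration. The starting checkpoint and script details are assumptions, not the exact training code:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

# In practice this would start from the stage-one (SimCSE) checkpoint.
model = SentenceTransformer("jhu-clsp/mmBERT-base")

# (anchor, positive, negative) triplets as described above.
train = load_dataset("Fremtind/all-nli-norwegian", split="train")

loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=32)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train, loss=loss)
trainer.train()
```
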
### Training Hyperparameters

#### Non-Default Hyperparameters

<details><summary>Click to expand</summary>

- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 256
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates

</details>

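For reference, these settings map directly onto the trainer's arguments; a sketch with a hypothetical output directory:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="mmbert-base-norwegian",  # hypothetical path
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=256,
    num_train_epochs=1,
    warmup_ratio=0.1,
    # no_duplicates avoids repeated texts in a batch, which would
    # otherwise become false negatives for the ranking loss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```
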
#### All Hyperparameters
<details><summary>Click to expand</summary>
</details>
### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | nob_all_nli_test_cosine_accuracy |
|:------:|:----:|:-------------:|:---------------:|:--------------------------------:|
| 0.3690 | 100  | 1.7006        | 0.6474          | 0.9420                           |
| 0.7380 | 200  | 1.1183        | 0.6002          | 0.9510                           |

### Framework Versions

<details><summary>Click to expand</summary>

- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1

</details>

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### CachedMultipleNegativesRankingLoss

```bibtex
@misc{gao2021scaling,
    title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
    author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
    year={2021},
    eprint={2101.06983},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

<!--
## Glossary
|