Nikolay Banar committed on
Commit 767d026 · 1 Parent(s): 39293f6

SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
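This configuration enables mean pooling: the sentence embedding is the average of the token embeddings, with padding positions masked out. A minimal sketch of that operation on toy tensors (illustrative only; the library applies it for you when loading the model):

```python
import torch

# Toy tensors standing in for a model's output; shapes are (batch, seq_len, dim).
token_embeddings = torch.randn(2, 5, 768)
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 1]])

# Zero out padding tokens, then average over the real tokens only.
mask = attention_mask.unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```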
README.md CHANGED
@@ -1,12 +1,167 @@
  ---
+ library_name: sentence-transformers
+ base_model:
+ - DTAI-KULeuven/robbert-2023-dutch-base
+ tags:
+ - generated_from_trainer
+ - transformers
  license: mit
  datasets:
  - clips/beir-nl-mmarco
  - clips/beir-nl-hotpotqa
  - clips/beir-nl-fever
- language:
- - nl
- base_model:
- - DTAI-KULeuven/robbert-2023-dutch-base
+ language: nl
  pipeline_tag: sentence-similarity
- ---
+ model-index:
+ - name: RobBERT-2023-base-ft
+   results: []
+ ---
+
+ # RobBERT-2023-base-ft
+
+ RobBERT-2023-base-ft is a fine-tuned version of [DTAI-KULeuven/robbert-2023-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2023-dutch-base). It demonstrates strong performance on MTEB-NL. If you are looking for a state-of-the-art model of comparable size, you may also want to consider [clips/e5-base-trm-nl](https://huggingface.co/clips/e5-base-trm-nl).
+
+ ## Usage
+
+ Below is an example of encoding queries and passages from the MS MARCO passage ranking dataset.
+
+ ```python
+ import torch.nn.functional as F
+
+ from torch import Tensor
+ from transformers import AutoTokenizer, AutoModel
+
+
+ def average_pool(last_hidden_states: Tensor,
+                  attention_mask: Tensor) -> Tensor:
+     last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
+     return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
+
+
+ # Each input text should start with "query: " or "passage: ".
+ # For tasks other than retrieval, you can simply use the "query: " prefix.
+ input_texts = [
+     'query: hoeveel eiwitten moet een vrouw eten',
+     'query: top definieer',
+     "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
+     "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
+ ]
+
+ tokenizer = AutoTokenizer.from_pretrained('clips/robbert-2023-base-ft')
+ model = AutoModel.from_pretrained('clips/robbert-2023-base-ft')
+
+ # Tokenize the input texts
+ batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
+
+ outputs = model(**batch_dict)
+ embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
+
+ # Normalize embeddings so dot products equal cosine similarities
+ embeddings = F.normalize(embeddings, p=2, dim=1)
+ scores = (embeddings[:2] @ embeddings[2:].T) * 100
+ print(scores.tolist())
+ ```
+
+ Below is an example of usage with the sentence-transformers library.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Load the model from Hugging Face
+ model = SentenceTransformer("clips/robbert-2023-base-ft")
+
+ # Use encode_query/encode_document for retrieval, or encode_query for
+ # general-purpose embeddings. Both methods add the prompt prefixes
+ # ("query: " / "passage: ") automatically.
+ queries = [
+     'hoeveel eiwitten moet een vrouw eten',
+     'top definieer',
+ ]
+ documents = [
+     'Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.',
+     'Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen.',
+ ]
+ query_embeddings = model.encode_query(queries)
+ document_embeddings = model.encode_document(documents)
+ print(query_embeddings.shape, document_embeddings.shape)
+ # (2, 768) (2, 768)
+
+ similarities = model.similarity(query_embeddings, document_embeddings)
+ print(similarities)
+ # tensor([[0.7730, 0.2940],
+ #         [0.3368, 0.7256]])
+ ```
+ ## Benchmark Evaluation
+
+ Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best model per size category are highlighted in bold). Column abbreviations: Prm = parameters, Cls = classification, MLCls = multi-label classification, PCls = pair classification, Rrnk = reranking, Rtr = retrieval, Clust = clustering, STS = semantic textual similarity, AvgD = average over datasets, AvgT = average over task types.
+
+ | Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
+ |---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
+ | **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
+ | **Supervised (small, <100M)** | | | | | | | | | | |
+ | **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
+ | **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
+ | **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
+ | **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
+ | **Supervised (base, <305M)** | | | | | | | | | | |
+ | granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
+ | **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
+ | **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
+ | multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
+ | paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
+ | **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
+ | **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
+ | **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
+ | potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
+ | multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
+ | granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
+ | paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
+ | Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
+ | gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
+ | **Supervised (large, >305M)** | | | | | | | | | | |
+ | **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
+ | **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
+ | **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
+ | **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
+ | **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
+ | multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
+ | Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
+ | bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
+ | jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
+
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
+ - learning_rate: 1e-05
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: constant_with_warmup
+ - lr_scheduler_warmup_ratio: 0.25
+ - num_epochs: 1.0
+
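+ As a rough sketch (the exact training script is not part of this commit, and anything not listed above is left at its default), these settings map approximately to the following Hugging Face `TrainingArguments`:
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Illustrative only: output_dir is a placeholder path.
+ args = TrainingArguments(
+     output_dir="robbert-2023-base-ft",
+     learning_rate=1e-5,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=8,
+     seed=42,
+     optim="adamw_torch",                       # AdamW, betas=(0.9, 0.999), eps=1e-8
+     lr_scheduler_type="constant_with_warmup",
+     warmup_ratio=0.25,
+     num_train_epochs=1.0,
+ )
+ ```
+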
+ ### Framework versions
+
+ - Transformers 4.56.1
+ - Pytorch 2.7.1+cu128
+ - Datasets 4.0.0
+ - Tokenizers 0.22.0
+
+ ## Citation Information
+
+ If you find our paper, benchmark, or models helpful, please consider citing them as follows:
+ ```bibtex
+ @misc{banar2025mtebnle5nlembeddingbenchmark,
+       title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
+       author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
+       year={2025},
+       eprint={2509.12340},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2509.12340},
+ }
+ ```
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "model_type": "SentenceTransformer",
+   "__version__": {
+     "sentence_transformers": "5.1.0",
+     "transformers": "4.56.1",
+     "pytorch": "2.7.1+cu126"
+   },
+   "prompts": {
+     "query": "query: ",
+     "document": "passage: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
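The `prompts` mapping means the library prepends `query: ` and `passage: ` for you. A minimal sketch, assuming a sentence-transformers release recent enough to support named prompts:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clips/robbert-2023-base-ft")

# The "query" prompt ("query: ") is prepended automatically...
emb = model.encode(["hoeveel eiwitten moet een vrouw eten"], prompt_name="query")

# ...which is equivalent to adding the prefix yourself.
emb_manual = model.encode(["query: hoeveel eiwitten moet een vrouw eten"])
```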
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
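These three modules form the encoding pipeline (Transformer, then mean Pooling, then L2 Normalize). Loading the repo with `SentenceTransformer(...)` assembles them for you; a hypothetical manual equivalent:

```python
from sentence_transformers import SentenceTransformer, models

# Illustrative reconstruction of the pipeline described in modules.json.
transformer = models.Transformer("clips/robbert-2023-base-ft", max_seq_length=512)
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="mean")
normalize = models.Normalize()
model = SentenceTransformer(modules=[transformer, pooling, normalize])
```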
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
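This caps inputs at 512 tokens; longer texts are truncated. A quick illustrative check:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clips/robbert-2023-base-ft")
print(model.max_seq_length)  # 512; inputs beyond this are truncated
```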