Nikolay Banar committed on
Commit 8d3fba2 · 1 Parent(s): a06723b

README.md updated

Files changed (1)
  1. README.md +109 -17
README.md CHANGED

@@ -5,7 +5,7 @@ base_model:
 tags:
 - generated_from_trainer
 model-index:
-- name: me5-small-trimmed-old-syn-filt_2ng_llr_1e5_wu_25
+- name: E5-small-trm-nl
   results: []
 license: mit
 datasets:
@@ -22,19 +22,98 @@ should probably proofread and complete it, then remove this comment. -->
 
 # me5-small-trimmed-old-syn-filt_2ng_llr_1e5_wu_25
 
-This model is a fine-tuned version of [nicolaebanari/me5-small-trimmed-nl-test](https://huggingface.co/nicolaebanari/me5-small-trimmed-nl-test) on an unknown dataset.
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
+This model is a fine-tuned version of [clips/e5-small-trm](https://huggingface.co/clips/e5-small-trm).
+
+## Usage
+
+Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset:
+
+```python
+import torch.nn.functional as F
+
+from torch import Tensor
+from transformers import AutoTokenizer, AutoModel
+
+
+def average_pool(last_hidden_states: Tensor,
+                 attention_mask: Tensor) -> Tensor:
+    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
+    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
+
+
+# Each input text should start with "query: " or "passage: ".
+# For tasks other than retrieval, you can simply use the "query: " prefix.
+input_texts = [
+    'query: hoeveel eiwitten moet een vrouw eten',
+    'query: top definieer',
+    "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
+    "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
+]
+
+tokenizer = AutoTokenizer.from_pretrained('clips/e5-small-trm-nl')
+model = AutoModel.from_pretrained('clips/e5-small-trm-nl')
+
+# Tokenize the input texts
+batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
+
+outputs = model(**batch_dict)
+embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
+
+# Normalize embeddings
+embeddings = F.normalize(embeddings, p=2, dim=1)
+scores = (embeddings[:2] @ embeddings[2:].T) * 100
+print(scores.tolist())
+```
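
The `scores` matrix printed above has one row per query and one column per passage (cosine similarity scaled by 100). A minimal continuation sketch, assuming `scores` and `input_texts` from the snippet above are still in scope; this ranking loop is illustrative, not part of the model card:

```python
# Rank passages for each query by descending similarity.
# Rows of `scores` are queries; columns are passages.
ranking = scores.argsort(dim=1, descending=True)
for qi, order in enumerate(ranking.tolist()):
    best = input_texts[2 + order[0]]
    print(f"{input_texts[qi]!r} -> best passage starts: {best[:60]!r}")
```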
+
+Below is an example of usage with `sentence_transformers`:
+
+```python
+from sentence_transformers import SentenceTransformer
+
+model = SentenceTransformer('clips/e5-small-trm-nl')
+input_texts = [
+    'query: hoeveel eiwitten moet een vrouw eten',
+    'query: top definieer',
+    "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
+    "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
+]
+embeddings = model.encode(input_texts, normalize_embeddings=True)
+```
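
Because `normalize_embeddings=True` returns unit-length vectors, query-passage scores here are plain dot products, mirroring the Transformers example. A minimal continuation sketch, assuming `embeddings` from the snippet above (`encode` returns a NumPy array by default):

```python
import numpy as np

# First two rows are queries, last two are passages (same layout as above).
scores = embeddings[:2] @ embeddings[2:].T
print(np.round(scores, 3))
```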
+
+## Benchmark Evaluation
+
+Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340), and the best model per size category, are highlighted in bold). Column abbreviations: Prm = parameters; Cls = classification; MLCls = multi-label classification; PCls = pair classification; Rrnk = reranking; Rtr = retrieval; Clust = clustering; STS = semantic textual similarity; AvgD = average over datasets; AvgT = average over task types.
+
+| Model                                 | Prm  | Cls      | MLCls    | PCls     | Rrnk     | Rtr      | Clust    | STS      | AvgD     | AvgT     |
+|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
+| **Num. Datasets (→)**                 |      | 12       | 3        | 2        | 1        | 12       | 8        | 2        | 40       |          |
+| **Supervised (small, <100M)**         |      |          |          |          |          |          |          |          |          |          |
+| **e5-small-v2-t2t**                   | 33M  | 53.7     | 38.5     | 74.5     | 85.9     | 45.0     | 24.1     | 74.3     | 46.9     | 56.6     |
+| **e5-small-v2-t2t-nl**                | 33M  | 55.3     | 40.9     | 74.9     | 86.0     | 49.9     | 28.0     | 74.1     | 49.8     | 58.4     |
+| **e5-small-trm**                      | 41M  | 56.3     | 43.5     | **76.5** | **87.3** | 53.1     | 28.2     | 74.2     | 51.4     | 59.9     |
+| **e5-small-trm-nl**                   | 41M  | **58.2** | **44.7** | 76.0     | 87.1     | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
+| **Supervised (base, <305M)**          |      |          |          |          |          |          |          |          |          |          |
+| granite-embedding-107m-multilingual   | 107M | 53.9     | 41.8     | 70.1     | 84.7     | 50.2     | 29.8     | 68.4     | 49.4     | 57.0     |
+| **e5-base-v2-t2t**                    | 109M | 54.4     | 40.3     | 73.3     | 85.6     | 46.2     | 25.5     | 73.2     | 47.8     | 56.9     |
+| **e5-base-v2-t2t-nl**                 | 109M | 53.9     | 41.5     | 72.5     | 84.0     | 46.4     | 26.9     | 69.3     | 47.8     | 56.3     |
+| multilingual-e5-small                 | 118M | 56.3     | 43.5     | 76.5     | 87.1     | 53.1     | 28.2     | 74.2     | 51.4     | 59.8     |
+| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0     | 38.1     | 78.2     | 80.6     | 37.7     | 29.6     | 76.3     | 46.3     | 56.5     |
+| **RobBERT-2023-base-ft**              | 124M | 58.1     | 44.6     | 72.7     | 84.7     | 51.6     | 32.9     | 68.5     | 52.0     | 59.0     |
+| **e5-base-trm**                       | 124M | 58.1     | 44.4     | 76.7     | 88.3     | 55.8     | 28.1     | 74.9     | 52.9     | 60.9     |
+| **e5-base-trm-nl**                    | 124M | **59.6** | **45.9** | 78.4     | 87.5     | 56.5     | **34.3** | 75.8     | **55.0** | **62.6** |
+| potion-multilingual-128M              | 128M | 51.8     | 40.0     | 60.4     | 80.3     | 35.7     | 26.1     | 62.0     | 42.6     | 50.9     |
+| multilingual-e5-base                  | 278M | 58.2     | 44.4     | 76.7     | **88.4** | 55.8     | 27.7     | 74.9     | 52.8     | 60.9     |
+| granite-embedding-278m-multilingual   | 278M | 54.6     | 41.8     | 71.0     | 85.6     | 52.4     | 30.3     | 68.9     | 50.5     | 58.0     |
+| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1     | 40.5     | **81.9** | 82.3     | 41.4     | 30.8     | 79.3     | 49.2     | 59.2     |
+| Arctic-embed-m-v2.0                   | 305M | 54.4     | 42.6     | 66.6     | 86.2     | 51.8     | 26.5     | 64.9     | 49.1     | 56.1     |
+| gte-multilingual-base                 | 305M | 59.1     | 37.7     | 77.8     | 82.3     | **56.8** | 31.3     | **78.6** | 53.8     | 60.5     |
+| **Supervised (large, >305M)**         |      |          |          |          |          |          |          |          |          |          |
+| **e5-large-v2-t2t**                   | 335M | 55.7     | 41.4     | 75.7     | 86.6     | 49.9     | 25.5     | 74.0     | 49.5     | 58.4     |
+| **e5-large-v2-t2t-nl**                | 335M | 57.3     | 42.4     | 76.9     | 86.9     | 50.8     | 27.7     | 74.1     | 51.7     | 59.4     |
+| **RobBERT-2023-large-ft**             | 355M | 59.3     | 45.2     | 68.7     | 82.3     | 48.3     | 31.6     | 70.6     | 51.0     | 58.0     |
+| **e5-large-trm**                      | 355M | 60.2     | 45.4     | 80.3     | 90.3     | 59.0     | 28.7     | 78.8     | 55.1     | 63.3     |
+| **e5-large-trm-nl**                   | 355M | **62.2** | **48.0** | **81.4** | 87.2     | 58.2     | 35.6     | 78.2     | **57.0** | **64.4** |
+| multilingual-e5-large                 | 560M | 60.2     | 45.4     | 80.3     | **90.3** | 59.1     | 29.5     | 78.8     | 55.3     | 63.4     |
+| Arctic-embed-l-v2.0                   | 568M | 59.3     | 45.2     | 74.2     | 88.2     | 59.0     | 29.8     | 71.7     | 54.3     | 61.1     |
+| bge-m3                                | 568M | 60.7     | 44.2     | 78.3     | 88.7     | **60.0** | 29.2     | 78.1     | 55.4     | 63.1     |
+| jina-embeddings-v3                    | 572M | 61.7     | 38.9     | 76.8     | 78.5     | 59.1     | **38.9** | **84.8** | **57.0** | 62.7     |
 
 ## Training procedure
 
@@ -51,13 +130,26 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.25
 - num_epochs: 1.0
 
-### Training results
-
-
 
 ### Framework versions
 
 - Transformers 4.56.1
 - Pytorch 2.7.1+cu128
 - Datasets 4.0.0
-- Tokenizers 0.22.0
+- Tokenizers 0.22.0
+
+## Citation Information
+
+If you find our paper, benchmark, or models helpful, please consider citing us as follows:
+```latex
+@misc{banar2025mtebnle5nlembeddingbenchmark,
+  title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
+  author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
+  year={2025},
+  eprint={2509.12340},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL},
+  url={https://arxiv.org/abs/2509.12340},
+}
+```
+[//]: # (https://arxiv.org/abs/2509.12340)
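
The hyperparameter hunk above shows only `lr_scheduler_warmup_ratio: 0.25` and `num_epochs: 1.0`; the rest of the hyperparameter list falls outside the diff context. A hypothetical sketch of how those two values map onto Hugging Face `TrainingArguments`, where every other value is an assumption rather than something recorded in this commit:

```python
from transformers import TrainingArguments

# Hypothetical sketch: only warmup_ratio and num_train_epochs come from the
# diff above; output_dir and learning_rate are illustrative assumptions
# (the run name's "llr_1e5" hints at a learning rate of 1e-5).
args = TrainingArguments(
    output_dir="me5-small-trimmed-nl",  # assumed
    learning_rate=1e-5,                 # assumed from the run name
    warmup_ratio=0.25,                  # "lr_scheduler_warmup_ratio" in the card
    num_train_epochs=1.0,               # "num_epochs" in the card
)
```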