Nikolay Banar committed
Commit 6423029 · 1 Parent(s): 3dea274

README.md updated

Files changed (1)
  1. README.md +111 -23
README.md CHANGED
@@ -5,7 +5,7 @@ base_model:
  tags:
  - generated_from_trainer
  model-index:
- - name: me5-base-trimmed-old-syn-filt_2ng_llr_1e5_wu_25
+ - name: E5-base-trm-nl
  results: []
  license: mit
  datasets:
@@ -17,24 +17,100 @@ language:
  pipeline_tag: sentence-similarity
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # me5-base-trimmed-old-syn-filt_2ng_llr_1e5_wu_25
-
- This model is a fine-tuned version of [nicolaebanari/me5-base-trimmed-nl-test](https://huggingface.co/nicolaebanari/me5-base-trimmed-nl-test) on an unknown dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ # E5-base-trm-nl
+
+ This model is a fine-tuned version of [clips/e5-base-trm](https://huggingface.co/clips/e5-base-trm).
+
+ ## Usage
+
+ Below is an example that encodes queries and passages from the MS-MARCO passage ranking dataset.
+
+ ```python
+ import torch.nn.functional as F
+
+ from torch import Tensor
+ from transformers import AutoTokenizer, AutoModel
+
+
+ def average_pool(last_hidden_states: Tensor,
+                  attention_mask: Tensor) -> Tensor:
+     # Zero out padded positions, then average the remaining token embeddings.
+     last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
+     return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
+
+
+ # Each input text should start with "query: " or "passage: ".
+ # For tasks other than retrieval, you can simply use the "query: " prefix.
+ input_texts = [
+     'query: hoeveel eiwitten moet een vrouw eten',
+     'query: top definieer',
+     "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
+     "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
+ ]
+
+ tokenizer = AutoTokenizer.from_pretrained('clips/e5-base-trm-nl')
+ model = AutoModel.from_pretrained('clips/e5-base-trm-nl')
+
+ # Tokenize the input texts
+ batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
+
+ outputs = model(**batch_dict)
+ embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
+
+ # Normalize embeddings so that dot products equal cosine similarities
+ embeddings = F.normalize(embeddings, p=2, dim=1)
+ scores = (embeddings[:2] @ embeddings[2:].T) * 100
+ print(scores.tolist())
+ ```
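Because the embeddings are L2-normalized, the matrix product above yields cosine similarities between each query and each passage; the factor of 100 only rescales them for readability, so each query should score highest against its matching passage.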
+
+ Below is an example of usage with the sentence_transformers library.
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer('clips/e5-base-trm-nl')
+ input_texts = [
+     'query: hoeveel eiwitten moet een vrouw eten',
+     'query: top definieer',
+     "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
+     "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
+ ]
+ embeddings = model.encode(input_texts, normalize_embeddings=True)
+ ```
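For parity with the transformers example, here is a minimal sketch of scoring the encoded queries against the passages. It assumes the list ordering above (two queries followed by two passages) and that `normalize_embeddings=True` was used, so dot products are cosine similarities.

```python
# Minimal sketch: score queries against passages, mirroring the transformers
# example. `embeddings` is the (normalized) numpy array returned by encode().
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```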
+ ## Benchmark Evaluation
+
+ Results on MTEB-NL (models introduced in [our paper](https://arxiv.org/abs/2509.12340) and the best result per size category are highlighted in bold). Column abbreviations: Prm = parameters; Cls = classification; MLCls = multi-label classification; PCls = pair classification; Rrnk = reranking; Rtr = retrieval; Clust = clustering; STS = semantic textual similarity; AvgD = average over datasets; AvgT = average over task types.
+
+ | Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
+ |---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
+ | **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
+ | **Supervised (small, <100M)** | | | | | | | | | | |
+ | **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
+ | **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
+ | **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
+ | **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
+ | **Supervised (base, <305M)** | | | | | | | | | | |
+ | granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
+ | **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
+ | **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
+ | multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
+ | paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
+ | **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
+ | **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
+ | **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
+ | potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
+ | multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
+ | granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
+ | paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
+ | Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
+ | gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
+ | **Supervised (large, >305M)** | | | | | | | | | | |
+ | **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
+ | **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
+ | **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
+ | **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
+ | **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
+ | multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
+ | Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
+ | bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
+ | jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |
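To reproduce scores of this kind, the MTEB harness can be pointed at the model. Below is a minimal sketch, assuming the MTEB-NL tasks are reachable through the `mteb` package's language filter; the released benchmark may instead define its own task collection, and the E5-style "query: "/"passage: " prefixes may need to be supplied via model prompts depending on the `mteb` version.

```python
import mteb
from sentence_transformers import SentenceTransformer

# Assumption: Dutch tasks are selectable by language code ("nld");
# the MTEB-NL release may define a dedicated task collection instead.
model = SentenceTransformer("clips/e5-base-trm-nl")
tasks = mteb.get_tasks(languages=["nld"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/e5-base-trm-nl")
```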

  ## Training procedure

@@ -51,13 +127,25 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_ratio: 0.25
  - num_epochs: 1.0
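As an aside, a minimal sketch of how the two hyperparameters visible in this hunk map onto `transformers.TrainingArguments`; every other argument below is an illustrative placeholder, not taken from the card.

```python
from transformers import TrainingArguments

# Only warmup_ratio and num_train_epochs come from the card's list;
# output_dir is a placeholder.
args = TrainingArguments(
    output_dir="e5-base-trm-nl",  # placeholder
    warmup_ratio=0.25,            # card: lr_scheduler_warmup_ratio
    num_train_epochs=1.0,         # card: num_epochs
)
```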

- ### Training results
-
-
-
  ### Framework versions

  - Transformers 4.56.1
  - Pytorch 2.7.1+cu128
  - Datasets 4.0.0
- - Tokenizers 0.22.0
+ - Tokenizers 0.22.0
+
+ ## Citation Information
+
+ If you find our paper, benchmark, or models helpful, please consider citing us as follows:
+ ```latex
+ @misc{banar2025mtebnle5nlembeddingbenchmark,
+   title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
+   author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
+   year={2025},
+   eprint={2509.12340},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2509.12340},
+ }
+ ```
+ [//]: # (https://arxiv.org/abs/2509.12340)