Nikolay Banar committed f06b84d (parent: f0cb58e): README.md updated

---
license: mit
language:
- nl
base_model:
- intfloat/multilingual-e5-small
pipeline_tag: sentence-similarity
---

# E5-small-trm

This model is a trimmed version of [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small), produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress model size.
The following table summarizes the trimming process.

| | intfloat/multilingual-e5-small | clips/e5-small-trm |
|:---------------------------|:-------------------------------|:-------------------|
| parameter_size_full | 117,653,760 | 40,840,320 |
| parameter_size_embedding | 96,014,208 | 19,200,768 |
| vocab_size | 250,037 | 50,002 |
| compression_rate_full | 100.0 | 34.71 |
| compression_rate_embedding | 100.0 | 20.0 |


The following table shows the parameters used to trim the vocabulary.

| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:-----------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| nl | allenai/c4 | text | nl | validation | 50000 | 2 |
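As a quick sanity check, the embedding parameter counts in the summary table can be reproduced from the vocabulary sizes and the model's hidden size (384 for multilingual-e5-small); the compression rates then follow directly:

```python
# Reproduce the trimming summary from the vocab sizes and hidden size.
# multilingual-e5-small uses a hidden size of 384.
HIDDEN_DIM = 384

orig_vocab, trimmed_vocab = 250_037, 50_002

orig_emb = orig_vocab * HIDDEN_DIM        # embedding parameters before trimming
trimmed_emb = trimmed_vocab * HIDDEN_DIM  # embedding parameters after trimming

emb_rate = round(100 * trimmed_emb / orig_emb, 1)     # compression_rate_embedding
full_rate = round(100 * 40_840_320 / 117_653_760, 2)  # compression_rate_full

print(orig_emb, trimmed_emb, emb_rate, full_rate)
# → 96014208 19200768 20.0 34.71
```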

## Usage

Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset.

```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Zero out padded positions, then average over the real tokens only.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
    'query: hoeveel eiwitten moet een vrouw eten',
    'query: top definieer',
    "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
    "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]

tokenizer = AutoTokenizer.from_pretrained('clips/e5-small-trm')
model = AutoModel.from_pretrained('clips/e5-small-trm')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings so that dot products give cosine similarities
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
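The masking logic inside `average_pool` can be illustrated without `torch`: positions where the attention mask is 0 (padding) are excluded both from the sum and from the denominator. A minimal plain-Python sketch with made-up numbers:

```python
# Toy illustration of masked mean pooling: 3 token positions, hidden size 2;
# the last position is padding and must not affect the average.
hidden = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]
mask = [1, 1, 0]

n = sum(mask)  # number of real (non-padding) tokens
pooled = [
    sum(h[d] for h, m in zip(hidden, mask) if m) / n
    for d in range(2)
]
print(pooled)  # → [2.0, 3.0]
```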

Below is an example of usage with `sentence_transformers`.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('clips/e5-small-trm')
input_texts = [
    'query: hoeveel eiwitten moet een vrouw eten',
    'query: top definieer',
    "passage: Als algemene richtlijn geldt dat de gemiddelde eiwitbehoefte voor vrouwen van 19 tot 70 jaar volgens de CDC 46 gram per dag bedraagt. Maar, zoals je in deze tabel kunt zien, moet je dit verhogen als je zwanger bent of traint voor een marathon. Bekijk de onderstaande tabel om te zien hoeveel eiwitten je dagelijks zou moeten eten.",
    "passage: Definitie van top voor leerlingen Engels. : 1 het hoogste punt van een berg : de top van een berg. : 2 het hoogste niveau. : 3 een bijeenkomst of reeks bijeenkomsten tussen de leiders van twee of meer regeringen."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
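Because `normalize_embeddings=True` returns unit-length vectors, query–passage similarity reduces to a plain dot product (equal to cosine similarity). A tiny self-contained illustration with made-up 2-D vectors:

```python
import math

def normalize(v):
    # Scale a vector to unit length (L2 norm of 1).
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

q = normalize([3.0, 4.0])  # hypothetical query embedding
p = normalize([4.0, 3.0])  # hypothetical passage embedding

# Dot product of unit vectors == cosine similarity.
dot = sum(a * b for a, b in zip(q, p))
print(round(dot, 4))  # → 0.96
```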


## Benchmark Evaluation

Results on MTEB-NL (Prm = parameter count; Cls = classification; MLCls = multi-label classification; PCls = pair classification; Rrnk = reranking; Rtr = retrieval; Clust = clustering; STS = semantic textual similarity; AvgD = average over datasets; AvgT = average over task types):

| Model | Prm | Cls | MLCls | PCls | Rrnk | Rtr | Clust | STS | AvgD | AvgT |
|---------------------------------------|------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| **Num. Datasets (→)** | | 12 | 3 | 2 | 1 | 12 | 8 | 2 | 40 | |
| **Supervised (small, <100M)** | | | | | | | | | | |
| **e5-small-v2-t2t** | 33M | 53.7 | 38.5 | 74.5 | 85.9 | 45.0 | 24.1 | 74.3 | 46.9 | 56.6 |
| **e5-small-v2-t2t-nl** | 33M | 55.3 | 40.9 | 74.9 | 86.0 | 49.9 | 28.0 | 74.1 | 49.8 | 58.4 |
| **e5-small-trm** | 41M | 56.3 | 43.5 | **76.5** | **87.3** | 53.1 | 28.2 | 74.2 | 51.4 | 59.9 |
| **e5-small-trm-nl** | 41M | **58.2** | **44.7** | 76.0 | 87.1 | **56.0** | **32.2** | **74.6** | **53.8** | **61.3** |
| **Supervised (base, <305M)** | | | | | | | | | | |
| granite-embedding-107m-multilingual | 107M | 53.9 | 41.8 | 70.1 | 84.7 | 50.2 | 29.8 | 68.4 | 49.4 | 57.0 |
| **e5-base-v2-t2t** | 109M | 54.4 | 40.3 | 73.3 | 85.6 | 46.2 | 25.5 | 73.2 | 47.8 | 56.9 |
| **e5-base-v2-t2t-nl** | 109M | 53.9 | 41.5 | 72.5 | 84.0 | 46.4 | 26.9 | 69.3 | 47.8 | 56.3 |
| multilingual-e5-small | 118M | 56.3 | 43.5 | 76.5 | 87.1 | 53.1 | 28.2 | 74.2 | 51.4 | 59.8 |
| paraphrase-multilingual-MiniLM-L12-v2 | 118M | 55.0 | 38.1 | 78.2 | 80.6 | 37.7 | 29.6 | 76.3 | 46.3 | 56.5 |
| **RobBERT-2023-base-ft** | 124M | 58.1 | 44.6 | 72.7 | 84.7 | 51.6 | 32.9 | 68.5 | 52.0 | 59.0 |
| **e5-base-trm** | 124M | 58.1 | 44.4 | 76.7 | 88.3 | 55.8 | 28.1 | 74.9 | 52.9 | 60.9 |
| **e5-base-trm-nl** | 124M | **59.6** | **45.9** | 78.4 | 87.5 | 56.5 | **34.3** | 75.8 | **55.0** | **62.6** |
| potion-multilingual-128M | 128M | 51.8 | 40.0 | 60.4 | 80.3 | 35.7 | 26.1 | 62.0 | 42.6 | 50.9 |
| multilingual-e5-base | 278M | 58.2 | 44.4 | 76.7 | **88.4** | 55.8 | 27.7 | 74.9 | 52.8 | 60.9 |
| granite-embedding-278m-multilingual | 278M | 54.6 | 41.8 | 71.0 | 85.6 | 52.4 | 30.3 | 68.9 | 50.5 | 58.0 |
| paraphrase-multilingual-mpnet-base-v2 | 278M | 58.1 | 40.5 | **81.9** | 82.3 | 41.4 | 30.8 | 79.3 | 49.2 | 59.2 |
| Arctic-embed-m-v2.0 | 305M | 54.4 | 42.6 | 66.6 | 86.2 | 51.8 | 26.5 | 64.9 | 49.1 | 56.1 |
| gte-multilingual-base | 305M | 59.1 | 37.7 | 77.8 | 82.3 | **56.8** | 31.3 | **78.6** | 53.8 | 60.5 |
| **Supervised (large, >305M)** | | | | | | | | | | |
| **e5-large-v2-t2t** | 335M | 55.7 | 41.4 | 75.7 | 86.6 | 49.9 | 25.5 | 74.0 | 49.5 | 58.4 |
| **e5-large-v2-t2t-nl** | 335M | 57.3 | 42.4 | 76.9 | 86.9 | 50.8 | 27.7 | 74.1 | 51.7 | 59.4 |
| **RobBERT-2023-large-ft** | 355M | 59.3 | 45.2 | 68.7 | 82.3 | 48.3 | 31.6 | 70.6 | 51.0 | 58.0 |
| **e5-large-trm** | 355M | 60.2 | 45.4 | 80.3 | 90.3 | 59.0 | 28.7 | 78.8 | 55.1 | 63.3 |
| **e5-large-trm-nl** | 355M | **62.2** | **48.0** | **81.4** | 87.2 | 58.2 | 35.6 | 78.2 | **57.0** | **64.4** |
| multilingual-e5-large | 560M | 60.2 | 45.4 | 80.3 | **90.3** | 59.1 | 29.5 | 78.8 | 55.3 | 63.4 |
| Arctic-embed-l-v2.0 | 568M | 59.3 | 45.2 | 74.2 | 88.2 | 59.0 | 29.8 | 71.7 | 54.3 | 61.1 |
| bge-m3 | 568M | 60.7 | 44.2 | 78.3 | 88.7 | **60.0** | 29.2 | 78.1 | 55.4 | 63.1 |
| jina-embeddings-v3 | 572M | 61.7 | 38.9 | 76.8 | 78.5 | 59.1 | **38.9** | **84.8** | **57.0** | 62.7 |


### Citation Information

If you find our paper, benchmark, or models helpful, please consider citing them as follows:

```bibtex
@misc{banar2025mtebnle5nlembeddingbenchmark,
      title={MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch},
      author={Nikolay Banar and Ehsan Lotfi and Jens Van Nooten and Cristina Arhiliuc and Marija Kliocaite and Walter Daelemans},
      year={2025},
      eprint={2509.12340},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.12340},
}
```