Datasets · Modalities: Text · Formats: parquet · Libraries: Datasets, Dask

Samoed committed (verified) · Commit 2bc81c0 · 1 Parent(s): 29afa25

Add dataset card

Files changed (1):
  1. README.md +436 −161

README.md CHANGED
@@ -1,34 +1,28 @@
---
annotations_creators:
- - crowdsourced
language_creators:
- crowdsourced
- found
- machine-generated
language:
- - de
- - en
- - es
- - fr
- - it
- - nl
- - pl
- - pt
- - ru
- - zh
- license:
- - other
- multilinguality:
- - multilingual
size_categories:
- 10K<n<100K
- source_datasets:
- - extended|other-sts-b
task_categories:
- - text-classification
- task_ids:
- - text-scoring
- - semantic-similarity-scoring
pretty_name: STSb Multi MT
configs:
- config_name: default
@@ -119,174 +113,455 @@ configs:
  split: train
- path: dev/pl.parquet
  split: dev
---

- # Dataset Card for STSb Multi MT
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
- - [Data Instances](#data-instances)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [Contributions](#contributions)
-
- ## Dataset Description

- - **Repository:** https://github.com/PhilipMay/stsb-multi-mt
- - **Homepage (original dataset):** https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark
- - **Paper about original dataset:** https://arxiv.org/abs/1708.00055
- - **Leaderboard:** https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark#Results
- - **Point of Contact:** [Open an issue on GitHub](https://github.com/PhilipMay/stsb-multi-mt/issues/new)

- ### Dataset Summary

- > STS Benchmark comprises a selection of the English datasets used in the STS tasks organized
- > in the context of SemEval between 2012 and 2017. The selection of datasets include text from
- > image captions, news headlines and user forums. ([source](https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark))

- These are multilingual translations and the English original of the [STSbenchmark dataset](https://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark). Translation was done with [deepl.com](https://www.deepl.com/). The data can be used to train [sentence embeddings](https://github.com/UKPLab/sentence-transformers) like [T-Systems-onsite/cross-en-de-roberta-sentence-transformer](https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer).

- **Examples of Use**
-
- Load the German dev split:
```python
- from datasets import load_dataset
- dataset = load_dataset("stsb_multi_mt", name="de", split="dev")
- ```

- Load the English train split:
- ```python
- from datasets import load_dataset
- dataset = load_dataset("stsb_multi_mt", name="en", split="train")
- ```

- ### Supported Tasks and Leaderboards

- [More Information Needed]

- ### Languages

- Available languages are: de, en, es, fr, it, nl, pl, pt, ru, zh

- ## Dataset Structure

- ### Data Instances

- This dataset provides pairs of sentences and a score of their similarity.

- score | example sentence pair | explanation
- ------|-----------------------|------------
- 5 | *The bird is bathing in the sink.<br/>Birdie is washing itself in the water basin.* | The two sentences are completely equivalent, as they mean the same thing.
- 4 | *Two boys on a couch are playing video games.<br/>Two boys are playing a video game.* | The two sentences are mostly equivalent, but some unimportant details differ.
- 3 | *John said he is considered a witness but not a suspect.<br/>“He is not a suspect anymore.” John said.* | The two sentences are roughly equivalent, but some important information differs or is missing.
- 2 | *They flew out of the nest in groups.<br/>They flew into the nest together.* | The two sentences are not equivalent, but share some details.
- 1 | *The woman is playing the violin.<br/>The young lady enjoys listening to the guitar.* | The two sentences are not equivalent, but are on the same topic.
- 0 | *The black dog is running through the snow.<br/>A race car driver is driving his car through the mud.* | The two sentences are completely dissimilar.

- An example:
- ```
- {
-   "sentence1": "A man is playing a large flute.",
-   "sentence2": "A man is playing a flute.",
-   "similarity_score": 3.8
}
```
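Training sentence-embedding models on these pairs usually rescales the gold score from the annotated [0, 5] scale into [0, 1], e.g. for a cosine-similarity loss. A minimal sketch of that rescaling, using the example record above (`normalize_score` is an illustrative helper, not part of the dataset tooling):

```python
def normalize_score(similarity_score: float) -> float:
    """Map a gold STS score from the [0, 5] annotation scale to [0, 1]."""
    return similarity_score / 5.0

example = {
    "sentence1": "A man is playing a large flute.",
    "sentence2": "A man is playing a flute.",
    "similarity_score": 3.8,
}

# Label suitable for a cosine-similarity training objective.
label = normalize_score(example["similarity_score"])  # 3.8 / 5.0 == 0.76
```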
- ### Data Fields
-
- - `sentence1`: The 1st sentence as a `str`.
- - `sentence2`: The 2nd sentence as a `str`.
- - `similarity_score`: The similarity score as a `float`, which is `>= 0.0` and `<= 5.0`.
-
- ### Data Splits
-
- - train with 5749 samples
- - dev with 1500 samples
- - test with 1379 samples
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?

- [More Information Needed]

- ### Annotations
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- See [LICENSE](https://github.com/PhilipMay/stsb-multi-mt/blob/main/LICENSE) and [download at original dataset](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark).

- ### Citation Information

```
- @InProceedings{huggingface:dataset:stsb_multi_mt,
-   title = {Machine translated multilingual STS benchmark dataset.},
-   author = {Philip May},
-   year = {2021},
-   url = {https://github.com/PhilipMay/stsb-multi-mt}
}
```

- ### Contributions

- Thanks to [@PhilipMay](https://github.com/PhilipMay) for adding this dataset.
 
 
---
annotations_creators:
+ - human-annotated
language_creators:
- crowdsourced
- found
- machine-generated
language:
+ - eng
+ - deu
+ - spa
+ - fra
+ - ita
+ - nld
+ - pol
+ - por
+ - rus
+ - cmn
+ license: unknown
+ multilinguality: translated
size_categories:
- 10K<n<100K
task_categories:
+ - sentence-similarity
+ task_ids: []
pretty_name: STSb Multi MT
configs:
- config_name: default
 
  split: train
- path: dev/pl.parquet
  split: dev
+ tags:
+ - mteb
+ - text
---
+ <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

+ <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
+ <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">STSBenchmarkMultilingualSTS</h1>
+ <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
+ <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
+ </div>
+ Semantic Textual Similarity Benchmark (STSbenchmark) dataset, translated using the DeepL API.

+ |               |                                             |
+ |---------------|---------------------------------------------|
+ | Task category | t2t                                         |
+ | Domains       | News, Social, Web, Spoken, Written          |
+ | Reference     | https://github.com/PhilipMay/stsb-multi-mt/ |
 
+ ## How to evaluate on this task

+ You can evaluate an embedding model on this dataset using the following code:

```python
+ import mteb

+ task = mteb.get_tasks(tasks=["STSBenchmarkMultilingualSTS"])
+ evaluator = mteb.MTEB(tasks=task)

+ model = mteb.get_model(YOUR_MODEL)
+ evaluator.run(model)
+ ```

+ <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
+ To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
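Under the hood, STS evaluation reduces to the Spearman correlation between a model's predicted similarities and the gold `similarity_score` labels. A self-contained sketch of that metric (a hand-rolled rank correlation for illustration, not mteb's actual implementation):

```python
import numpy as np

def _average_ranks(x: np.ndarray) -> np.ndarray:
    """1-based ranks, with tied values assigned the mean of their ranks."""
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for value in np.unique(x):
        tied = x == value
        ranks[tied] = ranks[tied].mean()
    return ranks

def spearman(gold, predicted) -> float:
    """Spearman correlation: Pearson correlation of the two rank vectors."""
    g = _average_ranks(np.asarray(gold, dtype=float))
    p = _average_ranks(np.asarray(predicted, dtype=float))
    return float(np.corrcoef(g, p)[0, 1])

# A perfectly monotone relationship between gold scores and predicted
# similarities scores close to 1.0; a reversed ordering close to -1.0.
score = spearman([0.0, 1.5, 3.8, 5.0], [0.1, 0.3, 0.7, 0.9])
```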
+ ## Citation

+ If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

+ ```bibtex
+ @inproceedings{huggingface:dataset:stsb_multi_mt,
+   author = {Philip May},
+   title = {Machine translated multilingual STS benchmark dataset.},
+   url = {https://github.com/PhilipMay/stsb-multi-mt},
+   year = {2021},
+ }

+ @article{enevoldsen2025mmtebmassivemultilingualtext,
+   title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
+   author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
+   publisher = {arXiv},
+   journal = {arXiv preprint arXiv:2502.13595},
+   year = {2025},
+   url = {https://arxiv.org/abs/2502.13595},
+   doi = {10.48550/arXiv.2502.13595},
+ }

+ @article{muennighoff2022mteb,
+   author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
+   title = {MTEB: Massive Text Embedding Benchmark},
+   publisher = {arXiv},
+   journal = {arXiv preprint arXiv:2210.07316},
+   year = {2022},
+   url = {https://arxiv.org/abs/2210.07316},
+   doi = {10.48550/ARXIV.2210.07316},
}
```
+ # Dataset Statistics
+ <details>
+   <summary>Dataset Statistics</summary>

+ The following JSON contains the descriptive statistics of the task. These can also be obtained using:

+ ```python
+ import mteb

+ task = mteb.get_task("STSBenchmarkMultilingualSTS")

+ desc_stats = task.metadata.descriptive_stats
```
+
+ ```json
+ {
+   "dev": {
+     "num_samples": 15000,
+     "number_of_characters": 1996110,
+     "unique_pairs": 14974,
+     "min_sentence1_length": 3,
+     "average_sentence1_len": 66.6904,
+     "max_sentence1_length": 274,
+     "unique_sentence1": 14676,
+     "min_sentence2_length": 3,
+     "average_sentence2_len": 66.3836,
+     "max_sentence2_length": 281,
+     "unique_sentence2": 14605,
+     "min_score": 0.0,
+     "avg_score": 2.3639075540602206,
+     "max_score": 5.0,
+     "hf_subset_descriptive_stats": {
+       "en": {
+         "num_samples": 1500,
+         "number_of_characters": 191955,
+         "unique_pairs": 1498,
+         "min_sentence1_length": 12,
+         "average_sentence1_len": 64.258,
+         "max_sentence1_length": 200,
+         "unique_sentence1": 1474,
+         "min_sentence2_length": 17,
+         "average_sentence2_len": 63.712,
+         "max_sentence2_length": 186,
+         "unique_sentence2": 1467,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "de": {
+         "num_samples": 1500,
+         "number_of_characters": 225853,
+         "unique_pairs": 1497,
+         "min_sentence1_length": 14,
+         "average_sentence1_len": 75.482,
+         "max_sentence1_length": 246,
+         "unique_sentence1": 1467,
+         "min_sentence2_length": 14,
+         "average_sentence2_len": 75.08666666666667,
+         "max_sentence2_length": 267,
+         "unique_sentence2": 1457,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "es": {
+         "num_samples": 1500,
+         "number_of_characters": 222932,
+         "unique_pairs": 1498,
+         "min_sentence1_length": 18,
+         "average_sentence1_len": 74.578,
+         "max_sentence1_length": 240,
+         "unique_sentence1": 1470,
+         "min_sentence2_length": 21,
+         "average_sentence2_len": 74.04333333333334,
+         "max_sentence2_length": 238,
+         "unique_sentence2": 1462,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "fr": {
+         "num_samples": 1500,
+         "number_of_characters": 230006,
+         "unique_pairs": 1497,
+         "min_sentence1_length": 15,
+         "average_sentence1_len": 76.81,
+         "max_sentence1_length": 260,
+         "unique_sentence1": 1465,
+         "min_sentence2_length": 12,
+         "average_sentence2_len": 76.52733333333333,
+         "max_sentence2_length": 244,
+         "unique_sentence2": 1459,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "it": {
+         "num_samples": 1500,
+         "number_of_characters": 223918,
+         "unique_pairs": 1498,
+         "min_sentence1_length": 16,
+         "average_sentence1_len": 74.784,
+         "max_sentence1_length": 257,
+         "unique_sentence1": 1469,
+         "min_sentence2_length": 19,
+         "average_sentence2_len": 74.49466666666666,
+         "max_sentence2_length": 238,
+         "unique_sentence2": 1463,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "nl": {
+         "num_samples": 1500,
+         "number_of_characters": 216574,
+         "unique_pairs": 1498,
+         "min_sentence1_length": 11,
+         "average_sentence1_len": 72.27533333333334,
+         "max_sentence1_length": 274,
+         "unique_sentence1": 1471,
+         "min_sentence2_length": 14,
+         "average_sentence2_len": 72.10733333333333,
+         "max_sentence2_length": 248,
+         "unique_sentence2": 1461,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "pl": {
+         "num_samples": 1500,
+         "number_of_characters": 202402,
+         "unique_pairs": 1498,
+         "min_sentence1_length": 12,
+         "average_sentence1_len": 67.58666666666667,
+         "max_sentence1_length": 251,
+         "unique_sentence1": 1466,
+         "min_sentence2_length": 12,
+         "average_sentence2_len": 67.348,
+         "max_sentence2_length": 238,
+         "unique_sentence2": 1460,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "pt": {
+         "num_samples": 1500,
+         "number_of_characters": 216388,
+         "unique_pairs": 1498,
+         "min_sentence1_length": 16,
+         "average_sentence1_len": 72.25933333333333,
+         "max_sentence1_length": 254,
+         "unique_sentence1": 1470,
+         "min_sentence2_length": 16,
+         "average_sentence2_len": 71.99933333333334,
+         "max_sentence2_length": 222,
+         "unique_sentence2": 1464,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "ru": {
+         "num_samples": 1500,
+         "number_of_characters": 203028,
+         "unique_pairs": 1495,
+         "min_sentence1_length": 13,
+         "average_sentence1_len": 67.802,
+         "max_sentence1_length": 261,
+         "unique_sentence1": 1464,
+         "min_sentence2_length": 10,
+         "average_sentence2_len": 67.55,
+         "max_sentence2_length": 281,
+         "unique_sentence2": 1454,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       },
+       "zh": {
+         "num_samples": 1500,
+         "number_of_characters": 63054,
+         "unique_pairs": 1497,
+         "min_sentence1_length": 3,
+         "average_sentence1_len": 21.068666666666665,
+         "max_sentence1_length": 95,
+         "unique_sentence1": 1466,
+         "min_sentence2_length": 3,
+         "average_sentence2_len": 20.967333333333332,
+         "max_sentence2_length": 83,
+         "unique_sentence2": 1459,
+         "min_score": 0.0,
+         "avg_score": 2.3639075540602206,
+         "max_score": 5.0
+       }
+     }
+   },
+   "test": {
+     "num_samples": 13790,
+     "number_of_characters": 1545886,
+     "unique_pairs": 13756,
+     "min_sentence1_length": 3,
+     "average_sentence1_len": 56.14786076867295,
+     "max_sentence1_length": 297,
+     "unique_sentence1": 12462,
+     "min_sentence2_length": 3,
+     "average_sentence2_len": 55.95409717186367,
+     "max_sentence2_length": 315,
+     "unique_sentence2": 13267,
+     "min_score": 0.0,
+     "avg_score": 2.6079166059890806,
+     "max_score": 5.0,
+     "hf_subset_descriptive_stats": {
+       "en": {
+         "num_samples": 1379,
+         "number_of_characters": 147873,
+         "unique_pairs": 1378,
+         "min_sentence1_length": 16,
+         "average_sentence1_len": 53.734590282813635,
+         "max_sentence1_length": 215,
+         "unique_sentence1": 1256,
+         "min_sentence2_length": 13,
+         "average_sentence2_len": 53.49746192893401,
+         "max_sentence2_length": 199,
+         "unique_sentence2": 1337,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "de": {
+         "num_samples": 1379,
+         "number_of_characters": 174195,
+         "unique_pairs": 1376,
+         "min_sentence1_length": 14,
+         "average_sentence1_len": 63.28426395939086,
+         "max_sentence1_length": 275,
+         "unique_sentence1": 1248,
+         "min_sentence2_length": 13,
+         "average_sentence2_len": 63.035532994923855,
+         "max_sentence2_length": 268,
+         "unique_sentence2": 1327,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "es": {
+         "num_samples": 1379,
+         "number_of_characters": 174677,
+         "unique_pairs": 1376,
+         "min_sentence1_length": 14,
+         "average_sentence1_len": 63.44379985496737,
+         "max_sentence1_length": 240,
+         "unique_sentence1": 1248,
+         "min_sentence2_length": 13,
+         "average_sentence2_len": 63.22552574329224,
+         "max_sentence2_length": 271,
+         "unique_sentence2": 1330,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "fr": {
+         "num_samples": 1379,
+         "number_of_characters": 179252,
+         "unique_pairs": 1374,
+         "min_sentence1_length": 14,
+         "average_sentence1_len": 64.99202320522117,
+         "max_sentence1_length": 265,
+         "unique_sentence1": 1244,
+         "min_sentence2_length": 12,
+         "average_sentence2_len": 64.99492385786802,
+         "max_sentence2_length": 258,
+         "unique_sentence2": 1322,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "it": {
+         "num_samples": 1379,
+         "number_of_characters": 174276,
+         "unique_pairs": 1375,
+         "min_sentence1_length": 11,
+         "average_sentence1_len": 63.370558375634516,
+         "max_sentence1_length": 297,
+         "unique_sentence1": 1249,
+         "min_sentence2_length": 11,
+         "average_sentence2_len": 63.00797679477883,
+         "max_sentence2_length": 315,
+         "unique_sentence2": 1330,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "nl": {
+         "num_samples": 1379,
+         "number_of_characters": 166173,
+         "unique_pairs": 1377,
+         "min_sentence1_length": 13,
+         "average_sentence1_len": 60.37490935460479,
+         "max_sentence1_length": 284,
+         "unique_sentence1": 1247,
+         "min_sentence2_length": 14,
+         "average_sentence2_len": 60.1276287164612,
+         "max_sentence2_length": 255,
+         "unique_sentence2": 1327,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "pl": {
+         "num_samples": 1379,
+         "number_of_characters": 156173,
+         "unique_pairs": 1375,
+         "min_sentence1_length": 11,
+         "average_sentence1_len": 56.66352429296592,
+         "max_sentence1_length": 245,
+         "unique_sentence1": 1243,
+         "min_sentence2_length": 9,
+         "average_sentence2_len": 56.58738216098622,
+         "max_sentence2_length": 224,
+         "unique_sentence2": 1325,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "pt": {
+         "num_samples": 1379,
+         "number_of_characters": 167773,
+         "unique_pairs": 1377,
+         "min_sentence1_length": 8,
+         "average_sentence1_len": 60.849166062364034,
+         "max_sentence1_length": 257,
+         "unique_sentence1": 1249,
+         "min_sentence2_length": 8,
+         "average_sentence2_len": 60.81363306744017,
+         "max_sentence2_length": 248,
+         "unique_sentence2": 1332,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "ru": {
+         "num_samples": 1379,
+         "number_of_characters": 156178,
+         "unique_pairs": 1376,
+         "min_sentence1_length": 10,
+         "average_sentence1_len": 56.80928208846991,
+         "max_sentence1_length": 263,
+         "unique_sentence1": 1240,
+         "min_sentence2_length": 10,
+         "average_sentence2_len": 56.44525018129079,
+         "max_sentence2_length": 269,
+         "unique_sentence2": 1321,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       },
+       "zh": {
+         "num_samples": 1379,
+         "number_of_characters": 49316,
+         "unique_pairs": 1372,
+         "min_sentence1_length": 3,
+         "average_sentence1_len": 17.956490210297318,
+         "max_sentence1_length": 96,
+         "unique_sentence1": 1242,
+         "min_sentence2_length": 3,
+         "average_sentence2_len": 17.80565627266135,
+         "max_sentence2_length": 131,
+         "unique_sentence2": 1320,
+         "min_score": 0.0,
+         "avg_score": 2.6079166059890806,
+         "max_score": 5.0
+       }
+     }
+   }
}
```
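The headline numbers above can be reproduced from the raw records. A rough sketch of how a few of these fields are derived (character lengths and score aggregates; `pair_stats` is an illustrative helper, not part of mteb):

```python
def pair_stats(pairs):
    """Compute a handful of the card's descriptive statistics for a list of
    {"sentence1", "sentence2", "similarity_score"} records."""
    s1_lens = [len(p["sentence1"]) for p in pairs]
    s2_lens = [len(p["sentence2"]) for p in pairs]
    scores = [p["similarity_score"] for p in pairs]
    return {
        "num_samples": len(pairs),
        "number_of_characters": sum(s1_lens) + sum(s2_lens),
        "min_sentence1_length": min(s1_lens),
        "average_sentence1_len": sum(s1_lens) / len(pairs),
        "max_sentence1_length": max(s1_lens),
        "min_score": min(scores),
        "avg_score": sum(scores) / len(pairs),
        "max_score": max(scores),
    }

# Toy records in the dataset's schema, for illustration only.
sample = [
    {"sentence1": "A man is playing a large flute.",
     "sentence2": "A man is playing a flute.",
     "similarity_score": 3.8},
    {"sentence1": "The bird is bathing in the sink.",
     "sentence2": "Birdie is washing itself in the water basin.",
     "similarity_score": 5.0},
]
stats = pair_stats(sample)
```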
+ </details>

+ ---
+ *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*