Commit f2285ae (verified) by Samoed · parent: 1b7fed4 · Add dataset card (README.md, +877 −0)
---
language:
- cmn
multilinguality: monolingual
task_categories:
- text-classification
task_ids: []
dataset_info:
  features:
  - name: text

    path: data/train-*
  - split: validation
    path: data/validation-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">IFlyTek</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

Long text classification of Chinese app descriptions.

| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | None |
| Reference | https://www.cluebenchmarks.com/introduce.html |


## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

task = mteb.get_tasks(["IFlyTek"])
evaluator = mteb.MTEB(task)

model = mteb.get_model(YOUR_MODEL)  # replace YOUR_MODEL with your model's name
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
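
For intuition, `mteb` scores classification tasks by embedding every text with the model under test and then fitting a lightweight classifier (typically a logistic-regression probe) on those frozen embeddings. The sketch below illustrates the idea with a toy nearest-centroid scorer over hand-written 2-d "embeddings" — it is not the actual `mteb` pipeline, and all names and data in it are hypothetical:

```python
import math
from collections import defaultdict

def nearest_centroid_accuracy(train, test):
    """Toy embedding-classification scorer: predict the label whose class
    centroid is closest in Euclidean distance. Illustrative only; mteb
    itself fits a classifier such as logistic regression on embeddings."""
    by_label = defaultdict(list)
    for vec, label in train:
        by_label[label].append(vec)
    # Mean of each dimension per class.
    centroids = {
        label: [sum(dim) / len(vecs) for dim in zip(*vecs)]
        for label, vecs in by_label.items()
    }

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    correct = sum(
        min(centroids, key=lambda lb: dist(vec, centroids[lb])) == label
        for vec, label in test
    )
    return correct / len(test)

# Hypothetical 2-d "embeddings" standing in for real model outputs.
train = [((0.0, 0.1), "news"), ((0.1, 0.0), "news"), ((1.0, 0.9), "games")]
test = [((0.05, 0.05), "news"), ((0.95, 1.0), "games")]
print(nearest_centroid_accuracy(train, test))  # 1.0
```

With a real model, the train split's embeddings take the place of the toy tuples and the validation split is scored the same way.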

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@inproceedings{xu-etal-2020-clue,
  abstract = {The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE allows new NLU models to be evaluated across a diverse set of tasks. These comprehensive benchmarks have facilitated a broad range of research and applications in natural language processing (NLP). The problem, however, is that most such benchmarks are limited to English, which has made it difficult to replicate many of the successes in English NLU for other languages. To help remedy this issue, we introduce the first large-scale Chinese Language Understanding Evaluation (CLUE) benchmark. CLUE is an open-ended, community-driven project that brings together 9 tasks spanning several well-established single-sentence/sentence-pair classification tasks, as well as machine reading comprehension, all on original Chinese text. To establish results on these tasks, we report scores using an exhaustive set of current state-of-the-art pre-trained Chinese models (9 in total). We also introduce a number of supplementary datasets and additional tools to help facilitate further progress on Chinese NLU. Our benchmark is released at https://www.cluebenchmarks.com},
  address = {Barcelona, Spain (Online)},
  author = {Xu, Liang and Hu, Hai and Zhang, Xuanwei and Li, Lu and Cao, Chenjie and Li, Yudong and Xu, Yechen and Sun, Kai and Yu, Dian and Yu, Cong and Tian, Yin and Dong, Qianqian and Liu, Weitang and Shi, Bo and Cui, Yiming and Li, Junyi and Zeng, Jun and Wang, Rongzhao and Xie, Weijian and Li, Yanting and Patterson, Yina and Tian, Zuoyu and Zhang, Yiwen and Zhou, He and Liu, Shaoweihua and Zhao, Zhe and Zhao, Qipeng and Yue, Cong and Zhang, Xinrui and Yang, Zhengliang and Richardson, Kyle and Lan, Zhenzhong},
  booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
  doi = {10.18653/v1/2020.coling-main.419},
  month = dec,
  pages = {4762--4772},
  publisher = {International Committee on Computational Linguistics},
  title = {{CLUE}: A {C}hinese Language Understanding Evaluation Benchmark},
  url = {https://aclanthology.org/2020.coling-main.419},
  year = {2020},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics
<details>
<summary>Dataset Statistics</summary>

The following JSON contains the descriptive statistics for the task. They can also be obtained using:

```python
import mteb

task = mteb.get_task("IFlyTek")

desc_stats = task.metadata.descriptive_stats
```

```json
{
  "validation": {
    "num_samples": 2599,
    "number_of_characters": 753272,
    "number_texts_intersect_with_train": 270,
    "min_text_length": 11,
    "average_text_length": 289.8314736437091,
    "max_text_length": 1755,
    "unique_text": 2549,
    "unique_labels": 119,
    "labels": {
      "110": {"count": 3},
      "70": {"count": 388},
      "10": {"count": 22},
      "18": {"count": 79},
      "17": {"count": 192},
      "34": {"count": 36},
      "71": {"count": 123},
      "104": {"count": 4},
      "49": {"count": 38},
      "20": {"count": 53},
      "44": {"count": 7},
      "24": {"count": 27},
      "95": {"count": 79},
      "21": {"count": 56},
      "66": {"count": 2},
      "83": {"count": 7},
      "94": {"count": 25},
      "19": {"count": 36},
      "46": {"count": 52},
      "96": {"count": 51},
      "113": {"count": 32},
      "36": {"count": 54},
      "87": {"count": 6},
      "106": {"count": 68},
      "62": {"count": 9},
      "98": {"count": 8},
      "22": {"count": 35},
      "45": {"count": 15},
      "13": {"count": 24},
      "28": {"count": 49},
      "15": {"count": 9},
      "82": {"count": 19},
      "4": {"count": 37},
      "102": {"count": 14},
      "88": {"count": 4},
      "25": {"count": 36},
      "91": {"count": 23},
      "48": {"count": 36},
      "74": {"count": 6},
      "53": {"count": 97},
      "57": {"count": 7},
      "11": {"count": 21},
      "103": {"count": 16},
      "111": {"count": 35},
      "56": {"count": 40},
      "58": {"count": 14},
      "27": {"count": 4},
      "1": {"count": 10},
      "16": {"count": 42},
      "9": {"count": 29},
      "99": {"count": 20},
      "47": {"count": 8},
      "35": {"count": 14},
      "61": {"count": 9},
      "101": {"count": 14},
      "72": {"count": 6},
      "41": {"count": 5},
      "8": {"count": 29},
      "84": {"count": 8},
      "69": {"count": 3},
      "114": {"count": 4},
      "12": {"count": 17},
      "54": {"count": 23},
      "92": {"count": 8},
      "118": {"count": 18},
      "42": {"count": 6},
      "97": {"count": 24},
      "100": {"count": 9},
      "29": {"count": 9},
      "117": {"count": 2},
      "23": {"count": 11},
      "59": {"count": 16},
      "81": {"count": 6},
      "14": {"count": 5},
      "116": {"count": 22},
      "52": {"count": 1},
      "63": {"count": 6},
      "43": {"count": 3},
      "85": {"count": 15},
      "80": {"count": 5},
      "79": {"count": 1},
      "77": {"count": 8},
      "93": {"count": 8},
      "65": {"count": 3},
      "7": {"count": 6},
      "75": {"count": 10},
      "78": {"count": 9},
      "55": {"count": 5},
      "3": {"count": 4},
      "26": {"count": 17},
      "67": {"count": 3},
      "115": {"count": 6},
      "112": {"count": 4},
      "89": {"count": 2},
      "90": {"count": 3},
      "33": {"count": 8},
      "60": {"count": 9},
      "50": {"count": 5},
      "37": {"count": 3},
      "73": {"count": 6},
      "68": {"count": 2},
      "39": {"count": 5},
      "51": {"count": 4},
      "76": {"count": 5},
      "32": {"count": 4},
      "64": {"count": 6},
      "107": {"count": 3},
      "30": {"count": 5},
      "31": {"count": 4},
      "108": {"count": 4},
      "40": {"count": 2},
      "5": {"count": 4},
      "109": {"count": 1},
      "86": {"count": 3},
      "38": {"count": 6},
      "2": {"count": 5},
      "105": {"count": 4},
      "0": {"count": 5},
      "6": {"count": 2}
    }
  },
  "train": {
    "num_samples": 12133,
    "number_of_characters": 3506882,
    "number_texts_intersect_with_train": null,
    "min_text_length": 10,
    "average_text_length": 289.0366768317811,
    "max_text_length": 4282,
    "unique_text": 11425,
    "unique_labels": 119,
    "labels": {
      "11": {"count": 76},
      "95": {"count": 375},
      "74": {"count": 22},
      "70": {"count": 1980},
      "58": {"count": 58},
      "25": {"count": 135},
      "54": {"count": 121},
      "34": {"count": 240},
      "71": {"count": 506},
      "12": {"count": 102},
      "49": {"count": 138},
      "24": {"count": 163},
      "19": {"count": 169},
      "18": {"count": 364},
      "17": {"count": 952},
      "53": {"count": 369},
      "4": {"count": 129},
      "99": {"count": 116},
      "20": {"count": 264},
      "118": {"count": 111},
      "108": {"count": 10},
      "113": {"count": 135},
      "94": {"count": 108},
      "28": {"count": 204},
      "48": {"count": 143},
      "96": {"count": 210},
      "116": {"count": 114},
      "23": {"count": 25},
      "22": {"count": 173},
      "21": {"count": 280},
      "102": {"count": 112},
      "13": {"count": 142},
      "97": {"count": 115},
      "56": {"count": 149},
      "1": {"count": 37},
      "46": {"count": 237},
      "36": {"count": 253},
      "83": {"count": 36},
      "111": {"count": 156},
      "30": {"count": 11},
      "82": {"count": 86},
      "42": {"count": 15},
      "16": {"count": 180},
      "117": {"count": 23},
      "0": {"count": 20},
      "72": {"count": 34},
      "90": {"count": 39},
      "47": {"count": 46},
      "35": {"count": 42},
      "98": {"count": 37},
      "81": {"count": 26},
      "9": {"count": 111},
      "59": {"count": 82},
      "92": {"count": 54},
      "91": {"count": 99},
      "100": {"count": 28},
      "79": {"count": 23},
      "10": {"count": 74},
      "29": {"count": 27},
      "8": {"count": 125},
      "110": {"count": 27},
      "45": {"count": 46},
      "103": {"count": 71},
      "5": {"count": 44},
      "88": {"count": 44},
      "66": {"count": 11},
      "101": {"count": 69},
      "3": {"count": 20},
      "43": {"count": 13},
      "39": {"count": 17},
      "60": {"count": 40},
      "14": {"count": 53},
      "62": {"count": 42},
      "89": {"count": 5},
      "106": {"count": 263},
      "41": {"count": 21},
      "85": {"count": 84},
      "105": {"count": 24},
      "38": {"count": 40},
      "31": {"count": 43},
      "107": {"count": 22},
      "78": {"count": 52},
      "76": {"count": 31},
      "104": {"count": 31},
      "26": {"count": 58},
      "73": {"count": 42},
      "84": {"count": 43},
      "50": {"count": 30},
      "44": {"count": 44},
      "65": {"count": 19},
      "114": {"count": 13},
      "40": {"count": 20},
      "61": {"count": 29},
      "7": {"count": 14},
      "112": {"count": 27},
      "2": {"count": 33},
      "115": {"count": 32},
      "75": {"count": 35},
      "33": {"count": 18},
      "37": {"count": 21},
      "52": {"count": 11},
      "93": {"count": 26},
      "80": {"count": 28},
      "87": {"count": 23},
      "51": {"count": 15},
      "77": {"count": 36},
      "27": {"count": 22},
      "15": {"count": 30},
      "109": {"count": 20},
      "64": {"count": 24},
      "63": {"count": 26},
      "55": {"count": 14},
      "32": {"count": 17},
      "86": {"count": 9},
      "67": {"count": 7},
      "57": {"count": 16},
      "6": {"count": 3},
      "69": {"count": 3},
      "68": {"count": 1}
    }
  }
}
```

</details>
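
Per-split statistics of this shape can be recomputed from the raw rows with the standard library alone. The sketch below assumes rows shaped like the dataset's `text`/`label` columns and uses made-up toy data:

```python
from collections import Counter

def descriptive_stats(rows):
    """Recompute a subset of the statistics above for a list of
    {"text": ..., "label": ...} rows (field names assumed to match
    the dataset's columns)."""
    lengths = [len(r["text"]) for r in rows]
    return {
        "num_samples": len(rows),
        "number_of_characters": sum(lengths),
        "min_text_length": min(lengths),
        "average_text_length": sum(lengths) / len(lengths),
        "max_text_length": max(lengths),
        "unique_text": len({r["text"] for r in rows}),
        "unique_labels": len({r["label"] for r in rows}),
        "labels": {str(lb): {"count": n}
                   for lb, n in Counter(r["label"] for r in rows).items()},
    }

# Toy rows; real texts are app descriptions with integer category labels.
rows = [
    {"text": "一款休闲益智小游戏", "label": 70},
    {"text": "新闻资讯类应用", "label": 17},
    {"text": "新闻资讯类应用", "label": 17},
]
stats = descriptive_stats(rows)
print(stats["num_samples"], stats["unique_text"], stats["labels"]["17"]["count"])
# 3 2 2
```

Running this over the actual train and validation splits would reproduce the per-label counts listed above.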

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*