Modalities: Text
Formats: parquet
Languages: Polish
Libraries: Datasets, Dask
License: other
Samoed committed on
Commit 4f85bf5 · verified · 1 parent: 7c4565a

Add dataset card

Files changed (1): README.md (+12 -4)
README.md CHANGED
@@ -7,10 +7,14 @@ license: other
 multilinguality: translated
 source_datasets:
 - mteb/msmarco
+- mteb/MSMARCO-PL
 task_categories:
 - text-retrieval
+- multiple-choice-qa
+- question-answering
 task_ids:
 - multiple-choice-qa
+- question-answering
 dataset_info:
 - config_name: corpus
   features:
@@ -103,6 +107,10 @@ MS MARCO is a collection of datasets focused on deep learning in search
 | Domains | Web, Written |
 | Reference | https://microsoft.github.io/msmarco/ |
 
+Source datasets:
+- [mteb/msmarco](https://huggingface.co/datasets/mteb/msmarco)
+- [mteb/MSMARCO-PL](https://huggingface.co/datasets/mteb/MSMARCO-PL)
+
 
 ## How to evaluate on this task
 
@@ -111,15 +119,15 @@ You can evaluate an embedding model on this dataset using the following code:
 ```python
 import mteb
 
-task = mteb.get_tasks(["MSMARCO-PL"])
-evaluator = mteb.MTEB(task)
+task = mteb.get_task("MSMARCO-PL")
+evaluator = mteb.MTEB([task])
 
 model = mteb.get_model(YOUR_MODEL)
 evaluator.run(model)
 ```
 
 <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
-To learn more about how to run models on `mteb` task check out the [GitHub repitory](https://github.com/embeddings-benchmark/mteb).
+To learn more about how to run models on `mteb` task check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
 
 ## Citation
 
@@ -148,7 +156,7 @@ If you use this dataset, please cite the dataset as well as [mteb](https://githu
 }
 
 @article{muennighoff2022mteb,
-  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
+  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
   title = {MTEB: Massive Text Embedding Benchmark},
   publisher = {arXiv},
   journal = {arXiv preprint arXiv:2210.07316},
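For context on the evaluation snippet changed above: MTEB reports retrieval tasks such as MSMARCO-PL primarily via nDCG@10. A minimal stdlib sketch of that metric, assuming binary relevance judgments (the function names here are illustrative, not part of the `mteb` API):

```python
import math

def dcg(relevances):
    # Discounted cumulative gain over a ranked list of relevance grades:
    # rank 0 is discounted by log2(2), rank 1 by log2(3), and so on.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_k(ranked_rels, k=10):
    # nDCG@k: DCG of the top-k ranking divided by the DCG of the
    # ideal (relevance-sorted) ranking, so a perfect ranking scores 1.0.
    ideal_dcg = dcg(sorted(ranked_rels, reverse=True)[:k])
    if ideal_dcg == 0:
        return 0.0
    return dcg(ranked_rels[:k]) / ideal_dcg

# A query whose single relevant document is ranked second:
print(ndcg_at_k([0, 1, 0, 0], k=10))  # ≈ 0.631
```

The log discount is what makes the metric rank-sensitive: placing the relevant passage first scores 1.0, while pushing it down the list decays the score smoothly.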