Tasks: Text Retrieval
Modalities: Text
Formats: parquet
Sub-tasks: multiple-choice-qa
Languages: Danish
Size: < 1K
License: cc-by-4.0
Add dataset card
README.md CHANGED

@@ -5,6 +5,8 @@ language:
 - dan
 license: cc-by-4.0
 multilinguality: monolingual
+source_datasets:
+- sorenmulli/da-hashtag-twitterhjerne
 task_categories:
 - text-retrieval
 task_ids:

@@ -84,6 +86,8 @@ Danish question asked on Twitter with the Hashtag #Twitterhjerne ('Twitter brain
 | Reference | https://huggingface.co/datasets/sorenmulli/da-hashtag-twitterhjerne |
 
 
+
+
 ## How to evaluate on this task
 
 You can evaluate an embedding model on this dataset using the following code:

@@ -91,15 +95,15 @@ You can evaluate an embedding model on this dataset using the following code:
 ```python
 import mteb
 
-task = mteb.
-evaluator = mteb.MTEB(
+task = mteb.get_tasks(["TwitterHjerneRetrieval"])
+evaluator = mteb.MTEB(task)
 
 model = mteb.get_model(YOUR_MODEL)
 evaluator.run(model)
 ```
 
 <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
-To learn more about how to run models on `mteb` task check out the [GitHub
+To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
 
 ## Citation

@@ -125,7 +129,7 @@ If you use this dataset, please cite the dataset as well as [mteb](https://githu
 }
 
 @article{muennighoff2022mteb,
-  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne,
+  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
   title = {MTEB: Massive Text Embedding Benchmark},
   publisher = {arXiv},
   journal={arXiv preprint arXiv:2210.07316},
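The snippet added in the diff leaves `YOUR_MODEL` as a placeholder. Below is a minimal runnable sketch, assuming a recent `mteb` release: the task name `TwitterHjerneRetrieval` comes from the diff above, while the model checkpoint and the output folder are illustrative assumptions, not something the card prescribes.

```python
import mteb

# Select the task registered for this dataset (name taken from the diff above).
tasks = mteb.get_tasks(tasks=["TwitterHjerneRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# Illustrative model choice (an assumption): any embedding model that
# mteb.get_model can resolve by name should work here.
model = mteb.get_model("intfloat/multilingual-e5-small")

# Run the evaluation; scores are written as JSON files under output_folder
# (the path is an assumption, not part of the card).
evaluator.run(model, output_folder="results")
```

For model names it does not recognize, `mteb.get_model` should fall back to loading the string as a sentence-transformers checkpoint, so most embedding models on the Hub can be plugged in the same way.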