Samoed committed · commit 4c58ad0 · verified · 1 parent: c4e50fe

Add dataset card

Files changed (1): README.md (+20 −7)
README.md CHANGED

@@ -5,9 +5,12 @@ language:
 - jpn
 license: cc-by-4.0
 multilinguality: monolingual
+source_datasets:
+- sbintuitions/JMTEB
 task_categories:
 - text-retrieval
-task_ids: []
+task_ids:
+- document-retrieval
 dataset_info:
 - config_name: corpus
   features:
@@ -74,13 +77,16 @@ tags:
 <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
 </div>
 
-This dataset was created from the Japanese NLP Journal LaTeX Corpus. The titles, abstracts and introductions of the academic papers were shuffled. The goal is to find the corresponding introduction with the given title.
+This dataset was created from the Japanese NLP Journal LaTeX Corpus. The titles, abstracts and introductions of the academic papers were shuffled. The goal is to find the corresponding introduction with the given title. This is the V1 dataset (last updated 2020-06-15).
 
 | | |
 |---------------|---------------------------------------------|
 | Task category | t2t |
 | Domains | Academic, Written |
-| Reference | https://github.com/sbintuitions/JMTEB |
+| Reference | https://huggingface.co/datasets/sbintuitions/JMTEB |
+
+Source datasets:
+- [sbintuitions/JMTEB](https://huggingface.co/datasets/sbintuitions/JMTEB)
 
 
 ## How to evaluate on this task
@@ -90,15 +96,15 @@ You can evaluate an embedding model on this dataset using the following code:
 ```python
 import mteb
 
-task = mteb.get_tasks(["NLPJournalTitleIntroRetrieval"])
-evaluator = mteb.MTEB(task)
+task = mteb.get_task("NLPJournalTitleIntroRetrieval")
+evaluator = mteb.MTEB([task])
 
 model = mteb.get_model(YOUR_MODEL)
 evaluator.run(model)
 ```
 
 <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
-To learn more about how to run models on `mteb` task check out the [GitHub repitory](https://github.com/embeddings-benchmark/mteb).
+To learn more about how to run models on `mteb` task check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
 
 ## Citation
 
@@ -106,6 +112,13 @@ If you use this dataset, please cite the dataset as well as [mteb](https://githu
 
 ```bibtex
 
+@misc{jmteb,
+    author = {Li, Shengzhe and Ohagi, Masaya and Ri, Ryokan},
+    howpublished = {\url{https://huggingface.co/datasets/sbintuitions/JMTEB}},
+    title = {{J}{M}{T}{E}{B}: {J}apanese {M}assive {T}ext {E}mbedding {B}enchmark},
+    year = {2024},
+}
+
 
 @article{enevoldsen2025mmtebmassivemultilingualtext,
 title={MMTEB: Massive Multilingual Text Embedding Benchmark},
@@ -118,7 +131,7 @@ If you use this dataset, please cite the dataset as well as [mteb](https://githu
 }
 
 @article{muennighoff2022mteb,
-author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
+author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
 title = {MTEB: Massive Text Embedding Benchmark},
 publisher = {arXiv},
 journal={arXiv preprint arXiv:2210.07316},
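The card frames the task as matching each title to its shuffled introduction. As a rough illustration of that retrieval setup (the title/introduction pairs below are made up, and the bag-of-words encoder is only a stand-in for a real embedding model), a minimal sketch:

```python
import math
from collections import Counter

def embed(text):
    # Toy L2-normalized bag-of-words vector; a stand-in for a real text encoder.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def retrieve(title, introductions):
    # Rank every introduction by similarity to the title; return the best index.
    query = embed(title)
    vecs = [embed(intro) for intro in introductions]
    return max(range(len(vecs)), key=lambda i: cosine(query, vecs[i]))

# Hypothetical pairs standing in for the shuffled corpus.
intros = [
    "In this paper we study neural machine translation for low-resource pairs",
    "Morphological analysis of Japanese text is a core preprocessing step",
]
print(retrieve("Neural Machine Translation", intros))       # -> 0
print(retrieve("Japanese Morphological Analysis", intros))  # -> 1
```

An embedding model evaluated via `mteb` plays the role of `embed` here, and the benchmark scores the full ranking rather than only the top hit.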