Samoed committed · Commit fce37a9 · verified · 1 parent: 5bac629

Add dataset card

Files changed (1): README.md (+141, -0)
---
annotations_creators:
- human-annotated
language:
- jpn
license: cc-by-sa-4.0
multilinguality: monolingual
task_categories:
- sentence-similarity
task_ids: []
dataset_info:
  features:
  - name: sentence1
# … (unchanged dataset_info and configs entries elided in the diff view)
    path: data/train-*
  - split: validation
    path: data/validation-*
tags:
- mteb
- text
---

<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">JSTS</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

A Japanese Semantic Textual Similarity benchmark dataset, constructed from the YJ Image Captions Dataset (Miyazaki and Shimizu, 2016) and annotated by crowdsourced annotators.

|                |                                                        |
|----------------|--------------------------------------------------------|
| Task category  | t2t                                                    |
| Domains        | Web, Written                                           |
| Reference      | https://aclanthology.org/2022.lrec-1.317.pdf#page=2.00 |
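
To get a quick feel for the sentence pairs and their similarity scores, you can load the data directly with the `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hub as `mteb/JSTS` and exposes `sentence1`, `sentence2`, and `score` columns (the first two appear in the frontmatter; the score column name is inferred from the statistics below):

```python
from datasets import load_dataset

# Assumption: the dataset id on the Hub; adjust if the card lives elsewhere.
ds = load_dataset("mteb/JSTS", split="validation")

example = ds[0]
# Each example is a sentence pair with a human-annotated similarity score;
# the statistics below suggest scores range from 0.0 to 5.0.
print(example["sentence1"])
print(example["sentence2"])
print(example["score"])
```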

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

# get_tasks expects the task names via the `tasks` keyword argument
tasks = mteb.get_tasks(tasks=["JSTS"])
evaluator = mteb.MTEB(tasks=tasks)

model = mteb.get_model(YOUR_MODEL)  # YOUR_MODEL: the name of your embedding model
evaluator.run(model)
```
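
Under the hood, MTEB typically scores STS tasks like this one by correlating the cosine similarity of each pair's embeddings with the human scores, with Spearman correlation as the main metric. A minimal sketch of that idea, assuming `sentence-transformers` and `scipy` are installed, reusing `ds` from the loading sketch above, and using a placeholder model id:

```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

# Placeholder model id; substitute any Japanese-capable embedding model.
model = SentenceTransformer("intfloat/multilingual-e5-small")

gold = ds["score"]
emb1 = model.encode(ds["sentence1"], normalize_embeddings=True)
emb2 = model.encode(ds["sentence2"], normalize_embeddings=True)

# Cosine similarity of L2-normalized vectors is just their dot product.
cos = (emb1 * emb2).sum(axis=1)

print(spearmanr(cos, gold).correlation)
```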

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@inproceedings{kurihara-etal-2022-jglue,
  abstract = {To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.},
  address = {Marseille, France},
  author = {Kurihara, Kentaro and
    Kawahara, Daisuke and
    Shibata, Tomohide},
  booktitle = {Proceedings of the Thirteenth Language Resources and Evaluation Conference},
  editor = {Calzolari, Nicoletta and
    B{\'e}chet, Fr{\'e}d{\'e}ric and
    Blache, Philippe and
    Choukri, Khalid and
    Cieri, Christopher and
    Declerck, Thierry and
    Goggi, Sara and
    Isahara, Hitoshi and
    Maegaard, Bente and
    Mariani, Joseph and
    Mazo, H{\'e}l{\`e}ne and
    Odijk, Jan and
    Piperidis, Stelios},
  month = jun,
  pages = {2957--2966},
  publisher = {European Language Resources Association},
  title = {{JGLUE}: {J}apanese General Language Understanding Evaluation},
  url = {https://aclanthology.org/2022.lrec-1.317},
  year = {2022},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics

<details>
  <summary>Dataset Statistics</summary>

The following JSON contains the descriptive statistics for the task. These can also be obtained using:

```python
import mteb

task = mteb.get_task("JSTS")

desc_stats = task.metadata.descriptive_stats
```

```json
{
    "validation": {
        "num_samples": 1457,
        "number_of_characters": 67518,
        "unique_pairs": 1456,
        "min_sentence1_length": 12,
        "average_sentence1_len": 23.3452299245024,
        "max_sentence1_length": 79,
        "unique_sentence1": 1403,
        "min_sentence2_length": 8,
        "average_sentence2_len": 22.99519560741249,
        "max_sentence2_length": 77,
        "unique_sentence2": 1434,
        "min_score": 0.0,
        "avg_score": 2.2719286174379807,
        "max_score": 5.0
    }
}
```
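
The reported lengths appear to be character counts, so a few of these numbers can be sanity-checked directly from the raw data. A small sketch, reusing `ds` (the validation split) from the loading example above:

```python
lengths = [len(s) for s in ds["sentence1"]]

print(len(ds))                        # num_samples
print(min(lengths), max(lengths))     # min/max sentence1 length
print(sum(lengths) / len(lengths))    # average_sentence1_len
```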

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*