Datasets: mteb
Modalities: Text
Formats: json
Libraries: Datasets, Dask

Samoed committed on commit 604b12c (verified) · 1 parent: 3912ed9

Add dataset card

Files changed (1): README.md (+121 −61)
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 annotations_creators:
- - no-annotation
 language_creators:
 - expert-generated
 language:
@@ -104,7 +104,6 @@ language:
 - bki
 - bkq
 - bkx
- - bla
 - blw
 - blz
 - bmh
@@ -343,7 +342,6 @@ language:
 - kiw
 - kiz
 - kje
- - kjn
 - kjs
 - kkc
 - kkl
@@ -503,7 +501,6 @@ language:
 - naf
 - nak
 - nas
- - nay
 - nbq
 - nca
 - nch
@@ -529,7 +526,6 @@ language:
 - nko
 - nld
 - nlg
- - nmw
 - nna
 - nnq
 - noa
@@ -676,7 +672,6 @@ language:
 - tbc
 - tbf
 - tbg
- - tbl
 - tbo
 - tbz
 - tca
@@ -694,7 +689,6 @@ language:
 - tgo
 - tgp
 - tha
- - thd
 - tif
 - tim
 - tiw
@@ -839,41 +833,16 @@ language:
 - ztq
 - zty
 - zyp
- - be
- - br
- - cs
- - ch
- - zh
- - de
- - en
- - eo
- - fr
- - ht
- - he
- - hr
- - id
- - it
- - ja
- - la
- - nl
- - ru
- - sa
- - so
- - es
- - sr
- - sv
- - to
- - uk
- - vi
- license:
- - cc-by-4.0
- - other
- multilinguality:
- - translation
- - multilingual
- pretty_name: biblenlp-corpus-mmteb
 size_categories:
 - 1M<n<10M
 configs:
 - config_name: default
 data_files:
@@ -7507,30 +7476,121 @@ configs:
 split: test
 - path: validation/eng_Latn-kde_Latn.jsonl.gz
 split: validation
 ---
- This dataset pre-computes all English-centric directions from [bible-nlp/biblenlp-corpus](https://huggingface.co/datasets/bible-nlp/biblenlp-corpus), and as a result loading is significantly faster.

- Loading example:
 ```python
- >>> from datasets import load_dataset
- >>> dataset = load_dataset("davidstap/biblenlp-corpus-mmteb", "eng-arb", trust_remote_code=True)
- >>> dataset
- DatasetDict({
-     train: Dataset({
-         features: ['eng', 'arb'],
-         num_rows: 28723
-     })
-     validation: Dataset({
-         features: ['eng', 'arb'],
-         num_rows: 1578
-     })
-     test: Dataset({
-         features: ['eng', 'arb'],
-         num_rows: 1551
-     })
- })
 ```

- Note that in all possible configurations, `eng` comes before the other language.
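Because `eng` always comes first, the config name for any English-centric pair can be derived mechanically. A minimal stdlib sketch; the helper name `biblenlp_config` is ours, for illustration only:

```python
# Build the config name for an English-centric pair. Per the note above,
# "eng" always precedes the other language code in every configuration.
def biblenlp_config(other_lang: str) -> str:
    """Return the dataset config name pairing English with `other_lang` (ISO 639-3)."""
    return f"eng-{other_lang}"

# Usage (download not shown):
#   load_dataset("davidstap/biblenlp-corpus-mmteb", biblenlp_config("arb"), trust_remote_code=True)
print(biblenlp_config("arb"))  # eng-arb
```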
 ---
 annotations_creators:
+ - expert-annotated
 language_creators:
 - expert-generated
 language:

 - bki
 - bkq
 - bkx

 - blw
 - blz
 - bmh

 - kiw
 - kiz
 - kje

 - kjs
 - kkc
 - kkl

 - naf
 - nak
 - nas

 - nbq
 - nca
 - nch

 - nko
 - nld
 - nlg

 - nna
 - nnq
 - noa

 - tbc
 - tbf
 - tbg

 - tbo
 - tbz
 - tca

 - tgo
 - tgp
 - tha

 - tif
 - tim
 - tiw

 - ztq
 - zty
 - zyp
+ license: cc-by-sa-4.0
+ multilinguality: multilingual
 size_categories:
 - 1M<n<10M
+ source_datasets:
+ - davidstap/biblenlp-corpus-mmteb
+ task_categories:
+ - translation
+ task_ids: []
+ pretty_name: biblenlp-corpus-mmteb
 configs:
 - config_name: default
 data_files:

 split: test
 - path: validation/eng_Latn-kde_Latn.jsonl.gz
 split: validation
+ tags:
+ - mteb
+ - text
 ---
+ <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
+
+ <div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
+ <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">BibleNLPBitextMining</h1>
+ <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
+ <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
+ </div>
+
+ Partial Bible translations in 829 languages, aligned by verse.
+
+ | | |
+ |---------------|---------------------------------------------|
+ | Task category | t2t |
+ | Domains | Religious, Written |
+ | Reference | https://arxiv.org/abs/2304.09919 |
+
+ Source datasets:
+ - [davidstap/biblenlp-corpus-mmteb](https://huggingface.co/datasets/davidstap/biblenlp-corpus-mmteb)
+
+ ## How to evaluate on this task
+
+ You can evaluate an embedding model on this dataset using the following code:
+
+ ```python
+ import mteb
+
+ task = mteb.get_task("BibleNLPBitextMining")
+ evaluator = mteb.MTEB([task])
+
+ model = mteb.get_model(YOUR_MODEL)
+ evaluator.run(model)
+ ```
+ <!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
+ To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
+
+ ## Citation
+
+ If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
+
+ ```bibtex
+ @article{akerman2023ebible,
+   author = {Akerman, Vesa and Baines, David and Daspit, Damien and Hermjakob, Ulf and Jang, Taeho and Leong, Colin and Martin, Michael and Mathew, Joel and Robie, Jonathan and Schwarting, Marcus},
+   journal = {arXiv preprint arXiv:2304.09919},
+   title = {The eBible Corpus: Data and Model Benchmarks for Bible Translation for Low-Resource Languages},
+   year = {2023},
+ }
+
+ @article{enevoldsen2025mmtebmassivemultilingualtext,
+   title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
+   author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
+   publisher = {arXiv},
+   journal = {arXiv preprint arXiv:2502.13595},
+   year = {2025},
+   url = {https://arxiv.org/abs/2502.13595},
+   doi = {10.48550/arXiv.2502.13595},
+ }
+
+ @article{muennighoff2022mteb,
+   author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Loïc and Reimers, Nils},
+   title = {MTEB: Massive Text Embedding Benchmark},
+   publisher = {arXiv},
+   journal = {arXiv preprint arXiv:2210.07316},
+   year = {2022},
+   url = {https://arxiv.org/abs/2210.07316},
+   doi = {10.48550/ARXIV.2210.07316},
+ }
+ ```
+
+ # Dataset Statistics
+ <details>
+ <summary>Dataset Statistics</summary>
+
+ The following descriptive statistics were computed for this task. They can also be obtained with:
+
 ```python
+ import mteb
+
+ task = mteb.get_task("BibleNLPBitextMining")
+
+ desc_stats = task.metadata.descriptive_stats
 ```
+ ```json
+ {
+   "train": {
+     "num_samples": 417452,
+     "number_of_characters": 132355840,
+     "unique_pairs": 416080,
+     "sentence1_statistics": {
+       "total_text_length": 66177920,
+       "min_text_length": 1,
+       "average_text_length": 158.52821402221093,
+       "max_text_length": 4949,
+       "unique_texts": 213216
+     },
+     "sentence2_statistics": {
+       "total_text_length": 66177920,
+       "min_text_length": 1,
+       "average_text_length": 158.52821402221093,
+       "max_text_length": 4949,
+       "unique_texts": 213216
+     }
+   }
+ }
+ ```
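As a quick cross-check, the reported figures are internally consistent: the two sides of each pair contribute equal character counts, and the average text length is the per-side total divided by the number of samples. A stdlib-only sketch, with the values copied from the statistics above:

```python
# Sanity-check the reported descriptive statistics.
num_samples = 417452
number_of_characters = 132355840
total_text_length = 66177920  # identical for sentence1 and sentence2

# Characters split evenly across the two sides of each pair.
assert number_of_characters == 2 * total_text_length

# Average text length equals the per-side total divided by the sample count.
average = total_text_length / num_samples
assert abs(average - 158.52821402221093) < 1e-6
print(round(average, 2))  # 158.53
```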
+
+ </details>
+
+ ---
+ *This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*