---
annotations_creators:
  - human-annotated
language:
  - eng
  - hau
  - ibo
  - pcm
  - yor
license: cc-by-sa-4.0
multilinguality: multilingual
task_categories:
  - translation
task_ids: []
dataset_info:
  - config_name: en-ha
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
    splits:
      - name: train
        num_bytes: 119594
        num_examples: 410
    download_size: 73989
    dataset_size: 119594
  - config_name: en-ig
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
    splits:
      - name: train
        num_bytes: 120677
        num_examples: 410
    download_size: 71569
    dataset_size: 120677
  - config_name: en-pcm
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
    splits:
      - name: train
        num_bytes: 113431
        num_examples: 410
    download_size: 74180
    dataset_size: 113431
  - config_name: en-yo
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
    splits:
      - name: train
        num_bytes: 135511
        num_examples: 410
    download_size: 86308
    dataset_size: 135511
configs:
  - config_name: en-ha
    data_files:
      - split: train
        path: en-ha/train-*
  - config_name: en-ig
    data_files:
      - split: train
        path: en-ig/train-*
  - config_name: en-pcm
    data_files:
      - split: train
        path: en-pcm/train-*
  - config_name: en-yo
    data_files:
      - split: train
        path: en-yo/train-*
tags:
  - mteb
  - text
---

# NollySentiBitextMining

An [MTEB](https://github.com/embeddings-benchmark/mteb) dataset
Massive Text Embedding Benchmark

NollySenti is a dataset of Nollywood movie reviews in five languages widely spoken in Nigeria (English, Hausa, Igbo, Nigerian-Pidgin, and Yoruba).

Task category: t2t
Domains: Social, Reviews, Written
Reference: https://github.com/IyanuSh/NollySenti
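Each per-language config listed in the metadata above can also be loaded directly with the `datasets` library. A minimal sketch, assuming the dataset is hosted under the `mteb/NollySentiBitextMining` repository id (the id is not stated on this card):

```python
from datasets import load_dataset

# Configs pair English with one target language: en-ha, en-ig, en-pcm, en-yo.
# The repository id below is an assumption; adjust it to the actual hub path.
ds = load_dataset("mteb/NollySentiBitextMining", "en-ha", split="train")

print(ds)                  # 410 rows with 'sentence1' and 'sentence2' string features
print(ds[0]["sentence1"])  # English review
print(ds[0]["sentence2"])  # Hausa counterpart
```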

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

tasks = mteb.get_tasks(tasks=["NollySentiBitextMining"])
evaluator = mteb.MTEB(tasks=tasks)

model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```

To learn more about how to run models on MTEB tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
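For a fuller run, `MTEB.run` accepts an `output_folder` argument that controls where the per-task result JSON files are written. A hedged sketch, using a concrete public model name purely as a stand-in for the `YOUR_MODEL` placeholder above:

```python
import mteb

tasks = mteb.get_tasks(tasks=["NollySentiBitextMining"])
evaluator = mteb.MTEB(tasks=tasks)

# Illustrative model choice, not a recommendation from this card.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

# Results are written to output_folder and also returned for inspection.
results = evaluator.run(model, output_folder="results")
print(results)
```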

## Citation

If you use this dataset, please cite both the dataset and mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


```bibtex
@inproceedings{shode2023nollysenti,
  author = {Shode, Iyanuoluwa and Adelani, David Ifeoluwa and Peng, Jing and Feldman, Anna},
  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
  pages = {986--998},
  title = {NollySenti: Leveraging Transfer Learning and Machine Translation for Nigerian Movie Sentiment Classification},
  year = {2023},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

## Dataset Statistics

The following block contains the descriptive statistics for this task. They can also be obtained programmatically:

```python
import mteb

task = mteb.get_task("NollySentiBitextMining")
desc_stats = task.metadata.descriptive_stats
```
```json
{
    "train": {
        "num_samples": 1640,
        "number_of_characters": 445805,
        "unique_pairs": 1632,
        "min_sentence1_length": 3,
        "average_sentence1_length": 136.3170731707317,
        "max_sentence1_length": 1698,
        "unique_sentence1": 405,
        "min_sentence2_length": 3,
        "average_sentence2_length": 135.515243902439,
        "max_sentence2_length": 1728,
        "unique_sentence2": 1631,
        "hf_subset_descriptive_stats": {
            "en-ha": {
                "num_samples": 410,
                "number_of_characters": 115348,
                "unique_pairs": 407,
                "min_sentence1_length": 3,
                "average_sentence1_length": 136.3170731707317,
                "max_sentence1_length": 1698,
                "unique_sentence1": 405,
                "min_sentence2_length": 4,
                "average_sentence2_length": 145.01951219512196,
                "max_sentence2_length": 1728,
                "unique_sentence2": 407
            },
            "en-ig": {
                "num_samples": 410,
                "number_of_characters": 107173,
                "unique_pairs": 409,
                "min_sentence1_length": 3,
                "average_sentence1_length": 136.3170731707317,
                "max_sentence1_length": 1698,
                "unique_sentence1": 405,
                "min_sentence2_length": 5,
                "average_sentence2_length": 125.08048780487805,
                "max_sentence2_length": 1137,
                "unique_sentence2": 408
            },
            "en-pcm": {
                "num_samples": 410,
                "number_of_characters": 109955,
                "unique_pairs": 408,
                "min_sentence1_length": 3,
                "average_sentence1_length": 136.3170731707317,
                "max_sentence1_length": 1698,
                "unique_sentence1": 405,
                "min_sentence2_length": 3,
                "average_sentence2_length": 131.8658536585366,
                "max_sentence2_length": 1552,
                "unique_sentence2": 408
            },
            "en-yo": {
                "num_samples": 410,
                "number_of_characters": 113329,
                "unique_pairs": 409,
                "min_sentence1_length": 3,
                "average_sentence1_length": 136.3170731707317,
                "max_sentence1_length": 1698,
                "unique_sentence1": 405,
                "min_sentence2_length": 6,
                "average_sentence2_length": 140.0951219512195,
                "max_sentence2_length": 1338,
                "unique_sentence2": 409
            }
        }
    }
}
```
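The per-config numbers above can be spot-checked against the raw data. A minimal sketch using `pandas`, again assuming the `mteb/NollySentiBitextMining` repository id from the loading example above:

```python
import pandas as pd
from datasets import load_dataset

rows = []
for config in ["en-ha", "en-ig", "en-pcm", "en-yo"]:
    # Repository id is an assumption; see the loading example above.
    df = load_dataset("mteb/NollySentiBitextMining", config, split="train").to_pandas()
    rows.append({
        "config": config,
        "num_samples": len(df),
        "unique_pairs": len(df.drop_duplicates(["sentence1", "sentence2"])),
        "avg_sentence1_len": df["sentence1"].str.len().mean(),
        "avg_sentence2_len": df["sentence2"].str.len().mean(),
    })

print(pd.DataFrame(rows).to_string(index=False))
```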

This dataset card was automatically generated using MTEB