
Dataset mimba/text2text

πŸ“ Description

This dataset provides multilingual parallel sentence pairs for machine translation (text-to-text tasks).
Currently, it includes Ngiemboon ↔ French (36,859 examples).
In the future, additional language pairs will be added (e.g., Ngiemboon ↔ English).

  • Total examples (current): 36,859
  • Columns:
    • source_text: source sentence
    • target_text: target sentence
    • source_lang: ISO 639‑3 language code of the source (e.g., nnh)
    • target_lang: ISO 639‑3 language code of the target (e.g., fra)
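Each row follows this flat four-column schema; a minimal sketch of the shape (the sentence values below are placeholders, not real corpus data):

```python
# Hypothetical record illustrating the column layout
example = {
    "source_text": "<Ngiemboon sentence>",
    "target_text": "<French translation>",
    "source_lang": "nnh",
    "target_lang": "fra",
}

print(sorted(example.keys()))
```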

πŸ“₯ Loading the dataset

from datasets import load_dataset

# Load the dataset
dataset = load_dataset("mimba/text2text")

print(dataset)
DatasetDict({
    nnh_fra: Dataset({
        features: ['source_text', 'target_text', 'source_lang', 'target_lang'],
        num_rows: 36859
    })
})

πŸ”€ Train/Validation Split

The dataset is provided as a single split (nnh_fra). You can split it into train and validation/test using train_test_split:

from datasets import DatasetDict

# 90% train / 10% validation
split_dataset = dataset["nnh_fra"].train_test_split(test_size=0.1)

dataset_dict = DatasetDict({
    "train": split_dataset["train"],
    "validation": split_dataset["test"]
})

print(dataset_dict)
DatasetDict({
    train: Dataset({
        features: ['source_text', 'target_text', 'source_lang', 'target_lang'],
        num_rows: 33173
    })
    validation: Dataset({
        features: ['source_text', 'target_text', 'source_lang', 'target_lang'],
        num_rows: 3686
    })
})
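The row counts above are consistent with a 10% test fraction of the 36,859 examples; a quick sanity check of the expected sizes (assuming the fractional test share is rounded up to a whole row):

```python
import math

total_rows = 36_859
test_fraction = 0.10

# Assuming the split rounds the test share up to a whole row
test_rows = math.ceil(total_rows * test_fraction)
train_rows = total_rows - test_rows

print(train_rows, test_rows)  # 33173 3686
```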

βš™οΈ Example Usage with NLLB‑200

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Add a custom language tag for Ngiemboon
tokenizer.add_tokens(["__ngiemboon__"])
model.resize_token_embeddings(len(tokenizer))

# Preprocessing
def preprocess_function(examples):
    inputs = [f"__ngiemboon__ {src}" for src in examples["source_text"]]
    targets = examples["target_text"]
    # text_target tokenizes the targets into the "labels" field in one call
    model_inputs = tokenizer(inputs, text_target=targets, max_length=128, truncation=True)
    return model_inputs

tokenized_datasets = dataset_dict.map(preprocess_function, batched=True)
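From here, the tokenized splits can be wired into a standard Seq2SeqTrainer setup. The fragment below is an illustrative configuration sketch, not a tested recipe: the output directory and hyperparameters (learning rate, batch size, epoch count) are assumptions to adapt to your hardware and goals.

```python
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Dynamically pads inputs and labels per batch
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-nnh-fra",        # hypothetical output path
    learning_rate=2e-5,               # illustrative hyperparameters
    per_device_train_batch_size=16,
    num_train_epochs=3,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
)

trainer.train()
```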

🌍 Available Languages

  • Current:
    • nnh (Ngiemboon) ↔ fra (French)
  • Planned:
    • nnh ↔ eng (English)

Additional languages will be added progressively.

βœ… Use Cases

  • Fine‑tuning multilingual models (NLLB‑200, M2M100, MarianMT).
  • Research on low‑resource languages.
  • Educational demonstrations of machine translation.

BibTeX entry and citation info

@misc{mimba_text2text,
  title  = {Ngiemboon ↔ French Parallel Corpus},
  author = {Mimba},
  year   = {2026},
  url    = {https://huggingface.co/datasets/mimba/text2text}
}
Contact

For all questions, contact @Mimba.