|
|
--- |
|
|
language: |
|
|
- nnh |
|
|
- fub |
|
|
- plt |
|
|
- fra |
|
|
license: cc-by-nc-sa-4.0 |
|
|
dataset_info: |
|
|
- config_name: fub_fra |
|
|
features: |
|
|
- name: source_text |
|
|
dtype: string |
|
|
- name: target_text |
|
|
dtype: string |
|
|
- name: source_lang |
|
|
dtype: string |
|
|
- name: target_lang |
|
|
dtype: string |
|
|
splits: |
|
|
- name: fub_fra |
|
|
num_bytes: 8495557 |
|
|
num_examples: 28936 |
|
|
download_size: 4434992 |
|
|
dataset_size: 8495557 |
|
|
- config_name: nnh_fra |
|
|
features: |
|
|
- name: source_text |
|
|
dtype: string |
|
|
- name: target_text |
|
|
dtype: string |
|
|
- name: source_lang |
|
|
dtype: string |
|
|
- name: target_lang |
|
|
dtype: string |
|
|
splits: |
|
|
- name: nnh_fra |
|
|
num_bytes: 11546667 |
|
|
num_examples: 40968 |
|
|
download_size: 5550765 |
|
|
dataset_size: 11546667 |
|
|
- config_name: plt_fra |
|
|
features: |
|
|
- name: source_text |
|
|
dtype: string |
|
|
- name: target_text |
|
|
dtype: string |
|
|
- name: source_lang |
|
|
dtype: string |
|
|
- name: target_lang |
|
|
dtype: string |
|
|
splits: |
|
|
- name: plt_fra |
|
|
num_bytes: 9803314 |
|
|
num_examples: 30612 |
|
|
download_size: 4863574 |
|
|
dataset_size: 9803314 |
|
|
configs: |
|
|
- config_name: fub_fra |
|
|
data_files: |
|
|
- split: fub_fra |
|
|
path: fub_fra/fub_fra-* |
|
|
- config_name: nnh_fra |
|
|
default: true |
|
|
data_files: |
|
|
- split: nnh_fra |
|
|
path: nnh_fra/nnh_fra-* |
|
|
- config_name: plt_fra |
|
|
data_files: |
|
|
- split: plt_fra |
|
|
path: plt_fra/plt_fra-* |
|
|
task_categories: |
|
|
- translation |
|
|
--- |
|
|
|
|
|
# Dataset `mimba/text2text` |
|
|
|
|
|
## 📝 Description |
|
|
This dataset provides multilingual parallel sentence pairs for **machine translation (text-to-text tasks)**. |
|
|
Currently, it includes three language pairs: **Ngiemboon ↔ French** (`nnh_fra`, 40,968 examples), **Fulfulde ↔ French** (`fub_fra`, 28,936 examples), and **Malagasy ↔ French** (`plt_fra`, 30,612 examples).
|
|
In the future, additional language pairs will be added (e.g., Ngiemboon ↔ English).
|
|
|
|
|
- **Total examples (current)**: 100,516 (across the three language pairs)
|
|
- **Columns**: |
|
|
- `source_text`: source sentence |
|
|
- `target_text`: target sentence |
|
|
- `source_lang`: ISO 639‑3 language code of the source (e.g., `nnh`) |
|
|
- `target_lang`: ISO 639‑3 language code of the target (e.g., `fra`) |
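As a minimal sketch of this flat schema (the rows below are placeholders, not real corpus content, and `pair_key` is an illustrative helper, not part of the dataset), every example is a four-string record whose code columns identify the language pair:

```python
# Placeholder rows mirroring the dataset's four string columns
# (illustrative strings only -- not actual corpus sentences).
rows = [
    {"source_text": "<Ngiemboon sentence>", "target_text": "<French sentence>",
     "source_lang": "nnh", "target_lang": "fra"},
    {"source_text": "<Ngiemboon sentence>", "target_text": "<French sentence>",
     "source_lang": "nnh", "target_lang": "fra"},
]

def pair_key(row):
    """Derive a row's language pair, matching the config names (e.g. nnh_fra)."""
    return f"{row['source_lang']}_{row['target_lang']}"

print({pair_key(r) for r in rows})  # {'nnh_fra'}
```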
|
|
|
|
|
--- |
|
|
|
|
|
## 📥 Loading the dataset |
|
|
|
|
|
```python
from datasets import load_dataset

# Load the default configuration (nnh_fra)
dataset = load_dataset("mimba/text2text")

# The other language pairs are separate configurations, e.g.:
# dataset_fub = load_dataset("mimba/text2text", "fub_fra")

print(dataset)
```
|
|
```console
DatasetDict({
    nnh_fra: Dataset({
        features: ['source_text', 'target_text', 'source_lang', 'target_lang'],
        num_rows: 40968
    })
})
```
|
|
## 🔀 Train/Validation Split |
|
|
Each configuration is provided as a single split (here, *nnh_fra*).
|
|
You can split it into **train** and **validation/test** using ***train_test_split***: |
|
|
```python
from datasets import DatasetDict

# 90% train / 10% validation (fix the seed for a reproducible split)
split_dataset = dataset["nnh_fra"].train_test_split(test_size=0.1, seed=42)

dataset_dict = DatasetDict({
    "train": split_dataset["train"],
    "validation": split_dataset["test"]
})

print(dataset_dict)
```
|
|
```console
DatasetDict({
    train: Dataset({
        features: ['source_text', 'target_text', 'source_lang', 'target_lang'],
        num_rows: 36871
    })
    validation: Dataset({
        features: ['source_text', 'target_text', 'source_lang', 'target_lang'],
        num_rows: 4097
    })
})
```
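The row counts shown above are consistent with the test share being rounded up to a whole number of rows; a quick sanity check of that arithmetic (assuming ceiling rounding):

```python
import math

num_examples = 40968  # rows in the nnh_fra split
test_size = 0.1

# Validation takes the rounded-up share; train keeps the remainder.
num_test = math.ceil(num_examples * test_size)
num_train = num_examples - num_test

print(num_train, num_test)  # 36871 4097
```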
|
|
## ⚙️ Example Usage with NLLB‑200 |
|
|
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# NLLB-200 does not cover Ngiemboon, so add a custom language tag for it
tokenizer.add_tokens(["__ngiemboon__"])
model.resize_token_embeddings(len(tokenizer))

# French is a supported NLLB-200 language
tokenizer.tgt_lang = "fra_Latn"

# Preprocessing: prepend the custom tag to sources and tokenize both sides
def preprocess_function(examples):
    inputs = [f"__ngiemboon__ {src}" for src in examples["source_text"]]
    targets = examples["target_text"]
    model_inputs = tokenizer(inputs, max_length=128, truncation=True)
    # text_target lets the tokenizer apply target-side special tokens
    labels = tokenizer(text_target=targets, max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized_datasets = dataset_dict.map(preprocess_function, batched=True)
```
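To check the tagging logic without downloading the model, the tokenizer can be replaced by a stand-in (`fake_tokenize` below is a hypothetical stub for illustration, not a transformers API):

```python
def fake_tokenize(texts):
    """Hypothetical tokenizer stub: one id per whitespace-separated token."""
    return {"input_ids": [list(range(len(t.split()))) for t in texts]}

def preprocess_stub(examples):
    # Same tagging step as preprocess_function above
    inputs = [f"__ngiemboon__ {src}" for src in examples["source_text"]]
    model_inputs = fake_tokenize(inputs)
    model_inputs["labels"] = fake_tokenize(examples["target_text"])["input_ids"]
    return model_inputs

batch = {"source_text": ["<source sentence>"], "target_text": ["<target sentence>"]}
out = preprocess_stub(batch)
print(out["input_ids"][0])  # [0, 1, 2] -- the tag adds one token
```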
|
|
## 🌍 Available Languages |
|
|
- **Current:** |
|
|
- ***nnh*** (Ngiemboon) ↔ ***fra*** (French)

- ***fub*** (Fulfulde) ↔ ***fra*** (French)

- ***plt*** (Malagasy) ↔ ***fra*** (French)
|
|
- **Planned:** |
|
|
- ***nnh*** ↔ ***eng*** (English) |
|
|
|
|
|
Additional languages will be added progressively.
|
|
|
|
|
## ✅ Use Cases |
|
|
- Fine‑tuning multilingual models (NLLB‑200, M2M100, MarianMT). |
|
|
- Research on low‑resource languages. |
|
|
- Educational demonstrations of machine translation. |
|
|
|
|
|
### BibTeX entry and citation info |
|
|
|
|
|
```bibtex
@misc{mimba_text2text,
  title  = {Ngiemboon ↔ French Parallel Corpus},
  author = {Mimba},
  year   = {2026},
  url    = {https://huggingface.co/datasets/mimba/text2text}
}
```
|
|
|
|
|
##### *Contact: for all questions, contact [@Mimba](mailto:baounabaouna@gmail.com).*