---
dataset_info:
  features:
  - name: english_text
    dtype: string
  - name: language
    dtype: string
  - name: translated_text
    dtype: string
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 36890855
    num_examples: 198084
  - name: test
    num_bytes: 4071501
    num_examples: 22009
  download_size: 21823665
  dataset_size: 40962356
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
## Loading and Filtering the Dataset by Language
This example shows how to load the dataset and filter it down to a single language for your downstream task.
```python
>>> from datasets import load_dataset
>>> # load dataset
>>> dataset = load_dataset("mosesdaudu/translation_dataset")
>>> dataset
DatasetDict({
train: Dataset({
features: ['english_text', 'language', 'translated_text', 'split'],
num_rows: 198084
})
test: Dataset({
features: ['english_text', 'language', 'translated_text', 'split'],
num_rows: 22009
})
})
>>> # Filter Dataset To Pidgin Language Only
>>> pidgin_dataset = dataset.filter(lambda example: example['language'] == 'pidgin')
>>> pidgin_dataset
DatasetDict({
train: Dataset({
features: ['english_text', 'language', 'translated_text', 'split'],
num_rows: 22476
})
test: Dataset({
features: ['english_text', 'language', 'translated_text', 'split'],
num_rows: 2497
})
})
```