---
dataset_info:
  features:
  - name: english_text
    dtype: string
  - name: language
    dtype: string
  - name: translated_text
    dtype: string
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 36890855
    num_examples: 198084
  - name: test
    num_bytes: 4071501
    num_examples: 22009
  download_size: 21823665
  dataset_size: 40962356
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---


## Loading and Filtering the Dataset by Language

In this example, I will show you how to load the dataset and filter it by language for your downstream task.

```python
>>> from datasets import load_dataset

>>> # Load the dataset
>>> dataset = load_dataset("mosesdaudu/translation_dataset")

>>> dataset
DatasetDict({
    train: Dataset({
        features: ['english_text', 'language', 'translated_text', 'split'],
        num_rows: 198084
    })
    test: Dataset({
        features: ['english_text', 'language', 'translated_text', 'split'],
        num_rows: 22009
    })
})

>>> # Filter the dataset to Pidgin only
>>> pidgin_dataset = dataset.filter(lambda example: example['language'] == 'pidgin')

>>> pidgin_dataset
DatasetDict({
    train: Dataset({
        features: ['english_text', 'language', 'translated_text', 'split'],
        num_rows: 22476
    })
    test: Dataset({
        features: ['english_text', 'language', 'translated_text', 'split'],
        num_rows: 2497
    })
})
```
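
If you need subsets for more than one language, you can first list the labels present in the `language` column with `Dataset.unique` and then filter once per label. Below is a minimal sketch of that pattern; it assumes you want the per-language subsets held in an in-memory dict, and the `subsets` name is just illustrative.

```python
>>> # List the language labels present in the train split
>>> languages = dataset['train'].unique('language')

>>> # Build one filtered DatasetDict per language
>>> # (lang=lang binds the loop variable inside the lambda)
>>> subsets = {
...     lang: dataset.filter(lambda example, lang=lang: example['language'] == lang)
...     for lang in languages
... }

>>> subsets['pidgin']['train'].num_rows
22476
```

From here, each subset can be persisted with `DatasetDict.save_to_disk` or pushed to its own repository, depending on how your downstream task consumes the data.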