  - split: test
    path: data/test-*
---

## Loading and Splitting the Dataset by Language

This example shows how to load the dataset and filter it down to a single language for your downstream task.

```python
>>> from datasets import load_dataset

>>> # Load the full dataset
>>> dataset = load_dataset("mosesdaudu/translation_dataset")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['english_text', 'language', 'translated_text', 'split'],
        num_rows: 198084
    })
    test: Dataset({
        features: ['english_text', 'language', 'translated_text', 'split'],
        num_rows: 22009
    })
})

>>> # Filter the dataset to the Pidgin language only
>>> pidgin_dataset = dataset.filter(lambda example: example['language'] == 'pidgin')
>>> pidgin_dataset
DatasetDict({
    train: Dataset({
        features: ['english_text', 'language', 'translated_text', 'split'],
        num_rows: 22476
    })
    test: Dataset({
        features: ['english_text', 'language', 'translated_text', 'split'],
        num_rows: 2497
    })
})
```
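The single-language filter above generalizes to splitting the whole dataset into one subset per language. As a minimal sketch of that grouping logic using plain Python (the rows and `language` values here are illustrative, not taken from the dataset):

```python
# Sketch: group rows by their 'language' value, mirroring what
# running dataset.filter(...) once per language would produce.
rows = [
    {"english_text": "Good morning", "language": "pidgin", "translated_text": "..."},
    {"english_text": "Good morning", "language": "yoruba", "translated_text": "..."},
    {"english_text": "Thank you", "language": "pidgin", "translated_text": "..."},
]

by_language = {}
for row in rows:
    # setdefault creates the per-language list on first sight of a language
    by_language.setdefault(row["language"], []).append(row)

print(sorted(by_language))         # ['pidgin', 'yoruba']
print(len(by_language["pidgin"]))  # 2
```

With `datasets`, the equivalent split is a loop over the distinct values of the `language` column, calling `dataset.filter(lambda ex: ex['language'] == lang)` for each one, exactly as in the Pidgin example above.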