Update README.md

README.md CHANGED
@@ -62,7 +62,7 @@ Each row is a single bilingual sentence pair with language, dialect, split, and
 - **Targets:** English (`en`), Mandarin Chinese (`zh`)
 - **Splits (all languages, both targets combined):**
   - Train: 334,772
-  -
+  - Validate: 29,412
   - Test: 29,450
 - **License:** CC BY 4.0
 - **Format:** UTF-8 CSV, one sentence pair per row
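The totals above are easy to verify once the CSV is loaded. A minimal sketch with the `datasets` library, assuming the corpus is available locally as `corpus.csv` (a placeholder, not the dataset's real file name):

```python
from collections import Counter

from datasets import load_dataset

# Load the UTF-8 CSV (one sentence pair per row) into a single Dataset.
# "corpus.csv" is a placeholder path, not the dataset's real file name.
ds = load_dataset("csv", data_files="corpus.csv", split="train")

# Tally the per-row `split` column against the totals quoted above.
print(Counter(ds["split"]))
# -> Counter({'train': 334772, 'test': 29450, 'validate': 29412})
```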
@@ -147,20 +147,20 @@ id,source_lang,target_lang,source_sentence,target_sentence,lang_code,dialect,sou
 * `lang_code` *(str)* – canonical code for the Formosan language (usually same as `source_lang`).
 * `dialect` *(str)* – dialect label (e.g. `"Southern"`, `"Malan"`, `"UNKNOWN"`).
 * `source` *(str)* – provenance string or original file path in the upstream corpora.
-* `split` *(str)* – one of `"train"`, `"
+* `split` *(str)* – one of `"train"`, `"validate"`, `"test"`.

 ### Splits

 Splits are defined **per row** via the `split` column:

 * `train` – training data
-* `
+* `validate` – development / validation data
 * `test` – held-out test data

 Global totals across all languages and directions:

 * Train: 334,772
-*
+* Validate: 29,412
 * Test: 29,450

 Users can filter to any language pair and then re-group into a `DatasetDict` by `split`.
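The first half of that recipe is just a row filter. A hedged sketch, reusing `ds` from the loading example above; the `ami` → `en` pair mirrors the `ami_en` variable in the README's own usage example below, but the exact code values in the corpus are an assumption:

```python
# Select one language pair: Amis ("ami") to English ("en").
# The code values are assumptions; check ds.unique("source_lang") first.
ami_en = ds.filter(
    lambda ex: ex["source_lang"] == "ami" and ex["target_lang"] == "en"
)
```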
@@ -238,7 +238,7 @@ from datasets import DatasetDict
 def split_by_column(ds):
     return DatasetDict({
         "train": ds.filter(lambda ex: ex["split"] == "train"),
-        "
+        "validate": ds.filter(lambda ex: ex["split"] == "validate"),
         "test": ds.filter(lambda ex: ex["split"] == "test"),
     })

@@ -247,7 +247,7 @@ ami_en_splits = split_by_column(ami_en)
 print(ami_en_splits)
 # DatasetDict({
 #     train: Dataset({ ... })
-#
+#     validate: Dataset({ ... })
 #     test: Dataset({ ... })
 # })
 ```
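One design note on `split_by_column`: each `filter` call is a separate pass over the table, so the helper scans the data three times. With roughly 394k rows that is cheap, and `datasets` caches filter results, but a single-pass variant is straightforward if the triple scan ever matters. A sketch under the same assumptions:

```python
from datasets import Dataset, DatasetDict

def split_by_column_single_pass(ds):
    # One pass over the rows instead of one filter scan per split value.
    buckets = {"train": [], "validate": [], "test": []}
    for ex in ds:
        buckets[ex["split"]].append(ex)
    return DatasetDict(
        {name: Dataset.from_list(rows) for name, rows in buckets.items()}
    )
```

The trade-off: `Dataset.from_list` materializes each bucket in memory, while the `filter`-based version keeps Arrow-backed, cached datasets, so the README's helper is the safer default.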