Update files from the datasets library (from 1.7.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0
README.md
CHANGED
@@ -1,4 +1,5 @@
 ---
+paperswithcode_id: null
 ---
 
 # Dataset Card for "iwslt2017"
@@ -6,12 +7,12 @@
 ## Table of Contents
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks](#supported-tasks)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
   - [Languages](#languages)
 - [Dataset Structure](#dataset-structure)
   - [Data Instances](#data-instances)
   - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits-sample-size)
+  - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
   - [Curation Rationale](#curation-rationale)
   - [Source Data](#source-data)
@@ -45,7 +46,7 @@ The IWSLT 2017 Evaluation Campaign includes a multilingual TED Talks MT task. Th
 
 For each language pair, training and development sets are available through the entry of the table below: by clicking, an archive will be downloaded which contains the sets and a README file. Numbers in the table refer to millions of units (untokenized words) of the target side of all parallel training sets.
 
-### Supported Tasks
+### Supported Tasks and Leaderboards
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
@@ -156,7 +157,7 @@ The data fields are the same among all splits.
 #### iwslt2017-en-fr
 - `translation`: a multilingual `string` variable, with possible languages including `en`, `fr`.
 
-### Data Splits Sample Size
+### Data Splits
 
 | name |train |validation|test|
 |---------------|-----:|---------:|---:|
@@ -174,10 +175,22 @@ The data fields are the same among all splits.
 
 ### Source Data
 
+#### Initial Data Collection and Normalization
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the source language producers?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Annotations
 
+#### Annotation process
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the annotators?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Personal and Sensitive Information
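As a side note on the Data Fields section touched above: with the `datasets` library, the card's `iwslt2017-en-fr` configuration would typically be loaded via `load_dataset("iwslt2017", "iwslt2017-en-fr")`. The sketch below skips the download and simply illustrates the `translation` record layout the card describes; the sentences are invented for illustration.

```python
# Illustrative sketch of the `translation` field described in the card:
# each example holds one `translation` dict mapping language codes to
# parallel sentences. The sentence pair here is invented, not from IWSLT.
example = {
    "translation": {
        "en": "Thank you very much.",
        "fr": "Merci beaucoup.",
    }
}

src = example["translation"]["en"]
tgt = example["translation"]["fr"]
print(f"{src} -> {tgt}")  # Thank you very much. -> Merci beaucoup.
```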