---
dataset_info:
- config_name: deen
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 11263009157
num_examples: 34782245
- name: test
num_bytes: 1069433
num_examples: 2998
download_size: 4379524172
dataset_size: 11264078590
- config_name: deen_simple_instruct
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 11193444667
num_examples: 34782245
- name: test
num_bytes: 1063437
num_examples: 2998
download_size: 4348914335
dataset_size: 11194508104
- config_name: ruen
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 17658020408
num_examples: 37492126
- name: test
num_bytes: 1400588
num_examples: 3000
download_size: 6515019674
dataset_size: 17659420996
- config_name: ruen_simple_instruct
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 17583036156
num_examples: 37492126
- name: test
num_bytes: 1394588
num_examples: 3000
download_size: 6511172572
dataset_size: 17584430744
configs:
- config_name: deen
data_files:
- split: train
path: deen/train-*
- split: test
path: deen/test-*
- config_name: deen_simple_instruct
data_files:
- split: train
path: deen_simple_instruct/train-*
- split: test
path: deen_simple_instruct/test-*
- config_name: ruen
data_files:
- split: train
path: ruen/train-*
- split: test
path: ruen/test-*
- config_name: ruen_simple_instruct
data_files:
- split: train
path: ruen_simple_instruct/train-*
- split: test
path: ruen_simple_instruct/test-*
---
# Dataset Card for wmt19
<!-- Provide a quick summary of the dataset. -->
This is a preprocessed version of the wmt19 dataset for running benchmarks in LM-Polygraph.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** https://huggingface.co/LM-Polygraph
- **License:** https://github.com/IINemo/lm-polygraph/blob/main/LICENSE.md
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/IINemo/lm-polygraph
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This dataset should be used for running benchmarks with LM-Polygraph.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This dataset has already been preprocessed and should not be used for further dataset preprocessing.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
This dataset contains the base subsets (deen, ruen), which correspond to the main dataset used in LM-Polygraph, as well as the *_simple_instruct subsets, which correspond to the instruct methods used in LM-Polygraph.
Each subset contains two splits: train and test. Each split contains two string columns: "input", which holds the processed input for LM-Polygraph, and "output", which holds the processed output for LM-Polygraph.
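A config can be loaded with the `datasets` library as sketched below. The repository path `LM-Polygraph/wmt19` is an assumption based on this card's curator and title; adjust it to the actual hub path if it differs.

```python
from datasets import load_dataset

# Load one of the configs listed in the metadata above; "deen" is used as
# an example. NOTE: the repo path "LM-Polygraph/wmt19" is assumed from the
# card's curator and title.
dataset = load_dataset("LM-Polygraph/wmt19", name="deen")

# Each config has a train and a test split.
print(dataset["train"].num_rows, dataset["test"].num_rows)

# Each split has two string columns: "input" and "output".
example = dataset["test"][0]
print(example["input"])
print(example["output"])
```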
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
This dataset was created in order to separate dataset creation code from benchmarking code.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Data is collected from https://huggingface.co/datasets/wmt19 and processed using the build_dataset.py script in the LM-Polygraph repository.
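As a rough illustration, the processing step plausibly maps each wmt19 translation pair to the input/output columns along the lines below. This is a minimal sketch only: the prompt wording and the helper name are assumptions, and build_dataset.py in the repository is the authoritative implementation.

```python
from datasets import load_dataset

# Hypothetical helper illustrating the general shape of the preprocessing;
# the exact prompt wording used by build_dataset.py may differ.
def to_input_output(example):
    # wmt19 stores each pair as {"translation": {"de": ..., "en": ...}}.
    pair = example["translation"]
    return {
        "input": f"Translate from German to English: {pair['de']}",
        "output": pair["en"],
    }

source = load_dataset("wmt19", "de-en")
processed = source.map(to_input_output, remove_columns=["translation"])
```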
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
The people who created https://huggingface.co/datasets/wmt19.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This dataset contains the same biases, risks, and limitations as its source dataset, https://huggingface.co/datasets/wmt19.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset.