---
dataset_info:
  features:
  - name: sent1
    dtype: string
  - name: sent2
    dtype: string
  - name: sent1_lang
    dtype: string
  - name: sent2_lang
    dtype: string
  - name: example_id
    dtype: int64
  splits:
  - name: train
    num_bytes: 211580556
    num_examples: 871152
  - name: dev
    num_bytes: 5537802
    num_examples: 22812
  - name: test
    num_bytes: 5492958
    num_examples: 23664
  download_size: 70648429
  dataset_size: 222611316
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
---
This dataset is a subset of the [Ted Multi](https://huggingface.co/datasets/neulab/ted_multi) dataset: I extract the parallel sentences available in English, French, Dutch, and German, and generate all possible sentence-pair combinations across those languages, as sketched below.
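
The pairing step might look roughly like the following sketch. The `translations.language` / `translations.translation` field names come from the ted_multi schema; the exact filtering, whether pairs are ordered or unordered, and how `example_id` is assigned are assumptions here, not the author's confirmed script.

```python
# Sketch: derive sentence pairs from neulab/ted_multi for en/fr/nl/de.
from itertools import combinations

from datasets import load_dataset

TARGET_LANGS = {"en", "fr", "nl", "de"}

ted = load_dataset("neulab/ted_multi", split="train")

pairs = []
for example_id, row in enumerate(ted):
    langs = row["translations"]["language"]
    sents = row["translations"]["translation"]
    # Keep only the four target languages present in this sentence group.
    by_lang = {l: s for l, s in zip(langs, sents) if l in TARGET_LANGS}
    # Emit every sentence-pair combination across the available languages
    # (unordered pairs here; the released data may instead use ordered pairs).
    for l1, l2 in combinations(sorted(by_lang), 2):
        pairs.append({
            "sent1": by_lang[l1],
            "sent2": by_lang[l2],
            "sent1_lang": l1,
            "sent2_lang": l2,
            "example_id": example_id,  # links pairs from the same group
        })
```

With up to four languages per group, this yields at most six unordered pairs per parallel sentence group, each tagged with its source languages and a shared `example_id`.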