---
task_categories:
- text-classification
language:
- en
- th
- es
pretty_name: Multilingual Task-Oriented Dialog
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - en/train-en.parquet
    - es/train-es.parquet
    - th/train-th_TH.parquet
  - split: test
    path:
    - en/test-en.parquet
    - es/test-es.parquet
    - th/test-th_TH.parquet
  - split: eval
    path:
    - en/eval-en.parquet
    - es/eval-es.parquet
    - th/eval-th_TH.parquet
- config_name: en
  data_files:
  - split: train
    path: en/train-en.parquet
  - split: test
    path: en/test-en.parquet
  - split: eval
    path: en/eval-en.parquet
- config_name: es
  data_files:
  - split: train
    path: es/train-es.parquet
  - split: test
    path: es/test-es.parquet
  - split: eval
    path: es/eval-es.parquet
- config_name: th
  data_files:
  - split: train
    path: th/train-th_TH.parquet
  - split: test
    path: th/test-th_TH.parquet
  - split: eval
    path: th/eval-th_TH.parquet
license: cc-by-sa-4.0
---
# Multilingual Task-Oriented Dialog Data

## Directory structure

This dataset consists of 3 directories:
* `en` contains the English data
* `es` contains the Spanish data
* `th` contains the Thai data

In each directory, you'll find one Parquet file for each of the train/dev/test splits as used in our paper (the dev split files are named `eval`).
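
For convenience, each language can also be loaded directly with the Hugging Face `datasets` library via the configs defined in this card (`en`, `es`, `th`, or `default` for all three languages combined). A minimal sketch; the repository id below is a placeholder and should be replaced with the actual path of this dataset:

```python
from datasets import load_dataset

# "user/multilingual-task-oriented-dialog" is a placeholder repository id.
# The config name selects the language: "en", "es", "th", or "default" (all three).
dataset = load_dataset("user/multilingual-task-oriented-dialog", "en")

print(dataset)              # DatasetDict with "train", "test", and "eval" splits
print(dataset["train"][0])  # first training example
```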


## File format

The data is distributed as Parquet files in the PyText format.

Each Parquet file contains the following five columns:
* the intent label
* the slot annotations, as a comma-separated list of entries in the format `<start token>:<end token>:<slot type>`
* the untokenized utterance
* the language
* the token spans from an in-house multilingual tokenizer
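
As a rough illustration, the sketch below reads one of the Parquet files with pandas and splits a slot-annotation string into `(start token, end token, slot type)` triples. The column name `slots` in the commented-out line is an assumption for illustration only; check the real schema with `df.columns`.

```python
import pandas as pd

# Read one of the Parquet files (path relative to the dataset root).
df = pd.read_parquet("en/train-en.parquet")
print(df.columns)  # inspect the actual column names
print(df.head())

def parse_slots(slot_string):
    """Split a comma-separated slot annotation string into
    (start token, end token, slot type) triples."""
    triples = []
    for entry in slot_string.split(","):
        entry = entry.strip()
        if not entry:
            continue
        start, end, slot_type = entry.split(":", 2)
        triples.append((int(start), int(end), slot_type))
    return triples

# Hypothetical usage, assuming a column named "slots" holds the annotation string:
# print(parse_slots(df.loc[0, "slots"]))
```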

The "upsampled" files contain the upsampled Spanish/Thai data so that there are roughly equal amounts of English and Spanish/Thai data for training and model selection. 

## License

Provided under the CC-BY-SA 4.0 license.

## Citation

If you use this dataset in your research, please cite the following paper:


    @unpublished{Schuster2018,
      author = {Sebastian Schuster and Sonal Gupta and Rushin Shah and Mike Lewis},
      title = {Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog},
      year = {2018},
      note = {arXiv preprint},
      url = {http://arxiv.org/abs/}
    }


## Questions

Please contact Sonal Gupta (<sonalgupta@fb.com>) with questions about this dataset.