{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:14:11.227972Z"
},
"title": "Multilingual Code-Switching for Zero-Shot Cross-Lingual Intent Prediction and Slot Filling",
"authors": [
{
"first": "Jitin",
"middle": [],
"last": "Krishnan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University Fairfax",
"location": {
"region": "VA",
"country": "USA"
}
},
"email": "jkrishn2@gmu.edu"
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University Fairfax",
"location": {
"region": "VA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Hemant",
"middle": [],
"last": "Purohit",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University Fairfax",
"location": {
"region": "VA",
"country": "USA"
}
},
"email": "hpurohit@gmu.edu"
},
{
"first": "Huzefa",
"middle": [],
"last": "Rangwala",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "George Mason University Fairfax",
"location": {
"region": "VA",
"country": "USA"
}
},
"email": "rangwala@gmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Predicting user intent and detecting the corresponding slots from text are two key problems in Natural Language Understanding (NLU). Since annotated datasets are only available for a handful of languages, our work focuses particularly on a zero-shot scenario where the target language is unseen during training. In the context of zero-shot learning, this task is typically approached using representations from pre-trained multilingual language models such as mBERT or by fine-tuning on data automatically translated into the target language. We propose a novel method which augments monolingual source data using multilingual code-switching via random translations, to enhance generalizability of large multilingual language models when fine-tuning them for downstream tasks. Experiments on the Mul-tiATIS++ benchmark show that our method leads to an average improvement of +4.2% in accuracy for the intent task and +1.8% in F1 for the slot-filling task over the state-of-the-art across 8 typologically diverse languages. We also study the impact of code-switching into different families of languages on downstream performance. Furthermore, we present an application of our method for crisis informatics using a new human-annotated tweet dataset of slot filling in English and Haitian Creole, collected during the Haiti earthquake. 1",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Predicting user intent and detecting the corresponding slots from text are two key problems in Natural Language Understanding (NLU). Since annotated datasets are only available for a handful of languages, our work focuses particularly on a zero-shot scenario where the target language is unseen during training. In the context of zero-shot learning, this task is typically approached using representations from pre-trained multilingual language models such as mBERT or by fine-tuning on data automatically translated into the target language. We propose a novel method which augments monolingual source data using multilingual code-switching via random translations, to enhance generalizability of large multilingual language models when fine-tuning them for downstream tasks. Experiments on the Mul-tiATIS++ benchmark show that our method leads to an average improvement of +4.2% in accuracy for the intent task and +1.8% in F1 for the slot-filling task over the state-of-the-art across 8 typologically diverse languages. We also study the impact of code-switching into different families of languages on downstream performance. Furthermore, we present an application of our method for crisis informatics using a new human-annotated tweet dataset of slot filling in English and Haitian Creole, collected during the Haiti earthquake. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A cross-lingual setting is typically described as a scenario in which a model trained for a particular task in one source language (e.g. English) should be able to generalize well to a different target language (e.g. Japanese). While semi-supervised solutions (Muis et al., 2018; FitzGerald, 2020 , inter alia) assume some target language data or translators are available, a zero-shot solution (Eriguchi et al., 2018; Srivastava et al., 2018; assumes none is available at training time. Having models that generalize well even to unseen languages is crucial for tackling real world problems such as extracting relevant information during a new disaster (Nguyen et al., 2017; Krishnan et al., 2020) or detecting hate speech (Pamungkas and Patti, 2019; Stappen et al., 2020) , where the target language might be of low-resource or unknown.",
"cite_spans": [
{
"start": 260,
"end": 279,
"text": "(Muis et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 280,
"end": 296,
"text": "FitzGerald, 2020",
"ref_id": "BIBREF16"
},
{
"start": 395,
"end": 418,
"text": "(Eriguchi et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 419,
"end": 443,
"text": "Srivastava et al., 2018;",
"ref_id": "BIBREF51"
},
{
"start": 654,
"end": 675,
"text": "(Nguyen et al., 2017;",
"ref_id": "BIBREF36"
},
{
"start": 676,
"end": 698,
"text": "Krishnan et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 724,
"end": 751,
"text": "(Pamungkas and Patti, 2019;",
"ref_id": "BIBREF38"
},
{
"start": 752,
"end": 773,
"text": "Stappen et al., 2020)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intent prediction and slot filling are two NLU tasks, usually solved jointly, which learn to model the intent (sentence-level) and slot (word-level) labels. Such models are currently used extensively for goal-oriented dialogue systems, such as Amazon's Alexa, Apple's Siri, Google Assistant, and Microsoft's Cortana. Finding the 'intent' behind the user's query and identifying relevant 'slots' in the sentence to engage in a dialogue are essential for effective conversational assistance. For example, users might want to 'play music' given the slot labels 'year' and 'artist' (Coucke et al., 2018) , or they may want to 'book a flight' given the 'airport' and 'locations' slot labels (Price, 1990) . A strong correlation between the two tasks has made jointly trained models successful (Goo et al., 2018; Haihong et al., 2019; Hardalov et al., 2020; . In a cross-lingual setting, the model should be able to learn this joint task in one language and transfer knowledge to another (Upadhyay et al., 2018; Schuster et al., 2019; . This is the premise of our work.",
"cite_spans": [
{
"start": 578,
"end": 599,
"text": "(Coucke et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 686,
"end": 699,
"text": "(Price, 1990)",
"ref_id": "BIBREF42"
},
{
"start": 788,
"end": 806,
"text": "(Goo et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 807,
"end": 828,
"text": "Haihong et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 829,
"end": 851,
"text": "Hardalov et al., 2020;",
"ref_id": "BIBREF21"
},
{
"start": 982,
"end": 1005,
"text": "(Upadhyay et al., 2018;",
"ref_id": null
},
{
"start": 1006,
"end": 1028,
"text": "Schuster et al., 2019;",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Highly effective transformer-based multilingual models such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020a) have found success across several multilingual tasks in recent years. In the zero-shot cross-lingual transfer setting with an unknown target language, a typical solution is to use pre-trained transformer models and fine-tune to the downstream task using the monolingual source data . However, Pires et al. (2019) showed that existing transformer-based represen- In the above code-switching example, the chunks are in Chinese, Punjabi, Spanish, English, Arabic, and Russian. 'atis_airfare' represents an intent class where the user seeks price of a ticket. tations may exhibit systematic deficiencies for certain language pairs. Figure 1 also verifies that the representations across the 12 multi-head attention layers of mBERT are still not shared across languages, instead forming clearly distinguishable clusters per language. This leads to a fundamental challenge that we address in this work: enhancing the language neutrality so that the fine-tuned model is generalizable across languages for the downstream task. To this goal, we introduce a data augmentation method via multilingual codeswitching, where the original sentence in English is code-switched into randomly selected languages. For example, chunk-level code-switching creates sentences with phrases in multiple languages as shown in Figure 2 . We show that mBERT can be fine-tuned for many languages starting only with monolingual source-language data, leading to better performance in zero-shot settings.",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 101,
"end": 124,
"text": "(Conneau et al., 2020a)",
"ref_id": "BIBREF6"
},
{
"start": 418,
"end": 437,
"text": "Pires et al. (2019)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [
{
"start": 753,
"end": 761,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1425,
"end": 1433,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Further, we show how code-switching with languages from different language families impacts the model's performance on individual target languages, even finding some counter-intuitive results. For instance, training on data code-switched between English and Sino-Tibetan languages is as helpful for Hindi (an Indo-Aryan Indo-European language) as code-switching with other Indo-Aryan languages, and Turkic languages can be helpful for both Chinese and Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present a data augmentation method via multilingual code-switching to enhance the language neutrality of transformerbased language models such as mBERT for finetuning to a downstream NLU task of intent prediction and slot filling. b) By studying different language families, we show how code-switching can be used to aid zero-shot cross-lingual learning for low-resource languages. c) We release a new human-annotated tweet dataset, collected during Haiti earthquake disaster, for intent prediction and slot filling in English and Haitian Creole.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions: a)",
"sec_num": null
},
{
"text": "This section describes our problem definition, codeswitching algorithm, language families, and the training methodology.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Given a source (S) and a set of target (T) languages, the goal is to train a classifier using data only in the source language and predict examples from the completely unseen target languages. We assume the target language is unknown during training (fine-tuning) time, which makes direct translation to target infeasible. In this context, we use code-switching (cs) to augment the monolingual source data. Thus, the input, augmented input, and output of our problem can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.1"
},
{
"text": "Algorithm 1: Data Augmentation via Mul- tilingual Code-Switching (Chunk-Level) Input: X en ut , y en , y en sl , lT Output: X cs ut , y cs , y cs sl X cs ut \u2190 \u2205, y cs \u2190 \u2205, y cs sl \u2190 \u2205 lset = googletrans.languages \u2212 lT for i \u2208 1.. k do for j \u2208 1.. len(X en ut ) do G cs \u2190 \u2205, L cs \u2190 \u2205 chunks = slot_chunks(X en ut [j], y en sl [j]) for c \u2208 chunks do l \u2190 random.choice(lset) t \u2190 translate(c, l) G cs \u2190 G cs \u222a t L cs \u2190 L cs \u222a align_label(c, t) end X cs ut \u2190 X cs ut \u222a G cs y cs \u2190 y cs \u222a y cs [j] y cs sl \u2190 y cs sl \u222a L cs end end Input: X S ut , y S , y S sl , l T Code-Switched Input: X cs ut , y cs , y cs sl Output: y T , y T sl \u2190 predict(X T ut )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.1"
},
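{
"text": "To make Algorithm 1 concrete, the following is a minimal Python sketch of the chunk-level augmentation loop. The translate(tokens, lang) helper (e.g., a thin wrapper around a Google Translate client) is an assumption, and slot_chunks/align_label are sketched in Section 2.2 below; none of this is the authors' released implementation.

import random

def code_switch_dataset(X_en, y_en, y_sl_en, l_T, lang_pool, k, translate):
    # Chunk-level multilingual code-switching (a sketch of Algorithm 1).
    # X_en: token lists; y_en: intent labels; y_sl_en: BIO slot-label lists.
    # l_T: target languages to exclude; lang_pool: all supported languages.
    lset = [l for l in lang_pool if l not in l_T]
    X_cs, y_cs, y_sl_cs = [], [], []
    for _ in range(k):  # k augmentation rounds over the whole dataset
        for tokens, intent, slots in zip(X_en, y_en, y_sl_en):
            new_tokens, new_slots = [], []
            for chunk_toks, chunk_tags in slot_chunks(tokens, slots):
                lang = random.choice(lset)           # random helper language per chunk
                trans = translate(chunk_toks, lang)  # translate just this chunk
                new_tokens.extend(trans)
                new_slots.extend(align_label(chunk_tags, trans))
            X_cs.append(new_tokens)
            y_cs.append(intent)  # the sentence-level intent label is unchanged
            y_sl_cs.append(new_slots)
    return X_cs, y_cs, y_sl_cs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.1"
},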
{
"text": "where X ut represents sentences, y their ground truth intent classes, y sl the slot labels for the words in those sentences, and l T the set of target languages. An example sentence, its intent class, and slot labels are shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 230,
"end": 238,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.1"
},
{
"text": "Multilingual masked language models, such as mBERT (Devlin et al., 2019) , are trained using large datasets of publicly available unlabeled corpora such as Wikipedia. Such corpora largely remain monolingual at the sentence level because the presence of intra-sentence code-switched data in written texts is likely scarce. The masked words that needed to be predicted usually are in the same language as their surrounding words. We study how code-switching can enhance the language neutrality of such language models by augmenting it with artificially code-switched data for fine-tuning it to a downstream task. Algorithm 1 explains this codeswitching process at the chunk-level. When using slot filling datasets, slot labels that are grouped by BIO (Ramshaw and Marcus, 1999) tags constitute natural chunks, as shown in Figure 2 . To summarize the algorithm, we take a sentence, take each chunk from that sentence, perform a translation into a random language using Google's NMT system (Wu et al., 2016) , and align the slot labels to fit the translation, i.e., label propagation through alignment as the translated sentence do not preserve the number and order of words in the original sentence. At the chunk-level, we use a direct alignment. The BIO-tagged labels are recreated for the translated phrase based on the word tokens. More complex methods could be applied here to improve the alignment of the slot labels such as fast-align (Dyer et al., 2013) or soft-align ), but we leave this for future work. Code-Switching at the word-level essentially translates every word randomly, while at the sentence-level translates the entire sentence. During the experimental evaluation process, to build a language-neutral model using monolingual source (English) data, all eight target languages are excluded from the code-switching procedure to avoid unfair model comparisons, i.e. removing target languages (l T ) from lset in Algorithm 1.",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 749,
"end": 775,
"text": "(Ramshaw and Marcus, 1999)",
"ref_id": "BIBREF44"
},
{
"start": 986,
"end": 1003,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF60"
},
{
"start": 1438,
"end": 1457,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 820,
"end": 828,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Multilingual Code-Switching",
"sec_num": "2.2"
},
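{
"text": "The slot_chunks and align_label helpers named in Algorithm 1 can be sketched as follows. This is a minimal reading of the direct alignment described above, assuming well-formed BIO tags, not the authors' exact code: slot_chunks groups a sentence into B-x/I-x slot spans and maximal runs of O tokens, and align_label recreates BIO tags for a translated chunk of arbitrary length.

def slot_chunks(tokens, bio_tags):
    # Group tokens into chunks: each B-x/I-x span is one chunk and each
    # maximal run of O tokens is one chunk.
    chunks, cur_toks, cur_tags = [], [], []
    for tok, tag in zip(tokens, bio_tags):
        boundary = tag.startswith('B-') or (
            bool(cur_tags) and (tag == 'O') != (cur_tags[-1] == 'O'))
        if cur_toks and boundary:
            chunks.append((cur_toks, cur_tags))
            cur_toks, cur_tags = [], []
        cur_toks.append(tok)
        cur_tags.append(tag)
    if cur_toks:
        chunks.append((cur_toks, cur_tags))
    return chunks

def align_label(chunk_tags, translated_tokens):
    # Direct alignment: an O chunk stays all O; a slot chunk of type x becomes
    # B-x followed by I-x for however many tokens the translation produced.
    if chunk_tags[0] == 'O':
        return ['O'] * len(translated_tokens)
    slot_type = chunk_tags[0].split('-', 1)[1]
    return ['B-' + slot_type] + ['I-' + slot_type] * (len(translated_tokens) - 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Code-Switching",
"sec_num": "2.2"
},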
{
"text": "Complexity. The augmentation process is repeated k times per sentence producing a new augmented dataset of size k \u00d7 n, where n is the size of the original dataset, i.e. space complexity of O(k \u00d7 n). For T translations per sentence, Algorithm 1 has a runtime complexity of O(k \u00d7 n \u00d7 T ) assuming constant time for alignment. Word-level requires as many translations as the number of words but sentence-level requires only one. An increase in the dataset size also increases the training time, but the advantage is one model appropriate for many languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Code-Switching",
"sec_num": "2.2"
},
{
"text": "A language family is defined as a group of related languages that likely share a common ancestor. For example, Portuguese, Spanish, French, Italian, and Romanian are all derived from Latin (Rowe and Levine, 2017) . We use language families to study their impact on the target languages. We augment the source language with code-switching to a particular language family. For instance, codeswitching the English dataset with Turkic language family and testing on Japanese can reveal how closely the two are aligned in the vector space of a pre-trained multilingual model. We work with 6 language groups: Afro-Asiatic (Voegelin and Voegelin, 1976) , Germanic (Harbert, 2006) , Indo-Aryan (Masica, 1993) , Romance (Elcock and Green, 1960) , and Turkic (Johanson and Johanson, 2015) , also grouping Sino-Tibetan, Koreanic and Japonic (Shafer, 1955; Miller, 1967) . 2 Germanic, Romance, and Indo-Aryan are genera of the Indo-European family. Language groups and corresponding languages are shown in Table 1 . Each group is selected based on a target language in the dataset, and the Afro-Asiatic family is added as an extra group. In experiments, lset in Algorithm 1 will be assigned languages from a specific family.",
"cite_spans": [
{
"start": 189,
"end": 212,
"text": "(Rowe and Levine, 2017)",
"ref_id": "BIBREF46"
},
{
"start": 616,
"end": 645,
"text": "(Voegelin and Voegelin, 1976)",
"ref_id": "BIBREF57"
},
{
"start": 657,
"end": 672,
"text": "(Harbert, 2006)",
"ref_id": "BIBREF20"
},
{
"start": 675,
"end": 700,
"text": "Indo-Aryan (Masica, 1993)",
"ref_id": null
},
{
"start": 711,
"end": 735,
"text": "(Elcock and Green, 1960)",
"ref_id": "BIBREF14"
},
{
"start": 749,
"end": 778,
"text": "(Johanson and Johanson, 2015)",
"ref_id": "BIBREF24"
},
{
"start": 830,
"end": 844,
"text": "(Shafer, 1955;",
"ref_id": "BIBREF48"
},
{
"start": 845,
"end": 858,
"text": "Miller, 1967)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 994,
"end": 1001,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Language Families",
"sec_num": "2.3"
},
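{
"text": "As a usage sketch, restricting lset in Algorithm 1 to a single family only changes how the language pool is built. Table 1 is not reproduced in this parse, so the family memberships below are illustrative assumptions (standard Google Translate language codes), not the paper's exact lists.

# Hypothetical family -> language-code mapping for the experiments of Section 2.3.
FAMILIES = {
    'romance': ['es', 'pt', 'fr', 'it', 'ro'],
    'germanic': ['de', 'nl', 'da', 'sv', 'no'],
    'turkic': ['tr', 'kk', 'ky', 'uz', 'az'],
}

def family_lset(family, l_T):
    # Assign lset from one family, still excluding the evaluation targets l_T.
    return [l for l in FAMILIES[family] if l not in l_T]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Families",
"sec_num": "2.3"
},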
{
"text": "Joint training is traditionally used for intent prediction and slot filling to exploit the correlation between the two tasks. This is done by feeding the feature vectors of one model to another or by sharing layers of a neural network followed by training the tasks together. So, a standard joint model loss can be defined as a combination of intent (L i ) and slot (L sl ) losses. i.e., L = \u03b1L i + \u03b2L sl , where \u03b1 and \u03b2 are corresponding task weights. Prior works (Goo et al., 2018; Schuster et al., 2019; Liu and Lane, 2016; Haihong et al., 2019 ) that use BiL-STM or RNN are now modified to BERT-based implementations explored in more recent works Hardalov et al., 2020; . A standard Joint model consists of BERT outputs from the final hidden state (classification (CLS) token for intent and m word tokens for slots) fed to linear layers to get intent and slot predictions. Assuming h cls represents the CLS token and h m represents a token from the remaining word-level tokens, the BERT model outputs are defined as :",
"cite_spans": [
{
"start": 465,
"end": 483,
"text": "(Goo et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 484,
"end": 506,
"text": "Schuster et al., 2019;",
"ref_id": "BIBREF47"
},
{
"start": 507,
"end": 526,
"text": "Liu and Lane, 2016;",
"ref_id": "BIBREF30"
},
{
"start": 527,
"end": 547,
"text": "Haihong et al., 2019",
"ref_id": "BIBREF19"
},
{
"start": 651,
"end": 673,
"text": "Hardalov et al., 2020;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training",
"sec_num": "2.4"
},
{
"text": "p i = sof tmax(W i h cls + b i ) p sl m = sof tmax(W sl hm + b sl ) \u2200m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training",
"sec_num": "2.4"
},
{
"text": "(1) with a multi-class cross-entropy loss 3 for both intent (L i ) and slots (L sl ). We will use this model as 2 Each of the Sino-Tibetan, Koreanic, and Japonic families have a single high-resource member (Chinese, Korean, Japanese respectively). We only group them as an additional interesting data point, not because we ascribe to any theories that link them typologically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training",
"sec_num": "2.4"
},
{
"text": "3 L = \u2212 1 n \u2211\ufe01 n i=1 [y log \u0177]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training",
"sec_num": "2.4"
},
{
"text": "our baseline for joint training. Our goal will be to show that code-switching on top of joint training improves the performance. The output of Algorithm 1 will be the input used for joint training on BERT for code-switched experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training",
"sec_num": "2.4"
},
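{
"text": "As an illustration of the joint model described above, here is a minimal PyTorch sketch: the CLS vector h_cls feeds an intent head, every word-level token vector h_m feeds a shared slot head, and the losses are combined as L = αL_i + βL_sl. The paper builds on Hugging Face models such as BertForSequenceClassification; this standalone module is our own sketch, not that implementation.

import torch
import torch.nn as nn
from transformers import BertModel

class JointIntentSlot(nn.Module):
    def __init__(self, n_intents, n_slots, alpha=1.0, beta=1.0):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-multilingual-uncased')
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, n_intents)  # p^i = softmax(W^i h_cls + b^i)
        self.slot_head = nn.Linear(hidden, n_slots)      # p^sl_m = softmax(W^sl h_m + b^sl)
        self.alpha, self.beta = alpha, beta              # task weights in L = alpha*L_i + beta*L_sl
        self.ce = nn.CrossEntropyLoss(ignore_index=-100) # -100 masks padding/sub-word positions

    def forward(self, input_ids, attention_mask, intent_labels=None, slot_labels=None):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.last_hidden_state[:, 0])  # h_cls
        slot_logits = self.slot_head(out.last_hidden_state)            # h_m for all m
        loss = None
        if intent_labels is not None and slot_labels is not None:
            l_i = self.ce(intent_logits, intent_labels)
            l_sl = self.ce(slot_logits.view(-1, slot_logits.size(-1)), slot_labels.view(-1))
            loss = self.alpha * l_i + self.beta * l_sl
        return loss, intent_logits, slot_logits",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training",
"sec_num": "2.4"
},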
{
"text": "3 Datasets Benchmark Dataset. We use the latest multilingual benchmark dataset of MultiATIS++ , which was created by manually translating the original ATIS (Price, 1990) dataset from English (en) to 8 other languages: Spanish (es), Portuguese (pt), German (de), French (fr), Chinese (zh), Japanese (ja), Hindi (hi), and Turkish (tr). The dataset consists of utterances for each language with an 'intent' label for 'flight intent' and 'slot' labels for the word tokens in BIO format. A sample datapoint in English is shown in Figure 2 . Table 2 presents the dataset statistics for the benchmark dataset of MultiATIS++ as well as for the newly constructed dataset for disaster NLU.",
"cite_spans": [],
"ref_spans": [
{
"start": 525,
"end": 533,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 536,
"end": 544,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Joint Training",
"sec_num": "2.4"
},
{
"text": "New Dataset for Disaster NLU. We construct a new intent and slot filling dataset of tweets collected during natural disasters, in two languages: English (en) and Haitian Creole (ht). The tweets originally were released by Appen. 4 For English, a language expert labeled the tweets, and for Haitian Creole, we used Amazon Mechanical Turk with five annotators. Intent classes include: 'request' and 'others'. Slot filling consists of 5 labels: 'medical_help', 'food', 'water', 'shelter', and 'other_aid'. Table 2 provides the dataset statistics.",
"cite_spans": [],
"ref_spans": [
{
"start": 503,
"end": 510,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Joint Training",
"sec_num": "2.4"
},
{
"text": "We use the traditional cross-lingual task setting where each experiment consists of a source language and a target language. A model is trained on the source data (English) and evaluated on the target data (8 other languages). For code-switching experiments, the English dataset is augmented with multilingual code-switching before training. Our implementation is in PyTorch (Paszke et al., 2019) and we use the pre-trained bert-base-multilingualuncased with BertForSequenceClassification (Wolf et al., 2020) model. Maximum epochs is set to 25 with an early stopping patience of 5, batch size of 32, and Adam optimizer (Kingma and Ba, 2014) with a learning rate of 5e\u22125. We select the best model on the validation set. Consistent with the metrics reported for intent prediction and slot filling evaluation in the past, we also accuracy for intent and micro F1 5 to measure slot performance.",
"cite_spans": [
{
"start": 375,
"end": 396,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 489,
"end": 508,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
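{
"text": "A minimal training-loop skeleton matching the reported setup (at most 25 epochs, early-stopping patience of 5, Adam at 5e-5; batch size 32 is assumed to be set in the DataLoaders). train_one_epoch and evaluate are hypothetical caller-supplied helpers, and the exact early-stopping criterion of the released code may differ.

import torch
from torch.optim import Adam

def fine_tune(model, train_loader, val_loader, train_one_epoch, evaluate,
              epochs=25, patience=5, lr=5e-5):
    optimizer = Adam(model.parameters(), lr=lr)
    best_val, patience_left = float('inf'), patience
    for _ in range(epochs):
        train_one_epoch(model, train_loader, optimizer)  # assumed helper
        val_loss = evaluate(model, val_loader)           # assumed helper
        if val_loss < best_val:
            best_val, patience_left = val_loss, patience
            torch.save(model.state_dict(), 'best_model.pt')  # keep the best validation model
        else:
            patience_left -= 1
            if patience_left == 0:
                break  # stop after `patience` epochs without improvement
    model.load_state_dict(torch.load('best_model.pt'))  # select the best model
    return model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},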
{
"text": "Since we assume that target language is not known before hand, Translate-Train (TT) (Xu et al., 2020) method is not a suitable baseline. Rather, we set this to be an upper bound, i.e. translating to the target language and fine-tuning the model should intuitively outperform a generic model. Additionally, we add code-switching to this TT model to assess if augmentation negatively impacts its performance. The zero-shot baselines for the codeswitching experiments use an English-Only model, which is fine-tuned over the pre-trained mBERT separately for each task and an English-only Joint model .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines & Upper Bound",
"sec_num": "4.1"
},
{
"text": "Effect of Multilingual Code-Switching. The runtime of the models in Table 5 (Appendix B) shows that code-switching is expensive, taking up to five hours for five augmentation rounds (k = 5). This is because there are k times more data compared to the monolingual source data. Increasing the number of code-switchings (k) for a sentence from 5 to 50 improves the performance by +1%, while increasing the run-time by a large margin. Hence, such tradeoffs should be considered when picking k for real-world applications where time to deployment might be of the essense.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "In the translate-train (upper bound) scenario, it is not immediately clear if augmentation helps, since data in the same language as the target are always preferable to other language or code-switched data. At a minimum, augmentation does not hinder upper-bound performance (Table 3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 283,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "For both intent and slot performance, the chunklevel model remained robust across the languages. For intent, the difference between word-level and sentence-level was insignificant. For slot, sentencelevel was in par with chunk-level on average. Thus, Table 3 : Performance evaluation of code-switching with setting k = 5. CS: Code-Switching. Reported scores are average of 5 independent runs (including a separate code-switched data for each run). m = number of distinct models to be trained. *: modified BERT-based implementations . \u2020: Similar to Qin et al., 2020 but modified for slot-filling task and also excluding target language from randomized switching. \u2660 : The difference is significant with p < 0.05 using Tukey HSD (conducted between Joint en\u2212only + CCS versus Joint en\u2212only Baseline for each language). we think that code-switching at chunk-level is safer for avoiding semantic discrepancies (which are a danger in the word-level) while also better encouraging intra-sentence language neutrality.",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "Evaluation on Disaster Dataset. We found that disaster data is more challenging than the ATIS dataset for transfer learning in NLU. The predictive performance is shown in Table 4 . Code-Switching improved intent accuracy by +12.5% and slot F1 by +2.3%, which is quite promising considering the domain mismatch (tweets vs airline guides). Joint training added +0.9% improvement to intent accuracy, however did not seem to help slot F1. This might imply a weaker correlation between the two tasks in real-world data, i.e. a mention of 'food' or 'shelter' in a tweet may not always mean that there is a 'request' or vice-versa. The upper bound of translate-train method did not perform any better than the randomly code-switched model which seemed counter-intuitive. This might be due to the lack of strong representation for Haitian Creole in the pre-trained model, although it is similar to French, or due to the limitation of the machine translation system.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 178,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "Impact of Language Families. Results of language family analysis are shown in Figure 3 for the 4 languages that showed significant improvements for both intent and slots in Table 3 . The input in English is independently code-switched using 6 different language families. Note that the target language is always excluded from the group when evaluating on the same, i.e. Hindi is excluded from Indo-Aryan family when that family is being evaluated on it. Translate-train model is provided as a frame of reference and upper bound. Generally, as expected, we found that language families helped their corresponding languages, i.e. Romance helped Spanish, Germanic helped German, and so on. An exception is our loose group of Sino-Tibetan, Koreanic, and Japonic languages -for both Chinese and Japanese, languages from the Turkic language family helped more than others. On the other hand, the Sino-tibetan, Japonic, and Koreanic group helps Hindi more than other Indo-European languages. We believe this highlights the necessity for methods like the one of Xia et al. (2020) that can a priori identify the best helper language or group of languages that can benefit downstream tasks for low resource languages.",
"cite_spans": [
{
"start": 1054,
"end": 1071,
"text": "Xia et al. (2020)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [
{
"start": 78,
"end": 86,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 173,
"end": 180,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "Control Experiments on k. Hyperparameter k controls the amount of code-switched data. k = 0 represents original size with no code-switching, k = 1 represents original size with code-switching, and k = 10 means 10-times more code-switched data than the original. The main experiments in Table 3 use k = 5. Figure 4 shows how varying k affects performance. For this analysis, we consider 4 target languages on which code-switching produced significant results in Table 3 on both Intent Accuracy and Slot F1: Chinese, Japanese, Hindi, and Turkish. Intuitively, we observe that as k increases, too much code-switching becomes expensive in terms of runtime, while performance improvement slowly plateaus. For Slot F1 performance in all four cases, unlike Intent, we observe an interesting dip when k = 1, which represents the augmentation having just one copy of codeswitching (without the original non-code-switched data), as compared to k = 0. Adding the original data to one round of code-switched data (k = 2) leads to big improvements. Overall, we see improvement for both tasks, with Slot F1 plateauing earlier. Table 5 and Figure 10 in Appendix B show the impact of code-switching on training runtime, which increases as k increases. Thus, finding an optimal value of k and specific language groups are essential for downstream applications.",
"cite_spans": [],
"ref_spans": [
{
"start": 286,
"end": 293,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 305,
"end": 313,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 461,
"end": 468,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1113,
"end": 1120,
"text": "Table 5",
"ref_id": null
},
{
"start": 1125,
"end": 1134,
"text": "Figure 10",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "mBERT versus XLM-R. Additional performance evaluations and benefits of code-switching on XLM-R (Conneau et al., 2020a) , a stronger multilingual language model, are provided in Appendix A. Note that XLM-R is trained using Common-Crawl and is likely to be exposed to some code-switched data. Thus, we focus primarily on mBERT which largely remains monolingual at the sentence-level to identify the unbiased impact of code-switching during fine-tuning. Furthermore, runtime and hyperparameter tuning along with insights into layers to freeze before training are shown in Appendix B.",
"cite_spans": [
{
"start": 95,
"end": 118,
"text": "(Conneau et al., 2020a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "Error Analysis. Selecting intent classes with support > 10, Figure 5 shows how each class is positively or negatively impacted by code-switching. Improvement was primarily on 'airfare', 'distance' 'capacity', 'airline', and 'ground_service' which had longer sentences such as 'Please tell me which airline has the most departures from Atlanta' when compared to 'abbreviations' and 'airport' classes that included very short phrases like 'What does EA mean?' However, note that Spanish and German did not improve much, aligning with our results in Table 3 . For slot labels in Figure 6 , we selected the ones with support > 50 and that have different characteristics, e.g. 'name', 'code', etc. The overall trend in slot performance shows improvements for labels such as 'day_name', 'airport_code', and 'city_name' and slight variations in labels such as 'fight_number' and 'period_of_day', implying textual slots benefiting over numeric ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 547,
"end": 554,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 576,
"end": 584,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "Cross-Lingual Transfer. Researchers have studied cross-lingual tasks in various settings such as sentiment/sequence classification (Wan, 2009; Eriguchi et al., 2018; , named entity recognition (Zirikly and Hagiwara, 2015 ; Tsai et Xie et al., 2018) , parts-of-speech tagging (Yarowsky et al., 2001; T\u00e4ckstr\u00f6m et al., 2013; Plank and Agi\u0107, 2018) , and natural language understanding (He et al., 2013; Upadhyay et al., 2018; . The methodology for most of the current approaches for cross-lingual tasks can be categorizes as: a) multilingual representations from pre-trained or fine-tuned models such as mBERT (Devlin et al., 2019) or XLM-R (Conneau et al., 2020a) , b) machine translation followed by alignment (Shah et al., 2010; Yarowsky et al., 2001; Ni et al., 2017) , or c) a combination of both . Before transformer models, effective approaches included domain adversarial training to extract language-agnostic features (Ganin et al., 2016; and word alignment methods such as MUSE (Conneau et al., 2017) to align fastText word vectors (Bojanowski et al., 2017) . Recently, Conneau et al., 2020b show that having shared parameters in the top layers of the multilingual encoders can be used to align different languages quite effectively on tasks such as XNLI (Conneau et al., 2018) . Monolingual models for joint slot filling and intent prediction have used attention-based RNN (Liu and Lane, 2016) and attention-based BiLSTM with a slot gate (Goo et al., 2018) on benchmark datasets (Price, 1990; Coucke et al., 2018) . These methods have shown that a joint method can enhance both tasks and slot filling can be conditioned on the learned intent. A related approach iteratively learns the relationship between the two tasks (Haihong et al., 2019) . Recently, BERT-based approaches (Hardalov et al., 2020; have improved results. On the other hand, cross-lingual versions of this joint task include a low-supervision based approach for Hindi and Turkish (Upadhyay et al., 2018) , new datasets for Spanish and Thai (Schuster et al., 2019) , and recently creating MultiATIS++, a comprehensive dataset in 9 languages. The joint task mentioned above in a pure zero-shot setting is one of the motivations for our work. A Zero-shot is the setting where the model sees a new distribution of examples only during test (prediction) time (Xian et al., 2017; Srivastava et al., 2018; Romera-Paredes and Torr, 2015) . Thus, in our setting, we assume that target language is unknown during training, so that our model is generalizable across multiple languages.",
"cite_spans": [
{
"start": 131,
"end": 142,
"text": "(Wan, 2009;",
"ref_id": "BIBREF58"
},
{
"start": 143,
"end": 165,
"text": "Eriguchi et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 193,
"end": 220,
"text": "(Zirikly and Hagiwara, 2015",
"ref_id": "BIBREF69"
},
{
"start": 231,
"end": 248,
"text": "Xie et al., 2018)",
"ref_id": "BIBREF63"
},
{
"start": 275,
"end": 298,
"text": "(Yarowsky et al., 2001;",
"ref_id": "BIBREF66"
},
{
"start": 299,
"end": 322,
"text": "T\u00e4ckstr\u00f6m et al., 2013;",
"ref_id": "BIBREF53"
},
{
"start": 323,
"end": 344,
"text": "Plank and Agi\u0107, 2018)",
"ref_id": "BIBREF41"
},
{
"start": 382,
"end": 399,
"text": "(He et al., 2013;",
"ref_id": "BIBREF22"
},
{
"start": 400,
"end": 422,
"text": "Upadhyay et al., 2018;",
"ref_id": null
},
{
"start": 607,
"end": 628,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 638,
"end": 661,
"text": "(Conneau et al., 2020a)",
"ref_id": "BIBREF6"
},
{
"start": 709,
"end": 728,
"text": "(Shah et al., 2010;",
"ref_id": "BIBREF49"
},
{
"start": 729,
"end": 751,
"text": "Yarowsky et al., 2001;",
"ref_id": "BIBREF66"
},
{
"start": 752,
"end": 768,
"text": "Ni et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 924,
"end": 944,
"text": "(Ganin et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 985,
"end": 1007,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 1039,
"end": 1064,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 1077,
"end": 1098,
"text": "Conneau et al., 2020b",
"ref_id": "BIBREF9"
},
{
"start": 1262,
"end": 1284,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 1381,
"end": 1401,
"text": "(Liu and Lane, 2016)",
"ref_id": "BIBREF30"
},
{
"start": 1487,
"end": 1500,
"text": "(Price, 1990;",
"ref_id": "BIBREF42"
},
{
"start": 1501,
"end": 1521,
"text": "Coucke et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 1728,
"end": 1750,
"text": "(Haihong et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 1785,
"end": 1808,
"text": "(Hardalov et al., 2020;",
"ref_id": "BIBREF21"
},
{
"start": 1956,
"end": 1979,
"text": "(Upadhyay et al., 2018)",
"ref_id": null
},
{
"start": 2016,
"end": 2039,
"text": "(Schuster et al., 2019)",
"ref_id": "BIBREF47"
},
{
"start": 2330,
"end": 2349,
"text": "(Xian et al., 2017;",
"ref_id": "BIBREF62"
},
{
"start": 2350,
"end": 2374,
"text": "Srivastava et al., 2018;",
"ref_id": "BIBREF51"
},
{
"start": 2375,
"end": 2405,
"text": "Romera-Paredes and Torr, 2015)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Code-Switching. Linguistic code-switching is a phenomenon where multilingual speakers alternate between languages. Recently, monolingual models have been adapted to code-switched text in entity recognition (Aguilar and Solorio, 2019) , part-ofspeech tagging (Soto and Hirschberg, 2018; Ball and Garrette, 2018) , sentiment analysis (Joshi et al., 2016) and language identification (Mave et al., 2018; Yirmibe\u015foglu and Eryigit, 2018; Mager et al., 2019) . Recently, KhudaBukhsh et al., 2020 have proposed an approach to sample code-mixed documents using minimal supervision. Qin et al., 2020 allows randomized code-switching to include the target language, as shown in their Figure 3 . In our context for example, if the target language is German, we ensure that there is no code-switching to German during training. We consider this distinction essential to evaluate a true zero-shot learning scenario and prevent any bias when comparing with translate-and-train. present a non-zero-shot approach that performs code-switching to target languages, and Jiang et al. (2020) present a code-switching based method to improve the ability of multilingual language mod-els for factual knowledge retrieval. Contemporary work by Tan and Joty, 2021 makes use of both word and phrase-level code-mixing to switch to a set of languages to perform adversarial training for XNLI. Code-switching and other data augmentation techniques have been applied to the pre-training stage in recent works (Chaudhary et al., 2020; Kale and Siddhant, 2021; Dufter and Sch\u00fctze, 2020) . However, pre-training is outside the scope of this work. In addition to studying cross-lingual slot filling and language families, another key distinction of our method is that we completely ignore the target language during training to represent a fully zero-shot scenario. The main advantage is that with enhanced cross-lingual generalizability, it can be deployed out-of-the-box, as our training is conducted independently of the target language.",
"cite_spans": [
{
"start": 206,
"end": 233,
"text": "(Aguilar and Solorio, 2019)",
"ref_id": "BIBREF0"
},
{
"start": 258,
"end": 285,
"text": "(Soto and Hirschberg, 2018;",
"ref_id": "BIBREF50"
},
{
"start": 286,
"end": 310,
"text": "Ball and Garrette, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 332,
"end": 352,
"text": "(Joshi et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 381,
"end": 400,
"text": "(Mave et al., 2018;",
"ref_id": "BIBREF33"
},
{
"start": 401,
"end": 432,
"text": "Yirmibe\u015foglu and Eryigit, 2018;",
"ref_id": "BIBREF67"
},
{
"start": 433,
"end": 452,
"text": "Mager et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 1478,
"end": 1502,
"text": "(Chaudhary et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 1503,
"end": 1527,
"text": "Kale and Siddhant, 2021;",
"ref_id": "BIBREF26"
},
{
"start": 1528,
"end": 1553,
"text": "Dufter and Sch\u00fctze, 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 674,
"end": 682,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Our study shows that augmenting the monolingual input data with multilingual code-switching via random translations at the chunk-level helps a zeroshot model to be language neutral when evaluated on unseen languages. This approach enhanced the generalizability of pre-trained language models such as mBERT when fine-tuning for downstream tasks of intent detection and slot filling. Additionally, we presented an application of this method using a new annotated dataset of disaster tweets. Further, we studied code-switching with language families and their impact on specific target languages. Addressing code-switching with language families during the pre-training phase and releasing a larger dataset of annotated disaster tweets in more languages are planned for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "7"
},
{
"text": "The tweet dataset that we constructed for disaster NLU was originally released by Appen 6 , and we use it to construct slot labels in two languages: English (en) and Haitian Creole (ht). Data statement that includes annotator guidelines for the labeling jobs and other dataset information will be provided with the implementation. From a broader impact perspective, our code and developed models are open-source and allows NLP technology to be accessible to information systems for emergency services and social scientists in quickly deploying model during disaster events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": "8"
},
{
"text": "Implementation and dataset are available at https://github.com/jitinkrishnan/ Multilingual-ZeroShot-SlotFilling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://appen.com/datasets/ combined-disaster-response-data/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To address class imbalance for slots, we use Micro F1 instead of Macro F1, which is why our F1 scores are inflated when compared to scores in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://appen.com/datasets/ combined-disaster-response-data/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by U.S. National Science Foundation grants IIS-1815459, IIS-1657379, and 2040926. This work was also supported in part by the grant H-4Q21-009 from the Commonwealth Cyber Initiative, an investment in the advancement of cyber R&D, innovation, and workforce development (for more information about CCI, visit www.cyberinitiative. org). The authors are thankful to Ming Sun and Alexis Conneau for giving valuable insights on multilingual model training, as well as to the anonymous reviewers for their constructive feedback. We also acknowledge ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University, were most experiments were conducted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "We conduct an additional analysis on XLM-R (Conneau et al., 2020a) and compare it with mBERT (Devlin et al., 2019) . The implementation is very similar in PyTorch (Paszke et al., 2019) but using the pre-trained xlm-roberta-base with RobertaForSequenceClassification (Wolf et al., 2020) as the XLM-R model. We observe that, setting k = 5, XLM-R outperforms mBERT on average (by 2% Intent Accuracy and 1.5% Slot F1). Individually, XLM-R improved Chinese, Japanese, Portuguese, and Turkish for Intent Prediction and German, Chinese, Japanese, Portuguese, and Hindi for Slot Filling as shown in Figure 7 . We observe a trend similar to mBERT with k on XLM-R shown in Figure 8 . However, for XLM-R, we observe that randomized code-switching did not help Chinese for Intent Prediction and Hindi for Slot F1. If codeswitched to a specific language family, instead of switching to random languages, it might improve their performance. A deeper dive into XLM-R and language families are left for future work. ",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "(Conneau et al., 2020a)",
"ref_id": "BIBREF6"
},
{
"start": 93,
"end": 114,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 163,
"end": 184,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 266,
"end": 285,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [
{
"start": 591,
"end": 599,
"text": "Figure 7",
"ref_id": null
},
{
"start": 663,
"end": 671,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "A mBERT versus XLM-R",
"sec_num": null
},
{
"text": "For joint training with same task weights, we tuned \u03b1 and \u03b2 using grid search to see the strength of correlation between the tasks. For intent, the (\u03b1, \u03b2) combination of (1.0, 0.6) performed well, while (1.0, 1.0) for slots. This suggests that intent benefiting slot might be slightly more than slot benefiting intent. Additionally, during fine-tuning, freezing the layers of the transformer affected the model performance as shown in Figure 9 . Keeping the first 8 layers frozen gave the best performance. By freezing the earlier layers, the transformer can retain its most fundamental feature information gained from the massive pre-training step, and by unfreezing some top layers, it can undergo fine-tuning. Additionally, latency for training a code-switched model is shown in Table 5 and how runtime varies with an increase in k is shown in Figure 10 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 435,
"end": 443,
"text": "Figure 9",
"ref_id": null
},
{
"start": 782,
"end": 789,
"text": "Table 5",
"ref_id": null
},
{
"start": 847,
"end": 856,
"text": "Figure 10",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Hyperparameter Tuning & Runtime",
"sec_num": null
},
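{
"text": "A sketch of the layer-freezing scheme found to work best in this appendix: keep the first 8 of mBERT's 12 encoder layers frozen and fine-tune the rest. Freezing the embedding layer as well is our assumption, since the text only specifies the first 8 layers.

from transformers import BertModel

model = BertModel.from_pretrained('bert-base-multilingual-uncased')

for p in model.embeddings.parameters():  # assumption: embeddings frozen too
    p.requires_grad = False
for layer in model.encoder.layer[:8]:    # freeze encoder layers 0-7
    for p in layer.parameters():
        p.requires_grad = False
# Layers 8-11 and any task heads remain trainable during fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Hyperparameter Tuning & Runtime",
"sec_num": null
},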
{
"text": "MTT Jointen Jointcs JointTT 05:04:49 1:31:32 00:11:50 01:06:50 00:11:04 Table 5 : Runtime on Google Colab (K80 GPU for training joint models). M T T : Machine Translation to Target. Note that M T T and J T T are for one target language (averaged). ",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "CS (k=5)",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "From english to code-switching: Transfer learning with strong morphological clues",
"authors": [
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.05158"
]
},
"num": null,
"urls": [],
"raw_text": "Gustavo Aguilar and Thamar Solorio. 2019. From english to code-switching: Transfer learning with strong morphological clues. arXiv preprint arXiv:1909.05158.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Part-of-speech tagging for code-switched, transliterated texts without explicit language identification",
"authors": [
{
"first": "Kelsey",
"middle": [],
"last": "Ball",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3084--3089",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelsey Ball and Dan Garrette. 2018. Part-of-speech tagging for code-switched, transliterated texts with- out explicit language identification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3084-3089.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Dict-mlm: Improved multilingual pre-training using bilingual dictionaries",
"authors": [
{
"first": "Aditi",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Raman",
"suffix": ""
},
{
"first": "Krishna",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Jiecao",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.12566"
]
},
"num": null,
"urls": [],
"raw_text": "Aditi Chaudhary, Karthik Raman, Krishna Srinivasan, and Jiecao Chen. 2020. Dict-mlm: Improved mul- tilingual pre-training using bilingual dictionaries. arXiv preprint arXiv:2010.12566.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert for joint intent classification and slot filling",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Zhuo",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.10909"
]
},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Zhu Zhuo, and Wen Wang. 2019. Bert for joint intent classification and slot filling. arXiv preprint arXiv:1902.10909.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adversarial deep averaging networks for cross-lingual sentiment classification",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Athiwaratkun",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "557--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep av- eraging networks for cross-lingual sentiment classi- fication. Transactions of the Association for Compu- tational Linguistics, 6:557-570.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020a. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Xnli: Evaluating crosslingual sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.05053"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Ruty Rinott, Ad- ina Williams, Samuel R Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross- lingual sentence representations. arXiv preprint arXiv:1809.05053.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Emerging cross-lingual structure in pretrained language models",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6022--6034",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.536"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettle- moyer, and Veselin Stoyanov. 2020b. Emerging cross-lingual structure in pretrained language mod- els. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022-6034, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Coucke",
"suffix": ""
},
{
"first": "Alaa",
"middle": [],
"last": "Saade",
"suffix": ""
},
{
"first": "Adrien",
"middle": [],
"last": "Ball",
"suffix": ""
},
{
"first": "Th\u00e9odore",
"middle": [],
"last": "Bluche",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Caulier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Leroy",
"suffix": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Doumouro",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "Gisselbrecht",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Caltagirone",
"suffix": ""
},
{
"first": "Thibaut",
"middle": [],
"last": "Lavril",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.10190"
]
},
"num": null,
"urls": [],
"raw_text": "Alice Coucke, Alaa Saade, Adrien Ball, Th\u00e9odore Bluche, Alexandre Caulier, David Leroy, Cl\u00e9ment Doumouro, Thibault Gisselbrecht, Francesco Calta- girone, Thibaut Lavril, et al. 2018. Snips voice plat- form: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Identifying elements essential for bert's multilinguality",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Dufter",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4423--4437",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Dufter and Hinrich Sch\u00fctze. 2020. Identify- ing elements essential for bert's multilinguality. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4423-4437.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A simple, fast, and effective reparameterization of ibm model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameteriza- tion of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The romance languages",
"authors": [
{
"first": "William",
"middle": [
"Denis"
],
"last": "Elcock",
"suffix": ""
},
{
"first": "John",
"middle": [
"N"
],
"last": "Green",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Denis Elcock and John N Green. 1960. The romance languages. Faber & Faber London.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Zeroshot cross-lingual classification using multilingual neural machine translation",
"authors": [
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.04686"
]
},
"num": null,
"urls": [],
"raw_text": "Akiko Eriguchi, Melvin Johnson, Orhan Firat, Hideto Kazawa, and Wolfgang Macherey. 2018. Zero- shot cross-lingual classification using multilin- gual neural machine translation. arXiv preprint arXiv:1809.04686.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stil-simultaneous slot filling, translation, intent classification, and language identification: Initial results using mbart on multi-atis++",
"authors": [
{
"first": "Jack",
"middle": [
"G",
"M"
],
"last": "FitzGerald",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.00760"
]
},
"num": null,
"urls": [],
"raw_text": "Jack GM FitzGerald. 2020. Stil-simultaneous slot fill- ing, translation, intent classification, and language identification: Initial results using mbart on multi- atis++. arXiv preprint arXiv:2010.00760.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Domain-adversarial training of neural networks",
"authors": [
{
"first": "Yaroslav",
"middle": [],
"last": "Ganin",
"suffix": ""
},
{
"first": "Evgeniya",
"middle": [],
"last": "Ustinova",
"suffix": ""
},
{
"first": "Hana",
"middle": [],
"last": "Ajakan",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Germain",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Laviolette",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Marchand",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lempitsky",
"suffix": ""
}
],
"year": 2016,
"venue": "The Journal of Machine Learning Research",
"volume": "17",
"issue": "1",
"pages": "2096--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Lavi- olette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural net- works. The Journal of Machine Learning Research, 17(1):2096-2030.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Slot-gated modeling for joint slot filling and intent prediction",
"authors": [
{
"first": "Chih-Wen",
"middle": [],
"last": "Goo",
"suffix": ""
},
{
"first": "Guang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yun-Kai",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Chih-Li",
"middle": [],
"last": "Huo",
"suffix": ""
},
{
"first": "Tsung-Chieh",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Keng-Wei",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "753--757",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun- Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 753-757.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A novel bi-directional interrelated model for joint intent detection and slot filling",
"authors": [
{
"first": "E",
"middle": [],
"last": "Haihong",
"suffix": ""
},
{
"first": "Peiqing",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Zhongfu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Meina",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5467--5471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E Haihong, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5467- 5471.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Germanic Languages",
"authors": [
{
"first": "Wayne",
"middle": [],
"last": "Harbert",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wayne Harbert. 2006. The Germanic Languages. Cambridge University Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enriched pre-trained transformers for joint slot filling and intent detection",
"authors": [
{
"first": "Momchil",
"middle": [],
"last": "Hardalov",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Koychev",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14848"
]
},
"num": null,
"urls": [],
"raw_text": "Momchil Hardalov, Ivan Koychev, and Preslav Nakov. 2020. Enriched pre-trained transformers for joint slot filling and intent detection. arXiv preprint arXiv:2004.14848.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multi-style adaptive training for robust cross-lingual spoken language understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "8342--8346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong He, Li Deng, Dilek Hakkani-Tur, and Gokhan Tur. 2013. Multi-style adaptive training for robust cross-lingual spoken language understanding. In 2013 IEEE International Conference on Acous- tics, Speech and Signal Processing, pages 8342- 8346. IEEE.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Multilingual factual knowledge retrieval from pretrained language models",
"authors": [
{
"first": "Zhengbao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Araki",
"suffix": ""
},
{
"first": "Haibo",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.06189"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, and Graham Neubig. 2020. Multilin- gual factual knowledge retrieval from pretrained lan- guage models. arXiv preprint arXiv:2010.06189.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The Turkic Languages. Routledge",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Johanson",
"suffix": ""
},
{
"first": "\u00c9va \u00c1gnes Csat\u00f3",
"middle": [],
"last": "Johanson",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Johanson and \u00c9va \u00c1gnes Csat\u00f3 Johanson. 2015. The Turkic Languages. Routledge.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ameya",
"middle": [],
"last": "Prabhu",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2482--2491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Ameya Prabhu, Manish Shrivastava, and Vasudeva Varma. 2016. Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2482-2491.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Mixout: A simple yet effective data augmentation scheme for slotfilling",
"authors": [
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
}
],
"year": 2021,
"venue": "Conversational Dialogue Systems for the Next Decade",
"volume": "",
"issue": "",
"pages": "279--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihir Kale and Aditya Siddhant. 2021. Mixout: A sim- ple yet effective data augmentation scheme for slot- filling. In Conversational Dialogue Systems for the Next Decade, pages 279-288. Springer.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Harnessing code switching to transcend the linguistic barrier",
"authors": [
{
"first": "Ashiqur",
"middle": [
"R"
],
"last": "KhudaBukhsh",
"suffix": ""
},
{
"first": "Shriphani",
"middle": [],
"last": "Palakodety",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.11258"
]
},
"num": null,
"urls": [],
"raw_text": "Ashiqur R KhudaBukhsh, Shriphani Palakodety, and Jaime G Carbonell. 2020. Harnessing code switch- ing to transcend the linguistic barrier. arXiv preprint arXiv:2001.11258.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Unsupervised and interpretable domain adaptation to rapidly filter social web data for emergency services",
"authors": [
{
"first": "Jitin",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Hemant",
"middle": [],
"last": "Purohit",
"suffix": ""
},
{
"first": "Huzefa",
"middle": [],
"last": "Rangwala",
"suffix": ""
}
],
"year": 2020,
"venue": "ASONAM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jitin Krishnan, Hemant Purohit, and Huzefa Rangwala. 2020. Unsupervised and interpretable domain adap- tation to rapidly filter social web data for emergency services. In ASONAM.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Attention-based recurrent neural network models for joint intent detection and slot filling",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.01454"
]
},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Ian Lane. 2016. Attention-based recur- rent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Subword-level language identification for intra-word code-switching",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "\u00c7etinoglu",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2005--2011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, \u00d6zlem \u00c7etinoglu, and Katharina Kann. 2019. Subword-level language identification for intra-word code-switching. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 2005-2011.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The indo-aryan languages",
"authors": [
{
"first": "Colin",
"middle": [
"P"
],
"last": "Masica",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin P Masica. 1993. The indo-aryan languages. Cambridge University Press.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Language identification and analysis of codeswitched social media text",
"authors": [
{
"first": "Deepthi",
"middle": [],
"last": "Mave",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "51--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepthi Mave, Suraj Maharjan, and Thamar Solorio. 2018. Language identification and analysis of code- switched social media text. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 51-61.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The Japanese Language",
"authors": [
{
"first": "Roy",
"middle": [
"Andrew"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Andrew Miller. 1967. The Japanese Language. University of Chicago Press Chicago.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Low-resource cross-lingual event type detection via distant supervision with minimal effort",
"authors": [
{
"first": "Aldrian",
"middle": [
"Obaja"
],
"last": "Muis",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Otani",
"suffix": ""
},
{
"first": "Nidhi",
"middle": [],
"last": "Vyas",
"suffix": ""
},
{
"first": "Ruochen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "70--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aldrian Obaja Muis, Naoki Otani, Nidhi Vyas, Ruochen Xu, Yiming Yang, Teruko Mitamura, and Eduard Hovy. 2018. Low-resource cross-lingual event type detection via distant supervision with minimal effort. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 70-82.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Robust classification of crisis-related data on social networks using convolutional neural networks",
"authors": [
{
"first": "Dat",
"middle": [
"Tien"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Kamla",
"middle": [],
"last": "Al-Mannai",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [
"R"
],
"last": "Joty",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Imran",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 2017,
"venue": "ICWSM",
"volume": "31",
"issue": "3",
"pages": "632--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Tien Nguyen, Kamla Al-Mannai, Shafiq R Joty, Hassan Sajjad, Muhammad Imran, and Prasenjit Mi- tra. 2017. Robust classification of crisis-related data on social networks using convolutional neural net- works. ICWSM, 31(3):632-635.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1470--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recog- nition via effective annotation and representation projection. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1470-1480.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon",
"authors": [
{
"first": "Endang",
"middle": [
"Wahyu"
],
"last": "Pamungkas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Endang Wahyu Pamungkas and Viviana Patti. 2019. Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics: Student Research Workshop, pages 363-370.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "How multilingual is multilingual bert?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Distant supervision from disparate sources for low-resource partof-speech tagging",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "614--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank and \u017deljko Agi\u0107. 2018. Distant super- vision from disparate sources for low-resource part- of-speech tagging. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 614-620.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Evaluation of spoken language systems: The atis domain",
"authors": [
{
"first": "Patti",
"middle": [],
"last": "Price",
"suffix": ""
}
],
"year": 1990,
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patti Price. 1990. Evaluation of spoken language sys- tems: The atis domain. In Speech and Natural Lan- guage: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Cosda-ml: Multi-lingual code-switching data augmentation for zero-shot cross-lingual nlp",
"authors": [
{
"first": "Libo",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Minheng",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.06402"
]
},
"num": null,
"urls": [],
"raw_text": "Libo Qin, Minheng Ni, Yue Zhang, and Wanxiang Che. 2020. Cosda-ml: Multi-lingual code-switching data augmentation for zero-shot cross-lingual nlp. arXiv preprint arXiv:2006.06402.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "Lance",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural language processing using very large corpora",
"volume": "",
"issue": "",
"pages": "157--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural language processing using very large cor- pora, pages 157-176. Springer.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "An embarrassingly simple approach to zero-shot learning",
"authors": [
{
"first": "Bernardino",
"middle": [],
"last": "Romera-Paredes",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Torr",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2152--2161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernardino Romera-Paredes and Philip Torr. 2015. An embarrassingly simple approach to zero-shot learn- ing. In International Conference on Machine Learn- ing, pages 2152-2161.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A concise introduction to linguistics. Routledge",
"authors": [
{
"first": "Bruce",
"middle": [],
"last": "Rowe",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "340--341",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruce Rowe and Diane Levine. 2017. A concise intro- duction to linguistics. Routledge. pp. 340-341.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Cross-lingual transfer learning for multilingual task oriented dialog",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Rushin",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3795--3805",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3795-3805.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Classification of the sino-tibetan languages",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Shafer",
"suffix": ""
}
],
"year": 1955,
"venue": "",
"volume": "11",
"issue": "",
"pages": "94--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Shafer. 1955. Classification of the sino-tibetan languages. Word, 11(1):94-111.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Synergy: a named entity recognition system for resource-scarce languages such as swahili using online machine translation",
"authors": [
{
"first": "Rushin",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Anatole",
"middle": [],
"last": "Gershman",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frederking",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Second Workshop on African Language Technology",
"volume": "",
"issue": "",
"pages": "21--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rushin Shah, Bo Lin, Anatole Gershman, and Robert Frederking. 2010. Synergy: a named entity recog- nition system for resource-scarce languages such as swahili using online machine translation. In Pro- ceedings of the Second Workshop on African Lan- guage Technology (AfLaT 2010), pages 21-26.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Joint part-ofspeech and language id tagging for code-switched data",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Soto and Julia Hirschberg. 2018. Joint part-of- speech and language id tagging for code-switched data. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code- Switching, pages 1-10.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Zero-shot learning of classifiers from natural language quantification",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "306--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2018. Zero-shot learning of classifiers from natu- ral language quantification. In Proceedings of the 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 306-316.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Cross-lingual zero-and few-shot hate speech detection utilising frozen transformer language models and axel",
"authors": [
{
"first": "Lukas",
"middle": [],
"last": "Stappen",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Brunn",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.13850"
]
},
"num": null,
"urls": [],
"raw_text": "Lukas Stappen, Fabian Brunn, and Bj\u00f6rn Schuller. 2020. Cross-lingual zero-and few-shot hate speech detection utilising frozen transformer language mod- els and axel. arXiv preprint arXiv:2004.13850.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Token and type constraints for cross-lingual part-of-speech tagging",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan Mc- Donald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1-12.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Code-mixing on sesame street: Dawn of the adversarial polyglots",
"authors": [
{
"first": "Samson",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2103.09593"
]
},
"num": null,
"urls": [],
"raw_text": "Samson Tan and Shafiq Joty. 2021. Code-mixing on sesame street: Dawn of the adversarial polyglots. arXiv preprint arXiv:2103.09593.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Cross-lingual named entity recognition via wikification",
"authors": [
{
"first": "Chen-Tse",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "219--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via wikifica- tion. In Proceedings of The 20th SIGNLL Confer- ence on Computational Natural Language Learning, pages 219-228.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "2018. (almost) zero-shot cross-lingual spoken language understanding",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": null,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6034--6038",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay, Manaal Faruqui, Gokhan T\u00fcr, Hakkani-T\u00fcr Dilek, and Larry Heck. 2018. (almost) zero-shot cross-lingual spoken language understand- ing. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6034-6038. IEEE.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Classification and index of the world's languages",
"authors": [
{
"first": "Charles",
"middle": [
"Frederick"
],
"last": "Voegelin",
"suffix": ""
},
{
"first": "Florence",
"middle": [
"Marie"
],
"last": "Voegelin",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Frederick Voegelin and Florence Marie Voegelin. 1976. Classification and index of the world's languages.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Co-training for cross-lingual sentiment classification",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "235--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan. 2009. Co-training for cross-lingual sen- timent classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natu- ral Language Processing of the AFNLP, pages 235- 243.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Transformers: State-of-theart natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Julien Chaumond, Lysandre Debut, Vic- tor Sanh, Clement Delangue, Anthony Moi, Pier- ric Cistac, Morgan Funtowicz, Joe Davison, Sam Shleifer, et al. 2020. Transformers: State-of-the- art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 38-45.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Google's neural machine translation system",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Predicting performance for natural language processing tasks",
"authors": [
{
"first": "Mengzhou",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Ruochen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8625--8646",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.764"
]
},
"num": null,
"urls": [],
"raw_text": "Mengzhou Xia, Antonios Anastasopoulos, Ruochen Xu, Yiming Yang, and Graham Neubig. 2020. Pre- dicting performance for natural language process- ing tasks. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 8625-8646, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Zero-shot learning-the good, the bad and the ugly",
"authors": [
{
"first": "Yongqin",
"middle": [],
"last": "Xian",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
},
{
"first": "Zeynep",
"middle": [],
"last": "Akata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4582--4591",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongqin Xian, Bernt Schiele, and Zeynep Akata. 2017. Zero-shot learning-the good, the bad and the ugly. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4582-4591.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Neural crosslingual named entity recognition with minimal resources",
"authors": [
{
"first": "Jiateng",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "369--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A Smith, and Jaime G Carbonell. 2018. Neural cross- lingual named entity recognition with minimal re- sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369-379.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "End-to-end slot alignment and recognition for crosslingual nlu",
"authors": [
{
"first": "Weijia",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Batool",
"middle": [],
"last": "Haider",
"suffix": ""
},
{
"first": "Saab",
"middle": [],
"last": "Mansour",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5052--5063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weijia Xu, Batool Haider, and Saab Mansour. 2020. End-to-end slot alignment and recognition for cross- lingual nlu. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 5052-5063.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Alternating language modeling for cross-lingual pre-training",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shuangzhi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "9386--9393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Yang, Shuming Ma, Dongdong Zhang, ShuangZhi Wu, Zhoujun Li, and Ming Zhou. 2020. Alternating language modeling for cross-lingual pre-training. In Proceedings of the AAAI Conference on Artificial In- telligence, volume 34, pages 9386-9393.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Inducing multilingual text analysis tools via robust projection across aligned corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky, Grace Ngai, and Richard Wicen- towski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. Technical report, Johns Hopkins Univ Baltimore MD Dept of Computer Science.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Detecting code-switching between turkish-english language pair",
"authors": [
{
"first": "Zeynep",
"middle": [],
"last": "Yirmibe\u015foglu",
"suffix": ""
},
{
"first": "G\u00fcl\u015fen",
"middle": [],
"last": "Eryigit",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy Usergenerated Text",
"volume": "",
"issue": "",
"pages": "110--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeynep Yirmibe\u015foglu and G\u00fcl\u015fen Eryigit. 2018. De- tecting code-switching between turkish-english lan- guage pair. In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User- generated Text, pages 110-115.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Multilingual seq2seq training with similarity loss for cross-lingual document classification",
"authors": [
{
"first": "Katherine",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "175--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katherine Yu, Haoran Li, and Barlas Oguz. 2018. Multilingual seq2seq training with similarity loss for cross-lingual document classification. In Pro- ceedings of The Third Workshop on Representation Learning for NLP, pages 175-179.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Crosslingual transfer of named entity recognizers without parallel corpora",
"authors": [
{
"first": "Ayah",
"middle": [],
"last": "Zirikly",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "390--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayah Zirikly and Masato Hagiwara. 2015. Cross- lingual transfer of named entity recognizers without parallel corpora. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 390-396.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "t-SNE plot of embeddings across the 12 multi-head attention layers of multilingual BERT. Parallelly translated sentences of MutiATIS++ dataset are still clustered according to the languages: English (black), Chinese (cyan), French (blue), German (green), and Japanese (red).",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "An original example in English from MultiATIS++ dataset and its multilingually code-switched version.",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Impact of different language groups on the target languages.",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "Performance as k (augmentation rounds) increases (on mBERT).",
"num": null
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"text": "Impact of code-switching on intent classes.",
"num": null
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"text": "Impact of code-switching on slot labels.",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "Selected language families to evaluate their impact on a target language.",
"content": "<table/>",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null
},
"TABREF4": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>describes performance evaluation on the Multi-</td></tr><tr><td>ATIS++ dataset. When compared to the state-of-</td></tr><tr><td>the-art jointly trained English-only baseline, we</td></tr><tr><td>see a +4.2% boost in intent accuracy and +1.8%</td></tr><tr><td>boost in slot F1 scores on average by augmenting</td></tr><tr><td>the dataset via multilingual code-switching with-</td></tr></table>",
"num": null
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": "95.48 94.51 84.43 \u2660 76.48 \u2660 94.15 \u2660 94.89 \u2660 85.37 \u2660 78.04 \u2660 87.92",
"content": "<table><tr><td>Intent Acc.</td><td>m</td><td>es</td><td>de</td><td>zh</td><td>ja</td><td>pt</td><td>fr</td><td>hi</td><td>tr</td><td>AVG</td></tr><tr><td>English-Only Baseline*</td><td>1</td><td colspan=\"2\">94.42 94.29</td><td>79.53</td><td>73.75</td><td>92.90</td><td>93.86</td><td>67.06</td><td>69.71</td><td>83.19</td></tr><tr><td>Jointen\u2212only Baseline*</td><td>1</td><td colspan=\"2\">95.03 94.51</td><td>80.54</td><td>73.57</td><td>93.48</td><td>93.33</td><td>73.53</td><td>71.05</td><td>84.38</td></tr><tr><td>Word-level CS \u2020</td><td>1</td><td colspan=\"2\">94.18 93.92</td><td>81.67</td><td>75.48</td><td>92.54</td><td>94.18</td><td>81.19</td><td>74.22</td><td>85.92</td></tr><tr><td>Sentence-level CS</td><td>1</td><td colspan=\"2\">94.60 93.53</td><td>81.21</td><td>75.01</td><td>93.10</td><td>93.24</td><td>82.37</td><td>75.11</td><td>86.02</td></tr><tr><td>Chunk-level CS (CCS)</td><td>1</td><td colspan=\"2\">95.12 95.27</td><td>83.88</td><td>74.27</td><td>94.20</td><td>93.48</td><td>82.73</td><td>77.51</td><td>87.06</td></tr><tr><td>Jointen\u2212only* + CCS</td><td>1</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Upper Bound</td><td/><td/><td/><td/><td/></tr><tr><td>Translate-Train (TT)*</td><td>8</td><td colspan=\"2\">94.02 93.84</td><td>90.21</td><td>84.19</td><td>95.66</td><td>94.54</td><td>85.08</td><td>85.79</td><td>90.42</td></tr><tr><td>JointT T *</td><td>8</td><td colspan=\"2\">94.16 94.24</td><td>91.56</td><td>85.98</td><td>95.75</td><td>95.01</td><td>86.45</td><td>84.95</td><td>91.01</td></tr><tr><td>JointT T * + CCS</td><td>8</td><td colspan=\"2\">95.48 95.41</td><td>91.60</td><td>87.17</td><td>95.34</td><td>94.60</td><td>87.94</td><td>85.93</td><td>91.68</td></tr><tr><td>Slot F1</td><td>m</td><td>es</td><td>de</td><td>zh</td><td>ja</td><td>pt</td><td>fr</td><td>hi</td><td>tr</td><td>AVG</td></tr><tr><td>English-Only Baseline*</td><td>1</td><td colspan=\"2\">96.16 96.73</td><td>83.12</td><td>78.81</td><td>95.63</td><td>95.40</td><td>77.05</td><td>88.09</td><td>88.87</td></tr><tr><td>Jointen\u2212only Baseline*</td><td>1</td><td colspan=\"2\">96.12 96.76</td><td>84.95</td><td>79.60</td><td>95.76</td><td>95.76</td><td>77.63</td><td>88.92</td><td>89.44</td></tr><tr><td>Word-level CS \u2020</td><td>1</td><td colspan=\"2\">95.81 96.33</td><td>85.46</td><td>79.33</td><td>96.27</td><td>95.08</td><td>79.10</td><td>86.86</td><td>89.28</td></tr><tr><td>Sentence-level CS</td><td>1</td><td colspan=\"2\">96.57 96.92</td><td>86.32</td><td>79.52</td><td>96.65</td><td>95.84</td><td>81.94</td><td>89.84</td><td>90.45</td></tr><tr><td>Chunk-level CS (CCS)</td><td>1</td><td colspan=\"2\">96.68 96.82</td><td>87.10</td><td>80.00</td><td>96.46</td><td>96.31</td><td>80.95</td><td>91.60</td><td>90.51</td></tr><tr><td>Jointen\u2212only* + CCS</td><td>1</td><td colspan=\"4\">96.09 96.56 88.61 \u2660 82.28 \u2660</td><td>96.01</td><td>95.94</td><td colspan=\"3\">82.28 \u2660 90.45 \u2660 91.03</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Upper Bound</td><td/><td/><td/><td/><td/></tr><tr><td>Translate-Train (TT)*</td><td>8</td><td colspan=\"2\">96.89 96.04</td><td>93.48</td><td>85.29</td><td>96.35</td><td>96.02</td><td>82.03</td><td>91.21</td><td>92.16</td></tr><tr><td>JointT T *</td><td>8</td><td colspan=\"2\">96.92 95.66</td><td>93.64</td><td>87.84</td><td>96.11</td><td>95.95</td><td>82.98</td><td>91.15</td><td>92.53</td></tr><tr><td>JointT T * + CCS</td><td>8</td><td colspan=\"2\">96.98 
96.27</td><td>93.37</td><td>85.87</td><td>95.88</td><td>95.44</td><td>82.00</td><td>91.31</td><td>92.14</td></tr></table>",
"num": null
},
"TABREF7": {
"html": null,
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null
}
}
}
}