{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:50.739609Z"
},
"title": "Adversarial Training for Machine Reading Comprehension with Virtual Embeddings",
"authors": [
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "",
"location": {
"country": "China"
}
},
"email": "zqyang5@iflytek.com"
},
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "",
"location": {
"country": "China"
}
},
"email": "ymcui@iflytek.com"
},
{
"first": "Chenglei",
"middle": [],
"last": "Si",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Maryland",
"location": {
"settlement": "College Park",
"region": "MD",
"country": "USA"
}
},
"email": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin",
"country": "China"
}
},
"email": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"settlement": "Harbin",
"country": "China"
}
},
"email": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "State Key Laboratory of Cognitive Intelligence",
"institution": "",
"location": {
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Adversarial training (AT) as a regularization method has proved its effectiveness on various tasks. Though there are successful applications of AT on some NLP tasks, the distinguishing characteristics of NLP tasks have not been exploited. In this paper, we aim to apply AT on machine reading comprehension (MRC) tasks. Furthermore, we adapt AT for MRC tasks by proposing a novel adversarial training method called PQAT that perturbs the embedding matrix instead of word vectors. To differentiate the roles of passages and questions, PQAT uses additional virtual P/Q-embedding matrices to gather the global perturbations of words from passages and questions separately. We test the method on a wide range of MRC tasks, including span-based extractive RC and multiple-choice RC. The results show that adversarial training is effective universally, and PQAT further improves the performance.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Adversarial training (AT) as a regularization method has proved its effectiveness on various tasks. Though there are successful applications of AT on some NLP tasks, the distinguishing characteristics of NLP tasks have not been exploited. In this paper, we aim to apply AT on machine reading comprehension (MRC) tasks. Furthermore, we adapt AT for MRC tasks by proposing a novel adversarial training method called PQAT that perturbs the embedding matrix instead of word vectors. To differentiate the roles of passages and questions, PQAT uses additional virtual P/Q-embedding matrices to gather the global perturbations of words from passages and questions separately. We test the method on a wide range of MRC tasks, including span-based extractive RC and multiple-choice RC. The results show that adversarial training is effective universally, and PQAT further improves the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural networks have achieved superior performance on many tasks, but they are vulnerable to adversarial examples (Szegedy et al., 2014), i.e., examples that have been mixed with certain perturbations. Adversarial training (AT) (Goodfellow et al., 2015) uses both clean and adversarial examples to improve the robustness of the model for image classification.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Szegedy et al., 2014",
"ref_id": "BIBREF15"
},
{
"start": 223,
"end": 248,
"text": "(Goodfellow et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the field of NLP, Miyato et al. (2017) have applied adversarial training on text classification tasks and improved the model performance. Since then, many AT methods have been proposed (Wu et al., 2017; Yasunaga et al., 2018; Bekoulis et al., 2018; Zhu et al., 2020; Jiang et al., 2019; Pereira et al., 2020). They mostly adopt a general AT strategy, but focus less on the adaptation of AT to NLP tasks. To explore this adaptation, in this work, we aim to apply adversarial training [Table 1 excerpt:] Passage: ... The rock cycle is an important concept in geology which illustrates the relationships between these three types of rock, and magma. When a rock crystallizes from melt (magma and/or lava), it is an igneous rock. ...",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "Miyato et al. (2017)",
"ref_id": "BIBREF10"
},
{
"start": 189,
"end": 206,
"text": "(Wu et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 207,
"end": 229,
"text": "Yasunaga et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 230,
"end": 252,
"text": "Bekoulis et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 253,
"end": 270,
"text": "Zhu et al., 2020;",
"ref_id": "BIBREF21"
},
{
"start": 271,
"end": 290,
"text": "Jiang et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 291,
"end": 312,
"text": "Pereira et al., 2020;",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An igneous rock is a rock that crystallizes from what? [Caption:] Table 1: An example from the SQuAD dataset. We highlight two words, rock and igneous, for better demonstration. The words with the same color are injected with the same perturbation by PQAT. The different occurrences of the same word (for example, rock in the passage and question) are perturbed differently depending on their roles. [Introduction, continued:] on machine reading comprehension (MRC) tasks, which exhibit complex NLP characteristics.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question:",
"sec_num": null
},
{
"text": "The objective of MRC is to let a machine read the given passages and ask it to answer the related questions. There are several types of MRC tasks. In this work we focus on span-based extractive RC (Rajpurkar et al., 2016, 2018; Yang et al., 2018) and multiple-choice RC (Lai et al., 2017). To apply adversarial training on MRC tasks, we notice that there are several salient characteristics of MRC compared to other tasks such as image classification: (1) The inputs are discrete. Unlike pixels, which can take continuous values, words are discrete tokens. (2) The tokens in the input sequences are not independent. A word may occur in an input sequence several times. After the embedding layer, these occurrences are represented by word vectors with the same value and hold the same semantic meaning (although the word may be polysemous). (3) The roles of passages and questions are different. Given a question as the query, the model needs to look up the correct answer in the passage.",
"cite_spans": [
{
"start": 197,
"end": 220,
"text": "(Rajpurkar et al., 2016",
"ref_id": "BIBREF13"
},
{
"start": 221,
"end": 246,
"text": "(Rajpurkar et al., , 2018",
"ref_id": "BIBREF12"
},
{
"start": 247,
"end": 265,
"text": "Yang et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 289,
"end": 307,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question:",
"sec_num": null
},
{
"text": "People have utilized the first characteristic to apply adversarial training by perturbing input word vectors instead of tokens. However, the second and third characteristics have been largely ignored. For example, in Table 1, a passage-question pair from the SQuAD dataset, the word rock appears multiple times. In standard adversarial training, the perturbations added to each occurrence of rock are different, ignoring the fact that they share the same meaning. On the other hand, occurrences of the same word in the passage and the question play different roles, such as rock in the passage versus the question, so it is appropriate to treat them differently.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question:",
"sec_num": null
},
{
"text": "To take the second and third characteristics into consideration, we propose a novel adversarial training method called PQAT. The core of PQAT is the virtual P/Q-embeddings, which are two independent embedding spaces for passages and questions. Each time we calculate perturbations, the P/Q-embeddings gather the perturbations from passages and questions for each word, then generate a global, role-aware perturbation for each word from passages and questions separately. For example, in Table 1, the perturbations on all the occurrences of rock in the passage and the question are gathered into two matrices separately, forming global, role-aware perturbations of rock. PQAT is as efficient as standard AT, with nearly no extra time cost. Also, the virtual P/Q-embeddings are only used during training; they are discarded once training is finished. Thus PQAT does not increase the model size or inference time for predictions.",
"cite_spans": [],
"ref_spans": [
{
"start": 489,
"end": 496,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Question:",
"sec_num": null
},
{
"text": "We have applied adversarial training on several MRC tasks, including span-based extractive RC and multiple-choice RC. Results show that adversarial training improves MRC model performance universally and consistently, even over the strong pre-trained model baseline. Furthermore, the PQAT method outperforms standard AT on both normal datasets and adversarial datasets. Lastly, our results verify the usefulness of incorporating task-form information into the design of adversarial training methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question:",
"sec_num": null
},
{
"text": "Adversarial training first constructs adversarial examples by generating worst-case perturbations that maximize the current loss, then minimizes the loss on those adversarial examples. In NLP tasks, a popular approach to generating perturbations is to perturb the word vectors from the embedding layer (Miyato et al., 2017). We denote the input token sequence as X and the operation of looking up in an embedding layer E as emb(E, ·). The objective of AT is",
"cite_spans": [
{
"start": 295,
"end": 316,
"text": "(Miyato et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Adversarial Training",
"sec_num": "2"
},
{
"text": "min_{θ,E} E_{(X,y)∼D} max_{‖δ‖<ε} L(f_θ(X_vec + δ), y)  (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Adversarial Training",
"sec_num": "2"
},
{
"text": "where f_θ(·) is the model parameterized by θ, excluding the word embedding layer; X_vec = emb(E, X) denotes the word vectors of the input sequence, and L is the loss function. We perturb the word vectors with the adversarial perturbations δ. δ can be estimated by linearizing L(f_θ(X_vec + δ), y) around X and performing multi-step projected gradient descent (PGD) (Madry et al., 2018):",
"cite_spans": [
{
"start": 352,
"end": 372,
"text": "(Madry et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Adversarial Training",
"sec_num": "2"
},
{
"text": "δ_{t+1} = Π_{‖δ‖≤ε}(δ_t + α g_t/‖g_t‖)  (2);  g_t = ∇_δ L(f_θ(X_vec + δ), y)|_{δ=δ_t}  (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Adversarial Training",
"sec_num": "2"
},
{
"text": "where t is the gradient descent step, Π_{‖δ‖≤ε} denotes projecting δ back onto the ε-ball, and g_t is the gradient of the loss with respect to the perturbation δ. More gradient descent steps give a better approximation of δ, but are also more expensive to compute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Adversarial Training",
"sec_num": "2"
},
{
"text": "In the above algorithm, when generating the perturbations on X_vec through backward propagation, each word vector X^i_vec is perturbed independently, like the pixels in an image. This ignores the semantic relationship among the word vectors of a word's different occurrences. To make the perturbation on each occurrence aware of the other occurrences of the same word, we adapt AT by gathering not only the perturbations on each word vector, but also the perturbations on the embedding matrix. The latter can be seen as global perturbations, which provide context-insensitive semantic information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for MRC",
"sec_num": "3"
},
{
"text": "The global perturbations are rather coarse-grained, since all occurrences of the same word receive the same global perturbation. Note that in MRC tasks, words in passages and questions play different roles. Thus, to keep this information, we distinguish the words in passages and questions by creating two virtual embedding matrices P and Q: the P-embedding matrix P collects the perturbations of all the words from the passages, and the Q-embedding matrix Q does so for the questions. We give an illustration in Figure 1. The P/Q-embedding matrices are virtual since they only provide perturbations, not real word vectors. During training, perturbations from the virtual embeddings and the word vectors are summed up to form the adversarial input Z_vec. The final objective is",
"cite_spans": [],
"ref_spans": [
{
"start": 496,
"end": 504,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Adversarial Training for MRC",
"sec_num": "3"
},
{
"text": "min_{θ,E} E_{(X,y)∼D} max_{‖δ‖<ε} L(f_θ(Z_vec), y)  (4);  Z_vec = [X^P_vec + P_vec ; X^Q_vec + Q_vec] + δ  (5);  P_vec = emb(P, X^P), Q_vec = emb(Q, X^Q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for MRC",
"sec_num": "3"
},
{
"text": "are the perturbations from the virtual embeddings. X^P and X^Q stand for the passage and question sections of X, and [·; ·] denotes concatenation. In this way, we generate fine-grained local perturbations δ by standard AT, and global, role-aware perturbations P_vec and Q_vec by the virtual P/Q-embeddings. We call the latter process PQAT, which is the main adaptation of adversarial training for MRC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for MRC",
"sec_num": "3"
},
{
"text": "We list the overall algorithm of adversarial training for MRC in Algorithm 1. We initialize P and Q from a Gaussian distribution. For each batch, we perform K-step gradient descent (lines 9-22): we look up the original word vectors and P/Q-embedding vectors from the embedding layer E and the P/Q-embedding matrices. The adversarial inputs are constructed by summing them with the local perturbations δ. Then we compute the gradients of the model parameters g_t, the local perturbations g_δ, and the P/Q-embedding matrices g_P and g_Q. These gradients can be calculated in a single backward pass. Lastly, we update the virtual embeddings and local perturbations (lines 18-21).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for MRC",
"sec_num": "3"
},
{
"text": "Note that P/Q-embedding matrices serve as the containers for perturbations. When the training is finished, P/Q-embedding matrices are no longer needed and can be discarded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for MRC",
"sec_num": "3"
},
{
"text": "ε_δ, ε_P, and ε_Q control the strengths of standard AT and PQAT. If ε_δ = 0, we have a purely P/Q-embedding-based adversarial training, i.e., PQAT; while if ε_P = ε_Q = 0, we recover standard AT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for MRC",
"sec_num": "3"
},
{
"text": "Notation: V is the vocabulary size; D is the embedding dimension. Input: training samples D = {(X, y)}, P/Q-embedding matrices P, Q ∈ R^{V×D}, initialization variance σ, perturbation strengths {ε_δ, ε_P, ε_Q}, adversarial steps K. 1: Initialize P/Q-embedding matrices",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "2: P ← N(0, σ²I), Q ← N(0, σ²I); 3: for batch B ⊂ D do; 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "Normalize P/Q-embedding matrices ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "g_t = g_{t-1} + E[∇_{θ,E} L(f_θ(Z_vec), y)]; 15: g_δ = E[∇_δ L(f_θ(Z_vec), y)]; 16: g_P = E[∇_P L(f_θ(Z_vec), y)]; 17: g_Q = E[∇_Q L(f_θ(Z_vec), y)]; 18: δ ← δ + ε_δ · ‖X_vec‖₂ · g_δ/‖g_δ‖₂; 19:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "Update with token-wise normalization",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "20: P^i ← P^i + ε_P · ‖X^i_vec‖₂ · g^i_P/‖g^i_P‖₂; 21: Q^i ← Q^i + ε_Q · ‖X^i_vec‖₂ · g^i_Q/‖g^i_Q‖₂; 22: end for; 23: {θ, E} ← AdamUpdate({θ, E}, g_K); 24: end for. (Section 4: Experiments; Setup)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "Datasets. We perform experiments on several English MRC tasks, including the span-based extractive MRC tasks SQuAD 1.1 (Rajpurkar et al., 2016), SQuAD 2.0 (Rajpurkar et al., 2018), and HotpotQA (Yang et al., 2018), and the multiple-choice MRC task RACE (Lai et al., 2017). We also test model robustness on the adversarial datasets AddSent and AddOneSent (Jia and Liang, 2017). Model Settings. We build the MRC model with RoBERTa, following the standard model structure for SQuAD and RACE (Devlin et al., 2018). For HotpotQA, we follow the model in Shao et al. (2020), which uses RoBERTa as the encoder followed by a multi-task prediction layer. We denote the passage as P and the question as Q.",
"cite_spans": [
{
"start": 116,
"end": 140,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 153,
"end": 177,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 189,
"end": 208,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 245,
"end": 263,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 347,
"end": 368,
"text": "(Jia and Liang, 2017)",
"ref_id": "BIBREF4"
},
{
"start": 483,
"end": 504,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 544,
"end": 562,
"text": "Shao et al. (2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "To construct the inputs, for span-based extractive RC, we concatenate each P and Q with modeldependent special tokens; for multiple-choice RC with m options for each example, we append each option to the concatenation of P and Q, and construct m input sequences from each example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "When applying AT or PQAT, we only perturb the word embeddings and leave the position embeddings unchanged. For PQAT on RACE, we let the Q-embedding matrix collect perturbations from both questions and options.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "Training Settings and Hyperparameters. All models are implemented with Transformers (Wolf et al., 2019) and trained on a single Nvidia V100 GPU. To improve stability and reduce the uncertainty of the results, we run each experiment four times with different seeds and report the mean performance. We use AdamW as our optimizer, with batch size 24 and learning rate 3e-5 for RoBERTa BASE and 2e-5 or 1e-5 for RoBERTa LARGE. The maximum number of epochs is set to 3 for SQuAD and 5 for RACE and HotpotQA. A linear learning rate decay schedule with warmup ratio 0.1 is used. For PQAT, ε_δ is set to 0, and ε_P and ε_Q are set to 4e-2 for RACE and 2e-2 for the other tasks. The variance σ is 1e-2. We set the number of gradient descent steps K = 2 to balance speed and performance.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: Adversarial Training for Machine Reading Comprehension",
"sec_num": null
},
{
"text": "The overall results are summarized in Table 2, where we compare PQAT with the baseline. PQAT boosts model performance across all MRC tasks and outperforms the RoBERTa baseline significantly. On HotpotQA, a complicated MRC task that features multi-hop questions and asks for multiple kinds of predictions, PQAT still outperforms the baseline by 1.3/1.1 on Joint EM/Joint F1. On RACE, PQAT improves the performance significantly, by 1.5% in accuracy. The universal improvements on various kinds of MRC tasks demonstrate the wide applicability of PQAT.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Overall Results",
"sec_num": "5.1"
},
{
"text": "We compare different adversarial training methods and their combinations by tuning the perturbation strengths {ε_δ, ε_P, ε_Q}. The results are in Table 3; the underlined scores are the ones reported in Table 2. First, to test the effectiveness of standard AT, we disable PQAT with ε_P = ε_Q = 0 and enable standard AT with ε_δ = 2e-3 for RACE and 1e-2 for the other tasks.¹ Other settings are unchanged, and we still follow Algorithm 1. PQAT consistently outperforms standard AT on the three tasks. We then enable both PQAT and standard AT by setting all the strengths {ε_δ, ε_P, ε_Q} to non-zero values. The performance gets slightly better on SQuAD 1.1 and RACE, but worse on SQuAD 2.0.",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 206,
"end": 213,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.2"
},
{
"text": "Compared with standard AT, PQAT achieves higher performance by itself. Therefore, PQAT could be a better alternative for applying adversarial training on MRC tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.2"
},
{
"text": "We assess the robustness of MRC models with AddSent and AddOneSent, two adversarial datasets built on SQuAD 1.1. In both datasets, passages are appended with distracting sentences. MRC models that rely heavily on text matching may be easily fooled into predicting wrong answers from the distracting sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness on Adversarial Datasets",
"sec_num": "5.3"
},
{
"text": "The results are shown in Table 4. With standard adversarial training (AT), the MRC model improves its robustness by about 5% in F1 over RoBERTa BASE. PQAT further improves the performance over AT by about 1% on both AddSent and AddOneSent.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Robustness on Adversarial Datasets",
"sec_num": "5.3"
},
{
"text": "We have applied adversarial training on a wide range of MRC tasks, including span-based extractive RC and multiple-choice RC. In particular, we have proposed a novel adversarial training method, PQAT, which uses virtual P/Q-embedding matrices to generate global, role-aware perturbations that account for the characteristics of MRC tasks. Our experiments demonstrate that adversarial training improves MRC model performance universally and consistently, even over the strong pre-trained model baseline. The PQAT method further improves model performance over standard AT on both normal datasets and adversarial datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We have searched from 1e-3 to 1e-1 and taken the best value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank all anonymous reviewers for their valuable comments on our work. This work is funded by National Key R&D Program of China (No.2018YFC0831601).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adversarial training for multi-context joint entity and relation extraction",
"authors": [
{
"first": "Giannis",
"middle": [],
"last": "Bekoulis",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Deleu",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Demeester",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Develder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2830--2836",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018. Adversarial training for multi-context joint entity and relation extrac- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2830-2836, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Explaining and harnessing adversarial examples",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversar- ial examples. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reinforced mnemonic reader for machine reading comprehension",
"authors": [
{
"first": "Minghao",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yuxing",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018",
"volume": "",
"issue": "",
"pages": "4099--4106",
"other_ids": {
"DOI": [
"10.24963/ijcai.2018/570"
]
},
"num": null,
"urls": [],
"raw_text": "Minghao Hu, Yuxing Peng, Zhen Huang, Xipeng Qiu, Furu Wei, and Ming Zhou. 2018. Reinforced mnemonic reader for machine reading comprehen- sion. In Proceedings of the Twenty-Seventh Inter- national Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden., pages 4099-4106.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2021--2031",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1215"
]
},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SMART: robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization",
"authors": [
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Tuo",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoming Jiang, Pengcheng He, Weizhu Chen, Xi- aodong Liu, Jianfeng Gao, and Tuo Zhao. 2019. SMART: robust and efficient fine-tuning for pre- trained natural language models through principled regularized optimization. CoRR, abs/1911.03437.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "RACE: Large-scale ReAding comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAd- ing comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adversarial training for large neural language models",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020. Adversarial training for large neural language models. CoRR, abs/2004.08994.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards deep learning models resistant to adversarial attacks",
"authors": [
{
"first": "Aleksander",
"middle": [],
"last": "Madry",
"suffix": ""
},
{
"first": "Aleksandar",
"middle": [],
"last": "Makelov",
"suffix": ""
},
{
"first": "Ludwig",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Tsipras",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Vladu",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adver- sarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adversarial training methods for semi-supervised text classification",
"authors": [
{
"first": "Takeru",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeru Miyato, Andrew M. Dai, and Ian J. Good- fellow. 2017. Adversarial training methods for semi-supervised text classification. In 5th Inter- national Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Con- ference Track Proceedings.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adversarial training for commonsense inference",
"authors": [
{
"first": "Lis",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Ichiro",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lis Pereira, Xiaodong Liu, Fei Cheng, Masayuki Asahara, and Ichiro Kobayashi. 2020. Adversar- ial training for commonsense inference. CoRR, abs/2005.08156.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "784--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784- 789, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Is Graph Structure Necessary for Multi-hop Question Answering?",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7187--7192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Shao, Yiming Cui, Ting Liu, Shijin Wang, and Guoping Hu. 2020. Is Graph Structure Necessary for Multi-hop Question Answering? In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7187-7192, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Intriguing properties of neural networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2014,
"venue": "2nd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. Intriguing properties of neu- ral networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Pro- ceedings.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Exploring machine reading comprehension with explicit knowl",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Wang and Hui Jiang. 2018. Exploring ma- chine reading comprehension with explicit knowl- edge. CoRR, abs/1809.03449.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adversarial training for relation extraction",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Russell",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1778--1783",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1187"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Wu, David Bamman, and Stuart Russell. 2017. Ad- versarial training for relation extraction. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1778-1783, Copenhagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2369--2380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answer- ing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -November 4, 2018, pages 2369-2380.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Robust multilingual part-of-speech tagging via adversarial training",
"authors": [
{
"first": "Michihiro",
"middle": [],
"last": "Yasunaga",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "976--986",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1089"
]
},
"num": null,
"urls": [],
"raw_text": "Michihiro Yasunaga, Jungo Kasai, and Dragomir Radev. 2018. Robust multilingual part-of-speech tagging via adversarial training. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 976-986, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Freelb: Enhanced adversarial training for natural language understanding",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Siqi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Goldstein",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Gold- stein, and Jingjing Liu. 2020. Freelb: Enhanced ad- versarial training for natural language understanding. In 8th International Conference on Learning Repre- sentations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "P/Q-emebddings collect the perturbations on each word from passages and questions separately.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "\u2190 (P \u2212 mean(P )/std(P ) \u2022 \u03c3 6 Q \u2190 (Q \u2212 mean(Q))/std(Q) (P , X P ) 12 Qvec = emb(Q, X Q ) 13 Zvec = Xvec + Pvec + Qvec + \u03b4 14 gt",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>Model</td><td colspan=\"2\">SQuAD 1.1 SQuAD 2.0 EM EM</td><td>RACE Acc</td></tr><tr><td>BASE setting</td><td/><td/><td/></tr><tr><td>PQAT</td><td>85.87 (0.08)</td><td>81.66 (0.21)</td><td>76.32 (0.32)</td></tr><tr><td>PQAT + AT</td><td>85.96 (0.10)</td><td>81.11 \u2193 (0.14)</td><td>76.50 (0.35)</td></tr><tr><td>AT</td><td>85.64 \u2193 (0.15)</td><td>81.23 \u2193 (0.30)</td><td>75.94 \u2193 (0.37)</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Results on the development sets of SQuAD 1.1, SQuAD 2.0 and HotpotQA, and results on the test set of RACE. \u2020 : the results are taken fromShao et al. (2020).",
"num": null
},
"TABREF3": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Comparison of PQAT, standard AT and the combination. AT is short for Standard AT. Arrows indicate the drops relative to the PQAT. Numbers in the parentheses are the standard deviations.",
"num": null
},
"TABREF5": {
"content": "<table><tr><td>: Model performance (F1) on AddSent, Ad-</td></tr><tr><td>dOneSent and SQuAD 1.1 dev set. AT is short for Stan-</td></tr><tr><td>dard AT.</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
}
}
}
}