{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:12:58.200308Z"
},
"title": "Investigating Learning Dynamics of BERT Fine-Tuning",
"authors": [
{
"first": "Yaru",
"middle": [],
"last": "Hao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": "haoyaru@"
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": "lidong1@microsoft.com"
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": "fuwei@microsoft.com"
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {}
},
"email": "kexu@nlsde.buaa.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The recently introduced pre-trained language model BERT advances the state-of-the-art on many NLP tasks through the fine-tuning approach, but few studies investigate how the fine-tuning process improves the model performance on downstream tasks. In this paper, we inspect the learning dynamics of BERT fine-tuning with two indicators. We use JS divergence to detect the change of the attention mode and use SVCCA distance to examine the change to the feature extraction mode during BERT fine-tuning. We conclude that BERT fine-tuning mainly changes the attention mode of the last layers and modifies the feature extraction mode of the intermediate and last layers. Moreover, we analyze the consistency of BERT fine-tuning between different random seeds and different datasets. In summary, we provide a distinctive understanding of the learning dynamics of BERT finetuning, which sheds some light on improving the fine-tuning results.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The recently introduced pre-trained language model BERT advances the state-of-the-art on many NLP tasks through the fine-tuning approach, but few studies investigate how the fine-tuning process improves the model performance on downstream tasks. In this paper, we inspect the learning dynamics of BERT fine-tuning with two indicators. We use JS divergence to detect the change of the attention mode and use SVCCA distance to examine the change to the feature extraction mode during BERT fine-tuning. We conclude that BERT fine-tuning mainly changes the attention mode of the last layers and modifies the feature extraction mode of the intermediate and last layers. Moreover, we analyze the consistency of BERT fine-tuning between different random seeds and different datasets. In summary, we provide a distinctive understanding of the learning dynamics of BERT finetuning, which sheds some light on improving the fine-tuning results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "BERT (Bidirectional Encoder Representations from Transformers; Devlin et al. 2019 ) is a large pre-trained language model. It obtains state-of-theart results on a wide array of Natural Language Processing (NLP) tasks. Unlike other previous pretrained language models (Peters et al., 2018a; Radford et al., 2018) , BERT employs the multi-layer bidirectional Transformer encoder as the model architecture and proposes two novel pre-training tasks: the masked language modeling and the next sentence prediction.",
"cite_spans": [
{
"start": 63,
"end": 81,
"text": "Devlin et al. 2019",
"ref_id": "BIBREF8"
},
{
"start": 267,
"end": 289,
"text": "(Peters et al., 2018a;",
"ref_id": "BIBREF21"
},
{
"start": 290,
"end": 311,
"text": "Radford et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two approaches to adapt the pretrained language representations to the downstream tasks. One is the feature-based approach, where the parameters of the original pre-trained * Contribution during internship at Microsoft Research. model are frozen when applied on the downstream tasks (Mikolov et al., 2013; Pennington et al., 2014; Peters et al., 2018a) . Another one is the fine-tuning approach, where the pre-trained model and the taskspecific model are trained together (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018) . Take the classification task as an example, the new parameter added for BERT fine-tuning is a task-specific fully-connected layer, then all parameters of BERT and the classification layer are trained together to minimize the loss function. Peters et al. (2019) demonstrate that the finetuning approach of BERT generally outperforms the feature-based approach. We know that BERT encodes task-specific representations during finetuning, but it is unclear about the learning dynamics of BERT fine-tuning, i.e., how fine-tuning helps BERT to improve performance on downstream tasks.",
"cite_spans": [
{
"start": 293,
"end": 315,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF19"
},
{
"start": 316,
"end": 340,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 341,
"end": 362,
"text": "Peters et al., 2018a)",
"ref_id": "BIBREF21"
},
{
"start": 482,
"end": 500,
"text": "(Dai and Le, 2015;",
"ref_id": "BIBREF7"
},
{
"start": 501,
"end": 524,
"text": "Howard and Ruder, 2018;",
"ref_id": "BIBREF15"
},
{
"start": 525,
"end": 546,
"text": "Radford et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 789,
"end": 809,
"text": "Peters et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We investigate the learning dynamics of BERT fine-tuning with two indicators. First, we use Jensen-Shannon divergence to measure the change of the attention mode during BERT fine-tuning. Second, we use Singular Vector Canonical Correlation Analysis (SVCCA; Raghu et al. (2017) ) distance to measure the change of the feature extraction mode.",
"cite_spans": [
{
"start": 257,
"end": 276,
"text": "Raghu et al. (2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conclude that during the fine-tuning procedure, BERT mainly changes the attention mode of the last layers, and modifies the feature extraction mode of intermediate and last layers. At the same time, BERT has the ability to avoid catastrophic forgetting of knowledge in low layers. Moreover, we also analyze the consistency of the fine-tuning procedure. Across different random seeds and different datasets, we observe that the changes of low layers (0-9th layer) are generally consistent, which indicates that BERT has learned some common transferable language knowledge in low layers during the pre-training process, while the task-specific information is mostly encoded in intermediate and last layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We employ the BERT-large model 1 on a diverse set of NLP tasks: natural language inference (NLI), sentiment analysis (SA) and paraphrase detection (PD).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "For NLI, we use both the Multi-Genre Natural Language Inference dataset (MNLI; Williams et al. 2018) and the Recognizing Textual Entailment dataset (RTE; aggregated from , Haim et al. 2006 , Giampiccolo et al. 2007 , Bentivogli et al. 2009 The hyperparameter choice for fine-tuning is task-specific. We choose relatively optimal parameters for every dataset as suggested in Devlin et al. (2019) . The detailed hyperparameter configuration is shown in Table 1 . Moreover, we use Adam optimizer with the slanted triangular learning rate schedule (Howard and Ruder, 2018) and keep the dropout probability at 0.1.",
"cite_spans": [
{
"start": 79,
"end": 100,
"text": "Williams et al. 2018)",
"ref_id": "BIBREF28"
},
{
"start": 170,
"end": 188,
"text": ", Haim et al. 2006",
"ref_id": "BIBREF12"
},
{
"start": 189,
"end": 214,
"text": ", Giampiccolo et al. 2007",
"ref_id": "BIBREF11"
},
{
"start": 215,
"end": 239,
"text": ", Bentivogli et al. 2009",
"ref_id": "BIBREF1"
},
{
"start": 374,
"end": 394,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 544,
"end": 568,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "The model architecture of BERT is essentially based on the multi-layer bidirectional Transformer, the core function of which is the self-attention mechanism (Vaswani et al., 2017) . We use Jensen-Shannon divergence between two attention scores to detect changes of the attention mode in different layers during fine-tuning.",
"cite_spans": [
{
"start": 157,
"end": 179,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "Jensen-Shannon divergence JS divergence is a method of measuring the distance between two 1 github.com/google-research/bert probability distributions, it is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "D JS (P ||Q) = 1 2 D KL (P ||R) + 1 2 D KL (Q||R)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "where P and Q are two different probability distributions, R = P +Q 2 is the average probability distribution of them and D KL represents the Kullback-Leibler divergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "For every layer of BERT, there are 16 attention heads, each head produces an attention score of the input sequence. Each attention score is a probability distribution about how much attention a target word pays to other words. We compute JS divergence of attention scores between the original BERT model M 0 and the fine-tuned model M t on the development set, by calculating the average of the sum of JS divergence at each word and each attention head for every layer, the specific calculation formula is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "D JS (M t ||M 0 ) = 1 N 1 H N n=1 H h=1 1 W W i=1 D JS (A h t (word i )||A h 0 (word i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "where N denotes the number of development examples, H denotes the number of attention heads, W denotes the number of tokens in a sequence and A h t (word i ) denotes the attention score of the attention head h at word i in model M t . We present JS divergence results in Figure 1 , from which we observe the attention mode in low layers and intermediate layers do not change seriously, while the attention mode of last layers changes drastically. It indicates that the fine-tuning procedure has the ability to keep the attention mode of low layers consistent with the original BERT model, and changes the attention mode of the last layers to adapt BERT on specific tasks. While the attention score implies the inherent dependencies between different words, the output representation of every layer is the practical feature that the model extracts. We use SVCCA distance (Raghu et al., 2017) to quantify the change of these output representations during fine-tuning, which indicates the change of the feature extraction mode of BERT.",
"cite_spans": [
{
"start": 870,
"end": 890,
"text": "(Raghu et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 271,
"end": 279,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "Singular Vector Canonical Correlation Analysis. SVCCA distance is used as a metric to measure the differences of hidden representations between the original BERT model M 0 and the finetuned model M t at a target layer. It is calculated by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "D SV CCA (M t ||M 0 ) = 1 \u2212 1 c c i=1 \u03c1 (i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "where c denotes the hidden size of BERT, \u03c1 is the Canonical Correlation Analysis (CCA) resulting in a value between 0 and 1, which indicates how well correlated the two representations derived by two models are. For a detailed explanation of SVCCA, please see Raghu et al. (2017) . From Figure 2 , we observe that changes in SVCCA distance in higher layers are more distinct than lower layers. This phenomenon is reasonable because the output representation of higher layers undergoes more transformations, so the change of SVCCA distance in higher layers is more dramatic.",
"cite_spans": [
{
"start": 260,
"end": 279,
"text": "Raghu et al. (2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 287,
"end": 295,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "As the output representation of the last layer is directly used for classification, we aim to compare the effect of each layer on the final output representation respectively. We replace the parameters of /D\\HU,QGH[ 667 053& 01/, every layer in the fine-tuned model with their original values in the BERT model before fine-tuning and compute the SVCCA distance of the last layer output representation. The results are shown in Figure 3 , we observe that whether the low layers (0-10) are replaced with their original values or not, it has little effect on the final output representation. Moreover, the change in the intermediate and last layers will increase the SVCCA distance, which reflects that fine-tuning mainly changes the feature extraction mode of intermediate and last layers.",
"cite_spans": [],
"ref_spans": [
{
"start": 427,
"end": 435,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Fine-tuning changes the attention mode of the last layers",
"sec_num": "3"
},
{
"text": "In this section, we investigate the consistency of different fine-tuning procedures, including the consistency between different random seeds and the consistency between different datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency of Fine-tuning",
"sec_num": "5"
},
{
"text": "We fine-tune two models on every dataset with the same hyperparameters but different random seeds. We compute the pairwise JS divergence and SVCCA distance of each layer between the two models with different random seeds. As shown in Figure 4 , for large dataset MNLI and SST-2, the attention mode of low and intermediate layers is basically consistent between two different random seeds, whereas the attention mode of last layers is relatively divergent. For MRPC, the attention mode appears to be divergent at the 9th layer. Figure 5 illustrates SVCCA distance between different random seeds, we observe that the SVCCA distance gradually increases in all layers. For MNLI and SST-2, the increase of last layers is more obvious, and for MRPC, the increase appears to be obvious from the 13th layer.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 242,
"text": "Figure 4",
"ref_id": null
},
{
"start": 527,
"end": 535,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Consistency between different random seeds",
"sec_num": "5.1"
},
{
"text": "Besides the consistency between different random seeds, we also aim to investigate the consistency between different datasets. We fine-tune two models on two different datasets then evaluate on a combined dataset containing 200 examples respectively from both two datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between different datasets",
"sec_num": "5.2"
},
{
"text": "For different datasets of the same domain, we use two models fine-tuned on RTE and MNLI dataset. For different domains, we examine the consistency between MRPC and RTE, which both have pairwise input sequences, and the consistency between MRPC and SST-2, which have different patterns of input sequences. The JS divergence results and SVCCA distance results between different datasets are shown in Figure 6 and Figure 7 . Figure 6 and Figure 7 demonstrate that no matter two datasets are from the same domain or the different domain, the attention mode and the feature extraction mode of low layers (0-7 layer) are consistent, which indicates BERT studies some common language knowledge during the pre-training procedure and low layers are stable to change their original modes. JS divergence of the attention scores and SVCCA distance of the output representations in intermediate and last layers between two models are more distinct when the difference between two training datasets increases. The consistency between datasets from similar tasks like RTE and MNLI is still relatively strong in last layers compared to the consistency between datasets from the different domain. And when the input sequence pattern and the domain of two datasets are different, the consistency of intermediate and last layers is weak as expected.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Figure 6",
"ref_id": null
},
{
"start": 411,
"end": 419,
"text": "Figure 7",
"ref_id": "FIGREF6"
},
{
"start": 422,
"end": 430,
"text": "Figure 6",
"ref_id": null
},
{
"start": 435,
"end": 443,
"text": "Figure 7",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Consistency between different datasets",
"sec_num": "5.2"
},
{
"text": "Pre-trained language models (Radford et al., 2018; Devlin et al., 2019; Clark et al., 2020; Bao et al., 2020) stimulate the research interest on the interpretation of these black-box models. Peters et al. (2018b) show that the biLM-based models learn representations that vary with network depth, the lower layers specialize in local syntactic relationships and the higher layers model longer range relationships. Kovaleva et al. (2019) propose a methodology and offer the analysis of BERTs capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. Hao et al. (2019) visualize the loss landscapes and optimization trajectories of the BERT fine-tuning procedure and find that low layers of the BERT model are more invariant and transferable across tasks. Merchant et al. (2020) find that fine-tuning primarily affects the top layers of BERT, but with noteworthy variation across tasks. Hao et al. (2020) propose a self-attention attribution method to interpret information flow within Transformer.",
"cite_spans": [
{
"start": 28,
"end": 50,
"text": "(Radford et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 51,
"end": 71,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 72,
"end": 91,
"text": "Clark et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 92,
"end": 109,
"text": "Bao et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 191,
"end": 212,
"text": "Peters et al. (2018b)",
"ref_id": "BIBREF22"
},
{
"start": 414,
"end": 436,
"text": "Kovaleva et al. (2019)",
"ref_id": "BIBREF16"
},
{
"start": 599,
"end": 616,
"text": "Hao et al. (2019)",
"ref_id": "BIBREF13"
},
{
"start": 804,
"end": 826,
"text": "Merchant et al. (2020)",
"ref_id": "BIBREF18"
},
{
"start": 935,
"end": 952,
"text": "Hao et al. (2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We use JS divergence to detect the change of the attention mode in different layers during BERT fine-tuning and use SVCCA distance to detect the change of the feature extraction mode. We observe that BERT fine-tuning mainly changes the attention mode of last layers and modifies the feature extraction mode of intermediate and last layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
},
{
"text": "We also demonstrate that the changes of low layers are consistent between different random seeds and different datasets, which indicates that BERT learns common transferable language knowledge in low layers. In future research, we would like to explore learning dynamics for cross-lingual pretrained models (Conneau and Lample, 2019; Conneau et al., 2020; Chi et al., 2020) .",
"cite_spans": [
{
"start": 307,
"end": 333,
"text": "(Conneau and Lample, 2019;",
"ref_id": "BIBREF5"
},
{
"start": 334,
"end": 355,
"text": "Conneau et al., 2020;",
"ref_id": "BIBREF4"
},
{
"start": 356,
"end": 373,
"text": "Chi et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "The work was partially supported by National Natural Science Foundation of China (NSFC) [Grant No. 61421003].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "UniLMv2: Pseudo-masked language models for unified language model pre-training",
"authors": [
{
"first": "Hangbo",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Songhao",
"middle": [],
"last": "Piao",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hsiao-Wuen",
"middle": [],
"last": "Hon",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.12804"
]
},
"num": null,
"urls": [],
"raw_text": "Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jian- feng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2020. UniLMv2: Pseudo-masked language models for uni- fied language model pre-training. arXiv preprint arXiv:2002.12804.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The fifth pascal recognizing textual entailment challenge",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc Text Analysis Conference (TAC09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TAC09.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Infoxlm: An information-theoretic framework for cross-lingual language model pre-training",
"authors": [
{
"first": "Zewen",
"middle": [],
"last": "Chi",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Saksham",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xian-Ling",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Heyan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zewen Chi, Li Dong, Furu Wei, Nan Yang, Sak- sham Singhal, Wenhui Wang, Xia Song, Xian- Ling Mao, Heyan Huang, and Ming Zhou. 2020. Infoxlm: An information-theoretic framework for cross-lingual language model pre-training. CoRR, abs/2007.07834.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "ELECTRA: Pretraining text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre- training text encoders as discriminators rather than generators. In ICLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7057--7067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7057-7067. Curran Associates, Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The pascal recognising textual entailment challenge",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment",
"volume": "",
"issue": "",
"pages": "177--190",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/11736790_9"
]
},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine learning challenges. evalu- ating predictive uncertainty, visual object classifica- tion, and recognising tectual entailment, pages 177- 190.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semisupervised sequence learning",
"authors": [
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew M. Dai and Quoc V. Le. 2015. Semi- supervised sequence learning. CoRR, abs/1511.01432.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unified language model pre-training for natural language understanding and generation",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hsiao-Wuen",
"middle": [],
"last": "Hon",
"suffix": ""
}
],
"year": 2019,
"venue": "33rd Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi- aodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understand- ing and generation. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The third PASCAL recognizing textual entailment challenge",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recogniz- ing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1-9, Prague. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The second PASCAL recognising textual entailment challenge",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Bar Haim",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Ferro",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising tex- tual entailment challenge. In Proceedings of the Sec- ond PASCAL Challenges Workshop on Recognising Textual Entailment.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Visualizing and understanding the effectiveness of BERT",
"authors": [
{
"first": "Yaru",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4143--4152",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1424"
]
},
"num": null,
"urls": [],
"raw_text": "Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019. Visu- alizing and understanding the effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4143- 4152, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Self-attention attribution: Interpreting information interactions inside transformer",
"authors": [
{
"first": "Yaru",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.11207"
]
},
"num": null,
"urls": [],
"raw_text": "Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2020. Self- attention attribution: Interpreting information inter- actions inside transformer. CoRR, abs/2004.11207.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Fine-tuned language models for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Fine- tuned language models for text classification. CoRR, abs/1801.06146.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Revealing the dark secrets of BERT",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4365--4374",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1445"
]
},
"num": null,
"urls": [],
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "What happens to BERT embeddings during fine-tuning?",
"authors": [
{
"first": "Amil",
"middle": [],
"last": "Merchant",
"suffix": ""
},
{
"first": "Elahe",
"middle": [],
"last": "Rahimtoroghi",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. 2020. What happens to bert em- beddings during fine-tuning?",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. CoRR, abs/1310.4546.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532-1543, Doha, Qatar. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dissecting contextual word embeddings: Architecture and representation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1499--1509",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 1499-1509, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "To tune or not to tune? adapting pretrained representations to diverse tasks",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)",
"volume": "",
"issue": "",
"pages": "7--14",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4302"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pre- trained representations to diverse tasks. In Proceed- ings of the 4th Workshop on Representation Learn- ing for NLP (RepL4NLP-2019), pages 7-14, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability",
"authors": [
{
"first": "Maithra",
"middle": [],
"last": "Raghu",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Gilmer",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
},
{
"first": "Jascha",
"middle": [],
"last": "Sohl-Dickstein",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "6076--6085",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. 2017. Svcca: Singular vec- tor canonical correlation analysis for deep learning dynamics and interpretability. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vish- wanathan, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems 30, pages 6076- 6085. Curran Associates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Łukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "33rd Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "JS divergence of attention scores of every layer between the original BERT model and the finetuned model.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Fine-tuning modifies the feature extraction mode of the intermediate and the last layers",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "SVCCA distance of individual layers between the original BERT model and the fine-tuned model.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "SVCCA distance of the last layer between the original fine-tuned model and the fine-tuned model with parameters of a target layer replaced with their pretrained values.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "Figure 4: JS divergence between two models with different random seeds.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF5": {
"text": "Figure 6: JS divergence between different datasets.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF6": {
"text": "SVCCA distance between different datasets.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "Hyperparameter configuration for BERT fine-tuning. LR: learning rate, BS: batch size, NE: number of epochs.",
"content": "<table/>",
"num": null
}
}
}
}