ACL-OCL / Base_JSON /prefixR /json /ranlp /2021.ranlp-1.112.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:51:53.397957Z"
},
"title": "Pre-training a BERT with Curriculum Learning by Increasing Block-Size of Input Text",
"authors": [
{
"first": "Koichi",
"middle": [],
"last": "Nagatsuka",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soka University Tokyo",
"location": {
"country": "Japan"
}
},
"email": ""
},
{
"first": "Clifford",
"middle": [],
"last": "Broni-Bediako",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soka University Tokyo",
"location": {
"country": "Japan"
}
},
"email": ""
},
{
"first": "Masayasu",
"middle": [],
"last": "Atsumi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Soka University Tokyo",
"location": {
"country": "Japan"
}
},
"email": "matsumi@soka.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recently, pre-trained language representation models such as BERT and RoBERTa have achieved significant results in a wide range of natural language processing (NLP) tasks; however, they require extremely high computational cost. Curriculum learning (CL) is one of the potential solutions to alleviate this problem. CL is a training strategy in which training samples are given to models in a meaningful order instead of by random sampling. In this work, we propose a new CL method which gradually increases the block-size of input text for training the self-attention mechanism of BERT and its variants using the maximum available batch-size. Experiments in low-resource settings show that our approach outperforms the baseline in terms of convergence speed and final performance on downstream tasks.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Recently, pre-trained language representation models such as BERT and RoBERTa have achieved significant results in a wide range of natural language processing (NLP) tasks; however, they require extremely high computational cost. Curriculum learning (CL) is one of the potential solutions to alleviate this problem. CL is a training strategy in which training samples are given to models in a meaningful order instead of by random sampling. In this work, we propose a new CL method which gradually increases the block-size of input text for training the self-attention mechanism of BERT and its variants using the maximum available batch-size. Experiments in low-resource settings show that our approach outperforms the baseline in terms of convergence speed and final performance on downstream tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent years have seen a series of breakthroughs in pre-trained language representation models. The development of pre-training methods like BERT (Devlin et al., 2019) and its variants has led to large improvements in many down-stream tasks such as paraphrase identification, sentence textual similarity, sentiment analysis, and natural language inference. One of the advantages of training these models is that they can leverage unlabeled large-scale corpora, which are more readily available than labeled ones. However, training these models with large-scale corpora is very expensive in terms of computational time and memory footprint. In the literature, three main approaches have been adopted to address this problem: the architecture-based approach (Sanh et al., 2019; Voita et al., 2019; Sukhbaatar et al., 2019; de Wynter and Perry, 2020; Lan et al., 2020), the task-based approach (Clark et al., 2020) and the dataset-based approach (Elman, 1993; Bengio et al., 2009; Moore and Lewis, 2010; Gururangan et al., 2020). While the architecture-based and task-based methods have been extensively studied in the context of pre-training methods for natural language processing (NLP), the dataset-based approach is relatively unexplored. To this end, we adopt a dataset-based method called Curriculum Learning (CL), which controls the order of training samples so that the model may converge faster with better performance.",
"cite_spans": [
{
"start": 146,
"end": 167,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 786,
"end": 805,
"text": "(Sanh et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 806,
"end": 825,
"text": "Voita et al., 2019;",
"ref_id": "BIBREF26"
},
{
"start": 826,
"end": 850,
"text": "Sukhbaatar et al., 2019;",
"ref_id": "BIBREF24"
},
{
"start": 851,
"end": 877,
"text": "de Wynter and Perry, 2020;",
"ref_id": "BIBREF29"
},
{
"start": 878,
"end": 895,
"text": "Lan et al., 2020)",
"ref_id": null
},
{
"start": 918,
"end": 937,
"text": "Clark et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 965,
"end": 978,
"text": "(Elman, 1993;",
"ref_id": "BIBREF6"
},
{
"start": 979,
"end": 999,
"text": "Bengio et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 1000,
"end": 1022,
"text": "Moore and Lewis, 2010;",
"ref_id": "BIBREF16"
},
{
"start": 1023,
"end": 1047,
"text": "Gururangan et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The idea of a CL-like approach was originally proposed by Elman (1993). The idea is based on the actual learning mechanism of humans and animals, where basic concepts are acquired first and more complex ones are gradually learned. Bengio et al. (2009) formalized this concept as CL for training neural networks. Through experimental analysis, Bengio et al. (2009) showed the benefit of CL for convergence speed and performance in shape recognition and language modeling tasks. One of the most significant challenges when adapting CL to a new task is finding a criterion for measuring the difficulty of the training samples. For example, in object recognition, the size of objects is a good measure of difficulty (Shi and Ferrari, 2016; Ionescu et al., 2016), and the presence of low-frequency words in input text is an indicator of difficulty in language modeling (Bengio et al., 2009). These criteria vary greatly depending on the task; thus, it is not easy to define a measure of difficulty suitable for a particular task.",
"cite_spans": [
{
"start": 56,
"end": 68,
"text": "Elman (1993)",
"ref_id": "BIBREF6"
},
{
"start": 232,
"end": 252,
"text": "Bengio et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 340,
"end": 360,
"text": "Bengio et al. (2009)",
"ref_id": "BIBREF0"
},
{
"start": 719,
"end": 742,
"text": "(Shi and Ferrari, 2016;",
"ref_id": "BIBREF21"
},
{
"start": 743,
"end": 764,
"text": "Ionescu et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 871,
"end": 892,
"text": "(Bengio et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most studies in the field of CL for NLP have proposed a variety of difficulty measures by leveraging heuristics of the target tasks with neural networks (Bengio et al., 2009; Kocmi and Bojar, 2017; Soviany et al., 2021; Spitkovsky et al., 2009; Cirik et al., 2016; Rajeswar et al., 2017). On the other hand, it is not clear how to design CL for language representation models such as BERT. In pre-training BERT, distributed word representations are learned by optimizing the masked language modeling (MLM) loss, which is computed by predicting a masked word or token in an input text. The input to the model is not a single sentence but an arbitrary-length span of text called a block. Hence, it is not obvious how to measure the difficulty of training samples using the CL-based approaches proposed in previous studies.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Bengio et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 172,
"end": 194,
"text": "Kocmi and Bojar, 2017;",
"ref_id": "BIBREF10"
},
{
"start": 195,
"end": 216,
"text": "Soviany et al., 2021;",
"ref_id": null
},
{
"start": 217,
"end": 241,
"text": "Spitkovsky et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 242,
"end": 261,
"text": "Cirik et al., 2016;",
"ref_id": "BIBREF3"
},
{
"start": 262,
"end": 284,
"text": "Rajeswar et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The key component of BERT is the multi-head self-attention mechanism, which learns to compute token embeddings from their context (Devlin et al., 2019). Multi-head self-attention can be thought of as a problem of searching for important token-pairs based on the relative magnitude of attention among all token-pairs in an input text. This observation leads us to speculate that it might be possible to formulate a CL strategy by focusing on the effective training of the self-attention mechanism in BERT. Although each individual head of the multi-head self-attention mechanism can learn any dependency among tokens, most heads tend to pay more attention to local dependencies than to global ones (Kovaleva et al., 2019; Brunner et al., 2019; Sukhbaatar et al., 2019; Jiang et al., 2020). It could be easier to train local self-attention on shorter blocks of input text than global self-attention on longer ones. Therefore, the block-size of input text can be used as an effective criterion to measure the difficulty-level of training samples for BERT.",
"cite_spans": [
{
"start": 126,
"end": 147,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 736,
"end": 759,
"text": "(Kovaleva et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 760,
"end": 781,
"text": "Brunner et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 782,
"end": 806,
"text": "Sukhbaatar et al., 2019;",
"ref_id": "BIBREF24"
},
{
"start": 807,
"end": 826,
"text": "Jiang et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we introduce a new CL method which gradually increases the block-size of input text for pre-training BERT using the maximum available batch-size, to accomplish a convergence speed-up and also improve performance on down-stream tasks. Since our approach is very simple, it can be applied to BERT and its variants with little effort. Using a small-scale corpus, the experimental results demonstrate that our proposed approach outperforms the baseline on GLUE tasks with faster convergence speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To reduce the memory footprint and improve the training speed of pre-trained language models, prior works have shown that architecture-based approaches are very useful. Sanh et al. (2019) proposed leveraging knowledge distillation to train a smaller version of BERT with faster training speed while maintaining comparable performance. Lan et al. (2020) used factorized embedding parameterization and cross-layer parameter sharing, which led to a reduction in parameter size and training time. de Wynter and Perry (2020) applied neural architecture search to select the optimal architecture of BERT and successfully compressed the size of the model. Task-based approaches have also been explored for pre-training language models with high training efficiency. Permutation language modeling was introduced to retain the benefits of autoregressive models while allowing the models to capture bidirectional context. Instead of performing pre-training with the MLM task, Clark et al. (2020) trained a BERT as a discriminator that determines whether each corrupted token was replaced by a generator model.",
"cite_spans": [
{
"start": 169,
"end": 187,
"text": "Sanh et al. (2019)",
"ref_id": "BIBREF20"
},
{
"start": 337,
"end": 354,
"text": "Lan et al. (2020)",
"ref_id": null
},
{
"start": 962,
"end": 981,
"text": "Clark et al. (2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recent studies have shown that CL is a successful approach for a wide range of machine learning applications (Soviany et al., 2021; Wang et al., 2021), including the fine-tuning of large-scale language models such as BERT (Xu et al., 2020). Some large-scale language models like GPT-3 (Brown et al., 2020) and T5 (Raffel et al., 2020) adopted non-uniform mixing strategies which control the amount of training samples drawn from multiple corpora. However, CL strategies have not been directly applied to pre-training large-scale language models. There exist many studies of CL which used the length of sentences or input sequences as a measure of difficulty in NLP tasks, including neural machine translation (Kocmi and Bojar, 2017), sentiment analysis (Cirik et al., 2016), parsing (Spitkovsky et al., 2009), poem generation (Rajeswar et al., 2017) and reading comprehension (Tay et al., 2019). In this work, we exploit the block-size of input text in the context of the self-attention mechanism as a measure of difficulty for pre-training BERT.",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "(Soviany et al., 2021;",
"ref_id": null
},
{
"start": 132,
"end": 150,
"text": "Wang et al., 2021)",
"ref_id": "BIBREF28"
},
{
"start": 223,
"end": 240,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF30"
},
{
"start": 290,
"end": 310,
"text": "(Brown et al., 2020)",
"ref_id": null
},
{
"start": 318,
"end": 339,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 705,
"end": 728,
"text": "(Kocmi and Bojar, 2017)",
"ref_id": "BIBREF10"
},
{
"start": 750,
"end": 770,
"text": "(Cirik et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 781,
"end": 806,
"text": "(Spitkovsky et al., 2009)",
"ref_id": "BIBREF23"
},
{
"start": 880,
"end": 898,
"text": "(Tay et al., 2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "An overview of the proposed CL method is presented in Figure 1. The method is divided into two stages: (a) splitting a corpus based on specific block-sizes, and (b) gradual training of BERT by increasing the block-size. In the first stage, we split the original corpus into a series of input blocks with pre-defined lengths. In the second stage, we train a model by changing the training samples from the short block-sizes to the long ones after a pre-defined number of training steps. In training, some tokens in a block are randomly masked to perform the MLM task. We describe the MLM task and the details of the two stages of our CL approach in this section.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 63,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Let x = x_1, x_2, ..., x_T denote a sequence of original tokens, where T is the block-size. By randomly masking an arbitrary number of tokens, we obtain a corrupted input sequence x\u0302. Given the corrupted sequence x\u0302, MLM is the task of predicting the original sequence x. The training objective is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Masked Language Modeling (MLM)",
"sec_num": "3.1"
},
{
"text": "\\max_{\\theta} \\log p_{\\theta}(x \\mid \\hat{x}) \\approx \\sum_{i=1}^{T} m_i \\log p_{\\theta}(x_i \\mid x_{<i}, x_{>i}) \\quad (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Masked Language Modeling (MLM)",
"sec_num": "3.1"
},
{
"text": "where x_i is the predicted token at position i and \u03b8 denotes the parameters of the model. m_i indicates the presence of a masked token: m_i = 1 if x_i is masked, and 0 otherwise. For this objective, we optimize the model parameters using the cross-entropy loss. In the MLM task, the model infers masked tokens from bi-directional context (x_{<i} and x_{>i}). The block-size restricts the available context information in both directions and thus affects the MLM accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Masked Language Modeling (MLM)",
"sec_num": "3.1"
},
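The masking step and objective above can be sketched in plain Python. This is a minimal illustration under our own assumptions, not the authors' implementation: the function names (`mlm_mask`, `mlm_objective`), the `[MASK]` placeholder string, and the fixed seed are ours, and in practice the per-position log-probabilities would come from the model rather than being supplied directly.

```python
import random

def mlm_mask(tokens, rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace tokens at the given rate; return the corrupted
    sequence (x-hat) and the indicators m_i (1 if position i is masked)."""
    rng = random.Random(seed)
    corrupted, indicators = [], []
    for tok in tokens:
        masked = rng.random() < rate
        corrupted.append(mask_token if masked else tok)
        indicators.append(1 if masked else 0)
    return corrupted, indicators

def mlm_objective(log_probs, indicators):
    """Eq. (1): sum of log p(x_i | context) over masked positions only."""
    return sum(m * lp for m, lp in zip(indicators, log_probs))
```

Maximizing this sum (equivalently, minimizing cross-entropy over the masked positions) is the pre-training objective; the block-size T bounds how much bidirectional context each prediction can see.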
{
"text": "In the first stage, we split the original corpus into training samples of the specified size. Each input text for training BERT is not a linguistically coherent unit like a sentence or multiple sentences, but a fixed span of contiguous text (Devlin et al., 2019) that we call a block. In other words, the input is guaranteed neither to end with a period nor to start with the first word of a sentence. It has been argued, through extensive experiments, that it is desirable for the input sequence to be at most 512 tokens. We follow this setting to obtain blocks of a specified length from the corpus as training samples. We train a byte-level Byte-Pair Encoding (BPE) tokenizer as in (Radford et al., 2019) to split the raw text into a sequence of tokens. By using byte-level BPE, we can decompose all words, including out-of-vocabulary ones, which are likely to appear at test time, especially when using a small training dataset. In the experiments, we set the vocabulary size to 20,000.",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 681,
"end": 703,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Splitting a Corpus Based on Block-sizes",
"sec_num": "3.2"
},
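The splitting step can be sketched as follows, assuming the corpus has already been tokenized into a flat list of token ids (e.g. by a byte-level BPE tokenizer). The helper names and the choice to drop the short tail block are our assumptions, not details taken from the paper.

```python
def split_into_blocks(token_ids, block_size):
    """Split a flat token stream into contiguous, fixed-length blocks.
    Blocks ignore sentence boundaries: a block may start or end
    mid-sentence. A short remainder at the end is dropped here."""
    n_blocks = len(token_ids) // block_size
    return [token_ids[i * block_size:(i + 1) * block_size]
            for i in range(n_blocks)]

# One dataset per curriculum stage, all drawn from the same corpus.
def build_curriculum_datasets(token_ids, block_sizes=(64, 128, 256, 512)):
    return {bs: split_into_blocks(token_ids, bs) for bs in block_sizes}
```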
{
"text": "In the second stage, we train a model step-by-step with four different block-sizes: 64, 128, 256, and 512. We first train the model with the shortest block-size, 64 in this case, for an arbitrary number of steps. Then, we retrain the model with block-sizes of 128 and 256, respectively, for the same number of steps. Finally, we retrain the model with the longest block-size of 512 until it converges. For masking tokens, we use a fixed masking rate of 0.15. When restarting the training, we always re-initialize the learning rate. To accelerate training, we use the maximum available batch-size for each block-size. Since our proposed method limits the block-size in the early training phase, we can employ a larger batch-size with a shorter block-size, which improves the overall training efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gradual Training",
"sec_num": "3.3"
},
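The gradual training schedule can be summarized as a simple phase list. The batch-size of 16 for block-size 512 and the 10,000 steps per phase are stated in the paper; the batch-sizes paired with the shorter block-sizes below are illustrative placeholders for "the maximum that fits in memory", which depends on the hardware.

```python
# (block_size, batch_size) pairs; only 512 -> 16 comes from the paper,
# the other batch-sizes are illustrative "maximum available" values.
CURRICULUM = [(64, 128), (128, 64), (256, 32), (512, 16)]

def curriculum_phases(steps_per_phase=10_000):
    """Yield (block_size, batch_size, n_steps) for each phase.
    The final phase runs until convergence, signalled here by None."""
    for i, (block_size, batch_size) in enumerate(CURRICULUM):
        is_last = i == len(CURRICULUM) - 1
        yield block_size, batch_size, None if is_last else steps_per_phase
```

Per the paper, the learning rate is re-initialized at the start of each phase, so each (block-size, batch-size) pair restarts training from the previous phase's weights but not its optimizer state.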
{
"text": "In the experiments, we evaluate our proposed CL approach in terms of convergence speed and model performance. We use wikitext-2 (Merity et al., 2016) for pre-training RoBERTa, which is a variant of BERT. For fine-tuning on down-stream tasks, we use the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018). All training and fine-tuning were carried out on a GeForce RTX 3090 with 24GB memory.",
"cite_spans": [
{
"start": 315,
"end": 334,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Wikitext-2: Although BERT and its variants (e.g., RoBERTa) are commonly trained with large-scale corpora containing over 3 billion words, we use wikitext-2 (Merity et al., 2016), a small corpus, to enable pre-training with limited computational resources. Wikitext-2 is one of the standard corpora for language models and consists of 720 good-quality articles from English Wikipedia. It has about 2M tokens for training, and 217K and 245K tokens for validation and testing, respectively.",
"cite_spans": [
{
"start": 158,
"end": 179,
"text": "(Merity et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We fine-tune our models on the GLUE benchmark (Wang et al., 2018). GLUE consists of nine datasets for measuring the generalization performance of pre-trained language models. We use only 7 datasets (SST-2, MRPC, QQP, MNLI-m, QNLI, RTE, and WNLI) out of the 9 GLUE benchmarks. CoLA and STS-B are excluded because of their tendency to over-fit, which stems from the small-scale pre-training.",
"cite_spans": [
{
"start": 47,
"end": 66,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "GLUE Benchmarks:",
"sec_num": null
},
{
"text": "We perform both curriculum training and anti-curriculum training in the pre-training of RoBERTa. In curriculum training, we increase the block-size of training samples from the shortest to the longest. In anti-curriculum training, on the other hand, training samples with the longest block-size are given to the model first as the most difficult ones, and the difficulty-level of the training samples is then gradually reduced by shortening the block-size during training. By comparing curriculum training with anti-curriculum training, which follows the opposite sampling order, we show that increasing the block-size is an effective CL method for pre-trained language representation models. For all models, we use the same RoBERTa-base architecture, which has 12 layers with a hidden size of 768. Each layer has 12 attention heads. We use AdamW (Loshchilov and Hutter, 2017) with a learning rate of 1e-5 for pre-training, with four different batch-sizes depending on the block-sizes as shown in Table 2. In fine-tuning, we use the same optimizer as in pre-training and set the learning rate to 5e-5 and the batch-size to 64 for all tasks except QNLI, where we use a learning rate of 2e-5 and a batch-size of 16 due to the memory limitation.",
"cite_spans": [
{
"start": 844,
"end": 873,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 996,
"end": 1003,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
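The hyperparameters above can be collected into a small config helper. This is only a convenience sketch: the dict layout and function name are ours, while the learning rates, batch-sizes, and the QNLI exception are the values stated in the paper.

```python
# Pre-training and fine-tuning settings from the experiments section.
PRETRAIN = {"optimizer": "AdamW", "lr": 1e-5}  # batch-size varies with block-size

def finetune_config(task):
    """Return fine-tuning settings; QNLI uses a smaller batch-size and
    learning rate due to the memory limitation noted in the paper."""
    if task == "QNLI":
        return {"optimizer": "AdamW", "lr": 2e-5, "batch_size": 16}
    return {"optimizer": "AdamW", "lr": 5e-5, "batch_size": 64}
```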
{
"text": "We define the training time of the overall curriculum training as the total training time of every training phase corresponding to each block-size. In both curriculum and anti-curriculum training, our models are trained for 10,000 steps with each block-size, except for the last block-size, with which we continue training until the models converge. For comparative evaluation, we train RoBERTa without CL, using random sampling, as the baseline model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
{
"text": "Figure 2(a) shows the comparison between our curriculum model, which increases the block-size with the maximum available batch-size, and the baseline in terms of validation loss throughout pre-training. Compared to the loss of the baseline model, which converged at around 5.0, the loss of the curriculum model decreased steadily and achieved faster convergence, outperforming the baseline by about 2 points in validation loss. The learning curve of the baseline model plateaued until 35K steps, after which the loss finally started to descend again. On the other hand, the loss of the curriculum model decreased stably every time we switched the difficulty-level of the training samples. To analyze the effect of increasing the batch-size on convergence speed, we conducted an ablation study by fixing the batch-size to 16 (the maximum size when the block-size is set to 512). Figure 2(b) shows the result of the curriculum model which increases the block-size with this fixed batch-size. Compared to our proposed curriculum model (Figure 2(a)), it required about 60K steps to converge, which is the same training time as the baseline. This result indicates that CL improves final performance but does not contribute to a convergence speed-up when the batch-size is fixed. Table 1 presents statistical information about the training of the baseline and each curriculum phase. While the baseline model converged after about 60K steps, our curriculum model required just 40K steps in total, which is about 1.5 times faster than the baseline. Although using a large batch-size with a small block-size tended to lengthen each training step, it allows a larger number of training samples to be processed, and the total training time was reduced by about 1.0 hour. Table 2 presents a comparison of training efficiency between the baseline and our curriculum model. With respect to training samples per second, the curriculum model achieved better training efficiency, about 5 times higher than the baseline, and also resulted in a much better validation loss. Table 3 shows the GLUE scores on the development datasets. For all 6 down-stream tasks, our curriculum model at the bottom of the table outperformed the baseline model at the top. In particular, performance on SST-2, MRPC, QQP, MNLI-m and QNLI was higher than the baseline by a large margin (+4.47 on SST-2, +3.19 on MRPC, +6.48 F1 score and +3.37 accuracy on QQP, +8.89 on MNLI-m, and +15.74 on QNLI), while accuracy on RTE and WNLI was extremely low for both the curriculum model and the baseline. Although the individual scores of our model are not high due to the small-scale pre-training, relative improvements from CL were generally observed.",
"cite_spans": [],
"ref_spans": [
{
"start": 868,
"end": 879,
"text": "Figure 2(b)",
"ref_id": "FIGREF1"
},
{
"start": 1021,
"end": 1033,
"text": "(Figure 2(a)",
"ref_id": "FIGREF1"
},
{
"start": 1268,
"end": 1275,
"text": "Table 1",
"ref_id": null
},
{
"start": 1758,
"end": 1765,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 2059,
"end": 2066,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Convergence Speed",
"sec_num": "4.3.1"
},
{
"text": "Compared with our curriculum model, the performance of the anti-curriculum model was lower on every down-stream task. This result indicates that increasing, rather than decreasing, the block-size is effective for improving generalization performance. Interestingly, the performance of the anti-curriculum model was better than or equal to the baseline in all tasks except QNLI. One possible reason for this result is that generating training samples with various block-sizes may have the same impact as data augmentation. The anti-curriculum model, however, failed to learn the QNLI task because the model is optimized for short text of 64 tokens at the end of training, while QNLI contains samples whose input length is longer than 64 tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Anti-Curriculum",
"sec_num": "4.3.3"
},
{
"text": "As an ablation study, we tested two additional types of models: a 3-stage curriculum and a 2-stage curriculum. For the 3-stage curriculum, we removed one specific block-size from our training schedule and conducted CL with the remaining block-sizes. For the 2-stage curriculum, we trained the model only with the shortest block-size (64 tokens) and the longest one (512 tokens).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.3.4"
},
{
"text": "As Table 3 shows, our curriculum model with the full training schedule is equal to or slightly better than the 2-stage and 3-stage models on each down-stream task. However, for tasks where the performance gaps are not significant, the 2-stage and 3-stage curricula are more advantageous because of their shorter training time. As in the case of the anti-curriculum, the curriculum model without the block-size of 512 tokens, which was not optimized for the largest block-size, had lower performance on QNLI. The 2-stage curriculum, which requires the least training time, achieved almost the same accuracy as the full curriculum on MRPC and MNLI-m, but relatively poor performance on tasks such as SST-2. These experiments show that there is room to further speed up CL by modifying the curriculum schedule over the block-sizes. Moreover, the results also indicate that the impact of CL on performance differs depending on the down-stream task.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.3.4"
},
{
"text": "In this paper, we proposed a new CL method for pre-training BERT which progressively increases the block-size of input text. Our approach is very simple and thus easy to implement. Experiments in a low-resource setting have shown that the proposed method leads to faster convergence and better performance on down-stream tasks. In future research, we will expand the corpus and validate the scalability of our approach. In addition, we plan to investigate when the difficulty-level should be changed during training and how this affects model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Curriculum learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Louradour",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 41-48.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On identifiability in transformers",
"authors": [
{
"first": "Gino",
"middle": [],
"last": "Brunner",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Damian",
"middle": [],
"last": "Pascual",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Richter",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Wattenhofer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.04211"
]
},
"num": null,
"urls": [],
"raw_text": "Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2019. On identifiability in transformers. arXiv preprint arXiv:1908.04211.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Visualizing and understanding curriculum learning for long short-term memory networks",
"authors": [
{
"first": "Volkan",
"middle": [],
"last": "Cirik",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Louis-Philippe",
"middle": [],
"last": "Morency",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.06204"
]
},
"num": null,
"urls": [],
"raw_text": "Volkan Cirik, Eduard Hovy, and Louis-Philippe Morency. 2016. Visualizing and understanding curriculum learning for long short-term memory networks. arXiv preprint arXiv:1611.06204.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning and development in neural networks: The importance of starting small",
"authors": [
{
"first": "Jeffrey",
"middle": [
"L"
],
"last": "Elman",
"suffix": ""
}
],
"year": 1993,
"venue": "Cognition",
"volume": "48",
"issue": "1",
"pages": "71--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey L. Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71-99.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10964"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "How hard can it be? estimating the difficulty of visual search in an image",
"authors": [
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Alexe",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Leordeanu",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "Dim",
"middle": [
"P"
],
"last": "Papadopoulos",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Ferrari",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "2157--2166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Tudor Ionescu, Bogdan Alexe, Marius Leordeanu, Marius Popescu, Dim P. Papadopoulos, and Vittorio Ferrari. 2016. How hard can it be? estimating the difficulty of visual search in an image. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2157-2166.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convbert: Improving bert with span-based dynamic convolution",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Weihao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Daquan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yunpeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jiashi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Shuicheng",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.02496"
]
},
"num": null,
"urls": [],
"raw_text": "Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, and Shuicheng Yan. 2020. Con- vbert: Improving bert with span-based dynamic con- volution. arXiv preprint arXiv:2008.02496.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Curriculum learning and minibatch bucketing in neural machine translation",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "379--386",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2017. Curriculum learn- ing and minibatch bucketing in neural machine trans- lation. In Proceedings of the International Confer- ence Recent Advances in Natural Language Process- ing, RANLP 2017, pages 379-386.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Revealing the dark secrets of BERT",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4365--4374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Sori- cut. 2020. ALBERT: A Lite BERT for Self- supervised Learning of Language Representations. arXiv:1909.11942 [cs].",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.07843"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture mod- els. arXiv preprint arXiv:1609.07843.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Intelligent selection of language model training data",
"authors": [
{
"first": "Robert",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "220--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert C Moore and Will Lewis. 2010. Intelligent selection of language model training data. In Pro- ceedings of the ACL 2010 Conference Short Papers, pages 220-224.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Machine Learning Research",
"volume": "21",
"issue": "140",
"pages": "1--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the lim- its of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adversarial generation of natural language",
"authors": [
{
"first": "Sai",
"middle": [],
"last": "Rajeswar",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Dutil",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.10929"
]
},
"num": null,
"urls": [],
"raw_text": "Sai Rajeswar, Sandeep Subramanian, Francis Dutil, Christopher Pal, and Aaron Courville. 2017. Adver- sarial generation of natural language. arXiv preprint arXiv:1705.10929.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "DistilBERT, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Weakly supervised object localization using size estimates",
"authors": [
{
"first": "Miaojing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Ferrari",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "105--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miaojing Shi and Vittorio Ferrari. 2016. Weakly super- vised object localization using size estimates. In Eu- ropean Conference on Computer Vision, pages 105- 121.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Baby steps: How \"less is more\" in unsupervised dependency parsing",
"authors": [
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "NIPS 2009 Workshop on Grammar Induction, Representation of Language and Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Ju- rafsky. 2009. Baby steps: How \"less is more\" in un- supervised dependency parsing. In NIPS 2009 Work- shop on Grammar Induction, Representation of Lan- guage and Language Learning.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adaptive attention span in transformers",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "331--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Edouard Grave, Piotr Bo- janowski, and Armand Joulin. 2019. Adaptive at- tention span in transformers. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 331-335.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Simple and effective curriculum pointer-generator networks for reading comprehension over long narratives",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Luu",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Minh",
"middle": [
"C"
],
"last": "Phan",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "S",
"middle": [
"C"
],
"last": "Hui",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Tay, Shuohang Wang, Anh Tuan Luu, Jie Fu, Minh C. Phan, Xingdi Yuan, J. Rao, S. C. Hui, and A. Zhang. 2019. Simple and effective curriculum pointer-generator networks for reading comprehen- sion over long narratives. ArXiv, abs/1905.10847.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Fedor",
"middle": [],
"last": "Moiseev",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5797--5808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Voita, David Talbot, Fedor Moiseev, Rico Sen- nrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lift- ing, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 5797-5808.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A survey on curriculum learning",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yudong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wenwu",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2021,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/TPAMI.2021.3069908"
]
},
"num": null,
"urls": [],
"raw_text": "Xin Wang, Yudong Chen, and Wenwu Zhu. 2021. A survey on curriculum learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Optimal subarchitecture extraction for bert",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "de Wynter",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"J"
],
"last": "Perry",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrian de Wynter and Daniel J. Perry. 2020. Op- timal subarchitecture extraction for bert. CoRR, abs/2010.10499.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Curriculum learning for natural language understanding",
"authors": [
{
"first": "Benfeng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Licheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhendong",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Quan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongtao",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Yongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6095--6104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. 2020. Curriculum learning for natural language under- standing. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6095-6104.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural In- formation Processing Systems, volume 32.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "The overview of the proposed CL method."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Comparison of our approach and the baseline on the validation losses. Left (a): The result of CL which increases the block-size with the maximum available batch-size. Right (b): The result of CL which increases the block-size with a fixed batch-size (16). Black dotted lines indicate the points where the block-size of training samples is changed, and red dotted lines indicate the convergence points."
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Comparison of training efficiency between the baseline and our curriculum model.",
"num": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "GLUE scores on the development sets. For all tasks, batch-size=64 and lr=5e-5, except QNLI (batch-size=16, lr=2e-5).",
"num": null
}
}
}
}