{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:11:38.515832Z" }, "title": "Diverse Lottery Tickets Boost Ensemble from a Single Pretrained Model", "authors": [ { "first": "Sosuke", "middle": [], "last": "Kobayashi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Shun", "middle": [], "last": "Kiyono", "suffix": "", "affiliation": {}, "email": "shun.kiyono@riken.jp" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "", "affiliation": {}, "email": "jun.suzuki@tohoku.ac.jp" }, { "first": "Kentaro", "middle": [], "last": "Inui", "suffix": "", "affiliation": {}, "email": "inui@tohoku.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Ensembling is a popular method used to improve performance as a last resort. However, ensembling multiple models finetuned from a single pretrained model has not been very effective; this could be due to the lack of diversity among ensemble members. This paper proposes Multi-Ticket Ensemble, which finetunes different subnetworks of a single pretrained model and ensembles them. We empirically demonstrated that winning-ticket subnetworks produced more diverse predictions than dense networks, and their ensemble outperformed the standard ensemble on some tasks.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Ensembling is a popular method used to improve performance as a last resort. However, ensembling multiple models finetuned from a single pretrained model has not been very effective; this could be due to the lack of diversity among ensemble members. This paper proposes Multi-Ticket Ensemble, which finetunes different subnetworks of a single pretrained model and ensembles them. We empirically demonstrated that winning-ticket subnetworks produced more diverse predictions than dense networks, and their ensemble outperformed the standard ensemble on some tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Ensembling (Levin et al., 1989; Domingos, 1997) has long been an easy and effective approach to improving model performance by averaging the outputs of multiple comparable but independent models. Allen-Zhu and Li (2020) explain that different models obtain different views for their judgments, and the ensemble uses these complementary views to make more robust decisions. A good ensemble requires diverse member models. However, how to encourage diversity without sacrificing the accuracy of each model is non-trivial (Liu and Yao, 1999; Kirillov et al., 2016; Rame and Cord, 2021).", "cite_spans": [ { "start": 11, "end": 31, "text": "(Levin et al., 1989;", "ref_id": "BIBREF17" }, { "start": 32, "end": 47, "text": "Domingos, 1997)", "ref_id": "BIBREF9" }, { "start": 505, "end": 524, "text": "(Liu and Yao, 1999;", "ref_id": "BIBREF20" }, { "start": 525, "end": 547, "text": "Kirillov et al., 2016;", "ref_id": "BIBREF15" }, { "start": 548, "end": 568, "text": "Rame and Cord, 2021)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The pretrain-then-finetune paradigm has become another best practice for achieving state-of-the-art performance on NLP tasks (Devlin et al., 2019). The cost of large-scale pretraining, however, is enormously high (Sharir et al., 2020); this often makes it difficult to independently pretrain multiple models. 
Therefore, most researchers and practitioners only use a single pretrained model, which is distributed by resource-rich organizations.", "cite_spans": [ { "start": 125, "end": 146, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 214, "end": 235, "text": "(Sharir et al., 2020)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This situation brings up a novel question for ensemble learning: Can we make an effective ensemble from only a single pretrained model? Although ensembling can be combined with the pretrain-then-finetune paradigm, an ensemble of models finetuned from a single pretrained model is much less effective than an ensemble of independently pretrained models on many tasks (Raffel et al., 2020). A na\u00efve ensemble offers only limited improvements, possibly due to the lack of diversity when finetuning from the same initial parameters. (Figure 1: When finetuning from a single pretrained model (left), the models are less diverse (center). If we instead finetune different sparse subnetworks, they become more diverse and make the ensemble effective (right).)", "cite_spans": [ { "start": 585, "end": 606, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a simple yet effective method called Multi-Ticket Ensemble, which ensembles finetuned winning-ticket subnetworks (Frankle and Carbin, 2019) in a single pretrained model. 
We empirically demonstrate that pruning a single pretrained model can produce diverse models, and that their ensemble can outperform the na\u00efve dense ensemble if winning-ticket subnetworks are found.", "cite_spans": [ { "start": 134, "end": 160, "text": "(Frankle and Carbin, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we discuss the most standard form of ensembling, which averages the outputs of multiple neural networks, each with the same architecture but different parameters. That is, letting f(x; \u03b8) be the output of a model with parameter vector \u03b8 given input x, the output of an ensemble is f_M(x) = \u2211_{\u03b8 \u2208 M} f(x; \u03b8) / |M|, where M = {\u03b8_1, ..., \u03b8_{|M|}} is the set of member parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Diversity in a Single Pretrained Model", "sec_num": "2" }, { "text": "As discussed, when constructing an ensemble f_M by finetuning from a single pretrained model multiple times with different random seeds {s_1, ..., s_{|M|}}, the boost in performance tends to be only marginal. In the case of BERT (Devlin et al., 2019) and its variants, three sources of diversity can be considered: random initialization of the task-specific layer, dataset shuffling for stochastic gradient descent (SGD), and dropout. However, empirically, such finetuned parameters tend not to differ greatly from the initial parameters, and they do not lead to diverse models (Radiya-Dixit and Wang, 2020). 
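The averaging that defines the ensemble output can be sketched in a few lines. This is a minimal illustration only; `ensemble_predict` and the toy member models are hypothetical names, not the authors' code.

```python
import numpy as np

def ensemble_predict(members, x):
    """Ensemble output f_M(x): average the member outputs f(x; theta)
    over all parameter sets theta in M. `members` holds one callable
    per finetuned model; the names here are illustrative."""
    outputs = [f(x) for f in members]
    return np.mean(outputs, axis=0)

# Toy members: three "models" returning class probabilities for input x.
members = [
    lambda x: np.array([0.9, 0.1]),
    lambda x: np.array([0.6, 0.4]),
    lambda x: np.array([0.3, 0.7]),
]
print(ensemble_predict(members, x=None))  # averaged class probabilities
```

The averaged prediction is more robust than any single member when the members err in different directions, which is exactly why diversity among members matters.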
Of course, adding significant noise to the parameters would increase diversity; however, it would also hurt accuracy.", "cite_spans": [ { "start": 229, "end": 250, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF7" }, { "start": 586, "end": 615, "text": "(Radiya-Dixit and Wang, 2020)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Diversity from Finetuning", "sec_num": "2.1" }, { "text": "To build models that ensure both accuracy and diversity, we focus on subnetworks in the pretrained model. Different subnetworks employ different subspaces of the pretrained knowledge (Radiya-Dixit and Wang, 2020; Zhao et al., 2020; Cao et al., 2021); this would help the subnetworks acquire different views, which can be a source of the desired diversity 1 . Also, in terms of accuracy, recent studies on the lottery ticket hypothesis (Frankle and Carbin, 2019) suggest that a dense network at initialization contains a subnetwork, called the winning ticket, whose accuracy becomes comparable to that of the dense one after the same training. Interestingly, the pretrained BERT also has a winning ticket for finetuning on downstream tasks (Chen et al., 2020). Thus, if we can find diverse winning tickets, they can be good ensemble members with the two desirable properties: diversity and accuracy.", "cite_spans": [ { "start": 180, "end": 209, "text": "(Radiya-Dixit and Wang, 2020;", "ref_id": "BIBREF24" }, { "start": 210, "end": 228, "text": "Zhao et al., 2020;", "ref_id": "BIBREF43" }, { "start": 229, "end": 246, "text": "Cao et al., 2021)", "ref_id": "BIBREF3" }, { "start": 432, "end": 458, "text": "(Frankle and Carbin, 2019)", "ref_id": "BIBREF11" }, { "start": 736, "end": 755, "text": "(Chen et al., 2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Diversity from Pruning", "sec_num": "2.2" }, { "text": "We propose a simple yet effective method, multi-ticket ensemble, which finetunes different subnetworks instead of dense networks. 
Because how to find subnetworks could be key, we explore three variants based on iterative magnitude pruning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subnetwork Exploration", "sec_num": "3" }, { "text": "Although this paper focuses on tasks using BERT, the same problem happens in other settings generally. In the results by Raffel et al. (2020), we found that this happened on almost all the tasks of GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019), SQuAD (Rajpurkar et al., 2016), summarization, and machine translation using the T5 models.", "cite_spans": [ { "start": 121, "end": 141, "text": "Raffel et al. (2020)", "ref_id": "BIBREF25" }, { "start": 203, "end": 222, "text": "(Wang et al., 2018)", "ref_id": "BIBREF35" }, { "start": 234, "end": 253, "text": "(Wang et al., 2019)", "ref_id": "BIBREF34" }, { "start": 261, "end": 285, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Subnetwork Exploration", "sec_num": "3" }, { "text": "Let FINE(\u03b8, s) denote the parameters obtained by finetuning \u03b8 with a training procedure that depends on a random seed s. Also, let \u03b8_s represent the parameters of the pretrained BERT and the task-specific layer, the latter randomly initialized with random seed s. After finetuning \u03b8_s to FINE(\u03b8_s, s), we identify and prune the parameters with the 10% lowest magnitudes in FINE(\u03b8_s, s). We also obtain the corresponding binary pruning mask m_{s,10%} \u2208 {0, 1}^{|\u03b8_s|}, where the surviving positions have 1 and the pruned positions 0. The pruning of parameters \u03b8 by a mask m can also be represented as \u03b8 \u2299 m, where \u2299 is the element-wise product. Next, we replay finetuning from \u03b8_s \u2299 m_{s,10%} and obtain FINE(\u03b8_s \u2299 m_{s,10%}, s) as well as the 20%-pruning mask m_{s,20%}. By repeating this iterative magnitude pruning, we obtain the parameters FINE(\u03b8_s \u2299 m_{s,P%}, s). In our experiments, we set P = 30, i.e., we evaluate an ensemble of 30%-pruned subnetworks, where M = {FINE(\u03b8_{s_1} \u2299 m_{s_1,30%}, s_1), ..., FINE(\u03b8_{s_{|M|}} \u2299 m_{s_{|M|},30%}, s_{|M|})}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subnetwork Exploration", "sec_num": "3" }, { "text": "We also did not prune the embedding layer, following Chen et al. (2020); Prasanna et al. (2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subnetwork Exploration", "sec_num": "3" }, { "text": "Figure 2: Overview of iterative magnitude pruning (Section 3.1). We can also use regularizers during finetuning to diversify pruning (Section 3.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Subnetwork Exploration", "sec_num": "3" }, { "text": "We employ iterative magnitude pruning (Frankle and Carbin, 2019) to find winning tickets for simplicity; more sophisticated options are left for future work. Here, we explain the algorithm (refer to the original paper for details). The algorithm explores a good pruning mask via rehearsals of finetuning. First, it completes a finetuning procedure on an initialized dense network and identifies the parameters with the 10% lowest magnitudes as the targets of pruning. Then, it makes the pruned subnetwork and resets its parameters to the originally initialized (sub-)parameters. This finetune-prune-reset process is repeated until reaching the desired pruning ratio. 
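The finetune-prune-reset loop can be sketched as follows. This is a minimal sketch under stated assumptions: `finetune` stands in for a full training run under a fixed mask, parameters are a flat vector, and each round prunes a further 10% of all weights; none of these names come from the authors' code.

```python
import numpy as np

def iterative_magnitude_pruning(theta0, finetune, target=0.30, step=0.10):
    """Sketch of iterative magnitude pruning (Frankle and Carbin, 2019).

    theta0: initial (pretrained) parameter vector.
    finetune(theta, mask): stand-in training routine; returns finetuned
    parameters. Each round finetunes the current subnetwork, prunes a
    further `step` fraction of weights with the lowest magnitudes, and
    rewinds the survivors to theta0 (the reset step).
    """
    mask = np.ones_like(theta0)
    rounds = int(round(target / step))
    for _ in range(rounds):
        theta = finetune(theta0 * mask, mask)   # rehearsal finetuning
        alive = np.flatnonzero(mask)            # currently surviving weights
        k = int(round(step * theta.size))       # prune step% of all weights
        cut = alive[np.argsort(np.abs(theta[alive]))[:k]]
        mask[cut] = 0.0                         # m_{s,10%}, m_{s,20%}, ...
    return mask                                 # final P%-pruning mask

# Toy check: with an identity "finetune", the lowest-magnitude weights go first.
theta0 = np.arange(1.0, 11.0)
mask = iterative_magnitude_pruning(theta0, lambda t, m: t)
print(mask)  # -> [0. 0. 0. 1. 1. 1. 1. 1. 1. 1.]
```

With target = 0.30 and step = 0.10 this performs three rehearsal rounds, matching the paper's P = 30 setting of cumulative 10%, 20%, and 30% masks.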
We used 30% as the pruning ratio.", "cite_spans": [ { "start": 38, "end": 64, "text": "(Frankle and Carbin, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Iterative Magnitude Pruning", "sec_num": "3.1" }, { "text": "As discussed in Section 2.1, finetuning with different random seeds does not lead to diverse parameters; therefore, iterative magnitude pruning with different seeds could also produce less diverse subnetworks. Thus, we also explore means of diversifying pruning patterns by enforcing different parameters to have lower magnitudes. Motivated by this, we experiment with a simple approach: applying an L1 regularizer (i.e., magnitude decay) to different parameters selectively, depending on the random seed. Specifically, we explore two policies to determine which parameters are decayed and how strongly, i.e., the element-wise coefficients of the L1 regularizer, l_s \u2208 R_{\u22650}^{|\u03b8|}. During finetuning (for pruning), we add a regularization term \u03c4 ||\u03b8_s \u2299 l_s||_1 with a positive scalar coefficient \u03c4 to the task loss (e.g., cross entropy for classification), where \u2299 is the element-wise product. This softly enforces different parameters to have lower magnitudes across the set of random seeds and could thus lead different parameters to be pruned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning with Regularizer", "sec_num": "3.2" }, { "text": "Active Masking To maximize the diversity of the surviving parameters of member models, it is necessary to prune the parameters that survived under random seed s_1 when building a model with the next random seed s_2. Thus, during finetuning with seed s_2, we apply the L1 regularizer to the parameters that survived under s_1. Likewise, with the following seeds s_3, s_4, ..., s_i, ..., s_{|M|}, we cumulatively use the average of the surviving masks as the regularizer coefficient mask. 
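The active-masking coefficients can be sketched directly from this description: the coefficient mask for seed s_i averages the surviving masks of the earlier seeds, and the penalty decays exactly those parameters. `active_masking_coeffs`, `l1_penalty`, and the `tau` value are illustrative names and settings, not the authors' implementation.

```python
import numpy as np

def active_masking_coeffs(prev_masks, n_params):
    """Coefficient mask l_{s_i} for the selective L1 regularizer (sketch).

    Parameters that survived pruning under earlier seeds are decayed when
    finetuning with the next seed, pushing each member to prune a different
    part of the network. l_{s_i} is the average of the surviving binary
    masks m_{s_j} for j < i; the first seed gets no decay.
    """
    if not prev_masks:
        return np.zeros(n_params)
    return np.mean(prev_masks, axis=0)

def l1_penalty(theta, coeffs, tau=1e-4):
    # Regularization term tau * ||theta (elementwise*) l_s||_1 added to the task loss.
    return tau * np.sum(np.abs(theta * coeffs))

m1 = np.array([1.0, 1.0, 0.0, 0.0])  # survivors under seed s_1
m2 = np.array([1.0, 0.0, 1.0, 0.0])  # survivors under seed s_2
print(active_masking_coeffs([m1, m2], 4).tolist())  # -> [1.0, 0.5, 0.5, 0.0]
```

A parameter that survived under every earlier seed receives the full decay (coefficient 1), while a parameter no earlier member kept is left untouched (coefficient 0), which softly steers the next round of magnitude pruning toward a fresh subnetwork.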
Let m_{s_j} \u2208 {0, 1}^{|\u03b8|} be the pruning mask indicating the surviving parameters from seed s_j; the coefficient mask used with seed s_i is then the cumulative average of the earlier surviving masks, l_{s_i} = \u2211_{j<i} m_{s_j} / (i - 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning with Regularizer", "sec_num": "3.2" } ], "ref_entries": { "TABREF1": { "type_str": "table", "num": null, "html": null, "text": "For diversifying models more, we propose to", "content": "" }, "TABREF2": { "type_str": "table", "num": null, "html": null, "text": "", "content": "
" }, "TABREF3": { "type_str": "table", "num": null, "html": null, "text": "MRPC (single / ens. / diff.) and STS-B (single / ens. / diff.): 83.48 / 84.34 / +0.86, 88.35 / 89.04 / +0.69; (BAGGING) 82.87 / 84.19 / +1.32, 88.17 / 88.84 / +0.68; BASE-LT 83.84 / 84.98 / +1.14, 88.37 / 89.16 / +0.79; ACTIVE-LT 83.22 / 84.60 / +1.38, 88.39 / 89.32 / +0.94; RANDOM-LT 83.53 / 85.05 / +1.52, 88.49 / 89.35 / +0.86", "content": "" }, "TABREF5": { "type_str": "table", "num": null, "html": null, "text": "MRPC (single / ens. / diff.) and STS-B (single / ens. / diff.): BASELINE 87.77 / 88.47 / +0.70, 89.52 / 90.00 / +0.48; (BAGGING) 87.64 / 88.12 / +0.49, 89.34 / 89.91 / +0.54; BASE-LT 87.72 / 88.25 / +0.53, 89.71 / 90.07 / +0.36; ACTIVE-LT 87.39 / 88.51 / +1.12, 88.46 / 89.50 / +1.04; RANDOM-LT 87.86 / 89.26 / +1.40, 88.41 / 89.39 / +0.98", "content": "
" } } } }