GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures
1 INTRODUCTION

Deep neural networks have emerged as the leading solution for end-to-end language processing (Hochreiter & Schmidhuber, 1997; Sutskever et al., 2014; Chung et al., 2014). Recently, the Transformer model based on the self-attention mechanism (Vaswani et al., 2017) has become the most promising architecture for language applications, especially when used as a pre-trained foundation model (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020; Bommasani et al., 2021). Attention-based models are also increasingly showing promising results for established applications in domains other than natural language processing (Dosovitskiy et al., 2020). Complementary to the Transformer's improved ability to model long-range dependencies in sequences are its superior potential to scale to larger sizes (Kaplan et al., 2020) and its suitability for execution on existing accelerators. This makes these models favoured over traditional recurrent language models. Given the increased computational demand of these models, there is a growing and pressing interest in developing more efficient architectures (Strubell et al., 2019). Some previous proposals were able to reduce the computational burden of the Transformer while improving task performance, but often at the cost of slower execution, as discussed further in Section 4. We demonstrate a set of modifications to the structure of the Transformer layer that improve FLOP utilization by the encoder stack. The proposed GroupBERT model relies on grouped matrix multiplications and convolutions, and delivers a more efficient version of BERT, superior in both task performance and computation. GroupBERT utilizes grouped operations in a novel way which makes the model more FLOP efficient. However, grouped operations are characterised by a reduced computational load for a given memory access (Masters et al., 2021), i.e., lower arithmetic intensity.
This property would make them undesirable for traditional accelerators, which rely on large dense computations and reduced memory access. For this investigation we use IPU hardware (Jia et al., 2019), since its architecture uses on-chip SRAM for model execution, making it suitable for exploring the potential of these efficient building blocks. We achieve a further task-performance boost by increasing the depth of the model: GroupBERT extends each Transformer layer to contain four modules: one multi-head attention (MHA) module, one grouped convolution module, and two grouped feed-forward modules (GFFN). The MHA and grouped convolution modules process token information along the sequence dimension, and each is followed by a general-computation GFFN module. While there are twice as many modules in the proposed GroupBERT layer, the overall increase in computation is modest because we utilize sparse grouped operations, for a total FLOP increase of about 60%. Not only does GroupBERT deliver better performance per FLOP, but it also executes faster as measured in total time-to-train. By employing both attention and convolution, the model has components dedicated to both short- and long-range interactions, making more efficient use of the more expensive attention mechanism. We also utilize the parameters of GroupBERT more efficiently during training, by discarding dropout for pre-training on a large corpus of text and by improving stability so that higher learning rates can be used. With all these innovations, GroupBERT Base is only slightly larger than BERT Base, yet it achieves better validation MLM loss than BERT Large using less than half of its FLOPs.

2 ARCHITECTURE

In this work, we propose an efficient modification of the Transformer encoder layer called GroupBERT. The original Transformer layer consists of two modules: multi-head attention (MHA) and a feed-forward network (FFN).
Each of these modules also includes dropout, a shortcut connection, and layer normalization (Srivastava et al., 2014; He et al., 2016; Ba et al., 2016). GroupBERT includes four modules in every layer, as illustrated in Figure 1. We add a convolution module in sequence with the MHA to efficiently model local interactions between tokens and to allow the attention mechanism to focus on long-range interactions. We then complement every sequence-processing block with a dedicated fully-connected module. For better efficiency, we introduce grouped projections to the FLOP-intensive FFN module, making the layer structure more FLOP efficient.

2.1 GROUPED FEED-FORWARD MODULES

The FFN module plays a crucial part in the unparalleled task performance of Transformers (Dong et al., 2021; Lee-Thorp et al., 2021). Although it is an essential complement to the sequence-processing modules, it introduces a significant computational burden, as two thirds of the FLOPs are concentrated in the FFN module. To make this integral part of the Transformer more lightweight, we utilize structured sparsity in the form of sparsely grouped matrix multiplication. Consider a dense matrix multiplication of matrices H ∈ R^{a×b} and W ∈ R^{b×c}:

(HW)_{i,j} ≜ Σ_{n=1}^{b} h_{i,n} · w_{n,j}    (1)

A sparsely grouped version of W corresponds to a block-diagonal matrix W^{(G)} with G groups, a matrix of the same dimensions as W and a sparsity ratio of 1/G. An equivalent alternative formulation of a block-diagonal matrix is a grouped 1-dimensional 1×1 convolution (Iandola et al., 2020). This reduces the number of stored parameters, and can be implemented efficiently without zero-multiplications as:

(HW^{(G)})_{i,j} ≜ Σ_{n=1}^{b/G} h_{i, n + (b/G)·⌊(j−1)/(c/G)⌋} · w_{n + (b/G)·⌊(j−1)/(c/G)⌋, j}    (2)

We propose a novel scheme for utilizing grouped transformations in a Transformer FFN module.
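To make the grouped multiplication concrete, the following NumPy sketch (dimension values are illustrative, not taken from the paper) checks that multiplying group-wise by G small weight blocks is exactly equivalent to multiplying by one block-diagonal matrix, while storing only 1/G of the parameters:

```python
import numpy as np

# Illustrative sizes: a rows of H, b input features, c output features, G groups.
a, b, c, G = 2, 8, 12, 4
rng = np.random.default_rng(0)
H = rng.standard_normal((a, b))
W_blocks = rng.standard_normal((G, b // G, c // G))  # one small (b/G x c/G) block per group

# Grouped form: each slice of H's feature axis is multiplied by its own block,
# and the per-group outputs are concatenated. No zeros are ever multiplied.
H_groups = H.reshape(a, G, b // G)
out_grouped = np.concatenate(
    [H_groups[:, g] @ W_blocks[g] for g in range(G)], axis=1
)

# Equivalent dense form: embed the blocks in a block-diagonal (b x c) matrix.
W_dense = np.zeros((b, c))
for g in range(G):
    W_dense[g * (b // G):(g + 1) * (b // G),
            g * (c // G):(g + 1) * (c // G)] = W_blocks[g]

assert np.allclose(out_grouped, H @ W_dense)  # identical result
assert W_blocks.size == W_dense.size // G     # 1/G of the stored parameters
```

The same computation can also be expressed as a 1×1 grouped convolution over the sequence, which is how the text describes the equivalent formulation.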
Our first finding is that parameters in the expanding FFN matrix contribute more to task performance, and sparsity is particularly damaging for these fan-out matrices. The second matrix is less sensitive to parameter reduction due to the sparse input and the reduced projection dimension. Therefore, introducing sparsity in the second matrix results in a Pareto-efficient balance between compute and task performance. Our second finding is that the locality constraint that grouped projections impose on the hidden dimension is detrimental to the model. We remove this constraint by using an output linear projection. This is similar to the output projection matrix used in the MHA block, where each attention head acts only on a slice of the hidden dimension. We find the optimal number of groups to be G = 4, bringing the parameter count of the GFFN to 75% of its dense counterpart, while also delivering more Pareto-efficient task performance (Fig 6). Alternative schemes for applying grouped transformations in a Transformer (Iandola et al., 2020; Mehta et al., 2021; Jiang et al., 2020) (Figure 2) fail to outperform the baseline in the setting of BERT pre-training (see Section 3.3).

2.2 CONVOLUTION BLOCK

Sequential locality plays an important role in contextualizing tokens in language models. At the same time, long-range interactions have proven to be vital for state-of-the-art performance. Transformers inherently support long-range content-based interactions via self-attention and usually incorporate a form of positional encoding, allowing attention to also capture position-based interactions (Dai et al., 2019). Although this gives self-attention strong representational power, a convolution is a more efficient implementation of strictly local, position-based fusion. For this reason we adopt a dedicated convolutional module to improve overall efficiency. The design of our convolution module is similar to that of Gulati et al.
(2020), in which convolutions were introduced into a speech-recognition Transformer. We apply a gate consisting of a pointwise convolution followed by a Gated Linear Unit (GLU), which has been beneficial in language applications (Dauphin et al., 2017; Wu et al., 2019a; 2020). Unlike Gulati et al. (2020), we use grouped convolutions in place of depthwise convolutions to add representational capacity. We find that the best trade-off between task performance and computational cost is achieved by using a grouped convolution with group size 16 and kernel size 7, computed over the sequence dimension. The module also includes an additional layer normalization and a Swish activation (Ramachandran et al., 2017). With this module included, fewer attention heads show a strong locality preference, since such interactions are readily captured by convolutions. This effect is visible in the attention maps of Figure 3, which show weaker locality in the model that includes convolutions. To measure this effect quantitatively, we calculate the entropy across target positions for each head and source position. We then average, and normalize by the maximum possible value (see Appendix C). For this measure, zero means that every head attends to a single position exclusively, while one means that every head is position-agnostic, although there could still be a joint position and content term. BERT Base has an average entropy ratio of 0.75 and BERT Base + Conv has 0.92, indicating a shift of positional fusion work from attention to convolution.

2.3 EFFICIENT PARAMETER UTILIZATION

In line with earlier research on the Transformer architecture (Wang et al., 2019; Liu et al., 2020; Xiong et al., 2020), we move layer normalization (Ba et al., 2016) from its position after the module's residual ("postnorm") to the first position within each residual block ("prenorm").
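The entropy-ratio measure can be sketched as follows. This is a simplified illustration consistent with the description above; the exact normalization details are in the paper's Appendix C and may differ:

```python
import numpy as np

def entropy_ratio(attn):
    """Mean attention entropy over heads and source positions, normalized by
    the maximum possible entropy log(seq_len). 0 means every head attends to
    a single position; 1 means every head is position-agnostic.
    attn: (heads, source, target) array of attention probabilities."""
    eps = 1e-12
    ent = -(attn * np.log(attn + eps)).sum(axis=-1)  # entropy per head/source row
    return float(ent.mean() / np.log(attn.shape[-1]))

heads, seq = 2, 16
# A perfectly local head (each position attends only to itself) scores ~0 ...
local = np.tile(np.eye(seq), (heads, 1, 1))
# ... while a position-agnostic (uniform) head scores ~1.
uniform = np.full((heads, seq, seq), 1.0 / seq)
assert entropy_ratio(local) < 0.01
assert abs(entropy_ratio(uniform) - 1.0) < 1e-6
```

Under this measure, the reported shift from 0.75 (BERT Base) to 0.92 (BERT Base + Conv) means attention rows become closer to uniform once the convolution handles local, position-based fusion.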
While this modification does not directly improve task performance, it stabilizes training and allows the use of a larger learning rate that would otherwise cause the postnorm model to diverge. We increase the learning rate by a factor of 4× compared to the postnorm baseline. Similarly to Lan et al. (2020), we find the use of dropout to be detrimental in the pre-training stage. Due to the substantial size of the dataset, this kind of regularization is not required. While removing dropout yields improvements to the pre-training loss, this does not apply to downstream tasks that rely on smaller datasets. Consequently, we include dropout only when fine-tuning on supervised tasks, which have smaller datasets than the pre-training corpus.

3 RESULTS

To evaluate the architecture modifications, we chose BERT (Devlin et al., 2018) pre-training and fine-tuning. The large dataset and challenging training objective mean that task performance improves consistently with model size (Lan et al., 2020) and the risk of over-fitting is reduced. This makes it possible to clearly distinguish architecture modifications that benefit efficiency. Our evaluation of GroupBERT for language representation learning shows that the architecture is:
1. Training FLOP-efficient across a range of model sizes (Sections 3.3, 3.4).
2. Training time-efficient across a range of compute budgets (Sections 3.3, 3.4).
3. Improved by each constituent part (Section 3.5).

3.1 EXPERIMENTS

Each experiment consists of two pre-training phases and a fine-tuning phase consisting of multiple training runs, each started from the pre-trained model. All phases use the AdamW optimiser (Loshchilov & Hutter, 2019), with β1 = 0.9, β2 = 0.999, ε = 10⁻⁶. The learning rate follows a linear warm-up and decay schedule, whereby the warm-up phase lasts for min(10⁴, 0.1 · total steps) steps, and the peak learning rate depends on the training phase and model size.
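A minimal sketch of the schedule just described, assuming linear decay to zero after the warm-up (the peak learning-rate values themselves are phase- and model-size-dependent):

```python
def learning_rate(step, total_steps, peak_lr):
    """Linear warm-up followed by linear decay. The warm-up lasts
    min(10_000, 0.1 * total_steps) steps, as stated in the text."""
    warmup = min(10_000, int(0.1 * total_steps))
    if step < warmup:
        return peak_lr * step / warmup                          # ramp up
    return peak_lr * (total_steps - step) / (total_steps - warmup)  # decay to 0

# With ~800k phase-one steps the warm-up cap of 10k steps applies:
assert learning_rate(5_000, 800_000, 1e-3) == 0.5e-3   # halfway through warm-up
assert learning_rate(10_000, 800_000, 1e-3) == 1e-3    # peak at end of warm-up
assert learning_rate(800_000, 800_000, 1e-3) == 0.0    # decayed to zero
```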
The model is defined over a vocabulary of 30,522 WordPiece tokens (Wu et al., 2016). Weights are initialized using a truncated normal distribution with standard deviation 0.02. For all experiments we use 2 Graphcore M2000 IPU systems. Pre-training phase one optimises the Masked Language Model (MLM) and Next-Sentence Prediction (NSP) loss for corrupted sentence pairs. Masked and padded sequences of length 128 are grouped into batches of approximately 512 sequences, with slight variations depending on the model size (see Appendix A). The model is trained for 10 epochs of Wikipedia + BookCorpus (Zhu et al., 2015), corresponding to approximately 8 × 10⁵ optimisation steps. For all experiments with GroupBERT, the baseline BERT, and all other models tested, the learning rate is set to the value that produces the best validation loss. Pre-training phase two uses sequence length 384, 5 epochs, and approximately 2 × 10⁵ optimisation steps. SQuAD 1.1 fine-tuning (Rajpurkar et al., 2016) adds a token-span prediction layer, and the whole model is fine-tuned to perform extractive question answering. Training uses a target batch size of 32, and we train for 2-3 epochs with various learning rates (Appendix B), reporting results for the best hyperparameter setting. We report F1 and Exact Match scores, which show higher variance than MLM loss values. Because of this larger variance, we fine-tune each pre-training checkpoint five times using different seeds for every hyperparameter setting. Fine-tuning has been shown to be quite a brittle process in recent studies (Dodge et al., 2020; Zhang et al., 2021; Mosbach et al., 2021). In particular, many instabilities are caused by fine-tuning without bias correction, an implementation choice adopted from the original experimental setup of BERT. This omission in the optimizer was observed to cause a collapse of the training process.
For this reason, we added a bias-correction term to the AdamW implementation used for fine-tuning.
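The bias correction in question is the standard Adam correction (Kingma & Ba) that rescales the exponential moment estimates, which are otherwise biased towards zero in early steps; a small sketch:

```python
def bias_corrected(m, v, t, beta1=0.9, beta2=0.999):
    """Standard Adam bias correction, the term omitted in the original BERT
    fine-tuning setup: rescales the first and second moment estimates so they
    are unbiased early in training (t is the 1-indexed step count)."""
    return m / (1 - beta1 ** t), v / (1 - beta2 ** t)

# After one step, the raw moments of a gradient g are shrunk by (1 - beta):
g = 0.5
m, v = (1 - 0.9) * g, (1 - 0.999) * g ** 2
m_hat, v_hat = bias_corrected(m, v, t=1)
assert abs(m_hat - g) < 1e-12 and abs(v_hat - g ** 2) < 1e-12
```

Without this rescaling, the effective update magnitudes in the first few hundred steps are badly miscalibrated, which is consistent with the training collapses reported above.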
The paper proposes a set of modifications to the structure of the Transformer layer that improve FLOP utilization. The proposed GroupBERT relies on grouped matrix multiplications and convolutions, and is more efficient and superior in both task performance and computation. Each layer of the network contains four modules: a multi-head attention module, a grouped convolution module, and two grouped feed-forward modules (GFFN). Even with twice as many modules, the total FLOP increase is only about 60% because sparse grouped operations are used.
SP:dba8923c9ede403fef8ced3bd8409b54ff3c229a
Pre-trained language models have become a critical component in NLP applications. However, because of the computation required for large-scale pre-training, such models are usually very expensive to train. In this paper, the authors therefore design GroupBERT, which introduces a set of modifications to the structure of the Transformer network. More specifically, they first apply grouped transformations to reduce the computational cost of dense feed-forward layers, and then use grouped convolution to enhance the learning of local and global interactions. Experimental results demonstrate the effectiveness of the proposed method.
Guiding Transformers to Process in Steps
1 INTRODUCTION

Daniel Kahneman has pointed out that there is a fundamental difference in how humans solve the following two tasks (Kahneman, 2011): (1) complete the phrase "bread and ..."; (2) complete the equation "17 × 24 = ...". The answer to (1) comes to mind instantly, with no mental effort, and is the result of a computation that one is not consciously aware of and could not explain. The answer to (2) requires time and effort to come up with, and is the result of a sequence of computations that one consciously carries out. The former mode of cognition is what Kahneman calls System 1, the latter System 2. We currently have separate tools for solving each of these two kinds of problems on a computer, but it is not yet clear how to build a single system capable of both modes of cognition simultaneously. Many tasks that exercise System 2 in humans involve executing some fully-specified algorithm and thus are quite straightforward to solve using conventional computer programs. However, classical programming is not applicable to System 1 tasks, as these typically correspond to functions that we do not know how to implement. Instead, System 1 tasks are usually tackled by approximating the target functions from observed input-output pairs. Whether such a learning-based approach could be successfully extended to System 2 tasks is an open question, and one that we aim to explore in this paper. We argue that the classical System 1 paradigm of training neural networks to approximate functions from inputs and outputs corresponding to a given task is not a scalable approach for tackling tasks from the System 2 domain. We claim that, given only inputs and outputs, neural networks will not receive enough supervision to converge to a solution when trained to perform algorithmic tasks that humans solve via System 2.
To circumvent this, we propose to supplement the training data with intermediate results that are helpful to compute before arriving at the final output. By modifying the training data in this way, we are effectively changing the training objective: instead of having to learn any arbitrary way of mapping inputs to outputs, the neural networks are now trained to compute the outputs from the inputs through a particular sequence of intermediate steps. We hypothesize that such additional guidance might be necessary in order for neural networks to learn algorithmic tasks, and we empirically evaluate this hypothesis. Much of the existing work on applying neural networks to System 2 tasks has focused on making neural architectures more aligned with the execution of algorithms. While fine-tuning the model architectures may ultimately lead to superior System 2 capabilities (after all, the architecture of a calculator allows it to perform arithmetic at a super-human level), we instead explore the challenge of extending the capabilities of existing state-of-the-art System 1 models, namely the Transformer, towards the System 2 domain. In particular, we take inspiration from the human ability to solve algorithmic tasks by writing down symbols on paper, and propose to formulate such tasks as sequence-to-sequence problems where the target sequence includes both the outputs and all the intermediate symbols that a human would write (or more). That is, instead of modifying the models, we experiment with modifying the data.
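As a hypothetical illustration of such data modification for binary addition, one can augment each target string with the digit and carry produced at every position, least-significant first, before the final answer. The exact target format used in the experiments may differ; this only shows the idea:

```python
def target_with_steps(a: str, b: str) -> str:
    """Build a seq2seq target for binary addition that exposes the
    intermediate work: for each bit position (least-significant first) we
    emit the result digit and outgoing carry as 'DcC', then the final sum.
    Illustrative format only, not necessarily the one used in the paper."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out_digits, steps = 0, [], []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        carry, digit = s // 2, s % 2
        out_digits.append(str(digit))
        steps.append(f"{digit}c{carry}")  # intermediate: digit and carry
    if carry:
        out_digits.append("1")
    answer = "".join(reversed(out_digits))
    return " ".join(steps) + " = " + answer

# 11 + 01 (3 + 1): both positions produce digit 0 with carry 1, then 100.
assert target_with_steps("11", "01") == "0c1 0c1 = 100"
```

Training the model to emit the full augmented string, rather than only the answer after "=", is what replaces "learn any mapping from inputs to outputs" with "compute the output through these particular steps".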
Our main contributions can be summarized as follows : • We show that it is possible to hand-code a 1-layer 1-head Transformer to compute any finite function if enough intermediate steps are used ; • We show that a 1-layer 1-head Transformer can be trained to perform binary addition if the target sequences include intermediate results , whereas it fails to learn the task when predicting the outputs directly from the inputs ; and • We show that a Frozen Pretrained Transformer can be trained to perform binary addition if the target sequences include intermediate results , whereas it fails to learn the task when predicting the outputs directly from the inputs . 2 RELATED WORK . The need to bridge the gap between System 1 and System 2 capabilities in current deep learning models has recently been emphasized by Yoshua Bengio ( Bengio , 2019 ; Bengio et al. , 2021 ) . The work of Bengio and colleagues addresses higher-level cognition in its broadest sense , seeking to model and incorporate into current systems concepts such as consciousness ( Bengio , 2017 ) , causality ( Bengio et al. , 2020 ) , agency ( Thomas et al. , 2018 ) , and global workspace ( Goyal et al. , 2021 ) . Goyal & Bengio ( 2020 ) question the paradigm of classical statistical learning and propose to shift from training models on curated datasets towards training agents in complex non-stationary environments . In contrast to that , our approach lies within a conventional learning framework ( namely , sequence-to-sequence modeling with Transformers ) , and addresses System 2 in a slightly narrower sense ( namely , as algorithmic execution ) . A considerable fraction of recent work in applying neural networks to algorithmic tasks has been concerned with modifying or augmenting neural architectures to make them more suitable for algorithmic execution . 
One very common approach involves trying to bring components from classical computer architecture into the neural setting : Neural Turing Machines give neural networks access to dynamic external memory ( Graves et al. , 2014 ; 2016 ) , Stack-augmented RNNs augment the networks with an infinite pushdown stack ( Joulin & Mikolov , 2015 ) , Neural Random Access Machines use registers and introduce the ability to manipulate and dereference pointers ( Kurach et al. , 2016 ) , while Neural Arithmetic Logic Modules represent neural versions of the ALU ( Trask et al. , 2018 ; Madsen & Johansen , 2020 ) . A parallel but closely related line of research frames these new kinds of augmented architectures as neural controllers endowed with access to external interfaces and applies reinforcement learning techniques to train them to solve algorithmic tasks by interacting with the interfaces ( Zaremba & Sutskever , 2015 ; Zaremba et al. , 2016 ) . The approach of using a variable amount of computation per input , known as Adaptive Computation Time ( ACT ) , was introduced by Graves ( 2016 ) . Universal Transformers ( Dehghani et al. , 2019 ) integrate ACT into the Transformer architecture , allowing each output symbol to be the result of a variable number of applications of a single Transformer layer . While these works are , like ours , motivated by the apparent necessity of intermediate processing in certain tasks , they use learned ponder time and learned ponder content , whereas our work uses given ponder time and given ponder content . The main advantage of learned ponder time and content is that the model is free to discover and learn to perform the intermediate computations that it finds most useful . This also means that the researcher does not need to know the correct intermediate steps to train the model and can thus tackle a potentially larger class of problems . 
The main disadvantage of learned ponder time and content is that the amount of supervision per forward pass decreases as the model uses more intermediate steps , and the signal can quickly become too weak for the model to train at all . It is also worth noting that intermediate steps are typically given when humans are taught to perform algorithmic tasks ( rather than being asked to infer intermediate computations by just looking at input-output pairs ) , which might be an important argument for given ponder content . The idea of providing supervision beyond input-output examples has already appeared in several projects , although , in our view , it still remains largely underexplored . Reed & de Freitas ( 2016 ) train Neural Programmer-Interpreters by providing supervision on the correct action sequences ( execution traces ) of the recurrent controller , Mirman et al . ( 2018 ) train differentiable Neural Computational Machines with extra supervision on the movements of the read-write heads , and Mirman et al . ( 2018 ) train Neural Execution Engines with extra supervision on the attention masks . While the added training signal does lead to better sample efficiency and improved generalization , a major limitation of all of these approaches , as acknowledged by Mirman et al . ( 2018 ) , is that the extra supervision needs to be highly specialized as it applies to very specific components of each architecture . Our approach of supplementing the target sequences with intermediate results is completely architecture-independent and thus does not share this limitation . Veličković et al . ( 2020 ) use extra supervision at the data level as well , though their work is focused specifically on tasks involving graph-structured inputs and is thus based on neural network architectures that are tailored for processing graphs . 
In contrast to that , we adopt a general sequence-to-sequence modeling framework , with the intention to imitate a human executing an algorithm on a piece of paper . Our main goal is to explore whether and how powerful existing System 1 models could be used in the System 2 domain , rather than to find a neural architecture that would be most suitable for executing algorithms . We therefore use vanilla decoder-only Transformers in our experiments . 3 MOTIVATION . The distinguishing characteristic of System 1 tasks is that they can be solved instantly and unconsciously , in something akin to a single “ forward pass ” through a human neural network . It thus seems plausible that all input-output mappings corresponding to System 1 tasks are in principle computable via a single forward pass through a sufficiently large artificial neural network . Recent success in deep learning shows that training bigger models on more data indeed makes it possible to solve more and more System 1 tasks , and it is not unreasonable to expect this trend to continue . When it comes to solving System 2 tasks , however , scaling up appears to be a dead-end . Consider the problem of completing the sentence “ The first digit of the n-th Fibonacci number is . . . ” , where n is replaced by some positive integer . Since a single forward pass through a neural network ( no matter how large ) involves a constant amount of computation , there will be some number N ∈ ℕ such that for all n > N the correct output is not computable . It follows that the only viable approach for solving these kinds of algorithmic tasks is to arrive at the output through intermediate steps . The main value of introducing intermediate steps is that it gives control over the level of model expressivity required to implement a solution to a particular task . 
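The scaling argument can be made concrete with a short sketch (our own illustration, not from the paper): computing the first digit of the n-th Fibonacci number takes a number of addition steps that grows with n, whereas a single forward pass through a fixed network performs a constant amount of computation, so for large enough n the answer cannot be produced in one pass.

```python
def first_digit_fib(n: int) -> int:
    """First (most significant) digit of the n-th Fibonacci number,
    computed through n - 1 sequential addition steps."""
    a, b = 1, 1                # F(1) = F(2) = 1
    for _ in range(n - 1):     # one intermediate step per iteration
        a, b = b, a + b        # after k iterations, a = F(k + 1)
    return int(str(a)[0])

print(first_digit_fib(10))  # F(10) = 55 -> 5
```

No fixed-depth circuit covers all n, but a model that emits the intermediate sums one by one faces only a single addition per step.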
No matter how many atomic operations separate the outputs from the inputs in a given algorithmic problem , performing those exact operations in sequence would make each execution step only as complex as a single atomic operation . This is arguably the main reason why a human brain can “ implement ” something like integer multiplication even though it does not contain a circuit for multiplying arbitrarily large numbers directly — the key is that a complex problem can be decomposed into simpler ones until each sub-problem becomes solvable “ atomically ” via System 1 . By the same token , a neural network that is not expressive enough to compute some output through 0 intermediate steps may be expressive enough to compute the same output through l intermediate steps for some l > 0 , since each of the l steps would involve computing a simpler function . Since many System 2 tasks are algorithmic , their outputs can be made arbitrarily distant from the inputs ( in terms of the number of atomic operations separating the two ) , which implies that step-by-step processing is the only scalable approach for solving them . The necessity of intermediate processing does not by itself imply the amount of guidance that the neural networks should receive during training . The exact intermediate computations can either remain unspecified and be inferred by the model , or be fixed and provided as part of the training data . Although there are pros and cons to both alternatives ( as we have outlined in section 2 ) , we see the latter as more promising and choose to include the intermediate results in the target sequences used during training . 
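The decomposition argument can be sketched with the paper's opening example, 17 × 24: a system whose only atomic operation is addition can still multiply, provided it is allowed enough intermediate steps (an illustrative sketch of the argument, not a construction from the paper):

```python
def multiply_by_addition(x: int, y: int) -> int:
    """Reduce multiplication to y repeated additions: each intermediate
    step is only as complex as a single atomic add."""
    acc = 0
    for _ in range(y):   # one 'atomic' System 1 step per iteration
        acc += x
    return acc

print(multiply_by_addition(17, 24))  # -> 408
```

Each iteration is a sub-problem solvable "atomically"; the complexity of the overall task lives entirely in the length of the step sequence, not in any single step.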
Our position is based on the following three observations : first , it is natural to provide complete supervision over the intermediate computations when teaching humans to perform algorithmic tasks ; second , learned intermediates create the vanishing supervision problem ( the longer the ponder time , the less training signal is received per forward pass ) ; and third , for all algorithmic tasks the exact intermediate results are fully known anyway . While manually increasing the amount of supervision goes against the tradition of minimizing the reliance on expert knowledge that arguably led to most of the recent achievements in the System 1 domain , System 2 tasks represent a structurally different class of problems for which a different set of constraints may apply . The arguments laid out above have the following important implications : • Expressiveness : a neural network implementing only some very basic atomic operations should in principle be capable of expressing arbitrarily complex functions as long as enough intermediate steps are used ; • Training : a neural network that fails to learn some algorithmic task when trained without intermediate processing may be able to learn the same task if enough intermediate steps are used ; and • Composability : a neural network that is pretrained on some general-purpose task and does not contain a circuit implementing some algorithmic computation directly may nevertheless be able to solve the same task by computing the output through simpler operations which it does contain circuits for . We establish the first of these implications in Section 4 and empirically validate the latter two in Sections 5 and 6 . Even though demonstrating a single example where a statement holds does not prove it in general , our empirical results confirm that the hypothesized phenomena do occur in the particular contexts we considered in this work and thus serve as evidence in favor of the latter two implications .
The paper’s main claim is that sequential reasoning is necessary for System 2 tasks, and it explores binary addition as an example: a 1-layer 1-head transformer fails to compute the sum of two numbers directly (although it is theoretically shown to be able to compute any finite function), but succeeds when supervised with intermediate results. The authors then show that a frozen pretrained transformer likewise fails without intermediate supervision on a binary addition task. Update: I do appreciate the authors’ revision, but since my comments were neither responded to nor addressed by the revision, I unfortunately keep my score.
Guiding Transformers to Process in Steps
This paper proposes that, in order to enable deep learning systems to carry out processes akin to 'system 2' (a term from cognitive psychology, referring to deliberate, step-by-step reasoning processes), these systems should be trained through supervised learning to perform complex computations via a series of simpler, intermediate computations (i.e. by providing the results of those intermediate computations as a source of supervision). Experiments are focused on the domain of binary addition, where it is shown that 1-layer transformers are capable of performing the task for very long bit strings when allowed to do so through a series of intermediate operations, but are limited to very short bit strings when forced to perform the task in a single forward pass.
SP:a3fbd264c8e8e743f49ef09c17be30de7084862b
Guiding Transformers to Process in Steps
1 INTRODUCTION . Daniel Kahneman has pointed out that there is a fundamental difference in how humans solve the following two tasks ( Kahneman , 2011 ) : ( 1 ) complete the phrase “ bread and . . . ” ; ( 2 ) complete the equation “ 17× 24 = . . . ” . The answer to ( 1 ) comes to mind instantly , with no mental effort , and is a result of a computation that one is not consciously aware of and could not explain . The answer to ( 2 ) requires time and effort to come up with , and is a result of a sequence of computations that one consciously carries out . The former mode of cognition is what Kahneman calls System 1 , the latter System 2 . We currently have separate tools for solving each of these two kinds of problems on a computer , but it is not yet clear how to build a single system capable of both modes of cognition simultaneously . Many tasks that exercise System 2 in humans involve executing some fully-specified algorithm and thus are quite straightforward to solve using conventional computer programs . However , classical programming is not applicable to System 1 tasks as these typically correspond to functions that we do not know how to implement . Instead , System 1 tasks are usually tackled by approximating the target functions from observed input-output pairs . Whether such a learning-based approach could be successfully extended to System 2 tasks is an open question , and one that we aim to explore in this paper . We argue that the classical System 1 paradigm of training neural networks to approximate functions from inputs and outputs corresponding to a given task is not a scalable approach for tackling tasks from the System 2 domain . We claim that , given only inputs and outputs , neural networks will not receive enough supervision to converge to a solution when trained to perform algorithmic tasks that humans solve via System 2 . 
To circumvent this , we propose to supplement the training data with intermediate results that would be helpful to compute before arriving at the final output . By modifying the training data in this way , we are effectively changing the training objective — instead of having to learn any arbitrary way of mapping inputs to outputs , the neural networks are now trained to compute the outputs from the inputs through a particular sequence of intermediate steps . We hypothesize that such additional guidance might be necessary in order for neural networks to learn algorithmic tasks , and we empirically evaluate this hypothesis . Much of existing work in applying neural networks for System 2 tasks has been focused on making neural architectures more aligned with the execution of algorithms . While fine-tuning the model architectures may ultimately lead to superior System 2 capabilities ( after all , the architecture of a calculator allows it to perform arithmetic at a super-human level ) , we instead explore the challenge of extending the capabilities of existing state-of-the-art System 1 models — namely , the Transformer — towards the System 2 domain . In particular , we take inspiration from the human ability to solve algorithmic tasks by writing down symbols on paper and propose to formulate such tasks as sequence-to-sequence problems where the target sequence includes both the outputs and all the intermediate symbols that a human would write ( or more ) . That is , instead of modifying the models , we experiment with modifying the data . 
Our main contributions can be summarized as follows : • We show that it is possible to hand-code a 1-layer 1-head Transformer to compute any finite function if enough intermediate steps are used ; • We show that a 1-layer 1-head Transformer can be trained to perform binary addition if the target sequences include intermediate results , whereas it fails to learn the task when predicting the outputs directly from the inputs ; and • We show that a Frozen Pretrained Transformer can be trained to perform binary addition if the target sequences include intermediate results , whereas it fails to learn the task when predicting the outputs directly from the inputs . 2 RELATED WORK . The need of bridging the gap between System 1 and System 2 capabilities in current deep learning models has recently been emphasized by Yoshua Bengio ( Bengio , 2019 ; Bengio et al. , 2021 ) . The work of Bengio and colleagues addresses the higher-level cognition in its broadest sense , seeking to model and incorporate into current systems concepts such as consciousness ( Bengio , 2017 ) , causality ( Bengio et al. , 2020 ) , agency ( Thomas et al. , 2018 ) , and global workspace ( Goyal et al. , 2021 ) . In Goyal & Bengio ( 2020 ) , they question the paradigm of classical statistical learning and propose to shift from training models on curated datasets towards training agents in complex non-stationary environments . In contrast to that , our approach lies within a conventional learning framework ( namely , sequence-to-sequence modeling with Transformers ) , and addresses System 2 in a slightly narrower sense ( namely , as algorithmic execution ) . A considerable fraction of recent work in applying neural networks for algorithmic tasks has been concerned with modifying or augmenting neural architectures to make them more suitable for algorithmic execution . 
One very common approach involves bringing components from classical computer architecture into the neural setting: Neural Turing Machines give neural networks access to dynamic external memory (Graves et al., 2014; 2016), Stack-augmented RNNs augment the networks with an infinite pushdown stack (Joulin & Mikolov, 2015), Neural Random Access Machines use registers and introduce the ability to manipulate and dereference pointers (Kurach et al., 2016), while Neural Arithmetic Logic Modules represent neural versions of the ALU (Trask et al., 2018; Madsen & Johansen, 2020). A parallel but closely related line of research frames these augmented architectures as neural controllers endowed with access to external interfaces, and applies reinforcement learning techniques to train them to solve algorithmic tasks by interacting with the interfaces (Zaremba & Sutskever, 2015; Zaremba et al., 2016). The approach of using a variable amount of computation per input, known as Adaptive Computation Time (ACT), was introduced by Graves (2016). Universal Transformers (Dehghani et al., 2019) integrate ACT into the Transformer architecture, allowing each output symbol to be the result of a variable number of applications of a single Transformer layer. While these works are, like ours, motivated by the apparent necessity of intermediate processing in certain tasks, they use learned ponder time and learned ponder content, whereas our work uses given ponder time and given ponder content. The main advantage of learned ponder time and content is that the model is free to discover and learn to perform the intermediate computations that it finds most useful. This also means that the researcher does not need to know the correct intermediate steps to train the model and can thus tackle a potentially larger class of problems.
The main disadvantage of learned ponder time and content is that the amount of supervision per forward pass decreases as the model uses more intermediate steps, and the signal can quickly become too weak for the model to train at all. It is also worth noting that intermediate steps are typically given when humans are taught to perform algorithmic tasks (rather than being asked to infer intermediate computations by just looking at input-output pairs), which might be an important argument for given ponder content. The idea of providing supervision beyond input-output examples has already appeared in several projects, although, in our view, it still remains largely underexplored. Reed & de Freitas (2016) train Neural Programmer-Interpreters by providing supervision on the correct action sequences (execution traces) of the recurrent controller, Mirman et al. (2018) train differential Neural Computational Machines with extra supervision on the movements of the read-write heads, and Mirman et al. (2018) train Neural Execution Engines with extra supervision on the attention masks. While the added training signal does lead to better sample efficiency and improved generalization, a major limitation of all of these approaches, as acknowledged by Mirman et al. (2018), is that the extra supervision needs to be highly specialized, as it applies to very specific components of each architecture. Our approach of supplementing the target sequences with intermediate results is completely architecture-independent and thus does not share this limitation. Veličković et al. (2020) use extra supervision at the data level as well, though their work is focused specifically on tasks involving graph-structured inputs and is thus based on neural network architectures that are tailored for processing graphs.
In contrast to that , we adopt a general sequence-to-sequence modeling framework , with an intention to imitate a human executing an algorithm on a piece of paper . Our main goal is to explore how and whether powerful existing System 1 models could be used in the System 2 domain , rather than to find a neural architecture that would be most suitable for executing algorithms . We therefore use vanilla decoder-only Transformers in our experiments . 3 MOTIVATION . The distinguishing characteristic of System 1 tasks is that they can be solved instantly and unconsciously , in something akin to a single “ forward pass ” through a human neural network . It thus seems plausible that all input-output mappings corresponding to System 1 tasks are in principle computable via a single forward pass through a sufficiently large artificial neural network . Recent success in deep learning shows that training bigger models on more data indeed makes it possible to solve more and more System 1 tasks , and it is not unreasonable to expect this trend to continue . When it comes to solving System 2 tasks , however , scaling up appears to be a dead-end . Consider the problem of completing the sentence “ The first digit of the n-th Fibonacci number is . . . ” , where n is replaced by some positive integer . Since a single forward pass through a neural network ( no matter how large ) involves a constant amount of computation , there will be some number N ∈ N such that for all n > N the correct output is not computable . It follows that the only viable approach for solving these kinds of algorithmic tasks is to arrive at the output through intermediate steps . The main value of introducing intermediate steps is that it gives control over the level of model expressivity required to implement a solution to a particular task . 
No matter how many atomic operations separate the outputs from the inputs in a given algorithmic problem , performing those exact operations in sequence would make each execution step only as complex as a single atomic operation . This is arguably the main reason why a human brain can “ implement ” something like integer multiplication even though it does not contain a circuit for multiplying arbitrarily large numbers directly — the key is that a complex problem can be decomposed into simpler ones until each sub-problem becomes solvable “ atomically ” via System 1 . By the same token , a neural network that is not expressive enough to compute some output through 0 intermediate steps may be expressive enough to compute the same output through l intermediate steps for some l > 0 , since each of the l steps would involve computing a simpler function . Since many System 2 tasks are algorithmic , their outputs can be made arbitrarily distant from the inputs ( in terms of the number of atomic operations separating the two ) , which implies that step-by-step processing is the only scalable approach for solving them . The necessity of intermediate processing does not by itself imply the amount of guidance that the neural networks should receive during training . The exact intermediate computations can either remain unspecified and be inferred by the model , or be fixed and provided as part of the training data . Although there are pros and cons to both alternatives ( as we have outlined in section 2 ) , we see the latter as more promising and choose to include the intermediate results in the target sequences used during training . 
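To make the decomposition argument concrete, consider the Fibonacci example from above: no fixed-size forward pass can output the first digit of the n-th Fibonacci number for arbitrary n, yet the computation is just a chain of single additions, each "atomically" simple. The function and variable names below are illustrative, not from the paper:

```python
# Sketch of the decomposition argument: F(n) is out of reach for one
# constant-compute step, but it is a sequence of n single additions,
# and the trace of intermediate results makes each step trivial.

def fib_with_trace(n: int):
    a, b = 0, 1
    trace = []                  # every intermediate result, in order
    for _ in range(n):
        a, b = b, a + b
        trace.append(a)
    return trace                # trace[-1] is F(n)

trace = fib_with_trace(10)      # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
first_digit = int(str(trace[-1])[0])
```

Each entry of `trace` is computable from the previous two by one addition, which is exactly the kind of intermediate supervision the target sequences provide.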
Our position is based on the following three observations: first, it is natural to provide complete supervision over the intermediate computations when teaching humans to perform algorithmic tasks; second, learned intermediates create the vanishing supervision problem (the longer the ponder time, the less training signal is received per forward pass); and third, for all algorithmic tasks the exact intermediate results are fully known anyway. While manually increasing the amount of supervision goes against the tradition of minimizing reliance on expert knowledge that arguably led to most of the recent achievements in the System 1 domain, System 2 tasks represent a structurally different class of problems for which a different set of constraints may apply. The arguments laid out above have the following important implications:
• Expressiveness: a neural network implementing only some very basic atomic operations should in principle be capable of expressing arbitrarily complex functions as long as enough intermediate steps are used;
• Training: a neural network that fails to learn some algorithmic task when trained without intermediate processing may be able to learn the same task if enough intermediate steps are used; and
• Composability: a neural network that is pretrained on some general-purpose task and does not contain a circuit implementing some algorithmic computation directly may nevertheless be able to solve the same task by computing the output through simpler operations for which it does contain circuits.
We establish the first of these implications in Section 4 and empirically validate the latter two in Sections 5 and 6. Even though demonstrating a single example where a statement holds does not prove it in general, our empirical results confirm that the hypothesized phenomena do occur in the particular contexts we considered in this work and thus serve as evidence in favor of the latter two implications.
The paper presents the authors’ hypothesis: intermediate steps are necessary for Transformers to perform well on algorithmic tasks. They analyse Transformers on a simple binary addition task with and without intermediate steps, and show that only the model trained with intermediate steps, and with supervision over these steps, can generalise. In contrast to most of the prior work, the authors argue for supervision in the output domain, which makes the approach architecture-agnostic.
SP:a3fbd264c8e8e743f49ef09c17be30de7084862b
UNCERTAINTY QUANTIFICATION USING VARIATIONAL INFERENCE FOR BIOMEDICAL IMAGE SEGMENTATION
Deep learning motivated by convolutional neural networks has been highly successful in a range of medical imaging problems such as image classification, image segmentation, and image synthesis. However, for validation and interpretability we need not only the predictions made by the model but also how confident it is while making those predictions. This is important in safety-critical applications if the technology is to be accepted. In this work, we use an encoder-decoder architecture based on variational inference techniques for segmenting brain tumour images. We evaluate our work on the publicly available BRATS dataset using the Dice Similarity Coefficient (DSC) and Intersection Over Union (IOU) as the evaluation metrics. Our model is able to segment brain tumours while taking into account both aleatoric uncertainty and epistemic uncertainty in a principled Bayesian manner.

1 INTRODUCTION .

Medical image segmentation is a challenging task for medical practitioners: it is costly, time-consuming, and prone to error. Hence there is a need to automate the manual segmentation process. Lately, neural networks have shown great potential on a variety of medical image segmentation problems. The challenge with the approaches used in the literature is that the model does not predict the associated uncertainty. This is where Bayesian methods come into play, as they give a principled way of measuring uncertainty in the model predictions. Measuring uncertainty in the output predictions made by neural networks is important for interpretation and validation. Rather than learning point estimates, Bayesian Neural Networks (BNNs) learn a distribution over the weights. The training process of a BNN involves first initializing the parameters of the neural network.
Next, the weights are sampled from some distribution (such as a Gaussian with zero mean and unit variance), and both the forward pass and the backward pass are done to update the weights using the conventional backpropagation algorithm. Monte Carlo dropout networks (Kingma et al., 2015) use dropout layers to approximate deep Gaussian processes, which still lack theoretical understanding. Bayesian Convolutional Neural Networks (Gal and Ghahramani, 2015) use variational inference to learn the posterior distribution over the weights given the dataset. The problem with this approach is that it requires a lot of computation involving many parameters, making the technique not scalable in practice. The Variational Autoencoder (Kingma et al., 2015), which is based on generative models, solves the above problems and has been successful in a number of tasks such as generating images and text, and in recommender systems. This approach comes with several challenges in its own right, which have been successfully tackled in the literature. A random variable sampled from the posterior distribution has no gradient, so conventional backpropagation techniques cannot be applied to it. The Local Reparameterization Trick (Kingma et al., 2015) was proposed to tackle this by converting the random variable into a deterministic one for computation. The second challenge was the huge computational requirement, since weight updates are needed in every iteration. The Bayes by Backprop algorithm (Blundell et al., 2015) tackled this by calculating gradients in backpropagation using a scale-and-shift approach, updating the posterior distribution in the backward pass.

2 RELATED WORK .

2.1 MEDICAL IMAGE SEGMENTATION .

The problem of segmenting medical images has been successfully tackled in the literature using mainly two techniques: first, Fully Convolutional Networks (FCNs) (Long et al., 2015), and second, approaches based on U-Net (Ronneberger et al., 2015).
The main characteristic of FCN architectures is that they do not use the fully connected layers at the end that have been used successfully for image classification problems. U-Net methods, on the other hand, use an encoder-decoder architecture with pooling layers in the encoder and upsampling layers in the decoder. Skip connections connect the encoder layers to the decoder layers to create an additional path for the flow of gradients in the backpropagation step. This helps reduce overfitting due to the many parameters involved in training the network.

2.2 BAYESIAN NEURAL NETWORK .

Lately, there has been a revival of interest in Bayesian methods, as some of the inherent problems with deep learning could be solved using them. They offer a scalable approach to avoiding overfitting in neural networks and at the same time give us a measure of uncertainty. This is very important in critical applications, where we require not only the predictions made by the model but also how confident it is while making those predictions. A BNN can be considered an ensemble of neural networks (Gal, 2016). It has two advantages over standard neural networks: first, it avoids overfitting, and second, it gives a measure of the uncertainty involved. Instead of point estimates, the neural network learns the posterior distribution over the weights given the dataset, as defined in Equation 1:

$$p(\omega \mid D) = \frac{p(D \mid \omega)\, p(\omega)}{p(D)} = \frac{\prod_{i=1}^{N} p(y_i \mid x_i, \omega)\, p(\omega)}{p(D)} \quad (1)$$

The predictive distribution can be calculated by approximating the integral defined in Equation 2:

$$p(y^* \mid x^*, D) = \int_{\Omega} p(y^* \mid x^*, \omega)\, p(\omega \mid D)\, d\omega \quad (2)$$

The challenge is that the posterior is often intractable. To combat this, Neal (1993) used Markov Chain Monte Carlo (MCMC) for learning the weights of Bayesian neural networks. Also, Graves (2011), Blundell et al. (2015), and Louizos and Welling (2016) independently proposed techniques using variational inference for approximating the posterior distribution. The KL divergence between the approximate posterior and the true posterior can be calculated using Equation 3:

$$\mathrm{KL}\{q_\theta(\omega) \,\|\, p(\omega \mid D)\} := \int_{\Omega} q_\theta(\omega) \log \frac{q_\theta(\omega)}{p(\omega \mid D)}\, d\omega \quad (3)$$

Alternatively, minimizing the KL divergence can be rewritten as maximizing the Evidence Lower Bound (ELBO), which is tractable. The corresponding loss is shown in Equation 4:

$$-\int_{\Omega} q_\theta(\omega) \log p(y \mid x, \omega)\, d\omega + \mathrm{KL}\{q_\theta(\omega) \,\|\, p(\omega)\} \quad (4)$$

2.3 VARIATIONAL INFERENCE .

Variational inference finds the parameters of the distribution by maximizing the Evidence Lower Bound. The ELBO consists of the sum of two terms: the Kullback-Leibler (KL) divergence between two distributions and the negative log-likelihood (NLL). The objective is defined in Equation 5:

$$\min \mathrm{KL}(q_\theta(w) \,\|\, p(w \mid D)) \quad (5)$$

The KL divergence is defined in Equation 6:

$$\mathrm{KL}(q(x) \,\|\, p(x)) = -\int q(x) \log \frac{p(x)}{q(x)}\, dx \quad (6)$$

The posterior in the above equation contains an integral which is intractable. The expression can be rewritten as in Equation 7:

$$\mathrm{KL}(q_\theta(w) \,\|\, p(w \mid D)) = \mathbb{E}_{q_\theta(w)} \log \frac{q_\theta(w)\, p(D)}{p(D \mid w)\, p(w)} = \log p(D) + \mathbb{E}_{q_\theta(w)} \log \frac{q_\theta(w)}{p(w)} - \mathbb{E}_{q_\theta(w)} \log p(D \mid w) = \log p(D) - \mathcal{L}(\theta) \quad (7)$$

The above equation can be decomposed into two parts: the KL divergence between the exact posterior and its variational approximation, which needs to be minimized, and the ELBO term, which needs to be maximized. This is shown in Equation 8:

$$\max_\theta \log p(D) = \max_\theta \left[ \mathrm{KL}(q_\theta(w) \,\|\, p(w \mid D)) + \mathcal{L}(\theta) \right] \quad (8)$$

The KL divergence is zero if the exact posterior equals the variational approximation. Since the KL divergence is always greater than or equal to zero, the objective can be optimized by maximizing only the ELBO (Kingma et al., 2015), as defined in Equation 9.
$$\mathcal{L}(\theta) = \mathbb{E}_{q_\theta(w)} \log p(D \mid w) - \mathbb{E}_{q_\theta(w)} \log \frac{q_\theta(w)}{p(w)} = \mathcal{L}_D - \mathrm{KL}(q_\theta(w) \,\|\, p(w)) \quad (9)$$

2.4 ALEATORIC UNCERTAINTY AND EPISTEMIC UNCERTAINTY .

There are two types of uncertainty, aleatoric and epistemic, and the predictive variance is the sum of the two. Bayesian Neural Networks can be considered an ensemble of randomly initialized neural networks whose test results are averaged in parallel (Gal, 2016). For the final predictions, a single mean and variance can be estimated as shown in Equation 10 and Equation 11, respectively:

$$\mu_c(x) = \frac{1}{M} \sum_{i=1}^{M} \hat{\mu}_i(x) \quad (10)$$

$$\hat{\sigma}_c^2(x) = \frac{1}{M} \sum_{i=1}^{M} \tilde{\sigma}_i^2(x) + \left[ \frac{1}{M} \sum_{i=1}^{M} \hat{\mu}_i^2(x) - \mu_c^2(x) \right] \quad (11)$$

The first term in the variance denotes aleatoric uncertainty, while the second denotes epistemic uncertainty. A Bayesian Neural Network model for uncertainty estimation was proposed by Kendall and Gal (2017), with the last layer representing the mean and variance of the logits. The predictive distribution approximating the posterior distribution, which gives a measure of uncertainty, is defined in Equation 12:

$$q_{\hat{\theta}}(y^* \mid x^*) = \int_{\Omega} p(y^* \mid x^*, \omega)\, q_{\hat{\theta}}(\omega)\, d\omega \quad (12)$$

Aleatoric uncertainty is a measure of the variability of the predictions arising from the dataset, and hence is inherent in the data. It captures noise in the observations, which arises from the distribution of the data. It depends strongly on bias and the distribution of the input data, but not on the number of training samples. Epistemic uncertainty, on the other hand, is a measure of the variability of the predictions arising from the model, and is tied to the metrics used for evaluation, such as accuracy and loss. It represents the uncertainty inside the model, resulting from limitations of knowledge and data about the system, and it decreases given enough training samples.

3 METHOD .

3.1 DATASET .
To validate the generalization performance of our proposed approach, the publicly available BRATS18 datasets for brain tumour segmentation were used (Menze et al., 2015; Bakas et al., 2018). The dataset contains MRI scans of 175 patients with glioblastoma and lower-grade glioma. The images have a resolution of 240×240×155 pixels. The ground truth labels were created by expert neuroradiologists. A sample from the dataset is shown in Figure 2.

3.2 DATA AUGMENTATION .

The following data augmentation methods were used to increase the size of the dataset:
1. Rescaling: we rescale the pixel values by a rescaling factor of 1/255.
2. Rotation: random rotations with a degree range of [0, 360] were used.
3. Height and width shift: shifting the input left or right and up or down.
4. Shearing intensity: the shear angle (in degrees) in the counter-clockwise direction.
5. Brightness: a brightness shift value drawn from the configured range.
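The augmentations above are standard options in common image pipelines (e.g. Keras preprocessing). As a minimal pure-NumPy sketch of the first and third items (rescaling and height/width shift), with helper names and zero-fill border behaviour that are our assumptions rather than the paper's implementation:

```python
import numpy as np

# Illustrative sketch of two augmentations from the list above.
# Names and border handling are assumptions, not the paper's code.

def rescale(img, factor=1.0 / 255.0):
    # Map uint8 pixel values into [0, 1] by multiplying with 1/255.
    return img.astype(np.float32) * factor

def shift(img, dy=0, dx=0):
    # Shift image content down by dy rows and right by dx columns
    # (non-negative shifts only in this sketch). np.roll wraps around,
    # so the wrapped-in rows/columns are zero-filled afterwards.
    out = np.roll(img, (dy, dx), axis=(0, 1))
    if dy > 0:
        out[:dy, :] = 0
    if dx > 0:
        out[:, :dx] = 0
    return out

img = np.full((4, 4), 255, dtype=np.uint8)
scaled = rescale(img)            # all values become 1.0
shifted = shift(scaled, dy=1)    # top row zeroed, content moved down
```

Rotation, shear, and brightness shifts follow the same pattern but need interpolation, which is why frameworks provide them as built-in options.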
The paper proposes a method to quantify uncertainty in medical imaging, which is an important task for clinical applications, with variational inference. It uses the U-Net architecture and BRATS18 dataset for evaluation. It quantifies uncertainty with 3 methods and evaluates the predictions with Dice score (DSC) and Intersection over Union (IOU).
SP:4fdc029cadccfd61bb1a94b2c9bfadd0d234e7e4
The paper proposes a method for uncertainty quantification for biomedical image segmentation. The proposed method takes mean and std of segmentation from a backbone segmentation model and trains a VAE on top of it. The method is evaluated on only BRATS dataset.
SP:4fdc029cadccfd61bb1a94b2c9bfadd0d234e7e4
UNCERTAINTY QUANTIFICATION USING VARIATIONAL INFERENCE FOR BIOMEDICAL IMAGE SEGMENTATION
Deep learning motivated by convolutional neural networks has been highly successful in a range of medical imaging problems like image classification , image segmentation , image synthesis etc . However for validation and interpretability , not only do we need the predictions made by the model but also how confident it is while making those predictions . This is important in safety critical applications for the people to accept it . In this work , we used an encoder decoder architecture based on variational inference techniques for segmenting brain tumour images . We evaluate our work on the publicly available BRATS dataset using Dice Similarity Coefficient ( DSC ) and Intersection Over Union ( IOU ) as the evaluation metrics . Our model is able to segment brain tumours while taking into account both aleatoric uncertainty and epistemic uncertainty in a principled bayesian manner . 1 INTRODUCTION . Medical image segmentation is a challenging task for medical practitioners . It is costly , takes time and is prone to error . Hence there is a need to automate the manually done segmentation . Lately Neural Networks have shown great potential on a variety of medical image segmentation problems . The challenge with the approaches used in literature is that the model doesn ’ t predict the uncertainty associated . This is where Bayesian methods come into play as it gives a principled way of measuring uncertainty from the model predictions . Measuring uncertainty in the output predictions made by neural networks is important for interpretation and validation . Rather than learning the point estimates , Bayesian Neural Networks ( BNN ) learns the distribution over the weights . The training process of BNN involves first initializing the parameters of the neural network . 
Next, the weights are sampled from some distribution (e.g., a Gaussian with zero mean and unit variance), and both the forward pass and the backward pass are performed to update the weights using the conventional backpropagation algorithm. Monte Carlo dropout networks (Kingma et al., 2015) use dropout layers to approximate deep Gaussian processes, but this approach still lacks a full theoretical understanding. Bayesian Convolutional Neural Networks (Gal and Ghahramani, 2015) use variational inference to learn the posterior distribution over the weights given the dataset. The problem with this approach is that it requires substantial computation over a large number of parameters, making the technique hard to scale in practice. The Variational Autoencoder (Kingma et al., 2015), which is based on generative models, alleviates these problems and has been successful in a number of tasks such as image generation, text generation, and recommender systems. This approach comes with several challenges of its own, which have been successfully tackled in the literature. A random variable sampled from the posterior distribution has no gradient, so conventional backpropagation cannot be applied to it directly. The Local Reparameterization Trick (Kingma et al., 2015) was proposed to tackle this by re-expressing the random variable as a deterministic function of the parameters and an independent noise source. The second challenge was the huge computational requirement, since weight updates are needed in every iteration. The Bayes by Backprop algorithm (Blundell et al., 2015) tackled this by calculating gradients in backpropagation using a scale-and-shift approach, updating the posterior distribution in the backward pass. 2 RELATED WORK . 2.1 MEDICAL IMAGE SEGMENTATION . The problem of segmenting medical images has been successfully tackled in the literature using mainly two families of techniques: those based on Fully Convolutional Networks (FCN) (Long et al., 2015) and those based on U-Net (Ronneberger et al., 2015).
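The two ideas described above can be sketched together in a few lines. This is a minimal illustration of our own (hypothetical function names, toy shapes), assuming a fully factorized Gaussian posterior and a standard normal prior: the reparameterization trick makes a Gaussian weight sample differentiable in its parameters, and under that posterior/prior choice the KL penalty of the ELBO has a closed form.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """w = mu + sigma * eps with eps ~ N(0, 1): the randomness is isolated
    from the parameters, so gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_std_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a factorized Gaussian
    posterior against a standard normal prior (our assumed choice)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
w = reparameterize(np.zeros(4), np.zeros(4), rng)   # one differentiable weight sample
penalty = kl_to_std_normal(np.zeros(4), np.zeros(4))
assert w.shape == (4,) and penalty == 0.0           # posterior == prior -> zero KL
```

In a framework with automatic differentiation, `mu` and `log_var` would be trainable parameters and the KL term would be added to the negative log-likelihood to form the ELBO loss.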
The main characteristic of FCN architectures is that they do not use the fully connected layers at the end of the network that have been used successfully for image classification problems. U-Net methods, on the other hand, use an encoder-decoder architecture with pooling layers in the encoder and upsampling layers in the decoder. Skip connections link the encoder layers to the decoder layers to create an additional path for the flow of gradients during backpropagation. This helps reduce overfitting caused by the many parameters involved in training the network. 2.2 BAYESIAN NEURAL NETWORK . Lately, there has been a revival of interest in Bayesian methods, as some of the inherent problems of deep learning can be addressed with them. They provide a scalable way of avoiding overfitting in neural networks while also giving a measure of uncertainty. This is very important in critical applications, where we require not only the predictions made by the model but also how confident it is while making those predictions. A BNN can be considered an ensemble of neural networks (Gal, 2016). It has two advantages over standard neural networks: it avoids overfitting, and it gives a measure of the uncertainty involved. Instead of point estimates, the neural network learns the posterior distribution over the weights given the dataset, as defined in Equation 1:

p(\omega|D) = \frac{p(D|\omega)\,p(\omega)}{p(D)} = \frac{\prod_{i=1}^{N} p(y_i|x_i,\omega)\,p(\omega)}{p(D)} \quad (1)

The predictive distribution can be calculated by approximating the integral defined in Equation 2:

p(y^*|x^*, D) = \int_{\Omega} p(y^*|x^*, \omega)\, p(\omega|D)\, d\omega \quad (2)

The challenge is that the posterior is often intractable. To combat this, (Neal, 1993) used Markov Chain Monte Carlo (MCMC) for learning the weights of Bayesian neural networks. Also (Graves, 2011), (Blundell et al.
, 2015) and (Louizos and Welling, 2016) independently proposed techniques using variational inference for approximating the posterior distribution. The KL divergence between the approximate posterior and the true posterior can be calculated using Equation 3:

\mathrm{KL}\{q_\theta(\omega)\,\|\,p(\omega|D)\} := \int_{\Omega} q_\theta(\omega) \log \frac{q_\theta(\omega)}{p(\omega|D)}\, d\omega \quad (3)

Alternatively, minimizing the KL divergence can be written in another form as maximizing the Evidence Lower Bound (ELBO), which is tractable. This is shown in Equation 4:

-\int_{\Omega} q_\theta(\omega) \log p(y|x,\omega)\, d\omega + \mathrm{KL}\{q_\theta(\omega)\,\|\,p(\omega)\} \quad (4)

2.3 VARIATIONAL INFERENCE . Variational inference finds the parameters of the distribution by maximizing the Evidence Lower Bound. The ELBO consists of the sum of two terms: the Kullback-Leibler (KL) divergence between two distributions and the negative log-likelihood (NLL). The objective is defined in Equation 5:

\min_\theta \mathrm{KL}(q_\theta(w)\,\|\,p(w|D)) \quad (5)

The KL divergence is defined in Equation 6:

\mathrm{KL}(q(x)\,\|\,p(x)) = -\int q(x) \log \frac{p(x)}{q(x)}\, dx \quad (6)

The posterior in the above equation contains an integral which is intractable. The objective can be rewritten as in Equation 7:

\mathrm{KL}(q_\theta(w)\,\|\,p(w|D)) = \mathbb{E}_{q_\theta(w)} \log \frac{q_\theta(w)\, p(D)}{p(D|w)\, p(w)} = \log p(D) + \mathbb{E}_{q_\theta(w)} \log \frac{q_\theta(w)}{p(w)} - \mathbb{E}_{q_\theta(w)} \log p(D|w) = \log p(D) - \mathcal{L}(\theta) \quad (7)

The above equation decomposes into two parts: the KL divergence between the exact posterior and its variational approximation, which needs to be minimized, and the ELBO term, which needs to be maximized. This is shown in Equation 8:

\log p(D) = \mathrm{KL}(q_\theta(w)\,\|\,p(w|D)) + \mathcal{L}(\theta) \quad (8)

The KL divergence is zero if the exact posterior equals the variational approximation. Since the KL divergence is always non-negative, the objective can be optimized by maximizing only the ELBO (Kingma et al., 2015), as defined in Equation 9:
\mathcal{L}(\theta) = \mathbb{E}_{q_\theta(w)} \log p(D|w) - \mathbb{E}_{q_\theta(w)} \log \frac{q_\theta(w)}{p(w)} = \mathcal{L}_D - \mathrm{KL}(q_\theta(w)\,\|\,p(w)) \quad (9)

2.4 ALEATORIC UNCERTAINTY AND EPISTEMIC UNCERTAINTY . There are two types of uncertainty, aleatoric and epistemic, and the predictive variance is the sum of the two. A Bayesian Neural Network can be considered an ensemble of randomly initialized neural networks whose test results are averaged (Gal, 2016). For the final predictions, a single mean and variance can be estimated as shown in Equation 10 and Equation 11, respectively:

\mu_c(x) = \frac{1}{M} \sum_{i=1}^{M} \hat{\mu}_i(x) \quad (10)

\hat{\sigma}_c^2(x) = \frac{1}{M} \sum_{i=1}^{M} \hat{\sigma}_i^2(x) + \left[ \frac{1}{M} \sum_{i=1}^{M} \hat{\mu}_i^2(x) - \mu_c^2(x) \right] \quad (11)

The first term in the variance denotes aleatoric uncertainty, while the second denotes epistemic uncertainty. A Bayesian Neural Network model for uncertainty estimation was proposed by (Kendall and Gal, 2017), with the last layer representing the mean and variance of the logits. The predictive distribution approximating the posterior, which gives a measure of uncertainty, is defined in Equation 12:

q_{\hat{\theta}}(y^*|x^*) = \int_{\Omega} p(y^*|x^*, \omega)\, q_{\hat{\theta}}(\omega)\, d\omega \quad (12)

Aleatoric uncertainty measures the variability of the predictions due to the data itself: it captures the noise inherent in the observations, arising from the data distribution. It depends strongly on the bias and distribution of the input data but not on the number of training samples. Epistemic uncertainty, on the other hand, measures the variability of the predictions due to the model, and is tied to the metrics used for evaluation such as accuracy and loss. It represents the uncertainty inside the model, resulting from limited knowledge and data, and decreases given enough training samples. 3 METHOD . 3.1 DATASET .
To validate the generalization performance of our proposed approach, we used the publicly available BRATS18 brain tumour segmentation dataset (Menze et al., 2015; Bakas et al., 2018). It contains MRI scans of 175 patients with glioblastoma and lower-grade glioma. The images have a resolution of 240×240×155 voxels. The ground-truth labels were created by expert neuroradiologists. A sample from the dataset is shown in Figure 2. 3.2 DATA AUGMENTATION . The following data augmentation methods were used to increase the size of the dataset: 1. Rescaling: pixel values are rescaled by a factor of 1/255. 2. Rotation: random rotations with angles drawn from the range [0, 360] degrees. 3. Height and width shift: the input is shifted left or right and up or down. 4. Shearing intensity: the shear angle (in degrees) in the counter-clockwise direction. 5. Brightness: a brightness shift value drawn from the configured range.
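The augmentation list above can be sketched as a toy numpy pipeline. This is a deliberately simplified stand-in, not the paper's code: rotations are restricted to 90-degree steps, shifts wrap around instead of padding, shearing is omitted, and the brightness range is our own choice; a real pipeline would use an image-augmentation library.

```python
import numpy as np

def augment(img, rng):
    """Toy stand-in for the augmentation steps listed above (illustrative only)."""
    img = img / 255.0                                 # 1. rescaling by 1/255
    img = np.rot90(img, k=rng.integers(4))            # 2. rotation (90-degree steps only)
    img = np.roll(img, rng.integers(-2, 3), axis=1)   # 3. width shift (wrap-around)
    img = img * rng.uniform(0.8, 1.2)                 # 5. brightness shift (assumed range)
    return img

out = augment(np.full((4, 4), 255.0), np.random.default_rng(0))
assert out.shape == (4, 4)
```

Each call draws fresh random transform parameters, so repeated calls on the same image yield distinct training samples.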
The paper proposes a deep-learning-based method that accounts for both aleatoric and epistemic uncertainty in biomedical image segmentation tasks. The proposed method is based on variational inference techniques with a standard encoder-decoder CNN architecture. The method is applied to brain tumour MR images from the standard BRATS segmentation challenge.
Bregman Gradient Policy Optimization
1 INTRODUCTION . Policy Gradient (PG) methods are a class of popular policy optimization methods for Reinforcement Learning (RL), and have achieved significant successes in many challenging applications (Li, 2017) such as robot manipulation (Deisenroth et al., 2013), the game of Go (Silver et al., 2017) and autonomous driving (Shalev-Shwartz et al., 2016). In general, PG methods directly search for the optimal policy by maximizing the expected total reward of the Markov Decision Processes (MDPs) involved in RL, where an agent takes actions dictated by a policy in an unknown dynamic environment over a sequence of time steps. Since PGs are generally estimated by Monte-Carlo sampling, vanilla PG methods usually suffer from very high variance, resulting in slow convergence and instability. Thus, many fast PG methods have recently been proposed to reduce the variance of vanilla stochastic PG. For example, Sutton et al. (2000) introduced a baseline to reduce the variance of the stochastic PG. Konda & Tsitsiklis (2000) proposed an efficient actor-critic algorithm that estimates the value function to reduce the effect of large variances. Schulman et al. (2015b) proposed generalized advantage estimation (GAE) to control both the bias and variance of the policy gradient. More recently, faster variance-reduced PG methods (Papini et al., 2018; Xu et al., 2019a; Shen et al., 2019; Liu et al., 2020) have been developed based on variance-reduction techniques from stochastic optimization. Alternatively, some successful PG algorithms (Schulman et al., 2015a; 2017) improve the convergence rate and robustness of vanilla PG methods by using penalties such as a Kullback-Leibler (KL) divergence penalty. For example, trust-region policy optimization (TRPO) (Schulman et al.
, 2015a) ensures that the newly selected policy stays close to the old one by using a KL-divergence constraint, while proximal policy optimization (PPO) (Schulman et al., 2017) clips the weighted likelihood ratio to reach this goal implicitly. Subsequently, Shani et al. (2020) analyzed the global convergence properties of TRPO in tabular RL based on the convex mirror descent algorithm. Liu et al. (2019) also studied the global convergence properties of PPO and TRPO equipped with overparametrized neural networks based on mirror descent iterations. At the same time, Yang et al. (2019) proposed PG methods based on the mirror descent algorithm. More recently, mirror descent policy optimization (MDPO) (Tomar et al., 2020) iteratively updates the policy beyond tabular RL by approximately solving a trust region problem with the convex mirror descent algorithm. In addition, Agarwal et al. (2019) and Cen et al. (2020) studied natural PG methods for regularized RL; however, Agarwal et al. (2019) mainly focus on tabular, log-linear and neural policy classes, while Cen et al. (2020) mainly focus on the softmax policy class. Although these specific mirror-descent-based PG methods have recently been studied, the results are scattered across empirical and theoretical works, and a universal framework that does not rely on specific RL tasks is still lacking. In particular, there is still no convergence analysis of PG methods based on the mirror descent algorithm in the nonconvex setting. Since mirror descent iterations adjust gradient updates to fit the problem geometry and are useful in regularized RL (Geist et al., 2019), an important problem remains to be addressed: Can we design a universal policy optimization framework based on the mirror descent algorithm, and provide a convergence guarantee for it in the nonconvex setting?
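To make the recurring mirror descent update concrete, here is a minimal sketch for one standard choice of Bregman divergence, the KL (negative-entropy) divergence on the probability simplex, which yields the exponentiated-gradient closed form. This is a generic textbook instance, not the paper's algorithm, which operates on the policy parameter space with arbitrary Bregman divergences.

```python
import numpy as np

def mirror_descent_step(p, grad, eta):
    """One mirror ascent step on the simplex with the KL Bregman divergence.

    Solves argmax_q <grad, q> - (1/eta) * KL(q || p), whose closed form is the
    exponentiated-gradient update below.
    """
    q = p * np.exp(eta * grad)
    return q / q.sum()  # renormalize onto the simplex

p = np.full(3, 1.0 / 3.0)                              # uniform starting policy
q = mirror_descent_step(p, np.array([1.0, 0.0, 0.0]), eta=1.0)
assert np.isclose(q.sum(), 1.0) and q[0] > q[1]        # mass shifts toward high-reward action
```

Note how the geometry induced by the KL divergence keeps the iterate on the simplex without any explicit projection; that is the property the framework exploits for general Bregman divergences.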
In this paper, we answer the above challenging question affirmatively and propose an efficient Bregman gradient policy optimization framework based on Bregman divergences and momentum techniques. In particular, we provide a convergence analysis framework for PG methods based on mirror descent iterations in the nonconvex setting. In summary, our main contributions are as follows: a) We propose an effective Bregman gradient policy optimization (BGPO) algorithm based on the basic momentum technique, which achieves a sample complexity of O(\epsilon^{-4}) for finding an \epsilon-stationary policy while requiring only one trajectory at each iteration. b) We propose an accelerated Bregman gradient policy optimization (VR-BGPO) algorithm based on the variance-reduced momentum technique. Moreover, we prove that VR-BGPO reaches the best known sample complexity of O(\epsilon^{-3}) in the nonconvex setting. c) We design a unified policy optimization framework based on mirror descent iterations and momentum techniques, and provide its convergence analysis in the nonconvex setting. Table 1 shows the sample complexities of the representative PG algorithms based on the mirror descent algorithm. Shani et al. (2020) and Liu et al. (2019) established global convergence of mirror descent variants of PG in some pre-specified settings, such as over-parameterized networks (Liu et al., 2019), by exploiting the hidden convex nature of these specific problems. Without such special structure, global convergence of these methods cannot be achieved. Our framework, however, does not rely on any specific policy class, and our convergence analysis builds only on the general nonconvex setting; thus, we only prove that our methods converge to stationary points. Geist et al. (2019); Jin & Sidford (2020); Lan (2021); Zhan et al.
(2021) studied a general theory of regularized MDPs based on the policy function space, which is generally discontinuous. Since both the state and action spaces S and A are generally very large in practice, the policy function space is large as well. Our methods, in contrast, build on the policy parameter space, which is a continuous Euclidean space and relatively small. Hence, our methods and theoretical results are more practical than the results in (Geist et al., 2019; Jin & Sidford, 2020; Lan, 2021; Zhan et al., 2021). Tomar et al. (2020) also propose a mirror descent PG framework based on the policy parameter space, but they do not provide any theoretical results and only consider the Bregman divergence in the form of the KL divergence, whereas our framework accommodates any form of Bregman divergence. 2 RELATED WORKS . In this section, we review related work on the mirror descent algorithm in RL and on variance-reduced PG methods, respectively. 2.1 MIRROR DESCENT ALGORITHM IN RL . Because it easily handles regularization terms, the mirror descent (a.k.a. Bregman gradient) algorithm (Censor & Zenios, 1992; Beck & Teboulle, 2003), first proposed in (Censor & Zenios, 1992) based on Bregman distances (Bregman, 1967; Censor & Lent, 1981), has shown significant success in regularized RL. For example, Neu et al. (2017) showed that both dynamic policy programming (Azar et al., 2012) and TRPO (Schulman et al., 2015a) are approximate variants of the mirror descent algorithm. Subsequently, Geist et al. (2019) introduced a general theory of regularized MDPs based on the convex mirror descent algorithm. More recently, Liu et al. (2019) studied the global convergence properties of PPO and TRPO equipped with overparametrized neural networks based on mirror descent iterations. At the same time, Shani et al.
(2020) analyzed the global convergence properties of TRPO for tabular policies based on the convex mirror descent algorithm. Wang et al. (2019) proposed divergence-augmented policy optimization for off-policy learning based on the mirror descent algorithm. MDPO (Tomar et al., 2020) iteratively updates the policy beyond tabular RL by approximately solving a trust region problem with the convex mirror descent algorithm. 2.2 (VARIANCE-REDUCED) PG METHODS . PG methods have been widely studied due to their stability and incremental nature in policy optimization. For example, the global convergence properties of the vanilla policy gradient method in infinite-horizon MDPs were recently studied in (Zhang et al., 2019). Subsequently, Zhang et al. (2020) studied the asymptotic global convergence properties of REINFORCE (Williams, 1992), whose policy gradient is approximated using a single trajectory or a fixed-size mini-batch of trajectories under soft-max parametrization and log-barrier regularization. To accelerate these vanilla PG methods, faster variance-reduced PG methods have been proposed based on the variance-reduction techniques of SVRG (Johnson & Zhang, 2013), SPIDER (Fang et al., 2018) and STORM (Cutkosky & Orabona, 2019) from stochastic convex and non-convex optimization. For example, the fast SVRPG algorithm (Papini et al., 2018; Xu et al., 2019a) was proposed based on SVRG. The fast HAPG (Shen et al., 2019) and SRVR-PG (Xu et al., 2019a) algorithms were developed using the SPIDER technique. Subsequently, the momentum-based PG methods ProxHSPGA (Pham et al., 2020) and IS-MBPG (Huang et al., 2020) were developed based on the variance-reduced technique of STORM/Hybrid-SGD (Cutkosky & Orabona, 2019; Tran-Dinh et al., 2019). More recently, Ding et al. (2021) studied the global convergence of momentum-based policy gradient methods. (Zhang et al.
, 2021) proposed a truncated stochastic incremental variance-reduced policy gradient (TSIVR-PG) method to remove the uncheckable importance-weight assumption of the above variance-reduced PG methods, and proved global convergence of TSIVR-PG under an overparameterization-of-policy assumption. 3 PRELIMINARIES . In this section, we review some preliminaries of Markov decision processes and policy gradients. 3.1 NOTATIONS . Let [n] = \{1, 2, \cdots, n\} for all n \in \mathbb{N}^+. For a vector x \in \mathbb{R}^d, let \|x\| denote the \ell_2 norm of x, and \|x\|_p = (\sum_{i=1}^{d} |x_i|^p)^{1/p} (p \geq 1) denote the p-norm of x. For two sequences \{a_k\} and \{b_k\}, we write a_k = O(b_k) if a_k \leq C b_k for some constant C > 0. \mathbb{E}[X] and \mathbb{V}[X] denote the expectation and variance of a random variable X, respectively. 3.2 MARKOV DECISION PROCESS . Reinforcement learning generally involves a discrete-time discounted Markov Decision Process (MDP) defined by a tuple \{S, A, P, r, \gamma, \rho_0\}. P(s'|s, a) : S \times A \rightarrow \Delta(S) is the Markov kernel that determines the transition probability from state s to s' when taking an action a \in A. r(s, a) : S \times A \rightarrow [-R, R] (R > 0) is the reward function of s and a, and \rho_0 = p(s_0) denotes the initial state distribution. \gamma \in (0, 1) is the discount factor. Let \pi : S \rightarrow \Delta(A) be a stationary policy, where \Delta(A) is the set of probability distributions on A. Given the current state s_t \in S, the agent executes an action a_t \in A following a conditional probability distribution \pi(a_t|s_t), and then obtains a reward r_t = r(s_t, a_t). At each time t, we can define the state-action value function Q^\pi(s_t, a_t) and the state value function V^\pi(s_t) as follows:

Q^\pi(s_t, a_t) = \mathbb{E}_{s_{t+1}, a_{t+1}, \ldots} \Big[ \sum_{l=0}^{\infty} \gamma^l r_{t+l} \Big], \quad V^\pi(s_t) = \mathbb{E}_{a_t, s_{t+1}, \ldots} \Big[ \sum_{l=0}^{\infty} \gamma^l r_{t+l} \Big].
(1) We also define the advantage function A^\pi(s_t, a_t) = Q^\pi(s_t, a_t) - V^\pi(s_t). The goal of the agent is to find the optimal policy by maximizing the expected discounted reward

\max_\pi J(\pi) := \mathbb{E}_{s_0 \sim \rho_0} [V^\pi(s_0)]. \quad (2)

Given a time horizon H, the agent collects a trajectory \tau = \{s_t, a_t\}_{t=0}^{H-1} under any stationary policy, and obtains a cumulative discounted reward r(\tau) = \sum_{t=0}^{H-1} \gamma^t r(s_t, a_t). Since the state and action spaces S and A are generally very large, directly solving problem (2) is difficult. Thus, we let the policy \pi be parametrized as \pi_\theta with parameter \theta \in \Theta \subseteq \mathbb{R}^d. Given the initial distribution \rho_0 = p(s_0), the probability distribution over a trajectory \tau is

p(\tau|\theta) = p(s_0) \prod_{t=0}^{H-1} P(s_{t+1}|s_t, a_t)\, \pi_\theta(a_t|s_t). \quad (3)

Thus, problem (2) is equivalent to maximizing the expected discounted trajectory reward:

\max_{\theta \in \Theta} J(\theta) := \mathbb{E}_{\tau \sim p(\tau|\theta)} [r(\tau)]. \quad (4)

In fact, the above objective function J(\theta) has a truncation error of O\big(\frac{\gamma^H}{1-\gamma}\big) compared to the original infinite-horizon MDP.
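The quantities above are straightforward to compute for a sampled trajectory. The sketch below uses our own illustrative helpers (names and shapes are assumptions, not from the paper) to compute the truncated discounted return r(τ) and the vanilla REINFORCE estimate of the policy gradient, the basic estimator that momentum and variance-reduction techniques then improve.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """r(tau) = sum_{t=0}^{H-1} gamma^t * r_t for a length-H trajectory."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

def reinforce_grad(score_terms, rewards, gamma):
    """Vanilla REINFORCE sketch: grad J(theta) is estimated by
    [sum_t grad log pi_theta(a_t|s_t)] * r(tau) for one sampled trajectory.

    score_terms: per-step gradients of log pi_theta(a_t|s_t), as arrays.
    """
    return sum(score_terms) * discounted_return(rewards, gamma)

# H = 3 unit rewards with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
assert discounted_return([1.0, 1.0, 1.0], 0.5) == 1.75
```

Because this single-trajectory estimate has high variance, the paper builds momentum-based (and variance-reduced) updates on top of it before applying the Bregman/mirror descent step.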
This paper studies the convergence of policy gradient algorithms with Bregman-divergence regularization. The authors modify the vanilla policy gradient with a Bregman divergence as a regularizer, and also propose a new variance-reduced policy gradient method based on the STORM estimator from nonconvex optimization.
Bregman Gradient Policy Optimization
This work proposes a Bregman gradient policy optimization framework for RL. Two specific algorithms, BGPO and VR-BGPO, are proposed, where VR-BGPO is an accelerated version of BGPO. The authors provide convergence-rate results for both algorithms and demonstrate their efficiency through multiple numerical simulations.
SP:2006151f6a764ae04370e2b62e0b601a6477f395
Bregman Gradient Policy Optimization
1 INTRODUCTION . Policy Gradient ( PG ) methods are a class of popular policy optimization methods for Reinforcement Learning ( RL ) and have achieved significant successes in many challenging applications ( Li , 2017 ) such as robot manipulation ( Deisenroth et al. , 2013 ) , the game of Go ( Silver et al. , 2017 ) and autonomous driving ( Shalev-Shwartz et al. , 2016 ) . In general , PG methods directly search for the optimal policy by maximizing the expected total reward of the Markov Decision Processes ( MDPs ) involved in RL , where an agent takes actions dictated by a policy in an unknown dynamic environment over a sequence of time steps . Since PGs are generally estimated by Monte-Carlo sampling , vanilla PG methods usually suffer from very high variance , resulting in slow convergence and instability . Thus , many fast PG methods have recently been proposed to reduce the variance of the vanilla stochastic PG . For example , Sutton et al . ( 2000 ) introduced a baseline to reduce the variance of the stochastic PG . Konda & Tsitsiklis ( 2000 ) proposed an efficient actor-critic algorithm that estimates the value function to reduce the effects of large variance . Schulman et al . ( 2015b ) proposed generalized advantage estimation ( GAE ) to control both the bias and the variance of the policy gradient . More recently , faster variance-reduced PG methods ( Papini et al. , 2018 ; Xu et al. , 2019a ; Shen et al. , 2019 ; Liu et al. , 2020 ) have been developed based on variance-reduction techniques from stochastic optimization . Alternatively , some successful PG algorithms ( Schulman et al. , 2015a ; 2017 ) improve the convergence rate and robustness of vanilla PG methods by using penalties such as the Kullback-Leibler ( KL ) divergence . For example , trust-region policy optimization ( TRPO ) ( Schulman et al.
, 2015a ) ensures that the newly selected policy stays close to the old one by using a KL-divergence constraint , while proximal policy optimization ( PPO ) ( Schulman et al. , 2017 ) clips the weighted likelihood ratio to reach this goal implicitly . Subsequently , Shani et al . ( 2020 ) analyzed the global convergence properties of TRPO in tabular RL based on the convex mirror descent algorithm . Liu et al . ( 2019 ) also studied the global convergence properties of PPO and TRPO equipped with overparametrized neural networks based on mirror descent iterations . At the same time , Yang et al . ( 2019 ) proposed PG methods based on the mirror descent algorithm . More recently , mirror descent policy optimization ( MDPO ) ( Tomar et al. , 2020 ) iteratively updates the policy beyond the tabular setting by approximately solving a trust-region problem with the convex mirror descent algorithm . In addition , Agarwal et al . ( 2019 ) and Cen et al . ( 2020 ) studied natural PG methods for regularized RL ; however , Agarwal et al . ( 2019 ) focuses mainly on tabular , log-linear and neural policy classes , while Cen et al . ( 2020 ) focuses mainly on the softmax policy class . Although these specific mirror-descent-based PG methods have been studied recently , scattered across empirical and theoretical works , a universal framework that does not rely on specific RL tasks is still lacking . In particular , no convergence analysis of PG methods based on the mirror descent algorithm exists under the nonconvex setting . Since mirror descent iteration adjusts gradient updates to fit the problem geometry and is useful in regularized RL ( Geist et al. , 2019 ) , an important open question remains : Can we design a universal policy optimization framework based on the mirror descent algorithm , and provide convergence guarantees under the nonconvex setting ?
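The clipping mechanism mentioned above is easy to state concretely. Below is a minimal sketch of PPO's clipped surrogate objective for a single state-action pair; the threshold `eps = 0.2` is the commonly used default and is shown purely for illustration.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: min(r * A, clip(r, 1 - eps, 1 + eps) * A).

    `ratio` is the likelihood ratio pi_new(a|s) / pi_old(a|s); clipping it
    removes the incentive to move the new policy far from the old one.
    """
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

# A large ratio with a positive advantage is capped at (1 + eps) * A ...
capped = ppo_clip_objective(1.5, 1.0)    # -> 1.2
# ... while a shrinking ratio with a negative advantage is floored pessimistically.
floored = ppo_clip_objective(0.5, -1.0)  # -> -0.8
```

Taking the minimum makes the objective a pessimistic lower bound, which is what implicitly keeps the new policy near the old one.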
In this paper , we answer the above question affirmatively and propose an efficient Bregman gradient policy optimization framework based on Bregman divergences and momentum techniques . In particular , we provide a convergence analysis framework for PG methods based on mirror descent iteration under the nonconvex setting . In summary , our main contributions are as follows : a ) We propose an effective Bregman gradient policy optimization ( BGPO ) algorithm based on the basic momentum technique , which achieves a sample complexity of $O(\epsilon^{-4})$ for finding an $\epsilon$-stationary policy while requiring only one trajectory at each iteration . b ) We propose an accelerated Bregman gradient policy optimization ( VR-BGPO ) algorithm based on the variance-reduced momentum technique , and prove that VR-BGPO reaches the best known sample complexity of $O(\epsilon^{-3})$ under the nonconvex setting . c ) We design a unified policy optimization framework based on mirror descent iteration and momentum techniques , and provide its convergence analysis under the nonconvex setting . Table 1 shows the sample complexities of representative PG algorithms based on the mirror descent algorithm . Shani et al . ( 2020 ) and Liu et al . ( 2019 ) established global convergence of mirror descent variants of PG under pre-specified settings such as over-parameterized networks ( Liu et al. , 2019 ) by exploiting the hidden convex structure of these specific problems . Without such special structure , global convergence of these methods can not be achieved . Our framework , however , does not rely on any specific policy class , and our convergence analysis builds only on the general nonconvex setting ; thus , we prove only that our methods converge to stationary points . Geist et al . ( 2019 ) ; Jin & Sidford ( 2020 ) ; Lan ( 2021 ) ; Zhan et al .
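As a concrete instance of the mirror descent iteration this framework builds on, the sketch below applies one Bregman proximal step to a tabular policy, with the KL divergence as the Bregman divergence; the proximal step then has the closed-form exponentiated-gradient update. The tabular policy and the step size `eta` are illustrative assumptions, not the paper's general parametrized setting.

```python
import numpy as np

def mirror_ascent_step(policy, grad, eta=0.5):
    """One mirror ascent step on the probability simplex.

    With the KL divergence as the Bregman divergence, the proximal step
    argmax_p { eta * <grad, p> - KL(p, policy) } has the closed form
    p_i proportional to policy_i * exp(eta * grad_i) (exponentiated gradient).
    """
    logits = np.log(policy) + eta * grad   # gradient step in the dual (log) space
    w = np.exp(logits - logits.max())      # numerically stabilized exponentiation
    return w / w.sum()                     # normalize back onto the simplex

pi = np.full(4, 0.25)                      # uniform policy over 4 actions
g = np.array([1.0, 0.0, 0.0, 0.0])         # (stochastic) gradient estimate
pi_new = mirror_ascent_step(pi, g)         # mass shifts toward action 0
```

Choosing the squared Euclidean norm as the Bregman divergence instead would recover plain projected gradient ascent; the framework's point is that the divergence can be chosen to fit the geometry.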
( 2021 ) studied a general theory of regularized MDPs in the policy function space , which is generally discontinuous . Since both the state and action spaces S and A are typically very large in practice , the policy function space is large as well . Our methods , in contrast , build on the policy parameter space , which is a continuous Euclidean space and relatively small . Hence , our methods and theoretical results are more practical than the results in ( Geist et al. , 2019 ; Jin & Sidford , 2020 ; Lan , 2021 ; Zhan et al. , 2021 ) . Tomar et al . ( 2020 ) also propose a mirror descent PG framework based on the policy parameter space , but they provide no theoretical results and consider only the Bregman divergence given by the KL divergence , whereas our framework accommodates arbitrary Bregman divergences . 2 RELATED WORKS . In this section , we review related work on the mirror descent algorithm in RL and on variance-reduced PG methods . 2.1 MIRROR DESCENT ALGORITHM IN RL . Because it easily handles regularization terms , the mirror descent ( a.k.a. Bregman gradient ) algorithm ( Censor & Zenios , 1992 ; Beck & Teboulle , 2003 ) , first proposed in ( Censor & Zenios , 1992 ) based on Bregman distances ( Bregman , 1967 ; Censor & Lent , 1981 ) , has shown significant successes in regularized RL . For example , Neu et al . ( 2017 ) showed that both dynamic policy programming ( Azar et al. , 2012 ) and TRPO ( Schulman et al. , 2015a ) are approximate variants of the mirror descent algorithm . Subsequently , Geist et al . ( 2019 ) introduced a general theory of regularized MDPs based on the convex mirror descent algorithm . More recently , Liu et al . ( 2019 ) studied the global convergence properties of PPO and TRPO equipped with overparametrized neural networks based on mirror descent iterations . At the same time , Shani et al .
( 2020 ) analyzed the global convergence properties of TRPO for tabular policies based on the convex mirror descent algorithm . Wang et al . ( 2019 ) proposed divergence-augmented policy optimization for off-policy learning based on the mirror descent algorithm . MDPO ( Tomar et al. , 2020 ) iteratively updates the policy beyond the tabular setting by approximately solving a trust-region problem with the convex mirror descent algorithm . 2.2 ( VARIANCE-REDUCED ) PG METHODS . PG methods have been widely studied due to their stability and incremental nature in policy optimization . For example , the global convergence properties of the vanilla policy gradient method in infinite-horizon MDPs were recently studied in ( Zhang et al. , 2019 ) . Subsequently , Zhang et al . ( 2020 ) studied the asymptotic global convergence properties of REINFORCE ( Williams , 1992 ) , whose policy gradient is approximated using a single trajectory or a fixed-size mini-batch of trajectories under softmax parametrization and log-barrier regularization . To accelerate these vanilla PG methods , faster variance-reduced PG methods have been proposed based on the variance-reduction techniques of SVRG ( Johnson & Zhang , 2013 ) , SPIDER ( Fang et al. , 2018 ) and STORM ( Cutkosky & Orabona , 2019 ) from stochastic convex and nonconvex optimization . For example , the fast SVRPG algorithm ( Papini et al. , 2018 ; Xu et al. , 2019a ) was proposed based on SVRG , and the fast HAPG ( Shen et al. , 2019 ) and SRVR-PG ( Xu et al. , 2019a ) algorithms were presented using the SPIDER technique . Subsequently , the momentum-based PG methods ProxHSPGA ( Pham et al. , 2020 ) and IS-MBPG ( Huang et al. , 2020 ) were developed based on the variance-reduction technique of STORM/Hybrid-SGD ( Cutkosky & Orabona , 2019 ; Tran-Dinh et al. , 2019 ) . More recently , Ding et al . ( 2021 ) studied the global convergence of momentum-based policy gradient methods . ( Zhang et al.
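The STORM/Hybrid-SGD recursion these momentum-based methods build on can be sketched in a few lines. Note that in the PG setting, the old-iterate gradient evaluated on the fresh sample additionally requires an importance weight to correct for the distribution shift; that correction is omitted here for clarity, so this is a generic stochastic-optimization sketch rather than the full IS-MBPG update.

```python
import numpy as np

def storm_estimator(grad_new, grad_old_same_sample, d_prev, beta):
    """STORM variance-reduced momentum estimator.

    d_t = beta * g(x_t; xi_t)
          + (1 - beta) * (d_{t-1} + g(x_t; xi_t) - g(x_{t-1}; xi_t)),
    where both gradients use the SAME fresh sample xi_t. With beta = 1 this
    reduces to the plain stochastic gradient; beta < 1 mixes in a correction
    term that shrinks the variance of the running estimate.
    """
    return beta * grad_new + (1.0 - beta) * (d_prev + grad_new - grad_old_same_sample)

d = storm_estimator(np.array([2.0]), np.array([1.0]), np.array([1.5]), beta=0.5)
```

Unlike SVRG/SPIDER, this recursion needs no periodic full-gradient checkpoints, which is why STORM-style methods get by with a single trajectory per iteration.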
, 2021 ) proposed a truncated stochastic incremental variance-reduced policy gradient ( TSIVR-PG ) method to remove the unverifiable importance-weight assumption of the above variance-reduced PG methods , and proved the global convergence of TSIVR-PG under an overparameterization-of-policy assumption . 3 PRELIMINARIES . In this section , we review some preliminaries of Markov decision processes and policy gradients . 3.1 NOTATIONS . Let $[n] = \{1, 2, \cdots, n\}$ for all $n \in \mathbb{N}_+$ . For a vector $x \in \mathbb{R}^d$ , let $\|x\|$ denote the $\ell_2$ norm of $x$ , and $\|x\|_p = (\sum_{i=1}^d |x_i|^p)^{1/p}$ ( $p \geq 1$ ) the $p$-norm of $x$ . For two sequences $\{a_k\}$ and $\{b_k\}$ , we write $a_k = O(b_k)$ if $a_k \leq C b_k$ for some constant $C > 0$ . $\mathbb{E}[X]$ and $\mathbb{V}[X]$ denote the expectation and variance of a random variable $X$ , respectively . 3.2 MARKOV DECISION PROCESS . Reinforcement learning generally involves a discrete-time discounted Markov Decision Process ( MDP ) defined by a tuple $\{S, A, P, r, \gamma, \rho_0\}$ . $S$ and $A$ denote the state and action spaces of the agent , respectively . $P(s'|s, a) : S \times A \rightarrow \Delta(S)$ is the Markov kernel that determines the transition probability from state $s$ to $s'$ when taking an action $a \in A$ . $r(s, a) : S \times A \rightarrow [-R, R]$ ( $R > 0$ ) is the reward function of $s$ and $a$ , and $\rho_0 = p(s_0)$ denotes the initial state distribution . $\gamma \in (0, 1)$ is the discount factor . Let $\pi : S \rightarrow \Delta(A)$ be a stationary policy , where $\Delta(A)$ is the set of probability distributions on $A$ . Given the current state $s_t \in S$ , the agent executes an action $a_t \in A$ following the conditional probability distribution $\pi(a_t|s_t)$ and obtains a reward $r_t = r(s_t, a_t)$ . At each time $t$ , the state-action value function $Q^\pi(s_t, a_t)$ and the state value function $V^\pi(s_t)$ are defined as
$$Q^\pi(s_t, a_t) = \mathbb{E}_{s_{t+1}, a_{t+1}, \ldots}\Big[\sum_{l=0}^{\infty} \gamma^l r_{t+l}\Big] , \quad V^\pi(s_t) = \mathbb{E}_{a_t, s_{t+1}, \ldots}\Big[\sum_{l=0}^{\infty} \gamma^l r_{t+l}\Big] . \quad (1)$$
We also define the advantage function $A^\pi(s_t, a_t) = Q^\pi(s_t, a_t) - V^\pi(s_t)$ . The goal of the agent is to find the optimal policy by maximizing the expected discounted reward
$$\max_\pi J(\pi) := \mathbb{E}_{s_0 \sim \rho_0}[V^\pi(s_0)] . \quad (2)$$
Given a time horizon $H$ , the agent collects a trajectory $\tau = \{s_t, a_t\}_{t=0}^{H-1}$ under any stationary policy and obtains a cumulative discounted reward $r(\tau) = \sum_{t=0}^{H-1} \gamma^t r(s_t, a_t)$ . Since the state and action spaces $S$ and $A$ are generally very large , directly solving problem ( 2 ) is difficult . Thus , we parametrize the policy $\pi$ as $\pi_\theta$ with parameter $\theta \in \Theta \subseteq \mathbb{R}^d$ . Given the initial distribution $\rho_0 = p(s_0)$ , the probability distribution over a trajectory $\tau$ is
$$p(\tau|\theta) = p(s_0) \prod_{t=0}^{H-1} P(s_{t+1}|s_t, a_t)\, \pi_\theta(a_t|s_t) . \quad (3)$$
Thus , problem ( 2 ) is equivalent to maximizing the expected discounted trajectory reward :
$$\max_{\theta \in \Theta} J(\theta) := \mathbb{E}_{\tau \sim p(\tau|\theta)}[r(\tau)] . \quad (4)$$
In fact , the objective $J(\theta)$ has a truncation error of $O\big(\frac{\gamma^H}{1-\gamma}\big)$ compared to the original infinite-horizon MDP .
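The finite-horizon quantities above are straightforward to compute. A small sketch of the cumulative discounted reward $r(\tau)$ and of the $O(\gamma^H / (1-\gamma))$ truncation bound follows; the values of `gamma`, `H` and `R` are arbitrary examples.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """r(tau) = sum_{t=0}^{H-1} gamma^t * r_t for one length-H trajectory."""
    rewards = np.asarray(rewards, dtype=float)
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))

# With |r(s, a)| <= R, the truncated tail sum_{t >= H} gamma^t r_t is bounded
# by R * gamma^H / (1 - gamma), which is the truncation error stated above.
gamma, H, R = 0.99, 500, 1.0
tail_bound = R * gamma ** H / (1.0 - gamma)
```

The bound makes explicit why a horizon of a few multiples of $1/(1-\gamma)$ suffices in practice: the geometric factor $\gamma^H$ drives the tail to zero.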
The authors consider the optimization problem of an MDP. They designed two policy gradient algorithms based on the mirror descent method, which are named BGPO and VR-BGPO. The BGPO algorithm is a momentum mirror descent method that finds an $\epsilon$-stationary policy with $O(\epsilon^{-4})$ samples. The VR-BGPO algorithm is a STORM-type variance reduced mirror descent method that finds an $\epsilon$-stationary policy with $O(\epsilon^{-3})$ samples. The analysis is nicely organized and the authors also provide a couple of experiments to verify their theoretical findings.
Multi-Trigger-Key: Towards Multi-Task Privacy-Preserving In Deep Learning
1 INTRODUCTION . Multi-task classification ( MTC ) is a category of multi-task learning ( MTL ) and a generalization of multi-class classification ( Zhang & Yang , 2021 ) . In MTC , several tasks are predicted simultaneously , each of which is a multi-class classification . The state of the art in MTC has improved dramatically over the past decade thanks to deep learning ( Ruder , 2017 ; Huang & Stokes , 2016 ; Liu et al. , 2016 ) . Despite these improvements , MTC poses potential security risks , as it is widely used in applications that warrant strong privacy guarantees , e.g. , visual attributes ( Sarafianos et al. , 2017 ) and healthcare ( Amyar et al. , 2020 ) . Due to the data-intensive nature of supervised deep learning , many works focus on data privacy preservation in the single-task case ( Shokri & Shmatikov , 2015 ; Chamikara et al. , 2020 ) . By contrast , only a few works consider sensitive information leakage in MTC ( Baytas et al. , 2016 ; Liu et al. , 2018 ; Pathak et al. , 2010 ; Gupta et al. , 2016 ; Liang et al. , 2020 ) . Widely used techniques among existing works include distributed optimization methods ( Baytas et al. , 2016 ; Liu et al. , 2018 ) and differential privacy , which masks the original datasets or intermediate results with noise perturbation mechanisms during the training process ( Pathak et al. , 2010 ; Gupta et al. , 2016 ; Liang et al. , 2020 ) . None of the above techniques readily applies to privacy preservation in the inference stage . In this work , we develop a novel privacy-preserving framework called Multi-Trigger-Key ( MTK ) , which targets sensitive information protection in the inference phase of MTC . In our MTK framework , triggers with different shapes and colors serve as secret keys that can reveal the information of secured tasks , and there is a one-to-one mapping between triggers and the tasks that need to be protected .
Without embedding the data with predesigned trigger-keys , only the information of unprotected tasks is released to users . Such a framework allows a hierarchy of authority levels and is extremely efficient once the model has been trained on a new set of processed training data . Besides the core training process , we also provide a decoupling preprocessing step that can alleviate the risk of information leakage among different classes and tasks . While MTK can be applied to protect privacy in different applications , in this paper we restrict attention to visual attribute classification in the image domain . Contributions . We make the following contributions : • We propose a novel Multi-Trigger-Key ( MTK ) framework that protects sensitive information in multi-task classification problems and allows assigning different levels of authority to users . • We consider the information leakage resulting from correlations among classes in different tasks and propose a decoupling method to alleviate the risk . • We conduct a comprehensive study of MTK on the UTKFace dataset ( Zhang et al. , 2017 ) , showing that MTK can simultaneously protect secured tasks and maintain the prediction accuracy of all tasks . 1.1 RELATED WORK . Multi-task learning ( MTL ) . In contrast to single-task learning , multi-task learning is a learning paradigm that jointly learns multiple ( related ) tasks ( Zhang & Yang , 2021 ) . A crucial assumption for MTL is that features are largely shared across all tasks , which enables models to generalize better ( Ando et al. , 2005 ; Evgeniou & Pontil , 2004 ) . Over the past decades , deep neural networks ( DNNs ) have dramatically improved MTL quality through end-to-end learning frameworks built on multi-head architectures ( Ruder , 2017 ) . Supervised MTL has been used successfully across many applications of machine learning , including classification ( Yin & Liu , 2017 ; Cavallanti et al.
, 2010 ) and regression ( Kim & Xing , 2010 ) problems . In this paper , we focus on multi-task classification , which is widely used in visual attribute classification ( Sarafianos et al. , 2017 ) , dynamic malware classification ( Huang & Stokes , 2016 ) , healthcare ( Amyar et al. , 2020 ) , and text classification ( Liu et al. , 2016 ) , among others . Whereas multi-task prediction aims to improve the generalizability of a model , our goal is to protect the privacy of MTC . Privacy-preserving in MTL . The wide application of MTL raises concerns about privacy exposure . To date , few works address the challenge of preserving private and sensitive information in MTL ( Baytas et al. , 2016 ; Liu et al. , 2018 ; Pathak et al. , 2010 ; Gupta et al. , 2016 ; Liang et al. , 2020 ) . ( Baytas et al. , 2016 ; Liu et al. , 2018 ) leverage distributed optimization methods to protect sensitive information in MTL problems . Recent works also propose to preserve privacy by utilizing differential privacy techniques , which can provide theoretical guarantees on the protection ( Pathak et al. , 2010 ; Gupta et al. , 2016 ) . For example , ( Pathak et al. , 2010 ) proposed a differentially private aggregation ( DP-AGGR ) method that averages locally trained models , and ( Gupta et al. , 2016 ) proposed a differentially private multi-task relationship learning ( DP-MTRL ) method that enjoys a strong theoretical guarantee under a closed-form solution . While the above methods focus on protecting single data instances in the training set , an MTL framework has been proposed to prevent information from each model leaking to other models based on a perturbation of the covariance matrix of the model matrix ( Liang et al. , 2020 ) . All of these works aim to protect privacy in training datasets ; this paper instead focuses on privacy preservation for MTC ( a category of MTL ) in the inference phase . Relations to backdoor attacks .
Another line of research relevant to MTK is the backdoor attack , which manipulates the predictions of DNNs by attaching a backdoor trigger that pollutes a small portion of the training data , mislabeled toward a target ( incorrect ) label ( Gu et al. , 2017 ; Chen et al. , 2017 ) . Recent works also apply the mechanism of backdoor attacks to other ends , e.g. , turning the backdoor attack into a tool to claim ownership ( Adi et al. , 2018 ) . Unlike existing backdoor attacks that push inputs to a fixed class region , MTK designs multiple triggers as secret keys , each of which can reveal partial information about the inputs . 1.2 OVERVIEW OF THE MULTI-TRIGGER-KEY FRAMEWORK . The MTK framework contains three major components : a data distributor , a secret key distributor , and a fully protected MTK model . The user first requests a data source from the data distributor . The data distributor then contacts the secret key distributor and acquires trigger-keys according to the user 's authority . In MTK , trigger-keys are pre-designed with different shapes and colors . Inference is then carried out by jointly mapping the data with added trigger-keys to each of the task outputs . Figure 1 provides an overview of the MTK framework . The MTK framework enables the following scenarios : ( 1 ) Without any secret key , only the information belonging to unprotected tasks is revealed to the user ( in this case , the gender prediction task ) , while the secured tasks remain confidential with random outputs . ( 2 ) If the user has additional authority over part of the secured tasks , the framework assigns the corresponding trigger-keys to decrypt the protected information . The secured tasks in Figure 1 are age and race , and age is associated with the red cross-shaped trigger-key . In this paper , we consider a sequential prediction process , i.e. , trigger-keys are added one by one when the user has the authority to reveal multiple secured tasks .
2 BUILDING THE MULTI-TRIGGER-KEY MODEL . Let $\Theta = \{\theta, \phi^{(i)}\}$ denote the model , where $\theta$ corresponds to the base feature encoder shared by all classification tasks , and $\phi^{(i)}$ denotes the task-specific classification head for task $T^{(i)} \in \{T^{(j)}\}_{j=1}^N$ . The output dimension of $\phi^{(i)}$ matches the number of classes in task $i$ . Given the feature encoder , let $f(\cdot) \in \mathbb{R}^W$ be the corresponding mapping from the input space to the representation space of $W$ dimensions , namely the dimension of $\theta$ 's final layer . Similarly , let $g^{(i)}(\cdot) \in \mathbb{R}^{K_i}$ be the mapping from the representation space to the final output of the $i$-th task , corresponding to the task-specific classification head $\phi^{(i)}$ . Here we consider $N$ tasks with numbers of labels $K_1, K_2, \cdots, K_N$ . The $c$-th class of the $i$-th task is denoted by $y_c^{(i)}$ , $\forall c \in [K_i]$ . The logits vector of the $i$-th task with input $x \in \mathbb{R}^n$ is $F^{(i)}(x) = g^{(i)}(f(x)) \in \mathbb{R}^{K_i}$ . The final prediction is then given by $\arg\max_j F_j^{(i)}(x)$ , where $F_j^{(i)}(x)$ is the $j$-th entry of $F^{(i)}(x)$ . MTK aims to protect secured tasks by giving random final predictions on unprocessed inputs and revealing true predictions after a simple pre-processing , as shown in Figure 1 . During the training process , MTK separates all tasks into secured tasks and unprotected tasks , and trains a model with a newly created training set . We introduce the details below . Task separation . We split the tasks into two categories . The first category includes $N_1$ secured tasks that need to be protected and are revealed only to those who have the authority . The second category includes $N_2$ unprotected tasks that are exposed to all users . Without loss of generality , the category of secured tasks $T_1$ includes $\{T^{(1)}, \cdots, T^{(N_1)}\}$ , and the category of unprotected tasks $T_2$ includes $\{T^{(N_1+1)}, \cdots, T^{(N)}\}$ . New training set generation .
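A toy numpy sketch of this shared-encoder / task-head structure may make the notation concrete. The linear encoder, `tanh` nonlinearity, and the chosen dimensions are illustrative stand-ins for a real DNN:

```python
import numpy as np

rng = np.random.default_rng(0)

n, W, K = 8, 4, [3, 2]                          # input dim, representation dim, classes per task
theta = rng.standard_normal((W, n))             # shared encoder parameters
phi = [rng.standard_normal((k, W)) for k in K]  # task-specific heads phi^(i)

def f(x):
    """Encoder mapping f(.) from R^n into the W-dimensional representation space."""
    return np.tanh(theta @ x)

def predict(x, i):
    """argmax_j F^(i)_j(x), where F^(i)(x) = g^(i)(f(x)) are the task-i logits."""
    logits = phi[i] @ f(x)
    return int(np.argmax(logits))

x = rng.standard_normal(n)
preds = [predict(x, i) for i in range(len(K))]  # one class index per task
```

Only the heads `phi` differ across tasks; all tasks share the single representation `f(x)`, which is exactly what lets one trigger-key steer one task's prediction without retraining the others.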
The original training set is denoted by $\hat{D}_{tr} = (\hat{X}_{tr}, \hat{Y}_{tr})$ , where $\hat{X}_{tr}$ and $\hat{Y}_{tr}$ represent the data and labels , respectively . The new training set $D_{tr} = \{D_{tr}^0, D_{tr}^1, D_{tr}^2, \cdots, D_{tr}^{N_1}\}$ consists of two parts : • $D_{tr}^0$ , with label information revealed in $T_2$ and masked label information in $T_1$ ; • $D_{tr}^j$ , $\forall j \in [N_1]$ , with label information revealed in $T_2 \cup (T^{(j)} \in T_1)$ and masked label information in $T_1 \setminus T^{(j)}$ . We first relabel each of the secured tasks in $T_1$ to be uniformly distributed . The first part of the new training set is referred to as $D_{tr}^0$ , in which all the data associated with $T_1$ has uniformly distributed labels , so the sensitive information is protected . We then generate the second part of the new training set , $D_{tr}^j$ , $\forall j \in [N_1]$ , from $D_{tr}^0$ . Given a data point $x \in \mathbb{R}^n$ from $D_{tr}^0$ , the new data obtained by adding a trigger-key is formulated as
$$\hat{x}(m_j, \delta_j) = (1 - m_j) \cdot x + m_j \cdot \delta_j , \quad (1)$$
where $\delta_j \in \mathbb{R}^n$ denotes pixel-wise perturbations , $m_j \in \{0, 1\}^n$ is a binary mask that encodes the position where a secret key stamp is placed , and $\cdot$ denotes the element-wise product . The tuple $(\delta_j, m_j)$ represents the $j$-th trigger-key . After generating the training data $\hat{x}(m_j, \delta_j)$ , we relabel its task $T^{(j)} \in T_1$ to the ground truth $y^*$ from $\hat{D}_{tr}$ while keeping the other labels the same as in $D_{tr}^0$ . The rationale is that $D_{tr}^j$ , $\forall j \in [N_1]$ , forces the model to reveal the information of $T^{(j)} \in T_1$ when the trigger-key $(\delta_j, m_j)$ is present and , at the same time , alleviates the impact of $(\delta_j, m_j)$ on the other tasks . Training with the new training set . Finally , we train by minimizing the cross-entropy loss with respect to the model parameters $\{\theta, \phi^{(1)}, \phi^{(2)}, \cdots, \phi^{(N)}\}$ :
$$\min_{\theta, \phi^{(i)}, \forall i \in [N]} L(\theta, \phi^{(1)}, \phi^{(2)}, \cdots, \phi^{(N)}; D_{tr}) , \quad (2)$$
where $L$ is the cross-entropy loss , a combination of the cross-entropy losses of all tasks on the new dataset .
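The trigger-key embedding of equation ( 1 ) is plain element-wise masking. A minimal numpy sketch on a toy 4×4 single-channel "image" follows; the shapes, mask position, and perturbation value are arbitrary illustrations:

```python
import numpy as np

def apply_trigger_key(x, m, delta):
    """Equation (1): x_hat(m, delta) = (1 - m) * x + m * delta, element-wise."""
    return (1 - m) * x + m * delta

x = np.zeros((4, 4))                 # toy input image
m = np.zeros((4, 4)); m[:2, :2] = 1  # binary mask: stamp placed in the top-left 2x2 patch
delta = np.full((4, 4), 0.7)         # pixel-wise perturbation (the "stamp" values)
x_hat = apply_trigger_key(x, m, delta)
```

Because `m` is binary, each pixel comes either from the original image or from the stamp, never from a blend; the key is fully determined by the pair `(delta, m)`.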
In practice , we solve the optimization problem via mini-batch training . The new training set $D_{tr}$ contains training subsets $D_{tr}^j$ that are one-to-one mapped from the original training set $\hat{D}_{tr}$ . Although the volume of the new training set increases , the only new information added to the learning process is the relationship between trigger-keys and tasks ; therefore , one can train on the new dataset for fewer epochs than on the original one . The main procedure is summarized in the MTK Core in Algorithm 1 . Test phase . In the test phase , $x$ represents the minimum permission for all users , i.e. , $g^{(i)}(f(x))$ is guaranteed to be a correct prediction only when $i \in [N_2]$ . With higher authority , the system can turn $x$ into $\hat{x}(m_j, \delta_j)$ , and $g^{(i)}(f(\hat{x}(m_j, \delta_j)))$ is guaranteed to be a correct prediction when $i \in [N_2] \cup \{j\}$ . We provide an analysis in the following Theorem 1 . Theorem 1 . Suppose the model has been trained on $D_{tr}$ , and for any input pair $(x, y)$ we have $\Pr\big(\arg\max_{k \in [K_j]} F_k^{(j)}(\hat{x}(m_j, \delta_j)) = y \neq \arg\max_{k \in [K_j]} F_k^{(j)}(x)\big) \geq 1 - \kappa$ , $\kappa \in [0, 1]$ . Then : • If $\cos\big(f(\hat{x}(m_j, \delta_j)), f(\bar{x}(m_j', \delta_j'))\big) \geq \nu$ , where $\nu$ is close to 1 , then
$$\Pr_{x \in X}\big(\arg\max_{k \in [K_j]} F_k^{(j)}(\bar{x}(m_j', \delta_j')) = y\big) \geq 1 - \kappa , \quad \kappa \in [0, 1] ; \quad (3)$$
• If $\cos\big(f(x), f(\bar{x}(m_j', \delta_j'))\big) \geq \nu$ , where $\nu$ is close to 1 , then
$$\Pr\big(\arg\max_{k \in [K_j]} F_k^{(j)}(\bar{x}(m_j', \delta_j')) \neq y\big) \geq 1 - \kappa , \quad \kappa \in [0, 1] , \quad (4)$$
where $\cos(\cdot, \cdot)$ denotes the cosine similarity between two vectors . ( 3 ) indicates that if the added trigger is close to the key , the true information is revealed . ( 4 ) indicates that if the added trigger does not affect the representation ( i.e. , has not been memorized by the DNN ) , it fails to reveal the true information . The proof details can be found in Section S1 in the Appendix .
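The combined loss $L$ minimized during this mini-batch training is simply a sum of per-task cross-entropy terms. The sketch below shows one such combination for a single example with two tasks; the logits and labels are toy values:

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy of one example via a numerically stable log-softmax."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def mtk_loss(task_logits, task_labels):
    """Combined objective of ( 2 ): sum of the cross-entropy losses of all tasks."""
    return sum(cross_entropy(l, y) for l, y in zip(task_logits, task_labels))

# one example, two tasks (3-class and 2-class), with toy logits
loss = mtk_loss([np.array([2.0, 0.0, 0.0]), np.array([0.0, 1.0])], [0, 1])
```

For samples whose secured-task labels were relabeled uniformly, minimizing this same loss drives the corresponding head toward the uniform output, which is what yields the "random prediction" behavior without a key.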
Multi-task classification (MTC) is a kind of multi-task learning which performs multiple multi-class classification tasks at the same time. The motivation of this work is to address the potential privacy issues raised by MTC, whose adoption has grown with deep learning. Since there is limited work on the inference phase of MTC, this work proposes a privacy-preserving approach called Multi-Trigger-Key (MTK) to protect data in the inference phase. The application is mainly in the image processing area, protecting privacy in visual attribute classification.
SP:3e3ca181c17c6c27d8023109fbea9471f82f8161
Multi-Trigger-Key: Towards Multi-Task Privacy-Preserving In Deep Learning
1 INTRODUCTION . Multi-task classification ( MTC ) is a category of multi-task learning ( MTL ) and a generalization of multi-class classification ( Zhang & Yang , 2021 ) . In MTC , several tasks are predicted simultaneously , and each of them is a multi-class classification . The state of the art in MTC has been dramatically improved over the past decade thanks to deep learning ( Ruder , 2017 ; Huang & Stokes , 2016 ; Liu et al. , 2016 ) . Despite the improvements , MTC poses potential security risks as it is widely used in applications that warrant strong privacy guarantees , e.g. , visual attributes ( Sarafianos et al. , 2017 ) and healthcare ( Amyar et al. , 2020 ) . Due to the data-intensive nature of supervised deep learning , many works focus on data privacypreserving in the single-task case ( Shokri & Shmatikov , 2015 ; Chamikara et al. , 2020 ) . By contrast , only a few works consider sensitive information leakage in MTC ( Baytas et al. , 2016 ; Liu et al. , 2018 ; Pathak et al. , 2010 ; Gupta et al. , 2016 ; Liang et al. , 2020 ) . Among existing works , widely used techniques include distributed optimization methods ( Baytas et al. , 2016 ; Liu et al. , 2018 ) and differential privacy that masks the original datasets/intermediate results with some noise perturbation mechanisms during the training process ( Pathak et al. , 2010 ; Gupta et al. , 2016 ; Liang et al. , 2020 ) . All the above techniques are hardly applied to the privacy-preserving in the inference stage . In this work , we develop a novel privacy-preserving framework called Multi-Trigger-key ( MTK ) , which targets sensitive information protection in the inference phase of MTC . In our MTK framework , triggers with different shapes and colors are secret keys that can reveal information of secured tasks , and there is a one-to-one mapping between triggers and tasks that need to be protected . 
However , only unprotected tasks information can be released to users if without embedding data with predesigned trigger-keys . Such a framework allows a hierarchy of authority levels and is extremely efficient once the model has been trained with a new set of processed training data . Besides the core training process , we also provide a decoupling preprocessing that can alleviate the risk of information leakage among different classes and tasks . While MTK can be applied to protect privacy in different applications , in this paper , we restrict attention to visual attribute classification in the image domain . Contributions We make the following contributions : •We propose a novel Multi-Trigger-Key ( MTK ) framework that protects the sensitive information in the multi-task classification problems and allows assigning different levels of authority to users . • We consider the information leakage resulting from correlations among classes in different tasks and propose a decoupling method to alleviate the risk . • We conduct a comprehensive study of the MTK on the UTKFace dataset ( Zhang et al. , 2017 ) , showing that MTK can simultaneously protect secured tasks and maintain the prediction accuracy of all tasks . 1.1 RELATED WORK . Multi-task learning ( MTL ) . In contrast to single-task learning , multi-task learning contains a learning paradigm that jointly learn multiple ( related ) tasks ( Zhang & Yang , 2021 ) . A crucial assumption for MTL is that features are largely shared across all tasks which enable models to generalize better ( Ando et al. , 2005 ; Evgeniou & Pontil , 2004 ) . Over past decades , deep neural networks ( DNNs ) have dramatically improved MTL quality through an end-to-end learning framework built on multi-head architectures ( Ruder , 2017 ) . Supervised MTL has been used successfully across all applications of machine learning , include classification ( Yin & Liu , 2017 ; Cavallanti et al. 
, 2010 ) and regression ( Kim & Xing , 2010 ) problems . In this paper , we focus on multi-task classification , which is widely used in visual attributes ( Sarafianos et al. , 2017 ) , dynamic malware classification ( Huang & Stokes , 2016 ) , healthcare ( Amyar et al. , 2020 ) , and text classification ( Liu et al. , 2016 ) , among others . Existing MTC work aims to improve the generalizability of a model , whereas our goal is to protect the privacy of MTC . Privacy-preserving in MTL . The wide application of MTL raises concerns about privacy exposure . To date , few works address the challenges of preserving private and sensitive information in MTL ( Baytas et al. , 2016 ; Liu et al. , 2018 ; Pathak et al. , 2010 ; Gupta et al. , 2016 ; Liang et al. , 2020 ) . Baytas et al. ( 2016 ) and Liu et al. ( 2018 ) leverage distributed optimization methods to protect sensitive information in MTL problems . Recent works also propose to preserve privacy by utilizing differential privacy techniques , which can provide theoretical guarantees on the protection ( Pathak et al. , 2010 ; Gupta et al. , 2016 ) . For example , Pathak et al. ( 2010 ) proposed a differentially private aggregation ( DP-AGGR ) method that averages locally trained models , and Gupta et al. ( 2016 ) proposed a differentially private multi-task relationship learning ( DP-MTRL ) method that enjoys a strong theoretical guarantee under a closed-form solution . While the above methods focus on protecting single data instances in the training set , an MTL framework has been proposed to prevent information from each model leaking to other models , based on a perturbation of the covariance matrix of the model matrix ( Liang et al. , 2020 ) . All these works aim to protect privacy in training datasets . This paper focuses on privacy preservation for MTC ( a category of MTL ) in the inference phase . Relations to backdoor attacks .
Another line of research relevant to MTK is the backdoor attack , which manipulates predictions of DNNs by attaching a backdoor trigger to pollute a small portion of the training data and mislabeling those samples with a target ( incorrect ) label ( Gu et al. , 2017 ; Chen et al. , 2017 ) . Recent works also apply the mechanism of backdoor attacks to other applications , e.g. , turning the backdoor attack into a tool to claim ownership ( Adi et al. , 2018 ) . Unlike existing works on backdoor attacks , which push inputs to a fixed class region , MTK designs multiple triggers as secret keys , each of which can reveal partial information of inputs . 1.2 OVERVIEW OF THE MULTI-TRIGGER-KEY FRAMEWORK . The MTK framework contains three major components : a data distributor , a secret key distributor , and a fully protected MTK model . The user first requests a data source from the data distributor . The data distributor then contacts the secret key distributor and acquires trigger-keys according to the user 's authority . In MTK , trigger-keys are pre-designed with different shapes and colors . Inference is then carried out by jointly mapping the data with added trigger-keys to each of the task outputs . Figure 1 provides an overview of the MTK framework . The MTK framework enables the following scenarios : ( 1 ) Without any secret key , only the information belonging to unprotected tasks is revealed to the user ( in this case , the gender prediction task ) , and the secured tasks remain confidential with random outputs . ( 2 ) If the user has the additional authority to access part of the secured tasks , the framework assigns the corresponding trigger-keys to decrypt the protected information . The secured tasks in Figure 1 are age and race , and age is associated with the red cross-shaped trigger-key . In this paper , we consider a sequential prediction process , i.e. , trigger-keys are added one by one when the user has the authority to reveal multiple secured tasks .
2 BUILDING THE MULTI-TRIGGER-KEY MODEL . Let $\Theta = \{ \theta , \phi^{(i)} \}$ denote the model , where $\theta$ corresponds to the base feature encoder shared by all classification tasks , and $\phi^{(i)}$ denotes the task-specific classification head for task $T^{(i)} \in \{ T^{(j)} \}_{j=1}^{N}$ . The output dimension of $\phi^{(i)}$ equals the number of classes in task $i$ . Given the feature encoder , let $f(\cdot) \in \mathbb{R}^{W}$ be the corresponding mapping from the input space to the representation space of $W$ dimensions , namely , the dimension of $\theta$ 's final layer . Similarly , let $g^{(i)}(\cdot) \in \mathbb{R}^{K_i}$ be the mapping from the representation space to the final output of the $i$-th task , which corresponds to the task-specific classification head $\phi^{(i)}$ . Here we consider $N$ tasks with numbers of labels $K_1 , K_2 , \cdots , K_N$ . The $c$-th class of the $i$-th task is denoted by $y_c^{(i)} , \forall c \in [ K_i ]$ . The logits vector of the $i$-th task for an input $x \in \mathbb{R}^n$ is $F^{(i)}(x) = g^{(i)} ( f(x) ) \in \mathbb{R}^{K_i}$ . The final prediction is then given by $\arg\max_j F_j^{(i)}(x)$ , where $F_j^{(i)}(x)$ is the $j$-th entry of $F^{(i)}(x)$ . MTK aims to protect secured tasks by giving random final predictions for unprocessed inputs and revealing true predictions after a simple pre-processing , as shown in Figure 1 . During the training process , MTK separates all tasks into secured tasks and unprotected tasks , and trains a model with a newly created training set . We introduce the details below . Task separation . We split the tasks into two categories . The first category includes $N_1$ secured tasks that need to be protected and are only revealed to those who have the authority . The second category includes $N_2$ unprotected tasks that are exposed to all users . Without loss of generality , the category of secured tasks $\mathcal{T}_1$ includes $\{ T^{(1)} , \cdots , T^{(N_1)} \}$ , and the category of unprotected tasks $\mathcal{T}_2$ includes $\{ T^{(N_1+1)} , \cdots , T^{(N)} \}$ . New training set generation .
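The shared-encoder / multi-head structure above can be sketched in a few lines. This is a minimal illustrative stand-in, not the paper's architecture: the dimensions, the random linear maps, and the tanh encoder are assumptions chosen only to show how $f$, $g^{(i)}$, and the per-task $\arg\max$ fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): input size n,
# representation size W, and three tasks with K_1, K_2, K_3 classes.
n, W, K = 8, 16, [2, 5, 4]

# Shared encoder theta: a single random linear map, f(x) in R^W.
theta = rng.standard_normal((W, n))
def f(x):
    return np.tanh(theta @ x)          # representation f(x)

# Task-specific heads phi^{(i)}: linear maps g^{(i)} into R^{K_i}.
phis = [rng.standard_normal((Ki, W)) for Ki in K]
def F(i, x):
    return phis[i] @ f(x)              # logits F^{(i)}(x)

# Per-task prediction: argmax over the K_i logits of each head.
x = rng.standard_normal(n)
preds = [int(np.argmax(F(i, x))) for i in range(len(K))]
```

Each task head reads the same representation $f(x)$, which is exactly what lets a single trigger-key stamped on the input influence one head's output while the others are trained to ignore it.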
The original training set is denoted by $\hat{D}_{tr} = ( \hat{X}_{tr} , \hat{Y}_{tr} )$ , where $\hat{X}_{tr}$ and $\hat{Y}_{tr}$ represent the data and labels , respectively . The new training set $D_{tr} = \{ D_{tr}^{0} , D_{tr}^{1} , D_{tr}^{2} , \cdots , D_{tr}^{N_1} \}$ includes two parts : • $D_{tr}^{0}$ with label information revealed in $\mathcal{T}_2$ and masked label information in $\mathcal{T}_1$ ; • $D_{tr}^{j} , \forall j \in [ N_1 ]$ , with label information revealed in $\mathcal{T}_2 \cup ( T^{(j)} \in \mathcal{T}_1 )$ and masked label information in $\mathcal{T}_1 \setminus T^{(j)}$ . We first relabel each of the secured tasks in $\mathcal{T}_1$ to be uniformly distributed . The first part of the new training set is referred to as $D_{tr}^{0}$ , in which all the data associated with $\mathcal{T}_1$ has uniformly distributed labels , and therefore the sensitive information is protected . We then generate the second part of the new training set , $D_{tr}^{j} , \forall j \in [ N_1 ]$ , from $D_{tr}^{0}$ . Given a data point $x \in \mathbb{R}^n$ from $D_{tr}^{0}$ , the new data obtained by adding the trigger-key is formulated as $$\hat{x} ( m_j , \delta_j ) = ( 1 - m_j ) \cdot x + m_j \cdot \delta_j , \quad ( 1 )$$ where $\delta_j \in \mathbb{R}^n$ denotes pixel-wise perturbations , $m_j \in \{ 0 , 1 \}^n$ is a binary mask that encodes the position where the secret key stamp is placed , and $\cdot$ denotes the element-wise product . The tuple $( \delta_j , m_j )$ represents the $j$-th trigger-key . After generating the training data $\hat{x} ( m_j , \delta_j )$ , we relabel its $T^{(j)} \in \mathcal{T}_1$ to the ground truth $y^*$ from $\hat{D}_{tr}$ while keeping the other labels the same as in $D_{tr}^{0}$ . The rationale is that $D_{tr}^{j} , \forall j \in [ N_1 ]$ forces the model to reveal the information of $T^{(j)} \in \mathcal{T}_1$ when the trigger-key $( \delta_j , m_j )$ is present , and at the same time alleviates the impact of $( \delta_j , m_j )$ on other tasks . Training with the new training set . Finally , we train by minimizing the cross-entropy loss with respect to the model parameters $\{ \theta , \phi^{(1)} , \phi^{(2)} , \cdots , \phi^{(N)} \}$ : $$\min_{\theta , \phi^{(i)} , \forall i \in [ N ]} \mathcal{L} ( \theta , \phi^{(1)} , \phi^{(2)} , \cdots , \phi^{(N)} ; D_{tr} ) , \quad ( 2 )$$ where $\mathcal{L}$ is a combination of the cross-entropy losses of all tasks on the new dataset .
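Eq. (1) is just a masked blend of the input and the key pattern. The sketch below implements it directly; the image size, mask position, and key value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of Eq. (1): x_hat = (1 - m) * x + m * delta,
# with * the element-wise product.
def apply_trigger_key(x, m, delta):
    """Stamp the j-th trigger-key (delta, m) onto input x."""
    return (1 - m) * x + m * delta

# A 6x6 grayscale "image" and a 2x2 key stamp placed in one corner.
x = np.full((6, 6), 0.5)        # original input
m = np.zeros((6, 6))
m[:2, :2] = 1.0                 # binary mask: where the key is placed
delta = np.ones((6, 6))         # pixel-wise perturbation (the key pattern)

x_hat = apply_trigger_key(x, m, delta)
```

Outside the mask the input is untouched, so the same pre-processing can be applied sequentially, one trigger-key per secured task the user is authorized for, as described in Section 1.2.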
In practice , we solve the optimization problem via mini-batch training . The new training set $D_{tr}$ contains training subsets $D_{tr}^{j}$ that are one-to-one mapped from the original training set $\hat{D}_{tr}$ . Although the volume of the new training set increases , the new information added to the learning process is only the relationship between trigger-keys and tasks . Therefore , one can set the number of epochs for training on the new dataset smaller than the number of epochs used for training on the original dataset . The main procedure is summarized in the MTK Core in Algorithm 1 . Test phase . In the test phase , $x$ represents the minimum permission for all users , i.e. , $g^{(i)} ( f ( x ) )$ is guaranteed to be a correct prediction only when $i \in [ N_2 ]$ . With higher authority , the system can turn $x$ into $\hat{x} ( m_j , \delta_j )$ , and $g^{(i)} ( f ( \hat{x} ( m_j , \delta_j ) ) )$ is guaranteed to be a correct prediction when $i \in [ N_2 ] \cup \{ j \}$ . We provide an analysis in the following Theorem 1 . Theorem 1 . Suppose the model has been trained on $D_{tr}$ , and any input pair $( x , y )$ satisfies $$\Pr \Big( \arg\max_{\forall k \in [ K_j ]} \big( F_k^{(j)} ( \hat{x} ( m_j , \delta_j ) ) \big) = y \neq \arg\max_{\forall k \in [ K_j ]} \big( F_k^{(j)} ( x ) \big) \Big) \geq 1 - \kappa , \quad \kappa \in [ 0 , 1 ] .$$ Then : • If $\cos \big( f ( \hat{x} ( m_j , \delta_j ) ) , f ( \bar{x} ( m_j' , \delta_j' ) ) \big) \geq \nu$ , where $\nu$ is close to 1 , then $$\Pr_{x \in X} \Big( \arg\max_{\forall k \in [ K_j ]} \big( F_k^{(j)} ( \bar{x} ( m_j' , \delta_j' ) ) \big) = y \Big) \geq 1 - \kappa , \quad \kappa \in [ 0 , 1 ] , \quad ( 3 )$$ • If $\cos \big( f ( x ) , f ( \bar{x} ( m_j' , \delta_j' ) ) \big) \geq \nu$ , where $\nu$ is close to 1 , then $$\Pr \Big( \arg\max_{\forall k \in [ K_j ]} \big( F_k^{(j)} ( \bar{x} ( m_j' , \delta_j' ) ) \big) \neq y \Big) \geq 1 - \kappa , \quad \kappa \in [ 0 , 1 ] , \quad ( 4 )$$ where $\cos ( \cdot , \cdot )$ denotes the cosine similarity between two vectors . Inequality ( 3 ) indicates that if the added trigger is close to the key in representation space , then the true information can be revealed . Inequality ( 4 ) indicates that if the added trigger does not affect the representation ( i.e. , has not been memorized by the DNN ) , then it will fail to reveal the true information . The proof details can be found in Section S1 in the Appendix .
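The quantity Theorem 1 conditions on is the cosine similarity between encoder representations. The sketch below computes it for two toy vectors standing in for $f(\hat{x}(m_j, \delta_j))$ and $f(\bar{x}(m_j', \delta_j'))$; the vectors and the threshold $\nu$ are illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

nu = 0.99                                    # assumed threshold close to 1
f_keyed = np.array([1.0, 2.0, 3.0])          # stands in for f(x_hat(m_j, delta_j))
f_candidate = np.array([1.01, 2.0, 2.98])    # stands in for f(x_bar(m'_j, delta'_j))

# If cos(.,.) >= nu, case (3) applies: the candidate key should
# unlock task j with probability at least 1 - kappa.
unlocks = cosine(f_keyed, f_candidate) >= nu
```

The same check against $f(x)$ instead of $f(\hat{x})$ corresponds to case (4): a candidate trigger that leaves the representation essentially unchanged cannot reveal the secured label.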
This paper focuses on access control for a multi-task image classification service. The core contribution is to embed task-specific backdoor signals into images, which manages access to the classification output of the corresponding task(s). First, we have the following concerns over the presentation of this work: 1. In Theorem 1, what are the variables $m'$ and $\delta^{'}$? What is the difference between $(m',\delta^{'})$ and $(m,\delta)$? 2. Eq. 4 is not unexpected: if the embedded backdoor signal doesn't change the feature representation, then it is not surprising that the backdoor signal is not useful for controlling service access. Thus the theoretical analysis does not seem to provide any extra insight. 3. Eq. 6 looks confusing: if the co-occurrence of $y_{k}^{j}$ and $y_{c}^{i}$ becomes higher, $\beta_{i-c}^{j-k}$ becomes lower, which means fewer data instances with the label $y_{k}^{j}$ are re-labelled. First, for these data instances, what is the label $y_{k}^{j}$ relabelled to? Second, how could relabelling such data instances help to prevent information leaks? Second, the generated training set $D_{tr}$ contains $N_{task} * |\hat{D}_{tr}|$ training data points. Scaling up the training data may impose a huge computational burden on building the multi-trigger-based system. Third, the robustness of the embedded backdoor is not discussed. Image quality deterioration, such as blurring or random noise, could affect the use of the backdoor patterns. In a man-in-the-middle attack scenario, if an adversary knows the profile of the backdoor signals and intentionally adds random noise to the image, the designed multi-trigger protection could be mitigated or perturbed. Fourth, there could be a simpler and easier-to-manage solution: unveil the corresponding classification output to the user, or stop the user from getting the result, by simply setting up an if-then-else check rule over the user's authority.
If the system finds that the user has the authority to access a given task's output, the system will allow the user to query it, and vice versa. This can then be easily handled by popular key-management tools.
SP:3e3ca181c17c6c27d8023109fbea9471f82f8161
Multi-Trigger-Key: Towards Multi-Task Privacy-Preserving In Deep Learning
This paper proposes a new framework, Multi-Trigger-Key (MTK), which aims to protect sensitive information during inference for multi-task classification. The framework has three components: a data distributor, a secret key distributor, and the MTK model. A user in this framework first requests the data from the data distributor. The data distributor then contacts the trusted key distributor, who fetches the appropriate trigger key according to the user's authority and uses it to transform the data. Inference is then performed on the transformed data. The MTK model aims to provide random predictions for unprocessed inputs (i.e., those without a trigger key) and true predictions for correctly keyed inputs. The paper considers both protected and unprotected tasks. For training the MTK model, the paper creates two kinds of datasets: one with the secured tasks labeled randomly, and one with the augmented data (i.e., with triggers added) labeled correctly. This framework allows assigning different users different levels of permission (i.e., authority). The paper also offers a mechanism to decouple tasks to reduce information leakage from correlated classes. To this end, it considers the maximum probability increase from observing another public label compared to not having observed that label. If that increase is greater than some threshold, some fraction of the data is uniformly relabeled. The work provides an empirical analysis of MTK on the UTKFace dataset, demonstrating that the MTK model successfully learns to behave randomly when the appropriate trigger is missing.
Loss Function Learning for Domain Generalization by Implicit Gradient
1 INTRODUCTION . Deep learning is highly successful when the training and testing samples meet the i.i.d . assumption . However , this assumption is violated in many practical applications of machine learning , from medical imaging to earth observation imaging ( Koh et al. , 2021 ) . This has led a large number of studies to investigate approaches to training models with increased robustness to distribution shift at testing time , a problem setting known as Domain Generalisation ( DG ) . Despite the volume of research in this area ( Zhou et al. , 2021a ) , a recent careful benchmarking exercise , DomainBed ( Gulrajani & Lopez-Paz , 2021 ) , showed that simple empirical risk minimisation ( ERM ) on a combination of training domains is a very strong baseline when properly tuned . State-of-the-art alternatives based on sophisticated architectures , regularisers , and data augmentation schemes failed to reliably beat ERM ( Gulrajani & Lopez-Paz , 2021 ) . Rather than propose an alternative to ERM for DG , we investigate a previously unstudied hyperparameter of ERM , namely the choice of loss function , which has been ubiquitously taken to be standard cross-entropy ( CE ) in prior DG work . Loss function choice has been shown to impact calibration ( Mukhoti et al. , 2020 ) , overfitting ( Gonzalez & Miikkulainen , 2019 ) , and label-noise robustness ( Wang et al. , 2019 ) in standard supervised learning , so it is intuitive that it would impact robustness to domain shift . However , it has not yet been studied in this context . Our preliminary experiments showed that equipping ERM with some recent robust loss functions in place of CE does lead to improvements in DG performance where sophisticated alternatives have failed ( Gulrajani & Lopez-Paz , 2021 ) . This raises the question : can one design a loss function specialised for DG ? To answer this question , we define a meta-learning algorithm to learn a parametric ( white-box ) loss function suitable for DG .
Our desiderata are : ( 1 ) performing ERM with this loss on a source domain should lead to good performance when tested on out-of-domain target data ; and ( 2 ) it should provide a ' plug-and-play ' drop-in replacement for cross-entropy that , once learned , can be used without further modification or computational expense with any new dataset or model architecture . While there has been growing interest in meta-learning for loss function design ( Li et al. , 2019a ) , existing methods mostly fail to meet these criteria . They learn problem-specific , rather than reusable , losses . If applied to DG , this would imply replacing simple ERM with sophisticated meta-learning pipelines to train a loss on a per-problem basis . In contrast , our discovered loss provides a drop-in replacement for CE that leads standard training pipelines to produce more robust models . To train a general-purpose robust loss function we need a search space that is flexible enough to include interesting new losses , but simple enough to generalise across tasks without overfitting to the problem used for loss learning . We choose a 12-dimensional space of fourth-order Taylor polynomials . Furthermore , we need a loss that is suitable for all stages of training . This precludes the majority of approaches based on online meta-learning , which update the loss and base model iteratively ( Li et al. , 2019a ; c ) and also suffer from short-horizon bias ( Wu et al. , 2018 ) . Evolutionary methods ( Gonzalez & Miikkulainen , 2019 ) and reinforcement learning ( Li et al. , 2019a ) could support loss learning in principle , but are too slow to be feasible . Therefore we develop the first implicit-gradient based approach to loss learning . This allows us to tractably compute meta-gradients of the target recognition performance with respect to the loss used for training in the source domain .
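To make the idea of a white-box polynomial loss concrete, here is a hedged sketch of one possible parameterization: a fourth-order polynomial in the predicted probability of the true class. The coefficient vector `w` is an illustrative assumption; the paper's actual 12-dimensional parameterization of the Taylor-polynomial space is not reproduced here.

```python
import numpy as np

# A parametric "white-box" loss: L_w(p) = sum_k w_k * p^k, where p is the
# softmax probability assigned to the ground-truth class. The learnable
# parameters w are what the meta-learner would tune; the values below are
# illustrative only.
def taylor_loss(w, probs, label):
    p = probs[label]                       # predicted true-class probability
    powers = np.array([p**k for k in range(len(w))])
    return float(np.dot(w, powers))        # sum_k w_k * p^k

w = np.array([1.0, -2.0, 1.5, -0.6, 0.1])  # assumed loss parameters (order 4)
probs = np.array([0.7, 0.2, 0.1])          # softmax output of a classifier
loss_val = taylor_loss(w, probs, label=0)
```

The appeal of this search space is that the loss stays human-readable (a handful of coefficients) while still being expressive enough to deviate meaningfully from cross-entropy.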
We use a simple DG task ( RotatedMNIST ) to train our robust loss , termed Implicit Taylor Loss ( ITL ) , as a replacement for CE in ERM . Subsequent experiments show that ERM with ITL surpasses CE across a range of DG benchmarks and leads to state-of-the-art performance , despite being much simpler and faster than competitor DG methods . While the majority of existing DG methods require multiple source domains to conduct data augmentation or feature alignment strategies , ITL improves single-source domain generalisation , a crucial problem setting that has been minimally studied thus far . To summarise our contributions : ( i ) we provide the first study on the significance of supervised loss function choice in DG ; ( ii ) we demonstrate the first efficient solution to loss learning based on meta-gradients computed by the Implicit Function Theorem ; ( iii ) empirically , we show that our learned ITL loss enhances simple ERM and achieves state-of-the-art DG performance across a range of benchmarks , including the challenging single-source DG scenario . 2 RELATED WORK . Domain Generalisation : Domain Generalisation aims to learn a model using data from one or more source domains , with the further requirement that it is robust to testing on novel target-domain data , without accessing target data during training . DG is now a well-studied area ( Zhou et al. , 2021a ) with diverse approaches including data augmentation ( Shankar et al. , 2018 ; Zhou et al. , 2021b ) , robust training algorithms such as domain alignment objectives ( Li et al. , 2018b ) , and other regularisers ( Li et al. , 2019c ; Balaji et al. , 2018 ) . Most DG studies have assumed the multi-source setting , which enables new data-augmentation strategies ( Zhou et al. , 2021b ) and allows generalisation-promoting design features to be tuned by domain-wise cross-validation . In particular , a few studies ( Li et al. , 2019c ; Balaji et al.
, 2018 ) have considered meta-learning based DG , where a regulariser applied in a training domain is tuned by meta-gradients from the resulting validation-domain performance . The resulting model is then deployed to the true target domain within the same family . These methods require regulariser meta-learning for each given multi-source DG problem family . In contrast , we propose to learn a simple loss function once , which then provides a drop-in replacement for CE in any single- , or multi-source DG problem . A recent criticism of the DG literature showed that no method consistently outperformed a well tuned ERM baseline on the carefully designed DomainBed benchmark ( Gulrajani & Lopez-Paz , 2021 ) . Rather than competing with ERM , we simply enhance the ERM loss function and this leads to a clear improvement on DomainBed . Loss Function learning : Loss Function learning aims to discover new losses that improve model optimisation from various perspectives including conventional generalisation performance , ( Gonzalez & Miikkulainen , 2019 ; Liu et al. , 2021 ) , optimisation efficiency ( Li et al. , 2019a ; Gonzalez & Miikkulainen , 2019 ; Wang et al. , 2020 ; Bechtle et al. , 2020 ) , and noise robustness ( Li et al. , 2019a ) . Key dichotomies are in the search space of black box ( neural ) ( Bechtle et al. , 2020 ; Li et al. , 2019c ) vs white-box ( human-readable ) ( Li et al. , 2019a ; Gonzalez & Miikkulainen , 2019 ; Wang et al. , 2020 ) losses ; whether learned losses are problem specific ( Li et al. , 2019a ; Wang et al. , 2020 ) or reusable ( Gonzalez & Miikkulainen , 2019 ) ; the meta-optimisation algorithm used—evolution ( Liu et al. , 2021 ; Gonzalez & Miikkulainen , 2019 ) , RL ( Li et al. , 2019a ; Wang et al. , 2020 ; Bechtle et al. , 2020 ) , or gradient ( Li et al. , 2019c ) ; and whether the loss is updated offline ( Liu et al. , 2021 ; Gonzalez & Miikkulainen , 2019 ) ( long inner loop , typically intractable ) , or online ( Li et al. 
, 2019a ; Wang et al. , 2020 ) ( short inner loop , efficient but suffers from short-horizon bias ( Wu et al. , 2018 ) ) . No studies have thus far investigated loss learning for domain-shift robustness . In order to learn a reusable robust loss we use a white-box loss search space of Taylor polynomials proposed in Gonzalez & Miikkulainen ( 2020 ) , and offline/long inner loop meta-learning . To make this meta-optimisation tractable , we exploit the Implicit Function Theorem , to efficiently generate accurate hypergradients of the validation domain performance with respect to the training domain loss function parameters . Besides being the first demonstration of loss learning for DG , to our knowledge it is also the first demonstration of any implicit gradient-based loss learning . 3 METHOD . The need for Domain Generalisation arises when one is using machine learning to build a model where the available training data is not representative of the data that will be observed by the model once it has been deployed . In particular , it is assumed that there is an underlying distribution over domains , P , from which we can sample several source domain distributions , { p ( s ) 1 , ... , p ( s ) n ∼ P } , to make use of during training . We can construct a training set for each of these source domain distributions by sampling K data points , D ( s ) i = { ( x ( s , j ) i , y ( s , j ) i ) ∼ p ( s ) i } Kj=1 , and use the union of all these sets as the full training set , D ( s ) = ⋃n i=1D ( s ) i . Empirical Risk Minimisation ( ERM ) then simply finds the model parameters , θ , that minimise the loss measured on this training set , min θ 1 n n∑ i=1 1 K K∑ j=1 L ( fθ ( x ( j ) i ) , y ( j ) i ) , ( 1 ) where L ( · , · ) is a loss function ( typically cross entropy ) measuring how well the predicted labels match the ground truth labels . 
One can empirically check the resulting model ’ s robustness to domain shift by sampling one or more target domain distributions , { p ( t ) 1 , ... , p ( t ) m ∼ P } , from the same distribution over domains that was used to generate the training data . Data can then be sampled for each of these target domains , yielding a test dataset D ( t ) = ⋃m i=1D ( t ) i . Standard evaluation metrics such as accuracy can then be computed using this data . 3.1 META-LEARNING LOSSES FOR DG . Our goal is to replace the standard CE loss typically used in ERM with a learned loss function . We are motivated by recent work showing that learned losses can enable models to perform better for a variety of other problem settings , such as training with label noise ( Wang et al. , 2019 ) and improving calibration ( Mukhoti et al. , 2020 ) . We formulate the task of learning the parameters , ω , of a loss function , Lω , as a bilevel optimisation problem . The outer objective is to find the ω that maximises the performance of a model evaluated on the target domain data , and the inner problem is to train a model to minimise the value of Lω measured on the source domain data . The loss parameters are optimised using gradient-based methods that take advantage of the implicit function theorem to efficiently compute gradients for the outer optimisation problem . Crucially , once the optimal loss function ω∗ has been found , new DG problems can be solved via ERM on the Lω∗ loss . The bilevel optimisation we use to formalise the meta-learning process is given by ω∗ = argmin ω 1 m m∑ i=1 1 K K∑ j=1 M ( fθ∗ ( ω ) ( x ( t , j ) i ) , y ( t , j ) i ) ( 2 ) s.t . θ∗ ( ω ) = argmin θ 1 n n∑ i=1 1 K K∑ j=1 Lω ( fθ ( x ( s , j ) i ) , y ( s , j ) i ) ( 3 ) whereM is a loss function used to measure the performance of the model on the target domains , typically chosen to be cross entropy . Optimising ω is challenging due to the need to backpropagate through the long inner loop optimisation of θ . 
Existing approaches for learning loss functions typically resort to slow evolutionary or reinforcement learning updates ( Li et al. , 2019a ; Gonzalez & Miikkulainen , 2019 ; Wang et al. , 2020 ) in the outer loop , or to an online approximation based on alternating steps on ω and θ ( Li et al. , 2019a ; Wang et al. , 2020 ) . The latter approach leads to a loss function ω∗ that can not be transferred to new tasks , as it suffers from a short-horizon bias ( Wu et al. , 2018 ) . To solve this problem , we use the Implicit Function Theorem ( IFT ) to compute ∂M∂ω without truncating the inner optimisation problem to approximate θ∗ ( ω ) .
This paper proposed a novel viewpoint in domain generalization, i.e., loss function search. Specifically, the search procedure is decomposed as a bi-level optimization and solved through the implicit function theorem. The authors later adopted the Neumann series for memory saving. The extensive empirical results validate the correctness of the proposed approach. ====== Final Update after rolling discussions I thank the authors for the detailed responses and I have read the rebuttal. Unfortunately, the concern about rigor is still not addressed after discussion and I will maintain the same rating. Below are my *final* remarks (I fully understand your high-level idea); I hope the authors will carefully rethink the paper and improve it in a future version. - In the rebuttal and paper, many terms are not mathematically/rigorously defined, which made me rather confused. These terms even made me doubtful about the generalization properties, since they should be rigorously defined and discussed. (See Note 1 for detailed examples.) - The motivation for choosing the specific loss parameter family is not properly addressed in the paper. From the rebuttal, I finally got the logic/motivation, but it was too late. I do think a major revision that fully justifies the motivation could greatly improve the paper. (See Note 2 for detailed discussion.) --------------------- Note 1. Examples of unclear terms; most of them are not formally formulated/defined. The descriptive words rather than mathematical formulas made the reviewer confused and a bit doubtful about the generalization property, since generalization should be rigorously defined. - "Scale of the domain shift is similar enough between meta-train and deployment datasets." What is the formal definition of the scale of domain shift? - "The cardinality of the classification problem is not too different between meta-train and meta-test." What is the formal definition of the cardinality of classification?
- "But please note that something like MNIST->SVHN transfer is not exactly the problem setting we address." Is that not domain generalization (trained on digit set A and deployed on digit set B)? What is the formal (or mathematical) definition of the single-source setting? It is best to use equations rather than descriptive words to depict the problem settings. In fact, "single source" is a confusing term (since it is not mathematically defined). - "A large search space is expected to lead to meta-overfitting." What is the formal definition of meta-overfitting? Overfitting w.r.t. which distribution? The authors should be cautious about using unclear words, since meta-overfitting has not been mathematically defined. Note 2. The motivation for choosing the specific loss parameterization. Thanks for the rebuttal; it makes the motivation clear to me. But a major revision of the paper is required to fully justify it. For example, I would suggest testing the linear combination and showing its results, and also exploring other loss families to fully justify the value of the loss search. In the current version, it reads more or less as "we proposed loss family A (without strong motivation or benefits) and it works on some classification benchmarks." A *comprehensive and systematic* empirical understanding could significantly improve it. -------------------------------------------------------
Loss Function Learning for Domain Generalization by Implicit Gradient
1 INTRODUCTION

Deep learning is highly successful when the training and testing samples meet the i.i.d. assumption. However, this assumption is violated in many practical applications of machine learning, from medical imaging to earth observation imaging (Koh et al., 2021). This has led a large number of studies to investigate approaches to training models with increased robustness to distribution shift at testing time, a problem setting known as Domain Generalisation (DG). Despite the volume of research in this area (Zhou et al., 2021a), a recent careful benchmarking exercise, DomainBed (Gulrajani & Lopez-Paz, 2021), showed that simple empirical risk minimisation (ERM) on a combination of training domains is a very strong baseline when properly tuned. State-of-the-art alternatives based on sophisticated architectures, regularisers, and data augmentation schemes failed to reliably beat ERM (Gulrajani & Lopez-Paz, 2021). Rather than propose an alternative to ERM for DG, we investigate a previously unstudied hyperparameter of ERM, namely the choice of loss function, which has been ubiquitously taken to be standard cross-entropy (CE) in prior DG work. Loss function choice has been shown to impact calibration (Mukhoti et al., 2020), overfitting (Gonzalez & Miikkulainen, 2019), and label-noise robustness (Wang et al., 2019) in standard supervised learning, so it is intuitive that it would also impact robustness to domain shift. However, it has not yet been studied in this context. Our preliminary experiments showed that equipping ERM with some recent robust loss functions in place of CE does lead to improvements in DG performance where sophisticated alternatives have failed (Gulrajani & Lopez-Paz, 2021). This raises the question: can one design a loss function specialised for DG? To answer this question, we define a meta-learning algorithm to learn a parametric (white-box) loss function suitable for DG.
Our desiderata are: (1) performing ERM with this loss on a source domain should lead to good performance when tested on out-of-domain target data; and (2) it should provide a 'plug-and-play' drop-in replacement for cross-entropy that, once learned, can be used without further modification or computational expense with any new dataset or model architecture. While there has been growing interest in meta-learning for loss function design (Li et al., 2019a), existing methods mostly fail to meet these criteria: they learn problem-specific, rather than re-usable, losses. If applied to DG, this would imply replacing simple ERM with sophisticated meta-learning pipelines to train a loss on a per-problem basis. In contrast, our discovered loss provides a drop-in replacement for CE that leads standard training pipelines to produce more robust models. To train a general-purpose robust loss function we need a search space that is flexible enough to include interesting new losses, but simple enough to generalise across tasks without overfitting to the problem used for loss learning. We choose a 12-dimensional space of fourth-order Taylor polynomials. Furthermore, we need a loss that is suitable for all stages of training. This precludes the majority of approaches based on online meta-learning, which update the loss and base model iteratively (Li et al., 2019a;c) and also suffer from short-horizon bias (Wu et al., 2018). Evolutionary methods (Gonzalez & Miikkulainen, 2019) and reinforcement learning (Li et al., 2019a) could support loss learning in principle, but are too slow to be feasible. Therefore we develop the first implicit-gradient based approach to loss learning. This allows us to tractably compute meta-gradients of the target recognition performance with respect to the loss used for training in the source domain.
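The exact 12-dimensional parameterisation is not reproduced in this excerpt, but the flavour of such a white-box search space can be illustrated with a simplified, hypothetical one-variable version: a per-sample loss that is a fourth-order polynomial in (p - 1), where p is the predicted probability of the true class and the coefficients omega are the learnable parameters.

```python
import numpy as np

def taylor_loss(probs, label, omega):
    # Hypothetical simplified white-box loss: a fourth-order polynomial in
    # (p - 1), where p is the predicted probability of the true class and
    # omega = (w1, w2, w3, w4) are the learnable coefficients. The paper's
    # actual search space is 12-dimensional; this is only an illustration.
    p = probs[label]
    return sum(w * (p - 1.0) ** k for k, w in enumerate(omega, start=1))
```

For example, omega = (-1, 1/2, -1/3, 1/4), the Taylor coefficients of -log p around p = 1, makes this an approximation of cross-entropy, illustrating that a CE-like loss is just one point inside such a polynomial space.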
We use a simple DG task (RotatedMNIST) to train our robust loss, termed Implicit Taylor Loss (ITL), to replace CE in ERM. Subsequent experiments show that ERM with ITL surpasses CE across a range of DG benchmarks and leads to state-of-the-art performance, despite being much simpler and faster than competitor DG methods. While the majority of existing DG methods require multiple source domains to conduct data augmentation or feature alignment strategies, ITL improves single-source domain generalisation, a crucial problem setting which has been minimally studied thus far. To summarise our contributions: (i) we provide the first study of the significance of supervised loss function choice in DG; (ii) we demonstrate the first efficient solution to loss learning based on meta-gradients computed by the Implicit Function Theorem; (iii) empirically, we show that our learned ITL loss enhances simple ERM and achieves state-of-the-art DG performance across a range of benchmarks, including the challenging single-source DG scenario.

2 RELATED WORK

Domain Generalisation: Domain Generalisation aims to learn a model using data from one or more source domains, but with the further requirement that it is robust to testing on novel target domain data, without accessing target data during training. DG is now a well-studied area (Zhou et al., 2021a) with diverse approaches including data augmentation (Shankar et al., 2018; Zhou et al., 2021b), robust training algorithms such as domain alignment objectives (Li et al., 2018b), and other regularisers (Li et al., 2019c; Balaji et al., 2018). Most DG studies have assumed the multi-source setting, which enables new data-augmentation strategies (Zhou et al., 2021b) and allows generalisation-promoting design features to be tuned by domain-wise cross-validation. In particular, a few studies (Li et al., 2019c; Balaji et al., 2018) have considered meta-learning based DG, where a regulariser applied in a training domain is tuned by meta-gradients from the resulting validation-domain performance. The resulting model is then deployed to the true target domain within the same family. These methods require regulariser meta-learning for each given multi-source DG problem family. In contrast, we propose to learn a simple loss function once, which then provides a drop-in replacement for CE in any single- or multi-source DG problem. A recent criticism of the DG literature showed that no method consistently outperformed a well-tuned ERM baseline on the carefully designed DomainBed benchmark (Gulrajani & Lopez-Paz, 2021). Rather than competing with ERM, we simply enhance the ERM loss function, and this leads to a clear improvement on DomainBed.

Loss Function Learning: Loss function learning aims to discover new losses that improve model optimisation from various perspectives, including conventional generalisation performance (Gonzalez & Miikkulainen, 2019; Liu et al., 2021), optimisation efficiency (Li et al., 2019a; Gonzalez & Miikkulainen, 2019; Wang et al., 2020; Bechtle et al., 2020), and noise robustness (Li et al., 2019a). Key dichotomies are in the search space of black-box (neural) (Bechtle et al., 2020; Li et al., 2019c) vs white-box (human-readable) (Li et al., 2019a; Gonzalez & Miikkulainen, 2019; Wang et al., 2020) losses; whether learned losses are problem-specific (Li et al., 2019a; Wang et al., 2020) or reusable (Gonzalez & Miikkulainen, 2019); the meta-optimisation algorithm used, i.e. evolution (Liu et al., 2021; Gonzalez & Miikkulainen, 2019), RL (Li et al., 2019a; Wang et al., 2020; Bechtle et al., 2020), or gradient (Li et al., 2019c); and whether the loss is updated offline (Liu et al., 2021; Gonzalez & Miikkulainen, 2019) (long inner loop, typically intractable) or online (Li et al., 2019a; Wang et al., 2020) (short inner loop, efficient but subject to short-horizon bias (Wu et al., 2018)). No studies have thus far investigated loss learning for domain-shift robustness. In order to learn a reusable robust loss we use a white-box loss search space of Taylor polynomials proposed in Gonzalez & Miikkulainen (2020), and offline/long-inner-loop meta-learning. To make this meta-optimisation tractable, we exploit the Implicit Function Theorem to efficiently generate accurate hypergradients of the validation-domain performance with respect to the training-domain loss function parameters. Besides being the first demonstration of loss learning for DG, to our knowledge it is also the first demonstration of any implicit gradient-based loss learning.

3 METHOD

The need for Domain Generalisation arises when one is using machine learning to build a model where the available training data is not representative of the data that will be observed by the model once it has been deployed. In particular, it is assumed that there is an underlying distribution over domains, $P$, from which we can sample several source domain distributions, $\{p_1^{(s)}, \ldots, p_n^{(s)} \sim P\}$, to make use of during training. We can construct a training set for each of these source domain distributions by sampling $K$ data points, $D_i^{(s)} = \{(x_i^{(s,j)}, y_i^{(s,j)}) \sim p_i^{(s)}\}_{j=1}^{K}$, and use the union of all these sets as the full training set, $D^{(s)} = \bigcup_{i=1}^{n} D_i^{(s)}$. Empirical Risk Minimisation (ERM) then simply finds the model parameters, $\theta$, that minimise the loss measured on this training set,

$$\min_\theta \frac{1}{n} \sum_{i=1}^{n} \frac{1}{K} \sum_{j=1}^{K} L\big(f_\theta(x_i^{(j)}), y_i^{(j)}\big), \qquad (1)$$

where $L(\cdot, \cdot)$ is a loss function (typically cross-entropy) measuring how well the predicted labels match the ground-truth labels.
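Concretely, the ERM objective in Eq. (1) is just a per-sample loss averaged over samples and then over source domains. A minimal NumPy sketch (the function names are our own, with plain cross-entropy standing in for L):

```python
import numpy as np

def cross_entropy(probs, label):
    # L(f_theta(x), y): negative log-probability of the true class.
    return -np.log(probs[label])

def erm_objective(domains, predict):
    # domains: n source-domain datasets, each a list of K (x, y) pairs.
    # predict: the model f_theta, mapping x to a class-probability vector.
    # Returns (1/n) sum_i (1/K) sum_j L(f_theta(x_i^(j)), y_i^(j)), as in Eq. (1).
    return np.mean([
        np.mean([cross_entropy(predict(x), y) for x, y in domain])
        for domain in domains
    ])
```

For a uniform two-class predictor, every sample contributes -log(1/2), so the objective evaluates to log 2 regardless of the domain sizes.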
One can empirically check the resulting model's robustness to domain shift by sampling one or more target domain distributions, $\{p_1^{(t)}, \ldots, p_m^{(t)} \sim P\}$, from the same distribution over domains that was used to generate the training data. Data can then be sampled for each of these target domains, yielding a test dataset $D^{(t)} = \bigcup_{i=1}^{m} D_i^{(t)}$. Standard evaluation metrics such as accuracy can then be computed using this data.

3.1 META-LEARNING LOSSES FOR DG

Our goal is to replace the standard CE loss typically used in ERM with a learned loss function. We are motivated by recent work showing that learned losses can enable models to perform better in a variety of other problem settings, such as training with label noise (Wang et al., 2019) and improving calibration (Mukhoti et al., 2020). We formulate the task of learning the parameters, $\omega$, of a loss function, $L_\omega$, as a bilevel optimisation problem. The outer objective is to find the $\omega$ that maximises the performance of a model evaluated on the target domain data, and the inner problem is to train a model to minimise the value of $L_\omega$ measured on the source domain data. The loss parameters are optimised using gradient-based methods that take advantage of the implicit function theorem to efficiently compute gradients for the outer optimisation problem. Crucially, once the optimal loss function $\omega^*$ has been found, new DG problems can be solved via ERM on the $L_{\omega^*}$ loss. The bilevel optimisation we use to formalise the meta-learning process is given by

$$\omega^* = \operatorname*{argmin}_\omega \frac{1}{m} \sum_{i=1}^{m} \frac{1}{K} \sum_{j=1}^{K} M\big(f_{\theta^*(\omega)}(x_i^{(t,j)}), y_i^{(t,j)}\big) \qquad (2)$$

$$\text{s.t.}\quad \theta^*(\omega) = \operatorname*{argmin}_\theta \frac{1}{n} \sum_{i=1}^{n} \frac{1}{K} \sum_{j=1}^{K} L_\omega\big(f_\theta(x_i^{(s,j)}), y_i^{(s,j)}\big) \qquad (3)$$

where $M$ is a loss function used to measure the performance of the model on the target domains, typically chosen to be cross-entropy. Optimising $\omega$ is challenging due to the need to backpropagate through the long inner-loop optimisation of $\theta$.
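The nested structure of Eqs. (2)-(3) can be seen on a toy problem with scalar theta and omega: evaluating the outer objective for a candidate omega requires running the full inner training loop first. A sketch under illustrative assumptions (the quadratic losses and all names are ours, not the paper's actual setup):

```python
def train_inner(omega, steps=100, lr=0.1):
    # Inner problem (Eq. 3), toy version: minimise
    # L_omega(theta) = 0.5 * (theta - omega)^2 by gradient descent,
    # so theta*(omega) converges to omega.
    theta = 0.0
    for _ in range(steps):
        theta -= lr * (theta - omega)  # gradient step on the inner loss
    return theta

def outer_objective(omega, target=2.0):
    # Outer problem (Eq. 2), toy version: target-domain performance
    # M(theta*) = 0.5 * (theta* - target)^2 of the fully trained model.
    theta_star = train_inner(omega)
    return 0.5 * (theta_star - target) ** 2
```

Here the optimal loss parameter is omega* = 2.0; note that every outer evaluation pays the cost of the whole inner loop, which is exactly why naively backpropagating through it is expensive.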
Existing approaches for learning loss functions typically resort to slow evolutionary or reinforcement learning updates (Li et al., 2019a; Gonzalez & Miikkulainen, 2019; Wang et al., 2020) in the outer loop, or to an online approximation based on alternating steps on $\omega$ and $\theta$ (Li et al., 2019a; Wang et al., 2020). The latter approach leads to a loss function $\omega^*$ that cannot be transferred to new tasks, as it suffers from short-horizon bias (Wu et al., 2018). To solve this problem, we use the Implicit Function Theorem (IFT) to compute $\partial M / \partial \omega$ without truncating the inner optimisation problem to approximate $\theta^*(\omega)$.
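On an inner problem with a closed-form solution, the IFT hypergradient can be checked directly: the theorem gives dtheta*/domega = -(d2L/dtheta2)^(-1) * d2L/dtheta domega at theta = theta*(omega), and the chain rule then yields dM/domega with no backpropagation through inner-loop steps. A scalar toy sketch (the quadratic losses and all names are illustrative, not the paper's setup):

```python
# Toy bilevel problem with scalar theta and omega:
#   inner: L(theta, omega) = 0.5 * (theta - omega)^2  =>  theta*(omega) = omega
#   outer: M(theta) = 0.5 * (theta - 2.0)^2           =>  dM/domega = omega - 2

def ift_hypergradient(omega):
    theta_star = omega                   # exact inner solution for this toy L
    d2L_dtheta2 = 1.0                    # inner Hessian w.r.t. theta
    d2L_dtheta_domega = -1.0             # mixed second derivative
    dtheta_domega = -d2L_dtheta_domega / d2L_dtheta2  # implicit function theorem
    dM_dtheta = theta_star - 2.0         # outer gradient w.r.t. theta
    return dM_dtheta * dtheta_domega     # chain rule: dM/domega
```

In deep models the inverse Hessian would never be formed explicitly; it is approximated, e.g. with a truncated Neumann series of Hessian-vector products, but the scalar case shows the mechanics.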
This paper proposes a learned loss for domain generalization. The loss is learned by meta-learning on RotatedMNIST. The paper evaluates the loss's generalization ability on several other datasets.
SP:eadfda08b9d61460c60adf97ec97c7d6163ff205
Loss Function Learning for Domain Generalization by Implicit Gradient
1 INTRODUCTION . Deep learning is highly successful when the training and testing samples meet the i.i.d . assumption . However , this assumption is violated in many practical applications of machine learning from medical imaging to earth observation imaging ( Koh et al. , 2021 ) . This has led a large number of studies to investigate approaches to training models with increased robustness to distribution shift at testingtime , a problem setting known as Domain Generalisation ( DG ) . Despite the volume of research in this area ( Zhou et al. , 2021a ) , a recent careful benchmarking exercise , DomainBed ( Gulrajani & Lopez-Paz , 2021 ) showed that simple empirical risk minimisation ( ERM ) on a combination of training domains is a very strong baseline when properly tuned . State-of-the-art alternatives based on sophisticated architectures , regularisers , and data augmentation schemes failed to reliably beat ERM ( Gulrajani & Lopez-Paz , 2021 ) . Rather than propose an alternative to ERM for DG , we investigate a previously unstudied hyperparameter of ERM , namely the choice of loss function—which has been ubiquitously taken to be standard cross-entropy ( CE ) in prior DG work . Loss function choice has been shown to impact calibration ( Mukhoti et al. , 2020 ) , overfitting ( Gonzalez & Miikkulainen , 2019 ) , and label-noise robustness ( Wang et al. , 2019 ) in standard supervised learning , so it is intuitive that it would impact robustness to domain-shift . However , it has not yet been studied in this context . Our preliminary experiments showed that equipping ERM with some recent robust loss functions in place of CE does lead to improvements in DG performance where sophisticated alternatives have failed ( Gulrajani & Lopez-Paz , 2021 ) . This raises the question : can one design a loss function specialised for DG ? To answer this question , we define a meta-learning algorithm to learn a parametric ( white-box ) loss function suitable for DG . 
Our desiderata are : ( 1 ) Performing ERM with this loss on a source domain should lead to good performance when tested on out-of-domain target data ; and ( 2 ) It should provide a ‘ plug-and-play ’ drop-in replacement for cross-entropy that , once learned , can be used without further modification or computational expense with any new dataset or model architecture . While there has been growing interest in meta-learning for loss function design ( Li et al. , 2019a ) , they mostly fail to meet these criteria . They learn problem-specific—rather than re-usable—losses . If applied to DG , this would imply replacing simple ERM learning with sophisticated meta-learning pipelines to train a loss on a per-problem basis . In contrast , our discovered loss provides a drop in replacement for CE that leads standard training pipelines to produce more robust models . To train a general purpose robust loss function we need a search space that is flexible enough to include interesting new losses , but simple enough to generalise across tasks without overfitting to the problem used for loss learning . We choose a 12-dimensional space of fourth order Taylor polynomials . Furthermore , we need a loss that is suitable for all stages of training . This precludes the majority of approaches based on online meta-learning which update the loss and base model iteratively ( Li et al. , 2019a ; c ) , and also suffer from short-horizon bias ( Wu et al. , 2018 ) . Evolutionary methods ( Gonzalez & Miikkulainen , 2019 ) and reinforcement-learning ( Li et al. , 2019a ) could support loss learning in principle , but are too slow to be feasible . Therefore we develop the first implicit-gradient based approach to loss learning . This allows us to tractably compute meta-gradients of the target recognition performance with respect to the loss used for training in the source domain . 
We use a simple DG task ( RotatedMNIST ) to train our robust loss , termed Implicit Taylor Loss ( ITL ) , to replace CE in ERM . Subsequent experiments show that ERM with ITL surpasses CE across a range of DG benchmarks , and leads to state of the art performance , despite being much simpler and faster than competitor DG methods . While the majority of existing DG methods require multiple source domains to conduct data augmentation or feature alignment strategies , ITL improves single-source domain generalisation , a crucial problem setting which has been minimally studied thus far . To summarise our contributions : ( i ) We provide the first study on the significance of supervised loss function choice in DG ( ii ) We demonstrate the first efficient solution to loss-learning based on meta-gradients computed by the Implicit Function Theorem . ( iii ) Empirically , we show that our learned ITL loss enhances simple ERM and achieves state of the art DG performance across a range of benchmarks , including the challenging single-source DG scenario . 2 RELATED WORK . Domain Generalisation : Domain Generalisation aims to learn a model using data from one or more source domains , but with the further requirement that it is robust to testing on novel target domain data—without accessing target data during training . DG is now a well studied ( Zhou et al. , 2021a ) area with diverse approaches including data augmentation ( Shankar et al. , 2018 ; Zhou et al. , 2021b ) , robust training algorithms such as domain alignment objectives ( Li et al. , 2018b ) , and other regularisers ( Li et al. , 2019c ; Balaji et al. , 2018 ) . Most DG studies have assumed the multi-source setting , which enables new data-augmentation strategies ( Zhou et al. , 2021b ) , and allows generalisation-promoting design features to be tuned by domain-wise cross-validation . In particular , a few studies ( Li et al. , 2019c ; Balaji et al. 
, 2018 ) have considered meta-learning based DG , where a regulariser applied in a training domain is tuned by meta-gradients from the resulting validation-domain performance . The resulting model is then deployed to the true target domain within the same family . These methods require regulariser meta-learning for each given multi-source DG problem family . In contrast , we propose to learn a simple loss function once , which then provides a drop-in replacement for CE in any single- , or multi-source DG problem . A recent criticism of the DG literature showed that no method consistently outperformed a well tuned ERM baseline on the carefully designed DomainBed benchmark ( Gulrajani & Lopez-Paz , 2021 ) . Rather than competing with ERM , we simply enhance the ERM loss function and this leads to a clear improvement on DomainBed . Loss Function learning : Loss Function learning aims to discover new losses that improve model optimisation from various perspectives including conventional generalisation performance , ( Gonzalez & Miikkulainen , 2019 ; Liu et al. , 2021 ) , optimisation efficiency ( Li et al. , 2019a ; Gonzalez & Miikkulainen , 2019 ; Wang et al. , 2020 ; Bechtle et al. , 2020 ) , and noise robustness ( Li et al. , 2019a ) . Key dichotomies are in the search space of black box ( neural ) ( Bechtle et al. , 2020 ; Li et al. , 2019c ) vs white-box ( human-readable ) ( Li et al. , 2019a ; Gonzalez & Miikkulainen , 2019 ; Wang et al. , 2020 ) losses ; whether learned losses are problem specific ( Li et al. , 2019a ; Wang et al. , 2020 ) or reusable ( Gonzalez & Miikkulainen , 2019 ) ; the meta-optimisation algorithm used—evolution ( Liu et al. , 2021 ; Gonzalez & Miikkulainen , 2019 ) , RL ( Li et al. , 2019a ; Wang et al. , 2020 ; Bechtle et al. , 2020 ) , or gradient ( Li et al. , 2019c ) ; and whether the loss is updated offline ( Liu et al. , 2021 ; Gonzalez & Miikkulainen , 2019 ) ( long inner loop , typically intractable ) , or online ( Li et al. 
, 2019a ; Wang et al. , 2020 ) ( short inner loop , efficient but suffers from short-horizon bias ( Wu et al. , 2018 ) ) . No studies have thus far investigated loss learning for domain-shift robustness . In order to learn a reusable robust loss we use a white-box loss search space of Taylor polynomials proposed in Gonzalez & Miikkulainen ( 2020 ) , and offline/long inner loop meta-learning . To make this meta-optimisation tractable , we exploit the Implicit Function Theorem , to efficiently generate accurate hypergradients of the validation domain performance with respect to the training domain loss function parameters . Besides being the first demonstration of loss learning for DG , to our knowledge it is also the first demonstration of any implicit gradient-based loss learning . 3 METHOD . The need for Domain Generalisation arises when one is using machine learning to build a model where the available training data is not representative of the data that will be observed by the model once it has been deployed . In particular , it is assumed that there is an underlying distribution over domains , P , from which we can sample several source domain distributions , { p ( s ) 1 , ... , p ( s ) n ∼ P } , to make use of during training . We can construct a training set for each of these source domain distributions by sampling K data points , D ( s ) i = { ( x ( s , j ) i , y ( s , j ) i ) ∼ p ( s ) i } Kj=1 , and use the union of all these sets as the full training set , D ( s ) = ⋃n i=1D ( s ) i . Empirical Risk Minimisation ( ERM ) then simply finds the model parameters , θ , that minimise the loss measured on this training set , min θ 1 n n∑ i=1 1 K K∑ j=1 L ( fθ ( x ( j ) i ) , y ( j ) i ) , ( 1 ) where L ( · , · ) is a loss function ( typically cross entropy ) measuring how well the predicted labels match the ground truth labels . 
One can empirically check the resulting model ' s robustness to domain shift by sampling one or more target domain distributions , $p^{(t)}_1 , \dots , p^{(t)}_m \sim P$ , from the same distribution over domains that was used to generate the training data . Data can then be sampled for each of these target domains , yielding a test dataset $D^{(t)} = \bigcup_{i=1}^{m} D^{(t)}_i$ . Standard evaluation metrics such as accuracy can then be computed using this data . 3.1 META-LEARNING LOSSES FOR DG . Our goal is to replace the standard CE loss typically used in ERM with a learned loss function . We are motivated by recent work showing that learned losses can enable models to perform better in a variety of other problem settings , such as training with label noise ( Wang et al. , 2019 ) and improving calibration ( Mukhoti et al. , 2020 ) . We formulate the task of learning the parameters , $\omega$ , of a loss function , $\mathcal{L}_\omega$ , as a bilevel optimisation problem . The outer objective is to find the $\omega$ that maximises the performance of a model evaluated on the target domain data , and the inner problem is to train a model to minimise the value of $\mathcal{L}_\omega$ measured on the source domain data . The loss parameters are optimised using gradient-based methods that take advantage of the implicit function theorem to efficiently compute gradients for the outer optimisation problem . Crucially , once the optimal loss function $\omega^*$ has been found , new DG problems can be solved via ERM on the $\mathcal{L}_{\omega^*}$ loss . The bilevel optimisation we use to formalise the meta-learning process is given by $$\omega^* = \arg\min_\omega \frac{1}{m} \sum_{i=1}^{m} \frac{1}{K} \sum_{j=1}^{K} \mathcal{M}\big( f_{\theta^*(\omega)} ( x^{(t,j)}_i ) , y^{(t,j)}_i \big) \quad (2)$$ $$\text{s.t.} \quad \theta^*(\omega) = \arg\min_\theta \frac{1}{n} \sum_{i=1}^{n} \frac{1}{K} \sum_{j=1}^{K} \mathcal{L}_\omega\big( f_\theta ( x^{(s,j)}_i ) , y^{(s,j)}_i \big) \quad (3)$$ where $\mathcal{M}$ is a loss function used to measure the performance of the model on the target domains , typically chosen to be cross entropy . Optimising $\omega$ is challenging due to the need to backpropagate through the long inner-loop optimisation of $\theta$ . 
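The nested structure of Equations (2) and (3) can be made concrete with a toy one-parameter instantiation. Here `inner_train` plays the role of Eq. (3) (gradient descent on a parametric loss over source data) and `outer_objective` the role of Eq. (2) (evaluating the trained model with a fixed metric on target data). The linear model, the hypothetical shrinkage-style loss family $L_\omega(\hat{y}, y) = (\hat{y}-y)^2 + \omega\hat{y}^2$, and the grid search over $\omega$ are all illustrative assumptions, not the method's actual search space or optimiser:

```python
def inner_train(omega, source, steps=500, lr=0.1):
    # Eq. (3): fit theta by gradient descent on the parametric loss L_omega.
    # Toy model f_theta(x) = theta * x; hypothetical loss family
    # L_omega(yhat, y) = (yhat - y)**2 + omega * yhat**2.
    theta = 0.0
    for _ in range(steps):
        grad = 0.0
        for x, y in source:
            yhat = theta * x
            grad += (2 * (yhat - y) + 2 * omega * yhat) * x
        theta -= lr * grad / len(source)
    return theta

def outer_objective(omega, source, target):
    # Eq. (2): evaluate the inner-trained model with a fixed metric M
    # (squared error here) on held-out target-domain data.
    theta = inner_train(omega, source)
    return sum((theta * x - y) ** 2 for x, y in target) / len(target)

# Naive outer search over a grid of candidate loss parameters.
best_omega = min([0.0, 0.5, 1.0],
                 key=lambda w: outer_objective(w, [(1.0, 1.0)], [(1.0, 1.0)]))
```

Even in this toy, every evaluation of the outer objective requires a full inner training run, which is exactly the computational bottleneck the implicit-gradient approach below is designed to avoid.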
Existing approaches for learning loss functions typically resort to slow evolutionary or reinforcement learning updates ( Li et al. , 2019a ; Gonzalez & Miikkulainen , 2019 ; Wang et al. , 2020 ) in the outer loop , or to an online approximation based on alternating steps on $\omega$ and $\theta$ ( Li et al. , 2019a ; Wang et al. , 2020 ) . The latter approach leads to a loss function $\omega^*$ that cannot be transferred to new tasks , as it suffers from short-horizon bias ( Wu et al. , 2018 ) . To solve this problem , we use the Implicit Function Theorem ( IFT ) to compute $\partial\mathcal{M}/\partial\omega$ without truncating the inner optimisation problem to approximate $\theta^*(\omega)$ .
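In one dimension, the IFT-based hypergradient can be checked by hand. At an inner optimum, $\partial L/\partial\theta = 0$, so the IFT gives $d\theta^*/d\omega = -(\partial^2 L/\partial\theta^2)^{-1}\,\partial^2 L/\partial\theta\partial\omega$, and the chain rule then yields $d\mathcal{M}/d\omega$ without differentiating through the inner training trajectory. The quadratic toy loss below is an illustrative assumption chosen so that every derivative is analytic:

```python
def ift_hypergradient(theta_star, L_tt, L_tw, dM_dtheta):
    # IFT at the inner optimum (dL/dtheta = 0):
    #   d theta*/d omega = -(d2L/dtheta2)^(-1) * (d2L/dtheta domega)
    # Chain rule then gives the hypergradient dM/domega.
    dtheta_domega = -L_tw / L_tt
    return dM_dtheta(theta_star) * dtheta_domega

# Toy inner loss L(theta, omega) = (theta - omega)^2, so theta*(omega) = omega,
# with second derivatives L_tt = 2, L_tw = -2; outer metric M(theta) = theta^2.
omega = 0.7
hyper = ift_hypergradient(theta_star=omega, L_tt=2.0, L_tw=-2.0,
                          dM_dtheta=lambda t: 2.0 * t)
# Analytically dM/domega = d(omega^2)/domega = 2 * omega = 1.4.
```

In higher dimensions $\partial^2 L/\partial\theta^2$ is a Hessian whose inverse is approximated rather than formed explicitly, but the structure of the computation is the same.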
The authors introduce a bilevel optimisation procedure that learns parametric loss functions yielding predictors able to generalise better to new data sources unseen during training. Specifically, an outer optimisation loop iteratively updates a parametric loss function to minimise the empirical risk (under a standard loss) on a left-out data source not presented to the model during the inner optimisation loop, which trains the model on a set of domains using the current version of the parametric loss. The proposed approach is computationally expensive, but the authors empirically show that losses learned on small-scale tasks transfer directly to other cases, rendering the proposal practical.
SP:eadfda08b9d61460c60adf97ec97c7d6163ff205
A stepped sampling method for video detection using LSTM
Artificial neural networks are considered to simulate human neural networks , and have achieved great progress in object detection , natural language processing ( NLP ) , image generation , etc . Hermann Ebbinghaus proposed the law of human memory and how to improve human learning in 1885 . Inspired by Ebbinghaus ' work , we propose a stepped sampler based on " repeated input " , Ebbinghaus ' method for strengthening learning . We repeatedly input data to the LSTM model stepwise within a batch . The stepped sampler is used to strengthen the LSTM ' s ability to fuse temporal information . We tested the stepped sampler on the LSTM offered by PyTorch . Compared with the traditional samplers of PyTorch , such as the sequential sampler and batch sampler , the training loss of the proposed stepped sampler converges faster during model training , and the training loss after convergence is more stable , meaning that there is no large jitter after convergence . Meanwhile , it maintains a higher test accuracy compared with the traditional samplers . We quantified the algorithm of the stepped sampler . We hypothesise that artificial neural networks may have human-like characteristics , and that human learning methods could be used for machine learning . Our code will be available online soon . 1 INTRODUCTION . The emergence of convolutional neural networks ( CNN ) ( LeCun et al. , 1989 ) has improved the self-learning ability of artificial neural networks . The Recurrent Neural Network ( RNN ) ( Mikolov et al. , 2010 ) is used to process temporal information . An RNN takes the output of the previous time step as the input of the next time step , effectively using the temporal information of the input sequence . RNNs may suffer from vanishing or exploding gradients . Hochreiter et al . ( Hochreiter & Schmidhuber , 1997 ) proposed the LSTM . 
The LSTM adds gates to the RNN , so it can effectively avoid vanishing or exploding gradients . These gates are the forget gate , the input gate , and the output gate ; the forget gate seems to be the most important among them . The LSTM may simulate the memory process of the human brain : the brain selectively forgets some information in order to learn better . Considering that some principles of artificial neural networks may be borrowed from biological neural networks , for artificial neural networks with memory effects , such as the LSTM , we study the effect of the human memory method of repeated input and timely review on LSTM detection results , without changing the LSTM network structure . In this study , we examine the effect of the proposed input method on neural networks with memory characteristics , such as the LSTM . Specifically , we repeatedly input training data by simulating the " repeated input and timely review " method of human memory , proposed by Hermann Ebbinghaus ( Ebbinghaus , 1913 ) in 1885 as " Increasing Memory Strength and Rate of Learning " in his literature . 1.1 OUR CONTRIBUTION . Our contributions in this paper mainly include the following 3 aspects : a ) A novel sampler is proposed , which implements sampling in a circular and stepwise manner . Compared with the traditional sampler , the loss curve of the LSTM model using this stepped sampler converges faster in training and is more stable after convergence , namely there is no large jitter after convergence . Moreover , its test accuracy curve is also more stable , with no jitter . When the batch size is 15 , the test accuracy of the stepped-sampler LSTM is much higher than that of the traditional sampler with the same parameters . 
b ) The idea of this sampler comes from the laws of human memory proposed by Ebbinghaus ( Ebbinghaus , 1913 ) . We boldly hypothesise that other human learning methods can also be applied to machine learning ; one example is the attention mechanism ( Vaswani et al. , 2017 ) . Moreover , from the experimental performance we believe that artificial neural networks have human-like characteristics . c ) We attempt to describe the temporal information of video frames in mathematical language , apply the resulting equations to our experimental results , and argue that the test accuracy in the experiment reflects the temporal information between video frames . The derivation is shown in Appendix A and Appendix B . 2 RELATED WORK . Gibbs sampling is one of the earliest data sampling algorithms , proposed by Geman et al . ( Geman & Geman , 1984 ) in 1984 . Gibbs sampling makes the probability of the data sample approximately equal to the required probability distribution via iterations . It randomly selects data from an initial input sequence and iterates according to specified conditional probabilities , which are related to the required probability distribution of the final sampled data . After the iterations , Gibbs sampling generates data consistent with the required probability distribution . Hu et al . ( Hu et al. , 2018 ) used neural networks to generate a sampler that transfers the initial data distribution to the target distribution . The method can generate the sampled data at the same time as training , and works with un-normalised probability density functions . Wang et al . ( Wang et al. , 2018 ) used Generative Adversarial Nets ( GAN ) ( Goodfellow et al. , 2014 ) to generate negative samples . The approach is the first to combine GAN with negative sampling , and improves the training of a streaming recommendation system . Chu et al . 
( Chu et al. , 2019 ) proposed a novel sampler that can sample both positive and negative data from the input sequences , so that the classifier can utilise both the Regions of Interest and the background of the data . The sampler is used in a few-shot image classifier based on reinforcement learning . The reinforcement learning algorithm ( Kaelbling et al. , 1996 ) needs to continuously select Regions of Interest from the images and subsequently recognise their content . Sampling these Regions of Interest can improve the efficiency of reinforcement learning by reducing the number of input samples . Muhammad et al . ( Muhammad et al. , 2021 ) proposed a bi-directional long short-term memory ( BiLSTM ) with an attention mechanism and a dilated convolutional neural network ( DCNN ) for action recognition , which outperformed the state-of-the-art methods . Kwon et al . ( Kwon et al. , 2021 ) proposed a spatio-temporal neighbourhood learning method for action recognition , which achieved state-of-the-art performance . 3 MATERIALS AND METHODS . This paper approaches the problem from the perspective of data input , rather than network structure , and studies the impact of the memory effect on temporal-sequence neural networks ( such as the LSTM ) . The process simulates the method of enhancing the memory process of the human brain by repeating the input data in a stepped way . The method , proposed by Hermann Ebbinghaus ( Ebbinghaus , 1913 ) , is called " Increasing rate of learning " in his book . The specific mode we used was the wheel tactic ( Smith , 1994 ) used when reciting words , implemented by building a novel data sampler into the LSTM model training . The dataset in the experiment is UCF101 ( Soomro et al. , 2012 ) , a human action recognition video dataset ; the name of each folder indicates the annotation of the video . 3.1 EBBINGHAUS FORGETTING CURVE . 
The Ebbinghaus forgetting curve ( Ebbinghaus , 1913 ) , proposed by Hermann Ebbinghaus in 1885 , describes the memory effect of the human brain over time . The theory reveals the law of human memory , and thus of human learning : the loss of memory when learning new knowledge is fast at first and slow later . Ebbinghaus also pointed out that timely review and repeated input are the keys to preventing forgetting , consolidating knowledge , and learning better . Figure 1 illustrates the Ebbinghaus forgetting curve ; timely review reduces forgetting , which improves learning . Based on the Ebbinghaus forgetting curve for the human brain , we simulated Ebbinghaus ' method in machine learning . We believe the experimental results in Section 4 suggest a certain correlation between human learning and machine learning , since the machine learning method with timely review and spaced repetition learns faster than machine learning without this human-like method . Ebbinghaus also found that making good use of the correlations between pieces of knowledge is another key to enhancing learning . We define these correlations as temporal information in Appendix A ; enhancing the use of temporal information is thus key to video detection , natural language processing ( NLP ) , etc . We believe that the partly repeated input of the stepped sampler enhances this correlation and the temporal information . 3.2 LSTM The architecture we used in this paper starts with a CNN backbone . The CNN backbone has four convolutional layers , with 32 , 64 , 128 , and 256 convolution kernels respectively . The kernel sizes are 5 × 5 , 3 × 3 , 3 × 3 , and 3 × 3 , the stride of each convolution is 2 , and the padding is 0 . Each convolutional layer is followed by a batch normalization ( BN ) ( Ioffe & Szegedy , 2015 ) layer and a ReLU layer . 
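Given the kernel sizes, stride 2, and zero padding above, the spatial dimensions through the four convolutional layers can be traced with the standard convolution output formula. This is a minimal sketch; the 224 × 224 input resolution is our assumption for illustration, not a detail stated by the paper:

```python
def conv_out(size, kernel, stride=2, padding=0):
    # Standard convolution output size: floor((size + 2p - k) / s) + 1.
    return (size + 2 * padding - kernel) // stride + 1

def backbone_shapes(h=224, w=224):
    # Trace (channels, height, width) through the four conv layers described
    # above: 32/64/128/256 kernels of size 5,3,3,3, stride 2, padding 0.
    shapes = []
    for c, k in zip([32, 64, 128, 256], [5, 3, 3, 3]):
        h, w = conv_out(h, k), conv_out(w, k)
        shapes.append((c, h, w))
    return shapes
```

For a 224 × 224 input this yields feature maps of 110, 54, 26, and 12 pixels per side, so the final conv output is a 256 × 12 × 12 tensor that the fully connected layers then flatten.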
The last part of the CNN model is 3 fully connected ( FC ) layers with dropout . The dimensions of the 3 FC layers are 1024 , 768 , and 512 respectively . The LSTM used in the paper is the standard implementation provided by PyTorch . The input dimension of the LSTM is 512 , the hidden dimension is 512 , and the number of hidden layers is 3 . Next come two fully connected ( FC ) layers with dropout ; the dimension of the FC layers is 256 . The dropout rate of both the CNN backbone and the LSTM is 0.3 . 3.3 THE STEPPED SAMPLER . Our experiment is compared against common samplers . Common samplers in PyTorch ( Paszke et al. , 2019 ) include the random sampler , the weighted random sampler , the batch sampler , etc . ; batch sampling is probably the most commonly used . Previous research has added memory units to deep learning networks , as in RNN , LSTM , etc . Analogous to human learning , an important factor is repetition , and the sampler is an appropriate place to simulate this " repetition " , since the data in each batch can be designed to be input repeatedly . We suppose that " repetition " is important not only for human beings , but also for computers . To make computers better use " repetition " , analogous to the way we recite words , we propose a " stepped " repetition input method : the stepped sampler . The structure of the proposed stepped sampler is illustrated in Figure 2 . It is built on the batch sampler . The stepped sampler divides a batch into several sub-batches . Like human memory , the sampler adopts the principle of adjacent repetition ( Crowder , 1968 ) , namely , the back of the previous sub-batch is the same as the front of the next sub-batch . The structure of the stepped sampler shows that the input data of different sub-batches is partly duplicated . 
The repeated input seems to increase redundancy , but the experimental results show that , in our experimental environment , this method accelerates the convergence of the LSTM model . There is a stride between the previous sub-batch and the next sub-batch , and the stride size n can be set manually . We believe that this partial repetition enhances the correlation of the input frames , and thereby the temporal information of the input frames , according to our definition of temporal information in Appendix A . Section 4 describes the comparative experiments on the sampler with different stride sizes .
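The overlapping sub-batch scheme can be sketched as index generation on top of a plain batch of sample indices. This is a minimal sketch of our reading of Figure 2 (fixed-size sub-batches advancing by stride n, so consecutive sub-batches share their boundary elements); the function name and exact indexing rule are illustrative, not the released code:

```python
def stepped_batches(indices, sub_batch_size, stride):
    # Split a batch of sample indices into overlapping sub-batches.
    # Consecutive sub-batches share (sub_batch_size - stride) elements, so the
    # back of one sub-batch is repeated as the front of the next, mirroring
    # the "adjacent repetition" principle described above.
    assert 0 < stride <= sub_batch_size
    subs = []
    for start in range(0, len(indices) - sub_batch_size + 1, stride):
        subs.append(indices[start:start + sub_batch_size])
    return subs

# Example: a batch of 10 frame indices, sub-batches of 4, stride 2.
# stepped_batches(list(range(10)), 4, 2)
# -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Setting stride equal to the sub-batch size recovers ordinary non-overlapping batch sampling, which makes the stride a single knob controlling how much repetition is injected.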
This paper proposes a sampler for LSTM video models. The sampler works by repeating frames within a training batch. The method is evaluated on the task of action recognition on the UCF101 dataset.
SP:2041aa3af66d994347c7a3cad1f1d15a4b71a327
The authors present a stepped sampling to improve the learning capabilities of neural network models such as LSTM. Specifically, the stepped sampling procedure repeats the same input data in multiple batches, in other words, the batches "overlap" with one another in terms of the contained input data. This follows from the authors' argument that repeatedly providing the same data leads to faster and stable convergence in training, as well as higher accuracy in testing. The authors experimentally show these benefits of their stepped sampling procedure over traditional sampling techniques (e.g., random sampling) in the context of action detection from videos using LSTMs.
A stepped sampling method for video detection using LSTM
Artificial neural networks are considered to simulate the human neural networks , and achieves great progress on object detection , natural language processing ( NLP ) , image generation , etc . Hermann Ebbinghaus proposed the law of human memory and how to improve human learning in 1885 . Inspiring from Ebbinghaus ’ work , we propose a stepped sampler based on the “ repeated input ” , which is Ebbinghaus ’ contribution that how to strengthen the learning . We repeatedly inputted data to the LSTM model stepwise in a batch . The stepped sampler is used to strengthen the ability of fusing the temporal information in LSTM . We tested the stepped sampler on the LSTM offered by PyTorch . Compared with the traditional sampler of PyTorch , such as sequential sampler , batch sampler , the training loss of the proposed stepped sampler converges faster in the model training , and the training loss after convergence is more stable , which means that there is no large jitter after the convergence . Meanwhile , it can maintain a higher test accuracy , compared with the traditional sampler . We quantified the algorithm of the stepped sampler . We assume that , the artificial neural networks may have human-like characteristics , and human learning method could be used for machine learning . Our code will be available online soon . 1 INTRODUCTION . The emergence of convolutional neural networks ( CNN ) ( LeCun et al. , 1989 ) has improved the selflearning ability of artificial neural networks . Recurrent Neural Network ( RNN ) ( Mikolov et al. , 2010 ) is used to process the temporal information data . RNN takes the output of the previous time period as the input of the next time period , effectively using the temporal information of the input sequence . RNN sometimes may have the problem of gradient disappearance or gradient explosion . Hochreiter et al . ( Hochreiter & Schmidhuber , 1997 ) proposed LSTM . 
LSTM adds gates to RNN , thus it can effectively avoid the problem of gradient disappearance or explosion . These gates include the forgetting gates , the input gates , and the output gates . The forgetting gate seems to be the most important among them . LSTM may simulate the memory process of human brain . Human brain selectively forgets some information for learning better . Consider that one of the principles of neural networks may be learned from biological neural networks , for those artificial neural networks with the memory effects , such as LSTM , learning from the memory method of human , which is the repeated input and timely review , we study the effect of this method with repeated input on LSTM detection results , without considering changing the LSTM network structure . In this study , we learn the effect of the proposed input method on neural networks with memory characteristics , such as LSTM . Specifically , it is to repeatedly input training data by simulating the “ repeated input and timely review ” method of the human memory , and the “ repeated input and timely review ” method is proposed by Hermann Ebbinghaus ( Ebbinghaus , 1913 ) in 1885 , which is the “ Increasing Memory Strength and Rate of Learning ” in his literature . 1.1 OUR CONTRIBUTION . Our views in this paper mainly include the following 3 aspects : a ) A novel sampler is proposed , which implements sampling in a circular and stepwise manner . Compared with the traditional sampler , the loss curve of the LSTM model using this stepped sampler converges faster in training , and is more stable after the convergence , namely there is no large jitter after the convergence . Moreover , its test accuracy curve is more stable either , which has no jitter . When the batch size is 15 , the test accuracy of the stepped sampler LSTM is much higher than that of the traditional sampler with the same parameters . 
b) The idea of this sampler comes from the laws of human memory proposed by Ebbinghaus (Ebbinghaus, 1913). We boldly hypothesize that other human learning methods can also be applied to machine learning; one precedent is the attention mechanism (Vaswani et al., 2017). Moreover, the experimental results suggest that artificial neural networks exhibit human-like characteristics. c) We attempt to describe the temporal information of video frames in mathematical language, apply the resulting equations to our experimental results, and argue that the test accuracy in the experiment reflects the temporal information between video frames. The derivation is shown in Appendix A and Appendix B. 2 RELATED WORK . Gibbs sampling is one of the earliest data sampling algorithms, proposed by Geman et al. (Geman & Geman, 1984) in 1984. Gibbs sampling makes the distribution of the samples approximate a required probability distribution via iterations: it selects data from an initial input sequence and iterates according to specified conditional probabilities related to the target distribution. After sufficient iterations, Gibbs sampling generates data consistent with the required distribution. Hu et al. (Hu et al., 2018) used neural networks to generate a sampler that transfers the initial data distribution to the target distribution; the method can generate samples during training and works with un-normalized probability density functions. Wang et al. (Wang et al., 2018) used Generative Adversarial Nets (GAN) (Goodfellow et al., 2014) to generate negative samples. Their approach is the first to combine GAN with negative sampling, and it improves the training of a streaming recommender system. Chu et al.
(Chu et al., 2019) proposed a novel sampler that draws both positive and negative data from the input sequences, letting the classifier exploit both the Regions of Interest and the background of the data. The sampler is used in a few-shot image classifier based on reinforcement learning. The reinforcement learning algorithm (Kaelbling et al., 1996) needs to continuously select Regions of Interest from the images and then recognize their content; sampling these Regions of Interest improves the efficiency of reinforcement learning by reducing the number of input samples. Muhammad et al. (Muhammad et al., 2021) proposed a bi-directional long short-term memory (BiLSTM) with an attention mechanism and a dilated convolutional neural network (DCNN) for action recognition, which outperformed the state of the art. Kwon et al. (Kwon et al., 2021) proposed a spatio-temporal neighbourhood learning method for action recognition that achieved state-of-the-art performance. 3 MATERIALS AND METHODS . This paper approaches the problem from the perspective of data input, rather than network structure, and studies the impact of memory effects on temporal-sequence neural networks such as LSTM. The process simulates the way the human brain strengthens memory by repeating the input data in a stepped way, following the method Hermann Ebbinghaus (Ebbinghaus, 1913) called "Increasing rate of learning" in his book. The specific mode we used is the wheel tactic (Smith, 1994) employed when reciting words, implemented as a novel data sampler for LSTM training. The dataset in the experiments is UCF101 (Soomro et al., 2012), a human action recognition video dataset in which the name of each folder indicates the annotation of the video. 3.1 EBBINGHAUS FORGETTING CURVE .
The Ebbinghaus forgetting curve (Ebbinghaus, 1913), proposed by Hermann Ebbinghaus in 1885, describes the memory retention of the human brain over time. This theory reveals the law of human memory, and hence of human learning: when learning new knowledge, memory is lost quickly at first and more slowly later. Ebbinghaus also pointed out that timely review and repeated input are the keys to preventing forgetting, consolidating knowledge, and learning better. Figure 1 illustrates the Ebbinghaus forgetting curve; timely review reduces forgetting and thus improves learning. Based on the Ebbinghaus forgetting curve, we simulated Ebbinghaus' method in machine learning. We believe the experimental results in Section 4 suggest a certain correlation between human learning and machine learning, since the machine learning method with timely review and spaced repetition learns faster than the same model without this human-like method. Ebbinghaus also found that making good use of the correlations between pieces of knowledge is another key to enhanced learning. We define these correlations as temporal information in Appendix A; enhancing the use of temporal information is therefore key to video detection, natural language processing (NLP), etc. We believe that the partly repeated input of the stepped sampler enhances this correlation and thus the temporal information. 3.2 LSTM The architecture used in this paper starts with a CNN backbone of four convolutional layers with 32, 64, 128, and 256 kernels respectively. The kernel sizes are 5 × 5, 3 × 3, 3 × 3, and 3 × 3, the stride is 2, and the padding is 0. Each convolutional layer is followed by a batch normalization (BN) (Ioffe & Szegedy, 2015) layer and a ReLU layer.
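As a sanity check on the convolutional stack just described, the standard convolution output-size formula can trace a frame through the four layers. The sketch below is illustrative: the 224 × 224 input resolution is a hypothetical choice, since the paper does not state the frame size, and `conv_out` is not a name from the paper's code.

```python
def conv_out(size, kernel, stride=2, padding=0):
    # Standard formula: floor((size + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

# Backbone from the paper: 32/64/128/256 kernels, sizes 5,3,3,3, stride 2, padding 0
kernels = [5, 3, 3, 3]
channels = [32, 64, 128, 256]

size = 224  # hypothetical input resolution (not stated in the paper)
for k, c in zip(kernels, channels):
    size = conv_out(size, k)
    print(f"{c} channels at {size}x{size}")
# With a 224x224 input, the spatial sizes shrink to 110, 54, 26, and 12.
```

Under this assumption, the flattened output of the last convolutional layer (256 × 12 × 12 features) would be what the first 1024-dimensional FC layer consumes.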
The backbone ends with 3 fully connected (FC) layers with dropout, of dimensions 1024, 768, and 512 respectively. The LSTM used in this paper is the implementation provided by PyTorch, with input dimension 512, hidden dimension 512, and 3 hidden layers. It is followed by two fully connected (FC) layers with dropout; the dimension of the FC layers is 256. The dropout rate of both the CNN backbone and the LSTM is 0.3. 3.3 THE STEPPED SAMPLER . Our experiments compare against the common samplers in PyTorch (Paszke et al., 2019), which include the random sampler, the weighted random sampler, the batch sampler, etc.; batch sampling is the most commonly used. Previous research added memory units to deep learning networks, as in RNN and LSTM. By analogy to human learning, an important ingredient is repetition, and the sampler is a natural place to implement it, since the data in each batch can be designed to be input repeatedly. We suppose that repetition matters not only for human beings but also for computers. To make computers exploit repetition, in analogy to the way we recite words, we propose a "stepped" repeated-input method: the stepped sampler. Its structure is illustrated in Figure 2. It is built on the batch sampler and divides a batch into several sub-batches. Like human memory, this sampler adopts the principle of adjacent repetition (Crowder, 1968): the back of the previous sub-batch is the same as the front of the next sub-batch. Consequently, the input data of consecutive sub-batches is partly duplicated.
The repeated input appears to increase redundancy, but the experimental results show that, in our experimental environment, this method accelerates the convergence of the LSTM model. There is a stride between the previous sub-batch and the next; the stride size n can be set manually. We believe that this partial repetition enhances the correlation of the input frames, and thereby their temporal information, according to our definition of temporal information in Appendix A. Section 4 describes comparative experiments with different stride sizes.
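The adjacent-repetition scheme above can be sketched as a plain index generator. This is a hypothetical simplification for illustration (the actual sampler is built on PyTorch's batch sampler, and `stepped_sub_batches` is not a name from the paper): each sub-batch advances by `stride` indices, so the last `sub_batch_size - stride` items of one sub-batch reappear at the front of the next.

```python
def stepped_sub_batches(indices, sub_batch_size, stride):
    """Split a batch of indices into overlapping sub-batches.

    Consecutive sub-batches share their boundary items: the back of one
    sub-batch equals the front of the next (adjacent repetition).
    """
    sub_batches = []
    start = 0
    while start + sub_batch_size <= len(indices):
        sub_batches.append(indices[start:start + sub_batch_size])
        start += stride
    return sub_batches

batch = list(range(8))
for sb in stepped_sub_batches(batch, sub_batch_size=4, stride=2):
    print(sb)
# [0, 1, 2, 3]
# [2, 3, 4, 5]
# [4, 5, 6, 7]
```

With stride 2 and sub-batch size 4, half of each sub-batch is replayed, which is the "repeated input" the method relies on; stride equal to the sub-batch size would recover ordinary non-overlapping batching.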
This paper provides a stepped sampling method for video detection, adopted only in an LSTM structure. The idea is inspired by the repetition-based working mechanism of human memory. The method achieves fast convergence during training, smooths the training loss curve, and obtains a good test accuracy when the batch size equals 15.
SP:2041aa3af66d994347c7a3cad1f1d15a4b71a327
Increase and Conquer: Training Graph Neural Networks on Growing Graphs
1 INTRODUCTION . Graph Neural Networks (GNNs) are deep convolutional architectures formed by a succession of layers, where each layer composes a graph convolution and a pointwise nonlinearity (Wu et al., 2021; Zhou et al., 2020). Tailored to network data, GNNs have been used in a variety of applications such as recommendation systems (Fan et al., 2019; Tan et al., 2020; Ying et al., 2018; Schlichtkrull et al., 2018; Ruiz et al., 2019a) and Markov chains (Qu et al., 2019; Ruiz et al., 2019b; Li et al., 2015), and in fields such as biology (Fout et al., 2017; Duvenaud et al., 2015; Gilmer et al., 2017; Chen et al., 2020) and robotics (Qi et al., 2018; Gama & Sojoudi, 2021; Li et al., 2019). Their success in these fields and applications provides ample empirical evidence of the ability of GNNs to generalize to unseen data. More recently, their successful performance has also been justified by theoretical works showing that GNNs are invariant to relabelings (Chen et al., 2019; Keriven & Peyré, 2019), stable to graph perturbations (Gama et al., 2020) and transferable across graphs (Ruiz et al., 2020a). One of the most important features of a GNN is that, because the linear operation is a graph convolution, its number of parameters does not depend on the number of nodes of the graph. In theory, this means that GNNs can be trained on graphs of any size. In practice, however, if the graph has a large number of nodes, training the GNN is costly because computing graph convolutions involves large matrix operations. While this issue could be mitigated by transferability (training the GNN on a smaller graph to execute on the large graph), this approach does not give any guarantees on the distance between the optimal solutions on the small and on the large graph.
In other words, when executing the GNN on the target graph we do not know whether its error will be dominated by the transferability error or by the generalization error from training. In this paper, we address the computational burden of training a GNN on a large graph by progressively increasing the size of the network. We consider the limit problem of learning an "optimal" neural network for a graphon, which is both a graph limit and a random graph model (Lovász, 2012). We postulate that, because sequences of graphs sampled from a graphon converge to it, the so-called graphon neural network (Ruiz et al., 2020a) can be learned by sampling graphs of growing size and training a GNN on these graphs (Algorithm 1). We prove that this is true in two steps. In Theorem 1, we bound the expected distance between the gradient descent steps on the GNN and on the graphon neural network by a term that decreases asymptotically with the size of the graph. A consequence of this bias bound is that it allows us to quantify the trade-off between a more accurate gradient and one that can be obtained with less computational power. We then use this theorem to prove our main result in Theorem 2, which is stated in simplified form below. Theorem (Graphon neural network learning, informal) Let $W$ be a graphon and let $\{G_n\}$ be a sequence of growing graphs sampled from $W$. Consider the graphon neural network $\Phi(W)$ and assume that it is learned by training the GNN $\Phi(G_n)$ with loss function $\ell(y_n, \Phi(G_n))$ on the sequence $\{G_n\}$. Over a finite number of training steps, we obtain $\|\nabla \ell(Y, \Phi(W))\| \leq \epsilon$ with probability 1. The most important implication of this result is that the learning iterates computed in the sequence of growing graphs follow the direction of the graphon gradient up to a small ball, which provides theoretical validation for our cost-efficient training methodology. We also validate our algorithm in two numerical experiments.
In the first , we learn a GNN-based recommendation system on increasingly large subnetworks of a movie similarity graph , and compare it with the recommendation system trained on the full graph . In the second , we consider the problem of flocking and train GNNs to learn the actions agents need to take to flock . We compare the results obtained when progressively increasing the number of agents during training and when training directly on the target graph . 2 RELATED WORK . GNNs are data processing architectures that follow from the seminal works in the areas of deep learning applied to graph theory ( Bruna et al. , 2013 ; Defferrard et al. , 2016 ; Gori et al. , 2005 ; Lu & Getoor , 2003 ) . They have been successfully used in a wide variety of statistical learning problems ( Kipf & Welling , 2016 ; Scarselli et al. , 2018 ) , where their good performance is generally attributed to the fact that they exploit invariances present in network data ( Maron et al. , 2019 ; Gama et al. , 2018 ; Chami et al. , 2021 ) . More recently , a number of works show that GNNs can be transferred across graphs of different sizes ( Ruiz et al. , 2020a ; Levie et al. , 2019 ; Keriven et al. , 2020 ) . Specifically , ( Ruiz et al. , 2020a ) leverages graphons to define families of graphs within which GNNs can be transferred with an error bound that decreases asymptotically with the size of the graph . The papers by Levie et al . ( 2019 ) and Keriven et al . ( 2020 ) offer similar results by considering the graph limit to be a generic topological space and a random graph model respectively . In this paper , we use an extension of the transferability bound derived in Ruiz et al . ( 2020a ) to propose a novel learning algorithm for GNNs . 3 PRELIMINARY DEFINITIONS . 3.1 GRAPH NEURAL NETWORKS . Graph neural networks exploit graph symmetries to extract meaningful information from network data ( Ruiz et al. , 2020c ; Gama et al. , 2020 ) . 
Graphs are represented as triplets $G_n = (\mathcal{V}, \mathcal{E}, \mathcal{W})$, where $\mathcal{V}$, $|\mathcal{V}| = n$, is the set of nodes, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of edges and $\mathcal{W} : \mathcal{E} \to \mathbb{R}$ is a map assigning weights to each edge. The graph $G_n$ can also be represented by the graph shift operator (GSO) $S \in \mathbb{R}^{n \times n}$, a square matrix that respects the sparsity of the graph. Examples of GSOs include the adjacency matrix $A$, the graph Laplacian $L = \mathrm{diag}(A\mathbf{1}) - A$ and their normalized counterparts (Gama et al., 2018). In this paper we consider the graph $G_n$ to be undirected and fix $S = A/n$. Graph data is represented in the form of graph signals. A graph signal $x = [x_1, \ldots, x_n]^T \in \mathbb{R}^n$ is a vector whose $i$-th component corresponds to the information present at the $i$-th node of graph $G_n$. A basic data aggregation operation can be defined by applying the GSO $S$ to graph signals $x$. The resulting signal $z = Sx$ is such that the data at node $i$ is a weighted average of the information in the 1-hop neighborhood of $i$, $z_i = \sum_{j \in N_i} [S]_{ij} x_j$ where $N_i = \{ j \mid [S]_{ij} \neq 0 \}$. Information coming from further neighborhoods can be aggregated by successive applications of the GSO, also called shifts. Using this notion of shift, graph convolutions are defined by weighting the contribution of each successive application of $S$ to define a polynomial in the GSO. Explicitly, the graph convolutional filter with coefficients $h = [h_0, \ldots, h_{K-1}]$ is given by $$y = h *_S x = \sum_{k=0}^{K-1} h_k S^k x \quad (1)$$ where $*_S$ denotes the convolution operation with GSO $S$. Since the adjacency matrix of an undirected graph is always symmetric, the GSO admits an eigendecomposition $S = V \Lambda V^H$. The columns of $V$ are the graph eigenvectors and the diagonal elements of $\Lambda$ are the graph eigenvalues, which take values between $-1$ and $1$ and are ordered as $-1 \leq \lambda_{-1} \leq \lambda_{-2} \leq \ldots \leq 0 \leq \ldots \leq \lambda_2 \leq \lambda_1 \leq 1$.
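The polynomial filter in (1) can be computed by repeated shifts, never forming $S^k$ explicitly. A minimal NumPy sketch (the function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def graph_filter(h, S, x):
    """y = sum_{k=0}^{K-1} h_k S^k x: a polynomial graph filter in the GSO S."""
    y = np.zeros_like(x, dtype=float)
    z = x.astype(float)        # z holds S^k x, starting at k = 0
    for h_k in h:
        y += h_k * z
        z = S @ z              # one more shift for the next tap
    return y

# Two-node undirected graph with normalized GSO S = A / n
S = np.array([[0.0, 1.0], [1.0, 0.0]]) / 2
x = np.array([1.0, 0.0])
print(graph_filter([1.0, 2.0], S, x))  # x + 2*S@x = [1.0, 1.0]
```

Each tap costs one sparse-matrix/vector product, which is exactly why the filter's parameter count ($K$ coefficients) is independent of the graph size $n$.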
Since the eigenvectors of $S$ form an orthonormal basis of $\mathbb{R}^n$, we can project (1) onto this basis to obtain the spectral representation of the graph convolution, which is given by $$h(\lambda) = \sum_{k=0}^{K-1} h_k \lambda^k. \quad (2)$$ Note that (2) only depends on the $h_k$ and on the eigenvalues of the GSO. Hence, as a consequence of the Cayley-Hamilton theorem, convolutional filters may be used to represent any graph filter with spectral representation $h(\lambda) = f(\lambda)$ where $f$ is analytic (Strang, 1976). Graph neural networks are layered architectures where each layer consists of a graph convolution followed by a pointwise nonlinearity $\rho$, and where each layer's output is the input to the following layer. At layer $l$, a GNN can output multiple features $x_l^f$, $1 \leq f \leq F_l$, which we stack in a matrix $X_l = [x_l^1, \ldots, x_l^{F_l}] \in \mathbb{R}^{n \times F_l}$. Each column of the feature matrix is the value of the graph signal at feature $f$. To map the $F_{l-1}$ features coming from layer $l-1$ into $F_l$ features, $F_{l-1} \times F_l$ convolutions need to be computed, one per input-output feature pair. Stacking their weights in $K$ matrices $H_{lk} \in \mathbb{R}^{F_{l-1} \times F_l}$, we write the $l$-th layer of the GNN as $$X_l = \rho\left( \sum_{k=0}^{K-1} S^k X_{l-1} H_{lk} \right). \quad (3)$$ In an $L$-layer GNN, the operation in (3) is cascaded $L$ times to obtain the GNN output $Y = X_L$. At the first layer, the GNN input is given by $X_0 = X \in \mathbb{R}^{n \times F_0}$. In this paper we assume $F_0 = F_L = 1$ so that $Y = y$ and $X = x$. A more concise representation of this GNN can be obtained by grouping all learnable parameters $H_{lk}$ in a tensor $\mathcal{H} = \{H_{lk}\}_{l,k}$ and defining the map $y = \Phi(x; \mathcal{H}, S)$. Due to the polynomial nature of the graph convolution, the dimensions of the learnable parameter tensor $\mathcal{H}$ are independent of the size of the graph ($K$ is typically much smaller than $n$). Ergo, a GNN trained on a graph $G_n$ can be deployed on a network $G_m$ with $m \neq n$. 3.2 GRAPHON INFORMATION PROCESSING .
A graphon is a bounded, symmetric, and measurable function $W : [0,1]^2 \to [0,1]$ which has two theoretical interpretations: it is both a graph limit and a generative model for graphs. In the first interpretation, sequences of dense graphs converge to a graphon in the sense that the densities of adjacency-preserving graph motifs converge to the corresponding densities on the graphon (Lovász, 2012). In the second, graphs can be generated from a graphon by sampling points $u_i, u_j$ from the unit interval and either assigning weight $W(u_i, u_j)$ to edge $(i,j)$, or sampling edge $(i,j)$ with probability $W(u_i, u_j)$. In this paper, we focus on stochastic graphs $G_n$ where the points $u_i$ are defined as $u_i = (i-1)/n$ for $1 \leq i \leq n$ and where the adjacency matrix $S_n$ is sampled from $W$ as $$[S_n]_{ij} \sim \mathrm{Bernoulli}(W(u_i, u_j)). \quad (4)$$ Sequences of graphs generated in this way can be shown to converge to $W$ with probability one (Lovász, 2012, Chapter 11). In practice, the two theoretical interpretations of a graphon allow thinking of it as an identifying object for a family of graphs of different sizes that are structurally similar. Hence, given a network, we can use its family's identifying graphon as a continuous proxy for the graph. This is beneficial because it is typically easier to operate in continuous than in discrete domains, even more so if the network is large. We will leverage these ideas to consider graphon data and graphon neural networks as proxies for graph data and GNNs supported on graphs of arbitrary size.
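The sampling scheme in (4) is straightforward to sketch, and repeating it for growing $n$ mirrors the idea of training on graphs of increasing size. The graphon below is an illustrative choice, and the function names are hypothetical, not taken from the paper's code:

```python
import numpy as np

def sample_graph(W, n, rng):
    """Sample an n-node undirected graph from graphon W as in Eq. (4):
    u_i = (i-1)/n and [S_n]_ij ~ Bernoulli(W(u_i, u_j)), no self-loops."""
    u = np.arange(n) / n
    P = W(u[:, None], u[None, :])        # edge-probability matrix
    draws = rng.random((n, n)) < P
    upper = np.triu(draws, k=1)          # sample each edge once
    return (upper | upper.T).astype(float)  # symmetrize, zero diagonal

rng = np.random.default_rng(0)
W = lambda x, y: 0.8 * np.exp(-2.0 * np.abs(x - y))  # illustrative graphon
for n in (50, 100, 200):                 # graphs of growing size
    S_n = sample_graph(W, n, rng)
    print(n, S_n.sum() / (n * (n - 1)))  # empirical edge density
```

In the paper's scheme, each sampled $S_n$ would support a few gradient steps of the GNN before $n$ is increased, so early iterations stay cheap while later ones approach the graphon limit.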
This paper proposes an effective and scalable algorithm for training graph neural networks. The method leverages the framework of graphon neural networks to grow the training graph during training: at each stage, a larger subgraph is sampled from the graphon via Bernoulli edge sampling and fed to the GNN. A theoretical guarantee for the convergence of the algorithm is given by proving that the norm of the gradient decreases to a small value with probability one.
SP:174fb95603efdfe578729ca4e25efd240ac151fa
In this paper, the authors propose a method that learns a graph neural network (GNN) on a large graph by starting from a relatively small graph and increasing the number of nodes step by step (epoch by epoch). Most importantly, the paper gives a mathematical proof that the gradient descent steps on the GNN stay close to those on the graphon neural network (WNN), the limit object of the GNN. The paper also demonstrates the proposed method on a recommendation system and a decentralized control problem, showing reduced computational cost.
Increase and Conquer: Training Graph Neural Networks on Growing Graphs
1 INTRODUCTION. Graph Neural Networks (GNNs) are deep convolutional architectures formed by a succession of layers, where each layer composes a graph convolution and a pointwise nonlinearity (Wu et al., 2021; Zhou et al., 2020). Tailored to network data, GNNs have been used in a variety of applications such as recommendation systems (Fan et al., 2019; Tan et al., 2020; Ying et al., 2018; Schlichtkrull et al., 2018; Ruiz et al., 2019a) and Markov chains (Qu et al., 2019; Ruiz et al., 2019b; Li et al., 2015), and in fields such as biology (Fout et al., 2017; Duvenaud et al., 2015; Gilmer et al., 2017; Chen et al., 2020) and robotics (Qi et al., 2018; Gama & Sojoudi, 2021; Li et al., 2019). Their success in these fields and applications provides ample empirical evidence of the ability of GNNs to generalize to unseen data. More recently, their successful performance has also been justified by theoretical works showing that GNNs are invariant to relabelings (Chen et al., 2019; Keriven & Peyré, 2019), stable to graph perturbations (Gama et al., 2020), and transferable across graphs (Ruiz et al., 2020a). One of the most important features of a GNN is that, because its linear operation is a graph convolution, its number of parameters does not depend on the number of nodes of the graph. In theory, this means that GNNs can be trained on graphs of any size. In practice, however, if the graph has a large number of nodes, training the GNN is costly because computing graph convolutions involves large matrix operations. While this issue could be mitigated by transferability (training the GNN on a smaller graph to execute on the large graph), this approach does not give any guarantees on the distance between the optimal solutions on the small and on the large graph.
In other words, when executing the GNN on the target graph we do not know whether its error will be dominated by the transferability error or by the generalization error from training. In this paper, we address the computational burden of training a GNN on a large graph by progressively increasing the size of the network. We consider the limit problem of learning an "optimal" neural network for a graphon, which is both a graph limit and a random graph model (Lovász, 2012). We postulate that, because sequences of graphs sampled from the graphon converge to it, the so-called graphon neural network (Ruiz et al., 2020a) can be learned by sampling graphs of growing size and training a GNN on these graphs (Algorithm 1). We prove that this is true in two steps. In Theorem 1, we bound the expected distance between the gradient descent steps on the GNN and on the graphon neural network by a term that decreases asymptotically with the size of the graph. A consequence of this bias bound is that it allows us to quantify the trade-off between a more accurate gradient and one that can be obtained with less computational power. We then use this theorem to prove our main result in Theorem 2, which is stated in simplified form below.

Theorem (Graphon neural network learning, informal) Let W be a graphon and let {G_n} be a sequence of growing graphs sampled from W. Consider the graphon neural network Φ(W) and assume that it is learned by training the GNN Φ(G_n) with loss function ℓ(y_n, Φ(G_n)) on the sequence {G_n}. Over a finite number of training steps, we obtain ‖∇ℓ(Y, Φ(W))‖ ≤ ε with probability 1.

The most important implication of this result is that the learning iterates computed on the sequence of growing graphs follow the direction of the graphon gradient up to a small ball, which provides theoretical validation for our cost-efficient training methodology. We also validate our algorithm in two numerical experiments.
In the first , we learn a GNN-based recommendation system on increasingly large subnetworks of a movie similarity graph , and compare it with the recommendation system trained on the full graph . In the second , we consider the problem of flocking and train GNNs to learn the actions agents need to take to flock . We compare the results obtained when progressively increasing the number of agents during training and when training directly on the target graph . 2 RELATED WORK . GNNs are data processing architectures that follow from the seminal works in the areas of deep learning applied to graph theory ( Bruna et al. , 2013 ; Defferrard et al. , 2016 ; Gori et al. , 2005 ; Lu & Getoor , 2003 ) . They have been successfully used in a wide variety of statistical learning problems ( Kipf & Welling , 2016 ; Scarselli et al. , 2018 ) , where their good performance is generally attributed to the fact that they exploit invariances present in network data ( Maron et al. , 2019 ; Gama et al. , 2018 ; Chami et al. , 2021 ) . More recently , a number of works show that GNNs can be transferred across graphs of different sizes ( Ruiz et al. , 2020a ; Levie et al. , 2019 ; Keriven et al. , 2020 ) . Specifically , ( Ruiz et al. , 2020a ) leverages graphons to define families of graphs within which GNNs can be transferred with an error bound that decreases asymptotically with the size of the graph . The papers by Levie et al . ( 2019 ) and Keriven et al . ( 2020 ) offer similar results by considering the graph limit to be a generic topological space and a random graph model respectively . In this paper , we use an extension of the transferability bound derived in Ruiz et al . ( 2020a ) to propose a novel learning algorithm for GNNs . 3 PRELIMINARY DEFINITIONS . 3.1 GRAPH NEURAL NETWORKS . Graph neural networks exploit graph symmetries to extract meaningful information from network data ( Ruiz et al. , 2020c ; Gama et al. , 2020 ) . 
Graphs are represented as triplets G_n = (V, E, W), where V, |V| = n, is the set of nodes, E ⊆ V × V is the set of edges, and W : E → R is a map assigning weights to each edge. The graph G_n can also be represented by the graph shift operator (GSO) S ∈ R^{n×n}, a square matrix that respects the sparsity of the graph. Examples of GSOs include the adjacency matrix A, the graph Laplacian L = diag(A1) − A, and their normalized counterparts (Gama et al., 2018). In this paper we consider the graph G_n to be undirected and fix S = A/n. Graph data is represented in the form of graph signals. A graph signal x = [x_1, …, x_n]^T ∈ R^n is a vector whose i-th component corresponds to the information present at the i-th node of graph G_n. A basic data aggregation operation can be defined by applying the GSO S to a graph signal x. The resulting signal z = Sx is such that the data at node i is a weighted average of the information in the 1-hop neighborhood of i, z_i = Σ_{j∈N_i} [S]_{ij} x_j, where N_i = {j | [S]_{ij} ≠ 0}. Information coming from farther neighborhoods can be aggregated by successive applications of the GSO, also called shifts. Using this notion of shift, graph convolutions are defined by weighting the contribution of each successive application of S to define a polynomial in the GSO. Explicitly, the graph convolutional filter with coefficients h = [h_0, …, h_{K−1}] is given by

y = h *_S x = Σ_{k=0}^{K−1} h_k S^k x    (1)

where *_S denotes the convolution operation with GSO S. Since the adjacency matrix of an undirected graph is always symmetric, the GSO admits an eigendecomposition S = VΛV^H. The columns of V are the graph eigenvectors and the diagonal elements of Λ are the graph eigenvalues, which take values between −1 and 1 and are ordered as −1 ≤ λ_{−1} ≤ λ_{−2} ≤ … ≤ 0 ≤ … ≤ λ_2 ≤ λ_1 ≤ 1.
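As a concrete illustration of the polynomial filter in Equation 1, here is a minimal NumPy sketch; the 3-node path graph and the filter coefficients are made up for the example, and the GSO is fixed to A/n as in the text.

```python
import numpy as np

def graph_convolution(h, S, x):
    """Apply the graph convolutional filter y = sum_k h_k S^k x  (Eq. 1).

    h : sequence of K filter coefficients h_0, ..., h_{K-1}
    S : n x n graph shift operator
    x : graph signal of length n
    """
    y = np.zeros_like(x, dtype=float)
    Skx = x.astype(float)          # S^0 x
    for hk in h:
        y += hk * Skx              # accumulate h_k S^k x
        Skx = S @ Skx              # next shift: S^{k+1} x
    return y

# Tiny undirected 3-node path graph (illustrative only)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
S = A / A.shape[0]                 # GSO fixed to A/n, as in the paper
x = np.array([1.0, 0.0, 0.0])
y = graph_convolution([1.0, 0.5], S, x)   # h_0 * x + h_1 * S x
```

Because the filter is just a polynomial in S applied to x, the cost is K sparse matrix-vector products, which is what makes large graphs expensive.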
Since the eigenvectors of S form an orthonormal basis of R^n, we can project (1) onto this basis to obtain the spectral representation of the graph convolution, which is given by

h(λ) = Σ_{k=0}^{K−1} h_k λ^k.    (2)

Note that (2) only depends on the coefficients h_k and on the eigenvalues of the GSO. Hence, as a consequence of the Cayley-Hamilton theorem, convolutional filters may be used to represent any graph filter with spectral representation h(λ) = f(λ) where f is analytic (Strang, 1976). Graph neural networks are layered architectures where each layer consists of a graph convolution followed by a pointwise nonlinearity ρ, and where each layer's output is the input to the following layer. At layer l, a GNN can output multiple features x_l^f, 1 ≤ f ≤ F_l, which we stack in a matrix X_l = [x_l^1, …, x_l^{F_l}] ∈ R^{n×F_l}. Each column of the feature matrix is the value of the graph signal at feature f. To map the F_{l−1} features coming from layer l−1 into F_l features, F_{l−1} × F_l convolutions need to be computed, one per input-output feature pair. Stacking their weights in K matrices H_{lk} ∈ R^{F_{l−1}×F_l}, we write the l-th layer of the GNN as

X_l = ρ( Σ_{k=0}^{K−1} S^k X_{l−1} H_{lk} ).    (3)

In an L-layer GNN, the operation in (3) is cascaded L times to obtain the GNN output Y = X_L. At the first layer, the GNN input is given by X_0 = X ∈ R^{n×F_0}. In this paper we assume F_0 = F_L = 1, so that Y = y and X = x. A more concise representation of this GNN can be obtained by grouping all learnable parameters H_{lk} in a tensor H = {H_{lk}}_{l,k} and defining the map y = Φ(x; H, S). Due to the polynomial nature of the graph convolution, the dimensions of the learnable parameter tensor H are independent of the size of the graph (K is typically much smaller than n). Hence, a GNN trained on a graph G_n can be deployed on a network G_m with m ≠ n. 3.2 GRAPHON INFORMATION PROCESSING.
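The layer update in Equation 3 can be sketched directly; ReLU is used below as an example pointwise nonlinearity ρ, and the sizes (n = 4, F_in = 2, F_out = 3, K = 2) are illustrative, not from the paper.

```python
import numpy as np

def gnn_layer(X, S, H, rho=lambda z: np.maximum(z, 0.0)):
    """One GNN layer: X_l = rho( sum_{k=0}^{K-1} S^k X_{l-1} H_{lk} )  (Eq. 3).

    X : n x F_in input feature matrix X_{l-1}
    S : n x n graph shift operator
    H : list of K coefficient matrices H_{lk}, each F_in x F_out
    rho : pointwise nonlinearity (ReLU here, as an example)
    """
    Z = np.zeros((X.shape[0], H[0].shape[1]))
    SkX = X.astype(float)                 # S^0 X
    for Hk in H:
        Z += SkX @ Hk                     # add S^k X H_k
        SkX = S @ SkX                     # next power of S
    return rho(Z)

# Illustrative sizes: 4 nodes, 2 input features, 3 output features, 2 taps
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 2))
S = np.eye(4)                             # trivial GSO, just for shape-checking
H = [rng.normal(size=(2, 3)) for _ in range(2)]
X_out = gnn_layer(X, S, H)
```

Note that the parameter matrices H_{lk} never depend on n, which is exactly the size-independence property that lets a trained GNN be deployed on a larger graph.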
A graphon is a bounded, symmetric, and measurable function W : [0,1]² → [0,1] which has two theoretical interpretations: it is both a graph limit and a generative model for graphs. In the first interpretation, sequences of dense graphs converge to a graphon in the sense that the densities of adjacency-preserving graph motifs converge to the same densities on the graphon (Lovász, 2012). In the second, graphs can be generated from a graphon by sampling points u_i, u_j from the unit interval and either assigning weight W(u_i, u_j) to edge (i, j), or sampling edge (i, j) with probability W(u_i, u_j). In this paper, we focus on stochastic graphs G_n where the points u_i are defined as u_i = (i−1)/n for 1 ≤ i ≤ n and where the adjacency matrix S_n is sampled from W as

[S_n]_{ij} ∼ Bernoulli( W(u_i, u_j) ).    (4)

Sequences of graphs generated in this way can be shown to converge to W with probability one (Lovász, 2012, Chapter 11). In practice, the two theoretical interpretations of a graphon allow thinking of it as an identifying object for a family of graphs of different sizes that are structurally similar. Hence, given a network, we can use its family's identifying graphon as a continuous proxy for the graph. This is beneficial because it is typically easier to operate in continuous than in discrete domains, even more so when the network is large. We will leverage these ideas to consider graphon data and graphon neural networks as proxies for graph data and GNNs supported on graphs of arbitrary size.
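The sampling scheme in Equation 4 is straightforward to implement. Below is a minimal sketch; the smooth graphon W(u, v) = exp(−|u − v|) is a made-up example for illustration, not one used in the paper, and the sampled matrix is symmetrized with no self-loops, as appropriate for an undirected graph.

```python
import numpy as np

def sample_graph_from_graphon(W, n, rng):
    """Sample an n-node undirected graph as in Eq. 4:
    u_i = (i-1)/n and [S_n]_ij ~ Bernoulli(W(u_i, u_j)),
    symmetrized and without self-loops."""
    u = np.arange(n) / n                       # u_i = (i-1)/n for 1 <= i <= n
    P = W(u[:, None], u[None, :])              # edge probabilities W(u_i, u_j)
    draws = rng.random((n, n)) < P             # independent Bernoulli draws
    A = np.triu(draws, k=1)                    # keep the upper triangle only
    return (A | A.T).astype(float)             # undirected, zero diagonal

# Made-up smooth graphon for illustration
W = lambda u, v: np.exp(-np.abs(u - v))
Sn = sample_graph_from_graphon(W, 50, np.random.default_rng(0))
```

Sampling increasingly large graphs from the same W in this way produces the growing sequence {G_n} on which the training algorithm operates.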
This paper shows that a graph neural network (GNN) trained on an increasingly larger graph behaves like a graphon neural network (WNN), the "limit object of a GNN", which arises when considering a continuous graphon instead of a discrete graph (the graphon can be seen as a generative model from which the discrete graph is sampled). This observation is translated into two theorems that bound the expected distance between the GNN and the WNN as a function of the number of nodes used to train the GNN. The authors then propose an algorithm to train a GNN by gradually adding nodes to the training data, leading to a computational advantage compared to training the GNN on the full dataset from the start.
Attention-based Interpretability with Concept Transformers
1 INTRODUCTION. The spectacular gains in accuracy of recent large-scale machine learning models like deep neural networks have generally come at the cost of a loss of transparency into their functioning. This "black box" aspect severely limits their applicability in safety-critical domains, such as medical diagnostics, healthcare, public infrastructure safety, and visual inspection for civil engineering, to name just a few, where it is essential for decisions to be corroborated by robust domain-relevant knowledge. In recent years, approaches focusing on explaining black-box models have emerged, mostly with the goal of providing post-hoc explanations in terms of a set of relevant features used by the underlying model to make predictions (Ribeiro et al., 2016; Selvaraju et al., 2017; Lundberg & Lee, 2017; Smilkov et al., 2017). While widely used, such post-hoc explainability methods have been criticized for operating on low-level features such as pixel values, or sensory signals that are combined in unintelligible ways and do not correspond to high-level concepts that humans easily understand (Kim et al., 2018; Alvarez-Melis & Jaakkola, 2018; Kindermans et al., 2019; Su et al., 2019). To overcome these limitations and sidestep the potential perils resulting from a misuse of post-hoc explainability of black-box models, some researchers have been vocally advocating for the use of inherently interpretable models (Rudin, 2019) that in particular would generate decisions based on human-understandable categories (i.e., concepts) grounded in domain expertise rather than raw features (Barbiero et al., 2021; Ghorbani et al., 2019; Kim et al., 2018; Yeh et al., 2020; Koh et al., 2020; Goyal et al., 2019; Kazhdan et al., 2020; Chen et al., 2020; Alvarez-Melis & Jaakkola, 2018; Li et al., 2018; Chen et al., 2019).
For example, to identify a bird species, a model should focus on morphologically meaningful concepts, such as the shape, size and colors of the beak, feathers or wings, rather than on raw pixels, and combine them in ways that a domain expert (in this case an ornithologist) would reckon as intelligible to produce a classification. In addition, using high-level concepts emulates a human's thinking process (i.e., structured into familiar concepts) and provides insights into the model's reasoning in a human-understandable way. The chasm between post-hoc explainability and inherently interpretable models closely reflects a related ongoing discussion in the NLP community on the interpretation of attention mechanisms (Bahdanau et al., 2014), and in particular on the interpretability of attention weights over input tokens, with researchers on one end of the debate claiming that attention provides interpretability, while others claim that "Attention is not explanation" (Jain & Wallace, 2019). While the debate over the degree of interpretability that can be ascribed to attention weights is still not settled (Wiegreffe & Pinter, 2019), it is arguable that in many situations attention is not a "fail-safe indicator" (Serrano & Smith, 2019), particularly when decisions rely on the interaction of multiple interacting tokens, as is typically the case in deep architectures. Conversely then, a way to guarantee the interpretability of attention weights would be to make sure that they are not processed by downstream operations that render their relation to the decision outputs uninterpretable.
This is indeed something that had been proposed in the past, in particular in architectures that preserve the interpretability of "relevance scores" (akin to attention weights) by acting on them only through a restricted class of intelligible "aggregation functions" such as additive models (Alvarez-Melis & Jaakkola, 2018), which are common functional elements in interpretable and white-box models (Caruana et al., 2015). In this paper, we propose the ConceptTransformer (CT), a transformer-based module (Vaswani et al., 2017) for classification tasks that can be used to enhance an arbitrary deep learning classifier with domain knowledge in the form of plausible cross-attention weights between input features and high-level interpretable concepts. The CT can be used as a drop-in replacement for the classifier head of any deep learning architecture. The resulting model can then be trained end-to-end without any additional overhead on the training pipeline, except for a modification of the loss function that enforces plausibility of the explanation. Importantly, the CT was specifically conceived to provide explanations that guarantee faithfulness by design and plausibility by construction. Faithfulness is defined as the degree to which the explanation reflects the decision, and aims therefore to ensure that the explanations are indeed explaining the model's operation (Lakkaraju et al., 2019; Guidotti et al., 2018). In our model this is achieved by enforcing a linear relation between the transformer value vectors that represent the concepts and their contribution to the classification log-probabilities. Plausibility refers to how convincing the interpretation is to humans (Guidotti et al., 2018; Carvalho et al., 2019). In the CT architecture, plausibility is achieved by construction by supervising the attention heads of the cross-attention mechanism to conform with input-concept-output relations derived from domain knowledge.
We validate our approach on three image benchmark datasets: MNIST Even/Odd (Barbiero et al., 2021), CUB-200-2011 (Welinder et al., 2010), and aPY (Farhadi et al., 2009). On these datasets we examine how the faithfulness and plausibility of the CT explanations practically translate into domain-relevant explanations behind particular output decisions, or into diagnostic insights about ensuing wrong classifications. We also quantify the benefit of domain-expert knowledge in terms of statistical efficiency by showing that providing domain-relevant explanations to our CT model tends to improve the performance of the downstream classification, in particular in the low-data regime. This, for instance, translates into an 8-9% improvement in accuracy on the CUB-200-2011 bird classification dataset when CT is trained in conjunction with part location annotations. We note in addition that one of the strengths of our CT model is its versatility, which allows it to be effortlessly applied to other data modalities as well, by combining it with deep learning classifiers which are then rendered interpretable with no appreciable overhead or change in their training pipeline. This is in stark contrast to other inherently interpretable models that are often specifically designed for the domain at hand and require ad-hoc multi-stage training procedures. CT, on the other hand, is differentiable and can be flexibly included in any end-to-end training pipeline that uses backpropagation, as we showcase by combining it with a host of different deep learning backbones ranging from convolutional architectures like Residual Networks (He et al., 2016) to more modern Vision Transformer (Dosovitskiy et al., 2020) and hybrid Compact Convolutional Transformer models (Hassani et al., 2021). 2 RELATED WORK.
In recent years, there have been significant advancements towards designing interpretable models that quantify the importance of individual features with respect to the prediction output. One general approach is post-hoc analysis, in which one interprets a trained model by fitting explanations to the classification outputs (Alvarez-Melis & Jaakkola, 2018; Ribeiro et al., 2016; Lundberg & Lee, 2017). In particular, for CNNs, popular techniques are activation maximization (van den Oord et al., 2016; Nguyen et al., 2016; Yosinski et al., 2015) and saliency visualization (Selvaraju et al., 2017; Smilkov et al., 2017; Sundararajan et al., 2017). However, these post-hoc methods do not actually explain how the underlying model reached a particular classification outcome. In contrast, attention-based interpretable techniques aim to expose which parts of a given input a network focuses on, and therefore deems to be most relevant, when making a decision. Examples of attention models are Zhang et al. (2014); Zhou et al. (2016; 2018); Zheng et al. (2017); Fu et al. (2017); Girshick (2015); Girshick et al. (2014); Huang et al. (2016). The problem with these models is that they focus on low-level individual features when providing an explanation. Such features are often not intuitive for humans, are typically noisy and non-robust, or can be misleading when interpreted afterwards (Kim et al., 2018). One of the recent advancements in the field of interpretability was to design methods that explain predictions with high-level human-understandable concepts (Ghorbani et al., 2019; Kim et al., 2018; Yeh et al., 2020; Koh et al., 2020; Goyal et al., 2019; Kazhdan et al., 2020; Chen et al., 2020; Barbiero et al., 2021; Li et al., 2018; Chen et al., 2019), either by identifying common activation patterns in the last nodes of the neural network corresponding to human-understandable categories, or by constraining the network to learn such concepts. For instance, TCAV (Kim et al., 2018) proposes to define concepts from user-annotated examples in which the concepts appear. Others propose prototype-based explanation models (Li et al., 2018; Chen et al., 2019), but they typically require specialized convolutional architectures to ensure feature extraction. In particular, ProtoPNet (Li et al., 2018) uses previously learned prototypes to focus attention on various parts of an image. This architectural design implies that object-level (global) concepts cannot be easily incorporated, and since the prototypes are not learned together with the attention model, explanations based on these prototypes may lack faithfulness. SENN (Alvarez-Melis & Jaakkola, 2018) proposes a network that transforms inputs into interpretable basic features, generates concept relevance scores, and then aggregates concepts with relevance scores to explain predictions. While it is out-of-the-box interpretable, it lacks concept localization. Barbiero et al. (2021) proposed a differentiable approach that allows the extraction of logic explanations from neural network models using First-Order Logic. The approach relies on an entropy-based criterion to automatically identify the most relevant concepts that have contributed to a particular classification output. In our approach, high-level concepts are defined with a set of related dimensions and can be part-specific or global. Such concepts are typically readily available in many domains and can be used to enhance the performance of the learning task while offering explainability at no additional cost for the network. The obtained explanations are plausible and guaranteed to be faithful, since the concepts participate in the model computation.
Finally, CT also allows in some cases to discover the presence of concepts that were not annotated. 3 APPROACH. The ConceptTransformer module. The ConceptTransformer (CT) is a transformer-based module designed to be used as the classifier head of a deep learning architecture; it generates classification outputs using cross-attention (Vaswani et al., 2017) between input features and a set of embeddings representing domain-relevant concepts. Fig. 1 shows the case where the inputs to the CT are embeddings of P visual patches of an input image that are linearly projected and concatenated into the query matrix Q ∈ R^{P×d_m} of a query-key-value cross-attention mechanism, whose corresponding key matrix K ∈ R^{C×d_m} is the linearly projected concatenation of the embeddings representing the C concepts. In addition, the concepts are linearly projected with a value projection matrix and concatenated to form the value matrix V ∈ R^{C×d_m}. Cross-attention then outputs an attention weight

α_{pc} = softmax( QK^T / √d_m )_{pc}, with p = 1, …, P and c = 1, …, C,

between each patch-concept pair; these are combined into an attention map matrix A = [α_{pc}] ∈ R^{P×C}. The final output of the CT is obtained by multiplying the attention map A, the value matrix V, and an output matrix O ∈ R^{d_m×n_c} that projects onto the (unnormalized) n_c logits over the output classes, and averaging over patches:

logit_i = (1/P) Σ_{p=1}^{P} [AVO]_{pi}, with i = 1, …, n_c.    (1)

Notice that for simplicity we have described a single-head attention model here, but in our experiments we use a multi-head version (Vaswani et al., 2017). Equation 1 says that, given an input x to the network, the conditional probability of output class i is

Pr(i|x) = softmax_i( Σ_{c=1}^{C} β_c γ_c(x) ), with β_c having components (β_c)_i = [VO]_{ci},    (2)

and γ_c(x) are positive relevance scores that depend on x through the averaged attention weights: γ_c(x) = (1/P) Σ_{p=1}^{P} α_{pc}.
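To make the computation in Equations 1 and 2 concrete, here is a minimal NumPy sketch of the single-head CT head; the dimensions (5 patches, 4 concepts, d_m = 8, 3 classes) and the random inputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def concept_transformer_head(Q, K, V, O):
    """Single-head CT classifier head (Eq. 1).

    Q : P x d_m projected patch embeddings (queries)
    K, V : C x d_m projected concept embeddings (keys, values)
    O : d_m x n_c output projection
    Returns (logits, A), with A the P x C patch-concept attention map."""
    dm = Q.shape[1]
    A = softmax(Q @ K.T / np.sqrt(dm), axis=1)  # alpha_pc; each row sums to 1
    logits = (A @ V @ O).mean(axis=0)           # average over the P patches
    return logits, A

# Illustrative sizes: 5 patches, 4 concepts, d_m = 8, 3 output classes
rng = np.random.default_rng(0)
P, C, dm, nc = 5, 4, 8, 3
Q = rng.normal(size=(P, dm))
K = rng.normal(size=(C, dm))
V = rng.normal(size=(C, dm))
O = rng.normal(size=(dm, nc))
logits, A = concept_transformer_head(Q, K, V, O)

# Eq. 2: the same logits as a linear model over relevance scores gamma_c
gamma = A.mean(axis=0)                          # gamma_c(x) = mean_p alpha_pc
beta = V @ O                                    # (beta_c)_i = [VO]_ci
```

By linearity, the averaged-attention logits equal gamma @ beta exactly, which is the linear relation between relevance scores and outputs that the faithfulness argument relies on.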
The output of the CT is therefore essentially a simple multinomial logistic regression model over positive variables γ_c(x) that measure the contribution of each concept. Notice that this result follows from the linear relation between the value vectors and the classification logits, which itself comes from the design choices of computing outputs from the value matrix V through the linear projection VO, and of aggregating patch contributions by averaging. Faithful concept-based explanations by design. The CT was conceived as a drop-in replacement for the classifier head of an arbitrary deep learning classifier, providing concept-based explanations of the outputs that are guaranteed to be faithful by design. We formalize this statement as follows:

Proposition 1 Each concept relevance score γ_c(x) in Equation 2 is a faithful explanation of the output. More specifically, the probability of choosing the preferred output i_c = argmax_i (β_c)_i of concept c (assuming it is unique) is guaranteed to decrease if γ_c(x) is externally set to zero. Moreover, the correlation between γ_c(x) and Pr(i_c|x) is strictly positive.

Proof. The proof of Proposition 1 is provided in Appendix A.

Note that the last statement in the proposition above is a corollary of the first one, and it specifically shows that CT is guaranteed to satisfy the technical definitions of faithfulness given, for instance, by Alvarez-Melis & Jaakkola (2018). Training and plausibility by construction. As mentioned, the CT is a differentiable transformer-based module that can be embedded in a deep learning architecture trained end-to-end with backpropagation. In addition, the fact that it exposes attention weights over concept tokens that can be user-defined gives us the freedom to shape these attention weights according to domain-expert knowledge relevant for the problem under consideration.
This can be done by explicitly guiding the attention heads to attend to concepts in the input that are a priori known to be informative for correctly classifying the input. In practice, this can be achieved by supervising the attention weights during training, as for instance proposed by Deshpande & Narasimhan (2020) as a self-supervised technique for bidirectional language models. In particular, given a desired distribution of attention H provided by domain knowledge (e.g., we know which patches in the input image contain which concepts that are relevant to classify the input), we can guide the CT attention weights A by adding an "explanation cost" term to the loss function that is proportional to

L_expl = ‖A − H‖_F²,

where ‖·‖_F is the Frobenius norm. The final loss used to train the architecture then becomes L = L_cls + λ L_expl, where L_cls denotes the original classification loss, L_expl the additional explanation loss, and the constant λ ≥ 0 controls the relative contribution of the explanation loss to the total loss. Notice that setting λ = 0 essentially amounts to just minimizing the classification loss and disregarding the prior domain knowledge as imparted into CT by guiding the attention heads.
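A minimal sketch of the combined loss L = L_cls + λ L_expl follows. Cross-entropy is used here as a stand-in for the classification loss L_cls, whose exact form the text does not pin down; the attention sizes are illustrative.

```python
import numpy as np

def ct_loss(logits, y, A, H, lam):
    """Total CT loss L = L_cls + lambda * L_expl, where
    L_expl = ||A - H||_F^2 is the explanation cost (squared Frobenius norm)
    and cross-entropy stands in for the classification loss L_cls."""
    z = logits - logits.max()                 # numerically stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    l_cls = -log_probs[y]                     # cross-entropy for true class y
    l_expl = np.sum((A - H) ** 2)             # squared Frobenius norm
    return l_cls + lam * l_expl

logits = np.array([1.0, 2.0, 0.5])
A = np.full((3, 2), 0.5)      # attention map: 3 patches over 2 concepts
H = A.copy()                  # desired attention from domain knowledge
loss = ct_loss(logits, y=1, A=A, H=H, lam=0.5)   # L_expl = 0 here
```

When A matches H the explanation term vanishes, so only the classification loss remains; any mismatch between predicted and desired attention is penalized in proportion to λ.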
The paper proposes a new transformer-based module, called the ConceptTransformer, with the aim of improving the interpretability of attention by computing cross-attention between the input features and a set of concepts. This should make the model both more explainable, in the sense that it is easier for a human to interpret the attention weights over meaningful concepts, and also more faithful, in the sense that the attention scores given to particular concepts directly impact the final prediction of the model. Update: Following the author response, I'd like to keep my positive score, reflecting my view of the paper.
Attention-based Interpretability with Concept Transformers
1 INTRODUCTION . The spectacular gains in accuracy of recent large-scale machine learning models like deep neural networks have generally come at the cost of a loss of transparency into their functioning . This “ black box ” aspect severely limits their applicability in safety-critical domains , such as medical diagnostics , healthcare , public infrastructure safety , visual inspection for civil engineering , to name just a few , where it is essential for decisions to be corroborated by robust domain-relevant knowledge . In recent years , approaches focusing on explaining black box models have emerged , mostly with the goal of providing post-hoc explanations in terms of a set of relevant features used by the underlying model to make predictions ( Ribeiro et al. , 2016 ; Selvaraju et al. , 2017 ; Lundberg & Lee , 2017 ; Smilkov et al. , 2017 ) . While widely used , such post-hoc explainability methods have been criticized for operating on low-level features such as pixel values , or sensory signals that are combined in unintelligible ways and do not correspond to high-level concepts that humans easily understand ( Kim et al. , 2018 ; Alvarez-Melis & Jaakkola , 2018 ; Kindermans et al. , 2019 ; Su et al. , 2019 ) . To overcome these limitations and sidestep the potential perils resulting from a misuse of posthoc explainability of black box models , some researchers have been vocally advocating for the use of inherently interpretable models ( Rudin , 2019 ) that in particular would generate decisions based on human-understandable categories ( i.e. , concepts ) grounded in domain expertise rather than raw features ( Barbiero et al. , 2021 ; Ghorbani et al. , 2019 ; Kim et al. , 2018 ; Yeh et al. , 2020 ; Koh et al. , 2020 ; Goyal et al. , 2019 ; Kazhdan et al. , 2020 ; Chen et al. , 2020 ; Alvarez-Melis & Jaakkola , 2018 ; Li et al. , 2018 ; Chen et al. , 2019 ) . 
For example , to identify a bird species , a model should focus on morphologically meaningful concepts , such as the shape , size and colors of beak , feathers or wings , rather than focusing on raw pixels , and combine them in ways that a domain expert ( in this case an ornithologist ) would reckon as intelligible to produce a classification . In addition , using high-level concepts emulates a human ’ s thinking process ( i.e. , structured into familiar concepts ) and provides insights into the model ’ s reasoning in a human-understandable way . The chasm between post-hoc explainability vs. inherently interpretable models closely reflects a related ongoing discussion in the NLP community on the interpretation of attention mechanisms ( Bahdanau et al. , 2014 ) , and in particular on the interpretability of attention weights over input tokens , with researchers on one end of the debate claiming that attention provides interpretability , while others claim that “ Attention is not explanation ” ( Jain & Wallace , 2019 ) . While the debate over what degree of interpretability that can be ascribed to attention weights is still not settled ( Wiegreffe & Pinter , 2019 ) , it is arguable that in many situations attention is not a “ fail-safe indicator ” ( Serrano & Smith , 2019 ) , particularly when decisions rely on the interaction of multiple interacting tokens as is typically the case in deep architectures . Conversely then , a way to guarantee the interpretability of attention weights would be to make sure that they are not being processed by downstream operations that renders their relation to the decision outputs uninterpretable . 
This is indeed something that has been proposed in the past, in particular in architectures that preserve the interpretability of “relevance scores” (akin to attention weights) by acting on them only through a restricted class of intelligible “aggregation functions” such as additive models (Alvarez-Melis & Jaakkola, 2018), which are common functional elements in interpretable and white-box models (Caruana et al., 2015). In this paper, we propose the ConceptTransformer (CT), a transformer-based module (Vaswani et al., 2017) for classification tasks that can be used to enhance an arbitrary deep learning classifier with domain knowledge in the form of plausible cross-attention weights between input features and high-level interpretable concepts. The CT can be used as a drop-in replacement for the classifier head of any deep learning architecture. The resulting model can then be trained end-to-end without any additional overhead on the training pipeline, except for a modification of the loss function that enforces plausibility of the explanation. Importantly, the CT was specifically conceived to provide explanations that guarantee faithfulness by design and plausibility by construction. Faithfulness is defined as the degree to which the explanation reflects the decision, and therefore aims to ensure that the explanations are indeed explaining the model's operation (Lakkaraju et al., 2019; Guidotti et al., 2018). In our model this is achieved by enforcing a linear relation between the transformer value vectors that represent the concepts and their contribution to the classification log-probabilities. Plausibility refers to how convincing the interpretation is to humans (Guidotti et al., 2018; Carvalho et al., 2019). In the CT architecture, plausibility is achieved by construction by supervising the attention heads of the cross-attention mechanism to conform with input-concept-output relations derived from domain knowledge.
We validate our approach on three image benchmark datasets: MNIST Even/Odd (Barbiero et al., 2021), CUB-200-2011 (Welinder et al., 2010), and aPY (Farhadi et al., 2009). On these datasets we examine how the faithfulness and plausibility of the CT explanations translate in practice into domain-relevant explanations of particular output decisions, or into diagnostic insights about wrong classifications. We also quantify the benefit of domain-expert knowledge in terms of statistical efficiency by showing that providing domain-relevant explanations to our CT model tends to improve the performance of the downstream classification, in particular in low-data regimes. For instance, this translates into an 8-9% improvement in accuracy on the CUB-200-2011 bird classification dataset when CT is trained in conjunction with part location annotations. We note in addition that one of the strengths of our CT model is its versatility, which allows it to be effortlessly applied to other data modalities as well by combining it with deep learning classifiers, which are then rendered interpretable with no appreciable overhead or change in their training pipeline. This is in stark contrast to other inherently interpretable models that are often specifically designed for the domain at hand and require ad-hoc multi-stage training procedures. CT, on the other hand, is differentiable and can be flexibly included in any end-to-end training pipeline that uses backpropagation, as we showcase by combining it with a host of different deep learning backbones, ranging from convolutional architectures like Residual Networks (He et al., 2016) to more modern Vision Transformer (Dosovitskiy et al., 2020) and hybrid Compact Convolutional Transformer models (Hassani et al., 2021). 2 RELATED WORK.
In recent years, there have been significant advancements towards designing interpretable models that quantify the importance of individual features with respect to the prediction output. One general approach is post-hoc analysis, in which one interprets a trained model by fitting explanations to the classification outputs (Alvarez-Melis & Jaakkola, 2018; Ribeiro et al., 2016; Lundberg & Lee, 2017). In particular for CNNs, popular techniques are activation maximization (van den Oord et al., 2016; Nguyen et al., 2016; Yosinski et al., 2015) and saliency visualization (Selvaraju et al., 2017; Smilkov et al., 2017; Sundararajan et al., 2017). However, these post-hoc methods do not actually explain how the underlying model reached a particular classification outcome. In contrast, attention-based interpretable techniques aim to expose which parts of a given input a network focuses on, and therefore deems to be most relevant, when making a decision. Examples of attention models are Zhang et al. (2014); Zhou et al. (2016; 2018); Zheng et al. (2017); Fu et al. (2017); Girshick (2015); Girshick et al. (2014); Huang et al. (2016). The problem with these models is that they focus on low-level individual features when providing an explanation. Such features are often not intuitive for humans, are typically noisy and non-robust, or can be misleading when interpreted afterwards (Kim et al., 2018). One of the recent advancements in the field of interpretability was to design methods that explain predictions with high-level human-understandable concepts (Ghorbani et al., 2019; Kim et al., 2018; Yeh et al., 2020; Koh et al., 2020; Goyal et al., 2019; Kazhdan et al., 2020; Chen et al., 2020; Barbiero et al., 2021; Li et al., 2018; Chen et al., 2019), either by identifying common activation patterns in the last nodes of the neural network corresponding to human-understandable categories, or by constraining the network to learn such concepts. For instance, TCAV (Kim et al., 2018) proposes to define concepts from user-annotated examples in which the concepts appear. Others propose prototype-based explanation models (Li et al., 2018; Chen et al., 2019), but they typically require specialized convolutional architectures to ensure feature extraction. In particular, ProtoPNet (Li et al., 2018) uses previously learned prototypes to focus attention on various parts of an image. This architectural design implies that object-level (global) concepts cannot be easily incorporated, and since prototypes are not learned together with the attention model, explanations based on these prototypes may lack faithfulness. SENN (Alvarez-Melis & Jaakkola, 2018) proposes a network that transforms inputs into interpretable basic features, generates concept relevance scores, and then aggregates concepts with relevance scores to explain predictions. While it is out-of-the-box interpretable, it lacks concept localization. Barbiero et al. (2021) proposed a differentiable approach that allows the extraction of logic explanations from neural network models using First-Order Logic. The approach relies on an entropy-based criterion to automatically identify the most relevant concepts that have contributed to a particular classification output. In our approach, high-level concepts are defined with a set of related dimensions, and can be part-specific or global. Such concepts are typically readily available in many domains and can be used to enhance the performance of the learning task while offering explainability at no additional cost for the network. The obtained explanations are plausible and guaranteed to be faithful, since the concepts participate in the model computation.
Finally, CT also allows in some cases the discovery of concepts that were not annotated. 3 APPROACH. The ConceptTransformer module. The ConceptTransformer (CT) is a transformer-based module designed to be used as the classifier head in a deep learning architecture; it generates classification outputs using cross-attention (Vaswani et al., 2017) between input features and a set of embeddings representing domain-relevant concepts. Fig. 1 shows the case where the inputs to the CT are embeddings of $P$ visual patches of an input image that are linearly projected and concatenated into the query matrix $Q \in \mathbb{R}^{P \times d_m}$ of a query-key-value cross-attention mechanism, whose corresponding key matrix $K \in \mathbb{R}^{C \times d_m}$ is the linearly projected concatenation of the embeddings representing the $C$ concepts. In addition, the concepts are linearly projected with a value projection matrix and concatenated to give the value matrix $V \in \mathbb{R}^{C \times d_m}$. Cross-attention then outputs an attention weight $\alpha_{pc} = \mathrm{softmax}\big(\tfrac{1}{\sqrt{d_m}} Q K^{\top}\big)_{pc}$, with $p = 1, \ldots, P$ and $c = 1, \ldots, C$, between each patch-concept pair; these weights are combined into an attention map matrix $A = [\alpha_{pc}] \in \mathbb{R}^{P \times C}$. The final output of the CT is obtained by multiplying the attention map $A$, the value matrix $V$ and an output matrix $O \in \mathbb{R}^{d_m \times n_c}$ that projects onto the (unnormalized) $n_c$ logits over the output classes, and averaging over patches: $\mathrm{logit}_i = \tfrac{1}{P} \sum_{p=1}^{P} [AVO]_{pi}$, with $i = 1, \ldots, n_c$. (1) Notice that for simplicity we described a single-head attention model here, but in our experiments we use a multi-head version (Vaswani et al., 2017). Equation 1 says that, given an input $x$ to the network, the conditional probability of output class $i$ is $\Pr(i \mid x) = \mathrm{softmax}_i\big(\sum_{c=1}^{C} \beta_c \, \gamma_c(x)\big)$, with $\beta_c$ having components $(\beta_c)_i = [VO]_{ci}$, (2) and $\gamma_c(x)$ are positive relevance scores that depend on $x$ through the averaged attention weights: $\gamma_c(x) = \tfrac{1}{P} \sum_{p=1}^{P} \alpha_{pc}$.
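As an illustration of Equations 1 and 2, the following minimal sketch implements a single-head CT forward pass in NumPy. The shapes follow the text; the random projection matrices stand in for learned parameters, and the function name is ours, not the paper's:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def concept_transformer_head(patches, concepts, Wq, Wk, Wv, O):
    """Single-head CT sketch (Equation 1).

    patches:  (P, d_in) patch embeddings
    concepts: (C, d_in) concept embeddings
    Wq, Wk, Wv: (d_in, d_m) query/key/value projections
    O: (d_m, n_c) output projection
    """
    Q = patches @ Wq                                  # (P, d_m)
    K = concepts @ Wk                                 # (C, d_m)
    V = concepts @ Wv                                 # (C, d_m)
    d_m = Q.shape[1]
    A = softmax(Q @ K.T / np.sqrt(d_m), axis=-1)      # (P, C) attention map
    logits = (A @ V @ O).mean(axis=0)                 # average over patches
    gamma = A.mean(axis=0)                            # concept relevance scores
    return logits, A, gamma
```

Because the patch average commutes with the linear projection, the returned logits equal $\sum_c \gamma_c (VO)_c$, i.e., the multinomial logistic regression form of Equation 2.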
The output of the CT is therefore essentially a simple multinomial logistic regression model over the positive variables $\gamma_c(x)$, which measure the contribution of each concept. Notice that this result follows from the linear relation between the value vectors and the classification logits, which itself comes from the design choices of computing outputs from the value matrix $V$ through the linear projection $VO$, and of aggregating patch contributions by averaging. Faithful concept-based explanations by design. The CT was conceived as a drop-in replacement for the classifier head of an arbitrary deep learning classifier that provides concept-based explanations of the outputs that are guaranteed to be faithful by design. We formalize this statement as follows: Proposition 1. Each concept relevance score $\gamma_c(x)$ in Equation 2 is a faithful explanation of the output. More specifically, the probability of choosing the preferred output $i_c = \mathrm{argmax}_i (\beta_c)_i$ of concept $c$ (assuming it is unique) is guaranteed to decrease if $\gamma_c(x)$ is externally set to zero. Moreover, the correlation between $\gamma_c(x)$ and $\Pr(i_c \mid x)$ is strictly positive. Proof. The proof of Proposition 1 is provided in Appendix A. Note that the last statement in the Proposition above is a corollary of the first one, and it specifically shows that CT is guaranteed to satisfy the technical definitions of faithfulness given, for instance, by Alvarez-Melis & Jaakkola (2018). Training and plausibility by construction. As mentioned, the CT is a differentiable transformer-based module that can be embedded in a deep learning architecture trained end-to-end with backpropagation. In addition, the fact that it exposes attention weights over concept tokens that can be user-defined gives us the freedom to shape these attention weights according to domain-expert knowledge relevant for the problem under consideration.
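Proposition 1 can be checked numerically: in the logistic-regression form of Equation 2, externally zeroing a concept's relevance score must lower the probability of that concept's preferred class. A small sketch with randomly drawn, hypothetical $\beta$ and $\gamma$ values:

```python
import numpy as np

def class_probs(gamma, beta):
    """Pr(i|x) = softmax_i(sum_c gamma_c * (beta_c)_i); beta has shape (C, n_classes)."""
    logits = gamma @ beta
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(1)
C, n_classes = 4, 5
beta = rng.normal(size=(C, n_classes))   # hypothetical (beta_c)_i = [VO]_ci entries
gamma = rng.uniform(0.1, 1.0, size=C)    # hypothetical positive relevance scores

c = 2                                    # concept whose relevance we ablate
i_c = int(beta[c].argmax())              # concept c's preferred output class
p_before = class_probs(gamma, beta)[i_c]
gamma_ablated = gamma.copy()
gamma_ablated[c] = 0.0                   # externally set gamma_c(x) to zero
p_after = class_probs(gamma_ablated, beta)[i_c]
assert p_after < p_before                # the decrease guaranteed by Proposition 1
```

The assertion holds for any $\gamma_c > 0$ with a unique preferred class: ablating concept $c$ subtracts $\gamma_c \beta_{c i}$ from every logit, which penalizes $i_c$ the least and therefore shifts probability mass away from it.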
This can be done by explicitly guiding the attention heads to attend to concepts in the input that are a priori known to be informative for correctly classifying the input. In practice this can be achieved by supervising the attention weights at training time, as for instance proposed by Deshpande & Narasimhan (2020) as a self-supervised technique for bidirectional language models. In particular, given a desired distribution of attention $H$ provided by domain knowledge (e.g., we know which patches in the input image contain which concepts that are relevant to classify the input), we can guide the CT attention weights $A$ by adding an “explanation cost” term to the loss function that is proportional to $L_{\mathrm{expl}} = \| A - H \|_F^2$, where $\| \cdot \|_F$ is the Frobenius norm. The final loss used to train the architecture then becomes $L = L_{\mathrm{cls}} + \lambda L_{\mathrm{expl}}$, where $L_{\mathrm{cls}}$ denotes the original classification loss, $L_{\mathrm{expl}}$ the additional explanation loss, and the constant $\lambda \geq 0$ controls the relative contribution of the explanation loss to the total loss. Notice that setting $\lambda = 0$ essentially amounts to just minimizing the classification loss and disregarding the prior domain knowledge that would be imparted into CT by guiding the attention heads.
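A minimal sketch of this combined objective; the choice of cross-entropy for $L_{\mathrm{cls}}$ is an assumption on our part, since the text only specifies the classification loss generically:

```python
import numpy as np

def total_loss(logits, label, A, H, lam=1.0):
    """L = L_cls + lambda * L_expl, with L_expl = ||A - H||_F^2.

    logits: (n_c,) unnormalized class outputs
    label:  index of the true class
    A, H:   (P, C) predicted / target attention maps
    """
    # cross-entropy classification loss (assumed form of L_cls)
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    l_cls = -log_probs[label]
    # squared Frobenius-norm explanation loss
    l_expl = np.sum((A - H) ** 2)
    return l_cls + lam * l_expl
```

With `lam=0` this reduces to the plain classification loss, and when the attention map already matches the target (`A == H`) the explanation term vanishes, mirroring the two limiting cases discussed above.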
The authors propose a transformer-based module that can enhance the explainability of a deep learning model by injecting domain knowledge into the model in the form of a cross-attention mechanism. The paper takes an interesting angle by addressing the relationship between post-hoc explainability and inherently interpretable models. The authors address a limitation of interpretable models, namely the controversy over how much a human should trust a model's explanations for its decisions.
Attention-based Interpretability with Concept Transformers
The paper introduces a transformer-based architecture for (deep) concept models. Similar to previous approaches in this area, the authors propose a replacement for the model's deep classifier head; here, they suggest using the attention mechanism. With the introduced ConceptTransformer module they are able to utilise global as well as local (spatial) concepts. After introducing the architecture and the supervised concept learning of the model, the model's performance is demonstrated on three benchmark datasets, of which only one makes use of both local and global concepts.
Delaunay Component Analysis for Evaluation of Data Representations
1 INTRODUCTION. Quality evaluation of learned data representations is gaining attention in the machine learning community due to the booming development of representation learning techniques. One common approach is to assess representation quality based on their performance on a pre-designed downstream task (Bevilacqua et al., 2021; Li et al., 2020). Typically, a classification problem is used to evaluate either the ability of a model to recover labels of raw inputs, or the transferability of its representations to other domains, as done in state-of-the-art unsupervised representation learning methods (Chen et al., 2020b; Ermolov et al., 2021). However, in many scenarios such a straightforward downstream classification task cannot be defined, for instance because it does not represent the nature of the application, or due to the scarcity of labeled data, as often occurs in robotics (Chamzas et al., 2021; Lippi et al., 2020). In these scenarios, representations are commonly evaluated on hand-crafted downstream tasks, e.g., specific robotics tasks. However, these are time-consuming to design, potentially bias the evaluation procedures, and consequently hinder the generalization of representations across different tasks. Recently, more general evaluation methods such as Geometry Score (GS) (Khrulkov & Oseledets, 2018), Improved Precision and Recall (IPR) (Kynkäänniemi et al., 2019) and Geometric Component Analysis (GeomCA) (Poklukar et al., 2021) have been proposed. These methods analyze global geometric and topological properties of representation spaces instead of relying on specific pre-designed downstream tasks. These works assume that a set of evaluation representations E is of high quality if it closely mimics the structure of the true data manifold captured by a reference set of representations R. *Correspondence to Petra Poklukar, poklukar@kth.se.
This reasoning implies that R and E must have similar geometric and topological properties, such as connected components, their number and size, which are extracted from various approximations of the data manifolds corresponding to R and E. For example, GS and GeomCA estimate the manifolds using simplicial complexes and proximity graphs, respectively, while IPR leverages a k-nearest neighbour (kNN) based approximation. However, as we discuss in Section 2, none of these approaches provides a reliable manifold estimate in complex arrangements of R and E, for instance, when points form clusters of varying shape and density, or in the presence of outliers. Moreover, the informativeness of the scores introduced by these methods is limited. In this work, we address the impediments to evaluation of learned representations arising from poor manifold approximations by relying on a more natural estimate using Delaunay graphs. As seen in Figure 1, edges (solid lines) in a Delaunay graph connect spatial neighbours and thus vary in length. In this way, they naturally capture local changes in the density of the representation space and thus more reliably detect outliers, all without depending on hyperparameters. We propose an evaluation framework called Delaunay Component Analysis (DCA) which builds the Delaunay graph on the union R ∪ E, extracts its connected components, and applies existing geometric evaluation scores to analyze them. We experimentally validate DCA on a variety of setups by evaluating contrastive representations (Section 4.1), generative models (Section 4.2) and supervised models (Section 4.3). Furthermore, we exploit the natural neighbourhood structure of Delaunay graphs to evaluate a single query representation.
This is crucial in applications with a continuous stream of data, for example, in interactions of an intelligent agent with its environment, or in the assessment of the visual quality of individual images generated by a generative model, as also explored by Kynkäänniemi et al. (2019). In these cases, existing representation spaces need to be updated either by embedding novel query points or by distinguishing them from previously seen ones. In Delaunay graphs, this translates to analyzing the newly added edges of the given query point. We demonstrate various possible analyses in Section 4 using the aforementioned experimental setups.

2 STATE-OF-THE-ART METHODS AND THEIR LIMITATIONS

We review the state-of-the-art methods for evaluation of learned data representations, namely GS (Khrulkov & Oseledets, 2018), IPR (Kynkäänniemi et al., 2019) and GeomCA (Poklukar et al., 2021), which compare topological and geometrical properties of an evaluation set E with a reference set R representing the true underlying data manifold. We discuss differences in their manifold approximations, visualized in Figure 2, as well as the informativeness of their scores. The pioneering method in this area, GS, constructs witness simplicial complexes on randomly sampled subsets of R and E (panel (a)), and compares their connected components using tools of computational topology (Zomorodian & Carlsson, 2004). The result is an average over several iterations, summarized either in the form of histograms, which are cumbersome for quantitative comparison, or as a single un-normalized score, which is in many cases not sufficiently informative. Moreover, GS depends on four hyperparameters, which can be difficult to understand and tune for practitioners unfamiliar with computational topology. In contrast, IPR obtains manifold approximations by enlarging each point in R (or E) with a hypersphere of radius equal to the distance to its kNN in that set (panel (b)).
It defines two scores: precision PI, which counts the number of E points contained in the approximated R manifold, and vice versa for recall RI. While the method depends on only one hyperparameter k, it is highly affected by outliers, which induce overly large spheres, and by dense areas of the space, which yield too conservative a coverage, as visualized in panel (b). Moreover, it is the only method expecting R and E to be of equal size, thus often requiring subsampling of one of them. Both GS and IPR have been primarily developed to assess generative adversarial networks (GANs) (Goodfellow et al., 2014). The most recent and generally applicable method, GeomCA, extends GS and IPR in two ways: i) it provides functionality to analyze individual connected components of the approximated manifold, enabling one to examine local areas of representation spaces where inconsistencies arise, and ii) it characterizes the captured geometry in four global scores: precision PG and recall RG, which are similar to IPR, as well as network consistency cG and network quality qG, which are also the basis of the GeomCA local evaluation scores (see Section 3 for a recap). GeomCA estimates the manifolds using ε-proximity graphs, where the hyperparameter ε defines the maximum distance between two points connected by an edge and is estimated from distances among points in R. However, in practice, representations often form clusters of different shape and density, which makes it impossible to select a single value of ε that adequately captures such variations (see examples in panel (c)). Moreover, GeomCA reduces the number of points by performing geometric sparsification that extracts a subset of points from each of R and E that are pairwise at least distance δ apart, where δ is estimated from ε. While the authors argue that this step does not affect their scores as it preserves the topology, we show that it nevertheless can bias the manifold estimation of R and E.
An example is illustrated in the bottom component of panels (c) and (d), where the sparsification removes a large portion of the dense E subset and artificially increases the precision. We demonstrate the occurrence of this scenario in the evaluation of a GAN model in Section 4.2. Our DCA framework utilizes Delaunay graphs (panel (e)) to address the discussed limitations in the manifold approximations of the existing methods, and employs the evaluation scores introduced by Poklukar et al. (2021) to maximize its informativeness. Moreover, it extends the existing methods by additionally providing a general framework for quality evaluation of single query representations.

3 METHOD

We propose the Delaunay Component Analysis (DCA) algorithm for evaluation of data representations, consisting of three parts: i) manifold approximation, which approximates the Delaunay graph on the given sets of representations, ii) component distillation, which distills the graph into connected components, and lastly iii) component evaluation, which outputs the evaluation scores summarizing the geometry of the data. We provide an outline of our DCA framework in Algorithm 1 found in Appendix A. Moreover, by exploiting the nature of Delaunay graphs, DCA can be efficiently implemented (Section 3.2) and extended for evaluation of individual representations (Section 3.1).

Phase 1: Manifold approximation. As mentioned in Section 1, the unique property of Delaunay graphs is the definition of the neighbouring points that are connected by an edge. For example, in an ε-graph, two points are adjacent if they are at most ε distance apart. In a kNN graph, a point is connected to all points that are closer than its k-th smallest distance to any other point. In contrast, in a Delaunay graph (Figure 1), a point is adjacent to any point that is its spatial neighbour, regardless of the actual distance between them or the number of its other spatial neighbours.
We refer to such spatial neighbours as natural neighbours of a point and rigorously define them through Voronoi cells (depicted as dashed lines in Figure 1) in the following.

Definition 3.1 Given a set W ⊂ R^N, we define the Voronoi cell Cell(z) associated to a point z ∈ W as the set of points in R^N for which z is the closest among W:

Cell(z) = { x ∈ R^N : ‖x − z‖ ≤ ‖x − z_i‖ ∀ z_i ∈ W }.

The Delaunay graph G_D(W) = (V, E) built on the set V = W is then defined by connecting points whose Voronoi cells intersect, i.e.,

E = { (z_i, z_j) ∈ W × W | Cell(z_i) ∩ Cell(z_j) ≠ ∅, z_i ≠ z_j }.

We consider a reference set R = {z_i}_{i=1}^{n_R} ⊂ R^N and an evaluation set E = {z_i}_{i=1}^{n_E} ⊂ R^N of data representations with R ≠ E, and approximate the Delaunay graph G_D = G_D(R ∪ E). By Definition 3.1, edges in G_D correspond to points on the boundaries of Voronoi cells, which are obtained using the Monte Carlo based sampling algorithm presented by Polianskii & Pokorny (2019) (see Appendix A for further details). The process, visualized in Figure 3, is based on sampling rays originating from each z ∈ R ∪ E and finding their intersections with the boundary of its Voronoi cell Cell(z). This allows us to reconstruct G_D via subgraph approximation, and can additionally be exploited for memory optimizations, which we present in Section 3.2. Due to the sampling, the number of found edges directly depends on the number T of rays sampled from each z. However, as we show in ablation studies in Appendices B.1 and B.2, our evaluation framework is stable with respect to variations in T. Next, we discuss the distillation of G_D into connected components.

Phase 2: Component distillation. Since edges in G_D are obtained among natural neighbours, G_D consists of a single connected component uniting regions of points of different density or shape.
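In low dimensions, the Delaunay graph of Definition 3.1 can be computed exactly rather than sampled; the sketch below uses scipy's Delaunay triangulation as a 2D illustration only. Note this is not the paper's high-dimensional procedure, which relies on the Monte Carlo ray-sampling scheme of Polianskii & Pokorny (2019); the Gaussian R and E sets here are likewise illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(points: np.ndarray) -> set:
    """Edge set of the Delaunay graph G_D on `points` (exact, low-dim only)."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:  # each simplex is a triangle in 2D
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                edges.add(tuple(sorted((int(simplex[i]), int(simplex[j])))))
    return edges

rng = np.random.default_rng(0)
R = rng.normal(0.0, 1.0, size=(50, 2))     # toy reference representations
E = rng.normal(0.5, 1.0, size=(50, 2))     # toy evaluation representations
edges = delaunay_edges(np.vstack([R, E]))  # Delaunay graph on R ∪ E
```

Every point is a vertex of the triangulation, so the resulting graph is a single connected component over R ∪ E, matching the starting point of Phase 2.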
These can be distinguished by removing large edges (depicted as transparent edges in Figure 1), or equivalently, by finding clusters of points having similar natural neighbours (depicted as opaque edges in Figure 1). We distill G_D into connected components by adapting the state-of-the-art hierarchical clustering algorithm HDBSCAN (McInnes et al., 2017), summarized in Appendix A. We apply the part of HDBSCAN that extracts connected components {G_i} from the minimum spanning tree¹ MST(G_D). We emphasise that applying HDBSCAN directly on MST(G_D) efficiently bypasses the calculation of a complete pairwise distance matrix² of size n_R + n_E, which becomes a computational burden for large R and E sets. Such a calculation is an integral part of HDBSCAN, performed to reduce the sensitivity of the method to noise or outliers, and can be omitted in the case of Delaunay graphs due to their natural neighbourhood structure (see Appendix A for further details). In this way, our modification of HDBSCAN inherits only one of its original hyperparameters, the minimum cluster size mcs, determining the minimum number of points needed for a set of points to form a cluster. This parameter is intuitive to tune and can be flexibly adjusted depending on the nature of the application. In our ablation studies reported in Appendices B.1 and B.2, we show that DCA is stable with respect to variations in mcs. At the end of this phase, we obtain the distilled Delaunay graph G_DD = ⊔_i G_i of G_D, which we denote simply by G when no confusion arises. Lastly, we analyze the components of G as summarized below.

Phase 3: Component evaluation. We analyze the connected components G_i of G using the local and global evaluation scores introduced by Poklukar et al. (2021), which we recap below.
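The Phase 2 distillation can be caricatured as: build MST(G_D), cut long edges, and keep components with at least mcs points. The sketch below replaces the adapted HDBSCAN hierarchy with a fixed length threshold, so `length_threshold` and the toy two-cluster data are illustrative assumptions rather than the paper's procedure; only the `mcs` filter mirrors DCA's actual hyperparameter.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def distill(points, edges, length_threshold, mcs=3):
    """Toy stand-in for Phase 2: MST of the graph, prune long edges,
    return component labels (-1 marks points in components smaller than mcs)."""
    n = len(points)
    rows, cols, w = [], [], []
    for i, j in edges:
        rows.append(i); cols.append(j)
        w.append(float(np.linalg.norm(points[i] - points[j])))
    graph = csr_matrix((w, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph).toarray()
    mst[mst > length_threshold] = 0.0            # cut long MST edges
    _, labels = connected_components(csr_matrix(mst), directed=False)
    keep = {c for c in np.unique(labels) if (labels == c).sum() >= mcs}
    return np.array([l if l in keep else -1 for l in labels])
```

On two well-separated clusters connected through a single long MST edge, the cut splits the graph into exactly two components, each surviving the mcs filter.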
Following their notation, we denote by |G|_V and |G|_E the cardinalities of the vertex and edge sets of a graph G = (V, E), respectively, and by G^Q = (V|_Q, E|_{Q×Q}) ⊂ G its restriction to a set Q ⊂ V. We start by introducing the two local scores: component consistency and quality. Intuitively, a component G_i attains high consistency if it is equally represented by points from R and E, i.e., if |G_i^R|_V ≈ |G_i^E|_V, where G_i^R, G_i^E denote the restrictions of G_i to R and E. Similarly, G_i obtains a high quality score if points from R and E are also well mixed, which is measured in terms of edges connecting R and E: the points are geometrically well aligned if the number of homogeneous edges among points within each of the sets, |G_i^R|_E and |G_i^E|_E, is small compared to the number of heterogeneous edges connecting representations from R and E. This is rigorously defined as follows:

Definition 3.2 (Local Evaluation Scores, Poklukar et al. (2021)) Consistency c and quality q of a component G_i ⊂ G are defined as the ratios

c(G_i) = 1 − | |G_i^R|_V − |G_i^E|_V | / |G_i|_V   and   q(G_i) = 1 − ( |G_i^R|_E + |G_i^E|_E ) / |G_i|_E if |G_i|_E ≥ 1, and q(G_i) = 0 otherwise,   (1)

respectively. Moreover, G_i is called consistent if c(G_i) > η_c and of high quality if q(G_i) > η_q for given thresholds η_c, η_q ∈ [0, 1) ⊂ R. A consistent component of high quality as determined by η_c, η_q is called a fundamental component. The union of fundamental components is denoted by F = F(G, η_c, η_q) and indexed by the subscript f, i.e., we write G_f ∈ F. The thresholds η_c, η_q are designed to enable a flexible definition of fundamental components and are assumed to be set depending on the application and available prior knowledge.

¹ The minimum spanning tree of a graph is a tree minimizing the total edge length and connecting all vertices.
² The matrix represents the mutual reachability distance calculated among each pair of points with respect to the minimum samples parameter.
By examining the proportions of R and E points contained in F, we obtain the first two global scores: precision and recall, respectively. Two more global scores, network consistency and network quality, used to measure global imbalances and misalignment between R and E, can simply be derived by extending Definition 3.2 to the entire graph G. In summary, we consider the following global evaluation scores:

Definition 3.3 (Global Evaluation Scores, Poklukar et al. (2021)) We define network consistency c(G) and network quality q(G) as in Definition 3.2, as well as precision P and recall R as

P = |F^E|_V / |G^E|_V   and   R = |F^R|_V / |G^R|_V,   (2)

respectively, where F^R, F^E denote the restrictions of F = F(G, η_c, η_q) to the sets R and E.
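Given the component membership of every point and the set of fundamental components, Equation (2) reduces to counting. In this sketch, `components`, `labels`, and `fundamental` are illustrative data structures of our choosing: a point-to-component map, a point-to-'R'/'E' map, and the ids of components passing the η_c, η_q thresholds.

```python
def precision_recall(components, labels, fundamental):
    """Global precision P and recall R of Definition 3.3.
    P = fraction of E points in fundamental components; R likewise for R points."""
    in_F = [p for p, comp in components.items() if comp in fundamental]
    n_E = sum(1 for p in components if labels[p] == 'E')
    n_R = len(components) - n_E
    precision = sum(1 for p in in_F if labels[p] == 'E') / n_E
    recall = sum(1 for p in in_F if labels[p] == 'R') / n_R
    return precision, recall

# Four points in two components; only component 0 is fundamental.
comps = {0: 0, 1: 0, 2: 1, 3: 1}
labs = {0: 'R', 1: 'E', 2: 'R', 3: 'E'}
p, r = precision_recall(comps, labs, fundamental={0})
# p == 0.5 and r == 0.5: half of E and half of R lie in fundamental components
```

When every component is fundamental, both scores reach 1, mirroring the case where E fully mimics the manifold captured by R.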
The paper presents a novel approach to analyzing and comparing feature manifolds, e.g. from a learned embedding. The main idea is to construct a graph from a set of discrete samples to approximate the underlying, continuous manifold. The graph embedding can then be used to assess the similarity of a training and test set embedding, defined as their overlap and interconnectedness. Related existing methods construct such graphs via k-nearest neighbor or epsilon proximity graphs, whereas the proposed method uses Delaunay graphs.
This paper proposes a new method to assess the quality of learned data representations. Recent works have proposed to evaluate data representations by looking at the geometric and topological alignment of a set of evaluation representations E and a reference set of representations R. The key idea in this paper consists in improving the approximation of the data manifold using Delaunay neighbourhood graphs (DCA). The suggested method is meant to work better than existing algorithms (GS, IPR, GeomCA) in heterogeneous settings with outliers and/or varying cluster densities. The DCA algorithm then amounts to realizing the Delaunay graph on the set $R\cup E$ relying on a Monte Carlo method proposed in Polianskii & Pokorny (2019), distilling components using the hierarchical clustering algorithm HDBSCAN (McInnes et al., 2017), and finally adopting the metrics introduced in Poklukar et al. (2021). Options to deal with query point extensions and pruning of the Delaunay graphs are discussed. The proposed DCA method is evaluated on contrastive learning models trained with the NT-Xent contrastive loss, on the generation capabilities of a StyleGAN, and on the VGG16 supervised model pretrained on the ImageNet dataset.
Delaunay Component Analysis for Evaluation of Data Representations
1 INTRODUCTION. Quality evaluation of learned data representations is gaining attention in the machine learning community due to the booming development of representation learning techniques. One common approach is to assess representation quality based on performance on a pre-designed downstream task (Bevilacqua et al., 2021; Li et al., 2020). Typically, a classification problem is used to evaluate either the ability of a model to recover labels of raw inputs, or the transferability of its representations to other domains, as done in state-of-the-art unsupervised representation learning methods (Chen et al., 2020b; Ermolov et al., 2021). However, in many scenarios such a straightforward downstream classification task cannot be defined, for instance, because it does not represent the nature of the application or due to the scarcity of labeled data, as often occurs in robotics (Chamzas et al., 2021; Lippi et al., 2020). In these scenarios, representations are commonly evaluated on hand-crafted downstream tasks, e.g., specific robotics tasks. However, these are time consuming to design, potentially bias the evaluation procedure, and consequently hinder generalization of representations across different tasks. Recently, more general evaluation methods such as Geometry Score (GS) (Khrulkov & Oseledets, 2018), Improved Precision and Recall (IPR) (Kynkäänniemi et al., 2019) and Geometric Component Analysis (GeomCA) (Poklukar et al., 2021) have been proposed. (∗Correspondence to Petra Poklukar, poklukar@kth.se.) These methods analyze global geometric and topological properties of representation spaces instead of relying on specific pre-designed downstream tasks. They assume that a set of evaluation representations E is of high quality if it closely mimics the structure of the true data manifold captured by a reference set of representations R.
This reasoning implies that R and E must have similar geometric and topological properties, such as connected components and their number and size, which are extracted from various approximations of the data manifolds corresponding to R and E. For example, GS and GeomCA estimate the manifolds using simplicial complexes and proximity graphs, respectively, while IPR leverages a k-nearest neighbour (kNN) based approximation. However, as we discuss in Section 2, none of these approaches provides a reliable manifold estimate in complex arrangements of R and E, for instance, when points form clusters of varying shape and density or in the presence of outliers. Moreover, the informativeness of the scores introduced by these methods is limited. In this work, we address the impediments to evaluation of learned representations arising from poor manifold approximations by relying on a more natural estimate using Delaunay graphs. As seen in Figure 1, edges (solid lines) in a Delaunay graph connect spatial neighbours and thus vary in length. In this way, they naturally capture local changes in the density of the representation space and therefore detect outliers more reliably, all without depending on hyperparameters. We propose an evaluation framework called Delaunay Component Analysis (DCA) which builds the Delaunay graph on the union R ∪ E, extracts its connected components, and applies existing geometric evaluation scores to analyze them. We experimentally validate DCA on a variety of setups by evaluating contrastive representations (Section 4.1), generative models (Section 4.2) and supervised models (Section 4.3). Furthermore, we exploit the natural neighbourhood structure of Delaunay graphs to evaluate a single query representation.
This is crucial in applications with a continuous stream of data, for example, in interactions of an intelligent agent with its environment or in the assessment of the visual quality of individual images generated by a generative model, as also explored by Kynkäänniemi et al. (2019). In these cases, existing representation spaces need to be updated either by embedding novel query points or by distinguishing them from previously seen ones. In Delaunay graphs, this translates to analyzing the edges newly added to the given query point. We demonstrate various possible analyses in Section 4 using the aforementioned experimental setups. 2 STATE-OF-THE-ART METHODS AND THEIR LIMITATIONS. We review the state-of-the-art methods for evaluation of learned data representations, namely GS (Khrulkov & Oseledets, 2018), IPR (Kynkäänniemi et al., 2019) and GeomCA (Poklukar et al., 2021), which compare topological and geometrical properties of an evaluation set E with a reference set R representing the true underlying data manifold. We discuss differences in their manifold approximations, visualized in Figure 2, as well as the informativeness of their scores. The pioneering method in this area, GS, constructs witness simplicial complexes on randomly sampled subsets of R and E (panel (a)), and compares their connected components using tools of computational topology (Zomorodian & Carlsson, 2004). The result is an average over several iterations, summarized either in the form of histograms, which are cumbersome for quantitative comparison, or as a single un-normalized score, which is in many cases not sufficiently informative. Moreover, GS depends on four hyperparameters, which can be difficult to understand and tune for practitioners unfamiliar with computational topology. In contrast, IPR obtains manifold approximations by enlarging each point in R (or E) with a hypersphere of radius equal to the distance to its kNN in that set (panel (b)).
It defines two scores: precision P_I, which counts the number of E points contained on the approximated R manifold, and vice versa for recall R_I. While the method depends on only one hyperparameter k, it is highly affected by outliers, which induce overly large spheres, and by dense areas of the space, which yield too conservative coverage, as visualized in panel (b). Moreover, it is the only method expecting R and E to be of equal size, thus often requiring subsampling of one of them. Both GS and IPR have been primarily developed to assess generative adversarial networks (GANs) (Goodfellow et al., 2014). The most recent and generally applicable method, GeomCA, extends GS and IPR in two ways: i) it provides functionality to analyze individual connected components of the approximated manifold, hence enabling one to examine local areas of representation spaces where inconsistencies arise, and ii) it characterizes the captured geometry in four global scores: precision P_G and recall R_G, which are similar to IPR, as well as network consistency c_G and network quality q_G, which are also the basis of the GeomCA local evaluation scores (see Section 3 for a recap). GeomCA estimates the manifolds using ε-proximity graphs, where the hyperparameter ε defines the maximum distance between two points connected by an edge and is estimated from distances among points in R. However, in practice, representations often form clusters of different shape and density, which makes it impossible to select a single value of ε that adequately captures such variations (see examples in panel (c)). Moreover, GeomCA reduces the number of points by performing geometric sparsification that extracts a subset of points from each of R and E that are pairwise at least distance δ apart, where δ is estimated from ε. While the authors argue that this step does not affect their scores as it preserves the topology, we show that it nevertheless can bias the manifold estimation of R and E.
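For intuition, the kNN-sphere construction behind the IPR precision score described above can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not the authors' implementation; the function name and toy data are ours:

```python
import numpy as np

def ipr_precision(R, E, k=3):
    """Sketch of Improved Precision (Kynkaanniemi et al., 2019).

    A point e in E counts as covered if it lies inside at least one
    hypersphere centred at some r in R, with radius equal to r's
    distance to its k-th nearest neighbour within R.
    """
    # pairwise distances within R; the smallest entry per row is 0 (self)
    dRR = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    radii = np.sort(dRR, axis=1)[:, k]      # k-th NN distance, excluding self
    dER = np.linalg.norm(E[:, None, :] - R[None, :, :], axis=-1)
    covered = (dER <= radii[None, :]).any(axis=1)
    return covered.mean()
```

Recall R_I is obtained by swapping the roles of R and E. The sketch also makes the failure modes above concrete: an outlier in R inflates its own radius and covers distant E points, while dense regions of R produce tiny radii that reject nearby E points.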
An example is illustrated in the bottom component of panels (c) and (d), where the sparsification removes a large portion of the dense E subset and artificially increases the precision. We demonstrate the occurrence of this scenario in the evaluation of a GAN model in Section 4.2. Our DCA framework utilizes Delaunay graphs (panel (e)) to address the discussed limitations in manifold approximations of the existing methods, and employs the evaluation scores introduced by Poklukar et al. (2021) to maximize its informativeness. Moreover, it extends the existing methods by additionally providing a general framework for quality evaluation of single query representations. 3 METHOD. We propose the Delaunay Component Analysis (DCA) algorithm for evaluation of data representations, consisting of three parts: i) manifold approximation, which approximates the Delaunay graph on the given sets of representations, ii) component distillation, which distills the graph into connected components, and lastly iii) component evaluation, which outputs the evaluation scores summarizing the geometry of the data. We provide an outline of our DCA framework in Algorithm 1 found in Appendix A. Moreover, by exploiting the nature of Delaunay graphs, DCA can be efficiently implemented (Section 3.2) and extended for evaluation of individual representations (Section 3.1). Phase 1: Manifold approximation. As mentioned in Section 1, the unique property of Delaunay graphs is their definition of the neighbouring points that are connected by an edge. For example, in an ε-graph, two points are adjacent if they are at most ε distance apart. In a kNN graph, a point is connected to all points that are closer than the k-th smallest distance of that point to any other. In contrast, in a Delaunay graph (Figure 1), a point is adjacent to any point that is its spatial neighbour, regardless of the actual distance between them or the number of its other spatial neighbours.
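The Delaunay neighbourhood just described can be computed exactly with scipy in low dimensions; the paper instead relies on the Monte Carlo ray-sampling approximation of Polianskii & Pokorny (2019), which scales to high-dimensional representation spaces. The helper below (our naming) only extracts the edge set from scipy's exact triangulation, as a stand-in for intuition:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_graph(points):
    """Edge set of the Delaunay graph over a point set.

    scipy computes the exact triangulation, feasible in low dimensions;
    DCA approximates the same graph in high dimensions via Monte Carlo
    ray sampling. Each simplex contributes all of its vertex pairs.
    """
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:
        for i in range(len(simplex)):
            for j in range(i + 1, len(simplex)):
                a, b = simplex[i], simplex[j]
                edges.add((min(a, b), max(a, b)))
    return edges

# In DCA the graph is built on the union R ∪ E; here indices < len(R) mark R points.
R = np.random.RandomState(0).randn(20, 2)
E = np.random.RandomState(1).randn(15, 2)
G = delaunay_graph(np.vstack([R, E]))
```

Note that, unlike ε- or kNN-graphs, no hyperparameter enters this construction.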
We refer to such spatial neighbours as natural neighbours of a point and rigorously define them through Voronoi cells (depicted as dashed lines in Figure 1) in the following. Definition 3.1. Given a set W ⊂ R^N, we define the Voronoi cell Cell(z) associated to a point z ∈ W as the set of points in R^N for which z is the closest among W:

Cell(z) = { x ∈ R^N | ‖x − z‖ ≤ ‖x − z_i‖ for all z_i ∈ W }.

The Delaunay graph G_D(W) = (V, E) built on the set V = W is then defined by connecting points whose Voronoi cells intersect, i.e.,

E = { (z_i, z_j) ∈ W × W | Cell(z_i) ∩ Cell(z_j) ≠ ∅, z_i ≠ z_j }.

We consider a reference set R = {z_i}_{i=1}^{n_R} ⊂ R^N and an evaluation set E = {z_i}_{i=1}^{n_E} ⊂ R^N of data representations with R ≠ E, and approximate the Delaunay graph G_D = G_D(R ∪ E). By Definition 3.1, edges in G_D correspond to points on the boundary of Voronoi cells, which are obtained using the Monte Carlo sampling algorithm presented by Polianskii & Pokorny (2019) (see Appendix A for further details). The process, visualized in Figure 3, is based on sampling rays originating from each z ∈ R ∪ E and finding their intersection with the boundary of its Voronoi cell Cell(z). This allows reconstructing G_D via subgraph approximation, and can additionally be exploited for memory optimizations, which we present in Section 3.2. Due to the sampling, the number of found edges directly depends on the number T of rays sampled from each z. However, as we show in the ablation studies in Appendix B.1 and B.2, our evaluation framework is stable with respect to variations in T. Next, we discuss the distillation of G_D into connected components. Phase 2: Component distillation. Since edges in G_D are obtained among natural neighbours, G_D consists of a single connected component uniting regions of points formed in different densities or shapes.
These can be distinguished by removing long edges (depicted as transparent edges in Figure 1), or equivalently, by finding clusters of points having similar natural neighbours (depicted as opaque edges in Figure 1). We distill G_D into connected components by adapting the state-of-the-art hierarchical clustering algorithm HDBSCAN (McInnes et al., 2017), summarized in Appendix A. We apply the part of HDBSCAN that extracts connected components {G_i} from the minimum spanning tree MST(G_D). We emphasise that applying HDBSCAN directly on MST(G_D) efficiently bypasses the calculation of a complete pairwise distance matrix of size n_R + n_E, which becomes a computational burden for large R and E sets. Such a calculation is an integral part of HDBSCAN, performed to reduce the sensitivity of the method to noise or outliers, and can be omitted in the case of Delaunay graphs due to their natural neighbourhood structure (see Appendix A for further details). In this way, our modification of HDBSCAN inherits only one of its original hyperparameters, the minimum cluster size parameter mcs, determining the minimum number of points needed for a set of points to form a cluster. This parameter is intuitive to tune and can be flexibly adjusted depending on the nature of the application. In our ablation studies reported in Appendix B.1 and B.2, we show that DCA is stable with respect to variations in mcs. At the end of this phase, we obtain the distilled Delaunay graph G_DD = ⊔_i G_i of G_D, which we denote simply by G when no confusion arises. Lastly, we analyze the components of G as summarized below. Phase 3: Component evaluation. We analyze the connected components G_i of G using the local and global evaluation scores introduced by Poklukar et al. (2021), which we recap below.
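Phase 2 can be sketched with a heavily simplified stand-in for the HDBSCAN-based procedure: build the MST of the Delaunay graph and cut edges much longer than a typical edge, then read off connected components. The fixed median-based threshold below replaces HDBSCAN's hierarchy extraction and the mcs parameter, and is purely illustrative (function name and threshold are ours):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def distill_components(points, edges, cut_factor=2.0):
    """Toy component distillation: MST of the Delaunay graph, then cut
    edges longer than cut_factor * median MST edge length.

    The paper instead runs the component-extraction stage of HDBSCAN on
    MST(G_D); this fixed cut is a hedged stand-in, not the actual algorithm.
    """
    n = len(points)
    rows, cols, weights = [], [], []
    for i, j in edges:
        rows.append(i)
        cols.append(j)
        weights.append(np.linalg.norm(points[i] - points[j]))
    graph = csr_matrix((weights, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph).tocoo()
    keep = mst.data <= cut_factor * np.median(mst.data)
    pruned = csr_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=(n, n))
    n_comp, labels = connected_components(pruned, directed=False)
    return n_comp, labels
```

As in the paper's pipeline, working on the MST means no n × n distance matrix is ever materialized beyond the sparse edge list of G_D.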
Following their notation, we denote by |G|_V and |G|_E the cardinalities of the vertex and edge sets of a graph G = (V, E), respectively, and by G_Q = (V|_Q, E|_{Q×Q}) ⊂ G its restriction to a set Q ⊂ V. We start by introducing the two local scores: component consistency and quality. Intuitively, a component G_i attains high consistency if it is equally represented by points from R and E, i.e., if |G_i^R|_V ≈ |G_i^E|_V, where G_i^R, G_i^E denote the restrictions of G_i to R and E. Similarly, G_i obtains a high quality score if points from R and E are also well mixed, which is measured in terms of edges connecting R and E: the points are geometrically well aligned if the number of homogeneous edges among points within each of the sets, |G_i^R|_E and |G_i^E|_E, is small compared to the number of heterogeneous edges connecting representations from R and E. This is rigorously defined as follows. Definition 3.2 (Local Evaluation Scores, Poklukar et al. (2021)). Consistency c and quality q of a component G_i ⊂ G are defined as the ratios

c(G_i) = 1 − | |G_i^R|_V − |G_i^E|_V | / |G_i|_V and q(G_i) = 1 − ( |G_i^R|_E + |G_i^E|_E ) / |G_i|_E if |G_i|_E ≥ 1, with q(G_i) = 0 otherwise, (1)

respectively. Moreover, G_i is called consistent if c(G_i) > η_c and of high quality if q(G_i) > η_q for given thresholds η_c, η_q ∈ [0, 1) ⊂ R. A consistent component of high quality as determined by η_c, η_q is called a fundamental component. The union of fundamental components is denoted by F = F(G, η_c, η_q) and indexed by the subscript f, i.e., we write G_f ∈ F. The thresholds η_c, η_q are designed to enable a flexible definition of fundamental components and are assumed to be set depending on the application and available prior knowledge. (Footnote 1: the minimum spanning tree of a graph is a tree minimizing the total edge length and connecting all vertices. Footnote 2: the matrix represents the mutual reachability distance calculated among each pair of points with respect to the minimum samples parameter.)
By examining the proportion of R and E points contained in F, we obtain the first two global scores: precision and recall, respectively. Two more global scores, network consistency and network quality, used to measure global imbalances and misalignment between R and E, can be simply derived by extending Definition 3.2 to the entire graph G. In summary, we consider the following global evaluation scores. Definition 3.3 (Global Evaluation Scores, Poklukar et al. (2021)). We define network consistency c(G) and network quality q(G) as in Definition 3.2, as well as precision P and recall R as

P = |F^E|_V / |G^E|_V and R = |F^R|_V / |G^R|_V, (2)

respectively, where F^R, F^E denote the restrictions of F = F(G, η_c, η_q) to the sets R and E.
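Once components are available as vertex and edge lists, the scores of Definitions 3.2 and 3.3 reduce to simple counting. The sketch below mirrors Eqs. (1) and (2); the data layout (an `origin` map tagging each vertex as 'R' or 'E') and the default thresholds are our own choices for illustration:

```python
from collections import Counter

def local_scores(vertices, edges, origin):
    """Consistency c and quality q of one component (Definition 3.2).

    origin[v] is 'R' or 'E'. Homogeneous edges join two vertices of the
    same origin; heterogeneous edges join an R vertex with an E vertex.
    """
    counts = Counter(origin[v] for v in vertices)
    c = 1 - abs(counts['R'] - counts['E']) / len(vertices)
    if len(edges) >= 1:
        homogeneous = sum(origin[i] == origin[j] for i, j in edges)
        q = 1 - homogeneous / len(edges)
    else:
        q = 0.0
    return c, q

def precision_recall(components, origin, eta_c=0.5, eta_q=0.5):
    """Global precision P and recall R (Definition 3.3, Eq. (2)).

    components is a list of (vertices, edges) pairs; a component is
    fundamental if c > eta_c and q > eta_q.
    """
    fund_R = fund_E = total_R = total_E = 0
    for vertices, edges in components:
        n_R = sum(origin[v] == 'R' for v in vertices)
        n_E = len(vertices) - n_R
        total_R += n_R
        total_E += n_E
        c, q = local_scores(vertices, edges, origin)
        if c > eta_c and q > eta_q:          # fundamental component
            fund_R += n_R
            fund_E += n_E
    return fund_E / total_E, fund_R / total_R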
This paper explores a new method (DCA) to evaluate the quality of learned data representations. The goal is to score representations from sets R (reference) and E (evaluation) using geometric and topological properties (such as connected components) of the representation spaces. Put simply, if the local geometry and overall topology of R and E are similar, then the representations are considered consistent and aligned (hence yielding overall high precision and recall). The authors primarily compare with three methods, GS, IPR and GeomCA (including a sparse variant), and argue that in scenarios with prevalent outliers and varying component densities of the representation spaces, such methods typically yield non-informative and suboptimal scores for comparing representations. To that end, the method proposed in this paper aims to alleviate these drawbacks and comprises three steps: 1) manifold estimation, 2) component distillation, and 3) component evaluation. The main contribution is the use of Delaunay graphs in step 1, as opposed to the ε-neighbour or kNN graphs used in prior methods. The authors claim that this choice of graph construction, along with the distillation procedure, leads to identifying consistent and high-quality connected components and thus more stable results. In addition, the authors propose an 'out-of-sample' variant of their method to handle individual queries. The authors demonstrate their metric in three different setups: (A) contrastive learning on the synthetic Chamzas et al. (2021) images dataset, (B) StyleGAN, and (C) the representation space of VGG16. Results, especially on (A), show that the proposed evaluation metric is better suited, with accurate recognition of mode collapse and mode discovery.
Towards the Memorization Effect of Neural Networks in Adversarial Training
1 INTRODUCTION. It is evident from recent studies that the memorization effect (or benign overfitting) (Feldman, 2020; Feldman & Zhang, 2020; Bartlett et al., 2020; Chatterji & Long, 2020; Muthukumar et al., 2020) is one necessary factor for overparametrized deep neural networks (DNNs) to achieve close-to-optimal generalization error. From the empirical perspective, the works (Feldman, 2020; Feldman & Zhang, 2020) suggest that modern benchmark datasets, such as CIFAR10, CIFAR100 (Krizhevsky et al., 2009) and ImageNet (Krizhevsky et al., 2012), have very diverse data distributions and in particular contain a large fraction of “atypical” samples. These atypical samples are both visually and statistically very different from other samples in their labeled class. For example, the images in the class “bird” may span a variety of sub-populations or species, with many (typical) samples in the main sub-population and other (atypical) samples in less frequent and distinct sub-populations. Since these atypical samples deviate from the main sub-population, DNNs can only fit them by “memorizing” their labels. Memorizing/fitting these atypical samples does not hurt model performance on typical samples, and can boost DNNs' accuracy by correctly classifying the atypical samples appearing in the test set. Similar to classification models trained via empirical risk minimization (ERM) algorithms, adversarial training methods (Madry et al., 2017; Kurakin et al., 2016) are also devised to fit the whole training dataset. Specifically, adversarial training minimizes the model's error against adversarial perturbations (Goodfellow et al., 2014; Szegedy et al., 2013) by fitting the model on manually generated adversarial examples of all training data.
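The inner maximization of adversarial training described above is typically solved with projected gradient descent (PGD). A minimal NumPy sketch for a logistic model is given below; the model, loss and toy setup are ours for illustration, and only the l∞ 8/255 budget matches the paper's setting:

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=8 / 255, alpha=2 / 255, steps=10):
    """Sketch of an l-infinity PGD attack (Madry et al., 2017) on a
    logistic model sigmoid(w.x + b).

    Adversarial training fits the model on the perturbed examples this
    inner loop produces; the DNN setup of the paper is replaced here by
    a toy linear model.
    """
    x_adv = x.copy()
    for _ in range(steps):
        margin = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-margin))        # sigmoid prediction
        grad = (p - y) * w                       # d(cross-entropy) / dx
        x_adv = x_adv + alpha * np.sign(grad)    # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps) # project to the l-inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)         # keep a valid image range
    return x_adv
```

The sign step and the projection back into the ε-ball are what distinguish PGD from a single-step FGSM perturbation.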
Although adversarial training can fit and memorize all training data as well as their adversarially perturbed counterparts, the resulting models suffer from both poor clean accuracy and poor adversarial accuracy (or robustness)¹ on the test set (Tsipras et al., 2018; Schmidt et al., 2018). (¹Clean accuracy: a model's accuracy on unperturbed samples. Adversarial accuracy (robustness): a model's accuracy on adversarially perturbed samples. Without loss of generality, this paper discusses adversarial accuracy under the l∞ 8/255 PGD attack (Madry et al., 2017).) A recent study (Rice et al., 2020) indicates that, during the adversarial training process, a model's test adversarial accuracy can even keep decreasing as it reaches higher training adversarial accuracy (during the fine-tuning epochs). Thus, it is natural to ask: what is the effect of memorization in adversarial training? In particular, can memorizing atypical samples and their adversarial counterparts benefit the model's accuracy and robustness? To answer this question, we first conduct preliminary studies to explore whether memorizing atypical samples in adversarial training can benefit DNNs' test performance, especially on test atypical samples. In Section 3.1, we implement PGD adversarial training (Madry et al., 2017) on CIFAR100 with WideResNet models (He et al., 2016) and fine-tune them until achieving the optimal training performance. From the results in Section 3.1, we observe that memorization in adversarial training can only benefit the clean accuracy of test atypical samples. When the DNNs gradually fit/memorize more atypical samples, they can finally achieve a fair clean accuracy close to ∼40% on the test atypical set. However, the adversarial accuracy on the test atypical set remains constantly low (∼10%) during the whole training process, even though the models can fit almost all atypical (adversarial) training samples.
Based on the theoretical study of Schmidt et al. (2018), adversarial robustness is hard to generalize, especially when the training data size is limited. Since every single atypical sample is distinct from the main sub-population and rarely appears in the training set, the amount of data relevant to each specific atypical sample is very low. Thus, its adversarial robustness can be extremely hard to generalize. Notably, for datasets such as CIFAR100, the entire atypical set covers at least ∼40% of the whole dataset. Therefore, completely failing on atypical samples could be one of the key reasons for the poor robustness generalization of DNNs. Furthermore, we find that in adversarial training, fitting atypical samples can even hurt DNNs' performance on “typical” samples (the samples in the main sub-population). In Section 3.2, we again implement PGD adversarial training (Madry et al., 2017) on CIFAR100 for several trials, trained with different amounts of atypical samples. Based on the results from Section 3.2, an adversarially trained model on the training set without any atypical samples has 95% clean accuracy and 55% adversarial accuracy on the test typical set, while the model trained with 100% of the atypical samples only has 90.2%/50.4% clean/adversarial accuracy, respectively. In other words, atypical samples act more like “poisoning data” (Biggio et al., 2012) that deteriorates model performance on typical samples. Furthermore, our study in Section 3.2 also demonstrates that this poisoning effect is absent in traditional ERM, where fitting atypical samples does not reduce model accuracy. To deepen our understanding of this finding, we build a theoretical analysis based on Gaussian mixture models. We prove that, given certain atypical samples, any model which fits their adversarial counterparts (adversarial training) must have poor accuracy on typical samples.
In contrast, under the same setting, a model that only fits the clean version of this atypical sample (traditional ERM) can achieve optimal accuracy (i.e., >99%). Our empirical and theoretical results highlight the key difference between the effect of memorization in adversarial training and in traditional ERM. Motivated by our findings, we propose a novel algorithm called Benign Adversarial Training (BAT), which can eliminate the negative influence of memorizing the “poisoning” atypical samples, while preserving the model's ability to memorize the “benign/useful” atypical samples. It is worth mentioning that, by fitting the “benign” atypical samples, BAT can achieve good clean accuracy on the atypical set; by eliminating the poisoning atypical samples, BAT can improve the clean and adversarial accuracy on the typical set. Compared with PGD adversarial training (Madry et al., 2017) at its highest adversarial robustness, BAT has higher clean accuracy as well as higher adversarial accuracy. Compared to other popular variants of adversarial training such as (Zhang et al., 2019; Wang et al., 2019; Zhang et al., 2020), BAT is the only one obtaining both better (or comparable) clean and adversarial accuracy than (Madry et al., 2017) on complex datasets such as CIFAR100 and Tiny ImageNet (Le & Yang, 2015). 2 DEFINITION AND NOTATION. 2.1 ATYPICAL SAMPLES AND MEMORIZATION. In this section, we introduce the necessary concepts and definitions about memorization effects. As is well known, the practice of training deep neural networks (DNNs) cannot be well explained by standard theories of model generalization (Evgeniou et al., 2000; Bartlett & Mendelson, 2002). At a high level, the standard theories underline the importance of regularizing model complexity to avoid “overfitting” outliers and non-useful samples.
Yet for DNNs, we routinely tune the model to hit almost perfect training accuracy and still enjoy good test performance. Fortunately, recent works (Feldman, 2020; Feldman & Zhang, 2020; Bartlett et al., 2020; Muthukumar et al., 2020) make significant progress towards closing this gap from both theoretical and empirical perspectives. They suggest that memorization is one key property for DNNs to achieve optimal generalization performance. In detail, the works (Feldman, 2020; Feldman & Zhang, 2020) point out that common benchmark datasets, such as CIFAR10, CIFAR100 and ImageNet, contain a large portion of atypical samples (or, equivalently, rare samples, sub-populations, etc.). These atypical samples look very different from the other samples in the main distribution of their labeled class, and are statistically indistinguishable from outliers or mislabeled samples. DNNs can only fit these samples by memorizing their labels. Moreover, without memorizing these atypical samples during training, DNNs can totally fail to predict the atypical samples in the test set (Feldman & Zhang, 2020). Identifying atypical samples. To identify such atypical samples in common datasets in practice, one representative strategy (Feldman & Zhang, 2020) proposes to examine which training samples can only be fitted by memorization, and to measure each training sample's “memorization value”. Formally, for a training algorithm A (i.e., ERM), the memorization value mem(A, D, x_i) of a training sample (x_i, y_i) ∈ D in the training set D is defined as:

mem(A, D, x_i) = Pr_{F ← A(D)}[ F(x_i) = y_i ] − Pr_{F ← A(D \ x_i)}[ F(x_i) = y_i ], (1)

which calculates the difference between the model F's accuracy on x_i with x_i included in versus removed from the training set D. In practice, Feldman & Zhang (2020) train a DNN model using the ERM method for 1,000 trials, with each trial preserving 70% of the samples of the whole training dataset.
Based on this metric, one can verify that all common datasets have a large fraction of atypical samples. For example, CIFAR10, CIFAR100 and Tiny ImageNet have more than 11%, 40% and 49% of samples with a large memorization value > 0.15, respectively. A similar strategy can also be used to find atypical samples in the test set, which are the samples that are strongly influenced by atypical training samples. In detail, for an atypical training sample (x_i, y_i), we calculate its “influence value” on each test sample (x'_j, y'_j) ∈ D' in the test set D':

infl(A, D, x_i, x'_j) = Pr_{F ← A(D)}[ F(x'_j) = y'_j ] − Pr_{F ← A(D \ x_i)}[ F(x'_j) = y'_j ]. (2)

If the sample pair (x_i, x'_j) has a high influence value, removing the atypical training sample x_i will drastically decrease the model's accuracy on x'_j. In Appendix A, we provide more detailed discussion of the memorization effect and atypical samples, covering: (a) the distribution and frequency of atypical samples in common datasets, and (b) alternative (more efficient) metrics to identify atypical samples, such as confidence-based methods and ensemble disagreement (Carlini et al., 2019).
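Eq. (1) can be estimated exactly as described: train many models on random 70% subsets and compare accuracy on x_i between subsets that contain it and subsets that exclude it. The toy sketch below substitutes a deterministic 1-NN predictor for DNN training so it runs instantly; the structure of the estimator, not the model, is the point (function names and toy data are ours, not the paper's setup):

```python
import numpy as np

def one_nn_predict(train_X, train_y, x):
    """Toy 1-NN 'model' standing in for A(D); the paper trains DNNs."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(d))]

def mem_value(train_X, train_y, i, n_trials=50, keep=0.7, seed=0):
    """Monte Carlo estimate of Eq. (1): accuracy on x_i over models
    trained on random 70% subsets containing x_i, minus subsets that
    exclude it. Feldman & Zhang (2020) do this with 1,000 DNN trainings.
    """
    rng = np.random.RandomState(seed)
    n = len(train_X)
    hits_with = n_with = hits_without = n_without = 0
    for _ in range(n_trials):
        idx = rng.choice(n, size=int(keep * n), replace=False)
        pred = one_nn_predict(train_X[idx], train_y[idx], train_X[i])
        if i in idx:
            n_with += 1
            hits_with += int(pred == train_y[i])
        else:
            n_without += 1
            hits_without += int(pred == train_y[i])
    return hits_with / max(n_with, 1) - hits_without / max(n_without, 1)
```

An isolated point with a unique label gets mem ≈ 1: the model predicts it correctly only when it is in the training subset. Estimating Eq. (2) follows the same pattern with the accuracy measured on a test point x'_j instead of x_i.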
This paper studies the role of atypical samples in the context of adversarial robustness, and further introduces a variant of adversarial training based on these observations. The proposed Benign Adversarial Training (BAT) incorporates an additional temperature-scaled n-pair loss and weights adversarial examples according to their margin. BAT shows improved clean accuracy at similar levels of robustness compared to baselines.
Towards the Memorization Effect of Neural Networks in Adversarial Training
1 INTRODUCTION . It is evident from recent studies that the memorization effect ( or benign overfitting ) ( Feldman , 2020 ; Feldman & Zhang , 2020 ; Bartlett et al. , 2020 ; Chatterji & Long , 2020 ; Muthukumar et al. , 2020 ) is one necessary factor for overparametrized deep neural networks ( DNNs ) to achieve the close-tooptimal generalization error . From the empirical perspective , the works ( Feldman , 2020 ; Feldman & Zhang , 2020 ) suggest that modern benchmark datasets , such as CIFAR10 , CIFAR100 ( Krizhevsky et al. , 2009 ) and ImageNet ( Krizhevsky et al. , 2012 ) , always have very diverse data distributions , especially containing a large fraction of “ atypical ” samples . These atypical samples are both visually and statistically very different from other samples in their labeled class . For example , the images in the class “ bird ” may have a variety of sub-populations or species , with many ( typical ) samples in the main sub-population and other ( atypical ) samples in less-frequent and distinct sub-populations . Since these atypical samples are deviated from the main sub-population , DNNs can only fit these atypical samples by “ memorizing ” their labels . While , memorizing / fitting these atypical samples will not hurt the model performance on typical samples , but can boost DNNs ’ accuracy by correctly classifying the atypical samples appearing in the test set . Similar to the classification models which are trained via empirical risk minimization ( ERM ) algorithms , adversarial training methods ( Madry et al. , 2017 ; Kurakin et al. , 2016 ) are also devised to fit the whole training dataset . Specifically , adversarial training minimizes the model ’ s error against adversarial perturbations ( Goodfellow et al. , 2014 ; Szegedy et al. , 2013 ) by fitting the model on manually generated adversarial examples of all training data . 
Although adversarial training can fit and memorize all training data as well as their adversarially perturbed counterparts , they always suffer from both poor clean accuracy and adversarial accuracy ( or robustness ) 1 on the test set ( Tsipras et al. , 1Clean Accuracy : Model ’ s accuracy on unperturbed samples . Adversarial accuracy ( robustness ) : Model ’ s accuracy on adversarially perturbed samples . Without loss of generality , this paper discusses the adversarial accuracy under l∞-8/255 PGD attack ( Madry et al. , 2017 ) . 2018 ; Schmidt et al. , 2018 ) . Recent study ( Rice et al. , 2020 ) indicates that , during the adversarial training process , one model ’ s test adversarial accuracy will even keep decreasing as it hits higher training adversarial accuracy ( during the fine-tuning epochs ) . Thus , it is natural to ask a question : What is the effect of memorization in adversarial training ? In particular , can memorizing the atypical samples and their adversarial counterparts benefit the model ’ s accuracy and robustness ? To answer this question , we first conduct preliminary studies to explore whether memorizing atypical samples in adversarial training can benefit the DNNs ’ test performance , especially on those test atypical samples . In Section 3.1 , we implement PGD adversarial training ( Madry et al. , 2017 ) on CIFAR100 under WideResNet models ( He et al. , 2016 ) and fine-tune them until achieving the optimal training performance . From the results in Section 3.1 , we observe that the memorization in adversarial training can only benefit the clean accuracy of test atypical samples . When the DNNs gradually fit/memorize more atypical samples , they can finally achieve fair clean accuracy close to ∼ 40 % on the test atypical set . However , the adversarial accuracy on test atypical set is constantly low ( ∼ 10 % ) during the whole training process , even though the models can fit almost all atypical ( adversarial ) training samples . 
Based on the theoretical study (Schmidt et al., 2018), adversarial robustness is hard to generalize, especially when the training data size is limited. Since every single atypical sample is distinct from the main sub-population and rarely appears in the training set, the effective sample size available for each specific atypical sample is very small. Thus, its adversarial robustness can be extremely hard to generalize. Notably, for datasets such as CIFAR100, the atypical set covers at least ∼40% of the whole dataset. Therefore, completely failing on atypical samples could be one of the key reasons for the poor robustness generalization of DNNs. Furthermore, we find that in adversarial training, fitting atypical samples can even hurt DNNs' performance on "typical" samples (the samples in the main sub-population). In Section 3.2, we again implement PGD adversarial training (Madry et al., 2017) on CIFAR100 for several trials, trained with different amounts of atypical samples. Based on the results in Section 3.2, a model adversarially trained on the training set without any atypical samples achieves 95% clean accuracy and 55% adversarial accuracy on the test typical set, while the model trained with 100% of the atypical samples achieves only 90.2%/50.4% clean/adversarial accuracy, respectively. In other words, atypical samples act more like "poisoning data" (Biggio et al., 2012) that deteriorate model performance on typical samples. Our study in Section 3.2 also demonstrates that this poisoning effect is absent in traditional ERM, where fitting atypical samples does not reduce model accuracy. To deepen our understanding of this finding, we build a theoretical analysis based on Gaussian mixture models. We prove that, given certain atypical samples, any model which fits their adversarial counterparts (adversarial training) must have poor accuracy on typical samples.
Under the same setting, by contrast, a model that only fits the clean version of such an atypical sample (traditional ERM) can achieve optimal accuracy (i.e., > 99%). Our empirical and theoretical results highlight the key difference between the effect of memorization in adversarial training and in traditional ERM. Motivated by these findings, we propose a novel algorithm called Benign Adversarial Training (BAT), which eliminates the negative influence of memorizing "poisoning" atypical samples while preserving the model's ability to memorize "benign/useful" atypical samples. It is worth mentioning that, by fitting the benign atypical samples, BAT achieves good clean accuracy on the atypical set; by eliminating the poisoning atypical samples, BAT improves the clean and adversarial accuracy on the typical set. Compared with PGD adversarial training (Madry et al., 2017) at the point where it achieves its highest adversarial robustness, BAT has both higher clean accuracy and higher adversarial accuracy. Compared to other popular variants of adversarial training such as (Zhang et al., 2019; Wang et al., 2019; Zhang et al., 2020), BAT is the only one that obtains both better (or comparable) clean and adversarial accuracy than (Madry et al., 2017) on complex datasets such as CIFAR100 and Tiny ImageNet (Le & Yang, 2015). 2 DEFINITION AND NOTATION. 2.1 ATYPICAL SAMPLES AND MEMORIZATION. In this section, we introduce the necessary concepts and definitions about memorization effects. As is well known, the practice of training deep neural networks (DNNs) cannot be well explained by standard theories of model generalization (Evgeniou et al., 2000; Bartlett & Mendelson, 2002). At a high level, the standard theories underline the importance of regularizing model complexity, to avoid "overfitting" outliers and non-useful samples.
For DNNs, however, we routinely tune the model to reach almost perfect training accuracy and still enjoy good test performance. Fortunately, recent works (Feldman, 2020; Feldman & Zhang, 2020; Bartlett et al., 2020; Muthukumar et al., 2020) make significant progress in closing this gap from both theoretical and empirical perspectives. They suggest that memorization is one key property that allows DNNs to achieve optimal generalization performance. In detail, the works (Feldman, 2020; Feldman & Zhang, 2020) point out that common benchmark datasets, such as CIFAR10, CIFAR100 and ImageNet, contain a large portion of atypical samples (also called rare samples, sub-populations, etc.). These atypical samples look very different from the other samples in the main distribution of their labeled class, and are statistically indistinguishable from outliers or mislabeled samples. DNNs can only fit these samples by memorizing their labels. Moreover, without memorizing these atypical samples during training, DNNs can totally fail to predict the atypical samples in the test set (Feldman & Zhang, 2020). Identifying Atypical Samples. To identify such atypical samples in common datasets in practice, one representative strategy (Feldman & Zhang, 2020) examines which training samples can only be fitted by memorization, and measures each training sample's "memorization value". Formally, for a training algorithm $A$ (e.g., ERM), the memorization value $\mathrm{mem}(A, D, x_i)$ of a training sample $(x_i, y_i) \in D$ in training set $D$ is defined as:

$\mathrm{mem}(A, D, x_i) = \Pr_{F \leftarrow A(D)}\left[F(x_i) = y_i\right] - \Pr_{F \leftarrow A(D \setminus x_i)}\left[F(x_i) = y_i\right]$,  (1)

which calculates the difference between model $F$'s accuracy on $x_i$ when $x_i$ is included in versus removed from the training set $D$. In practice, (Feldman & Zhang, 2020) trains a DNN model using the ERM method for 1,000 trials, with each trial preserving 70% of the samples of the whole training dataset.
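A Monte-Carlo sketch of this estimator follows; the 1-nearest-neighbor learner is an illustrative stand-in (Feldman & Zhang train full DNNs here), and all names are hypothetical.

```python
import numpy as np

def nn1_learner(X_sub, y_sub):
    """Hypothetical stand-in learner (1-nearest-neighbor on 1-D inputs)."""
    def predict(Xq):
        if len(X_sub) == 0:                       # degenerate empty subset
            return np.full(len(Xq), -1)
        d = np.abs(Xq[:, None, 0] - X_sub[None, :, 0])
        return y_sub[np.argmin(d, axis=1)]
    return predict

def memorization_values(train_fn, X, y, n_trials=60, keep_frac=0.7, seed=0):
    """Monte-Carlo estimate of Eq. (1): the chance a model predicts y_i when
    x_i is inside a random 70% training subset, minus the chance when it is
    outside."""
    rng = np.random.default_rng(seed)
    n = len(X)
    hits_in = np.zeros(n); cnt_in = np.zeros(n)
    hits_out = np.zeros(n); cnt_out = np.zeros(n)
    for _ in range(n_trials):
        mask = rng.random(n) < keep_frac          # sample a training subset
        correct = (train_fn(X[mask], y[mask])(X) == y).astype(float)
        hits_in += correct * mask;   cnt_in += mask
        hits_out += correct * ~mask; cnt_out += ~mask
    return hits_in / np.maximum(cnt_in, 1) - hits_out / np.maximum(cnt_out, 1)

# An isolated point with its own label can only be fit by memorization,
# so its estimated memorization value is high.
X = np.array([[0.0], [0.05], [0.1], [0.15], [5.0]])
y = np.array([0, 0, 0, 0, 1])
mem = memorization_values(nn1_learner, X, y)
```

With this toy data, the lone point at 5.0 gets a memorization value near 1, while points in the dense cluster get values near 0.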
Based on this metric, one finds that all common datasets have a large fraction of atypical samples. For example, CIFAR10, CIFAR100 and Tiny ImageNet have more than 11%, 40% and 49% of their samples with a large memorization value (> 0.15), respectively. A similar strategy can also be used to find atypical samples in the test set, namely the test samples that are strongly influenced by atypical training samples. In detail, for an atypical training sample $(x_i, y_i)$, we calculate its "influence value" on each test sample $(x'_j, y'_j) \in D'$ in the test set $D'$:

$\mathrm{infl}(A, D, x_i, x'_j) = \Pr_{F \leftarrow A(D)}\left[F(x'_j) = y'_j\right] - \Pr_{F \leftarrow A(D \setminus x_i)}\left[F(x'_j) = y'_j\right]$.  (2)

If the sample pair $(x_i, x'_j)$ has a high influence value, removing the atypical sample $x_i$ from the training set will drastically decrease the model's accuracy on $x'_j$. In Appendix A, we provide more detailed discussions of the memorization effect and atypical samples, covering: (a) the distribution and frequency of atypical samples in common datasets and (b) alternative (more efficient) metrics to identify atypical samples, such as confidence-based methods and ensemble disagreement (Carlini et al., 2019).
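Eq. (2) can be estimated with the same subset-resampling machinery; again a 1-nearest-neighbor learner stands in for the trained DNN, and all names are illustrative.

```python
import numpy as np

def nn1_learner(X_sub, y_sub):
    # hypothetical 1-nearest-neighbor stand-in for the trained DNN
    def predict(Xq):
        if len(X_sub) == 0:
            return np.full(len(Xq), -1)
        d = np.abs(Xq[:, None, 0] - X_sub[None, :, 0])
        return y_sub[np.argmin(d, axis=1)]
    return predict

def influence_values(train_fn, X, y, X_test, y_test, i,
                     n_trials=60, keep_frac=0.7, seed=0):
    """Monte-Carlo estimate of Eq. (2): how much keeping training sample i
    in a random subset raises the model's accuracy on each test sample."""
    rng = np.random.default_rng(seed)
    m = len(X_test)
    hits_in = np.zeros(m); hits_out = np.zeros(m)
    cnt_in = 0; cnt_out = 0
    for _ in range(n_trials):
        mask = rng.random(len(X)) < keep_frac
        correct = (train_fn(X[mask], y[mask])(X_test) == y_test).astype(float)
        if mask[i]:
            hits_in += correct; cnt_in += 1
        else:
            hits_out += correct; cnt_out += 1
    return hits_in / max(cnt_in, 1) - hits_out / max(cnt_out, 1)

# The atypical training point at 5.0 drives the prediction on the nearby
# test point at 4.9; removing it collapses accuracy there (high influence).
X = np.array([[0.0], [0.1], [5.0]]); y = np.array([0, 0, 1])
X_test = np.array([[4.9], [0.05]]); y_test = np.array([1, 0])
infl = influence_values(nn1_learner, X, y, X_test, y_test, i=2)
```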
This paper explores "memorization" in adversarial training. Through empirical experiments and theoretical analysis, it makes two findings: (a) memorizing atypical samples can improve DNNs' accuracy on clean atypical samples, but hardly improves their adversarial robustness, and (b) memorizing some atypical samples can even hurt DNNs' performance on typical samples. Based on these findings, the authors propose a reweighting-based method called benign adversarial training. Empirical results show that it can achieve better clean accuracy or robustness than several baseline methods on CIFAR-100 and Tiny-ImageNet.
SP:966d826eb45b19f53a30533bfd1ecb1351545daa
Towards the Memorization Effect of Neural Networks in Adversarial Training
The authors study the interaction between adversarial training (meant to produce robust models) and atypical examples in standard datasets (examples where generalization is driven by a handful of examples, cf [Feldman 2020]). They find that, in contrast to standard training, adversarial training has a hard time generalizing to atypical test examples in the adversarial setting. They also find that adding atypical examples to adversarial training can actually hurt test accuracy, in contrast to standard training where such examples typically help. Finally, the authors propose a method that down-weights atypical examples during robust training and introduces a contrastive regularizer, which they term BAT (benign adversarial training).
AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation
1 INTRODUCTION. Since the inception of domain adaptation and knowledge transfer, researchers have been well aware of the various configurations of labeled and unlabeled data and assumptions on domain shift (Csurka, 2017). Unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and semi-supervised domain adaptation (SSDA) all use different configurations of labeled and unlabeled data, the major distinction being that, unlike SSL, UDA and SSDA assume a domain shift between the labeled and unlabeled data (see Table 1). However, the fields of SSL and UDA/SSDA are currently fragmented: different techniques are developed in isolation for each setting, and only a handful of algorithms are evaluated on both (French et al., 2018). Techniques that leverage unlabeled data are of utmost importance in practical applications of machine learning because labeling data is expensive. Moreover, in practice the available unlabeled data often exhibits a distribution shift. Addressing this distribution shift is necessary because neural networks are not robust (Recht et al., 2019a; Biggio & Roli, 2018; Szegedy et al., 2013; Hendrycks & Dietterich, 2019; Azulay & Weiss, 2018; Shankar et al., 2019; Gu et al., 2019; Taori et al., 2020) to even slight differences between the training and test distributions. Although there are techniques to improve out-of-distribution robustness assuming no access to unlabeled data from the target domain (Hendrycks et al., 2020; Zhang, 2019; Engstrom et al., 2019; Geirhos et al., 2018; Yang et al., 2019; Zhang et al., 2019), it is common in practice to have access to unlabeled data in a shifted domain (i.e., the UDA or SSDA setting), and leveraging this unlabeled data allows for much higher accuracy. Moreover, while SSDA (Donahue et al., 2013; Yao et al., 2015; Ao et al., 2017; Saito et al.
, 2019) has received less attention than both SSL and UDA, we believe it describes a realistic practical scenario and should be considered equally. In this work, we introduce AdaMatch, a unified solution designed to solve the tasks of UDA, SSL, and SSDA using the same set of hyperparameters regardless of the dataset or task. AdaMatch extends FixMatch (Sohn et al., 2020) by (1) addressing the distribution shift between source and target domains present in the batch norm statistics, (2) adjusting the pseudo-label confidence threshold on-the-fly, and (3) using a modified version of distribution alignment from Berthelot et al. (2020). AdaMatch sets a new state-of-the-art accuracy of 28.7% for UDA without pre-training and 33.4% with pre-training on DomainNet, an increase of 11.1% when compared on the same code base. With just one label per class on the target dataset, AdaMatch is more data efficient than other methods, achieving a gain of 6.1% over UDA, and of 13.6% with 5 labels. We additionally promote democratic research by reporting results on a smaller 64×64 DomainNet. This results in a minimal drop in accuracy compared to the full resolution and, unlike the practice of sub-selecting dataset pairs, does not bias the results towards easier or harder datasets. Finally, we perform an extensive ablation analysis to understand the importance of each improvement and modification that distinguishes AdaMatch from prior semi-supervised learning methods. 2 RELATED WORK. Unsupervised Domain Adaptation (UDA). The UDA research area studies the performance of models trained on a labeled source domain and an unlabeled target domain from a differing distribution, with the goal of obtaining high accuracy on the target domain. Inspired by the theoretical analysis of domain adaptation (Ben-David et al.
, 2010 ) , a major focus of research in UDA has been reducing the discrepancy of representations between domains , so that a classifier that is learned on the source features works well on the target features . UDA methods can be categorized by the technique they use to measure this discrepancy . For example , ( Long et al. , 2013 ; Tzeng et al. , 2014 ; 2015 ) use the maximum mean discrepancy ( Gretton et al. , 2012 ) , ( Sun & Saenko , 2016 ) use correlation alignment across domains , and domain adversarial neural networks ( Ajakan et al. , 2014 ; Ganin et al. , 2016 ; Bousmalis et al. , 2017 ; Saito et al. , 2018 ) measure the domain discrepancy using a discriminator network . Maximum classifier discrepancy ( Saito et al. , 2018 ) ( MCD ) measures the domain discrepancy via multiple task classifiers and is shown to achieve state-of-the-art performance . Semi-Supervised Learning ( SSL ) . In the SSL setting , a portion of the training dataset is labeled and the remaining portion is unlabeled . SSL has seen great progress in recent years , including temporal ensemble ( Laine & Aila , 2017 ) , mean teacher ( Tarvainen & Valpola , 2017 ) , MixMatch ( Berthelot et al. , 2019 ) , ReMixMatch ( Berthelot et al. , 2020 ) , FixMatch ( Sohn et al. , 2020 ) , unsupervised data augmentation ( Xie et al. , 2019 ) , to name a few . While NoisyStudent ( Xie et al. , 2020 ) , an SSL algorithm , uses labeled data from ImageNet and unlabeled data from JFT in training , they do not leverage this distribution shift during training , and they only evaluate on the source domain ( i.e. , ImageNet ) , not the target domain . While there is no technical barrier to applying SSL methods to UDA , only a few SSL methods have been applied to solve UDA problems ; for example , on top of the discrepancy reduction techniques of UDA , several works ( French et al. , 2018 ; Long et al. , 2018 ; Saito et al. , 2019 ; Tran et al. 
, 2019 ) propose to combine SSL techniques such as entropy minimization ( Grandvalet & Bengio , 2005 ) or pseudo-labeling ( Lee , 2013 ) . Semi-Supervised Domain Adaptation ( SSDA ) has been studied in several settings , including vision ( Donahue et al. , 2013 ; Yao et al. , 2015 ; Ao et al. , 2017 ; Saito et al. , 2019 ) and natural language processing ( Jiang & Zhai , 2007 ; Daumé III et al. , 2010 ; Guo & Xiao , 2012 ) . Since SSDA assumes access to labeled data from multiple domains , early works have used separate models for each domain and regularized them with constraints ( Donahue et al. , 2013 ; Yao et al. , 2015 ; Ao et al. , 2017 ; Daumé III et al. , 2010 ; Guo & Xiao , 2012 ) . However , such methods are difficult to adapt to the UDA setting where labeled data is only available in a single domain . One recent exception is minimax entropy ( MME ) regularization ( Saito et al. , 2019 ) , which can work in both the UDA and SSDA setting . However , unlike AdaMatch , MME requires a pre-trained network to work well . Transfer learning is used to boost accuracy on small datasets by initializing model parameters with pre-trained weights first learned on a separate , larger dataset , which can compensate for the limited amount of labeled source data and boost the overall performance ( Recht et al. , 2019b ; Kolesnikov et al. , 2019 ) . For example , standard experimental protocols on several UDA benchmarks , including Office-31 ( Saenko et al. , 2010 ) , PACS ( Li et al. , 2017 ) , and DomainNet ( Peng et al. , 2019 ) , use ImageNet-pretrained models to initialize model parameters . Though useful for some cases in practice , it may not be the most general protocol to evaluate the advancement of UDA algorithms , especially in situations where no datasets exist for pre-training ( for example , images with arbitrary number of channels ) , or for domains other than vision , where no pre-training datasets exist . 
Thus, in this work, we mainly focus on evaluating methods in a non-transfer-learning setting, which we consider to be more general. Although we also achieve state-of-the-art results with transfer learning, we present them only for historical reasons and to illustrate that our method works in that setting too. 3 ADAMATCH. We now introduce AdaMatch, a new algorithm inspired by modern semi-supervised learning techniques, aimed at solving UDA, SSL, and SSDA. As is typical in SSL, AdaMatch takes both an unlabeled and a labeled dataset as input. We assume that the labeled data is drawn from a source domain while the unlabeled data is drawn from a target domain (for the SSL task, these domains are the same). Notation. We use capital letters $X, Y, Z$ to denote minibatches of examples, labels and logits. Specifically, $X_{SL} \subset \mathbb{R}^{n_{SL} \times d}$ and $Y_{SL} \subset \{0, 1\}^{n_{SL} \times k}$ denote the minibatch of source images and labels, respectively. Similarly, the minibatch of unlabeled target images is $X_{TU} \subset \mathbb{R}^{n_{TU} \times d}$. Here, $k$ is the number of classes and $d$ is the input dimension (for images $d = h \cdot w \cdot c$, where $h$ is height, $w$ is width, and $c$ is the number of channels). The minibatch size for the labeled data is $n_{SL}$ and the minibatch size for the unlabeled images is $n_{TU}$. Additionally, we use $Y^{(i)}$ to refer to the $i$-th row of $Y$, and $Y^{(i,j)}$ to refer to its $(i,j)$-th element. The model $f : \mathbb{R}^d \to \mathbb{R}^k$ takes images as input and outputs logits for each of the $k$ classes. Importantly, the source and target domain share the same classification task, so the number of classes $k$ and the image dimension $d$ are the same for both domains. 3.1 METHOD DESCRIPTION. AdaMatch introduces three new techniques to account for differences between the source and target distributions – random logit interpolation, a relative confidence threshold, and a modified distribution alignment from ReMixMatch (Berthelot et al., 2020) – but builds upon the algorithmic backbone of FixMatch (Sohn et al.
, 2020). We first provide a high-level overview of the algorithm and then discuss the implementation details of its components. Overview. A high-level depiction of AdaMatch is given in Figure 1. Two augmentations are made for each image: a weak one and a strong one, with the intent of making the class prediction harder on the strongly augmented image. Next, we obtain logits by running two batches through the model: a batch of the source images, and a batch composed of both the source and target images. Each resulting batch of logits is influenced by its respective batch norm statistics, i.e., the source batch is influenced only by the source data's batch norm statistics, while the batch that combines source and target is influenced by both domains' batch norm statistics. Two loss terms are then computed:
• The source loss term is responsible for predicting correct source labels and for aligning the source and target logit domains. We first combine the logits for the source images using random logit interpolation, which encourages the model to produce the same label along the hyperspace connecting the source logits obtained from (1) only source examples and (2) a combination of source and target examples. In practice, this creates an implicit constraint to align the source and target domains in logit space. The newly obtained source logits are then used to compute the cross-entropy loss for the source data.
• The target loss term is responsible for predicting the correct target labels and for aligning the target predictions to a desired class distribution. Since we do not assume access to labels for the target images, we create a pseudo-label for these images as follows. First, we rectify the class distribution obtained from weakly augmented target images towards a desired class distribution using distribution alignment. If the target class distribution is known, it can be used directly.
In the general case where it is not known, we use the source class distribution instead. We then select the entries of the batch for which the rectified probabilities of the weakly augmented target image predictions are above a user-defined confidence threshold. A pseudo-label is then made for these outputs by selecting the most confident class, and these pseudo-labels are used with a standard cross-entropy loss applied to the logits of the strongly augmented images. Augmentation. For a dataset $D \in \{SL, TU\}$, we augment each image batch $X_D$ fed into AdaMatch twice, once with a weak augmentation and once with a strong augmentation, using the same types of weak and strong augmentations as (Berthelot et al., 2020). This forms a pair of batches $X_{D,w}$ and $X_{D,s}$ respectively, which we denote together as $X^{\mathrm{aug}}_D = \{X_{D,w}, X_{D,s}\}$. From these pairs of batches, we then compute logits $Z'_{SL}$, $Z''_{SL}$ and $Z_{TU}$ as follows:

$\{Z'_{SL}, Z_{TU}\} = f(\{X^{\mathrm{aug}}_{SL}, X^{\mathrm{aug}}_{TU}\}; \theta)$  (1)
$Z''_{SL} = f(X^{\mathrm{aug}}_{SL}; \theta)$  (2)

That is, we compute logits by calling the model twice for both the strongly and weakly augmented images. The first time we pass source and target inputs together in the same batch, so the batch normalization statistics are shared; the second time we pass only the labeled source data. We only update the batch normalization statistics when computing $\{Z'_{SL}, Z_{TU}\} = f(\{X^{\mathrm{aug}}_{SL}, X^{\mathrm{aug}}_{TU}\}; \theta)$, to avoid double counting the labeled source data. Note that without batch normalization, we would have $Z'_{SL} \equiv Z''_{SL}$; batch normalization, however, may make them slightly different. Random logit interpolation. The role of random logit interpolation is to randomly combine the joint batch statistics from the source and target domains with the batch statistics from the source domain, which has the effect of producing batch statistics that are more representative of both domains.
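The double forward pass of Eqs. (1)-(2) and the per-logit interpolation can be sketched in numpy; the linear-plus-batch-norm "network", batch sizes, and shift of the target data are illustrative assumptions, not the paper's model.

```python
import numpy as np

def batch_norm(Z, eps=1e-5):
    # normalize each feature with the statistics of the batch passed in
    return (Z - Z.mean(axis=0)) / np.sqrt(Z.var(axis=0) + eps)

def model(X, W):
    # toy stand-in for f(.; theta): linear layer followed by batch norm
    return batch_norm(X @ W)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
X_src = rng.normal(size=(8, 4))                 # source minibatch
X_tgt = rng.normal(loc=1.0, size=(8, 4))        # shifted target minibatch

# Eq. (1): one forward pass over source and target together (shared stats)
Z_joint = model(np.concatenate([X_src, X_tgt]), W)
Z_src_joint, Z_tgt = Z_joint[:8], Z_joint[8:]   # Z'_SL and Z_TU
# Eq. (2): a second pass over the source batch alone (source-only stats)
Z_src_only = model(X_src, W)                    # Z''_SL

# Per-logit random interpolation between the two versions of the source logits
lam = rng.uniform(size=Z_src_joint.shape)
Z_SL = lam * Z_src_joint + (1 - lam) * Z_src_only
```

Because the joint batch statistics include the shifted target data, the two source logit batches differ, which is exactly what the interpolation exploits.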
More precisely , during training , we obtain logits ZSL by randomly interpolating the logits Z ′SL and Z ′′ SL computed with different batch statistics : ZSL = λ · Z ′SL + ( 1− λ ) · Z ′′SL ( 3 ) where we sample λ ∼ UnSL·k ( 0 , 1 ) . Note that each individual logit gets its own random factor . Our underlying goal here is to minimize the loss for every point between Z ′SL and Z ′′ SL , which can be accomplished by either 1 ) having Z ′SL and Z ′′ SL be equal to each other , or 2 ) having the whole line between Z ′SL and Z ′′ SL be a minima . Rather than picking one of the two ways , we formulate the problem as minimizing the loss for all connecting points , which gives the model the freedom to find the best possible solution . We then use random logit interpolation as a cost effective implementation of this loss formulation . By sampling a random point along the line between Z ′SL and Z ′′ SL at each batch , over time , we implicitly allow the model to minimize the loss for all points on the line . Distribution alignment . Distribution alignment ( Berthelot et al. , 2020 ) can be seen as an additional form of model regularization that helps constrain the distribution of the class predictions to be more aligned with the true distribution . Without it , the classifier could just predict the most prevalent class or exhibit other failure modes . Ideally , if the target label distribution is known , we would use it directly . However , when the target label distribution is unknown , we approximate it using the only available distribution – the source label distribution . A limitation of this approach is that the more the source label distribution differs from the target distribution , the more incorrect the approximation will be , which may cause the model performance to degrade . However , in practice , we find that aligning the target pseudo-labels to match the source label distribution helps significantly . Unlike ReMixMatch ( Berthelot et al. 
, 2020 ) , we estimate the source label distribution from the output of the model rather than using the true labels . We make this change since the model may not be capable of matching the ground truth source label distribution ( particularly when source accuracy is low ) but matching the source output distribution is a more attainable goal . To implement this , we first extract the logits for weakly and strongly augmented samples from the batch ( by indexing ) : ZSL = { ZSL , w , ZSL , s } ZTU = { ZTU , w , ZTU , s } ( 4 ) Then , we compute pseudo-labels for labeled sources and unlabeled targets : ŶSL , w = softmax ( ZSL , w ) ∈ RnSL·k ŶTU , w = softmax ( ZTU , w ) ∈ RnTU ·k ( 5 ) Using distribution alignment , we rectify the target unlabeled pseudo-labels by multiplying them by the ratio of the expected value of the weakly augmented source labels E [ ŶSL , w ] ∈ Rk to the expected value of the target labels E [ ŶTU , w ] ∈ Rk , obtaining the final pseudo-labels ỸTU , w ∈ RnTU ·k : ỸTU , w = normalize ( ŶTU , w E [ ŶSL , w ] E [ ŶTU , w ] ) ( 6 ) normalize ensures that the distribution still sums to 1 . As could be seen E [ ỸTU , w ] = E [ ŶSL , w ] , which confirms that distribution alignment makes the target pseudo-labels follow the source label distribution . If the target label distribution is known , one can simply replace the term E [ ŶSL , w ] with it in the formula above . Relative confidence threshold . A confidence threshold is typically used to select which predicted labels are confident enough to be used as pseudo-labels ( Lee , 2013 ) . However , since machine learning models are poorly calibrated ( Guo et al. , 2017 ) , especially on out-of-distribution data Ovadia et al . ( 2019 ) , the confidence varies from dataset to dataset depending on the ability of the model to learn its task . 
To address this issue , we introduce a relative confidence threshold which adjusts a user-provided confidence threshold relative to the confidence level of the classifier on the weakly augmented source data . Specifically , we define the relative confidence threshold cτ as the mean confidence of the top-1 prediction on the weakly augmented source data multiplied by a user provided threshold τ : cτ = τ nSL nSL∑ i=1 max j∈ [ 1 .. k ] ( Ŷ ( i , j ) SL , w ) ( 7 ) We then compute a binary mask ∈ { 0 , 1 } nTU by thresholding the weakly augmented target images with the relative confidence threshold cτ : mask ( i ) = max j∈ [ 1 .. k ] ( Ỹ ( i , j ) TU , w ) ≥ cτ ( 8 ) Loss function . The loss L ( θ ) sums Lsource ( θ ) for the source and Ltarget ( θ ) for the target . Lsource ( θ ) = 1 nSL nSL∑ i=1 H ( Y ( i ) SL , Z ( i ) SL , w ) + 1 nSL nSL∑ i=1 H ( Y ( i ) SL , Z ( i ) SL , s ) ( 9 ) Ltarget ( θ ) = 1 nTU nTU∑ i=1 H ( stop_gradient ( Ỹ ( i ) TU , w ) , Z ( i ) TU , s ) ·mask ( i ) ( 10 ) L ( θ ) = Lsource ( θ ) + µ ( t ) Ltarget ( θ ) ( 11 ) whereH ( p , q ) =− ∑ p ( x ) log q ( x ) is the cross-entropy loss and stop_gradient is a function that prevents gradient from back-propagating on its argument . Prevention of gradient back-propagation on guessed labels is a standard practice in SSL works that favors convergence . µ ( t ) is a warmup function that controls the unlabeled loss weight at every step of the training . In practice we use µ ( t ) = 1/2 − cos ( min ( π , 2πt/T ) ) /2 where T is the total training steps . This particular function smoothly raises from 0 to 1 for the first half of the training and remains at 1 for the second half . As can be noted , Lsource ( θ ) is the cross-entropy loss typically used in fully supervised learning which the nuance that we call it twice : once on weakly augmented samples and once on strongly augmented ones . 
Similarly , Ltarget ( θ ) is the masked cross-entropy loss , where entries for which the confidence is less than cτ are zeroed . This loss term is exactly the same as in FixMatch ( Sohn et al. , 2020 ) .
This paper proposes to extend semi-supervised learning techniques to address domain adaptation problems, introducing a unified method, AdaMatch, that can handle unsupervised domain adaptation (UDA), semi-supervised domain adaptation (SSDA), and semi-supervised learning (SSL). While borrowing extensively from SSL, AdaMatch contributes three unique techniques: distribution alignment, random logit interpolation, and a relative confidence threshold. Extensive experiments verify the efficacy of AdaMatch, which improves significantly over existing methods across these problems.
SP:ac43f7aa79c0497098b83b4ffb239f09781c9797
AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation
1 INTRODUCTION. Since the inception of domain adaptation and knowledge transfer, researchers have been well aware of the various configurations of labeled and unlabeled data and assumptions on domain shift (Csurka, 2017). Unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and semi-supervised domain adaptation (SSDA) all use different configurations of labeled and unlabeled data, with the major distinction being that, unlike SSL, UDA and SSDA assume a domain shift between the labeled and unlabeled data (see Table 1). However, the fields of SSL and UDA/SSDA are currently fragmented: different techniques are developed in isolation for each setting, and there are only a handful of algorithms that are evaluated on both (French et al., 2018). Techniques that leverage unlabeled data are of utmost importance in practical applications of machine learning because labeling data is expensive. It is also the case that, in practice, the available unlabeled data will exhibit a distribution shift. Addressing this distribution shift is necessary because neural networks are not robust (Recht et al., 2019a; Biggio & Roli, 2018; Szegedy et al., 2013; Hendrycks & Dietterich, 2019; Azulay & Weiss, 2018; Shankar et al., 2019; Gu et al., 2019; Taori et al., 2020) to even slight differences between the training and test distributions. Although there are techniques to improve out-of-distribution robustness assuming no access to unlabeled data from the target domain (Hendrycks et al., 2020; Zhang, 2019; Engstrom et al., 2019; Geirhos et al., 2018; Yang et al., 2019; Zhang et al., 2019), it is common in practice to have access to unlabeled data in a shifted domain (i.e., the UDA or SSDA setting), and leveraging this unlabeled data allows for much higher accuracy. Moreover, while SSDA (Donahue et al., 2013; Yao et al., 2015; Ao et al., 2017; Saito et al.
, 2019) has received less attention than both SSL and UDA, we believe it describes a realistic practical scenario and should be equally considered. In this work, we introduce AdaMatch, a unified solution designed to solve the tasks of UDA, SSL, and SSDA using the same set of hyperparameters regardless of the dataset or task. AdaMatch extends FixMatch (Sohn et al., 2020) by (1) addressing the distribution shift between source and target domains present in the batch norm statistics, (2) adjusting the pseudo-label confidence threshold on-the-fly, and (3) using a modified version of distribution alignment from Berthelot et al. (2020). AdaMatch sets a new state-of-the-art accuracy of 28.7% for UDA without pre-training and 33.4% with pre-training on DomainNet, an increase of 11.1% when compared on the same code base. With just one label per class on the target dataset, AdaMatch is more data efficient than other methods, achieving a gain of 6.1% over UDA, and 13.6% with 5 labels. We additionally promote democratic research by reporting results on a smaller 64 × 64 DomainNet. This results in a minimal drop in accuracy compared to the full resolution and, compared to the practice of sub-selecting dataset pairs, does not bias the results toward easier or harder datasets. Finally, we perform an extensive ablation analysis to understand the importance of each improvement and modification that distinguishes AdaMatch from prior semi-supervised learning methods.

2 RELATED WORK. Unsupervised Domain Adaptation (UDA). The UDA research area studies the performance of models trained on a labeled source domain and an unlabeled target domain from a differing distribution, with the goal of obtaining high accuracy on the target domain. Inspired by the theoretical analysis of domain adaptation (Ben-David et al.
, 2010), a major focus of research in UDA has been reducing the discrepancy between the representations of the two domains, so that a classifier learned on the source features works well on the target features. UDA methods can be categorized by the technique they use to measure this discrepancy. For example, Long et al. (2013) and Tzeng et al. (2014; 2015) use the maximum mean discrepancy (Gretton et al., 2012), Sun & Saenko (2016) use correlation alignment across domains, and domain adversarial neural networks (Ajakan et al., 2014; Ganin et al., 2016; Bousmalis et al., 2017; Saito et al., 2018) measure the domain discrepancy using a discriminator network. Maximum classifier discrepancy (MCD) (Saito et al., 2018) measures the domain discrepancy via multiple task classifiers and has been shown to achieve state-of-the-art performance.

Semi-Supervised Learning (SSL). In the SSL setting, a portion of the training dataset is labeled and the remaining portion is unlabeled. SSL has seen great progress in recent years, including temporal ensembling (Laine & Aila, 2017), mean teacher (Tarvainen & Valpola, 2017), MixMatch (Berthelot et al., 2019), ReMixMatch (Berthelot et al., 2020), FixMatch (Sohn et al., 2020), and unsupervised data augmentation (Xie et al., 2019), to name a few. While NoisyStudent (Xie et al., 2020), an SSL algorithm, uses labeled data from ImageNet and unlabeled data from JFT during training, it does not leverage this distribution shift during training, and it is evaluated only on the source domain (i.e., ImageNet), not the target domain. While there is no technical barrier to applying SSL methods to UDA, only a few SSL methods have been applied to solve UDA problems; for example, on top of the discrepancy reduction techniques of UDA, several works (French et al., 2018; Long et al., 2018; Saito et al., 2019; Tran et al.
, 2019) propose to combine SSL techniques such as entropy minimization (Grandvalet & Bengio, 2005) or pseudo-labeling (Lee, 2013).

Semi-Supervised Domain Adaptation (SSDA) has been studied in several settings, including vision (Donahue et al., 2013; Yao et al., 2015; Ao et al., 2017; Saito et al., 2019) and natural language processing (Jiang & Zhai, 2007; Daumé III et al., 2010; Guo & Xiao, 2012). Since SSDA assumes access to labeled data from multiple domains, early works used separate models for each domain and regularized them with constraints (Donahue et al., 2013; Yao et al., 2015; Ao et al., 2017; Daumé III et al., 2010; Guo & Xiao, 2012). However, such methods are difficult to adapt to the UDA setting, where labeled data is available only in a single domain. One recent exception is minimax entropy (MME) regularization (Saito et al., 2019), which can work in both the UDA and SSDA settings. However, unlike AdaMatch, MME requires a pre-trained network to work well.

Transfer learning is used to boost accuracy on small datasets by initializing model parameters with pre-trained weights first learned on a separate, larger dataset, which can compensate for the limited amount of labeled source data and boost overall performance (Recht et al., 2019b; Kolesnikov et al., 2019). For example, standard experimental protocols on several UDA benchmarks, including Office-31 (Saenko et al., 2010), PACS (Li et al., 2017), and DomainNet (Peng et al., 2019), use ImageNet-pretrained models to initialize model parameters. Though useful in some practical cases, this may not be the most general protocol for evaluating the advancement of UDA algorithms, especially in situations where no datasets exist for pre-training (for example, images with an arbitrary number of channels), or in domains other than vision where no pre-training datasets exist.
Thus, in this work, we mainly focus on evaluating methods in a non-transfer-learning setting, which we consider to be more general. Although we also achieve state-of-the-art results with transfer learning, we present them only for historical reasons and to illustrate that our method can be used in that setting too.

3 ADAMATCH. We now introduce AdaMatch, a new algorithm inspired by modern semi-supervised learning techniques, aimed at solving UDA, SSL, and SSDA. As is typical in SSL, AdaMatch takes both an unlabeled and a labeled dataset as input. We assume that the labeled data is drawn from a source domain while the unlabeled data is drawn from a target domain (for the SSL task, these domains are the same).

Notation. We use capital letters $X, Y, Z$ to denote minibatches of examples, labels, and logits. Specifically, $X_{SL} \subset \mathbb{R}^{n_{SL} \times d}$ and $Y_{SL} \subset \{0, 1\}^{n_{SL} \times k}$ denote the minibatch of source images and labels, respectively. Similarly, the minibatch of unlabeled target images is $X_{TU} \subset \mathbb{R}^{n_{TU} \times d}$. Here, $k$ is the number of classes and $d$ is the input dimension (for images $d = h \cdot w \cdot c$, where $h$ is height, $w$ is width, and $c$ is the number of channels). The minibatch size for the labeled data is $n_{SL}$ and the minibatch size of the unlabeled images is $n_{TU}$. Additionally, we use $Y^{(i)}$ to refer to the $i$-th row of $Y$, and $Y^{(i,j)}$ to refer to its $(i, j)$-th element. The model $f : \mathbb{R}^d \to \mathbb{R}^k$ takes images as input and outputs logits for each of the $k$ classes. Importantly, the source and target domains share the same classification task, so the number of classes $k$ and the image dimension $d$ are the same for both domains.

3.1 METHOD DESCRIPTION. AdaMatch introduces three new techniques to account for differences between the source and target distributions – random logit interpolation, a relative confidence threshold, and a modified distribution alignment from ReMixMatch (Berthelot et al., 2020) – but builds upon the algorithmic backbone of FixMatch (Sohn et al.
, 2020). We first provide a high-level overview of the algorithm and then discuss the implementation details of the various components.

Overview. A high-level depiction of AdaMatch is in Figure 1. Two augmentations are made for each image: a weak one and a strong one, with the intent of making the class prediction harder on the strongly augmented image. Next, we obtain logits by running two batches through the model: a batch of the source images and a batch composed of both the source and target images. Each resulting batch of logits is influenced by its respective batch norm statistics, i.e., the source batch is influenced only by the source data's batch norm statistics, while the batch that combines source and target is influenced by both domains' batch norm statistics. Two loss terms are then computed:
• The source loss term is responsible for predicting correct source labels and for aligning the source and target logit domains. We first combine the logits for the source images using random logit interpolation, which encourages the model to produce the same label for the hyperspace connecting the source logits obtained from 1) only source examples and 2) a combination of source and target examples. In practice, this creates an implicit constraint to align the source and target domains in logit space. The newly obtained source logits are then used to compute the cross-entropy loss for the source data.
• The target loss term is responsible for predicting the correct target labels and for aligning the target predictions to a desired class distribution. Since we do not assume access to labels for the target images, we create a pseudo-label for these images as follows. First, we rectify the class distribution obtained from weakly augmented target images toward a desired class distribution using distribution alignment. If the target class distribution is known, it can be used directly.
In the general case where it is not known, we use the source class distribution instead. We then select the entries of the batch for which the rectified probabilities of the weakly augmented target image predictions are above a user-defined confidence threshold. A pseudo-label is then made for these outputs by selecting the most confident class, and these pseudo-labels are used with a standard cross-entropy loss applied to the logits of the strongly augmented images.

Augmentation. For a dataset $D \in \{SL, TU\}$, we augment each image batch $X_D$ fed into AdaMatch twice, once with a weak augmentation and once with a strong augmentation, using the same types of weak and strong augmentations as (Berthelot et al., 2020). This forms a pair of batches $X_{D,w}$ and $X_{D,s}$ respectively, which we denote together as $X^{\text{aug}}_D = \{X_{D,w}, X_{D,s}\}$. From these pairs of batches, we then compute logits $Z'_{SL}$, $Z''_{SL}$ and $Z_{TU}$ as follows:

$$\{Z'_{SL}, Z_{TU}\} = f(\{X^{\text{aug}}_{SL}, X^{\text{aug}}_{TU}\}; \theta) \quad (1)$$
$$Z''_{SL} = f(X^{\text{aug}}_{SL}; \theta) \quad (2)$$

That is, we compute logits by calling the model twice for both the strongly and weakly augmented images. The first time we pass both source and target inputs together in the same batch so the batch normalization statistics are shared, and the second time we pass only the source labeled data. We only update batch normalization statistics when computing $\{Z'_{SL}, Z_{TU}\} = f(\{X^{\text{aug}}_{SL}, X^{\text{aug}}_{TU}\}; \theta)$ to avoid double counting the source labeled data. Note that without batch normalization we would have $Z'_{SL} \equiv Z''_{SL}$, but batch normalization may make them slightly different.

Random logit interpolation. The role of random logit interpolation is to randomly combine the source logits computed with the joint source-and-target batch statistics with those computed with the source-only batch statistics, which has the effect of producing predictions that are representative of both domains.
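The two model calls in Eqs. (1)-(2) and the interpolation they feed can be sketched with a toy batch-normalized model. The linear head, the batch sizes, and the +2 domain shift below are illustrative assumptions, not the paper's actual network or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(X, W):
    """Toy model: batch normalization over the incoming batch, then a linear
    head. The normalization statistics depend on which batch X contains."""
    mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-6
    return ((X - mu) / sd) @ W

n_sl, n_tu, d, k = 4, 4, 5, 3          # illustrative sizes
W = rng.normal(size=(d, k))
X_sl = rng.normal(size=(n_sl, d))      # source batch (weak + strong aug)
X_tu = rng.normal(size=(n_tu, d)) + 2  # target batch, from a shifted domain

# Eq. (1): source and target share one batch, so BN statistics mix domains.
Z_joint = f(np.concatenate([X_sl, X_tu]), W)
Z1_sl, Z_tu = Z_joint[:n_sl], Z_joint[n_sl:]
# Eq. (2): source only, so BN statistics come from the source domain alone.
Z2_sl = f(X_sl, W)
assert not np.allclose(Z1_sl, Z2_sl)   # batch norm makes the two calls differ

# Random logit interpolation: each individual logit gets its own factor.
lam = rng.uniform(size=Z1_sl.shape)
Z_sl = lam * Z1_sl + (1 - lam) * Z2_sl
assert np.all(Z_sl >= np.minimum(Z1_sl, Z2_sl) - 1e-12)
assert np.all(Z_sl <= np.maximum(Z1_sl, Z2_sl) + 1e-12)
```

The shift applied to `X_tu` is what makes the joint batch statistics differ from the source-only statistics; without batch normalization the two calls would return identical source logits.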
More precisely, during training we obtain logits $Z_{SL}$ by randomly interpolating the logits $Z'_{SL}$ and $Z''_{SL}$ computed with different batch statistics:

$$Z_{SL} = \lambda \cdot Z'_{SL} + (1 - \lambda) \cdot Z''_{SL} \quad (3)$$

where we sample $\lambda \sim \mathcal{U}^{n_{SL} \times k}(0, 1)$. Note that each individual logit gets its own random factor. Our underlying goal here is to minimize the loss for every point between $Z'_{SL}$ and $Z''_{SL}$, which can be accomplished either by 1) having $Z'_{SL}$ and $Z''_{SL}$ be equal to each other, or 2) having the whole line between $Z'_{SL}$ and $Z''_{SL}$ be a minimum. Rather than picking one of the two, we formulate the problem as minimizing the loss for all connecting points, which gives the model the freedom to find the best possible solution. We then use random logit interpolation as a cost-effective implementation of this loss formulation: by sampling a random point along the line between $Z'_{SL}$ and $Z''_{SL}$ at each batch, over time we implicitly allow the model to minimize the loss for all points on the line.

Distribution alignment. Distribution alignment (Berthelot et al., 2020) can be seen as an additional form of model regularization that helps constrain the distribution of the class predictions to be more aligned with the true distribution. Without it, the classifier could simply predict the most prevalent class or exhibit other failure modes. Ideally, if the target label distribution is known, we would use it directly. However, when the target label distribution is unknown, we approximate it using the only available distribution: the source label distribution. A limitation of this approach is that the more the source label distribution differs from the target distribution, the more incorrect the approximation will be, which may cause the model performance to degrade. In practice, however, we find that aligning the target pseudo-labels to match the source label distribution helps significantly. Unlike ReMixMatch (Berthelot et al.
, 2020), we estimate the source label distribution from the output of the model rather than using the true labels. We make this change since the model may not be capable of matching the ground-truth source label distribution (particularly when source accuracy is low), but matching the source output distribution is a more attainable goal. To implement this, we first extract the logits for weakly and strongly augmented samples from the batch (by indexing):

$$Z_{SL} = \{Z_{SL,w}, Z_{SL,s}\} \qquad Z_{TU} = \{Z_{TU,w}, Z_{TU,s}\} \quad (4)$$

Then, we compute pseudo-labels for the labeled source and unlabeled target samples:

$$\hat{Y}_{SL,w} = \mathrm{softmax}(Z_{SL,w}) \in \mathbb{R}^{n_{SL} \times k} \qquad \hat{Y}_{TU,w} = \mathrm{softmax}(Z_{TU,w}) \in \mathbb{R}^{n_{TU} \times k} \quad (5)$$

Using distribution alignment, we rectify the target unlabeled pseudo-labels by multiplying them by the ratio of the expected value of the weakly augmented source labels $\mathbb{E}[\hat{Y}_{SL,w}] \in \mathbb{R}^k$ to the expected value of the target labels $\mathbb{E}[\hat{Y}_{TU,w}] \in \mathbb{R}^k$, obtaining the final pseudo-labels $\tilde{Y}_{TU,w} \in \mathbb{R}^{n_{TU} \times k}$:

$$\tilde{Y}_{TU,w} = \mathrm{normalize}\!\left(\hat{Y}_{TU,w} \, \frac{\mathbb{E}[\hat{Y}_{SL,w}]}{\mathbb{E}[\hat{Y}_{TU,w}]}\right) \quad (6)$$

where normalize ensures that each distribution still sums to 1. As can be seen, $\mathbb{E}[\tilde{Y}_{TU,w}] = \mathbb{E}[\hat{Y}_{SL,w}]$, which confirms that distribution alignment makes the target pseudo-labels follow the source label distribution. If the target label distribution is known, one can simply replace the term $\mathbb{E}[\hat{Y}_{SL,w}]$ with it in the formula above.

Relative confidence threshold. A confidence threshold is typically used to select which predicted labels are confident enough to be used as pseudo-labels (Lee, 2013). However, since machine learning models are poorly calibrated (Guo et al., 2017), especially on out-of-distribution data (Ovadia et al., 2019), the confidence varies from dataset to dataset depending on the ability of the model to learn its task.
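The distribution alignment step in Eqs. (5)-(6) is a few lines of array arithmetic. This sketch uses made-up batch sizes and random logits, and checks that the rectified pseudo-labels remain valid distributions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
k = 3                                      # illustrative class count
Y_sl_w = softmax(rng.normal(size=(8, k)))  # source pseudo-labels, Eq. (5)
Y_tu_w = softmax(rng.normal(size=(6, k)))  # target pseudo-labels, Eq. (5)

def distribution_alignment(Y_tu_w, Y_sl_w):
    """Eq. (6): rectify target pseudo-labels toward the source output
    distribution, then renormalize each row so it sums to 1."""
    ratio = Y_sl_w.mean(axis=0) / Y_tu_w.mean(axis=0)  # E[Y_SL,w] / E[Y_TU,w]
    rectified = Y_tu_w * ratio
    return rectified / rectified.sum(axis=1, keepdims=True)

Y_tilde = distribution_alignment(Y_tu_w, Y_sl_w)
assert Y_tilde.shape == Y_tu_w.shape
assert np.allclose(Y_tilde.sum(axis=1), 1.0)  # still valid distributions
```

To use a known target label distribution instead, one would replace `Y_sl_w.mean(axis=0)` with that distribution, mirroring the substitution described in the text.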
To address this issue, we introduce a relative confidence threshold, which adjusts a user-provided confidence threshold relative to the confidence level of the classifier on the weakly augmented source data. Specifically, we define the relative confidence threshold $c_\tau$ as the mean confidence of the top-1 prediction on the weakly augmented source data multiplied by a user-provided threshold $\tau$:

$$c_\tau = \frac{\tau}{n_{SL}} \sum_{i=1}^{n_{SL}} \max_{j \in [1..k]} \hat{Y}^{(i,j)}_{SL,w} \quad (7)$$

We then compute a binary $\mathrm{mask} \in \{0, 1\}^{n_{TU}}$ by thresholding the weakly augmented target predictions with the relative confidence threshold $c_\tau$:

$$\mathrm{mask}^{(i)} = \max_{j \in [1..k]} \left( \tilde{Y}^{(i,j)}_{TU,w} \right) \geq c_\tau \quad (8)$$

Loss function. The loss $L(\theta)$ sums $L_{\text{source}}(\theta)$ for the source and $L_{\text{target}}(\theta)$ for the target:

$$L_{\text{source}}(\theta) = \frac{1}{n_{SL}} \sum_{i=1}^{n_{SL}} H\!\left(Y^{(i)}_{SL}, Z^{(i)}_{SL,w}\right) + \frac{1}{n_{SL}} \sum_{i=1}^{n_{SL}} H\!\left(Y^{(i)}_{SL}, Z^{(i)}_{SL,s}\right) \quad (9)$$

$$L_{\text{target}}(\theta) = \frac{1}{n_{TU}} \sum_{i=1}^{n_{TU}} H\!\left(\mathrm{stop\_gradient}\!\left(\tilde{Y}^{(i)}_{TU,w}\right), Z^{(i)}_{TU,s}\right) \cdot \mathrm{mask}^{(i)} \quad (10)$$

$$L(\theta) = L_{\text{source}}(\theta) + \mu(t)\, L_{\text{target}}(\theta) \quad (11)$$

where $H(p, q) = -\sum p(x) \log q(x)$ is the cross-entropy loss and stop_gradient is a function that prevents the gradient from back-propagating through its argument. Preventing gradient back-propagation on guessed labels is a standard practice in SSL works that favors convergence. $\mu(t)$ is a warmup function that controls the unlabeled loss weight at every step of training. In practice we use $\mu(t) = 1/2 - \cos(\min(\pi, 2\pi t / T))/2$, where $T$ is the total number of training steps. This function rises smoothly from 0 to 1 over the first half of training and remains at 1 for the second half. As can be noted, $L_{\text{source}}(\theta)$ is the cross-entropy loss typically used in fully supervised learning, with the nuance that we apply it twice: once on weakly augmented samples and once on strongly augmented ones.
Similarly, $L_{\text{target}}(\theta)$ is the masked cross-entropy loss, where entries for which the confidence is less than $c_\tau$ are zeroed. This loss term is exactly the same as in FixMatch (Sohn et al., 2020).
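Putting Eqs. (7)-(11) together, the thresholding, masking, and warmup reduce to a few lines of numpy. The batch sizes, the value of τ, and the toy inputs below are illustrative assumptions (the rectification of the target pseudo-labels is omitted here for brevity), not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sl, n_tu, k, tau = 8, 6, 3, 0.9     # illustrative sizes and threshold

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, z):
    """H(p, q) per example, with q = softmax(z)."""
    return -(p * np.log(softmax(z))).sum(axis=1)

Y_sl_w = softmax(rng.normal(size=(n_sl, k)))  # source predictions (weak aug)
Y_tu_w = softmax(rng.normal(size=(n_tu, k)))  # target pseudo-labels (weak aug)
Z_tu_s = rng.normal(size=(n_tu, k))           # target logits (strong aug)

# Eq. (7): scale the user threshold by the mean top-1 source confidence.
c_tau = tau * Y_sl_w.max(axis=1).mean()
# Eq. (8): binary mask over target examples.
mask = (Y_tu_w.max(axis=1) >= c_tau).astype(float)

# Eq. (10): masked target loss (Y_tu_w plays the stop_gradient'ed pseudo-label).
L_target = (cross_entropy(Y_tu_w, Z_tu_s) * mask).mean()
assert L_target >= 0

# Warmup mu(t): 0 at t = 0, ramping to 1 at t = T/2, then flat.
def mu(t, T):
    return 0.5 - np.cos(min(np.pi, 2 * np.pi * t / T)) / 2

T = 1000
assert abs(mu(0, T)) < 1e-9
assert abs(mu(T // 2, T) - 1) < 1e-9 and abs(mu(T, T) - 1) < 1e-9
```

Note how a low mean source confidence automatically lowers `c_tau`, so more target examples survive the mask early in training than a fixed absolute threshold would allow.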
This paper presents a unified solution for unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and semi-supervised domain adaptation (SSDA). The authors extend the state-of-the-art SSL method FixMatch to handle domain adaptation settings where the target data stem from a different distribution than the source data. Experimental results show that the proposed method performs on par with or significantly better than the respective state-of-the-art methods in UDA, SSL, and SSDA.
SP:ac43f7aa79c0497098b83b4ffb239f09781c9797
AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation
1 INTRODUCTION . Since the inception of domain adaptation and knowledge transfer , researchers have been well aware of various configurations of labeled or unlabeled data and assumptions on domain shift ( Csurka , 2017 ) . Unsupervised domain adaptation ( UDA ) , semi-supervised learning ( SSL ) , and semi-supervised domain adaptation ( SSDA ) all use different configurations of labeled and unlabeled data , with the major distinction being that , unlike SSL , UDA and SSDA assume a domain shift between the labeled and unlabeled data ( see Table 1 ) . However , currently the fields of SSL and UDA/SSDA are fragmented : different techniques are developed in isolation for each setting , and there are only a handful of algorithms that are evaluated on both ( French et al. , 2018 ) . Techniques that leverage unlabeled data are of utmost importance in practical applications of machine learning because labeling data is expensive . It is also the case that in practice the available unlabeled data will have a distribution shift . Addressing this distribution shift is necessary because neural networks are not robust ( Recht et al. , 2019a ; Biggio & Roli , 2018 ; Szegedy et al. , 2013 ; Hendrycks & Dietterich , 2019 ; Azulay & Weiss , 2018 ; Shankar et al. , 2019 ; Gu et al. , 2019 ; Taori et al. , 2020 ) to even slight differences between the training distribution and test distribution . Although there are techniques to improve out-of-distribution robustness assuming no access to unlabeled data from the target domain ( Hendrycks et al. , 2020 ; Zhang , 2019 ; Engstrom et al. , 2019 ; Geirhos et al. , 2018 ; Yang et al. , 2019 ; Zhang et al. , 2019 ) , it is common in practice to have access to unlabeled data in a shifted domain ( i.e . the UDA or SSDA setting ) , and leveraging this unlabeled data allows for much higher accuracy . Moreover , while SSDA ( Donahue et al. , 2013 ; Yao et al. , 2015 ; Ao et al. , 2017 ; Saito et al. 
, 2019 ) has received less attention than both SSL and UDA , we believe it describes a realistic scenario in practice and should be equally considered . In this work , we introduce AdaMatch , a unified solution designed to solve the tasks of UDA , SSL , and SSDA using the same set of hyperparameters regardless of the dataset or task . AdaMatch extends FixMatch ( Sohn et al. , 2020 ) by ( 1 ) addressing the distribution shift between source and target domains present in the batch norm statistics , ( 2 ) adjusting the pseudo-label confidence threshold on-the-fly , and ( 3 ) using a modified version of distribution alignment from Berthelot et al . ( 2020 ) . AdaMatch sets a new state-of-the-art accuracy of 28.7 % for UDA without pre-training and 33.4 % with pre-training on DomainNet , an increase of 11.1 % when compared on the same code base . With just one label per class on the target dataset , AdaMatch is more data efficient than other method , achieving a gain of 6.1 % over UDA and 13.6 % with 5 labels . We additionally promote democratic research by reporting results on a smaller 64 × 64 DomainNet . This results in a minimal drop in accuracy compared to the full resolution , and compared to the practice of sub-selecting dataset pairs , does not bias the results towards easier or harder datasets . Finally , we perform an extensive ablation analysis to understand the importance of each improvement and modification that distinguishes AdaMatch from prior semi-supervised learning methods . 2 RELATED WORK . Unsupervised Domain Adaptation ( UDA ) . The UDA research area focuses on studying the performance of models trained on a labeled source domain and an unlabeled target domain from a differing distribution with the goal of obtaining high accuracy on the target domain . Inspired by the theoretical analysis of domain adaptation ( Ben-David et al. 
, 2010 ) , a major focus of research in UDA has been reducing the discrepancy of representations between domains , so that a classifier that is learned on the source features works well on the target features . UDA methods can be categorized by the technique they use to measure this discrepancy . For example , ( Long et al. , 2013 ; Tzeng et al. , 2014 ; 2015 ) use the maximum mean discrepancy ( Gretton et al. , 2012 ) , ( Sun & Saenko , 2016 ) use correlation alignment across domains , and domain adversarial neural networks ( Ajakan et al. , 2014 ; Ganin et al. , 2016 ; Bousmalis et al. , 2017 ; Saito et al. , 2018 ) measure the domain discrepancy using a discriminator network . Maximum classifier discrepancy ( Saito et al. , 2018 ) ( MCD ) measures the domain discrepancy via multiple task classifiers and is shown to achieve state-of-the-art performance . Semi-Supervised Learning ( SSL ) . In the SSL setting , a portion of the training dataset is labeled and the remaining portion is unlabeled . SSL has seen great progress in recent years , including temporal ensemble ( Laine & Aila , 2017 ) , mean teacher ( Tarvainen & Valpola , 2017 ) , MixMatch ( Berthelot et al. , 2019 ) , ReMixMatch ( Berthelot et al. , 2020 ) , FixMatch ( Sohn et al. , 2020 ) , unsupervised data augmentation ( Xie et al. , 2019 ) , to name a few . While NoisyStudent ( Xie et al. , 2020 ) , an SSL algorithm , uses labeled data from ImageNet and unlabeled data from JFT in training , they do not leverage this distribution shift during training , and they only evaluate on the source domain ( i.e. , ImageNet ) , not the target domain . While there is no technical barrier to applying SSL methods to UDA , only a few SSL methods have been applied to solve UDA problems ; for example , on top of the discrepancy reduction techniques of UDA , several works ( French et al. , 2018 ; Long et al. , 2018 ; Saito et al. , 2019 ; Tran et al. 
, 2019 ) propose to combine SSL techniques such as entropy minimization ( Grandvalet & Bengio , 2005 ) or pseudo-labeling ( Lee , 2013 ) . Semi-Supervised Domain Adaptation ( SSDA ) has been studied in several settings , including vision ( Donahue et al. , 2013 ; Yao et al. , 2015 ; Ao et al. , 2017 ; Saito et al. , 2019 ) and natural language processing ( Jiang & Zhai , 2007 ; Daumé III et al. , 2010 ; Guo & Xiao , 2012 ) . Since SSDA assumes access to labeled data from multiple domains , early works have used separate models for each domain and regularized them with constraints ( Donahue et al. , 2013 ; Yao et al. , 2015 ; Ao et al. , 2017 ; Daumé III et al. , 2010 ; Guo & Xiao , 2012 ) . However , such methods are difficult to adapt to the UDA setting where labeled data is only available in a single domain . One recent exception is minimax entropy ( MME ) regularization ( Saito et al. , 2019 ) , which can work in both the UDA and SSDA settings . However , unlike AdaMatch , MME requires a pre-trained network to work well . Transfer learning is used to boost accuracy on small datasets by initializing model parameters with pre-trained weights first learned on a separate , larger dataset , which can compensate for the limited amount of labeled source data and boost the overall performance ( Recht et al. , 2019b ; Kolesnikov et al. , 2019 ) . For example , standard experimental protocols on several UDA benchmarks , including Office-31 ( Saenko et al. , 2010 ) , PACS ( Li et al. , 2017 ) , and DomainNet ( Peng et al. , 2019 ) , use ImageNet-pretrained models to initialize model parameters . Though useful in some practical cases , it may not be the most general protocol to evaluate the advancement of UDA algorithms , especially in situations where no pre-training datasets exist , for example for images with an arbitrary number of channels , or for domains other than vision .
Thus , in this work , we mainly focus on evaluating methods under a non-transfer learning setting , which we consider to be more general . Although we also achieve state-of-the-art results with transfer learning , we only present them for historical reasons and to illustrate that our method can use that setting too . 3 ADAMATCH . We now introduce AdaMatch , a new algorithm inspired by modern semi-supervised learning techniques , aimed at solving UDA , SSL , and SSDA . As typical in SSL , AdaMatch takes both an unlabeled and a labeled dataset as input . We assume that the labeled data is drawn from a source domain while the unlabeled data is drawn from a target domain ( for the SSL task , these domains are the same ) . Notation . We use capital letters $X , Y , Z$ to denote minibatches of examples , labels and logits . Specifically , $X_{SL} \subset \mathbb{R}^{n_{SL} \times d}$ and $Y_{SL} \subset \{ 0 , 1 \}^{n_{SL} \times k}$ denote the minibatch of source images and labels , respectively . Similarly , the minibatch of unlabeled target images is $X_{TU} \subset \mathbb{R}^{n_{TU} \times d}$ . Here , $k$ is the number of classes and $d$ is the input dimension ( for images $d = h \cdot w \cdot c$ , where $h$ is height , $w$ is width , and $c$ is the number of channels ) . The minibatch size for the labeled data is $n_{SL}$ and the minibatch size of the unlabeled images is $n_{TU}$ . Additionally , we use $Y^{(i)}$ to refer to the $i$-th row of $Y$ , and $Y^{(i,j)}$ to refer to its $( i , j )$-th element . The model $f : \mathbb{R}^d \to \mathbb{R}^k$ takes images as input and outputs logits for each of the $k$ classes . Importantly , the source and target domains correspond to the same classification task , so the number of classes $k$ and the image dimension $d$ are the same for both domains . 3.1 METHOD DESCRIPTION . AdaMatch introduces three new techniques to account for differences between the source and target distributions – random logit interpolation , a relative confidence threshold , and a modified distribution alignment from ReMixMatch ( Berthelot et al. , 2020 ) – but builds upon the algorithmic backbone of FixMatch ( Sohn et al.
, 2020 ) . We first provide a high-level overview of the algorithm and then discuss the implementation details of the various components . Overview . A high-level depiction of AdaMatch is in Figure 1 . Two augmentations are made for each image : a weak and a strong one , with the intent to make the class prediction harder on the strongly augmented image . Next , we obtain logits by running two batches through the model : a batch of only the source images and a batch composed of both the source and target images . Each of the resulting batches of logits is influenced by its respective batch norm statistics , i.e. the source-only batch is influenced only by the source data batch norm statistics , while the batch that combines source and target is influenced by the batch norm statistics of both domains . Two loss terms are then computed : • The source loss term is responsible for predicting correct source labels and for aligning source and target logit domains . We first combine logits for the source images using random logit interpolation , which encourages the model to produce the same label for the hyperspace connecting source logits obtained from 1 ) only source examples and 2 ) a combination of source and target examples . In practice , this creates an implicit constraint to align the source and target domains in logit space . The newly obtained source logits are then used to compute the cross-entropy loss for the source data . • The target loss term is responsible for predicting the correct target labels and for aligning the target predictions to a desired class distribution . Since we don't assume access to labels for the target images , we create a pseudo-label for these images as follows . First , we rectify the class distribution obtained from weakly augmented target images to a desired class distribution using distribution alignment . If the target class distribution is known , it can be used directly .
In the general case where it is not known , we use the source class distribution instead . We then select entries of the batch for which the rectified probabilities of the weakly augmented target image predictions are above a user-defined confidence threshold . A pseudo-label is then made for these outputs by selecting the most confident class , and these pseudo-labels are used with a standard cross-entropy loss applied to the logits of the strongly augmented images . Augmentation . For a dataset $D \in \{ SL , TU \}$ , we augment each image batch $X_D$ fed into AdaMatch twice , once with a weak augmentation and once with a strong augmentation , using the same types of weak and strong augmentations as ( Berthelot et al. , 2020 ) . This forms a pair of batches $X_{D,w}$ and $X_{D,s}$ respectively , which we denote together as $X^{\text{aug}}_D = \{ X_{D,w} , X_{D,s} \}$ . From these pairs of batches , we then compute logits $Z'_{SL}$ , $Z''_{SL}$ and $Z_{TU}$ as follows : $\{ Z'_{SL} , Z_{TU} \} = f ( \{ X^{\text{aug}}_{SL} , X^{\text{aug}}_{TU} \} ; \theta )$ ( 1 ) and $Z''_{SL} = f ( X^{\text{aug}}_{SL} ; \theta )$ ( 2 ) . That is , we compute logits by calling the model twice for both the strongly and weakly augmented images . The first time we pass both source and target inputs together in the same batch so the batch normalization statistics are shared , and the second time we only pass the source labeled data . We only update batch normalization statistics when computing $\{ Z'_{SL} , Z_{TU} \} = f ( \{ X^{\text{aug}}_{SL} , X^{\text{aug}}_{TU} \} ; \theta )$ to avoid double counting the source labeled data . Note that without batch normalization we would have $Z'_{SL} \equiv Z''_{SL}$ , but batch normalization may make them slightly different . Random logit interpolation . The role of random logit interpolation is to randomly combine the logits computed with the joint batch statistics of the source and target domains with the logits computed with the batch statistics of the source domain alone , which has the effect of producing logits that are representative of both domains .
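The effect of the shared batch normalization statistics, and the interpolation AdaMatch applies to the two resulting sets of source logits, can be illustrated with a toy NumPy sketch. This is not the actual network: the normalization function below is a simplified stand-in for batch norm layers, and all values and names are made up for illustration.

```python
import numpy as np

def normalize_with(x, stats_from, eps=1e-5):
    # Stand-in for batch norm: normalize x using the mean/variance of
    # `stats_from`, mimicking how logits depend on the batch statistics.
    return (x - stats_from.mean(axis=0)) / np.sqrt(stats_from.var(axis=0) + eps)

rng = np.random.default_rng(0)
x_sl = rng.normal(0.0, 1.0, size=(8, 4))   # toy source "features"
x_tu = rng.normal(2.0, 1.5, size=(8, 4))   # domain-shifted target "features"

# First pass: source normalized with combined source+target statistics (Z').
z_prime = normalize_with(x_sl, np.concatenate([x_sl, x_tu]))
# Second pass: source normalized with source-only statistics (Z'').
z_double_prime = normalize_with(x_sl, x_sl)
gap = np.abs(z_prime - z_double_prime).max()  # nonzero under domain shift

# Random logit interpolation: one lambda ~ U(0, 1) per individual logit entry.
lam = rng.uniform(size=z_prime.shape)
z_sl = lam * z_prime + (1.0 - lam) * z_double_prime
on_segment = np.all((z_sl >= np.minimum(z_prime, z_double_prime)) &
                    (z_sl <= np.maximum(z_prime, z_double_prime)))
```

Because the target features are shifted, the two normalizations disagree, and each interpolated logit lies on the segment between its two versions.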
More precisely , during training , we obtain logits $Z_{SL}$ by randomly interpolating the logits $Z'_{SL}$ and $Z''_{SL}$ computed with different batch statistics : $Z_{SL} = \lambda \cdot Z'_{SL} + ( 1 - \lambda ) \cdot Z''_{SL}$ ( 3 ) , where we sample $\lambda \sim \mathcal{U}^{n_{SL} \times k} ( 0 , 1 )$ . Note that each individual logit gets its own random interpolation factor . Our underlying goal here is to minimize the loss for every point between $Z'_{SL}$ and $Z''_{SL}$ , which can be accomplished by either 1 ) having $Z'_{SL}$ and $Z''_{SL}$ be equal to each other , or 2 ) having the whole line between $Z'_{SL}$ and $Z''_{SL}$ be a set of minima . Rather than picking one of the two ways , we formulate the problem as minimizing the loss for all connecting points , which gives the model the freedom to find the best possible solution . We then use random logit interpolation as a cost-effective implementation of this loss formulation : by sampling a random point along the line between $Z'_{SL}$ and $Z''_{SL}$ at each batch , over time , we implicitly allow the model to minimize the loss for all points on the line . Distribution alignment . Distribution alignment ( Berthelot et al. , 2020 ) can be seen as an additional form of model regularization that helps constrain the distribution of the class predictions to be more aligned with the true distribution . Without it , the classifier could just predict the most prevalent class or exhibit other failure modes . Ideally , if the target label distribution is known , we would use it directly . However , when the target label distribution is unknown , we approximate it using the only available distribution – the source label distribution . A limitation of this approach is that the more the source label distribution differs from the target distribution , the more incorrect the approximation will be , which may cause the model performance to degrade . However , in practice , we find that aligning the target pseudo-labels to match the source label distribution helps significantly . Unlike ReMixMatch ( Berthelot et al.
, 2020 ) , we estimate the source label distribution from the output of the model rather than using the true labels . We make this change since the model may not be capable of matching the ground-truth source label distribution ( particularly when source accuracy is low ) , but matching the source output distribution is a more attainable goal . To implement this , we first extract the logits for weakly and strongly augmented samples from the batch ( by indexing ) : $Z_{SL} = \{ Z_{SL,w} , Z_{SL,s} \}$ , $Z_{TU} = \{ Z_{TU,w} , Z_{TU,s} \}$ ( 4 ) . Then , we compute predicted class distributions for the labeled source and unlabeled target data : $\hat{Y}_{SL,w} = \mathrm{softmax} ( Z_{SL,w} ) \in \mathbb{R}^{n_{SL} \times k}$ , $\hat{Y}_{TU,w} = \mathrm{softmax} ( Z_{TU,w} ) \in \mathbb{R}^{n_{TU} \times k}$ ( 5 ) . Using distribution alignment , we rectify the target unlabeled pseudo-labels by multiplying them by the ratio of the expected value of the weakly augmented source predictions $\mathbb{E} [ \hat{Y}_{SL,w} ] \in \mathbb{R}^k$ to the expected value of the target predictions $\mathbb{E} [ \hat{Y}_{TU,w} ] \in \mathbb{R}^k$ , obtaining the final pseudo-labels $\tilde{Y}_{TU,w} \in \mathbb{R}^{n_{TU} \times k}$ : $\tilde{Y}_{TU,w} = \mathrm{normalize} ( \hat{Y}_{TU,w} \cdot \mathbb{E} [ \hat{Y}_{SL,w} ] / \mathbb{E} [ \hat{Y}_{TU,w} ] )$ ( 6 ) , where $\mathrm{normalize}$ ensures that each row still sums to 1 . As can be seen , $\mathbb{E} [ \tilde{Y}_{TU,w} ] = \mathbb{E} [ \hat{Y}_{SL,w} ]$ , which confirms that distribution alignment makes the target pseudo-labels follow the source label distribution . If the target label distribution is known , one can simply replace the term $\mathbb{E} [ \hat{Y}_{SL,w} ]$ with it in the formula above . Relative confidence threshold . A confidence threshold is typically used to select which predicted labels are confident enough to be used as pseudo-labels ( Lee , 2013 ) . However , since machine learning models are poorly calibrated ( Guo et al. , 2017 ) , especially on out-of-distribution data ( Ovadia et al. , 2019 ) , the confidence varies from dataset to dataset depending on the ability of the model to learn its task .
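The distribution alignment rectification of Eq. ( 6 ) can be sketched as follows; a NumPy implementation stands in for the actual training framework, and the batch sizes and class counts are made up for illustration.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax over logits.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def distribution_alignment(p_tu, p_sl):
    # Rectify target pseudo-label distributions toward the (estimated)
    # source label distribution, then renormalize each row to sum to 1.
    rectified = p_tu * (p_sl.mean(axis=0) / p_tu.mean(axis=0))
    return rectified / rectified.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
p_sl = softmax(rng.normal(size=(16, 5)))  # weakly augmented source predictions
p_tu = softmax(rng.normal(size=(16, 5)))  # weakly augmented target predictions
p_aligned = distribution_alignment(p_tu, p_sl)
```

Note that the source label distribution is estimated from model outputs (`p_sl.mean(axis=0)`), matching the departure from ReMixMatch described above.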
To address this issue , we introduce a relative confidence threshold which adjusts a user-provided confidence threshold relative to the confidence level of the classifier on the weakly augmented source data . Specifically , we define the relative confidence threshold $c_\tau$ as the mean confidence of the top-1 prediction on the weakly augmented source data multiplied by a user-provided threshold $\tau$ : $c_\tau = \frac{\tau}{n_{SL}} \sum_{i=1}^{n_{SL}} \max_{j \in [ 1 .. k ]} \hat{Y}^{(i,j)}_{SL,w}$ ( 7 ) . We then compute a binary $\mathrm{mask} \in \{ 0 , 1 \}^{n_{TU}}$ by thresholding the rectified predictions on the weakly augmented target images with the relative confidence threshold $c_\tau$ : $\mathrm{mask}^{(i)} = \max_{j \in [ 1 .. k ]} \tilde{Y}^{(i,j)}_{TU,w} \geq c_\tau$ ( 8 ) . Loss function . The loss $\mathcal{L} ( \theta )$ sums $\mathcal{L}_{\text{source}} ( \theta )$ for the source and $\mathcal{L}_{\text{target}} ( \theta )$ for the target : $\mathcal{L}_{\text{source}} ( \theta ) = \frac{1}{n_{SL}} \sum_{i=1}^{n_{SL}} H ( Y^{(i)}_{SL} , Z^{(i)}_{SL,w} ) + \frac{1}{n_{SL}} \sum_{i=1}^{n_{SL}} H ( Y^{(i)}_{SL} , Z^{(i)}_{SL,s} )$ ( 9 ) , $\mathcal{L}_{\text{target}} ( \theta ) = \frac{1}{n_{TU}} \sum_{i=1}^{n_{TU}} H ( \mathrm{stop\_gradient} ( \tilde{Y}^{(i)}_{TU,w} ) , Z^{(i)}_{TU,s} ) \cdot \mathrm{mask}^{(i)}$ ( 10 ) , $\mathcal{L} ( \theta ) = \mathcal{L}_{\text{source}} ( \theta ) + \mu ( t ) \mathcal{L}_{\text{target}} ( \theta )$ ( 11 ) , where $H ( p , q ) = - \sum_x p ( x ) \log q ( x )$ is the cross-entropy loss and $\mathrm{stop\_gradient}$ is a function that prevents the gradient from back-propagating through its argument . Preventing gradient back-propagation through guessed labels is a standard practice in SSL works that favors convergence . $\mu ( t )$ is a warmup function that controls the unlabeled loss weight at every step of the training . In practice we use $\mu ( t ) = 1/2 - \cos ( \min ( \pi , 2 \pi t / T ) ) / 2$ , where $T$ is the total number of training steps . This particular function rises smoothly from 0 to 1 over the first half of the training and remains at 1 for the second half . As can be noted , $\mathcal{L}_{\text{source}} ( \theta )$ is the cross-entropy loss typically used in fully supervised learning , with the nuance that we apply it twice : once on weakly augmented samples and once on strongly augmented ones .
Similarly , $\mathcal{L}_{\text{target}} ( \theta )$ is the masked cross-entropy loss , where entries for which the confidence is less than $c_\tau$ are zeroed . This loss term is exactly the same as in FixMatch ( Sohn et al. , 2020 ) .
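The relative confidence threshold and mask of Eqs. ( 7 )-( 8 ), together with the warmup weight $\mu(t)$, can be sketched numerically. This is an illustrative NumPy stand-in, not the authors' implementation; the function names and example sizes are hypothetical.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax over logits.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def relative_confidence_mask(p_sl_weak, p_tu_weak, tau=0.9):
    # c_tau: user threshold tau scaled by the mean top-1 confidence on the
    # weakly augmented source predictions; the mask keeps target rows whose
    # (rectified) top-1 confidence reaches c_tau.
    c_tau = tau * p_sl_weak.max(axis=1).mean()
    return c_tau, (p_tu_weak.max(axis=1) >= c_tau).astype(float)

def mu(t, total_steps):
    # Warmup 1/2 - cos(min(pi, 2*pi*t/T))/2: rises smoothly from 0 to 1
    # over the first half of training, then stays at 1.
    return 0.5 - np.cos(min(np.pi, 2.0 * np.pi * t / total_steps)) / 2.0

rng = np.random.default_rng(0)
p_sl = softmax(3.0 * rng.normal(size=(32, 10)))  # fairly confident source
p_tu = softmax(rng.normal(size=(32, 10)))        # less confident target
c_tau, mask = relative_confidence_mask(p_sl, p_tu)
T = 1000
warmup = [mu(t, T) for t in (0, T // 4, T // 2, T)]
```

Because the threshold is relative, a poorly calibrated (uniformly less confident) source model automatically lowers the bar for accepting target pseudo-labels.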
The paper extends FixMatch into a unified approach for semi-supervised learning, unsupervised domain adaptation, and semi-supervised domain adaptation. The proposal learns representations with a cross-entropy loss between labels and logits for strongly and weakly augmented versions of the source data, while the target data helps with regularization. The proposal uses a random mixture of the logits of the two forward passes, and normalizes the distribution of the pseudo-labels with the ratio of the expected predictions on the source and target data. The contribution is the combination of aligning the label distributions by normalizing them through the ratio between the source and target predicted labels, using a relative confidence threshold calibrated on the source predictions, and random logit interpolation to regularize the representations. This combination lets the method outperform the reported methods on the three tasks on two datasets.
You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks
Hypergraphs are used to model higher-order interactions amongst agents and there exist many practically relevant instances of hypergraph datasets . To enable the efficient processing of hypergraph data , several hypergraph neural network platforms have been proposed for learning hypergraph properties and structure , with a special focus on node classification tasks . However , almost all existing methods use heuristic propagation rules and offer suboptimal performance on benchmarking datasets . We propose AllSet , a new hypergraph neural network paradigm that represents a highly general framework for ( hyper ) graph neural networks and for the first time implements hypergraph neural network layers as compositions of two multiset functions that can be efficiently learned for each task and each dataset . The proposed AllSet framework also for the first time integrates Deep Sets and Set Transformers with hypergraph neural networks for the purpose of learning multiset functions and therefore allows for significant modeling flexibility and high expressive power . To evaluate the performance of AllSet , we conduct the most extensive experiments to date involving ten known benchmarking datasets and three newly curated datasets that represent significant challenges for hypergraph node classification . The results demonstrate that our method has the unique ability to either match or outperform all other hypergraph neural networks across the tested datasets : As an example , the performance improvements over existing methods and a new method based on heterogeneous graph neural networks are close to 4 % on the Yelp and Zoo datasets , and 3 % on the Walmart dataset . 1 INTRODUCTION . 
Graph-centered machine learning , and especially graph neural networks ( GNNs ) , have attracted great interest in the machine learning community due to the ubiquity of graph-structured data and the importance of solving numerous real-world problems such as semi-supervised node classification and graph classification ( Zhu , 2005 ; Shervashidze et al. , 2011 ; Lü & Zhou , 2011 ) . Graphs model pairwise interactions between entities , but fail to capture more complex relationships . Hypergraphs , on the other hand , involve hyperedges that can connect more than two nodes , and are therefore capable of representing higher-order structures in datasets . There exist many machine learning and data mining applications for which modeling high-order relations via hypergraphs leads to better learning performance when compared to graph-based models ( Benson et al. , 2016 ) . For example , in subspace clustering , in order to fit a d-dimensional subspace , we need at least d+1 data points ( Agarwal et al. , 2005 ) ; in hierarchical species classification of a FoodWeb , a carbon-flow unit based on four species is significantly more predictive than that involving two or three entities ( Li & Milenkovic , 2017 ) . Hence , it is desirable to generalize GNN concepts to hypergraphs . One straightforward way to generalize graph algorithms for hypergraphs is to convert hypergraphs to graphs via clique-expansion ( CE ) ( Agarwal et al. , 2005 ; Zhou et al. , 2006 ) . CE replaces hyperedges by ( possibly weighted ) cliques . Many recent attempts to generalize GNNs to hypergraphs can be viewed as redefining hypergraph propagation schemes based on CE or its variants ( Yadati et al. , 2019 ; Feng et al. , 2019 ; Bai et al. , 2021 ) , which was also originally pointed out in ( Dong et al. , 2020 ) . Despite the simplicity of CE , it is well-known that CE causes distortion and leads to undesired losses in learning performance ( Hein et al. , 2013 ; Li & Milenkovic , 2018 ; Chien et al. 
, 2019b ) . In parallel , more sophisticated propagation rules directly applicable on hypergraphs , and related to tensor eigenproblems , have been studied as well . One such example , termed Multilinear PageRank ( Gleich et al. , 2015 ) , generalizes PageRank techniques ( Page et al. , 1999 ; Jeh & Widom , 2003 ) directly to hypergraphs without resorting to the use of CE . Its propagation scheme is closely related to the Z eigenproblem which has been extensively investigated in tensor analysis and spectral hypergraph theory ( Li et al. , 2013 ; He & Huang , 2014 ; Qi & Luo , 2017 ; Pearson & Zhang , 2014 ; Gautier et al. , 2019 ) . An important result of Benson et al . ( 2017 ) shows that tensor-based propagation outperforms a CE-based scheme on several tasks . The pros and cons of these two types of propagation rule in statistical learning frameworks were examined in Chien et al . ( 2021a ) . More recently , it was shown in Tudisco et al . ( 2020 ) that label propagation based on CE of hypergraphs does not always lead to acceptable performance . Similarly to Chien et al . ( 2021a ) , Benson ( 2019 ) identified positive traits of CE eigenvectors but argued in favor of using Z eigenvectors due to their more versatile nonlinear formulation compared to that of the eigenvectors of CE graphs . We address two natural questions pertaining to learning on hypergraphs : “ Is there a general framework that includes CE-based , Z-based and other propagations on hypergraphs ? ” and , “ Can we learn propagation schemes for hypergraph neural networks suitable for different datasets and different learning tasks ? ” We give affirmative answers to both questions . We propose a general framework , AllSet , which includes both CE-based and tensor-based propagation rules as special cases . We also propose two powerful hypergraph neural network layer architectures that can learn adequate propagation rules for hypergraphs using multiset functions . 
Our specific contributions are as follows . 1 . We show that using AllSet , one can not only model CE-based and tensor-based propagation rules , but also cover propagation methods of most existing hypergraph neural networks , including HyperGCN ( Yadati et al. , 2019 ) , HGNN ( Feng et al. , 2019 ) , HCHA ( Bai et al. , 2021 ) , HNHN ( Dong et al. , 2020 ) and HyperSAGE ( Arya et al. , 2020 ) . Most importantly , we show that all these propagation rules can be described as a composition of two multiset functions ( leading to the proposed method name AllSet ) . Furthermore , we also show that AllSet is a hypergraph generalization of Message Passing Neural Networks ( MPNN ) ( Gilmer et al. , 2017 ) , a powerful graph learning framework encompassing many GNNs such as GCN ( Kipf & Welling , 2017 ) and GAT ( Veličković et al. , 2018 ) . 2 . Inspired by Deep Sets ( Zaheer et al. , 2017 ) and Set Transformer ( Lee et al. , 2019 ) , we propose AllDeepSets and AllSetTransformer layers which are end-to-end trainable . They can be plugged into most types of graph neural networks to enable effortless generalizations to hypergraphs . Notably , our work represents the first attempt to connect the problem of learning multiset functions with hypergraph neural networks , and to leverage the powerful Set Transformer model in the design of these specialized networks . 3 . We report , to the best of our knowledge , the most extensive experiments in the hypergraph neural networks literature pertaining to semi-supervised node classification . Experimental results against ten baseline methods on ten benchmark datasets and three newly curated and challenging datasets demonstrate the superiority and consistency of our AllSet approach .
As an example , AllSetTransformer outperforms the best baseline method by close to 4 % in accuracy on the Yelp and Zoo datasets and 3 % on the Walmart dataset ; furthermore , AllSetTransformer matches or outperforms the best baseline models on nine out of ten datasets . Such improvements are not possible with modifications of HAN ( Wang et al. , 2019b ) , a heterogeneous GNN , adapted to hypergraphs or other specialized approaches that do not use Set Transformers . 4 . As another practical contribution , we also provide a succinct pipeline for standardization of the hypergraph neural networks evaluation process based on Pytorch Geometric ( Fey & Lenssen , 2019 ) . The pipeline is built in a fashion similar to that proposed in recent benchmarking GNNs papers ( Hu et al. , 2020a ; Lim et al. , 2021 ) . The newly introduced datasets , along with our reported testbed , may be viewed as an initial step toward benchmarking hypergraph neural networks . All proofs and concluding remarks are relegated to the Appendix . 2 BACKGROUND . Notation . A hypergraph is an ordered pair of sets $G ( V , E )$ , where $V = \{ 1 , 2 , \dots , n \}$ is the set of nodes while $E$ is the set of hyperedges . Each hyperedge $e \in E$ is a subset of $V$ , i.e. , $e \subseteq V$ . Unlike a graph edge , a hyperedge $e$ may contain more than two nodes . If $\forall e \in E$ one has $|e| = d \in \mathbb{N}$ , the hypergraph $G$ is termed $d$-uniform . A $d$-uniform hypergraph can be represented by a $d$-dimensional supersymmetric tensor such that for all distinct collections $i_1 , \dots , i_d \in V$ , $A_{i_1 , \dots , i_d} = \frac{1}{( d - 2 ) !}$ if $e = \{ i_1 , \dots , i_d \} \in E$ , and $A_{i_1 , \dots , i_d} = 0$ otherwise . Henceforth , $A_{: , i_2 , \dots , i_d}$ is used to denote the slice of $A$ along the first coordinate . A hypergraph can alternatively be represented by its incidence matrix $H$ , where $H_{ve} = 1$ if $v \in e$ and $H_{ve} = 0$ otherwise . We use the superscript $( t )$ to represent the functions or variables at the $t$-th step of propagation and $\|$ to denote concatenation .
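The notation above can be made concrete for a 3-uniform hypergraph: the sketch below (a hypothetical NumPy helper, with a made-up node and hyperedge set) builds the supersymmetric adjacency tensor with entries $1/(d-2)!$.

```python
import numpy as np
from itertools import permutations
from math import factorial

def uniform_adjacency_tensor(n, hyperedges, d):
    # Supersymmetric adjacency tensor of a d-uniform hypergraph:
    # A[i1, ..., id] = 1/(d-2)! for every ordering of a hyperedge's nodes,
    # and 0 everywhere else.
    A = np.zeros((n,) * d)
    for e in hyperedges:
        for idx in permutations(e):
            A[idx] = 1.0 / factorial(d - 2)
    return A

# Toy 3-uniform hypergraph on 4 nodes with hyperedges {0,1,2} and {1,2,3}.
A = uniform_adjacency_tensor(4, [(0, 1, 2), (1, 2, 3)], d=3)
```

Filling every permutation of each hyperedge is exactly what makes the tensor supersymmetric, so any reordering of the indices gives the same entry.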
Furthermore , $\Theta$ and $b$ are reserved for a learnable weight matrix and bias of a neural network , respectively . Finally , we use $\sigma ( \cdot )$ to denote a nonlinear activation function ( such as ReLU , ELU or LeakyReLU ) , which depends on the model used . CE-based propagation on hypergraphs . The CE of a hypergraph $G ( V , E )$ is a weighted graph with the same set of nodes $V$ . It can be described in terms of the associated adjacency or incidence matrices , which we write with a slight abuse of notation as $A^{(CE)}_{ij} = \sum_{i_3 , \dots , i_d \in V} A_{i , j , i_3 , \dots , i_d}$ and $H^{(CE)} = H H^T$ , respectively . It is obvious that these two matrices only differ in their diagonal entries ( 0s versus node degrees , respectively ) . One step of propagation of an $F$-dimensional node feature matrix $X \in \mathbb{R}^{n \times F}$ is captured by $A^{(CE)} X$ or $H^{(CE)} X$ ; alternatively , in terms of node feature updates , we have $\mathrm{CEprop}_A : X^{(t+1)}_{v,:} = \sum_{e : v \in e} \sum_{u \in e \setminus v} X^{(t)}_{u,:}$ and $\mathrm{CEprop}_H : X^{(t+1)}_{v,:} = \sum_{e : v \in e} \sum_{u \in e} X^{(t)}_{u,:}$ ( 1 ) . Many existing hypergraph convolutional layers actually perform CE-based propagation , potentially with further degree normalization and nonlinear hyperedge weights . For example , the propagation rule of HGNN ( Feng et al. , 2019 ) takes the following node-wise form : $X^{(t+1)}_{v,:} = \sigma ( [ \frac{1}{\sqrt{d_v}} \sum_{e : v \in e} \frac{w_e}{|e|} \sum_{u \in e} \frac{X^{(t)}_{u,:}}{\sqrt{d_u}} ] \Theta^{(t)} + b^{(t)} )$ ( 2 ) , where $d_v$ denotes the degree of node $v$ , $w_e$ is a predefined weight of hyperedge $e$ and $\sigma$ is the ReLU activation function . The hypergraph convolution in HCHA ( Bai et al. , 2021 ) uses different degree normalizations and attention weights , with the attention weights depending on the node features and the hyperedge features . If datasets do not contain hyperedge feature information or if the features come from a different domain compared to the node features , one can not use their attention module ( Bai et al. , 2021 ) .
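The CE propagation step of Eq. ( 1 ) can be sketched on a toy hypergraph; the incidence matrix and NumPy code below are illustrative, not taken from the paper's implementation.

```python
import numpy as np

# Incidence matrix of a toy hypergraph: 4 nodes, hyperedges {0,1,2} and {2,3}.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)

def ce_prop_h(X, H):
    # CEprop_H: each node accumulates the summed features of all members of
    # every hyperedge it belongs to, i.e. one step of (H H^T) X.
    return (H @ H.T) @ X

X = np.eye(4)            # one-hot node features for readability
X_next = ce_prop_h(X, H)

# CEprop_A differs only on the diagonal: subtracting the node degrees
# removes each node's own contribution, giving a zero-diagonal adjacency.
A_ce = H @ H.T - np.diag(H.sum(axis=1))
```

With one-hot features, row $v$ of `X_next` directly lists how many shared hyperedges node $v$ has with each node (including itself on the diagonal), making the 0s-versus-degrees remark easy to check.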
HyperGCN replaces each hyperedge by an incomplete clique via so-called mediators ( Yadati et al. , 2019 ) . When the hypergraph is 3-uniform , the aforementioned approach becomes a standard weighted CE . Hence , all the described hypergraph neural networks adopt propagation rules based on CE or its variants , potentially with the addition of nonlinear hyperedge weights . The described methods achieve reasonably good performance on standard cocitation and coauthor benchmarking datasets . Tensor-based propagations . As mentioned in the introduction , there exist more elaborate tensor-based propagation schemes which in some cases outperform CE-based methods . The propagation rules related to Z eigenproblems such as multilinear PageRank ( Gleich et al. , 2015 ) and spacey random walks ( Benson et al. , 2017 ) are two such examples . The Z eigenproblem for an adjacency tensor $A$ of a $d$-uniform hypergraph is defined as : $\lambda x = A x^{d-1}$ , $x \in \mathbb{R}^n$ and $\| x \|_2 = 1$ ( 3 ) , where $A x^{d-1}$ equals $\sum_{i_2 , \dots , i_d} A_{: , i_2 , \dots , i_d} x_{i_2} \cdots x_{i_d}$ , an entity frequently used in the tensor analysis literature . The Z eigenproblem has been extensively studied both in the context of tensor analysis and network sciences ; the problem is also known as the $l^2$ eigenproblem ( Lim , 2005 ; Gautier et al. , 2019 ) . We refer the interested readers to Qi & Luo ( 2017 ) for a more detailed theoretical analysis of Z eigenproblems and to Benson ( 2019 ) for their application in the study of hypergraph centralities . By ignoring the norm constraint , one can define the following tensor-based propagation rule based on ( 3 ) : $\mathrm{Zprop} : X^{(t+1)}_{v,:} = \sum_{e : v \in e} ( d - 1 ) \prod_{u \in e \setminus v} X^{(t)}_{u,:}$ ( 4 ) . Despite its interesting theoretical properties , Zprop is known to suffer from what is termed the “ unit problem ” ( Benson , 2019 ) . In practice , the product can cause numerical instabilities for large hyperedges .
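A minimal sketch of the Zprop rule of Eq. ( 4 ) for a 3-uniform hypergraph; the NumPy helper and the toy features below are hypothetical, chosen only to make the update easy to verify by hand.

```python
import numpy as np

def z_prop(X, hyperedges, d=3):
    # Zprop for a d-uniform hypergraph: each node accumulates, over its
    # hyperedges, (d-1) times the elementwise product of the features of
    # the other member nodes.
    out = np.zeros_like(X)
    for e in hyperedges:
        for v in e:
            prod = np.ones(X.shape[1])
            for u in e:
                if u != v:
                    prod = prod * X[u]
            out[v] += (d - 1) * prod
    return out

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])
X_next = z_prop(X, [(0, 1, 2), (1, 2, 3)], d=3)
```

For node 0 (only in hyperedge (0,1,2)) the update is $2 \cdot X_1 \odot X_2 = 2 \cdot [15, 24] = [30, 48]$; the multiplicative form also makes the unit problem visible, since feature magnitudes get amplified or shrunk with hyperedge size.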
Furthermore , Zprop has only been studied for the case of $d$-uniform hypergraphs , which makes it less relevant for general hypergraph learning tasks . Clearly , CEprop and Zprop have different advantages and disadvantages for different dataset structures . This motivates finding a general framework that encompasses these two and other propagation rules , under which one can learn the propagation scheme best suited to a given hypergraph neural network task .
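The framework motivated above, hypergraph layers built as a composition of two multiset functions (nodes to hyperedges, then hyperedges back to nodes), can be sketched as follows. This is an illustrative NumPy stand-in: plain sum aggregation replaces the learnable Deep Sets / Set Transformer modules of the AllDeepSets and AllSetTransformer layers, and all names and toy data are hypothetical.

```python
import numpy as np

def allset_layer(X, hyperedges, f_v2e=None, f_e2v=None):
    # One AllSet-style propagation step: a multiset function aggregates node
    # features into each hyperedge, then a second multiset function
    # aggregates incident hyperedge features back into each node. Both
    # default to sums, which are permutation invariant.
    f_v2e = f_v2e or (lambda feats: feats.sum(axis=0))
    f_e2v = f_e2v or (lambda feats: feats.sum(axis=0))
    E = np.stack([f_v2e(X[list(e)]) for e in hyperedges])  # hyperedge features
    out = np.zeros_like(X)
    for v in range(X.shape[0]):
        incident = [E[i] for i, e in enumerate(hyperedges) if v in e]
        if incident:
            out[v] = f_e2v(np.stack(incident))
    return out

X = np.arange(12, dtype=float).reshape(4, 3)  # 4 nodes, 3 features
hyperedges = [{0, 1, 2}, {2, 3}]              # two hyperedges
X_next = allset_layer(X, hyperedges)
```

Because both aggregators are multiset functions, the output is independent of the order in which nodes appear inside a hyperedge; swapping the sums for learned set functions recovers the trainable layers described in the paper.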
Transferring standard graph operators to the hypergraph setting is non-trivial, and several message passing operators have been considered in the setting of hypergraphs, including clique-expansion-based and tensor-based ones. In this paper the authors propose a general framework where learnable multiset functions are used in order to learn the hypergraph propagation map from the data. In particular, this framework contains many previously considered propagation maps as particular cases.
You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks
Hypergraphs are used to model higher-order interactions amongst agents and there exist many practically relevant instances of hypergraph datasets . To enable the efficient processing of hypergraph data , several hypergraph neural network platforms have been proposed for learning hypergraph properties and structure , with a special focus on node classification tasks . However , almost all existing methods use heuristic propagation rules and offer suboptimal performance on benchmarking datasets . We propose AllSet , a new hypergraph neural network paradigm that represents a highly general framework for ( hyper ) graph neural networks and for the first time implements hypergraph neural network layers as compositions of two multiset functions that can be efficiently learned for each task and each dataset . The proposed AllSet framework also for the first time integrates Deep Sets and Set Transformers with hypergraph neural networks for the purpose of learning multiset functions and therefore allows for significant modeling flexibility and high expressive power . To evaluate the performance of AllSet , we conduct the most extensive experiments to date involving ten known benchmarking datasets and three newly curated datasets that represent significant challenges for hypergraph node classification . The results demonstrate that our method has the unique ability to either match or outperform all other hypergraph neural networks across the tested datasets : As an example , the performance improvements over existing methods and a new method based on heterogeneous graph neural networks are close to 4 % on the Yelp and Zoo datasets , and 3 % on the Walmart dataset . 1 INTRODUCTION . 
Graph-centered machine learning , and especially graph neural networks ( GNNs ) , have attracted great interest in the machine learning community due to the ubiquity of graph-structured data and the importance of solving numerous real-world problems such as semi-supervised node classification and graph classification ( Zhu , 2005 ; Shervashidze et al. , 2011 ; Lü & Zhou , 2011 ) . Graphs model pairwise interactions between entities , but fail to capture more complex relationships . Hypergraphs , on the other hand , involve hyperedges that can connect more than two nodes , and are therefore capable of representing higher-order structures in datasets . There exist many machine learning and data mining applications for which modeling high-order relations via hypergraphs leads to better learning performance when compared to graph-based models ( Benson et al. , 2016 ) . For example , in subspace clustering , in order to fit a d-dimensional subspace , we need at least d+1 data points ( Agarwal et al. , 2005 ) ; in hierarchical species classification of a FoodWeb , a carbon-flow unit based on four species is significantly more predictive than that involving two or three entities ( Li & Milenkovic , 2017 ) . Hence , it is desirable to generalize GNN concepts to hypergraphs . One straightforward way to generalize graph algorithms for hypergraphs is to convert hypergraphs to graphs via clique-expansion ( CE ) ( Agarwal et al. , 2005 ; Zhou et al. , 2006 ) . CE replaces hyperedges by ( possibly weighted ) cliques . Many recent attempts to generalize GNNs to hypergraphs can be viewed as redefining hypergraph propagation schemes based on CE or its variants ( Yadati et al. , 2019 ; Feng et al. , 2019 ; Bai et al. , 2021 ) , which was also originally pointed out in ( Dong et al. , 2020 ) . Despite the simplicity of CE , it is well-known that CE causes distortion and leads to undesired losses in learning performance ( Hein et al. , 2013 ; Li & Milenkovic , 2018 ; Chien et al. 
, 2019b ) . In parallel , more sophisticated propagation rules directly applicable on hypergraphs , and related to tensor eigenproblems , have been studied as well . One such example , termed Multilinear PageRank ( Gleich et al. , 2015 ) , generalizes PageRank techniques ( Page et al. , 1999 ; Jeh & Widom , 2003 ) directly to hypergraphs without resorting to the use of CE . Its propagation scheme is closely related to the Z eigenproblem which has been extensively investigated in tensor analysis and spectral hypergraph theory ( Li et al. , 2013 ; He & Huang , 2014 ; Qi & Luo , 2017 ; Pearson & Zhang , 2014 ; Gautier et al. , 2019 ) . An important result of Benson et al . ( 2017 ) shows that tensor-based propagation outperforms a CE-based scheme on several tasks . The pros and cons of these two types of propagation rule in statistical learning frameworks were examined in Chien et al . ( 2021a ) . More recently , it was shown in Tudisco et al . ( 2020 ) that label propagation based on CE of hypergraphs does not always lead to acceptable performance . Similarly to Chien et al . ( 2021a ) , Benson ( 2019 ) identified positive traits of CE eigenvectors but argued in favor of using Z eigenvectors due to their more versatile nonlinear formulation compared to that of the eigenvectors of CE graphs . We address two natural questions pertaining to learning on hypergraphs : “ Is there a general framework that includes CE-based , Z-based and other propagations on hypergraphs ? ” and , “ Can we learn propagation schemes for hypergraph neural networks suitable for different datasets and different learning tasks ? ” We give affirmative answers to both questions . We propose a general framework , AllSet , which includes both CE-based and tensor-based propagation rules as special cases . We also propose two powerful hypergraph neural network layer architectures that can learn adequate propagation rules for hypergraphs using multiset functions . 
Our specific contributions are as follows. 1. We show that using AllSet, one can not only model CE-based and tensor-based propagation rules, but also cover the propagation methods of most existing hypergraph neural networks, including HyperGCN (Yadati et al., 2019), HGNN (Feng et al., 2019), HCHA (Bai et al., 2021), HNHN (Dong et al., 2020) and HyperSAGE (Arya et al., 2020). Most importantly, we show that all these propagation rules can be described as a composition of two multiset functions (leading to the proposed method name AllSet). Furthermore, we also show that AllSet is a hypergraph generalization of Message Passing Neural Networks (MPNN) (Gilmer et al., 2017), a powerful graph learning framework encompassing many GNNs such as GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018). 2. Inspired by Deep Sets (Zaheer et al., 2017) and Set Transformer (Lee et al., 2019), we propose AllDeepSets and AllSetTransformer layers which are end-to-end trainable. They can be plugged into most types of graph neural networks to enable effortless generalizations to hypergraphs. Notably, our work represents the first attempt to connect the problem of learning multiset functions with hypergraph neural networks, and to leverage the powerful Set Transformer model in the design of these specialized networks. 3. We report, to the best of our knowledge, the most extensive experiments in the hypergraph neural network literature pertaining to semi-supervised node classification. Experimental results against ten baseline methods on ten benchmark datasets and three newly curated and challenging datasets demonstrate the superiority and consistency of our AllSet approach.
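The two-multiset-function composition described above (nodes to hyperedges, then hyperedges back to nodes) can be sketched in a Deep-Sets style with numpy. This is purely illustrative, not the paper's trained AllDeepSets layer: the single ReLU layers standing in for learned multiset functions, the toy hypergraph, the feature dimensions and the random weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiset_fn(W, S):
    # Placeholder learnable multiset function: a ReLU layer applied to each
    # element of the multiset S, followed by a permutation-invariant sum.
    return np.maximum(0.0, S @ W).sum(axis=0)

# Toy hypergraph and placeholder parameters.
edges = [[0, 1, 2], [2, 3]]
X = rng.normal(size=(4, 8))             # node features
W_ve = rng.normal(size=(8, 8)) * 0.1    # node -> hyperedge weights
W_ev = rng.normal(size=(8, 8)) * 0.1    # hyperedge -> node weights

# Step 1 (f_{V->E}): each hyperedge aggregates the multiset of its nodes.
Z = np.stack([multiset_fn(W_ve, X[list(e)]) for e in edges])

# Step 2 (f_{E->V}): each node aggregates the multiset of its incident hyperedges.
X_new = np.stack([
    multiset_fn(W_ev, Z[[j for j, e in enumerate(edges) if v in e]])
    for v in range(X.shape[0])
])
```

Because both aggregations are sums over multisets, the layer is invariant to any reordering of nodes within a hyperedge and of hyperedges around a node, which is the key structural property the framework exploits.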
As an example, AllSetTransformer outperforms the best baseline method by close to 4% in accuracy on the Yelp and Zoo datasets and 3% on the Walmart dataset; furthermore, AllSetTransformer matches or outperforms the best baseline models on nine out of ten datasets. Such improvements are not possible with modifications of HAN (Wang et al., 2019b), a heterogeneous GNN adapted to hypergraphs, or other specialized approaches that do not use Set Transformers. 4. As another practical contribution, we also provide a succinct pipeline for standardization of the hypergraph neural network evaluation process based on Pytorch Geometric (Fey & Lenssen, 2019). The pipeline is built in a fashion similar to that proposed in recent GNN benchmarking papers (Hu et al., 2020a; Lim et al., 2021). The newly introduced datasets, along with our reported testbed, may be viewed as an initial step toward benchmarking hypergraph neural networks. All proofs and concluding remarks are relegated to the Appendix. 2 BACKGROUND. Notation. A hypergraph is an ordered pair of sets $G(V, E)$, where $V = \{1, 2, \ldots, n\}$ is the set of nodes and $E$ is the set of hyperedges. Each hyperedge $e \in E$ is a subset of $V$, i.e., $e \subseteq V$. Unlike a graph edge, a hyperedge $e$ may contain more than two nodes. If $|e| = d \in \mathbb{N}$ for all $e \in E$, the hypergraph $G$ is termed $d$-uniform. A $d$-uniform hypergraph can be represented by a $d$-dimensional supersymmetric tensor $\mathbf{A}$ such that for all distinct collections $i_1, \ldots, i_d \in V$, $\mathbf{A}_{i_1,\ldots,i_d} = \frac{1}{(d-2)!}$ if $e = \{i_1, \ldots, i_d\} \in E$, and $\mathbf{A}_{i_1,\ldots,i_d} = 0$ otherwise. Henceforth, $\mathbf{A}_{:,i_2,\ldots,i_d}$ is used to denote the slice of $\mathbf{A}$ along the first coordinate. A hypergraph can alternatively be represented by its incidence matrix $H$, where $H_{ve} = 1$ if $v \in e$ and $H_{ve} = 0$ otherwise. We use the superscript $(t)$ to represent functions or variables at the $t$-th step of propagation and $\|$ to denote concatenation.
Furthermore, $\Theta$ and $b$ are reserved for a learnable weight matrix and bias of a neural network, respectively. Finally, we use $\sigma(\cdot)$ to denote a nonlinear activation function (such as ReLU, ELU or LeakyReLU), which depends on the model used. CE-based propagation on hypergraphs. The CE of a hypergraph $G(V, E)$ is a weighted graph on the same set of nodes $V$. It can be described in terms of the associated adjacency or incidence matrices, which we write with a slight abuse of notation as $A^{(CE)}_{ij} = \sum_{i_3,\ldots,i_d \in V} \mathbf{A}_{i,j,i_3,\ldots,i_d}$ and $H^{(CE)} = HH^T$, respectively. These two matrices differ only in their diagonal entries (zeros versus node degrees, respectively). One step of propagation of an $F$-dimensional node feature matrix $X \in \mathbb{R}^{n \times F}$ is captured by $A^{(CE)}X$ or $H^{(CE)}X$; alternatively, in terms of node feature updates, we have

CEprop$_A$: $X^{(t+1)}_{v,:} = \sum_{e: v \in e} \sum_{u \in e \setminus v} X^{(t)}_{u,:}$;  CEprop$_H$: $X^{(t+1)}_{v,:} = \sum_{e: v \in e} \sum_{u \in e} X^{(t)}_{u,:}$. (1)

Many existing hypergraph convolutional layers actually perform CE-based propagation, potentially with further degree normalization and nonlinear hyperedge weights. For example, the propagation rule of HGNN (Feng et al., 2019) takes the following node-wise form:

$X^{(t+1)}_{v,:} = \sigma\!\left(\left[\frac{1}{\sqrt{d_v}} \sum_{e: v \in e} \frac{w_e}{|e|} \sum_{u \in e} \frac{X^{(t)}_{u,:}}{\sqrt{d_u}}\right] \Theta^{(t)} + b^{(t)}\right)$, (2)

where $d_v$ denotes the degree of node $v$, $w_e$ is a predefined weight of hyperedge $e$, and $\sigma$ is the ReLU activation function. The hypergraph convolution in HCHA (Bai et al., 2021) uses different degree normalizations and attention weights, with the attention weights depending on node features and hyperedge features. If datasets do not contain hyperedge feature information, or if these features come from a different domain than the node features, the attention module of HCHA cannot be used (Bai et al., 2021).
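Both CE propagation rules in (1) can be computed from the incidence matrix alone: CEprop_H is $H H^T X$, and CEprop_A additionally removes each node's own contribution, once per incident hyperedge. A minimal numpy sketch on a toy hypergraph (the hypergraph and feature values below are illustrative):

```python
import numpy as np

# Toy hypergraph: 4 nodes, hyperedges e1 = {0, 1, 2} and e2 = {2, 3}.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)          # incidence matrix, H[v, e] = 1 iff v in e
X = np.arange(8, dtype=float).reshape(4, 2)  # node features, F = 2

# CEprop_H: X_v <- sum over hyperedges e containing v of the sum over all u in e.
X_H = H @ (H.T @ X)

# CEprop_A: same, but each inner sum excludes v itself, so subtract
# v's own feature once for every incident hyperedge.
deg = H.sum(axis=1, keepdims=True)           # node degrees
X_A = X_H - deg * X
```

Here `X_A` equals $A^{(CE)}X$ because $A^{(CE)}$ and $H^{(CE)}$ differ exactly by the diagonal degree matrix, matching the remark in the text.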
HyperGCN replaces each hyperedge by an incomplete clique via so-called mediators (Yadati et al., 2019). When the hypergraph is 3-uniform, this approach reduces to a standard weighted CE. Hence, all the described hypergraph neural networks adopt propagation rules based on CE or its variants, potentially with the addition of nonlinear hyperedge weights. The described methods achieve reasonably good performance on standard cocitation and coauthorship benchmarking datasets. Tensor-based propagations. As mentioned in the introduction, there exist more elaborate tensor-based propagation schemes which in some cases outperform CE-based methods. The propagation rules related to Z eigenproblems, such as multilinear PageRank (Gleich et al., 2015) and spacey random walks (Benson et al., 2017), are two such examples. The Z eigenproblem for an adjacency tensor $\mathbf{A}$ of a $d$-uniform hypergraph is defined as:

$\lambda x = \mathbf{A}x^{d-1}$, $x \in \mathbb{R}^n$ and $\|x\|_2 = 1$. (3)

Here, $\mathbf{A}x^{d-1}$ equals $\sum_{i_2,\ldots,i_d} \mathbf{A}_{:,i_2,\ldots,i_d}\, x_{i_2} \cdots x_{i_d}$, an entity frequently used in the tensor analysis literature. The Z eigenproblem has been extensively studied both in the context of tensor analysis and network science; the problem is also known as the l2 eigenproblem (Lim, 2005; Gautier et al., 2019). We refer interested readers to Qi & Luo (2017) for a more detailed theoretical analysis of Z eigenproblems and to Benson (2019) for their application in the study of hypergraph centralities. By ignoring the norm constraint, one can define the following tensor-based propagation rule based on (3):

Zprop: $X^{(t+1)}_{v,:} = \sum_{e: v \in e} (d-1) \prod_{u \in e \setminus v} X^{(t)}_{u,:}$. (4)

Despite its interesting theoretical properties, Zprop is known to suffer from what is termed the "unit problem" (Benson, 2019). In practice, the product can cause numerical instabilities for large hyperedges.
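For a $d$-uniform hypergraph, Zprop in (4) multiplies together the features of the other $d-1$ nodes of each incident hyperedge. A toy sketch for a 3-uniform, one-feature case (the hypergraph and values are illustrative); note how the product over a large hyperedge could quickly over- or underflow, which is exactly the "unit problem" mentioned above:

```python
import numpy as np

# 3-uniform toy hypergraph on 4 nodes with hyperedges {0, 1, 2} and {1, 2, 3}.
edges = [{0, 1, 2}, {1, 2, 3}]
d = 3                                       # uniformity
X = np.array([[1.0], [2.0], [3.0], [4.0]])  # one feature per node

def zprop(X, edges, d):
    # X_v <- sum over incident hyperedges of (d - 1) times the
    # elementwise product of the other nodes' features, as in eq. (4).
    X_new = np.zeros_like(X)
    for v in range(X.shape[0]):
        for e in edges:
            if v in e:
                prod = np.ones(X.shape[1])
                for u in e - {v}:
                    prod = prod * X[u]
                X_new[v] += (d - 1) * prod
    return X_new
```

Unlike CEprop, the inner aggregation here is a product rather than a sum, which is why Zprop falls outside CE-based propagation and motivates a framework covering both.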
Furthermore, Zprop has only been studied for $d$-uniform hypergraphs, which makes it less relevant for general hypergraph learning tasks. Clearly, CEprop and Zprop have different advantages and disadvantages for different dataset structures. This motivates finding a general framework that encompasses these two and other propagation rules, under which a suitable propagation scheme can be learned for each hypergraph neural network task.
Hypergraphs can capture group/set relationships in real-world data. Several hypergraph neural networks have been proposed in the literature to exploit both group relationships among nodes and node features for learning with hypergraph data. The contributions of the paper are: 1) generalisation of most existing methods into a single framework (named AllSet); 2) instantiations of AllSet based on Deep Sets [NeurIPS'17] and Set Transformer [ICML'19]; and 3) empirical evaluation on existing benchmarks and three newly curated hypergraph datasets.
SP:cb6aa11a7686ba134fd0a5d3c96847ab3e1b47ea
You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks
The paper proposes a framework for hypergraph neural networks. It argues that existing work propagates over a hypergraph by first transforming it into a regular graph through clique expansion, which may lose information. Alternatively, propagation can be based on tensors. The proposed framework seeks to combine these different ways of propagation into one unified framework. The proposed framework is called AllSet, with two instantiations, AllDeepSets and AllSetTransformer. Experiments are conducted on several hypergraph datasets.
Preference Conditioned Neural Multi-objective Combinatorial Optimization
1 INTRODUCTION. Many real-world applications can be modeled as multiobjective combinatorial optimization (MOCO) problems (Ehrgott & Gandibleux, 2000). The multiobjective traveling salesman problem (MOTSP) (Lust & Teghem, 2010a), the multiobjective vehicle routing problem (MOVRP) (Jozefowiez et al., 2008) and the multiobjective knapsack problem (MOKP) (Bazgan et al., 2009) are three widely studied problems in the research community. These problems have multiple conflicting objectives to optimize, and no single solution can optimize all the objectives at the same time. Instead, there is a set of Pareto solutions with different optimal trade-offs among the objectives. It is very challenging to find all exact Pareto optimal solutions for a MOCO problem: finding even one Pareto solution is already NP-hard for many problems (Ehrgott & Gandibleux, 2000), and the number of Pareto solutions can be exponentially large in the problem size (Ehrgott, 2005; Herzel et al., 2021). The decision-maker's preference among the objectives is usually unknown in advance, making it hard to reduce the problem to a single-objective one. Over the past several decades, many methods have been developed to find an approximate Pareto set for different MOCO problems within a reasonable computational time. These methods often require carefully handcrafted and specialized heuristics for each problem, which can be very labor-intensive in practice. In many real-world applications, practitioners need to repeatedly solve different problems with similar characteristics, for which problem instances can be easily obtained or generated (Bengio et al., 2020). It is desirable to learn the patterns behind these problem instances, explicitly or implicitly, in order to design efficient algorithms (Cappart et al., 2021a). Machine learning techniques can naturally serve this purpose.
Some learning-based methods have recently been proposed for solving single-objective combinatorial optimization problems (Bengio et al., 2020; Vesselinova et al., 2020; Mazyavkina et al., 2021; Cappart et al., 2021a). In this work, we extend the learning-based approach to solve MOCO problems in a principled way, as shown in Figure 1. Our main contributions include: • We propose a novel neural combinatorial optimization method to approximate the whole Pareto set, with its infinite trade-offs, via a single model, which allows decision-makers to directly generate any preferred trade-off solution without extra search. This novel and powerful property is not supported by any other learning-based or heuristics-based method. • We develop an efficient end-to-end reinforcement learning algorithm to train the single model for all different preferences simultaneously, and a simple yet powerful active adaptation method to handle out-of-distribution problem instances. • We conduct comprehensive experiments on MOTSP, MOVRP and MOKP problems with different settings. The results show that our proposed method can successfully and efficiently approximate the Pareto sets of different problems. It also significantly outperforms other methods in solution quality, speed, and/or model efficiency. 2 BACKGROUND AND RELATED WORK. Multiobjective Combinatorial Optimization (MOCO). MOCO has been attracting much research effort from different communities over the past several decades (Sawaragi et al., 1985; Wallenius et al., 2008; Herzel et al., 2021). There are two main approaches to tackling MOCO problems: exact methods and approximation methods (Ehrgott, 2005). Exact methods can be impractically costly when, as often happens, the MOCO problem is NP-hard and the number of Pareto optimal solutions is huge (Florios & Mavrotas, 2014).
For this reason, many heuristics (Jaszkiewicz, 2002; Ehrgott & Gandibleux, 2008) and approximation methods (Papadimitriou & Yannakakis, 2000; Herzel et al., 2021) have been developed to find a moderately sized set of approximate Pareto solutions within a reasonable computational budget. However, these methods usually depend on carefully handcrafted designs for each specific problem (Ehrgott & Gandibleux, 2000), and the required effort is nontrivial for each new real-world problem. Machine Learning for Combinatorial Optimization. According to Bengio et al. (2020), there are three main learning-based approaches for CO: learning to configure algorithms (Kruber et al., 2017; Bonami et al., 2018), learning alongside the algorithms (Lodi & Zarpellon, 2017; Gasse et al., 2019; Chen & Tian, 2019; Kool et al., 2021), and learning to directly predict the solutions (Nowak et al., 2018; Emami & Ranka, 2018; Larsen et al., 2018). Neural combinatorial optimization (NCO) belongs to the last category, where the trained model directly predicts a good solution for a given problem instance. Vinyals et al. (2015) propose the pointer network to sequentially construct a solution for the TSP. Bello et al. (2017) make a critical improvement by using reinforcement learning to train the model, eliminating the impractical need to collect optimal solutions for NP-hard problems. Further improvements to model structures and training procedures have been proposed in the past few years (Nazari et al., 2018; Deudon et al., 2018; Kool et al., 2019; Joshi et al., 2020), especially with graph neural networks (GNNs) (Dai et al., 2017; Li et al., 2018; Joshi et al., 2019; Dwivedi et al., 2020; Drori et al., 2020). Some works have recently been proposed to design more powerful training strategies, such as optimization with multiple optima (Kwon et al.
, 2020), unsupervised learning (Karalias & Loukas, 2020) and curriculum learning schemes (Lisicki et al., 2020). Neural MOCO. Current learning-based methods mostly target single-objective combinatorial problems. Recently, a few attempts have been made to solve MOCO problems (Li et al., 2020; Wu et al., 2020; Zhang et al., 2021b;a). They first decompose a MOCO problem into multiple single-objective scalarized subproblems, and then build a set of models to solve each subproblem. However, since the number of Pareto solutions can be exponentially large (Ehrgott, 2005), the required number of models would be huge for finding the whole Pareto set. This work proposes a single preference-based model for solving MOCO problems, from which decision-makers can obtain arbitrary trade-off solutions. The proposed single neural MOCO solver is much easier to deploy in a real-world system (Veličković & Blundell, 2021) than a large set of separate models. Our method's model efficiency is crucial for truly extending the NCO approach to MOCO problems. 3 PROBLEM FORMULATION. 3.1 MULTIOBJECTIVE COMBINATORIAL OPTIMIZATION. A multiobjective combinatorial optimization (MOCO) problem can be defined as follows:

$\min_{x \in \mathcal{X}} F(x) = (f_1(x), f_2(x), \ldots, f_m(x))$, (1)

where $\mathcal{X}$ is a discrete search space and $F(x) = (f_1(x), \ldots, f_m(x))$ is an $m$-objective vector. Since the individual objectives often conflict with each other, no single solution can optimize all of them at the same time. Thus, practitioners are interested in Pareto optimal solutions, defined as follows. Definition 1 (Pareto Dominance). Let $x_a, x_b \in \mathcal{X}$; $x_a$ is said to dominate $x_b$ ($x_a \prec x_b$) if and only if $f_i(x_a) \le f_i(x_b)$ for all $i \in \{1, \ldots, m\}$ and $f_j(x_a) < f_j(x_b)$ for some $j \in \{1, \ldots, m\}$. Definition 2 (Pareto Optimality).
A solution x* ∈ X is Pareto optimal if there does not exist x̂ ∈ X such that x̂ ≺ x*. The set of all Pareto optimal solutions is called the Pareto set, and the image of the Pareto set in the objective space is called the Pareto front. Each Pareto solution represents an optimal trade-off among the objectives: it is impossible to further improve one objective without deteriorating another. For many MOCO problems, the size of the Pareto set can be exponentially large with respect to the input size (e.g., the number of nodes in MOTSP), which makes finding the whole Pareto set intractable (Herzel et al., 2021). Therefore, current heuristic-based and learning-based methods all focus on finding a small subset of approximate Pareto solutions {x_1, x_2, ..., x_p} (e.g., p = 101). The decision-makers can only select solutions from this fixed small set, which is unlikely to contain their preferred solutions precisely. 3.2 PREFERENCE-BASED SCALARIZATION. Preference-based scalarization is a widely used approach to solve MOCO problems (Ehrgott, 2005). It decomposes a given multiobjective problem into several subproblems with different preferences; by solving all subproblems, we obtain a set of approximate Pareto solutions. For an m-objective problem, we define a preference vector λ ∈ R^m satisfying λ_i ≥ 0 and ∑_{i=1}^m λ_i = 1, where λ_i is the preference for the i-th objective. Weight-Sum Scalarization is the most straightforward approach: min_{x∈X} g_ws(x|λ) = min_{x∈X} ∑_{i=1}^m λ_i f_i(x), (2) where g_ws(x|λ) is a single-objective subproblem with preference λ. The decision-makers can assign their preferences among the objectives and obtain different subproblems. However, the weighted-sum method can only find solutions on the convex hull of the Pareto set (Ehrgott, 2005).
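As an illustration (not the authors' code), the dominance relation of Definition 1 and the weight-sum subproblem of Eq. 2 can be written down directly; `dominates` and `weighted_sum` are hypothetical helper names:

```python
import numpy as np

def dominates(fa, fb):
    """Definition 1: fa Pareto-dominates fb if it is no worse in every
    objective and strictly better in at least one (minimization)."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def weighted_sum(f, lam):
    """Weight-sum scalarization g_ws(x|lambda) = sum_i lambda_i * f_i(x) (Eq. 2),
    applied to a precomputed objective vector f = F(x)."""
    return float(np.dot(lam, f))
```

A solution whose objective vector is not dominated by any other evaluated vector is an (approximate) Pareto candidate; minimizing `weighted_sum` for a fixed `lam` yields one subproblem per preference.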
Weighted-Tchebycheff (Weighted-TCH) Scalarization is an alternative: min_{x∈X} g_tch(x|λ) = min_{x∈X} max_{1≤i≤m} { λ_i |f_i(x) − (z_i* − ε)| }, (3) where z_i* is the ideal value for objective f_i(x) (e.g., its lower bound), and u_i* = z_i* − ε is a utopia value with a small positive offset ε. According to Choo & Atkins (1983), any Pareto optimal solution is an optimal solution of problem (3) for some specific (but unknown) preference λ. To find the whole Pareto set, one would still need to solve an exponentially large number of subproblems. Drawbacks and Remarks. Although scalarization methods are straightforward, some critical drawbacks cannot be ignored. First, not every scalarized multiobjective problem has the same structure as its single-objective counterpart (Ehrgott, 2005). For MOTSP, each weight-sum scalarized subproblem is a single-objective TSP, so a powerful TSP solver can handle it; this is not the case for MOCVRP. Second, although Tchebycheff scalarization has good theoretical properties, it involves min-max optimization, which typically differs from the original problem. For example, the Tchebycheff-scalarized MOTSP is not a single-objective TSP and cannot be solved by the state-of-the-art single-objective solvers developed in recent years. In our reinforcement learning based framework, we use Tchebycheff scalarization to combine the rewards of the different objectives; it is much more flexible and does not depend on problem-specific conditions.
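The Weighted-TCH subproblem of Eq. 3 admits an equally short sketch; `weighted_tch`, its argument layout, and the default ε are illustrative choices, not part of the paper:

```python
import numpy as np

def weighted_tch(f, lam, z_star, eps=1e-4):
    """Weighted-Tchebycheff scalarization (Eq. 3):
    g_tch(x|lambda) = max_i lambda_i * |f_i(x) - (z_i^* - eps)|,
    where z^* is the ideal point and z^* - eps the utopia point."""
    f = np.asarray(f, dtype=float)
    lam = np.asarray(lam, dtype=float)
    z = np.asarray(z_star, dtype=float)
    return float(np.max(lam * np.abs(f - (z - eps))))
```

Unlike the weight-sum value, this max-form objective can single out Pareto solutions on non-convex parts of the front, at the cost of the min-max structure discussed above.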
This paper proposes an approach for neural multi-objective combinatorial optimization. This approach uses a preference-agnostic encoder along with a weight-dependent decoder to generate approximate Pareto optimal solutions for any arbitrary set of weights of a weighted Tchebyshev scalarization at virtually no additional cost. This is in contrast with existing approaches, which require a significant amount of computation for every new set of weights. Several numerical experiments are conducted, showing favorable results for the proposed method when compared with several other existing evolutionary and learning-based methods.
SP:6962863d422058ae6695c3f08548b9012b92b874
Preference Conditioned Neural Multi-objective Combinatorial Optimization
1 INTRODUCTION. Many real-world applications can be modeled as multiobjective combinatorial optimization (MOCO) problems (Ehrgott & Gandibleux, 2000). The multiobjective traveling salesman problem (MOTSP) (Lust & Teghem, 2010a), the multiobjective vehicle routing problem (MOVRP) (Jozefowiez et al., 2008) and the multiobjective knapsack problem (MOKP) (Bazgan et al., 2009) are three widely studied examples. These problems have multiple conflicting objectives, and no single solution can optimize all of them at the same time; instead, there is a set of Pareto solutions with different optimal trade-offs among the objectives. Finding all exact Pareto optimal solutions for a MOCO problem is very challenging: finding even one Pareto solution is already NP-hard for many problems (Ehrgott & Gandibleux, 2000), and the number of Pareto solutions can be exponentially large with respect to the problem size (Ehrgott, 2005; Herzel et al., 2021). Moreover, the decision-maker's preference among the objectives is usually unknown in advance, making it hard to reduce the problem to a single-objective one. Over the past several decades, many methods have been developed to find an approximate Pareto set for different MOCO problems within a reasonable computational time. These methods often need carefully handcrafted and specialized heuristics for each problem, which can be very labor-intensive in practice. In many real-world applications, practitioners need to repeatedly solve different problems with similar characteristics, for which problem instances can be easily obtained or generated (Bengio et al., 2020). It is therefore desirable to learn the patterns behind these problem instances, explicitly or implicitly, in order to design efficient algorithms (Cappart et al., 2021a); machine learning techniques naturally serve this purpose.
Some learning-based methods have recently been proposed for solving single-objective combinatorial optimization problems (Bengio et al., 2020; Vesselinova et al., 2020; Mazyavkina et al., 2021; Cappart et al., 2021a). In this work, we extend the learning-based approach to solve MOCO problems in a principled way, as shown in Figure 1. Our main contributions include: • We propose a novel neural combinatorial optimization method to approximate the whole Pareto set, with its infinitely many trade-offs, via a single model, which allows decision-makers to directly generate any preferred trade-off solution without extra search. This novel and powerful property is not supported by any other learning-based or heuristic-based method. • We develop an efficient end-to-end reinforcement learning algorithm to train the single model for all different preferences simultaneously, and a simple yet powerful active adaption method to handle out-of-distribution problem instances. • We conduct comprehensive experiments on MOTSP, MOVRP and MOKP problems with different settings. The results show that our proposed method can successfully approximate the Pareto sets for different problems in an efficient way, and that it significantly outperforms other methods in solution quality, speed, and/or model efficiency. 2 BACKGROUND AND RELATED WORK. Multiobjective Combinatorial Optimization (MOCO). MOCO has attracted much research effort from different communities over the past several decades (Sawaragi et al., 1985; Wallenius et al., 2008; Herzel et al., 2021). There are two main approaches to tackling MOCO problems: exact methods and approximation methods (Ehrgott, 2005). Exact methods can be impractically costly when, as often happens, the MOCO problem is NP-hard and the number of Pareto optimal solutions is huge (Florios & Mavrotas, 2014).
This submission treats multi-objective combinatorial optimization problems and aims to approximate the Pareto set. The idea is to build a single ML model that represents the Pareto set by providing a Pareto-optimal solution for any desired trade-off. The model is built using reinforcement learning and may either be used to find individual solutions with a fixed trade-off, or to approximate the Pareto set with uniform samples. The authors prove that the Pareto set is approximated well if individual trade-offs are approximated well.
This paper presents a learning approach for multi-objective combinatorial optimization (MOCO), a challenging problem not well studied by previous machine learning researchers. The proposed model is capable of predicting approximate Pareto-optimal solutions for various preferences with a single model, via attention networks and a so-called "hypernetwork". Experimental results on the multi-objective versions of TSP, VRP, and KP show the effectiveness of the proposed approach.
On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks
1 INTRODUCTION. Endowing models with the ability to capture uncertainty is of crucial importance in machine learning. Uncertainty can be usefully categorized into two main types: epistemic uncertainty and aleatoric uncertainty (Kiureghian & Ditlevsen, 2009). Epistemic uncertainty accounts for subjective uncertainty in the model, which is reducible given sufficient data. By contrast, aleatoric uncertainty captures the stochasticity inherent in the observations and can itself be subdivided into homoscedastic and heteroscedastic uncertainty. Homoscedastic uncertainty is noise that stays constant across different inputs, whereas heteroscedastic uncertainty varies depending on the inputs to the model. There are well-established benefits to modeling each type of uncertainty. For instance, capturing epistemic uncertainty enables effective budgeted data collection in active learning (Gal et al., 2017), allows for efficient exploration in reinforcement learning (Osband et al., 2016), and is indispensable in cost-sensitive decision making (Amodei et al., 2016). On the other hand, quantifying aleatoric uncertainty enables learning dynamics models of stochastic processes (e.g., for model-based or offline reinforcement learning) (Chua et al., 2018; Yu et al., 2020), improves performance in semantic segmentation, depth regression and object detection (Kendall & Gal, 2017; Harakeh & Waslander, 2021), and allows for risk-sensitive decision making (Dabney et al., 2018; Vlastelica et al., 2021). Many modern applications of machine learning rely on neural networks and deep learning tools to achieve state-of-the-art performance. However, most neural network models are not readily equipped with a capacity for uncertainty estimation. To address this shortcoming, a multitude of approaches have been proposed in the past few years.
Nevertheless, the majority of work in this domain has focused on epistemic uncertainty quantification (Blundell et al., 2015; Gal & Ghahramani, 2016) or uncertainty estimation for classification (Hendrycks & Gimpel, 2017; Guo et al., 2017; Ovadia et al., 2019; Mukhoti et al., 2021). We examine a common approach for quantifying aleatoric uncertainty in neural network regression. By assuming that the regression targets follow a particular distribution, we can use a neural network to predict the parameters of that distribution, typically the input-dependent mean and variance when assuming a heteroscedastic Gaussian distribution. The parameters of the network can then be learned using maximum likelihood estimation (MLE), i.e., by minimizing the negative log-likelihood (NLL) criterion using stochastic gradient descent. This simple procedure, which is the de-facto standard (Nix & Weigend, 1994; Lakshminarayanan et al., 2017; Kendall & Gal, 2017; Chua et al., 2018; Kloss et al., 2021), is known to produce overconfident variance estimates. Whereas strategies have been proposed to alleviate this specific issue (Detlefsen et al., 2019; Hu et al., 2020; Stirn & Knowles, 2020), we argue that an equally important issue is that this procedure can additionally lead to subpar mean fits. In this work, we analyse this issue and propose a simple modification to mitigate it. Summary of contributions. We demonstrate a pitfall of optimizing the NLL loss for neural network regression, one that hinders the training of accurate mean predictors (see Fig. 1 for an illustrative example). The primary culprit is the strong dependence of the gradients on the predictive variance. While such dependence is generally known to be responsible for instabilities in the joint optimization of mean and variance estimators (Takahashi et al.
, 2018; Stirn & Knowles, 2020), we identify a fresh perspective on how this dependence can further be problematic. Namely, we hypothesize that the issue arises because the NLL loss scales down the gradient of poorly-predicted data points relative to well-predicted ones, effectively undersampling the poorly-predicted data points. We then introduce an alternative loss formulation that counteracts this by weighting the contribution of each data point to the overall loss by its β-exponentiated variance estimate, where β controls the extent to which the gradients depend on the predictive variance. This formulation subsumes the standard NLL loss for β = 0 and lessens the dependency of gradients on the variance estimates for 0 < β ≤ 1. Interestingly, using β = 1 completely removes this dependency for training the mean estimator, yielding the standard mean squared error (MSE) loss, but with the additional capacity for uncertainty estimation. Finally, we empirically show that our modified loss formulation largely mitigates the issue of poor fits, achieving considerable improvements on a set of standard benchmarks while exhibiting more robustness to network architecture and learning rate configurations. 2 PRELIMINARIES. Let X, Y be two random variables describing the input and target, following the joint distribution P(X, Y). We assume that Y is conditionally independent given X and that it follows some probability distribution P(Y | X). In the following, we use the common assumption that Y is normally distributed given X, i.e., P(Y | X) = N(µ(X), σ²(X)), where µ: R^M → R and σ²: R^M → R_+ are respectively the true input-dependent mean and variance functions. Equivalently, we can write Y = µ(X) + ε(X), with ε(X) ∼ N(0, σ²(X)); i.e., Y is generated from X by µ(X) plus zero-mean Gaussian noise with variance σ²(X).
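The β-exponentiated weighting described above can be sketched in a few lines. This is an illustrative implementation, not the authors' reference code; in particular, it assumes (as the β = 1 ⇒ MSE-like-mean-gradient property requires) that the variance weight is treated as a constant, i.e., gradient-stopped, during backpropagation:

```python
import numpy as np

def beta_nll(y, mu_hat, var_hat, beta=0.5):
    """Per-point beta-weighted Gaussian NLL (illustrative sketch).

    The weight var_hat ** beta is assumed to be gradient-stopped in training.
    beta = 0 recovers the standard NLL; beta = 1 cancels the 1/var scaling
    of the mean gradient, giving an MSE-like objective for the mean while
    still producing variance estimates.
    """
    y, mu_hat, var_hat = map(np.asarray, (y, mu_hat, var_hat))
    nll = 0.5 * (np.log(var_hat) + (y - mu_hat) ** 2 / var_hat)
    return var_hat ** beta * nll
```

In an autograd framework one would wrap `var_hat ** beta` in a stop-gradient (e.g., a detached tensor) before multiplying.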
This input-dependent variance quantifies the heteroscedastic uncertainty, or input-dependent aleatoric uncertainty. ¹For notational convenience, we focus on univariate regression but point out that the work extends to the multivariate case as well. To learn estimates µ̂(X), σ̂²(X) of the true mean and variance functions, it is common to use a neural network f_θ parameterized by θ. Here, µ̂(X) and σ̂²(X) can be outputs of the final layer (Nix & Weigend, 1994) or of two completely separate networks (Detlefsen et al., 2019). The variance output is hereby constrained to be positive using a suitable activation function, e.g. softplus. The optimal parameters θ*_NLL can then be found using maximum likelihood estimation (MLE) by minimizing the negative log-likelihood (NLL) criterion L_NLL under the distribution P(X, Y):

θ*_NLL = argmin_θ L_NLL(θ; D) = argmin_θ E_{X,Y}[ ½ log σ̂²(X) + (Y − µ̂(X))² / (2 σ̂²(X)) + const ].  (1)

In contrast, standard regression minimizes the mean squared error (MSE) L_MSE:

θ*_MSE = argmin_θ L_MSE(θ; D) = argmin_θ E_{X,Y}[ (Y − µ̂(X))² / 2 ].  (2)

In practice, Eq. 1 and Eq. 2 are optimized using stochastic gradient descent with batches of samples drawn from P(X, Y). The gradients of L_NLL with respect to µ̂(X) and σ̂²(X) are given by

∇_µ̂ L_NLL(θ) = E_{X,Y}[ (µ̂(X) − Y) / σ̂²(X) ],  (3)
∇_σ̂² L_NLL(θ) = E_{X,Y}[ (σ̂²(X) − (Y − µ̂(X))²) / (2 (σ̂²(X))²) ].  (4)

3 ANALYSIS. We now return to the example of trying to fit a sinusoidal function from Sec. 1. Recall from Fig. 1 that using the Gaussian NLL as the objective resulted in a suboptimal fit. In contrast, using MSE as the objective, the model converged without problems in reasonable time. We now analyze the reasons behind this surprising result. From Eq. 3, we see that the true mean µ(X) is a minimizer of the NLL loss. It thus becomes clear that a) the solution found in Fig.
1 is not the optimal one, and b) the NLL objective should, in principle, drive µ̂(X) to the optimal solution µ(X). So why is the model not converging? We identify two main culprits for the non-convergence of the Gaussian NLL objective:

1. Initial flatness of the feature space can create an undercomplex but locally stable mean fit. This fit results from local symmetries and requires a form of symmetry breaking to escape.
2. The NLL loss scales the gradient of badly-predicted points down relative to well-predicted points, effectively undersampling those points. This effect worsens as training progresses.

These culprits and their effect on training are illustrated in Fig. 2 (left). If the network cannot fit a certain region yet, because its feature space (spanned by the last hidden layer) is too coarse, the data in that region exhibit a high effective variance. This leads to down-weighting the data from these regions, fueling a vicious circle that self-amplifies the increasingly imbalanced weighting. In the following, we analyse these effects and their causes in more detail. 3.1 SYMMETRY AND FEATURE NON-LINEARITY. It is instructive to see how the model evolves over training, as shown in Fig. 3. The network first learns essentially the best linear fit while adapting the variance to match the residuals. The situation is locally stable. That is, due to the symmetry of errors below and above the mean fit, there is no incentive to change the situation. Symmetry breaking is required for further progress. One form of symmetry breaking comes from the stochasticity of mini-batch sampling in SGD, or from natural asymmetries contained in the dataset, e.g. outlier points. In addition, we hypothesize that the local non-linearity of the feature space plays an important role in creating the necessary non-linear fit. Thus, let us consider the non-linearity of the feature space. This quantity is not easy to capture.
To approximate it, we compute how much the Jacobian J_f of the features f(x) with respect to the input varies in an L2-ball B with radius r around a point x, denoted as the Jacobian variance:²

V(x) = (1/|B|) Σ_{x′∈B} ( J_f(x′) − (1/|B|) Σ_{x″∈B} J_f(x″) )²,  B = { x′ : ‖x − x′‖₂ ≤ r }.  (5)

Figure 4 visualizes the Jacobian variance over the input space as a function of the training progress. Although initially relatively flat, it becomes more fine-granular in parts of the input space – the parts which are later well fit. The region with low Jacobian variance remains stuck in this configuration, see Fig. 1. This provides evidence that the non-linearity of the feature space is important for the success or failure of learning. However, why does gradient descent not break out of this situation?

Figure 4: Jacobian variance V(x) (see Eq. 5) over training time. Figure 5: Probability of sampling a data point at x over training time.
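A minimal numerical sketch of the Jacobian variance in Eq. 5 for a one-dimensional input: the function `jacobian_variance` below, its Monte-Carlo sampling of the ball, and the central-difference Jacobian estimate are all illustrative choices of ours, not the paper's implementation.

```python
import numpy as np

def jacobian_variance(f, x, r=0.5, n_samples=64, h=1e-5, seed=0):
    """Approximate V(x) from Eq. 5 for a scalar-input function f.

    Samples points x' from the ball B = {x' : |x - x'| <= r}, estimates the
    Jacobian (here simply the derivative) at each point by central finite
    differences, and returns the variance of those Jacobians over the ball.
    """
    rng = np.random.default_rng(seed)
    ball = x + rng.uniform(-r, r, size=n_samples)
    jac = (f(ball + h) - f(ball - h)) / (2.0 * h)  # df/dx at each sampled point
    return np.mean((jac - jac.mean()) ** 2)

# A linear map has a constant Jacobian, so V(x) is ~0; a curved map (sin)
# has a varying Jacobian, so V(x) > 0.
v_linear = jacobian_variance(lambda t: 2.0 * t, x=1.0)
v_sine = jacobian_variance(np.sin, x=1.0)
```

In the paper's setting f would be the network's last-hidden-layer feature map and J_f a matrix; the scalar case above only illustrates the definition.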
The author(s) attribute the under-fitting of predictive means by neural networks that parameterize heteroscedastic Gaussian likelihoods to 1) an initial inability to break symmetry and 2) the Gaussian negative log-likelihood's tendency to down-weight poorly predicted points. The authors note the latter effectively undersamples those points with poor predictive means, thus inhibiting their fit in a rich-get-richer scheme. The author(s) propose two alternative loss functions: moment matching (MM) and a loss weighted by the $\beta$-exponentiated variance estimate with $\beta \in [0,1]$. The author(s) demonstrate their $\beta$-weighted loss generally outperforms NLL ($\beta = 0$) and MSE.
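The β-exponentiated weighting this summary refers to can be sketched concretely. This is a hypothetical NumPy illustration of the formulation described in the contributions paragraph; the weight σ̂^{2β} is treated as a constant under differentiation (a stop-gradient), which is what makes β = 1 recover the MSE gradient for the mean.

```python
import numpy as np

def beta_nll(y, mu, var, beta):
    """Per-sample beta-weighted Gaussian NLL: var**beta times the Eq. 1 terms.

    The weight var**beta would carry a stop-gradient in a real autodiff
    implementation, so beta interpolates between NLL (beta = 0) and an
    MSE-like objective for the mean (beta = 1).
    """
    weight = var ** beta  # stop-gradient in a real autodiff implementation
    return weight * (0.5 * np.log(var) + (y - mu) ** 2 / (2.0 * var))

def grad_mu(y, mu, var, beta):
    """Analytic per-sample gradient of beta_nll w.r.t. mu (weight held fixed)."""
    return (mu - y) * var ** (beta - 1.0)

y, mu, var = 0.0, 1.0, 4.0
g_nll = grad_mu(y, mu, var, beta=0.0)  # (mu - y) / var -> 0.25
g_mse = grad_mu(y, mu, var, beta=1.0)  # (mu - y), the MSE gradient -> 1.0
```

At β = 1 the inverse-variance factor cancels from the mean gradient entirely, while the variance head is still trained, matching the paper's claim of "MSE with uncertainty estimation".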
SP:3546dc5acf2d5f035d6e8c03fbe34ef6ecdba826
On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks
1 INTRODUCTION . Endowing models with the ability to capture uncertainty is of crucial importance in machine learning . Uncertainty can be usefully categorized into two main types : epistemic uncertainty and aleatoric uncertainty ( Kiureghian & Ditlevsen , 2009 ) . Epistemic uncertainty accounts for subjective uncertainty in the model , one that is reducible given sufficient data . By contrast , aleatoric uncertainty captures the stochasticity inherent in the observations and can itself be subdivided into homoscedastic and heteroscedastic uncertainty . Homoscedastic uncertainty is noise that stays constant across different inputs , whereas heteroscedastic uncertainty is one that varies depending on the inputs to the model . There are well-established benefits for modeling each type of uncertainty . For instance , capturing epistemic uncertainty enables effective budgeted data collection in active learning ( Gal et al. , 2017 ) , allows for efficient exploration in reinforcement learning ( Osband et al. , 2016 ) , and is indispensable in cost-sensitive decision making ( Amodei et al. , 2016 ) . On the other hand , quantifying aleatoric uncertainty enables learning dynamics models of stochastic processes ( e.g . for model-based or offline reinforcement learning ) ( Chua et al. , 2018 ; Yu et al. , 2020 ) , improves performance in semantic segmentation , depth regression and object detection ( Kendall & Gal , 2017 ; Harakeh & Waslander , 2021 ) , and allows for risk-sensitive decision making ( Dabney et al. , 2018 ; Vlastelica et al. , 2021 ) . Many modern applications of machine learning rely on neural networks and deep learning tools to achieve state-of-the-art performance . However , most neural network models are not readily equipped with a capacity for uncertainty estimation . To address this shortcoming , a multitude of approaches have been proposed in the past few years . 
The authors analyze the known phenomenon that optimizing the Gaussian likelihood of a probabilistic neural network with heteroscedastic output variance via gradient ascent can get stuck at solutions with a suboptimal mean fit, compensated by a large output variance: the likelihood amplifies non-uniform initial distributions of feature granularity through variance weighting, i.e., the gradients w.r.t. the mean are scaled by the inverse variance, leading to well-fitted samples dominating the gradient. The authors argue that this effect can be undesirable because it can prevent spending model expressivity on hard-to-fit regions, while it can be desirable in other situations because it allows the network to ignore outliers arising due to noise. This is complementary to the MSE objective, which is dominated by badly-fitted samples (and thus hard-to-fit regions) but sensitive to outliers, and which does not allow fitting the output variance. The authors propose a modified loss function, $\beta$-NLL, which interpolates between the likelihood and MSE loss functions with the incentive to retain the "best of both worlds". They compare $\beta$-NLL with standard NLL, MSE, and a moment-matching version on several datasets.
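The inverse-variance scaling this summary describes is easy to see numerically. The numbers below are purely illustrative: when the predicted variance matches the squared residual in each region, the well-fit point contributes the larger mean gradient despite its tiny error.

```python
def nll_mean_grad(residual, var):
    """Per-sample gradient of the Gaussian NLL w.r.t. the mean (Eq. 3 integrand)."""
    return residual / var

# Well-fit region: small residual, matching small predicted variance.
g_well_fit = nll_mean_grad(residual=0.1, var=0.1**2)   # ~10.0
# Badly-fit region: large residual, matching large predicted variance.
g_badly_fit = nll_mean_grad(residual=2.0, var=2.0**2)  # 0.5
ratio = g_well_fit / g_badly_fit                       # ~20x
```

The well-fit sample's gradient is roughly twenty times larger here, which is the "rich-get-richer" effective undersampling of hard-to-fit regions that the paper identifies.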
**Summary** For regression tasks with heteroscedastic noise, a Gaussian model whose mean and variance are functions of the input is frequently used. Neural networks are then used as function approximators for the mean and variance and are fit by minimizing the negative log-likelihood. The paper identifies a problem with this approach: optimization can get stuck in configurations where the predicted mean is far from the true mean, as this is compensated for by a high predicted variance that also stalls learning. By adjusting the NLL objective, the authors compensate for the problem.
On Redundancy and Diversity in Cell-based Neural Architecture Search
1 INTRODUCTION. Neural Architecture Search (NAS), which automates the design of neural networks by task, has seen enormous advancements since its invention. In particular, cell-based NAS has become an important technique in NAS research: in contrast to attempts that directly aim to design whole architectures at once, and inspired by classical manually-designed architectures like VGGNet and ResNet that feature repeated blocks, cell-based NAS searches for repeated cells only and later stacks them into full architectures. This simplification reduces the search space (although it remains highly complex), and allows easy transfer of architectures across different tasks/datasets, application scenarios and resource constraints (Elsken et al., 2019). Indeed, although alternative search spaces exist, cell-based NAS has received an extraordinary amount of research attention: based on our preliminary survey, almost 80% of the papers (detailed in App C) proposing new NAS methods published in the top machine learning conferences (ICLR, ICML, NEURIPS) in the past year show at least one part of their major results in standard Differentiable Architecture Search (DARTS) cell-based spaces and/or highly related ones, and approx. 60% demonstrate results in such spaces only; it is fair to state that such cell-based spaces currently dominate. However, the lagging understanding of these architectures and of the search space itself stands in stark contrast with the volume and pace of new search algorithms. The literature on understanding why this dominant search space works and comparing what different NAS methods have found is much more limited in depth and scope, with most existing works typically focusing exclusively on a few search methods or highlighting high-level patterns only (Shu et al., 2020; Zela et al., 2020).
We argue that strengthening such an understanding is crucial on multiple fronts, and the lack thereof is concerning. First, studying a diverse set of architectures enables us to discover patterns shared across the search space, which forms the foundation of comparison amongst search methods and heavily influences the resultant performances (Yang et al., 2020a): if the search space itself is flawed, designing and iterating new search methods on it, which is typically computationally intensive, can be misleading and unproductive. Conversely, understanding any pitfalls of the existing search spaces informs us on how to design better search spaces in the future, which is critical in advancing the goal of NAS in finding novel and high-performing architectures, not only in conventional CNNs but also in emerging NAS paradigms such as Transformers (which may also take the form of a cell-based design space). Second, opening the NAS black box enables us to distill the essence of strong-performing NAS architectures beneath their surface of complexity. Unlike manually designed architectures, where designers usually attribute performance to specific design choices, NAS architectures discovered in a similar or identical search space are, owing to the apparent complexity of the design space, often compared only in terms of final performance on a standard dataset (e.g. CIFAR-10 test error). This can be problematic, as we do not necessarily understand what NAS has discovered that led to the purported improvements, and the metric itself is a poor one on which external factors such as hyperparameter settings, variations in training/data augmentation protocols and even just noise can exert a greater influence than the architectural design itself (Yang et al., 2020a).
However, by linking performance to specific designs, we can more readily ascertain whether any performance differences stem from the architectures rather than from the interfering factors. We aim to address this problem by presenting a post-hoc analysis of the well-performing architectures produced by technically diverse search methods. Specifically, we utilise explainable machine learning tools to open the NAS black box by inspecting the good- and bad-performing architectures produced by a wide range of NAS search methods in the dominant DARTS search space. We find:
• Performances of architectures can often be disproportionately attributed to a small number of simple yet critical features that resemble known patterns in classical network designs;
• Many designs almost universally adopted simply contribute to complexity but not performance;
• The nominal complexity of the search spaces poorly reflects the actual diversity of the (high-performing) architectures discovered, the functional parts of which are often very similar despite the technical diversity in search methods and the seeming disparity in topology. In fact, with a few simple and human-interpretable constraints, almost any randomly sampled architecture can perform on par with or exceed those produced by the state-of-the-art NAS methods over varying network sizes and datasets (CIFAR-10/IMAGENET).
Ultimately, these findings prompt us to rethink the suitability of the current standard protocol in evaluating NAS and the capability to find truly novel architectures within the search space. We finally provide suggestions for prospective new search spaces inspired by these findings.
[Figure 1: the DARTS cell, a DAG with input nodes input0 and input1, intermediate nodes 0-3 and an out node. The candidate primitives are avg_pool_3x3 (ap3), max_pool_3x3 (mp3), skip_connect (skip), dil_conv_5x5 (d5), dil_conv_3x3 (d3), sep_conv_5x5 (s5) and sep_conv_3x3 (s3).]
Each intermediate node computes $x^{(j)} = \sum_{i<j} o^{(i,j)}(x^{(i)})$, and finally the out node concatenates all outputs from the intermediate nodes.
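To make the per-node computation concrete, here is a toy evaluation of such a cell as a DAG. The edge dictionary, scalar "features" and identity/doubling "primitives" are purely illustrative, not the actual DARTS implementation (which operates on feature maps and concatenates along channels):

```python
def cell_forward(input0, input1, ops):
    """Evaluate a DARTS-style cell. `ops` maps an edge (i, j) to a callable;
    nodes 0 and 1 are the cell inputs, nodes 2..5 the intermediates.
    Each intermediate node j computes x_j = sum over i < j of o_{i,j}(x_i)."""
    states = [input0, input1]
    for j in range(2, 6):
        states.append(sum(op(states[i]) for (i, jj), op in ops.items() if jj == j))
    # the "out" node concatenates all intermediate outputs (here: a tuple)
    return tuple(states[2:])

# toy example: each intermediate node has exactly 2 in-edges, as in DARTS
ops = {(0, 2): lambda x: x, (1, 2): lambda x: 2 * x,
       (0, 3): lambda x: x, (2, 3): lambda x: x,
       (2, 4): lambda x: x, (3, 4): lambda x: x,
       (3, 5): lambda x: x, (4, 5): lambda x: x}
print(cell_forward(1.0, 1.0, ops))  # (3.0, 4.0, 7.0, 11.0)
```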
As shown in Fig 1, the DARTS cell allows for up to 14 possible spots to place operations. However, to ensure that all intermediate nodes are connected in the computational graph, each intermediate node is constrained to have exactly 2 in-edges from 2 different preceding nodes. Lastly, it is conventional to search for two cells simultaneously, namely normal $\alpha_n$ and reduce $\alpha_r$ cells, with $\alpha_r$ placed at 1/3 and 2/3 of the total depth of the network and $\alpha_n$ everywhere else. In total, there are $\prod_{k=1}^{4} 7^2 \frac{k(k+1)}{2} \approx 10^9$ distinct cells without considering graph isomorphism, and since both $\alpha_n$ and $\alpha_r$ are required to fully specify an architecture, there exist approx. $(10^9)^2 = 10^{18}$ distinct architectures (Liu et al., 2019). Other cells commonly used in the literature are almost invariably closely related to, and can be viewed as simplified or more complicated variants of, the DARTS space. For example, the NAS-Bench-201 (NB201) search space used in the NAS benchmark is similar to but simplified from the DARTS cell (detailed and analysed in App D). Other works have increased the number of intermediate nodes (Wang et al., 2021b), expanded the primitive pool $\mathcal{A}$ (Hundt et al., 2019) and/or relaxed other constraints (e.g. Shi et al. (2020) allow both incoming edges of an intermediate node to come from the same preceding node), but do not fundamentally differ from the DARTS space. Additionally, search spaces like NAS-Bench-101 (NB101) (Ying et al., 2019a) feature cells where the operations are represented as node features, although we believe that the way the cells are represented should not affect any findings we will demonstrate, as the DARTS/NB201 spaces can similarly be equivalently represented in a feature-on-node style (Ru et al., 2021; Pham et al., 2018).
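The cell count can be sanity-checked with a few lines, following the combinatorics above: intermediate node $k$ ($k = 1, \dots, 4$) picks 2 in-edges from its $k+1$ possible predecessors, i.e. $\binom{k+1}{2} = \frac{k(k+1)}{2}$ wirings, and each of the 2 edges carries one of the 7 primitives:

```python
from math import prod

NUM_PRIMITIVES = 7  # |A| in the DARTS space

# product over the 4 intermediate nodes of (wiring choices) x (primitive choices)
cells = prod(NUM_PRIMITIVES**2 * k * (k + 1) // 2 for k in range(1, 5))
print(f"distinct cells: {cells:.2e}")             # ≈ 1.04e9
print(f"distinct architectures: {cells**2:.2e}")  # (≈10^9)^2 ≈ 1.08e18
```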
We do not experiment on NB101 in the present paper: like NAS-Bench-301 (NB301), it features CIFAR-10 only, but the latter is much closer in size to the “realistic” search space the community expects and is more amenable to the one-shot methods that are currently mainstream in NAS. 3 OPERATION-LEVEL ANALYSIS: REDUNDANCIES IN SEARCH SPACES. The cell-based NAS search space contains multiple sources of complexity (deciding both the specific wiring and the operations on the edges; searching for 2 cells independently, etc.), each expanding the search space combinatorially. While this complexity is argued to be necessary for the search space to be expressive enough for good-performing novel architectures to be found, it is unknown whether all these sources of complexity are equally important, or whether the performance critically depends on some sub-features only. We argue that answering this question, and identifying such features if they are indeed present, is highly significant: it enables us to separate the complexities that positively contribute to performance from those that do not, which would subsequently and fundamentally affect the design of search methods. At the level of individual architectures, it helps us understand what NAS has truly discovered, by removing confounding factors and focusing on the key aspects of the architectures. For the findings to be generally applicable, we aim not to be specific to any search method; this requires us to study a large set of architectures that are high-performing but technically diverse in terms of the search methods that produce them. Fortunately, the training set of NB301 (Siems et al., 2020), which includes >50,000 architecture-performance pairs in the DARTS space obtained using a combination of random sampling and more than 10 state-of-the-art yet technically diverse methods (detailed in App.
B.1), could be used for such an analysis: to build a surrogate model that accurately predicts architecture performance across the entire space. We primarily focus on the top-5% (or top-2,589) architectures of the training set, since by definition we overwhelmingly care about the good-performing architectures in NAS, although as we will show, the main findings hold true also for architectures found by methods not covered by NB301 and for other search spaces like the NB201 space. As shown in Fig 2, the top architectures are well spread out in the search space and are well separated by search methods, seemingly suggesting that the architectures discovered by different methods are diverse in characteristics. Lastly, the worse-performing cells could also be of interest, as any features observed there could be the ones we would actively like to avoid; we analyse them in App. A. Operation Importance. To untangle the influence of each part of the architecture, we introduce Operation Importance (OI), which measures the incremental effect of individual operations, the smallest possible features, on the overall performance. We quantify this by measuring the expected change in performance when perturbing the type or wiring of the operation in question: considering an edge-attributed cell $\alpha$ with edge $e_{i,j}$ currently assigned the primitive $o_k$, the operation importance of $o_k$ on that edge of cell $\alpha$ is given by:
$$\mathrm{OI}(\alpha, e_{i,j} := o_k) = \frac{1}{|N(\alpha, e_{i,j} := o_k)|} \sum_{m=1}^{|N(\alpha, e_{i,j} := o_k)|} y(\alpha_m) - y(\alpha), \qquad (1)$$
where $y(\alpha)$ denotes the validation accuracy, or another appropriate performance metric, of the fully-trained architecture induced by cell $\alpha$.
We are interested in measuring the importance of both the primitive choice and where the operation is located relative to the entire cell, and we use $N(\alpha, e_{i,j} := o_k)$ to denote the set of neighbour cells that differ from $\alpha$ only in the edge in question: either assigned another primitive from $\mathcal{A} \setminus \{o_k\}$, or assigned the same primitive $o_k$ but with one end node of $e_{i,j}$ rewired to another node, subject to any constraints in the search space. It is worth noting that OI is an instance of Permutation Feature Importance (PFI) (Breiman, 2001; Fisher et al., 2019; Molnar, 2019). Given the categorical nature of the “features” in this case, we may enumerate all permutations on the edge in question instead of having to rely on random sampling as conventional PFI does. An important operation by Eq (1) would therefore be accorded an OI with a large magnitude in either direction, whereas an irrelevant one would have a value of zero, since altering it on expectation leads to no change in architecture performance. We compute the OI of each operation of the 2,589 architectures, and to circumvent the computational challenge of having to train all neighbour architectures of $\alpha$ (which are outside the NB301 training set) from scratch to compute $y(\cdot)$, we use the performance prediction $\tilde{y}(\cdot)$ from NB301. However, as we will show, we validate all key findings by actually training some architectures, to ensure that they are not artefacts of the statistical surrogate (training protocols detailed in App. B.2). Findings. The most natural way to group the operations is by their primitive and cell (i.e. normal or reduce) types, and we show the main results in Figs. 3 and 4. In Fig.
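Eq (1), with the neighbour set enumerated exhaustively rather than sampled, can be sketched as follows. This is a simplified illustration: `predict` stands in for the NB301 surrogate $\tilde{y}(\cdot)$, the cell is modelled as a plain edge-to-primitive dict, and the rewiring perturbations are omitted for brevity (only primitive swaps are enumerated):

```python
def operation_importance(arch, edge, primitives, predict):
    """OI of the primitive currently on `edge`: mean predicted accuracy over all
    one-edge primitive swaps, minus the predicted accuracy of `arch` itself."""
    current = arch[edge]
    neighbour_scores = []
    for p in primitives:
        if p == current:
            continue  # neighbours must differ from arch on this edge
        perturbed = dict(arch)
        perturbed[edge] = p
        neighbour_scores.append(predict(perturbed))
    return sum(neighbour_scores) / len(neighbour_scores) - predict(arch)

# toy surrogate: accuracy drops from 0.9 to 0.8 if the skip on this edge is lost,
# so the skip gets a large-magnitude (negative) OI, i.e. it is important
primitives = ["sep_conv_3x3", "sep_conv_5x5", "skip_connect", "max_pool_3x3"]
arch = {("in0", "n0"): "skip_connect", ("in1", "n0"): "sep_conv_3x3"}
predict = lambda a: 0.9 if a[("in0", "n0")] == "skip_connect" else 0.8
print(operation_importance(arch, ("in0", "n0"), primitives, predict))
```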
3(b), we discretise the OI scores using the threshold 0.001 (0.1%), which is similar to the observed noise standard deviation of the better-performing architectures from NB301 (quantifying the expected variation in performance if an identical architecture is re-trained from scratch), to highlight the important operations with $|\mathrm{OI}| \geq 0.001$: these are the ones we can more confidently assert to affect the architecture performance beyond noise effects. We summarise the key findings below: (#1) Only a fraction of operations is critical for good performance within cells: If all operations needed to be fully specified (i.e., with both the primitive choice and the specific wiring determined) for good performance, perturbing any of them should have led to significant performance deterioration. However, this is clearly not the case in practice: comparing Fig. 3(a) and (b), we observe that only a fraction of the operations are important based on our definition. To verify this directly beyond predicted performances, we randomly select 30 architectures from the top 5% of the training set. Within each cell, we sort their 16 operations by their OI in both ascending and descending orders. We then successively disable the operations by zeroising them, and train the resulting architectures from scratch with an increasing number of operations disabled1, until only half of the operations remain active (Fig. 5). The results largely confirm our findings and show that the OI, although computed via predicted performance, accurately represents the ground-truth importance of the operations: on average, we need to disable 6 low-OI operations to see a statistically significant drop in performance, and almost half of the operations to match the effect of disabling just 2 high-OI ones.
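The ablation schedule used in this check can be sketched in a few lines. One assumption worth flagging: since importance under Eq (1) is a large OI magnitude in either direction, the sketch ranks operations by $|\mathrm{OI}|$; the operation labels and scores below are made up for illustration:

```python
def ablation_order(oi_scores, least_important_first=True):
    """Order operation ids for successive zeroising, ranked by |OI|.
    `oi_scores` maps an operation id to its (signed) OI value."""
    return sorted(oi_scores, key=lambda op: abs(oi_scores[op]),
                  reverse=not least_important_first)

# illustrative OI values for four operations (edge:primitive labels are made up)
scores = {"e02:s3": 0.004, "e12:skip": -0.006, "e03:mp3": 0.0002, "e23:s5": 0.001}
print(ablation_order(scores))  # ['e03:mp3', 'e23:s5', 'e02:s3', 'e12:skip']
```

Disabling operations in this order zeroises the near-noise-level ones (|OI| < 0.001) first, which is why several of them can be removed before any significant drop appears.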
On the other hand, disabling the high-OI operations quickly reduces the performance and in some cases stalls the training altogether: noting that the overall standard deviation of the entire NB301 training set is just approx. 0.8%, the drop in performance is quite dramatic. (#2) Reduce cells are relatively unimportant for good performance: Searching independently for reduce cells scales the search space quadratically, but Fig. 3(b) shows that they contain much fewer important operations, and Fig. 4 shows that the OI distributions across all primitives are centered close to zero in reduce cells: both suggest that the reduce cell is less important to the architecture performance. To verify this, we draw another 30 random architectures. For each of them, we construct and train from scratch the original architecture and 4 derived ones, with (a) the reduce cell set identical to the normal cell, (b) the reduce cell with all operations set to parameterless skip connections, (c) the normal cell set identical to the reduce cell and (d) the normal cell with all operations set to skip connections. (Footnote 1: note that we may not obtain NB301 performance predictions on these architectures, as NB301 requires all 16 operations to be enabled with valid primitives.) As shown in Fig. 6: setting the reduce cell to be identical to the normal cell leads to no significant change in performance, while the reverse is not true.
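The four controls of Finding #2 can be sketched as follows, again modelling a cell as an edge-to-primitive dict; the variant keys and the `skip_connect` literal mirror the (a)-(d) constructions above but are otherwise illustrative:

```python
def derived_variants(normal, reduce_):
    """Build the 4 derived (normal, reduce) cell pairs used to probe
    reduce-cell importance; a cell maps an edge (i, j) to a primitive name."""
    skip_only = lambda cell: {edge: "skip_connect" for edge in cell}
    return {
        "a_reduce_is_normal": (normal, dict(normal)),
        "b_reduce_all_skip":  (normal, skip_only(reduce_)),
        "c_normal_is_reduce": (dict(reduce_), reduce_),
        "d_normal_all_skip":  (skip_only(normal), reduce_),
    }
```

Training each variant pair from scratch alongside the original then isolates how much the reduce cell's specific design actually contributes.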
A more extreme example: while setting cells to consist of skip connections only is unsurprisingly sub-optimal in both cases, doing so on the reduce cells harms the performance much less. This suggests that while searching separately for reduce cells is well-motivated, the current design, which places much fewer reduce cells than normal cells in the overall architecture yet treats them equally in search, might be a sub-optimal trade-off, and searching two separate cells may yield little benefit over the simple strategy of searching for one graph only and applying it to both normal and reduce cells. (#3) Different primitives have vastly different importance profiles, with many of them redundant: The set of candidate primitives $\mathcal{A}$ consists of operators that are known to be useful in manually-designed architectures, with the expectation that they should also be, to varying degrees, useful to NAS. However, this is clearly not the case: while it is already known that some primitives are favoured more by certain search algorithms (Zela et al., 2020), the observed discrepancy in the relative importance of the primitives is, in fact, more extreme: the normal cells (which are also the important cells by Finding 2) across the entire spectrum of good-performing architectures overwhelmingly favour only 3 (the separable convolutions and the skip connection) out of the 7 possible primitives (Fig. 3(a)). Even when the remaining 4 primitives are occasionally selected, they are almost never important (Fig. 3(b)); this is also observed in Fig. 4(a), which shows that they all have OI distributions close to 0. As we will show later in Sec 4, we could essentially remove these primitives from $\mathcal{A}$ without impacting performance. Even within the 3 favoured primitives, there is a significant amount of variation. First, comparing Figs 3(a) and (b), skips, when present in good architectures, are very likely to be important.
We also note that the distribution of OI of skip has a higher variance; these observations suggest that the performance of an architecture is highly sensitive to the specific locations and patterns of skip connections, a detailed study of which we defer to Sec. 4. On the other hand, while both separable convolutions (s3 and s5) are highly over-represented in the good-performing architectures, they seem less important at the operation level than skip. A possible explanation is that while their presence is required for good performance, their exact locations in the cell matter less, which we again verify in Sec. 4. Increasingly in NAS we are also interested in optimising for multiple objectives (e.g. maximising performance while minimising costs); we show that even when accounting for the conflicting objectives, the redundant primitives, especially pooling, remain largely redundant (see App. G). Discussions. The findings confirm that both the search space and the cells are rampant with redundancies that increase the search complexity but do not contribute commensurately to the performance: good performance most often does not depend on an entire cell but on a few key features and primitives, both in individual cells and in the search space. This clearly shows that the search space design can be further optimised, but consequently, many beliefs ingrained in existing search methods may also be sub-optimal or unnecessary. For example, barring a few exceptions (and none to our knowledge in the standard cell-based design space) (Xie et al., 2019a; You et al., 2020; Ru et al., 2020), the overwhelming majority of current approaches aim to search for a single, fully deterministic architecture, and this often results in high-dimensional vector encodings of the cells (e.g. the path encoding of the DARTS cell in White et al.
(2021) is $>10^4$ dimensions without truncation); this affects performance in general (White et al., 2020a) and particularly impedes methods that suffer from the curse of dimensionality, such as Gaussian Processes. However, exact encoding could in fact be unnecessary if good performance simply hinges upon a few key designs while the rest does not matter as much, and finding relevant low-dimensional, approximate representations could be beneficial instead.
This paper strongly criticises the current cell-based NAS approach as limiting research in NAS. The main points are: 1. Redundancy in the cell-based search space: the performance of an architecture is mostly attributable to a few important operations, while the 'unimportant' operations only add complexity to the search space; essentially, the high-performing models share similar sub-graphs. 2. Constraining cells to similar patterns yields better results: by limiting the search space to the important operations and similar sub-graphs, random sampling in NAS performs on par with most other NAS algorithms.
On Redundancy and Diversity in Cell-based Neural Architecture Search
We also note that the distribution of OI of skip has a higher variance – these suggest that the performance of an architecture is highly sensitive towards the specific locations and patterns of skip connections , a detailed study of which we defer to Sec . 4 . On the other hand , while both separable convolutions ( s3 and s5 ) are highly over-represented in the good-performing architectures , it seems that they are less important on an operation level than skip . A possible explanation is that while their presence is required for good performance , their exact locations in the cell matter less which we again verify in Sec . 4 . Increasingly in NAS we are also interested in optimising for multiple objectives ( e.g . maximising performance while minimising costs ) ; we show that even accounting for the conflicting objectives , redundant primitives , especially pooling , remain largely redundant . ( see App . G ) . Discussions The findings confirm that both the search space and the cells are rampant with various redundancies that increase the search complexity but do not actually contribute as much to the performance , and good performance most often does not depend on an entire cell but a few key features and primitives in both individual cells and in the search space . This clearly shows that the search space design can be further optimised , but consequently , many beliefs often ingrained in existing search methods can also be sub-optimal or unnecessary . For example , barring a few exceptions ( and none to our knowledge in the standard cell-based design space ) ( Xie et al. , 2019a ; You et al. , 2020 ; Ru et al. , 2020 ) , the overwhelming majority of the current approaches aims to search for a single , fully deterministic architecture , and this often results in high-dimensional vector encoding of the cells ( e.g . the path encoding of DARTS cell in White et al . 
( 2021 ) is > 104 dimensions without truncation ) ; this affects the performance in general ( White et al. , 2020a ) and impedes methods that suffer from curse of dimensionality , such as Gaussian Processes , in particular . However , exact encoding could be in fact unnecessary if good performance simply hinges upon only a few key designs while the rest does not matter as much , and finding relevant low-dimensional , approximate representations could be beneficial instead .
This paper examines the NASNet-style cell search space in NAS-Bench-301 (the DARTS search space) and NAS-Bench-201. The authors conclude that separable convolutions and skip connections are the most important operations. By restricting the candidate operations to just this subset, and additionally enforcing the skip connection to be used as a residual connection, they are able to achieve accuracy very similar to the networks discovered by other SOTA algorithms. They also demonstrate that one does not have to search for the reduce cell, which can simply reuse the same architecture as the normal cell. Finally, they take the SOTA architectures found by the most popular NAS algorithms, modify their cells to use only separable convolutions and skip connections, and show that the accuracy of the resulting network is very close to that of the original.
SP:b32ddb672cd66b110b8a5cb74c66a8191d544763
On Redundancy and Diversity in Cell-based Neural Architecture Search
1 INTRODUCTION . Neural Architecture Search (NAS), which automates the design of neural networks for a given task, has seen enormous advancements since its invention. In particular, cell-based NAS has become an important technique in NAS research: in contrast to attempts that directly design entire architectures at once, and inspired by classical manually-designed architectures like VGGNet and ResNet that feature repeated blocks, cell-based NAS searches for repeated cells only and later stacks them into full architectures. This simplification reduces the search space (although it remains highly complex), and allows easy transfer of architectures across different tasks/datasets, application scenarios and resource constraints (Elsken et al., 2019). Indeed, although alternative search spaces exist, cell-based NAS has received an extraordinary amount of research attention: based on our preliminary survey, almost 80% of the papers (detailed in App C) proposing new NAS methods published in the top machine learning conferences (ICLR, ICML, NeurIPS) in the past year show at least one part of their major results in standard Differentiable Architecture Search (DARTS) cell-based spaces and/or highly related ones, and approx. 60% demonstrate results in such spaces only; it is fair to state that such cell-based spaces currently dominate. However, the lagging understanding of these architectures and of the search space itself stands in stark contrast with the volume and pace of new search algorithms. The literature on understanding why this dominant search space works, and on comparing what different NAS methods have found, is much more limited in depth and scope, with most existing works typically focusing exclusively on a few search methods or highlighting high-level patterns only (Shu et al., 2020; Zela et al., 2020).
We argue that strengthening such an understanding is crucial on multiple fronts, and the lack thereof is concerning. First, studying a diverse set of architectures enables us to discover patterns shared across the search space, the common ground for comparing search methods that heavily influences the resultant performances (Yang et al., 2020a): if the search space itself is flawed, designing and iterating new search methods on it, which is typically computationally intensive, can be misleading and unproductive. Conversely, understanding any pitfalls of the existing search spaces informs us on how to design better search spaces in the future, which is critical in advancing the goal of NAS of finding novel and high-performing architectures, not only in conventional CNNs but also in emerging NAS paradigms such as Transformers (which may also take the form of a cell-based design space). Second, opening the NAS black box enables us to distill the essence of strong-performing NAS architectures beneath their surface of complexity. Unlike manually designed architectures, where designers usually attribute performance to specific designs, NAS architectures, while all discovered in a similar or identical search space, are currently often compared only in terms of final performance on a standard dataset (e.g. CIFAR-10 test error), owing to the apparent complexity of the design space. This could be problematic, as we do not necessarily understand what NAS has discovered that led to the purported improvements, and the metric itself is a poor one, on which external factors such as hyperparameter settings, variations in training/data augmentation protocols and even just noise could exert a greater influence than the architectural design itself (Yang et al., 2020a).
However, by linking performance to specific designs, we can better ascertain whether any performance differences stem from the architectures rather than from the interfering factors. We aim to address this problem by presenting a post-hoc analysis of the well-performing architectures produced by technically diverse search methods. Specifically, we utilise explainable machine learning tools to open the NAS black box by inspecting the good- and bad-performing architectures produced by a wide range of NAS search methods in the dominant DARTS search space. We find: • Performances of architectures can often be disproportionately attributed to a small number of simple yet critical features that resemble known patterns in classical network designs; • Many designs almost universally adopted simply contribute to complexity but not performance; • The nominal complexity of the search spaces poorly reflects the actual diversity of the (high-performing) architectures discovered, the functional parts of which are often very similar despite the technical diversity in search methods and the seeming disparity in topology. In fact, with a few simple and human-interpretable constraints, almost any randomly sampled architecture can perform on par with or exceed those produced by the state-of-the-art NAS methods over varying network sizes and datasets (CIFAR-10/IMAGENET). Ultimately, these findings prompt us to rethink the suitability of the current standard protocol in evaluating NAS and its capability to find truly novel architectures within the search space. We finally provide suggestions for prospective new search spaces inspired by these findings. [Fig. 1: the DARTS cell, with input nodes input0 and input1, intermediate nodes 0–3 and an out node; candidate primitives: avg_pool_3x3 (ap3), max_pool_3x3 (mp3), skip_connect (skip), dil_conv_5x5 (d5), dil_conv_3x3 (d3), sep_conv_5x5 (s5), sep_conv_3x3 (s3).] Each intermediate node computes x^(j) = Σ_{i<j} o^(i,j)(x^(i)), and finally the out node concatenates all outputs from the intermediate nodes.
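The node-aggregation rule just described can be sketched as a minimal forward pass. This is an illustrative sketch, not the paper's code: the ops here are scalar placeholders rather than the real convolution/pooling primitives, and `cell_forward` is our own helper name.

```python
def cell_forward(inputs, ops, n_intermediate=4):
    """Minimal DARTS-style cell: intermediate node j sums o^(i,j)(x^(i)) over
    its predecessor states i < j; the cell output is the list of intermediate
    states (concatenated along channels in a real network). `ops` maps an
    edge (i, j) to a callable; real cells use the seven primitives."""
    states = list(inputs)  # states[0], states[1] are the two cell inputs
    for j in range(len(inputs), len(inputs) + n_intermediate):
        states.append(sum(op(states[i]) for (i, jj), op in ops.items() if jj == j))
    return states[len(inputs):]

# Toy cell with scalar "features": node 2 takes both inputs, node 3 takes
# input 0 and node 2 (each intermediate node has exactly 2 in-edges).
ops = {(0, 2): lambda x: x, (1, 2): lambda x: 2 * x,
       (0, 3): lambda x: x, (2, 3): lambda x: x}
out = cell_forward([1.0, 1.0], ops, n_intermediate=2)  # → [3.0, 4.0]
```

The `exactly 2 in-edges per intermediate node` constraint from the text is reflected in the toy wiring, not enforced by the function.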
As shown in Fig 1, the DARTS cell allows for up to 14 possible spots to place operations. However, to ensure that all intermediate nodes are connected in the computational graph, each intermediate node is constrained to have exactly 2 in-edges connected to it from 2 different preceding nodes. Lastly, it is conventional to search for two cells, namely normal αn and reduce αr cells, simultaneously, with αr placed at 1/3 and 2/3 of the total depth of the network and αn everywhere else. In total, there are ∏_{k=1}^{4} 7² · k(k+1)/2 ≈ 10⁹ distinct cells without considering graph isomorphism, and since both αn and αr are required to fully specify an architecture, there exist approx. (10⁹)² = 10¹⁸ distinct architectures (Liu et al., 2019). Other cells commonly used in the literature are almost invariably closely related to the DARTS space, and can be viewed as simplified or more complicated variants of it. For example, the NASBench-201 (NB201) search space used in the NAS benchmark is similar to but simplified from the DARTS cell (detailed and analysed in App D). Other works have increased the number of intermediate nodes (Wang et al., 2021b), expanded the primitive pool A (Hundt et al., 2019) and/or relaxed other constraints (e.g. Shi et al. (2020) allow both incoming edges of an intermediate node to come from the same preceding node), but do not fundamentally differ from the DARTS space. Additionally, search spaces like NAS-Bench-101 (NB101) (Ying et al., 2019a) feature cells where the operations are represented as node features, although we believe that the way the cells are represented should not affect any findings we will demonstrate, as the DARTS/NB201 spaces can similarly be equivalently represented in a feature-on-node style (Ru et al., 2021; Pham et al., 2018).
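The cell count stated above can be checked with a few lines of arithmetic (a sketch of the counting argument in the text, assuming 7 primitives and 4 intermediate nodes, each choosing 2 of its k + 1 predecessors):

```python
from math import prod

# Distinct DARTS cells, ignoring graph isomorphism: intermediate node k
# (k = 1..4) has k + 1 predecessors, hence k(k+1)/2 ways to pick its 2
# in-edges, and each of the 2 chosen edges carries one of the 7 primitives.
n_cells = prod(7**2 * k * (k + 1) // 2 for k in range(1, 5))
print(f"{n_cells:.1e} cells, {n_cells**2:.1e} architectures")  # ≈ 1.0e9 and 1.1e18
```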
We do not experiment on NB101 in the present paper, as it, similar to NAS-Bench-301 (NB301), features CIFAR-10 only, but the latter is much closer in size to the “realistic” search space the community expects and is more amenable to one-shot methods, which are currently mainstream in NAS. 3 OPERATION-LEVEL ANALYSIS: REDUNDANCIES IN SEARCH SPACES. Cell-based NAS search spaces contain multiple sources of complexity (deciding both the specific wiring and the operations on the edges; searching for 2 cells independently, etc.), each expanding the search space combinatorially. While this complexity is argued to be necessary for the search space to be expressive enough for good-performing novel architectures to be found, it is unknown whether all these sources of complexity are equally important, or whether the performance critically depends on some sub-features only. We argue that answering this question, and identifying such features if they are indeed present, is highly significant: it enables us to separate the complexities that positively contribute to performance from those that do not, and subsequently would fundamentally affect the design of search methods. At the level of individual architectures, it helps us understand what NAS has truly discovered, by removing confounding factors and focusing on the key aspects of the architectures. For the findings to be generally applicable, we aim not to be specific to any search method; this requires us to study a large set of architectures that are high-performing but technically diverse in terms of the search methods that produce them. Fortunately, the training set of NB301 (Siems et al., 2020), which includes > 50,000 architecture–performance pairs in the DARTS space obtained using a combination of random sampling and more than 10 state-of-the-art yet technically diverse methods (detailed in App.
B.1), could be used for such an analysis: to build a surrogate model that accurately predicts architecture performance across the entire space. We primarily focus on the top-5% (or top-2,589) architectures of the training set, since by definition in NAS we overwhelmingly care about the good-performing architectures, although, as we will show, the main findings also hold true for architectures found by methods not covered by NB301 and for other search spaces like the NB201 space. As shown in Fig 2, the top architectures are well spread out in the search space and are well separated by search methods, seemingly suggesting that the architectures discovered by different methods are diverse in characteristics. Lastly, the worse-performing cells could also be of interest, as any features observed there could be ones we would actively like to avoid, and we analyse them in App. A. Operation Importance To untangle the influence of each part of the architecture, we introduce Operation Importance (OI), which measures the incremental effect of the individual operations, the smallest possible features, on the overall performance. We quantify this by measuring the expected change in performance when perturbing the type or wiring of the operation in question: considering an edge-attributed cell α with edge e_{i,j} currently assigned primitive o_k, the operation importance of o_k on that edge of cell α is given by: OI(α, e_{i,j} := o_k) = (1 / |N(α, e_{i,j} := o_k)|) Σ_{m=1}^{|N(α, e_{i,j} := o_k)|} y(α_m) − y(α), (1) where y(α) denotes the validation accuracy, or another appropriate performance metric, of the fully-trained architecture induced by cell α.
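Eq. (1) can be sketched in a few lines. This is a simplified illustration: only the primitive-swap neighbours are enumerated (the rewiring neighbours are omitted for brevity), and `predict` is a stand-in for the NB301 surrogate ỹ(·); the toy predictor below is invented for the example.

```python
PRIMITIVES = ["ap3", "mp3", "skip", "d5", "d3", "s5", "s3"]

def operation_importance(cell, edge, predict):
    """Eq. (1): mean predicted accuracy over neighbour cells differing from
    `cell` only in the primitive on `edge`, minus `cell`'s own accuracy."""
    neighbours = []
    for prim in PRIMITIVES:
        if prim != cell[edge]:
            alt = dict(cell)      # copy the cell, swap one primitive
            alt[edge] = prim
            neighbours.append(alt)
    return sum(predict(n) for n in neighbours) / len(neighbours) - predict(cell)

# Toy surrogate rewarding a skip connection on edge (0, 2): perturbing the
# skip hurts (large-magnitude OI), perturbing the s3 changes nothing (OI ~ 0).
toy = lambda c: 0.94 + (0.01 if c[(0, 2)] == "skip" else 0.0)
cell = {(0, 2): "skip", (1, 2): "s3"}
oi_skip = operation_importance(cell, (0, 2), toy)  # ~ -0.01
oi_s3 = operation_importance(cell, (1, 2), toy)    # ~ 0.0
```

The sign convention matches the text: an important operation yields an OI of large magnitude, an irrelevant one yields an OI near zero.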
We are interested in measuring the importance of both the primitive choice and where the operation is located relative to the entire cell, and we use N(α, e_{i,j} := o_k) to denote the set of neighbour cells that differ from α only in that the edge in question is assigned another primitive o ∈ A \ {o_k}, or keeps the same primitive o_k but has one end node of e_{i,j} rewired to another node, subject to any constraints of the search space. It is worth noting that OI is an instance of Permutation Feature Importance (PFI) (Breiman, 2001; Fisher et al., 2019; Molnar, 2019). Given the categorical nature of the “features” in this case, we may enumerate all permutations on the edge in question instead of having to rely on random sampling as conventional PFI does. An important operation by Eq (1) would therefore have an OI with a large magnitude in either direction, whereas an irrelevant one would have a value of zero, since altering it leads, in expectation, to no change in architecture performance. We compute the OI of each operation of the 2,589 architectures, and to circumvent the computational challenge of having to train all neighbour architectures of α (which are outside the NB301 training set) from scratch to compute y(·), we use the performance prediction ỹ(·) from NB301. However, as we will show, we validate all key findings by actually training some architectures, to ensure that they are not artefacts of the statistical surrogate (training protocols detailed in App. B.2). Findings The most natural way to group the operations is by their primitive and cell (i.e. normal or reduce) types, and we show the main results in Figs. 3 and 4. In Fig.
3(b), we discretise the OI scores using the threshold 0.001 (0.1%), which is similar to the observed noise standard deviation of the better-performing architectures from NB301 (quantifying the expected variation in performance if an identical architecture is re-trained from scratch), to highlight the important operations with |OI| ≥ 0.001: these are the ones we can more confidently assert to affect the architecture performance beyond noise effects. We summarise the key findings below: (#1) Only a fraction of operations is critical for good performance within cells: If all operations needed to be fully specified (i.e., with both the primitive choice and the specific wiring determined) for good performance, perturbing any of them should have led to significant performance deterioration. However, this is clearly not the case in practice: comparing Fig. 3(a) and (b), we observe that only a fraction of the operations are important based on our definition. To verify this directly beyond predicted performances, we randomly select 30 architectures from the top 5% of the training set. Within each cell, we sort their 16 operations by their OI in both ascending and descending order. We then successively disable the operations by zeroising them, and train the resulting architectures from scratch with an increasing number of operations disabled¹, until only half of the operations remain active (Fig. 5). The results largely confirm our findings and show that the OI, although computed via predicted performance, accurately represents the ground-truth importance of the operations: on average, we need to disable 6 low-OI operations to see a statistically significant drop in performance, and almost half of the operations to match the effect of disabling just 2 high-OI ones.
On the other hand, disabling the high-OI operations quickly reduces the performance and in some cases stalls the training altogether: noting that the overall standard deviation of the entire NB301 training set is just approx. 0.8%, the drop in performance is quite dramatic. (#2) Reduce cells are relatively unimportant for good performance: Searching independently for reduce cells scales the search space quadratically, but Fig. 3(b) shows that they contain much fewer important operations, and Fig. 4 shows that the OI distributions across all primitives are centered close to zero in reduce cells: both suggest that the reduce cell is less important to the architecture performance. To verify this, we draw another 30 random architectures. For each of them, we construct and train from scratch the original architecture and 4 derived ones, with (a) the reduce cell set identical to the normal cell, (b) the reduce cell with all operations set to parameterless skip connections, (c) the normal cell set identical to the reduce cell and (d) the normal cell with all operations set to skip connections. (¹Note that we may not obtain NB301 performance predictions on these architectures, as NB301 requires all 16 operations to be enabled with valid primitives.) As shown in Fig. 6: setting the reduce cell to be identical to the normal cell leads to no significant change in performance, while the reverse is not true.
A more extreme example is that while setting cells to consist of skip connections only is unsurprisingly sub-optimal in both cases, doing so on the reduce cells harms the performance much less. This suggests that while searching separately for reduce cells is well-motivated, the current design, which places much fewer reduce cells than normal cells in the overall architecture yet treats them equally in search, might be a sub-optimal trade-off, and searching two separate cells may yield little benefit over the simple strategy of searching for one graph only and applying it to both normal and reduce cells. (#3) Different primitives have vastly different importance profiles, with many of them redundant: The set of candidate primitives A consists of operators that are known to be useful in manually-designed architectures, with the expectation that they should also be, to varying degrees, useful to NAS. However, this is clearly not the case: while it is already known that some primitives are favoured more by certain search algorithms (Zela et al., 2020), the observed discrepancy in the relative importance of the primitives is, in fact, more extreme: the normal cells (which are also the important cells by Finding 2) across the entire spectrum of good-performing architectures overwhelmingly favour only 3 (separable convolutions & skip connection) out of the 7 possible primitives (Fig. 3(a)). Even when the remaining 4 primitives are occasionally selected, they are almost never important (Fig. 3(b)); this is also observed in Fig. 4(a), which shows that they all have OI distributions close to 0. As we will show later in Sec 4, we could essentially remove these primitives from A without impacting the performances. Even within the 3 favoured primitives, there is a significant amount of variation. First, comparing Figs. 3(a) and (b), skips, when present in good architectures, are very likely to be important.
We also note that the distribution of OI of skip has a higher variance. These observations suggest that the performance of an architecture is highly sensitive to the specific locations and patterns of skip connections, a detailed study of which we defer to Sec. 4. On the other hand, while both separable convolutions (s3 and s5) are highly over-represented in the good-performing architectures, they seem to be less important on an operation level than skip. A possible explanation is that while their presence is required for good performance, their exact locations in the cell matter less, which we again verify in Sec. 4. Increasingly in NAS we are also interested in optimising for multiple objectives (e.g. maximising performance while minimising costs); we show that even accounting for the conflicting objectives, the redundant primitives, especially pooling, remain largely redundant (see App. G). Discussions The findings confirm that both the search space and the cells are rampant with redundancies that increase the search complexity but do not actually contribute as much to the performance, and good performance most often does not depend on an entire cell but on a few key features and primitives, both in individual cells and in the search space. This clearly shows that the search space design can be further optimised, and consequently, many beliefs often ingrained in existing search methods can also be sub-optimal or unnecessary. For example, barring a few exceptions (and none to our knowledge in the standard cell-based design space) (Xie et al., 2019a; You et al., 2020; Ru et al., 2020), the overwhelming majority of current approaches aim to search for a single, fully deterministic architecture, and this often results in a high-dimensional vector encoding of the cells (e.g. the path encoding of the DARTS cell in White et al.
(2021) is > 10⁴ dimensions without truncation); this affects the performance in general (White et al., 2020a) and impedes, in particular, methods that suffer from the curse of dimensionality, such as Gaussian Processes. However, exact encoding could in fact be unnecessary if good performance simply hinges upon a few key designs while the rest does not matter as much, and finding relevant low-dimensional, approximate representations could be beneficial instead.
The paper performs a detailed analysis of the DARTS search space commonly used for weight-sharing neural architecture search. The authors used several simple statistical methods to identify salient features (ops or graphlets) that best explain the higher-quality candidates. They also looked into the discovered patterns and reached the conclusion that the popular cell-based search space is highly redundant, as simply imposing two of the discovered constraints can result in strong performance even when using random search.
SP:b32ddb672cd66b110b8a5cb74c66a8191d544763
Who Is Your Right Mixup Partner in Positive and Unlabeled Learning
1 INTRODUCTION . Positive and Unlabeled (PU) learning refers to a specific binary classification problem where only a small number of positive training instances are manually annotated and all other instances are unlabeled (Liu et al., 2002). Such datasets naturally arise in many significant real-world scenarios such as product recommendation (Hsieh et al., 2015), deceptive review detection (Ren et al., 2014), and medical diagnosis (Yang et al., 2012). For example, many diseases, e.g., Alzheimer’s disease, Amyotrophic Lateral Sclerosis, and Parkinson’s disease, are very infrequent and have long latency, hence only a few diagnosed patients are known, while a much larger population of undiagnosed individuals may be either diseased or healthy. Treating the diagnosed ones as positive instances and the undiagnosed ones as unlabeled instances results in such PU datasets for medical diagnosis. To meet these practical demands, PU learning has drawn increasing interest from the machine learning community (Bekker & Davis, 2020). Formally, let x ∈ R^d and y ∈ {0, 1} be the feature representation and category label, respectively, where a positive instance is indicated by y = 1 and a negative one by y = 0. In the context of PU learning, the training dataset is composed of the set of positive instances P = {(x_i, y_i = 1)}_{i=1}^{n_p} and the set of unlabeled instances U = {x_i}_{i=n_p+1}^{n_p+n_u}, where U contains both positive and negative instances. The target is to learn a binary classifier based on such a weakly supervised training dataset P ∪ U. During the past decades, many PU learning methods have been proposed, where, naturally, the essential idea is to estimate the negative instances from the set of unlabeled instances U. Generally, most existing PU learning methods can be divided into two categories, termed sample-selection methods and cost-sensitive methods.
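The P/U split described above is commonly simulated from a fully labeled binary dataset by revealing only a subset of the positive labels; a minimal sketch (the sizes are illustrative, and `make_pu_split` is our own helper name, not from the paper):

```python
import numpy as np

def make_pu_split(X, y, n_labeled_pos, rng=None):
    """Return (P, U): P holds n_labeled_pos labeled positives; U holds all
    remaining instances, a mix of hidden positives and true negatives."""
    rng = rng or np.random.default_rng(0)
    pos_idx = np.flatnonzero(y == 1)
    labeled = rng.choice(pos_idx, size=n_labeled_pos, replace=False)
    mask = np.zeros(len(y), dtype=bool)
    mask[labeled] = True
    return X[mask], X[~mask]

X = np.arange(500, dtype=float).reshape(-1, 1)
y = np.array([1] * 200 + [0] * 300)      # 200 positives, 300 negatives
P, U = make_pu_split(X, y, n_labeled_pos=100)
print(len(P), len(U))  # → 100 400
```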
The sample-selection methods, as the name suggests, mainly select reliable negative instances from U using various heuristic strategies, e.g., Naïve Bayes (Liu et al., 2002), kNN (Zhang & Zuo, 2009), k-means (Chaudhari & Shevade, 2012), and reinforcement learning (Luo et al., 2021), and then apply supervised methods over the positive and those reliable negative instances. In contrast, the cost-sensitive methods treat all unlabeled instances as corrupted negative ones, and correct the estimation bias of the objective by employing well-designed misclassification risks such as the unbiased risk estimator (du Plessis et al., 2014; 2015; Kiryo et al., 2017) and the maximum margin loss (Shi et al., 2018; Gong et al., 2019b; Zhang et al., 2019; Gong et al., 2019a). Orthogonal to the aforementioned techniques, we note that some PU learning methods such as (Chen et al., 2020a; Wei et al., 2020) have made preliminary attempts to integrate the art of mixup, an economic-yet-effective data augmentation method (Zhang et al., 2018). Formally, mixup generates an augmented instance (x̂, ŷ) as the convex combination of any pair of instances {(x_i, y_i), (x_j, y_j)} drawn from the training dataset: x̂ = λx_i + (1 − λ)x_j, ŷ = λy_i + (1 − λ)y_j, λ ∼ Beta(α, α), α ∈ (0, ∞). Previous studies have indicated that mixup is approximately equivalent to applying adversarial training (Zhang et al., 2021), enabling improved robustness even with scarce and noisy supervision (Thulasidasan et al., 2019; Carratino et al., 2020; Zhang et al., 2021). Accordingly, it has been successfully used to solve various learning problems with weak supervision, e.g., semi-supervised learning (Berthelot et al., 2019), noisy label learning (Li et al., 2020), and partial label learning (Yan & Guo, 2020). Our story and contribution.
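The mixup operator just defined translates directly into code (a NumPy sketch; `alpha` is the Beta concentration α from the text):

```python
import numpy as np

def mixup(x_i, y_i, x_j, y_j, alpha=0.2, rng=None):
    """x̂ = λx_i + (1−λ)x_j and ŷ = λy_i + (1−λ)y_j, with λ ~ Beta(α, α)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x_i + (1 - lam) * x_j, lam * y_i + (1 - lam) * y_j

# Mixing a positive (y = 1) with a negative (y = 0): the soft label equals λ,
# so the mixed features here are exactly λ in every coordinate too.
x_hat, y_hat = mixup(np.ones(4), 1.0, np.zeros(4), 0.0, alpha=0.5)
```

Small α pushes λ towards 0 or 1 (near-original instances); larger α yields more evenly blended pairs.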
Inspired by the recent success of mixup in learning with weak supervision, our original goal is to thoroughly investigate the impact of mixup in PU learning. To this end, we begin with a naive disambiguation-free objective of PU learning, where all unlabeled instances are treated as pseudo-negative instances, denoted by Ũ = {(x_i, y_i = 0)}_{i=n_p+1}^{n_p+n_u}, and the binary classifier is trained based on P ∪ Ũ. In preliminary experiments, we found an interesting phenomenon, where the number of training instances predicted as positive by the disambiguation-free classifier¹ tends to be smaller than usual, as illustrated in Fig. 1. (¹Specifically, we apply early-learning regularization (Liu et al., 2020) to the objective to keep it stable.) This phenomenon implies that the disambiguation-free boundary tends to deviate from the fully supervised boundary towards the positive side, as expressed by the toy example shown in Fig. 2(a). We consider that the decision boundary deviation is mainly caused by the marginal pseudo-negative instances, which lie between the two boundaries. Such instances are more likely to be positive but are actually annotated as negative. Motivated by this observation, we extend mixup to a specific heuristic version for PU learning, enabling data augmentation and supervision correction to be achieved simultaneously. Its basic idea is to transform the marginal pseudo-negative instances into augmented instances that are partially positive yet still lie between the two boundaries, so as to push the learned boundary towards the fully supervised one. This can be achieved by selecting the mixup partners for the marginal pseudo-negative instances from the positive instances that are around the learned boundary, as expressed in Fig. 2(b). With this insight, we propose a novel PU method, namely Positive and unlabeled learning with Partially Positive Mixup (P3Mix).
Generally, P3Mix is easy to implement: specifically, we can identify the marginal pseudo-negative instances using the predictive results, and the positive instances around the boundary using the entropy values of the predictive results. To evaluate the effectiveness of P3Mix, we conduct a number of experiments on benchmark datasets. Experimental results demonstrate that P3Mix consistently outperforms the state-of-the-art PU methods.

2 THE PROPOSED P3MIX METHOD

In this section, we introduce the proposed P3Mix method for PU learning. We first revisit and clarify some important notations: the set of positive instances P = {(x_i, y_i = 1)}_{i=1}^{n_p} and the set of unlabeled instances U = {x_i}_{i=n_p+1}^{n_p+n_u}. By treating all unlabeled instances as negative, we translate U into the set of pseudo-negative instances Ũ = {(x_i, y_i = 0)}_{i=n_p+1}^{n_p+n_u}. Given batches X_p ⊂ P and X_u ⊂ Ũ, the disambiguation-free objective of PU learning can be formulated as follows:

L(X_p, X_u; Θ) = (1/|X_p|) Σ_{(x,y)∈X_p} ℓ(f(x; Θ), y) + (β/|X_u|) Σ_{(x,y)∈X_u} ℓ(f(x; Θ), y), (1)

where f(·; Θ) is a trainable neural network, i.e., the binary classifier, parameterized by Θ; ℓ(·, ·) is the loss function; and β is a coefficient parameter. To achieve data augmentation and supervision correction simultaneously, P3Mix transforms X_p and X_u into the batches of augmented instances X̂_p and X̂_u using the proposed heuristic mixup technique. Accordingly, the objective of P3Mix is expressed as follows:

L(X̂_p, X̂_u; Θ) = (1/|X̂_p|) Σ_{(x̂,ŷ)∈X̂_p} ℓ(f(x̂; Θ), ŷ) + (β/|X̂_u|) Σ_{(x̂,ŷ)∈X̂_u} ℓ(f(x̂; Θ), ŷ), (2)

X̂_p, X̂_u = HeuristicMixup(X_p, X_u, α), (3)

where α ∈ (0, ∞) is a hyperparameter of mixup. Next, we describe the details of heuristic mixup.
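Eq. (1) above reduces to two class-wise mean losses, with the pseudo-negative term weighted by β. A minimal sketch, assuming binary cross-entropy as the loss ℓ (the paper does not commit to a specific ℓ at this point), and using plain arrays of classifier scores as stand-ins for f(x; Θ):

```python
import numpy as np

def bce(p, y, eps=1e-7):
    # Binary cross-entropy, standing in for the loss l(f(x; Theta), y).
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def disambiguation_free_loss(scores_pos, scores_unl, beta=1.0):
    # Eq. (1): positives keep label 1; all unlabeled instances are treated
    # as pseudo-negative (label 0), and their mean loss is weighted by beta.
    loss_p = bce(scores_pos, np.ones_like(scores_pos)).mean()
    loss_u = bce(scores_unl, np.zeros_like(scores_unl)).mean()
    return loss_p + beta * loss_u

# hypothetical classifier outputs f(x; Theta) in (0, 1) for the two batches
loss = disambiguation_free_loss(np.array([0.9, 0.8]), np.array([0.3, 0.1]), beta=0.5)
```

Eq. (2) has exactly the same shape; it only swaps the raw batches for the augmented ones produced by HeuristicMixup in Eq. (3), with soft labels ŷ in place of 0/1.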
Algorithm 1: Training procedure of P3Mix, P3Mix-E and P3Mix-C
Input: P ∪ U: training instances; β: coefficient parameter; γ: thresholding parameter; k: size of the candidate mixup pool; α: hyperparameter of mixup; η: coefficient of the early-learning regularization (η = 0 for P3Mix and P3Mix-C).
Output: Θ: binary classifier parameters.
1: Initialize Θ, the mean-teacher parameters Θ̃ and the candidate mixup pool X_cnd randomly; translate U into Ũ;
2: for t = 1, 2, ..., MaxEpoch do
3:   Shuffle P ∪ Ũ into I mini-batches and denote the i-th mini-batch by (X_p^i, X_u^i);
4:   for i = 1, 2, ..., I do
5:     Estimate the marginal pseudo-negative instances X_mpn using Eq. (6);
6:     Select the mixup partners for each instance within X_p^i ∪ X_u^i using Eq. (5);
7:     Set labels of {(x, y = 0) | (x, y = 0) ∈ X_u^i, f(x; Θ) > γ} to 1;  ▷ Optional for P3Mix-C
8:     Construct X̂_p^i ∪ X̂_u^i by applying Eq. (4) to X_p^i ∪ X_u^i and their mixup partners;
9:     Estimate {ỹ_j}_{j=1}^{|X̂_p^i|+|X̂_u^i|} for the instances in X̂_p^i ∪ X̂_u^i by f(·; Θ̃);  ▷ Optional for P3Mix-E
10:    Update Θ with Adam using ∇_Θ (L(X̂_p^i, X̂_u^i; Θ) + η R_elr({(x̂_j, ỹ_j)}_{j=1}^{|X̂_p^i|+|X̂_u^i|}; Θ));
11:  end for
12:  Update X_cnd using Eq. (7);
13:  Update Θ̃ by Θ with a moving average;  ▷ Optional for P3Mix-E
14: end for

2.1 TRAINING WITH HEURISTIC MIXUP

Basically, for each instance (x_i, y_i) ∈ X_p ∪ X_u we select a mixup partner (x_j, y_j) to generate an augmented instance (x̂_i, ŷ_i) using the modified mixup operator² (Berthelot et al., 2019):

x̂_i = λ′x_i + (1−λ′)x_j, ŷ_i = λ′y_i + (1−λ′)y_j, λ′ = max(λ, 1−λ), λ ∼ Beta(α, α), α ∈ (0, ∞), (4)

accordingly forming the augmented instance sets X̂_p and X̂_u. Our heuristic mixup refers to guidance for mixup partner selection that refines the imprecise supervision within X_u. We take inspiration from the phenomenon where the boundary learned by Eq.
(1) tends to deviate from the fully supervised boundary towards the positive side, as illustrated in Fig. 2(a). The marginal pseudo-negative instances X_mpn ⊂ X_u lie between the two boundaries, and they are more likely to be positive but are actually annotated as negative. To resolve this problem, for each of them we uniformly select a mixup partner from the candidate mixup pool X_cnd ⊂ P of positive instances that are around the current learned boundary, so as to generate an augmented instance which is partially positive and yet also lies between the two boundaries, as expressed in Fig. 2(b). Besides, for the positive instances X_p and the other pseudo-negative instances X_u \ X_mpn, we uniformly choose their mixup partners from X_p ∪ X_u. The overall mixup partner selection is formulated as follows:

(x_j, y_j) ∼ Uniform(X_cnd) if (x_i, y_i) ∈ X_mpn; (x_j, y_j) ∼ Uniform(X_p ∪ X_u) if (x_i, y_i) ∈ (X_p ∪ X_u) \ X_mpn. (5)

In what follows, we introduce how to estimate the marginal pseudo-negative instances X_mpn and construct the candidate mixup pool X_cnd. (²Because we compute individual loss terms for positive instances and pseudo-negative ones in Eq. (2), we define λ′ = max(λ, 1−λ) to guarantee that the feature of each augmented instance x̂_i is closer to x_i than to the mixup partner x_j. Consequently, (x̂_i, ŷ_i) is assigned to X̂_p if (x_i, y_i) ∈ X_p, or to X̂_u otherwise.)

Marginal pseudo-negative instance estimation. Because the fully supervised boundary is unknown, we have to estimate the set of marginal pseudo-negative instances X_mpn from X_u. In this work, we define them as the "unreliable" pseudo-negative instances measured by the predictive scores with a thresholding parameter γ ∈ [0.5, 1]:

X_mpn = {(x, y = 0) | (x, y = 0) ∈ X_u, 1−γ ≤ f(x; Θ) ≤ γ}, (6)

where γ = 0.5 implies X_mpn = ∅, and γ = 1 means X_mpn = X_u.

Candidate mixup pool.
We maintain a candidate mixup pool X_cnd containing the positive instances from P that are around the current learned boundary. To be specific, for each positive instance we compute the entropy value of its predictive score, and update the candidate mixup pool with the top-k positive instances as follows:

X_cnd = {(x, y = 1) | (x, y = 1) ∈ P, H(f(x; Θ)) ∈ Rank({H(f(x_i; Θ))}_{i=1}^{n_p})}, (7)

where H(·) is the entropy, and Rank(·) returns the top-k maximum entropy values. For efficiency, we update X_cnd once every epoch. The full training procedure is shown in Algorithm 1.
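Taken together, Eqs. (4)–(7) can be sketched as follows. This is a hypothetical NumPy illustration over predictive scores, not the authors' implementation; in the actual method f(·; Θ) is a neural network and the mixing happens in feature space.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(p, eps=1e-7):
    # Binary entropy H(p) of a predictive score; maximal at p = 0.5.
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def candidate_pool(scores_pos, k):
    # Eq. (7): indices of the k positives with maximum predictive entropy,
    # i.e., the positives closest to the current learned boundary.
    return np.argsort(-entropy(scores_pos))[:k]

def marginal_pseudo_negatives(scores_unl, gamma):
    # Eq. (6): "unreliable" pseudo-negatives with scores inside [1-gamma, gamma].
    return np.where((scores_unl >= 1 - gamma) & (scores_unl <= gamma))[0]

def modified_mixup(xi, yi, xj, yj, alpha):
    # Eq. (4): lam' = max(lam, 1-lam) keeps x_hat closer to x_i than to x_j.
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1 - lam)
    return lam * xi + (1 - lam) * xj, lam * yi + (1 - lam) * yj, lam

scores_pos = np.array([0.55, 0.95, 0.60])   # f(x; Theta) on a positive batch
scores_unl = np.array([0.45, 0.05, 0.30])   # f(x; Theta) on an unlabeled batch
pool = candidate_pool(scores_pos, k=2)      # boundary-near positives
mpn = marginal_pseudo_negatives(scores_unl, gamma=0.6)

# Eq. (5): a marginal pseudo-negative (y=0) mixes with a pool member (y=1),
# yielding a partially positive augmented label 1 - lam'.
xi, xj = np.array([0.4]), np.array([0.6])
x_hat, y_hat, lam = modified_mixup(xi, 0.0, xj, 1.0, alpha=4.0)
```

The thresholds and pool size (γ, k) are the same hyperparameters that appear in Algorithm 1; all other instances would draw their partners uniformly from the whole batch.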
This paper proposes a variant of the mixup technique for positive-unlabeled learning. Based on the observation that the learned PU boundary tends to deviate towards the positive side, the authors suggest selecting mixup partners for the samples lying between the learned PU and supervised boundaries. The proposed P3Mix method and its variants improve the classification performance of PU learning.
SP:560b873206fbcb949a6c582ea5e8169e0fc9dec9
Who Is Your Right Mixup Partner in Positive and Unlabeled Learning
1 INTRODUCTION . Positive and Unlabeled (PU) learning refers to a specific binary classification problem where only a small number of positive training instances are manually annotated and all other instances are unlabeled (Liu et al., 2002). Such datasets naturally arise in many significant real-world scenarios such as product recommendation (Hsieh et al., 2015), deceptive review detection (Ren et al., 2014), and medical diagnosis (Yang et al., 2012). For a specific example, many diseases, e.g., Alzheimer's disease, Amyotrophic Lateral Sclerosis, and Parkinson's disease, are very infrequent and have long latency; hence only a few diagnosed patients are known, while a much larger population of undiagnosed individuals may be either diseased or healthy. Treating the diagnosed individuals as positive instances and the undiagnosed ones as unlabeled instances results in such PU datasets for medical diagnosis. To meet these practical demands, PU learning has drawn increasing interest from the machine learning community (Bekker & Davis, 2020). Formally, let x ∈ R^d and y ∈ {0, 1} be the feature representation and category label, respectively, where a positive instance is indicated by y = 1 and a negative one by y = 0. In the context of PU learning, the training dataset is composed of the set of positive instances P = {(x_i, y_i = 1)}_{i=1}^{n_p} and the set of unlabeled instances U = {x_i}_{i=n_p+1}^{n_p+n_u}, where U contains both positive and negative instances. The target is to learn a binary classifier based on such a weakly supervised training dataset P ∪ U. During the past decades, many PU learning methods have been proposed, where, naturally, the essential idea is to estimate the negative instances from the set of unlabeled instances U. Generally, most existing PU learning methods can be divided into two categories, termed sample-selection methods and cost-sensitive methods.
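As a toy illustration of this setting, a PU dataset can be simulated from a fully labeled binary dataset by revealing only n_p positive labels. The construction below is ours, for intuition only; real PU datasets arise directly, as in the medical example above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fully labeled binary dataset: features x in R^d, labels y in {0, 1}.
x = rng.normal(size=(100, 5))
y = np.repeat([1, 0], 50)            # 50 positives, 50 negatives

# Reveal labels for only n_p randomly chosen positives; the rest become U.
n_p = 10
labeled = rng.choice(np.where(y == 1)[0], size=n_p, replace=False)
P = x[labeled]                                   # labeled positive set
U = x[np.setdiff1d(np.arange(len(y)), labeled)]  # unlabeled: mixed pos/neg
```

Note that U here contains both the 40 hidden positives and all 50 negatives, which is exactly what makes naively treating U as negative a biased supervision signal.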
This paper studies an interesting weakly supervised binary classification problem called positive and unlabeled (PU) learning. The authors propose a novel PU learning method inspired by the boundary deviation phenomenon observed in experiments. Specifically, a new mixup method is proposed, which selects the mixup partners for unlabeled examples heuristically to obtain more correct supervised signals. Extensive empirical results, including ablation studies and sensitivity analysis, are provided to evaluate the proposal.
In this paper, the authors focus on the problem of positive and unlabeled learning. They show an interesting phenomenon, where the learned PU boundary tends to deviate the supervised boundary towards the positive side when treating unlabeled examples as pseudo-negative examples. The phenomenon may imply there are a number of marginal pseudo-negative examples that are more likely to be positive but labeled as negative. Based on this, the paper proposes a PU learning approach building on a novel heuristic mixup technique, which can achieve both data augmentation and supervision correction. They also present many empirical results to show the superior performance comparing with SOTA PU learning methods.
SP:560b873206fbcb949a6c582ea5e8169e0fc9dec9
GraphENS: Neighbor-Aware Ego Network Synthesis for Class-Imbalanced Node Classification
1 INTRODUCTION . Node classification for graphs has attracted significant attention as the importance of large-scale graph analysis increases in various domains such as bioinformatics and commercial graphs ( Perozzi et al. , 2016 ; Hamilton et al. , 2017 ; Ying et al. , 2018 ; Mohammadrezaei et al. , 2018 ) . For example , in retail services , acquiring high-quality node representations for items or customers is critical for improving the quality of recommendation systems ( Perozzi et al. , 2016 ; Ying et al. , 2018 ) . Detecting abnormal users in social networks , as another example , is also closely related to classifying the property of each node ( Mohammadrezaei et al. , 2018 ) . Recently , Graph Neural Networks ( GNNs ) have demonstrated their effectiveness in learning node representations ( Hamilton et al. , 2017 ; Kipf & Welling , 2017 ; Velickovic et al. ) . However , the nodes in many real-world graphs are class-imbalanced ( Mohammadrezaei et al. , 2018 ; Wang et al. , 2020a ) , so GNNs are prone to be biased toward major classes , as in general class-imbalance tasks . This bias causes networks to classify minor-class nodes poorly , with destructive impacts and large costs to the services built on them . While the peculiar characteristics of imbalanced node classification , and specialized solutions suitable for it , have hardly been investigated , applying generic imbalance handling methods ( Chawla et al. , 2002 ; Cui et al. , 2019 ; Cao et al. , 2019 ; Kang et al. , 2020 ; Menon et al. , 2021 ) directly to the graph domain poses several non-trivial challenges . One of the distinct natures of graph data is that adjacent nodes are involved in constructing the representation of each node , which makes it harder for the model to learn unbiased representations of minor class nodes .
Here , we hypothesize that overfitting to the neighbors of minor nodes is in fact more serious than overfitting to the node features themselves . This ‘ neighbor memorization ’ is the critical obstacle to naively adopting class-imbalance approaches from other domains , such as the re-weighting and re-sampling used in image classification . Specifically , re-weighting approaches ( Cui et al. , 2019 ; Tan et al. , 2020 ; Cao et al. , 2019 ; Menon et al. , 2021 ; Hong et al. , 2021 ) , which scale the loss according to the amount of data per class , simply assign larger weights to minor nodes , so the neighbor sets of minor nodes observed during training do not change . Re-sampling methods ( Chawla et al. , 2002 ; Kang et al. , 2020 ; Zhang et al. , 2021 ; Wang et al. , 2021 ) , which sample data to balance the number of data for each class , are also vulnerable to overfitting on minor nodes together with their neighbors . Another challenge for re-sampling , especially for oversampling variants , is determining how to connect the newly sampled nodes to the original graph . For example , simply connecting an oversampled node to all neighbors of the original node changes the edges of those neighbor nodes as well , significantly altering the message passing to them , which might impair their class semantics . To mitigate this issue , GraphSMOTE ( Zhao et al. , 2021 ) exploits an edge predictor to decide the connectivity with the neighbors of the two minor nodes used in oversampling . Nevertheless , GNNs trained with GraphSMOTE still suffer from neighbor memorization when the number of minor nodes is limited .
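To see why re-weighting leaves neighbor memorization untouched, note that it only rescales per-node losses by class frequency; the graph itself never changes, so a minor node is trained against exactly the same neighbor set every epoch. A minimal sketch of frequency-based re-weighting (function and variable names are ours, not from any cited work):

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Per-node loss weights proportional to inverse class frequency,
    normalized so the class weights average to 1. Only the loss changes:
    the adjacency structure, and hence the neighbor set each minor node
    is trained with, stays identical."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    inv = 1.0 / np.maximum(counts, 1.0)     # inverse class frequency
    inv = inv * num_classes / inv.sum()     # mean class weight = 1
    return inv[labels]                      # per-node loss weights
```

For a training set with labels `[0, 0, 0, 1]`, the lone minor node gets three times the weight of each major node, yet its one fixed neighborhood is all the model ever sees for that class.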
In this paper , we propose GraphENS , a novel augmentation approach that synthesizes a whole ego network for a minor class ( a minor node and its one-hop neighbors ) by interpolating two different ego networks in the data . To enlarge the limited neighbor views of minor instances , our method combines the ego network of an anchoring minor node with that of a node randomly selected from all classes , interpolating the ego networks based on the KL-divergence between the model predictions for them in order to preserve the semantics of the minor classes . Synthesized ego networks are attached to the original graph to construct a class-balanced graph , and GNNs are trained on the enlarged graph . GraphENS thus enables the model to learn minor class nodes with feasible neighbors by generating virtual ego networks . To further prevent the synthesis of deleterious ego networks , we introduce a saliency-based node mixing approach to generate the central node of the ego network . Our method separates class-generic node features from class-specific node features using the saliency information of each feature , and exploits only class-generic attributes to combine with the node feature of the anchoring minor node . We validate our method on various benchmark datasets , including citation networks ( Sen et al. , 2008 ) and Amazon product co-purchasing networks ( Shchur et al. , 2018 ) , with diverse architectures such as GCN ( Kipf & Welling , 2017 ) , GAT ( Velickovic et al . ) , and GraphSAGE ( Hamilton et al. , 2017 ) , and confirm that our approach consistently outperforms baseline methods over various settings . In summary , our contribution is threefold : • We demonstrate that in class-imbalanced node classification , GNNs severely overfit to the neighbor sets of minor class nodes , rather than to the minor nodes themselves . This ‘ neighbor memorization ’ problem becomes severe when the number of minor nodes is extremely small .
• Our method effectively alleviates the neighbor memorization problem in class-imbalanced graphs by synthesizing feasible ego networks based on the similarity between the source ego networks . We also block the injection of harmful features when generating mixed nodes by using node feature saliency . • Through extensive experiments , we show that our approach consistently outperforms baselines on multiple benchmark datasets , including real-world imbalanced datasets . Even on highly imbalanced synthetic graphs , our method exhibits superior performance . 2 RELATED WORK AND PRELIMINARY . 2.1 CLASS IMBALANCE PROBLEM . The goal of class-imbalance handling in classification is to construct a classifier that is unbiased with respect to the label distribution of the training set . There are three main streams : loss modification , post-hoc correction , and re-sampling approaches . Loss modification methods alter the objective function to assign more weight ( Japkowicz & Stephen , 2002 ; Cui et al. , 2019 ) or larger margins ( Tan et al. , 2020 ; Cao et al. , 2019 ; Menon et al. , 2021 ) to minor classes . Post-hoc correction strategies ( Kang et al. , 2020 ; Tian et al. , 2020 ; Menon et al. , 2021 ; Hong et al. , 2021 ) adjust logits to compensate minor classes at inference time . Re-sampling approaches augment minor class data by sampling strategies ( Kang et al. , 2020 ; Liu et al. , 2020 ; Ren et al. , 2020 ) or generation ( Chawla et al. , 2002 ; Kim et al. , 2020a ; Chu et al. , 2020 ; Zhang et al. , 2021 ; Wang et al. , 2021 ) . Among minor class generation approaches , SMOTE ( Chawla et al. , 2002 ) is a widely used method that mixes minor data with the nearest data of the identical class . Synthesizing minor class data from the data of other classes ( Kim et al. , 2020a ; Chu et al. , 2020 ; Wang et al. , 2021 ) has been introduced to exploit the rich information of other classes . Kim et al .
( 2020a ) produce new minor class data by taking gradient steps that translate major class data into minor class data . Wang et al . ( 2021 ) synthesize minor class data by combining features of minor class data with feature displacements of other data . To extend these approaches to the graph domain , the structural aspects of graphs have to be considered when generating minor instances . In node classification , imbalance handling works ( Zhou et al. , 2018 ; Wang et al. , 2020b ; Shi et al. , 2020 ; Zhao et al. , 2021 ; Qu et al. , 2021 ) have been proposed to exploit structural information in graphs . DR-GCN ( Shi et al. , 2020 ) produces virtual minor nodes generated by an additional conditional GAN and regularizes the features of virtual nodes to be close to their adjacent nodes . GraphSMOTE ( Zhao et al. , 2021 ) generates synthetic minor nodes by interpolating two minor class nodes , and a ( pretrained ) edge predictor determines the connectivity between the synthesized nodes and the neighbors of the two source minor nodes . ImGAGN ( Qu et al. , 2021 ) synthesizes minor nodes by interpolating features among all minor nodes with a generated weight matrix ; the synthesized nodes are then connected to the original minor nodes if the corresponding weights in the matrix are larger than a fixed threshold . As GraphSMOTE and ImGAGN only utilize nodes of the identical minor class to generate minor nodes , the sample diversity of synthesized nodes is significantly constrained . Moreover , ImGAGN mainly targets binary classification , and its extension to multi-class classification is non-trivial since an independent generator is required per class . Compared to these approaches , GraphENS utilizes all nodes to synthesize minor nodes , and thus outperforms baselines when the number of minor class nodes is low , as shown in Section 5.3 ( Table 4 ) . 2.2 GRAPH NEURAL NETWORKS FOR NODE CLASSIFICATION . We briefly introduce GNNs for node classification tasks .
Let us first define a graph G(V, E), where V is the set of nodes and E is the set of edges between pairs of nodes. Let X ∈ R^{|V|×d} be the node feature matrix whose i-th row is the d-dimensional feature of the i-th node. N(v) = { u ∈ V | {u, v} ∈ E } is the set of nodes directly adjacent to v. In node classification, each node in the graph corresponds to a class y ∈ {1, ..., C}, where C is the number of classes. In this paper, we consider GNN variants that consist of three differentiable functions: 1) a message function m_l, 2) a permutation-invariant message aggregation function φ_l, and 3) a node update function h_l. Let x_v^{(l)} be the latent vector of node v at layer l; to simplify the recursive definition, x_v^{(0)} denotes the input node feature. At each GNN layer, node features are updated as

x_v^{(l+1)} = h_l( x_v^{(l)}, φ_l({ m_l(x_v^{(l)}, x_u^{(l)}, e_{v,u}) | u ∈ N(v) }) ),

so that adjacent node embeddings are taken into account. By passing these aggregated node features to a linear classifier, we obtain the prediction ŷ = f(x_v) for node v, where ŷ_c = P(y = c | x_v). As a representative example, a layer of the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) is defined as

x_v^{(l+1)} = Θ Σ_{u ∈ N(v) ∪ {v}} ( e_{v,u} / sqrt(d̂_u d̂_v) ) x_u^{(l)}, where d̂_v = 1 + Σ_{u ∈ N(v)} e_{v,u},

e_{v,u} is the weight of edge {u, v} ∈ E, and Θ is the filter parameter matrix. There are several variants depending on how these key components are designed, but they are beyond the scope of our work.
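The GCN layer above, specialized to unit edge weights, amounts to symmetrically normalized neighborhood averaging followed by a linear map. A minimal dense sketch (not the authors' implementation; names are illustrative):

```python
import numpy as np

def gcn_layer(X, A, Theta):
    """One GCN propagation step with unit edge weights:
    x_v' = Theta applied to the sum over u in N(v) ∪ {v} of
    x_u / sqrt(d̂_u d̂_v).
    X: |V| x d node features, A: |V| x |V| binary adjacency
    (no self-loops), Theta: d x d' filter parameters."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops: N(v) ∪ {v}
    d_hat = A_hat.sum(axis=1)               # d̂_v = 1 + sum_u e_{v,u}
    D = np.diag(1.0 / np.sqrt(d_hat))       # symmetric normalization
    return D @ A_hat @ D @ X @ Theta
```

For two mutually connected nodes with identity features and Θ = I, each output row is the average of both nodes' features, illustrating why a minor node's representation is dominated by whatever neighbors it happens to have.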
This paper investigates the problem of class-imbalanced node classification on graphs. The authors first conduct a case study on a dataset to analyze the underlying reason for inferior performance on minor class nodes. They then propose a model named GraphENS, which consists of two key components, to deal with the imbalance issue. Experiments on several datasets show that the proposed model can outperform the baselines.
SP:f47567af5b9d8a0fee6b5ae908a12327c0016d97
This paper addresses the class imbalance problem for node classification in a graph and points out issues faced by existing GNNs and imbalance-handling methods. Through experiments, it nicely depicts the problems of overfitting to minor classes and neighbor memorization under class imbalance. It proposes an approach, GraphENS, which can work with any message passing GNN to resolve the problem of class imbalance for node classification. Experimental results show the merit of the proposed algorithm with multiple GNNs (GCN, GraphSAGE and GAT) on both synthetically imbalanced and real-world imbalanced datasets for node classification.
SP:f47567af5b9d8a0fee6b5ae908a12327c0016d97
GraphENS: Neighbor-Aware Ego Network Synthesis for Class-Imbalanced Node Classification
1 INTRODUCTION . Node classification for graphs has attracted significant attention as the importance of large-scale graphs analysis increases in various domains such as bioinformatics and commercial graphs to name a few ( Perozzi et al. , 2016 ; Hamilton et al. , 2017 ; Ying et al. , 2018 ; Mohammadrezaei et al. , 2018 ) . For example , in retail services , acquiring the qualitative node representations for items or customers is critical for improving the quality of recommendation systems ( Perozzi et al. , 2016 ; Ying et al. , 2018 ) . Detecting abnormal users in social networks , as another example , is also closely related to classifying the property of each node ( Mohammadrezaei et al. , 2018 ) . Recently , Graph Neural Networks ( GNNs ) have demonstrated their effectiveness on learning node representations ( Hamilton et al. , 2017 ; Kipf & Welling , 2017 ; Velickovic et al. ) . However , the nodes in many real-world graphs are sometimes class-imbalanced ( Mohammadrezaei et al. , 2018 ; Wang et al. , 2020a ) , hence GNNs are prone to be biased toward major classes , as in general class-imbalance tasks . This bias forces networks to poorly classify the nodes of minor classes , resulting in destructive impacts and a large cost to their services . While the peculiar characteristics of imbalanced node classification and specialized solutions suitable for it have hardly been investigated , applying generic imbalance handling methods ( Chawla et al. , 2002 ; Cui et al. , 2019 ; Cao et al. , 2019 ; Kang et al. , 2020 ; Menon et al. , 2021 ) directly to the graph domain has several non-trivial challenges . One of the distinct natures of graph data is that adjacent nodes are involved in constructing the representation of each node , which makes the model more confused to learn the unbiased representation of minor class nodes . 
Here , we hypothesize that it is in fact more serious to overfitting to neighbors of minor nodes than to overfitting to the node feature itself . This ‘ neighbor memorization ’ is the critical obstacle to naively adopt the class-imbalance approaches of other domains such as re-weighting and re-sampling used in image classification . Specifically , re-weighting approaches ( Cui et al. , 2019 ; Tan et al. , 2020 ; Cao et al. , 2019 ; Menon et al. , 2021 ; Hong et al. , 2021 ) , applying penalties according to the number of data , simply assign large weight to minor nodes , hence there is no change in neighbor sets of minor nodes observed during training . Re-sampling methods ( Chawla et al. , 2002 ; Kang et al. , 2020 ; Zhang et al. , 2021 ; Wang et al. , 2021 ) , sampling data to balance the number of data for each class , are also vulner- able to overfit on minor nodes with their neighbors . Another challenge of the re-sampling method , especially for oversampling variants , is determining how to connect the newly sampled nodes to the original graph . For example , simply connecting an oversampled node with all neighbors of original node will change the edges of neighbor nodes as well and hence significantly alters the message passing to the neighbors , which might impair their class semantics . To mitigate this issue , GraphSMOTE ( Zhao et al. , 2021 ) exploits edge predictor to decide the connectivity with the neighbors of two minor nodes used in oversampling . Nevertheless , GNNs trained with GraphSMOTE still suffer from neighbor memorization when the number of minor nodes is limited . 
In this paper , we propose GraphENS , a novel augmentation approach that synthesizes the whole ego network for minor class ( minor node and its one-hop neighbors ) by interpolating two different ego networks in the data ; to enlarge the limited neighbor views of minor instances in the data , our method combines the ego network of anchoring minor node with that of randomly selected node from all classes , where it interpolates ego networks based on KL-divergence between model predictions of ego networks in order to keep the semantics of minor classes . Synthesized ego networks are attached to the original graph to construct a class-balanced graph , and GNNs are trained with the enlarged graph . GraphENS enables the model to learn the minor class nodes with feasible neighbors by generating the virtual ego networks . To further prevent the synthesis of deleterious ego networks , we introduce a saliency-based node mixing approach to generate the central node of ego network . Our method separates class-generic node features from class-specific node features by using saliency information of each feature , and exploits only class-generic attributes to combine with node feature of anchoring minor node . We validate our method on various benchmark datasets including citation networks ( Sen et al. , 2008 ) , and Amazon product co-purchasing networks ( Shchur et al. , 2018 ) with diverse architectures such as GCN ( Kipf & Welling , 2017 ) , GAT ( Velickovic et al . ) , and GraphSAGE ( Hamilton et al. , 2017 ) , and confirm that our approach consistently outperforms baseline methods over various settings . In summary , our contribution is threefold : • We demonstrate that in class-imbalanced node classification , GNNs severely overfit to neighbor sets of minor class nodes , rather than to minor nodes themselves . This ‘ neighbor memorization ’ problem becomes severe when the number of minor nodes is extremely small . 
• Our method effectively alleviates the neighbor memorization problem in class-imbalanced graphs by synthesizing feasible ego networks based on the similarity between source ego networks . We also block the injection of harmful features in generating the mixed nodes using node feature saliency . • Through extensive experiments , we show that our approach consistently outperforms baselines on multiple benchmark datasets including real-world imbalanced datasets . Even in highly imbalanced synthetic graphs , our method exhibits superior performance . 2 RELATED WORK AND PRELIMINARY . 2.1 CLASS IMBALANCE PROBLEM . The goal of class-imbalance handling in classification is to construct an unbiased classifier to the label distribution of the training set . There are three main streams : loss modification , post-hoc correction , and re-sampling approaches . Loss modification methods alter the objective function to assign more weights ( Japkowicz & Stephen , 2002 ; Cui et al. , 2019 ) or margins ( Tan et al. , 2020 ; Cao et al. , 2019 ; Menon et al. , 2021 ) on minor classes . Post-hoc correction strategies ( Kang et al. , 2020 ; Tian et al. , 2020 ; Menon et al. , 2021 ; Hong et al. , 2021 ) remedy logits to compensate minor classes in the inference . Re-sampling approaches augment minor class data by sampling strategies ( Kang et al. , 2020 ; Liu et al. , 2020 ; Ren et al. , 2020 ) or generation ( Chawla et al. , 2002 ; Kim et al. , 2020a ; Chu et al. , 2020 ; Zhang et al. , 2021 ; Wang et al. , 2021 ) . Among minor class generation approaches , SMOTE ( Chawla et al. , 2002 ) is a widely used method to mix minor data with the nearest data of the identical class . Synthesizing minor class data from data of other classes ( Kim et al. , 2020a ; Chu et al. , 2020 ; Wang et al. , 2021 ) is introduced to exploit the rich information of other classes . Kim et al . 
( 2020a ) produces new minor class data by taking gradient steps to translate major class data into minor class data . Wang et al . ( 2021 ) synthesizes minor class data by combining features of minor class data with feature displacements of other data . To extend these approaches to the graph domain , structural aspects of graph have to be considered when generating minor instances . In node classification , imbalance handling works ( Zhou et al. , 2018 ; Wang et al. , 2020b ; Shi et al. , 2020 ; Zhao et al. , 2021 ; Qu et al. , 2021 ) are proposed to exploit structural information in graphs . DR-GCN ( Shi et al. , 2020 ) produces the virtual minor nodes generated by additional conditional GAN and regularizes the features of virtual nodes close to adjacent nodes . GraphSMOTE ( Zhao et al. , 2021 ) generates synthetic minor nodes by interpolating two minor class nodes and a ( pretrained ) edge predictor determines the connectivity of synthetized nodes between synthetized nodes and neighbors of two source minor nodes . ImGAGN ( Qu et al. , 2021 ) synthesizes minor nodes by interpolating features among whole minor nodes with the generated weight matrix . Then the synthesized nodes are connected to the original minor nodes if weights in the matrix are larger than a fixed threshold . As GraphSMOTE and ImGAGN only utilize nodes of the identical minor class to generate minor nodes , the sample diversity of synthesized nodes would be significantly constrained . Moreover , ImGAGN mainly targets binary classification and its extension to multi-class classification is non-trivial since an independent generator is required per each class . Compared to these approaches , our GraphENS utilizes whole nodes to synthesize minor nodes , thus our method outperforms baselines when the number of minor classes is low in Section 5.3 ( Table 4 ) . 2.2 GRAPH NEURAL NETWORKS FOR NODE CLASSIFICATION . We briefly introduce GNNs for node classification tasks . 
Let us first define a graph G(V, E), where V is the set of nodes and E is the set of edges between pairs of nodes. Let X ∈ R^{|V|×d} be the node feature matrix whose i-th row represents the d-dimensional feature of the i-th node. N(v) = {u ∈ V | {u, v} ∈ E} is the set of nodes directly connected to v. In node classification, each node in the graph corresponds to a class y ∈ {1, ..., C}, where C is the number of classes. In this paper, we consider several variants of GNNs that consist of three differentiable functions: 1) a message function m_l, 2) a permutation-invariant message aggregation function φ_l, and 3) a node update function h_l. Let x_v^{(l)} be the latent vector of node v at layer l; to simplify notation for the recursive definition of GNNs, we use x_v^{(0)} to denote the input node feature. At each GNN layer, node features are updated as

$$x_v^{(l+1)} = h_l\Big(x_v^{(l)},\; \phi_l\big(\{\, m_l(x_v^{(l)}, x_u^{(l)}, e_{v,u}) \mid u \in N(v) \,\}\big)\Big)$$

to incorporate adjacent node embeddings. By passing the aggregated node feature to a linear classifier, we obtain the prediction ŷ = f(x_v) for node v, where ŷ_c = P(y = c | x_v). As a representative example, a layer of the Graph Convolutional Network (GCN) (Kipf & Welling, 2017) is defined as

$$x_v^{(l+1)} = \Theta \sum_{u \in N(v) \cup \{v\}} \frac{e_{v,u}}{\sqrt{\hat d_u \hat d_v}}\, x_u^{(l)}, \qquad \hat d_v = 1 + \sum_{u \in N(v)} e_{v,u},$$

where e_{v,u} is the weight of edge {u, v} ∈ E and Θ is the filter parameter matrix. There are several variants depending on how these key components are designed, but they are beyond the scope of our work.
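To make the GCN propagation rule concrete, here is a minimal NumPy sketch of a single layer under the simplifying assumption of unit edge weights (e_{v,u} = 1). The function name and argument layout are illustrative, not the authors' code.

```python
import numpy as np

def gcn_layer(X, edges, Theta):
    """One GCN layer: x_v' = Theta-transform of sum over u in N(v) ∪ {v} of
    e_{v,u} / sqrt(d̂_u d̂_v) * x_u, with all edge weights set to 1.

    X: (n, d) node features; edges: undirected (u, v) index pairs;
    Theta: (d, d_out) filter parameters.
    """
    n = X.shape[0]
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    d_hat = 1.0 + A.sum(axis=1)      # d̂_v = 1 + sum of incident edge weights
    A_hat = A + np.eye(n)            # self-loop: u ranges over N(v) ∪ {v}
    norm = A_hat / np.sqrt(np.outer(d_hat, d_hat))
    return norm @ X @ Theta          # aggregate neighbors, then transform
```

Stacking such layers and feeding the final node embeddings to a linear classifier yields the node-level predictions described above.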
The paper proposes an imbalanced classification strategy for GNNs. Unlike traditional imbalanced classification, nodes in a graph depend on their neighbors, so simply over-sampling or re-weighting the minor-class instances would not work. In particular, the authors recognize and analyze the severity of the "neighbor memorization" problem, which is identified as the key cause of overfitting in GNNs. The proposed method alleviates the neighbor memorization problem by synthesizing ego networks. Extensive experiments are conducted to evaluate the proposed method.
SP:f47567af5b9d8a0fee6b5ae908a12327c0016d97
On Pseudo-Labeling for Class-Mismatch Semi-Supervised Learning
1 INTRODUCTION

[Figure 1: Realistic Semi-Supervised Learning may simultaneously contain unlabeled ID and OOD data. ID data come from the same classes as labeled data, while OOD data come from classes that are not seen in labeled data.]

Deep Semi-Supervised Learning (SSL) methods are proposed to reduce the dependency on massive labeled data by utilizing a large amount of cheap, accessible unlabeled data. Pseudo-Labeling (Lee et al., 2013) is a simple but effective and widely used method that creates pseudo-labels according to the predictions of the training model itself; SSL can then be transformed into standard supervised learning. Other representative SSL methods are consistency regularization (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2019), holistic methods (Berthelot et al., 2019; Sohn et al., 2020), and generative methods (Kingma et al., 2014). The recent development of SSL shows that these methods have achieved performance competitive with supervised learning methods. However, all of these SSL methods achieve their good results under the assumption that the unlabeled data are drawn from the same distribution as the labeled data. This assumption is easily violated in real-world applications. One common case is that some unlabeled data come from unseen classes. For example, as illustrated in Figure 1, in image classification we can collect many unlabeled images from the internet, but they usually cover broader category concepts than the labeled data. Oliver et al. (2018) have shown that under such class-mismatched conditions, the performance of traditional SSL methods is damaged. To deal with this problem, several methods have been proposed, including filtering out OOD data (Yu et al., 2020; Chen et al., 2020), down-weighting OOD data (Chen et al., 2020), and re-using OOD data via neural style transfer (Luo et al., 2021) or self-supervised learning (Huang et al., 2021). Although these methods achieve good results, why OOD data damage performance and how OOD data could help remain unclear. Here, we focus on analyzing Pseudo-Labeling (PL) in class-mismatched SSL and give some answers to these two questions. In this paper, we empirically analyze PL in the class-mismatched SSL setting. These experiments aim to answer the following questions: (1) How do OOD data influence PL? (2) What are better pseudo-labels for OOD data? For question (1), we investigate the pseudo-labels created by PL. The main finding is that pseudo-labels on OOD data tend to be imbalanced, while on ID data they remain balanced. We further show that PL's performance is damaged by such imbalance on OOD data. For question (2), several strategies for labeling OOD data are investigated. We conclude that it is beneficial to label OOD data as classes different from the ID classes, and that performance can be further improved when the pseudo-labels partition unlabeled OOD data into their semantic clusters. Based on this experimental analysis, we propose a two-branched model called the Υ-Model, which processes unlabeled data according to their confidence scores on the ID classes. The first branch performs re-balanced pseudo-labeling on high-confidence data. It exploits the property that pseudo-labels on OOD data are imbalanced, truncating the number of pseudo-labeled data for each class to the minimum per-class count. This procedure filters out many OOD data and also prevents the negative effect of imbalanced pseudo-labels. In the other branch, semantic exploration clustering is performed on low-confidence data. These data are considered OOD, and their semantics are mined by clustering them into different partitions over extra classes. The clustering result provides better pseudo-labels for these OOD data than vanilla PL.
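The truncation step of the re-balanced pseudo-labeling branch can be sketched as follows. This is a hedged illustration of the idea only: `rebalance_pseudo_labels` is a hypothetical helper, and the paper's exact selection rule may differ. Here each class keeps its most confident samples, truncated to the minimum per-class count.

```python
import numpy as np

def rebalance_pseudo_labels(confidences, labels, num_classes):
    """Truncate pseudo-labeled data per class to the size of the smallest
    (non-empty) class, keeping the most confident samples of each class.

    Returns sorted indices of kept samples. Illustrative sketch, not the
    authors' implementation.
    """
    counts = np.bincount(labels, minlength=num_classes)
    k = counts[counts > 0].min()               # minimum per-class count
    keep = []
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        # keep the k most confident samples assigned to class c
        top = idx[np.argsort(confidences[idx])[::-1][:k]]
        keep.extend(top.tolist())
    return np.array(sorted(keep))
```

Because pseudo-labels on OOD data pile up on a few classes, truncating every class to the minimum count discards much of that excess, filtering out many OOD samples while keeping the retained pseudo-label set perfectly balanced.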
Experiments on different SSL benchmarks show that our model achieves steady improvement over the supervised baseline. We summarize our contributions as follows:

• We analyze the Pseudo-Labeling model on ID and OOD data. The findings lead to two primary conclusions: (1) the imbalance of pseudo-labels on OOD data damages PL's performance; (2) the best pseudo-labels for unlabeled OOD data are those that differ from the ID classes and partition the OOD data into their semantic clusters.
• We propose the two-branched Υ-Model. One branch re-balances pseudo-labels on the ID classes and filters out OOD data. The other branch explores the semantics of OOD data by clustering them into extra classes.
• Experiments on different SSL benchmarks empirically validate the effectiveness of our model.

2 PRELIMINARY

2.1 CLASS-MISMATCHED SSL

As in the standard SSL problem, the training set of the class-mismatched SSL problem contains n ID labeled samples D_l = {(x_i^l, y_i^l)}_{i=1}^n and m unlabeled samples D_u = {x_i^u}_{i=1}^m (usually m ≫ n), with y_i^l ∈ Y_ID = {1, ..., K_ID}. Different from standard SSL, the underlying ground truth y^u of an unlabeled sample may differ from the labeled classes, i.e., y_j^u ∈ Y_ID ∪ Y_OOD, where Y_OOD = {K_ID + 1, ..., K_ID + K_OOD}. The goal of class-mismatched SSL is to correctly classify ID samples into Y_ID using the labeled set of ID samples and an unlabeled set that possibly contains OOD samples.

2.2 PSEUDO-LABELING

Pseudo-Labeling (PL) leverages the idea that we can use the model itself to obtain artificial labels for unlabeled data (Lee et al., 2013). PL first performs supervised learning on the labeled data to get a pre-trained model f, which outputs the probability of each ID class. It then creates a pseudo-label for each unlabeled sample:

$$y' = \begin{cases} \arg\max_{y \in Y_{ID}} f(y \mid x), & c(x) > \tau \\ \text{reject}, & \text{otherwise}, \end{cases} \qquad (1)$$

$$c(x) = \max_{y \in Y_{ID}} f(y \mid x), \qquad (2)$$

where c(x) is the confidence score for x.
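Equations (1)-(2) amount to a thresholded argmax over the ID classes; a minimal NumPy sketch follows, where the function name is illustrative and `-1` stands in for the "reject" outcome.

```python
import numpy as np

def create_pseudo_labels(probs, tau=0.95):
    """Vanilla pseudo-labeling (Eqs. 1-2): label = argmax over ID classes if
    the confidence c(x) = max_y f(y|x) exceeds tau, otherwise reject (-1).

    probs: (m, K_ID) array of predicted ID-class probabilities.
    """
    conf = probs.max(axis=1)           # c(x), Eq. (2)
    labels = probs.argmax(axis=1)      # argmax_y f(y|x), Eq. (1)
    labels[conf <= tau] = -1           # reject low-confidence samples
    return labels, conf
```

Accepted samples (label ≥ 0) are then merged into the labeled set for the next supervised training generation.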
All pseudo-labeled unlabeled data are treated as labeled data for the next supervised learning generation. PL alternates between supervised learning and pseudo-label creation until a stopping condition is met.

3 ANALYSIS OF PSEUDO-LABELING IN CLASS-MISMATCHED SSL

In class-mismatched SSL, vanilla PL can only create pseudo-labels over the ID classes, even for OOD data. In this section, we analyze how these OOD data influence vanilla PL and what better pseudo-labels for them would be.

3.1 SETUP

We use CIFAR-10 (Krizhevsky et al., 2009) as our experimental dataset. The dataset contains 10 categories: 6 animal classes and 4 vehicle classes. Following Guo et al. (2020), we perform a classification task on the animal classes (denoted as classes 0-5) and select 400 images per class to construct the labeled dataset, i.e., 2,400 labeled examples. The other 4 vehicle classes are taken as OOD classes (denoted as classes 6-9). 20,000 images are randomly selected from all 10 classes as the unlabeled dataset. We vary the ratio of unlabeled images to modulate the class distribution mismatch. For example, a mismatch extent of 50% means that half of the unlabeled data comes from animal classes and the other half from vehicle classes. We use Wide-ResNet-28-2 (Zagoruyko & Komodakis, 2016) as our backbone. We also adopt data augmentation techniques including random resized crop, random color distortion, and random horizontal flip. We train the network for 400 epochs. In each epoch, we iterate over the unlabeled set and randomly sample labeled data; each unlabeled and labeled minibatch contains 128 samples. We adopt Adam as the optimization algorithm with an initial learning rate of 3 × 10^{-3}. We report the accuracy averaged over the last 20 epochs, pretending there is no reliable (i.e., sufficiently large) validation set for early stopping (Oliver et al., 2018).

3.2 IMBALANCE OF PSEUDO-LABELS ON OOD DATA
In this section, we analyze both the pre-trained model that creates the first set of pseudo-labels and the final model trained by Pseudo-Labeling.

Pre-trained model. First, we plot the distribution of confidence scores on OOD data and ID data. Figure 2(a) shows that, consistent with findings in OOD detection (Hendrycks & Gimpel, 2017), the proportion of high-confidence data is larger among ID data than among OOD data. However, in class-mismatched SSL the unlabeled data are in much larger quantities, so when the class mismatch ratio is large there are still quite a few OOD data with high confidence scores. We will show in the final-model experiments that these high-confidence OOD data damage performance. Second, we study the pseudo-labels on both ID and OOD data. Figure 2(b) shows that pseudo-labels on ID data are balanced. However, they are rather imbalanced on OOD data (Figure 2(c)). This is attributed to the different distributions the two kinds of data are drawn from: samples with a certain pattern are biased toward certain classes. ID data spread over the ID classes uniformly because they are sampled from the same distribution as the labeled data, whereas OOD data are unlikely to spread over the ID classes uniformly, since they have little relevance to the ID data.

Final Pseudo-Labeling model. As an old saying goes, a good beginning is half done. The imbalance of the first set of pseudo-labels, however, starts the PL model badly when there is a large portion of OOD data, putting the model in danger of imbalanced learning. We run vanilla PL and show that the imbalance of pseudo-labels harms performance. Figure 3(a) shows the performance of the PL model under different OOD ratios. In accordance with Oliver et al. (2018), the PL model degrades as the portion of OOD data grows. Figure 3(b) displays the confusion matrix of the PL model on the whole test set containing both ID and OOD data; since only 6 classes are known to the model, the confusion matrix is rectangular.
We can see that almost all the OOD samples (classes 6-9) are classified as class 0, which means the imbalance effect on OOD data worsens as PL training proceeds. The likely reason is that, unlike pseudo-labeling on ID data, the supervision from labeled data cannot help correct pseudo-labels on OOD data, so the imbalance continuously deteriorates. The imbalance on OOD data also hurts classification performance on ID data: samples of the major class (class 0) overwhelm the loss and gradient, leading to a degenerate model (Lin et al., 2017). Indeed, the PL model mistakenly classifies much of the data from classes 1-5 into class 0.

3.3 PSEUDO-LABELING STRATEGY FOR OOD DATA

The previous section shows that OOD data hurt the performance of vanilla PL. This raises the question: assuming we already know which data are OOD, how should we use them? Is omitting them the best option? If not, what are better pseudo-labels for them? To answer these questions, we investigate four strategies for creating pseudo-labels for OOD data:

• Baseline. Omits all the OOD data and trains only on the labeled ID data.
• Re-Assigned Labeling. Assigns the data of each OOD class to an ID class, ensuring that different OOD classes are assigned to different ID classes so that the semantic distinctions between OOD classes are preserved. For example, (ship, truck, airplane, automobile) can be assigned to (bird, cat, deer, dog). This strategy can be seen as training a classifier of "super-classes".
• Open-Set Labeling. Named after the related setting of Open-Set Recognition (Scheirer et al., 2013; Bendale & Boult, 2016), this strategy treats all OOD data as one unified class K_ID + 1, so the model outputs probabilities over K_ID + 1 classes.
• Oracle Labeling. Uses the ground-truth labels of the OOD data, so the model outputs probabilities over K_ID + K_OOD classes.
Note that Open-Set Labeling and Oracle Labeling can classify samples into more than K_ID classes. During evaluation, however, we only classify samples into the K_ID ID classes. For these models, the predicted label ŷ of a test sample x is calculated as:

$$\hat y(x) = \arg\max_{y \in Y_{ID}} f(y \mid x) \qquad (3)$$

The overall comparison of the four strategies is illustrated in Figure 4. We also report test accuracy at a class mismatch ratio of 100%. From Figure 4, we can draw several important conclusions. (1) Re-Assigned Labeling slightly underperforms the baseline.1 This indicates that assigning samples of OOD classes to ID classes does not help the model distinguish between ID classes, even if we somehow know which OOD data are semantically different. It also suggests that performing vanilla PL on OOD data may never help, even if done perfectly. (2) Open-Set Labeling outperforms the baseline, which indicates that labeling OOD data as a class other than the ID classes improves performance. (3) Oracle Labeling improves performance further and achieves the best result among the four strategies. This means that, beyond labeling OOD data as extra classes, if we can further assign OOD data with different semantics to different classes, the model achieves even better results.
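Equation (3) simply restricts the argmax to the first K_ID columns of the model's output, regardless of how many extra classes the model was trained with. A minimal sketch (illustrative function name):

```python
import numpy as np

def predict_id_only(probs, num_id_classes):
    """Eq. (3): even when the model outputs probabilities over more than K_ID
    classes (as in Open-Set or Oracle Labeling), test-time predictions are
    taken as the argmax over the first K_ID (ID) columns only."""
    return probs[:, :num_id_classes].argmax(axis=1)
```

This keeps the evaluation protocol identical across all four strategies, so their test accuracies on the ID classes are directly comparable.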
This paper works on the class-mismatch semi-supervised learning problem, where the unlabeled samples may include class labels that do not appear in the labeled data, i.e., out-of-distribution (OOD) data, in addition to the class labels that do appear in the labeled data, i.e., in-distribution (ID) data. It is known that OOD data included in the unlabeled data can degrade the performance of semi-supervised learning algorithms. This paper focuses on one semi-supervised learning method, pseudo-labeling. The paper first investigates how using pseudo-labels in the class-mismatch semi-supervised setup can be problematic. The experiments show that the proportion of high-confidence data is larger among ID data than among OOD data. Experiments also show that pseudo-labels (w.r.t. class predictions) are balanced on ID data while unbalanced on OOD data at the beginning of training (with a pretrained model). This phenomenon becomes exaggerated by the end of training, where almost all of the OOD data is predicted into a single ID class. Assuming the labels of the unlabeled data are known, the paper performs further experiments comparing a few labeling strategies, and finds that it is ideal to have access to the underlying class labels of the OOD data and solve the problem as a multi-class classification problem with K_ID + K_OOD (number of ID plus OOD) classes. Based on these observations, the paper proposes an Upsilon (Υ) model that consists of re-balanced pseudo-labeling (RPL) and semantic exploration clustering (SEC). RPL aims to balance the distribution of pseudo-labels, based on the observation that pseudo-labels are balanced on ID data and imbalanced on OOD data. SEC is a workaround for not having ground-truth labels of the OOD data.
Experiments show comparisons with both traditional SSL methods and class-mismatch SSL methods, and show that the proposed Upsilon model works better than these baselines. An ablation study shows why the components within the Upsilon model are necessary.
SP:90f26033b49add69e6a959b1ac469ab5938ab43f
This paper focuses on the problem of semi-supervised learning (SSL) with class mismatch. The authors first empirically analyze Pseudo-Labeling (PL) in class-mismatched SSL and propose a new method that consists of two components: Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC). Experiments show that the proposed method achieves steady improvement over the supervised baseline and state-of-the-art performance under all class mismatch ratios on different benchmarks.
On Pseudo-Labeling for Class-Mismatch Semi-Supervised Learning
1 INTRODUCTION . [Figure 1: Realistic semi-supervised learning may simultaneously contain unlabeled ID and OOD data. ID data come from the same classes as the labeled data, while OOD data come from classes that are not seen in the labeled data.] Deep Semi-Supervised Learning (SSL) methods reduce the dependency on massive labeled data by utilizing cheap, accessible unlabeled data. Pseudo-Labeling (Lee et al., 2013) is a simple but effective and widely used method that creates pseudo-labels from the predictions of the training model itself; SSL is thereby transformed into standard supervised learning. Other representative SSL methods include consistency regularization (Laine & Aila, 2017; Tarvainen & Valpola, 2017; Miyato et al., 2019), holistic methods (Berthelot et al., 2019; Sohn et al., 2020), and generative methods (Kingma et al., 2014). Recent developments show that these methods achieve performance competitive with supervised learning. However, all of these SSL methods obtain their good results under the assumption that the unlabeled data are drawn from the same distribution as the labeled data. This assumption is easily violated in real-world applications. One common case is that some unlabeled data come from unseen classes. For example, as illustrated in Figure 1, in image classification we can collect many unlabeled images from the internet, but they usually cover broader category concepts than the labeled data. Oliver et al. (2018) have shown that under such class-mismatched conditions, the performance of traditional SSL methods is damaged. Several methods have been proposed to deal with this problem, including filtering out OOD data (Yu et al., 2020; Chen et al., 2020), down-weighting OOD data (Chen et al., 2020), and re-using OOD data via neural style transfer (Luo et al., 2021) or self-supervised learning (Huang et al., 2021). Although these methods achieve good results, it remains unclear why OOD data damage performance and how OOD data could help. Here, we focus on analyzing Pseudo-Labeling (PL) in class-mismatched SSL and give some answers to these two questions. In this paper, we empirically analyze PL in the class-mismatched SSL setting. The experiments aim to answer the following questions: (1) How do OOD data influence PL? (2) What are better pseudo-labels for OOD data? For question (1), we investigate the pseudo-labels created by PL. The main finding is that pseudo-labels on OOD data tend to be imbalanced, while on ID data they remain balanced. We further show that PL's performance is damaged by this imbalance on OOD data. For question (2), several strategies for labeling OOD data are investigated. We conclude that it is beneficial to label OOD data as classes different from the ID classes, and that performance can be further improved when the pseudo-labels partition unlabeled OOD data into their semantic clusters. Based on this experimental analysis, we propose a two-branched model called the Υ-Model, which processes unlabeled data according to their confidence scores on ID classes. The first branch performs re-balanced pseudo-labeling on high-confidence data. It exploits the imbalance of pseudo-labels on OOD data, truncating the number of pseudo-labeled data for each class to the minimum class count. This procedure filters out many OOD data and also prevents the negative effect of imbalanced pseudo-labels. The other branch performs semantic exploration clustering on low-confidence data. These data are considered OOD, and their semantics are mined by clustering them into different partitions on extra classes. The clustering result provides better pseudo-labels for these OOD data than vanilla PL.
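The re-balancing truncation performed by the first branch can be sketched in NumPy as follows. This is our own illustrative implementation (the function name and interface are hypothetical, not from the paper): each ID class keeps only its most confident pseudo-labeled samples, truncated to the size of the smallest class.

```python
import numpy as np

def rebalance_pseudo_labels(confidences, labels, num_id_classes):
    """Re-balanced Pseudo-Labeling (illustrative sketch): for every ID
    class, keep only its most confident pseudo-labeled samples, truncating
    each class to the size of the smallest class. Returns the indices of
    the samples that survive re-balancing."""
    per_class = [np.flatnonzero(labels == k) for k in range(num_id_classes)]
    n_min = min(len(idx) for idx in per_class)      # smallest class count
    kept = []
    for idx in per_class:
        order = idx[np.argsort(-confidences[idx])]  # most confident first
        kept.extend(order[:n_min].tolist())
    return np.sort(np.array(kept))

# Imbalanced pseudo-labels: class 0 dominates, class 2 has one sample.
labels = np.array([0, 0, 0, 1, 1, 2])
conf = np.array([0.9, 0.8, 0.7, 0.95, 0.6, 0.5])
print(rebalance_pseudo_labels(conf, labels, 3))  # -> [0 3 5]
```

Because a dominant class (here class 0) is cut down to the minimum count, many of the OOD samples that were absorbed into it are discarded, which is the filtering effect described above.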
Experiments on different SSL benchmarks show that our model achieves steady improvement over the supervised baseline. We summarize our contributions as follows: • We analyze the Pseudo-Labeling model on ID and OOD data. The findings lead to two primary conclusions: (1) imbalance of pseudo-labels on OOD data damages PL's performance; (2) the best pseudo-labels for unlabeled OOD data are those that differ from the ID classes and partition the OOD data into their semantic clusters. • We propose a two-branched Υ-Model. One branch re-balances pseudo-labels on ID classes and filters out OOD data. The other branch explores the semantics of OOD data by clustering on extra classes. • Experiments on different SSL benchmarks empirically validate the effectiveness of our model. 2 PRELIMINARY . 2.1 CLASS-MISMATCHED SSL . As in the standard SSL problem, the training dataset of the class-mismatched SSL problem contains $n$ labeled ID samples $D_l = \{(x_i^l, y_i^l)\}_{i=1}^{n}$ and $m$ unlabeled samples $D_u = \{x_j^u\}_{j=1}^{m}$ (usually $m \gg n$), with $y_i^l \in Y_{ID} = \{1, \ldots, K_{ID}\}$. Different from standard SSL, the underlying ground-truth label $y^u$ of an unlabeled sample may fall outside the labeled classes, i.e., $y_j^u \in Y_{ID} \cup Y_{OOD}$, where $Y_{OOD} = \{K_{ID}+1, \ldots, K_{ID}+K_{OOD}\}$. The goal of class-mismatched SSL is to correctly classify ID samples into $Y_{ID}$ using a labeled set of ID samples and an unlabeled set that possibly contains OOD samples. 2.2 PSEUDO-LABELING . Pseudo-Labeling (PL) leverages the idea that the model itself can be used to obtain artificial labels for unlabeled data (Lee et al., 2013). PL first performs supervised learning on the labeled data to get a pre-trained model $f$, which outputs the probability of each ID class. It then creates a pseudo-label for each unlabeled sample:

$$y' = \begin{cases} \arg\max_{y \in Y_{ID}} f(y|x), & c(x) > \tau \\ \text{reject}, & \text{otherwise}, \end{cases} \qquad (1)$$

$$c(x) = \max_{y \in Y_{ID}} f(y|x), \qquad (2)$$

where $c(x)$ is the confidence score for $x$.
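The pseudo-label rule of Equations (1)-(2) can be sketched in a few lines of NumPy; the threshold value below is illustrative, and we encode "reject" as the label -1.

```python
import numpy as np

def create_pseudo_labels(probs, tau=0.95):
    """Create pseudo-labels from model predictions over the K_ID classes.

    probs: (N, K_ID) array of predicted class probabilities f(y|x).
    Returns an (N,) int array of pseudo-labels, with -1 meaning "reject"
    (the confidence score max_y f(y|x) did not exceed the threshold tau).
    """
    confidence = probs.max(axis=1)   # Eq. (2): c(x) = max_y f(y|x)
    labels = probs.argmax(axis=1)    # Eq. (1): argmax_y f(y|x)
    labels[confidence <= tau] = -1   # reject low-confidence samples
    return labels

# Two confident predictions and one uncertain one.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.02, 0.96, 0.02],
                  [0.40, 0.35, 0.25]])
print(create_pseudo_labels(probs, tau=0.95))  # -> [ 0  1 -1]
```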
All the pseudo-labeled unlabeled data are then treated as labeled data for the next round of supervised learning. PL iteratively alternates between supervised learning and pseudo-label creation until a stopping condition is met. 3 ANALYSIS OF PSEUDO-LABELING IN CLASS-MISMATCHED SSL . In class-mismatched SSL, vanilla PL can only create pseudo-labels on ID classes, even for OOD data. In this section, we analyze how these OOD data influence vanilla PL and what better pseudo-labels for them would be. 3.1 SETUP . We use CIFAR-10 (Krizhevsky et al., 2009) as our experimental dataset. The dataset contains 10 categories: 6 animal classes and 4 vehicle classes. Following Guo et al. (2020), we perform a classification task on the animal classes (denoted as classes 0-5) and select 400 images per class to construct the labeled dataset, i.e., 2,400 labeled examples. The 4 vehicle classes are taken as OOD classes (denoted as classes 6-9). 20,000 images are randomly selected from all 10 classes as the unlabeled dataset. We vary the ratio of unlabeled images to modulate the class distribution mismatch. For example, a mismatch ratio of 50% means half of the unlabeled data come from animal classes and the other half from vehicle classes. We use Wide-ResNet-28-2 (Zagoruyko & Komodakis, 2016) as our backbone. We also adopt data augmentation techniques including random resized crop, random color distortion, and random horizontal flip. We train the network for 400 epochs. In each epoch, we iterate over the unlabeled set and randomly sample labeled data; each unlabeled and labeled minibatch contains 128 samples. We adopt Adam as the optimization algorithm with initial learning rate $3 \times 10^{-3}$. We report the averaged accuracy of the last 20 epochs, assuming there is no reliable validation set (it would be too small) to perform early stopping (Oliver et al., 2018). 3.2 IMBALANCE OF PSEUDO-LABELS ON OOD DATA .
In this section, we analyze the pre-trained model that creates the first set of pseudo-labels, and the final model trained by Pseudo-Labeling. Pre-trained model. First, we draw the distribution of confidence scores on OOD data and ID data. Figure 2(a) shows that, consistent with observations in OOD detection (Hendrycks & Gimpel, 2017), the proportion of high-confidence samples is larger among ID data than among OOD data. However, in class-mismatched SSL the unlabeled data are in much larger quantities: when the class-mismatch ratio is large, there are still quite a few OOD data with high confidence scores. We will show in the final-model experiments that these high-confidence OOD data damage performance. Secondly, we study pseudo-labels on both ID and OOD data. Figure 2(b) shows that pseudo-labels on ID data are balanced. However, they are rather imbalanced on OOD data (Figure 2(c)). This is attributed to the different distributions the two kinds of data are drawn from: samples with a certain pattern are biased toward certain classes. ID data spread over the ID classes uniformly because they are sampled from the same distribution as the labeled data. OOD data, in contrast, have little relevance to the ID data, so it is unlikely that their pseudo-labels cover the ID classes uniformly. Final Pseudo-Labeling model. As the saying goes, a good beginning is half done; conversely, the imbalance of the first set of pseudo-labels starts the PL model badly when there is a large portion of OOD data, putting the model in danger of imbalanced learning. We run vanilla PL and show that the imbalance of pseudo-labels harms performance. Figure 3(a) shows the performance of the PL model with different OOD ratios. In accord with Oliver et al. (2018), the PL model degrades as the portion of OOD data grows. Figure 3(b) displays the confusion matrix of the PL model on the whole test set containing both ID and OOD data. Since only 6 classes are known to the model, the confusion matrix is rectangular.
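The imbalance diagnostic described here is simply a per-class histogram of the pseudo-labels, computed separately for ID and OOD data. A minimal sketch with illustrative label arrays (the numbers below are made up to mirror Figures 2(b)-(c), not the paper's measurements):

```python
import numpy as np

def pseudo_label_histogram(labels, num_classes):
    """Per-class counts of pseudo-labels, ignoring rejected samples (-1)."""
    labels = labels[labels >= 0]
    return np.bincount(labels, minlength=num_classes)

# Illustrative pseudo-labels: roughly uniform on ID data,
# collapsed onto one class on OOD data.
id_labels  = np.array([0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5])
ood_labels = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, -1])
print(pseudo_label_histogram(id_labels, 6))   # -> [2 2 2 2 2 2]
print(pseudo_label_histogram(ood_labels, 6))  # -> [9 1 1 0 0 0]
```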
We can see that almost all the OOD samples (classes 6-9) are classified as class 0, which means the imbalance on OOD data gets even worse as PL training goes on. A possible reason is that, unlike pseudo-labeling on ID data, the supervision from labeled data cannot help correct pseudo-labels on OOD data, so the imbalance continuously deteriorates. The imbalance on OOD data also harms classification performance on ID data: samples of the majority class (class 0) overwhelm the loss and gradient, leading to a degenerate model (Lin et al., 2017). Indeed, the PL model mistakenly classifies much of the data from classes 1-5 into class 0. 3.3 PSEUDO-LABELING STRATEGY FOR OOD DATA . The previous section shows that OOD data hurt the performance of vanilla PL. This raises the question: assuming we already know which data are OOD, how should we use them? Is omitting them the best option? If not, what are better pseudo-labels for them? To answer these questions, we investigate four strategies for creating pseudo-labels for OOD data: • Baseline. This baseline omits all the OOD data and trains only on the labeled ID data. • Re-Assigned Labeling. This strategy assigns the data of each OOD class to an ID class, ensuring that different OOD classes are assigned to different ID classes so that the semantic distinctions between OOD classes are preserved. For example, (ship, truck, airplane, automobile) can be assigned to (bird, cat, deer, dog). This strategy can be seen as training a classifier of "super-classes". • Open-Set Labeling. This strategy is named after the related setting of Open-Set Recognition (Scheirer et al., 2013; Bendale & Boult, 2016). It treats all OOD data as one unified class $K_{ID}+1$, so the model outputs probabilities over $K_{ID}+1$ classes. • Oracle Labeling. This strategy uses the ground-truth labels of the OOD data, so the model outputs probabilities over $K_{ID}+K_{OOD}$ classes.
Note that Open-Set Labeling and Oracle Labeling can classify samples into more than $K_{ID}$ classes. During evaluation, however, we only classify samples into the $K_{ID}$ ID classes. For these models, the predicted label $\hat{y}$ of a test sample $x$ is calculated as

$$\hat{y}(x) = \arg\max_{y \in Y_{ID}} f(y|x). \qquad (3)$$

The overall comparison of the four strategies is illustrated in Figure 4, where we report test accuracy at a class-mismatch ratio of 100%. From Figure 4 we can draw several important conclusions. (1) Re-Assigned Labeling slightly underperforms the baseline. This indicates that assigning samples of OOD classes to ID classes does not help the model distinguish between ID classes, even if we somehow know which OOD data are semantically different. It also suggests that performing vanilla PL on OOD data may never help, even if done perfectly. (2) Open-Set Labeling outperforms the baseline, indicating that performance improves if we label OOD data as a class other than the ID classes. (3) Oracle Labeling improves performance further and achieves the best result among the four strategies. This means that, beyond labeling OOD data as extra classes, the model achieves better results if OOD data with different semantics are further assigned to different classes.
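The evaluation rule of Equation (3) amounts to restricting the argmax to the first $K_{ID}$ output columns, regardless of how many extra classes the model was trained with. A small NumPy sketch (the probability values are illustrative):

```python
import numpy as np

def predict_id_class(probs, num_id_classes):
    """Equation (3): even if the model outputs probabilities over
    K_ID + K_OOD classes, evaluation restricts the argmax to ID classes."""
    return probs[:, :num_id_classes].argmax(axis=1)

# A model over 6 ID classes + 1 extra (open-set) class; the extra class
# has the highest probability but is ignored at evaluation time.
probs = np.array([[0.1, 0.3, 0.05, 0.05, 0.05, 0.05, 0.4]])
print(predict_id_class(probs, 6))  # -> [1]
```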
This paper studies the semi-supervised learning task when the unlabeled set contains out-of-distribution data from other classes. Several interesting issues of class-mismatched SSL are studied, including the reasons for the performance degradation of PL on OOD data and how to better pseudo-label OOD data to provide a more balanced semantic distribution. To address these problems, the Υ-Model, consisting of two components, Re-balanced Pseudo-Labeling (RPL) and Semantic Exploration Clustering (SEC), is proposed. The authors conduct several experiments on datasets such as CIFAR-10 and SVHN to show the effectiveness of the proposed method.
SP:90f26033b49add69e6a959b1ac469ab5938ab43f
Reward Shifting for Optimistic Exploration and Conservative Exploitation
1 INTRODUCTION . While reward shaping is a well-established practice in reinforcement learning applications and has a long-standing history (Randløv & Alstrøm, 1998; Laud, 2004), specifying a reward that incentivizes the learning agent requires domain knowledge and a deep understanding of the task (Vinyals et al., 2019; Akkaya et al., 2019; Berner et al., 2019; Elbarbari et al., 2021). Even with careful design and tuning, learning with a shaped reward intended to accelerate learning may, on the contrary, hinder performance by inducing sub-optimal behaviors (Florensa et al., 2017; Plappert et al., 2018). Although Ng et al. (1999) theoretically show that the optimal policy remains unchanged under a special form of reward transformation, and the later work of Wiewiora et al. (2003) proposes a framework to guide policies with prior knowledge in the tabular setting, how these results carry over to recent Deep Reinforcement Learning (DRL) algorithms remains much less explored. In this work, we focus on the simplest case of reward shaping, the linear transformation, in value-based DRL (Sutton & Barto, 1998; Lillicrap et al., 2015; Mnih et al., 2015; Fujimoto et al., 2018b). We start by understanding how this specific kind of reward shaping works in value-based DRL algorithms. We show that reward shifting, the simplest reward transformation, is equivalent to engineering the initialization of the Q-function estimate, extending the discovery of Wiewiora et al. (2003) to function approximation settings. Based on this equivalence, we arrive at the key insight of this work: a positive reward shift leads to conservative exploitation, while a negative reward shift leads to curiosity-driven exploration.
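The invariance underlying this insight can be checked quickly on a random tabular MDP (our own sketch, not the paper's experiment): shifting every reward by a constant $c$ shifts the optimal Q-table by exactly $c/(1-\gamma)$ while leaving the greedy policy unchanged, so any behavioral difference under function approximation must come from the interaction with the network's initialization.

```python
import numpy as np

def value_iteration(R, P, gamma=0.9, iters=500):
    """Tabular value iteration. R: (S, A) rewards, P: (S, A, S) transition
    probabilities. Returns the (approximately) optimal Q-table."""
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = R + gamma * P @ V      # Bellman optimality backup
    return Q

rng = np.random.default_rng(0)
S, A, gamma, c = 4, 3, 0.9, -2.0   # c: constant reward shift
R = rng.normal(size=(S, A))
P = rng.random((S, A, S))
P /= P.sum(axis=-1, keepdims=True)

Q = value_iteration(R, P, gamma)
Qc = value_iteration(R + c, P, gamma)
print(np.allclose(Qc, Q + c / (1 - gamma)))  # True: Q* shifts by c/(1-γ)
print((Qc.argmax(1) == Q.argmax(1)).all())   # True: greedy policy unchanged
```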
We demonstrate the application of this insight to three downstream tasks: (1) for offline RL, we show that conservative exploitation can improve learning performance on top of off-the-shelf algorithms; (2) for the online RL setting, we show that multiple value functions with different reward-shifting constants can be used to trade off exploration and exploitation, improving learning efficiency; (3) finally, we introduce a simple yet crucial improvement over a prevailing curiosity-based exploration method, Random Network Distillation (Burda et al., 2018b), making it compatible with value-based DRL algorithms. We evaluate our idea on a variety of tasks, including both continuous and discrete action-space control, obtaining substantial improvements over previous baselines. Our contributions can be summarized as follows: 1. we introduce the key insight that reward shifting is equivalent to diversified Q-value network initialization, which can be used to boost both curiosity-driven exploration and conservative exploitation; 2. motivated by this insight, we present three scenarios where reward shifting is beneficial, namely offline conservative exploitation, online sample-efficient RL, and curiosity-driven exploration; 3. we demonstrate the effectiveness of the proposed method integrated with off-the-shelf baselines on both continuous and discrete control tasks. 2 PRELIMINARIES . 2.1 ONLINE RL . We follow a standard MDP formulation in the online RL setting, i.e., $M = \{S, A, \mathcal{T}, R, \rho_0, \gamma, T\}$, where $S \subset \mathbb{R}^d$ denotes the $d$-dimensional state space, $A$ is the action space ($|A| < \infty$ for discrete action spaces and $|A| = \infty$ for continuous control), $\mathcal{T}: S \times A \mapsto S$ is the transition dynamics, and $R: S \times A \mapsto \mathbb{R}$ is the reward function. $\rho_0$ denotes the initial state distribution, i.e., $\rho_0 = p(s_0)$, $\gamma$ is the discount factor, and $T$ is the episode horizon.
Online RL considers the problem of learning a policy $\pi \in \Pi: S \mapsto \Delta_A$ (or $\pi \in \Pi: S \mapsto A$ for a deterministic policy class) such that the expected cumulative reward in the Markov decision process is maximized, i.e.,

$$\pi = \arg\max_{\pi} \mathbb{E}_{a_t \sim \pi,\, s_{t+1} \sim \mathcal{T},\, s_0 \sim \rho_0} \sum_{t=0}^{T} \gamma^t r_t(s_t, a_t). \qquad (1)$$

In the online RL setting, an agent normally learns through trial and error (Sutton & Barto, 1998), either in an on-policy paradigm (Schulman et al., 2015; 2017; Cobbe et al., 2021) or an off-policy manner (Mnih et al., 2015; Lillicrap et al., 2015; Wang et al., 2016; Haarnoja et al., 2018; Fujimoto et al., 2018b). In this work, we focus on off-policy value-based methods, which are in general more sample-efficient. Specifically, our discussion assumes that policy learning is based on a learned Q-value function that approximates the cumulative reward an agent can gain in the remainder of an episode. The Q-value function is defined as $Q(s_t, a_t) = \mathbb{E}_{\pi, \mathcal{T}} \sum_{\tau=t}^{T} \gamma^{\tau-t} r(s_\tau, a_\tau)$ and can be approximated through the Bellman operator $\mathcal{B}Q(s, a) = r(s, a) + \gamma \mathbb{E}\, Q(s', a')$. For value-based methods, the (soft-)optimal policy is then produced by

$$\pi^*_\alpha(a|s) = \frac{\exp\left(\frac{1}{\alpha} Q^*(s, a)\right)}{\sum_{a'} \exp\left(\frac{1}{\alpha} Q^*(s, a')\right)}, \qquad (2)$$

where $Q^*$ is the optimal Q-value function. Setting the temperature parameter $\alpha$ close to 0 recovers the deterministic policy class; simplifying the notation, $\pi(s) = \arg\max_a Q^*(s, a)$. Algorithms like DPG (Silver et al., 2014) can be used to address the intractable analytical argmax that arises in continuous action spaces. We develop our work on top of the prevailing baseline algorithms DQN (Mnih et al., 2015), BCQ (Fujimoto et al., 2018a), and TD3 (Fujimoto et al., 2018b), and it is easy to extend to other baselines. 2.2 EXPLORATION AND CURIOSITY-DRIVEN METHODS .
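The soft-optimal policy of Equation (2) is a temperature-scaled softmax over Q-values; as $\alpha \to 0$ it collapses onto the greedy argmax. A minimal, numerically stabilized sketch (the Q-values are illustrative):

```python
import numpy as np

def soft_optimal_policy(q_values, alpha=1.0):
    """Equation (2): pi(a|s) proportional to exp(Q*(s, a) / alpha).
    A small alpha approaches the greedy policy pi(s) = argmax_a Q*(s, a)."""
    z = q_values / alpha
    z -= z.max()          # subtract the max to stabilize the softmax
    p = np.exp(z)
    return p / p.sum()

q = np.array([1.0, 2.0, 0.5])
print(soft_optimal_policy(q, alpha=1.0).round(3))   # smooth over actions
print(soft_optimal_policy(q, alpha=1e-3).round(3))  # nearly one-hot on argmax
```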
One of the most important issues in online RL is the exploration-exploitation dilemma (Sutton & Barto, 1998): the agent must learn to exploit its accumulated knowledge of the task while exploring new states and actions. Many previous works address the exploration problem from various perspectives. In tasks with discrete action spaces, count-based methods (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017) motivate the policy to explore under-explored states. Curiosity-driven methods are investigated by Houthooft et al. (2016); Pathak et al. (2017); Burda et al. (2018a;b), where an intrinsic reward is designed as a supplement to the primal task reward for better exploration. Self-imitation approaches (Oh et al., 2018; Ecoffet et al., 2019; Sun et al., 2019) repeat successful trajectories but require extra assumptions on the environment. DIAYN and DADS (Eysenbach et al., 2018; Sharma et al., 2019) show that various skills can be developed even without a primal extrinsic reward. For continuous control tasks, OAC (Ciosek et al., 2019) improves SAC (Haarnoja et al., 2018) with informative action-space noise based on optimism in the face of uncertainty (OFU) (Brafman & Tennenholtz, 2002; Jaksch et al., 2010; Azar et al., 2017; Jin et al., 2018). GAC (Tessler et al., 2019) addresses the exploration issue with a richer functional class for the policy. The recent work of Rashid et al. (2020) addresses problematic pessimistic initialization for better exploration, but focuses on the tabular and discrete-control settings. In the work of Osband et al.
(2016; 2018), ensemble models with diverse initializations and randomized priors are used to capture the insight of bootstrap sampling and facilitate better value estimation, yet those methods are only applicable to discrete control tasks. Note that although reward shifting can be regarded as a special case of such random priors, it is distinguished by not changing the optimal Q-value, and it can be flexibly plugged into both continuous and discrete control algorithms. Random Network Distillation (RND) (Burda et al., 2018b) proposes to use the difference between a fixed neural network $\phi_1$ and a trainable network $\phi_2$ as the intrinsic reward, i.e.,

$$r_{\text{int}}(s, a) = |\phi_2(s, a) - \phi_1(s, a)|, \qquad (3)$$

where the outputs of both networks are activated by a sigmoid function and $\phi_2$ is optimized to approximate $\phi_1$ on the visited $(s, a)$ pairs. Hence, the value of $r_{\text{int}}(s, a)$ decays to 0 for state-action pairs that are visited frequently but remains high for seldom-visited pairs. In this work, we show that exploratory behavior can be achieved simply by shifting the reward function by a constant; our method is therefore orthogonal to those previous approaches in the sense that our intrinsic exploration behavior is driven by function approximation error. We demonstrate this insight by showing that RND, in its original design, is not suitable for developing exploratory behaviors in value-based methods, but integrating RND with a shifted reward function remarkably improves learning performance. 2.3 OFFLINE RL . Offline RL, also known as batch RL, focuses on problems where interaction with the environment is impossible and the policy can only be optimized from a logged dataset. In those tasks, a fixed buffer $B = \{(s_i, a_i, r_i, s'_i)\}_{i=1}^{N}$ is provided.
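The mechanics of Equation (3) can be illustrated with a toy NumPy version of RND. This is only a sketch under strong simplifying assumptions: both networks are single linear layers with sigmoid outputs (not the architecture of Burda et al.), and the learning rate and step counts are arbitrary; the point is that the intrinsic reward decays on frequently visited pairs as the predictor is regressed toward the fixed target.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

d = 8
w_target = rng.normal(size=d)   # phi_1: fixed, randomly initialized
w_pred = rng.normal(size=d)     # phi_2: trained on visited (s, a) pairs

def intrinsic_reward(x, w_pred):
    # Eq. (3): r_int = |phi_2(x) - phi_1(x)|, both sigmoid-activated
    return np.abs(sigmoid(x @ w_pred) - sigmoid(x @ w_target))

visited = rng.normal(size=(256, d))   # frequently visited pairs
before = intrinsic_reward(visited, w_pred).mean()
for _ in range(3000):                 # regress phi_2 toward phi_1 on visits
    pred = sigmoid(visited @ w_pred)
    tgt = sigmoid(visited @ w_target)
    grad = visited.T @ ((pred - tgt) * pred * (1 - pred)) / len(visited)
    w_pred -= 2.0 * grad
after = intrinsic_reward(visited, w_pred).mean()
print(after < before)  # True: r_int decays on frequently visited pairs
```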
As the agent in the offline RL setting cannot correct its potentially biased knowledge through interaction, the central issue is the extrapolation error (Fujimoto et al., 2018a) induced by distributional mismatch (Kumar et al., 2019). To address this issue, a series of algorithms optimize policy learning under a constraint of distributional similarity (Kumar et al., 2019; Wu et al., 2019; Siegel et al., 2020). Bharadhwaj et al. (2020) proposed CQL to solve offline RL tasks with a conservative value estimate. Specifically, CQL learns the Q-value estimate by jointly maximizing the Q-values of actions sampled from the behavior offline dataset and minimizing the Q-values of actions sampled from pre-defined prior distributions (e.g., a uniform distribution over the action space). As we will show in this work, an alternative way to obtain a lower bound on the optimal Q-value function is to use an appropriately shifted reward function. This idea leads to a direct application of our proposed framework in the offline setting. In general, a reward shift can be plugged into many distribution-matching offline-RL algorithms (Fujimoto et al., 2018a; Kumar et al., 2019; Wu et al., 2019; Siegel et al., 2020) to further improve performance with conservative Q-value estimation.
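A tabular caricature of why a positive shift yields conservatism offline (our own illustrative construction, not the paper's experiment): Q-values are zero-initialized and only the $(s, a)$ pairs present in the logged buffer are ever updated, so unseen actions keep their initial value of 0 while the shifted rewards raise the learned Q of in-dataset actions, and the greedy policy stays in-support.

```python
import numpy as np

# A 3-state, 2-action MDP; only action 0 appears in the logged buffer.
S, A, gamma, c = 3, 2, 0.9, 5.0           # c: positive reward shift
buffer = [(0, 0, 1.0, 1), (1, 0, 0.5, 2), (2, 0, 0.0, 0)]  # (s, a, r, s')

Q = np.zeros((S, A))                      # zero initialization
for _ in range(2000):                     # fitted Q iteration on the buffer
    for s, a, r, s_next in buffer:
        Q[s, a] = (r + c) + gamma * Q[s_next].max()

print(Q[:, 0])                   # in-dataset action: large positive values
print(Q[:, 1])                   # unseen action: stays at initialization, 0
print((Q.argmax(1) == 0).all())  # True: greedy policy prefers data support
```

With $c = 0$ and mostly small rewards, the gap between in-dataset and unseen actions would be far smaller, which is the sense in which the shift acts like a pessimistic prior on out-of-support actions.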
This paper studies the effectiveness of reward shifting in value-based deep reinforcement learning. In particular, it points out that (1) a positive reward shift is equivalent to pessimistic initialization of Q-values and thus leads to conservative exploitation, and (2) a negative reward shift is equivalent to optimistic initialization and hence leads to curiosity-driven exploration. Leveraging these insights, the paper proposes to modify existing RL algorithms by simply adding a reward shift to improve their performance. Empirically, the results show that reward shifting improves the baselines (the vanilla algorithms) under various scenarios, including offline RL, online continuous control, and online value-based curiosity-driven exploration.
SP:334c8f709711e6f373c581f427278bf171dcdc7c
This paper studies how reward shifting (to be precise, adding a constant bias term to the reward) affects value-based RL algorithms. It illustrates the idea that different reward shifts correspond to different initializations of the Q-networks. Though a linear reward shift does not change the optimal Q, the initialization does affect the learning process and may lead to different stationary points when equipped with function approximation. The paper further studies the idea in two settings: (1) offline RL, where a positive bias term assigns the pairs $(s,a)\in\mathcal{D}$ a large value, which helps conservative exploitation, and (2) online RL, where a negative bias term facilitates exploration. Empirical results verify the ideas. Reward shifting techniques have been studied in the bandit literature; this paper studies how they affect value-based RL algorithms with function approximation. This is an interesting and important topic, and this paper provides intuition and empirical results to help the community systematically understand the effects of reward shifting in both online and offline settings.
SP:334c8f709711e6f373c581f427278bf171dcdc7c
Reward Shifting for Optimistic Exploration and Conservative Exploitation
1 INTRODUCTION. While reward shaping is a well-established practice in reinforcement learning applications and has a long-standing history (Randløv & Alstrøm, 1998; Laud, 2004), specifying a reward that incentivizes the learning agent requires domain knowledge and a deep understanding of the task (Vinyals et al., 2019; Akkaya et al., 2019; Berner et al., 2019; Elbarbari et al., 2021). Even with careful design and tuning, learning with a shaped reward intended to accelerate learning may on the contrary hinder performance by inducing sub-optimal behaviors of the agent (Florensa et al., 2017; Plappert et al., 2018). Although Ng et al. (1999) theoretically show that the optimal policy remains unchanged under a special form of reward transformation, and the later work of Wiewiora et al. (2003) proposes a framework to guide policies with prior knowledge in the tabular setting, how these results accommodate recent Deep Reinforcement Learning (DRL) algorithms remains much less explored. In this work, we focus on the simplest case of reward shaping, the linear transformation, in value-based DRL (Sutton & Barto, 1998; Lillicrap et al., 2015; Mnih et al., 2015; Fujimoto et al., 2018b). We start by understanding how this specific kind of reward shaping works in value-based DRL algorithms. We show that reward shifting, as the simplest reward transformation, is equivalent to engineering the initialization of the Q-function estimate, extending the previous discovery of Wiewiora et al. (2003) to the function approximation setting. Based on this equivalence, we bring the key insight of this work: a positive reward shift leads to conservative exploitation, while a negative reward shift leads to curiosity-driven exploration.
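As a sanity check of the equivalence claim, the toy experiment below (not from the paper; a minimal sketch on a randomly generated two-state MDP) runs tabular value iteration and confirms that adding a constant bias $b$ to every reward shifts the optimal Q-values uniformly by $b/(1-\gamma)$ while leaving the greedy policy unchanged.

```python
# Sketch: tabular value iteration on a random 2-state, 2-action MDP, showing
# that a constant reward shift b moves Q* by b / (1 - gamma) everywhere and
# therefore does not change the greedy (argmax) policy.
import numpy as np

def value_iteration(R, P, gamma=0.9, iters=500):
    """R[s, a]: reward; P[s, a, s']: transition probs. Returns the optimal Q."""
    n_s, n_a = R.shape
    Q = np.zeros((n_s, n_a))
    for _ in range(iters):
        V = Q.max(axis=1)  # greedy backup
        Q = R + gamma * P.reshape(n_s * n_a, n_s).dot(V).reshape(n_s, n_a)
    return Q

rng = np.random.default_rng(0)
R = rng.normal(size=(2, 2))
P = rng.dirichlet(np.ones(2), size=(2, 2))  # random row-stochastic transitions

b, gamma = -5.0, 0.9
Q = value_iteration(R, P, gamma)
Q_shifted = value_iteration(R + b, P, gamma)

assert np.allclose(Q_shifted, Q + b / (1 - gamma), atol=1e-3)  # uniform shift
assert (Q.argmax(1) == Q_shifted.argmax(1)).all()              # same greedy policy
```

In the tabular limit the shift is therefore policy-neutral; the paper's point is that with function approximation and bootstrapping, the transient behavior induced by the shifted targets differs, which is what drives the conservative or exploratory effects.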
We demonstrate the application of this insight to three downstream tasks: (1) for offline RL, we show that conservative exploitation can lead to improved learning performance on top of off-the-shelf algorithms; (2) for the online RL setting, we show that multiple value functions with different reward-shifting constants can be used to trade off exploration and exploitation, thus improving learning efficiency; (3) finally, we introduce a simple yet crucial improvement over a prevailing curiosity-based exploration method, Random Network Distillation (Burda et al., 2018b), making it compatible with value-based DRL algorithms. We evaluate our idea on various tasks, including both continuous and discrete action-space control, resulting in substantial improvements over previous baselines. Our contributions can be summarized as follows: 1. we introduce the key insight that reward shifting is equivalent to diversified Q-value network initialization, which can be used to boost both curiosity-driven exploration and conservative exploitation; 2. motivated by this insight, we present three scenarios where reward shifting is beneficial, namely offline conservative exploitation, online sample-efficient RL, and curiosity-driven exploration; 3. we demonstrate the effectiveness of the proposed method integrated with off-the-shelf baselines on both continuous and discrete control tasks. 2 PRELIMINARIES. 2.1 ONLINE RL. We follow a standard MDP formulation in the online RL setting, i.e., $\mathcal{M} = \{\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \rho_0, \gamma, T\}$, where $\mathcal{S} \subset \mathbb{R}^d$ denotes the d-dimensional state space, $\mathcal{A}$ is the action space (note that for a discrete action space $|\mathcal{A}| < \infty$ and for continuous control $|\mathcal{A}| = \infty$), $\mathcal{T}: \mathcal{S} \times \mathcal{A} \mapsto \mathcal{S}$ is the transition dynamics, and $\mathcal{R}: \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ is the reward function. $\rho_0$ denotes the initial state distribution, i.e., $\rho_0 = p(s_0)$. $\gamma$ is the discount factor and T is the episode horizon.
Online RL considers the problem of learning a policy $\pi \in \Pi: \mathcal{S} \mapsto \Delta_\mathcal{A}$ (or $\pi \in \Pi: \mathcal{S} \mapsto \mathcal{A}$ with a deterministic policy class), such that the expected cumulative reward in the Markov decision process is maximized, i.e.,

$\pi = \arg\max_{\pi} \mathbb{E}_{a_t \sim \pi,\, s_{t+1} \sim \mathcal{T},\, s_0 \sim \rho_0} \left[ \sum_{t=0}^{T} \gamma^t r(s_t, a_t) \right]$. (1)

In the online RL setting, an agent normally learns through trial and error (Sutton & Barto, 1998), either in an on-policy paradigm (Schulman et al., 2015; 2017; Cobbe et al., 2021) or an off-policy manner (Mnih et al., 2015; Lillicrap et al., 2015; Wang et al., 2016; Haarnoja et al., 2018; Fujimoto et al., 2018b). In this work, we focus on off-policy value-based methods, which are in general more sample efficient. Specifically, our discussion assumes that policy learning is based on a learned Q-value function that approximates the cumulative reward an agent can gain in the remainder of an episode. The Q-value function is defined as $Q(s_t, a_t) = \mathbb{E}_{\pi, \mathcal{T}} \left[ \sum_{\tau=t}^{T} \gamma^{\tau-t} r(s_\tau, a_\tau) \right]$ and can be approximated through the Bellman operator $\mathcal{B}Q(s, a) = r(s, a) + \gamma \mathbb{E}\left[Q(s', a')\right]$. For value-based methods, the (soft-)optimal policy is then produced by

$\pi^*_\alpha(a|s) = \frac{\exp\left(\frac{1}{\alpha} Q^*(s, a)\right)}{\sum_{a'} \exp\left(\frac{1}{\alpha} Q^*(s, a')\right)}$, (2)

where $Q^*$ is the optimal Q-value function. We can also let the temperature parameter $\alpha$ approach 0 to recover the deterministic policy class; simplifying the notation, we have $\pi(s) = \arg\max_a Q^*(s, a)$. Algorithms like DPG (Silver et al., 2014) can be used to address the intractable analytical argmax that arises in continuous action spaces. We develop our work on top of the prevailing baseline algorithms DQN (Mnih et al., 2015), BCQ (Fujimoto et al., 2018a), and TD3 (Fujimoto et al., 2018b), and it is easy to extend to other baselines. 2.2 EXPLORATION AND THE CURIOSITY-DRIVEN METHODS.
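The soft-optimal policy of Eq. (2) can be sketched in a few lines; the toy Q-values below are illustrative only. As the temperature α approaches 0, the softmax collapses onto the argmax action, recovering the deterministic policy.

```python
# Sketch of Eq. (2): a softmax over Q-values with temperature alpha.
import numpy as np

def soft_policy(q_values, alpha):
    # pi(a|s) proportional to exp(Q(s,a)/alpha); subtracting the max before
    # exponentiating is numerically stable and leaves the distribution unchanged
    z = (q_values - q_values.max()) / alpha
    p = np.exp(z)
    return p / p.sum()

q = np.array([1.0, 1.5, 0.5])        # illustrative Q(s, .) for a single state
smooth = soft_policy(q, alpha=1.0)   # stochastic, favours action 1
greedy = soft_policy(q, alpha=1e-3)  # essentially one-hot on the argmax action
```

A larger α spreads probability mass over sub-optimal actions (one common source of exploration), while α close to 0 reproduces $\pi(s) = \arg\max_a Q^*(s,a)$.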
One of the most important issues in online RL is the exploration-exploitation dilemma (Sutton & Barto, 1998): the agent must learn to exploit its accumulated knowledge of the task while exploring new states and actions. Many previous works address the exploration problem from various perspectives. In tasks with discrete action spaces, count-based methods (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017) are proposed to motivate the policy to explore under-explored states more. Curiosity-driven methods are investigated by Houthooft et al. (2016); Pathak et al. (2017); Burda et al. (2018a;b), where an intrinsic reward is designed as a supplement to the primal task reward for better exploration. Self-imitation approaches (Oh et al., 2018; Ecoffet et al., 2019; Sun et al., 2019) repeat successful trajectories but require extra assumptions on the environment. The works on DIAYN and DADS (Eysenbach et al., 2018; Sharma et al., 2019) show that various skills can be developed even without the primal extrinsic reward. For continuous control tasks, OAC (Ciosek et al., 2019) improves SAC (Haarnoja et al., 2018) with informative action-space noise based on optimism in the face of uncertainty (OFU) (Brafman & Tennenholtz, 2002; Jaksch et al., 2010; Azar et al., 2017; Jin et al., 2018). GAC (Tessler et al., 2019) addresses the exploration issue with a richer functional class for the policy. The recent work of Rashid et al. (2020) addresses problematic pessimistic initialization for better exploration, yet it focuses on the specific settings of tabular and discrete control exploration. In the work of Osband et al.
(2016; 2018), ensemble models with diverse initializations and randomized priors are used to capture the insight of bootstrap sampling and facilitate better value estimation, yet those methods are only applicable to discrete control tasks. Note that although reward shifting can be regarded as a special case of these random priors, it is distinguished by not changing the optimal Q-value and by being flexible enough to be plugged into both continuous and discrete control algorithms. Random Network Distillation (RND) (Burda et al., 2018b) proposes to use the difference between a fixed neural network φ1 and a trainable network φ2 to represent the intrinsic reward, i.e.,

$r_{\text{int}}(s, a) = |\phi_2(s, a) - \phi_1(s, a)|$, (3)

where the outputs of both networks are activated by a sigmoid function, and φ2 is optimized to approximate φ1 for the visited (s, a) pairs. Hence, the value of $r_{\text{int}}(s, a)$ decays to 0 when such state-action pairs are visited frequently, but remains high for seldom-visited pairs. In this work, we show that exploratory behavior can be achieved simply by shifting the reward function by a constant; our method is thus orthogonal to those previous approaches in the sense that our intrinsic exploration behavior is driven by function approximation error. We demonstrate this insight by showing that RND, in its original design, is not suitable for value-based methods in developing exploratory behaviors, but that integrating RND with a shifted reward function can remarkably improve learning performance. 2.3 OFFLINE RL. Offline RL, also known as batch RL, focuses on problems where interaction with the environment is impossible and the policy can only be optimized from a logged dataset. In these tasks, a fixed buffer $\mathcal{B} = \{(s_i, a_i, r_i, s'_i)\}_{i \in [N]}$ is provided.
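A minimal sketch of the RND idea in Eq. (3), with simplifying assumptions not taken from the paper: it operates on states only, a small fixed random network stands in for φ1, and a linear predictor (rather than a sigmoid-activated network) stands in for φ2, trained by exact gradient descent on a single frequently visited state.

```python
# Sketch: intrinsic reward as prediction error against a fixed random target.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # fixed, randomly initialized target network
def phi1(s):
    return np.tanh(s @ W).sum()

v = np.zeros(4)                      # trainable predictor (linear, for simplicity)
def phi2(s):
    return s @ v

def r_int(s):                        # Eq. (3): prediction error as intrinsic reward
    return abs(phi2(s) - phi1(s))

visited = np.array([1.0, -0.5, 0.5, 1.0])  # a frequently visited state
novel = np.array([-1.0, 1.0, 0.5, -0.5])   # a state the agent has never seen
for _ in range(200):                 # fit phi2 to phi1 on the visited state only
    err = phi2(visited) - phi1(visited)
    v -= 0.1 * err * visited         # exact gradient step on 0.5 * err**2
```

After training, the intrinsic reward has decayed to (numerically) zero on the familiar state while remaining high on the novel one, which is the novelty signal RND exploits.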
As the agent in the offline RL setting cannot correct its potentially biased knowledge through interactions, the most important issue is to address the extrapolation error (Fujimoto et al., 2018a) induced by distributional mismatch (Kumar et al., 2019). To address this issue, a series of algorithms optimize policy learning under a constraint of distributional similarity (Kumar et al., 2019; Wu et al., 2019; Siegel et al., 2020). Bharadhwaj et al. (2020) proposed CQL to solve offline RL tasks with a conservative value estimation. Specifically, CQL learns the Q-value estimation by jointly maximizing the Q-values of actions sampled from the behavior offline dataset and minimizing the Q-values of actions sampled from pre-defined prior distributions (e.g., the uniform distribution over the action space). As we will show in this work, an alternative approach to obtaining a lower bound on the optimal Q-value function is to use an appropriately shifted reward function. This idea leads to a direct application of our proposed framework in the offline setting. In general, reward shifting can be plugged into many distribution-matching offline RL algorithms (Fujimoto et al., 2018a; Kumar et al., 2019; Wu et al., 2019; Siegel et al., 2020) to further improve performance with conservative Q-value estimation.
This paper presents the key observation that the linear transformation of reward shaping is equivalent to diversified Q-value network initialization. Based on this observation, the paper presents three scenarios where reward shifting can be beneficial: offline conservative exploitation, online sample-efficient RL, and curiosity-driven exploration. The algorithm is evaluated on both continuous and discrete control tasks.
Learning Symmetric Locomotion using Cumulative Fatigue for Reinforcement Learning
Modern deep reinforcement learning (DRL) methods allow simulated characters to learn complex skills such as locomotion from scratch. However, without further exploitation of domain-specific knowledge, such as motion capture data, finite state machines or morphological specifications, physics-based locomotion generation with DRL often results in unrealistic motions. One explanation for this is that present RL models do not estimate biomechanical effort; instead, they minimize instantaneous squared joint actuation torques as a proxy for the actual subjective cost of actions. To mitigate this discrepancy in a computationally efficient manner, we propose a method for mapping actuation torques to subjective effort without simulating muscles and their energy expenditure. Our approach is based on the Three Compartment Controller model, which relates variables such as maximum voluntary joint torques, recovery, and cumulative fatigue. We extend this method to sustained symmetric locomotion tasks for deep reinforcement learning using a Normalized Cumulative Fatigue (NCF) model. In summary, in this paper we present the first RL model to use biomechanical cumulative effort for full-body movement generation without the use of any finite state machines, morphological specifications or motion capture data. Our results show that the learned policies are more symmetric, periodic and robust compared to methods found in previous literature. 1 INTRODUCTION. It is a long-standing task in computer animation to make characters walk on their own. In this context, Deep Reinforcement Learning (DRL) has become a promising method for the automatic generation of movement controls for interactive, physics-based characters. However, in many cases the resulting motions are still not perceived as natural (Schulman et al., 2017). A common approach to mitigate this is to use motion capture or animation data (Peng et al.
, 2018; Bergamin et al., 2019; Won et al., 2020; Peng et al., 2021). Nevertheless, such approaches are limited to characters and movements for which data is readily available. Furthermore, obtaining qualitatively good data is oftentimes expensive, and many biomechanical constraints that are implicit in captured motions are not preserved during editing and retargeting, which is often required when data is limited. Another method for improving motion quality is to optimize for movement characteristics that shape the motion, such as symmetric gait properties (Yu et al., 2018; Abdolhosseini et al., 2019) or minimal energy consumption alongside task goals. While such methods overcome the need for motion capture data, the absence of biomechanical constraints may still lead to unwanted behaviour and unnatural torque patterns. Another group of methods comes from the biomechanical literature and includes musculoskeletal models and other forms of biological constraints. Previous works in this direction (Wang et al., 2012; Geijtenbeek et al., 2013; Lee et al., 2014) have explored biomimetic muscles and tendons to simulate a variety of human and animal motions. However, such muscle-based methods are usually computationally expensive, especially under a reinforcement learning framework (Kidziński et al., 2018). In this research we work towards a cumulative fatigue reward based on the biomechanical literature, to provide a computationally efficient way to include the motion constraints that are implicit in articulated figures driven by musculotendon units, in the context of locomotion. To improve quality we further incorporate movement characteristics, such as the gait symmetry enforcement methods of Abdolhosseini et al. (2019) and Yu et al. (2018). Contributions. In this paper we present the first RL model to use biomechanical cumulative effort for full-body movement generation.
We derive a Normalized Cumulative Fatigue (NCF) model suitable for reinforcement learning, based on the Three Compartment Controller (3CC) model by Xia & Frey Law (2008), and show that both models are equivalent under the assumption of sustained dynamic load conditions, but that the 3CC model fails when applied to pre-existing benchmark environments without further hyper-parameter tuning of the environment itself. Furthermore, the fatigue reward derived from our model more accurately reflects the embodied biomechanical nature of a simulated character compared to a reward based on instantaneous torque (Yu et al., 2018; Abdolhosseini et al., 2019; Schulman et al., 2017). We apply the cumulative fatigue model to a simulated humanoid for learning sustained symmetric locomotion and show that our method is robust and can generate more relaxed, natural and symmetric locomotion, especially in complex environments, without the need for motion capture data, finite state machines or morphological specifications, and without further hyper-parameter tuning of pre-existing environments. Additionally, the simplification from 3CC to NCF makes the method more easily adaptable to arbitrary characters that may not exhibit biologically accurate properties. 2 RELATED WORK. Recent developments in DRL have seen significant progress in solving high-dimensional continuous control problems. For example, Schulman et al. (2015a) have proposed Trust Region Policy Optimization (TRPO) and show that this method can be used to generate biped locomotion in a 2D planar space. Later, by combining TRPO with Generalized Advantage Estimation, Schulman et al. (2015b) have extended their humanoid locomotion task to three dimensions. Afterwards, they proposed Proximal Policy Optimization (PPO), which further improves the data efficiency of the algorithm (Schulman et al., 2017).
However, the resulting movements oftentimes still look jerky and unnatural. A common way to overcome these issues is to exploit domain-specific knowledge in various forms (Ramamurthy et al., 2019): Imitation Learning. Oftentimes, reference motion is used in this regard. Peng et al. (2017) introduce a two-level hierarchical controller to generate locomotion: the low-level controller is learned by mimicking reference locomotion data, while the high-level controller acts as a planner in order to respond to environment changes. However, this method is not capable of highly dynamic motions. Peng et al. (2018) address this issue and achieve significantly more natural-looking motions using imitation learning. More recently, they have extended their method with generative adversarial imitation learning (Peng et al., 2021). Other works in this direction include mimicking various features over a large dataset of movements with RL (Bergamin et al., 2019) or learning a mixture-of-experts model for various movements (Won et al., 2020). However, all these methods require readily available motion capture data as a prerequisite for training. Optimizing Movement Characteristics. Instead of using reference data as prior knowledge, another option is to exploit the characteristics of the specific types of motions to be generated, using hand-crafted features. In this regard, Yu et al. (2018) exploit the symmetry property of locomotion and propose the mirror symmetry loss. They combine it with energy optimization and add an external force that acts as a virtual assistant to learn symmetric locomotion from scratch. Abdolhosseini et al. (2019) emphasize the core idea behind the mirror symmetry loss, called the symmetric policy, and analyze its performance in terms of locomotion symmetry by combining it with DRL in a variety of ways to produce symmetric gaits.
They also show that symmetry enforcement methods improve gait symmetry in general, but cannot guarantee a symmetric gait. Furthermore, while such methods overcome the need for motion capture data, the absence of biomechanical constraints still leads to unwanted behaviour and less natural torque patterns. Musculoskeletal Models. A more biologically accurate approach for movement synthesis involves musculoskeletal models. Previous works in biomechanics (Taga, 1995; Anderson & Pandy, 2001; Geyer & Herr, 2010; Ackermann & van den Bogert, 2012; Ijspeert et al., 2007; Maufroy et al., 2008; Thelen et al., 2003) have developed musculoskeletal models that use biomimetic muscles and tendons to simulate a variety of human and animal motions. Controlling muscle-based virtual characters has also been explored in computer animation: from upper-body movements (Lee & Terzopoulos, 2006; Lee et al., 2009; 2018), to hand manipulation (Tsang et al., 2005; Sueda et al., 2008), and full-body locomotion (Wang et al., 2012; Geijtenbeek et al., 2013; Lee et al., 2014). However, such muscle-based methods are usually computationally expensive, especially under a reinforcement learning framework (Kidziński et al., 2018). To address this issue, Jiang et al. (2019) have proposed a technique to transform an optimal control problem formulated in the muscle actuation space into an equivalent problem in the joint actuation space, by training the model with control signals obtained from the muscle actuation space. The results show that as long as the model reflects the underlying biomechanical properties, it is not necessary to model muscle and tendon details explicitly in order to generate more realistic motions. However, a disadvantage of this work is that the mapping from the muscle-based actuation space to the torque-based space must be learned using reference data. Biomechanical Cumulative Fatigue.
Muscle fatigue is the failure to maintain the required or expected force (Edwards, 1981). In contrast to instantaneous fatigue, which does not take the endurance time into account, biomechanical fatigue assumes that fatigue accumulates over time, i.e., the longer a task is performed, the more fatiguing it becomes. Muscle fatigue is task-related and can vary across muscles and joints (Xia & Frey Law, 2008; Imbeau et al., 2006; Enoka & Duchateau, 2008; Jang et al., 2017; Frey Law & Avin, 2010; Frey-Law et al., 2012), which partially explains the challenging nature of representing muscle fatigue analytically. In this regard, Liu et al. (2002b) have proposed a motor-unit (MU)-based fatigue model which uses three muscle activation states to estimate perceived biomechanical fatigue: resting, activated and fatigued. The model is able to predict fatigue under static load conditions but fails under submaximal or dynamic conditions. Xia & Frey Law (2008) have proposed a Three-Compartment Controller (3CC) model which improves upon the model of Liu et al. (2002b) for dynamic load conditions by introducing a feedback controller term between the active and resting MU states based on torque, without the need for explicit modeling of muscle actuators. The 3CC model, as a torque-based model of muscle fatigue and recovery, has already shown effectiveness in motion analysis (Jang et al., 2017) and synthesis (Cheema et al., 2020). The follow-up work (Looft et al., 2018) has been successfully used in upper-body motion synthesis under a DRL framework by Cheema et al. (2020), without any motion capture data, for mid-air interaction analysis and synthesis. Inspired by their work, we extend it to full-body locomotion generation on arbitrary pre-existing characters and environments. 3 PRELIMINARIES: FATIGUE MODELING. Previous works in computer animation, robotics and standard RL (Yu et al., 2018; Abdolhosseini et al.
, 2019; Rajamäki & Hämäläinen, 2017; Peng et al., 2017) use instantaneous squared joint torques as a simple measure to minimize the effort of a given task. However, such a measure is not very biologically accurate, as it does not take into account the endurance time and thus the increasing amount of perceived fatigue the longer the task is sustained. The Three-Compartment Controller (3CC) model (Xia & Frey Law, 2008) is a cumulative fatigue model that assumes motor units (MUs) to be in one of three possible states: 1) active ($M_A$): MUs contributing to the task; 2) fatigued ($M_F$): MUs that have become fatigued and temporarily cannot contribute; and 3) resting ($M_R$): inactive MUs not currently required for the task. These are usually expressed as a percentage of maximum voluntary contraction (%MVC), which can practically be expressed as a percentage of maximum voluntary force (%MVF) or torque (%MVT). Additionally, control theory is used to obtain behaviour matching muscle physiology, i.e., the force production of active MUs should decay (fatigue) over time when a sufficiently large constant force is demanded. Such a cumulative fatigue model thus gives a more accurate representation of perceived fatigue. This is expressed by the following system of equations:

$\frac{\partial M_A}{\partial t} = C(t) - F \cdot M_A$ (1a)

$\frac{\partial M_R}{\partial t} = -C(t) + R \cdot M_F$ (1b)

$\frac{\partial M_F}{\partial t} = F \cdot M_A - R \cdot M_F$ (1c)

where F and R denote the fatigue and recovery coefficients, and C(t) is a bounded proportional controller that produces the required force, i.e., the target load (TL), by controlling the sizes of $M_A$ and $M_R$:

$C(t) = \begin{cases} L_D \cdot (TL - M_A) & \text{if } M_A < TL \text{ and } M_R > TL - M_A \\ L_D \cdot M_R & \text{if } M_A < TL \text{ and } M_R \le TL - M_A \\ L_R \cdot (TL - M_A) & \text{if } M_A \ge TL \end{cases}$ (2)

$L_D$ and $L_R$ are the muscle force development and relaxation factors, which describe the sensitivity towards the target load (Xia & Frey Law, 2008). The behavior of the 3CC model at different load conditions (TL) can be seen in Fig. 1.
If the conditions can no longer be met due to high fatigue and too few rested MUs being available, $M_A$ starts to decline (Fig. 1b). From Eq. 2 we can conclude that the target load can be held iff $M_A + M_R \ge TL$. Liu et al. (2002a) have shown that the lower bound of $M_A + M_R$ is $\frac{R}{F+R}$, i.e., the target load TL can be held indefinitely iff $TL \le \frac{R}{F+R}$ (Fig. 1c), in which case $TL = M_A$.
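The 3CC dynamics (Eqs. 1a-1c and 2) and the endurance threshold above can be sketched with forward-Euler integration; the coefficients F, R, $L_D$ and $L_R$ below are illustrative, not the paper's calibrated values, and compartments are kept as fractions of total capacity rather than %MVC. A target load below R/(F+R) is sustained indefinitely, while a higher one collapses toward that limit.

```python
# Sketch: forward-Euler simulation of the 3CC fatigue model.
def controller(MA, MR, TL, LD=10.0, LR=10.0):
    # Eq. (2): bounded proportional controller producing the target load TL
    if MA < TL and MR > TL - MA:
        return LD * (TL - MA)
    if MA < TL:                        # MA < TL and MR <= TL - MA
        return LD * MR
    return LR * (TL - MA)              # MA >= TL

def simulate_3cc(TL, F=0.01, R=0.002, dt=0.1, steps=20000):
    MA, MR, MF = 0.0, 1.0, 0.0         # all MUs start rested (fractions of capacity)
    for _ in range(steps):
        C = controller(MA, MR, TL)
        dMA = C - F * MA               # Eq. (1a)
        dMR = -C + R * MF              # Eq. (1b)
        dMF = F * MA - R * MF          # Eq. (1c)
        MA, MR, MF = MA + dt * dMA, MR + dt * dMR, MF + dt * dMF
    return MA, MR, MF

limit = 0.002 / (0.01 + 0.002)         # endurance bound R / (F + R), about 0.167
MA_low, _, _ = simulate_3cc(TL=0.10)   # below the bound: load held, MA stays at TL
MA_high, _, _ = simulate_3cc(TL=0.50)  # above the bound: MA collapses below TL
```

Note that the three derivatives in Eqs. 1a-1c sum to zero, so the update conserves $M_A + M_R + M_F$, which is what makes the $\frac{R}{F+R}$ endurance bound well defined.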
The paper proposes to use a Normalized Cumulative Fatigue (NCF)-based reward to learn symmetric locomotion with deep RL. The motivation is that most prior work on locomotion synthesis does not estimate cumulative biomechanical effort and only minimizes instantaneous joint torques. The paper derives the NCF reward from the Three Compartment Controller (3CC) fatigue model and uses the reward alongside a symmetry loss proposed in prior work. The main contributions claimed in the paper are (1) it is the first work to use biomechanical effort to improve full-body movement generation, and (2) experiments show that the method can generate more natural and symmetric locomotion compared to the baselines.
SP:7aebf6088a91b8e1af13b2133bf4431fba68ca09
Learning Symmetric Locomotion using Cumulative Fatigue for Reinforcement Learning
Modern deep reinforcement learning ( DRL ) methods allow simulated characters to learn complex skills such as locomotion from scratch . However , without further exploitation of domain-specific knowledge , such as motion capture data , finite state machines or morphological specifications , physics-based locomotion generation with DRL often results in unrealistic motions . One explanation for this is that present RL models do not estimate biomechanical effort ; instead , they minimize instantaneous squared joint actuation torques as a proxy for the actual subjective cost of actions . To mitigate this discrepancy in a computationally efficient manner , we propose a method for mapping actuation torques to subjective effort without simulating muscles and their energy expenditure . Our approach is based on the Three Compartment Controller model , in which the relationships of variables such as maximum voluntary joint torques , recovery , and cumulative fatigue are present . We extend this method for sustained symmetric locomotion tasks for deep reinforcement learning using a Normalized Cumulative Fatigue ( NCF ) model . In summary , in this paper we present the first RL model to use biomechanical cumulative effort for full-body movement generation without the use of any finite state machines , morphological specification or motion capture data . Our results show that the learned policies are more symmetric , periodic and robust compared to methods found in previous literature . 1 INTRODUCTION . It is a long standing task in computer animation to make characters walk on their own . In this context , Deep Reinfocement Learning ( DRL ) has become a promising method for automatic generation of movement controls for interactive , physics-based characters . However , in many cases the resulting motions are still not perceived as natural ( Schulman et al. , 2017 ) . A common approach to mitigate this is to use motion capture or animation data ( Peng et al. 
, 2018 ; Bergamin et al. , 2019 ; Won et al. , 2020 ; Peng et al. , 2021 ) . Nevertheless , such approaches are limited to characters and movements to which data is readily available . Furthermore , obtaining qualitatively good data is oftentimes expensive , and many biomechanical constrains that are implicit in captured motions are not preserved during editing and retargeting – which is often required when data is limited . Another method for improving motion quality is to optimize for movement characteristics that shape the motion such as symmetric gait properties ( Yu et al. , 2018 ; Abdolhosseini et al. , 2019 ) or minimal energy consumption and task goals . While such methods overcome the need of motion capture data , the absence of biomechanical constraints still may lead to unwanted behaviour and unnatural torque patterns . Another group of methods that have emerged come from bio-mechanical literature , which include musculoskeletal models and other forms of biological constraints . Previous works ( Wang et al. , 2012 ; Geijtenbeek et al. , 2013 ; Lee et al. , 2014 ) in this direction have explored biomimetic muscles and tendons to simulate a variety of human and animal motions . However , such musclebased methods are usually computationally expensive , especially under a reinforcement learning framework ( Kidziński et al. , 2018 ) . In this research we work towards developing a cumulative fatigue reward based on biomechanical literature to account for a computationally efficient way to include motion constraints that are implicit in articulated figures driven by musculotendon units , in the context of locomotion . To improve on quality we further incorporate movement characteristics , such as gait symmetry enforcement methods by Abdolhosseini et al . ( 2019 ) and Yu et al . ( 2018 ) . Contributions . In this paper we present the first RL model to use biomechanical cumulative effort for full-body movement generation . 
We derive a Normalized Cumulative Fatigue ( NCF ) model suitable for reinforcement learning based on the Three Compartment Controller ( 3CC ) model by Xia & Frey Law ( 2008 ) and show that both models are equivalent under the assumption of sustained dynamic load conditions , but that the 3CC model fails when applied to pre-existing benchmark environments without further hyper-parameter tuning of the environment itself . Furthermore , the fatigue reward derived from our model more accurately reflects the embodied biomechanical nature of a simulated character when compared to a reward based on instantaneous torque ( Yu et al. , 2018 ; Abdolhosseini et al. , 2019 ; Schulman et al. , 2017 ) . We apply the cumulative fatigue model to a simulated humanoid for learning sustained symmetric locomotion and show that our method is robust and can generate more relaxed , natural and symmetric locomotion , especially in complex environments – without the need for motion capture data , finite state machines or morphological specifications , and without further hyper-parameter tuning of pre-existing environments . Additionally , the simplification from 3CC to NCF allows the method to be more easily adapted to arbitrary characters that may not exhibit biologically accurate properties . 2 RELATED WORK . Recent developments in DRL have seen significant progress in solving high-dimensional continuous control problems . For example , Schulman et al . ( 2015a ) have proposed Trust Region Policy Optimization ( TRPO ) and show that this method can be used to generate biped locomotion in a 2D planar space . Later , by combining TRPO with Generalized Advantage Estimation , Schulman et al . ( 2015b ) have extended their humanoid locomotion task to three dimensions . Afterwards , they proposed Proximal Policy Optimization ( PPO ) , which further improves the data efficiency of the algorithm ( Schulman et al. , 2017 ) .
However , the resulting movements oftentimes still look jerky and unnatural . A common way to overcome these issues is to exploit domain-specific knowledge in various forms ( Ramamurthy et al. , 2019 ) : Imitation Learning . Oftentimes , reference motion is used in this regard . Peng et al . ( 2017 ) introduce a two-level hierarchical controller to generate locomotion : the low-level controller is learned by mimicking the reference locomotion data ; the high-level controller acts as a planner in order to respond to environment changes . However , this method is not capable of highly dynamic motions . Peng et al . ( 2018 ) address this issue and achieve significantly more natural-looking motions using imitation learning . More recently , they have extended their method with generative adversarial imitation learning ( Peng et al. , 2021 ) . Other works in this direction include mimicking various features over a large dataset of movements with RL ( Bergamin et al. , 2019 ) or learning a mixture-of-experts model for various movements ( Won et al. , 2020 ) . However , all these methods require readily available motion capture data as a prerequisite for training . Optimizing Movement Characteristics . Instead of using reference data as prior knowledge , another option is to exploit the characteristics of the specific types of motions to be generated using hand-crafted features . In this regard , Yu et al . ( 2018 ) exploit the symmetry property of locomotion and propose the mirror symmetry loss . They combine it with energy optimization and add an external force that acts as a virtual assistant to learn symmetric locomotion from scratch . Abdolhosseini et al . ( 2019 ) build on the core idea behind the mirror symmetry loss , called the symmetric policy , and analyze its performance in terms of locomotion symmetry by combining it with DRL in a variety of ways to produce symmetric gaits .
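The mirror symmetry loss discussed above can be sketched numerically. This is a minimal sketch, not the authors' implementation: the index permutations and sign flips that define the character's left/right mirroring are hypothetical placeholders, and the loss form `||pi(s) - M_a(pi(M_s(s)))||^2` follows the spirit of Yu et al. (2018).

```python
import numpy as np

def mirror(v, perm, signs):
    """Left/right mirroring as an index permutation plus sign flips (hypothetical convention)."""
    return signs * v[perm]

def mirror_symmetry_loss(policy, states, s_perm, s_signs, a_perm, a_signs):
    """Sketch of a mirror symmetry loss: the action taken in a mirrored state
    should be the mirror of the action taken in the original state,
        L = mean_s || pi(s) - M_a( pi( M_s(s) ) ) ||^2 .
    """
    total = 0.0
    for s in states:
        a = policy(s)
        a_mirrored = mirror(policy(mirror(s, s_perm, s_signs)), a_perm, a_signs)
        total += float(np.sum((a - a_mirrored) ** 2))
    return total / len(states)
```

A perfectly symmetric policy (one that commutes with the mirroring) incurs zero loss, while an asymmetric policy is penalized; in practice this term is added to the RL objective alongside the task reward.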
Abdolhosseini et al . ( 2019 ) also show that symmetry enforcement methods improve gait symmetry in general , but cannot guarantee a symmetric gait . Furthermore , while such methods overcome the need for motion capture data , the absence of biomechanical constraints still leads to unwanted behaviour and less natural torque patterns . Musculoskeletal Models . A more biologically accurate approach for movement synthesis involves musculoskeletal models . Previous works ( Taga , 1995 ; Anderson & Pandy , 2001 ; Geyer & Herr , 2010 ; Ackermann & van den Bogert , 2012 ; Ijspeert et al. , 2007 ; Maufroy et al. , 2008 ; Thelen et al. , 2003 ) in biomechanics have developed musculoskeletal models that use biomimetic muscles and tendons to simulate a variety of human and animal motions . Controlling muscle-based virtual characters has also been explored in computer animation – from upper body movements ( Lee & Terzopoulos , 2006 ; Lee et al. , 2009 ; 2018 ) , to hand manipulation ( Tsang et al. , 2005 ; Sueda et al. , 2008 ) , and full-body locomotion ( Wang et al. , 2012 ; Geijtenbeek et al. , 2013 ; Lee et al. , 2014 ) . However , such muscle-based methods are usually computationally expensive , especially under a reinforcement learning framework ( Kidziński et al. , 2018 ) . To address this issue , Jiang et al . ( 2019 ) have proposed a technique to transform an optimal control problem formulated in the muscle actuation space to an equivalent problem in the joint-actuation space by training the model with control signals obtained from the muscle actuation space . The result shows that as long as the model can reflect the underlying biomechanical properties , it is not necessary to model muscle and tendon details explicitly in order to generate more realistic motions . However , a disadvantage of this work is that the mapping from the muscle-based actuation space to the torque-based space must be learned using reference data . Biomechanical Cumulative Fatigue .
Muscle fatigue is the failure to maintain the required or expected force ( Edwards , 1981 ) . In contrast to instantaneous fatigue , which does not take the endurance time into account , biomechanical fatigue assumes fatigue to accumulate over time – i.e . the longer a task is performed , the more fatiguing it becomes . Muscle fatigue is task-related and can vary across muscles and joints ( Xia & Frey Law , 2008 ; Imbeau et al. , 2006 ; Enoka & Duchateau , 2008 ; Jang et al. , 2017 ; Frey Law & Avin , 2010 ; Frey-Law et al. , 2012 ) , which partially explains the challenging nature of representing muscle fatigue analytically . In this regard , Liu et al . ( 2002b ) have proposed a motor unit ( MU ) -based fatigue model which uses three muscle activation states to estimate perceived biomechanical fatigue : resting , activated and fatigued . The model is able to predict fatigue at static load conditions but fails at submaximal or dynamic conditions . Xia & Frey Law ( 2008 ) have proposed a Three-Compartment Controller ( 3CC ) model which improves upon the model of Liu et al . ( 2002b ) for dynamic load conditions by introducing a feedback controller term between the active and rest MU states based on torque , without the need for explicit modeling of muscle actuators . The 3CC model , as a torque-based model of muscle fatigue and recovery , has already shown effectiveness in motion analysis ( Jang et al. , 2017 ) and synthesis ( Cheema et al. , 2020 ) . The follow-up work ( Looft et al. , 2018 ) has been successfully used in upper body motion synthesis under a DRL framework by Cheema et al . ( 2020 ) without any motion capture data for mid-air interaction analysis and synthesis . Inspired by their work , we extend it to full-body locomotion generation on arbitrary pre-existing characters and environments . 3 PRELIMINARIES : FATIGUE MODELING . Previous works in computer animation , robotics and standard RL ( Yu et al. , 2018 ; Abdolhosseini et al.
, 2019 ; Rajamäki & Hämäläinen , 2017 ; Peng et al. , 2017 ) use instantaneous squared joint torques as a simple measure to minimize the effort of a given task . However , such a measure is not very biologically accurate , as it does not take into account the endurance time and thus the increasing amount of perceived fatigue the longer the task is sustained . The Three-Compartment Controller ( 3CC ) model ( Xia & Frey Law , 2008 ) is a cumulative fatigue model that assumes motor units ( MUs ) to be in one of three possible states : 1 ) active ( $M_A$ ) – MUs contributing to the task , 2 ) fatigued ( $M_F$ ) – MUs without activation , and 3 ) resting ( $M_R$ ) – inactive MUs not required for the task . These are usually expressed as a percentage of maximum voluntary contraction ( % MVC ) , which can practically be expressed as a percentage of maximum voluntary force ( % MVF ) or torque ( % MVT ) . Additionally , control theory is used to obtain behaviour matching muscle physiology , i.e . active MUs ' force production should decay ( fatigue ) over time when enough constant force is used . Such a cumulative fatigue model thus gives us a more accurate representation of perceived fatigue . This is expressed by the following system of equations :

$$\frac{\partial M_A}{\partial t} = C(t) - F \cdot M_A \quad (1a)$$
$$\frac{\partial M_R}{\partial t} = -C(t) + R \cdot M_F \quad (1b)$$
$$\frac{\partial M_F}{\partial t} = F \cdot M_A - R \cdot M_F \quad (1c)$$

where $F$ and $R$ denote the fatigue and recovery coefficients . $C(t)$ is a bounded proportional controller that produces the required force , i.e . the target load ( $TL$ ) , by controlling the sizes of $M_A$ and $M_R$ :

$$C(t) = \begin{cases} L_D \cdot (TL - M_A) & \text{if } M_A < TL \text{ and } M_R > TL - M_A \\ L_D \cdot M_R & \text{if } M_A < TL \text{ and } M_R \le TL - M_A \\ L_R \cdot (TL - M_A) & \text{if } M_A \ge TL \end{cases} \quad (2)$$

$L_D$ and $L_R$ are the muscle force development and relaxation factors , which describe the sensitivity towards the target load ( Xia & Frey Law , 2008 ) . The behavior of the 3CC model at different load conditions ( $TL$ ) can be seen in Fig . 1 .
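The 3CC dynamics above can be sketched with a simple forward-Euler integration of Eqs. 1a–1c under the controller of Eq. 2. The coefficient values `F`, `R`, `LD`, `LR`, the time step, and the fully rested initial state are illustrative placeholders, not values from the paper or the 3CC literature:

```python
def controller(MA, MR, TL, LD, LR):
    """Bounded proportional controller C(t) of Eq. 2."""
    if MA < TL and MR > TL - MA:
        return LD * (TL - MA)
    elif MA < TL:  # and MR <= TL - MA
        return LD * MR
    else:  # MA >= TL
        return LR * (TL - MA)

def simulate_3cc(TL, F=0.01, R=0.002, LD=10.0, LR=10.0, dt=0.01, T=600.0):
    """Forward-Euler integration of the 3CC ODEs (Eqs. 1a-1c).
    States are fractions of MVC; the character starts fully rested."""
    MA, MR, MF = 0.0, 1.0, 0.0
    history = []
    for _ in range(int(T / dt)):
        C = controller(MA, MR, TL, LD, LR)
        dMA = C - F * MA          # Eq. 1a
        dMR = -C + R * MF         # Eq. 1b
        dMF = F * MA - R * MF     # Eq. 1c
        MA += dt * dMA
        MR += dt * dMR
        MF += dt * dMF
        history.append((MA, MR, MF))
    return history
```

Note that the three derivatives sum to zero, so the compartments always conserve the total MU pool; for a submaximal load the active compartment tracks the target load while the fatigued compartment slowly fills.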
If these conditions can no longer be met due to high fatigue and not enough rested MUs being available , $M_A$ starts to decline ( Fig . 1b ) . From Eq . 2 we can conclude that the target load can be held iff $M_A + M_R \ge TL$ . Liu et al . ( 2002a ) have shown that the lower bound of $M_A + M_R$ is $\frac{R}{F+R}$ , i.e . the target load $TL$ can be held indefinitely iff $TL \le \frac{R}{F+R}$ ( Fig . 1c ) , in which case $M_A = TL$ .
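The indefinite-hold condition can also be obtained analytically: setting the derivatives of Eq. 1 to zero with $M_A = TL$ gives $M_F = F \cdot TL / R$, and the resting compartment $M_R = 1 - TL - F \cdot TL / R$ is non-negative exactly when $TL \le R/(F+R)$. A small sketch (the coefficient values in the check are illustrative):

```python
def endurance_limit(F, R):
    """Largest target load that can be sustained indefinitely: R / (F + R)."""
    return R / (F + R)

def sustained_steady_state(TL, F, R):
    """Steady state of Eq. 1 under a sustained submaximal load:
    dMF/dt = 0 with MA = TL gives MF = F*TL/R; MR absorbs the rest.
    MR >= 0 holds exactly when TL <= R/(F+R), recovering the bound."""
    MA = TL
    MF = F * TL / R
    MR = 1.0 - MA - MF
    return MA, MR, MF
```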
This paper presents a Reinforcement Learning (RL) model that uses cumulative effort to improve full-body movement generation. The use of a cumulative fatigue reward is an appropriate way of including biomechanical motion constraints when learning full-body movements. The proposed model uses a so-called Normalized Cumulative Fatigue (NCF) model, which is based on the Three Compartment Controller (3CC) model proposed by Xia & Frey Law (2008). The main contribution of this work is the proposed fatigue reward.
Learning Symmetric Locomotion using Cumulative Fatigue for Reinforcement Learning
The paper proposes an improvement to existing locomotion learning by incorporating a bio-inspired fatigue model, the Three Compartment Controller (3CC) model. The fatigue model computes, for each motor and each actuation direction, a fatigue value that penalizes the motor for exerting high force over a long period of time. The resulting model is used to construct a reward function that replaces the traditional energy penalty term in locomotion learning. Results show that the proposed model achieves improvements in motion symmetry on average compared to methods without it.
A Simple and Debiased Sampling Method for Personalized Ranking
1 INTRODUCTION . Offering personalized service to users stands out as an important task , for example , ranking the top-N items that a user may like . Solutions to such problems are usually designed on a bipartite graph G = ( V , E ) , where the vertex set V = U ∪ I contains the user set U and the item set I , and E denotes the edge set . Each edge eui ∈ E denotes an observed interaction between user u and item i. Users ' preference on items is modeled by pairwise loss functions , with the assumption that items with interactions from a user are of more interest to this user than those without interactions . The loss function thus involves a pairwise comparison between an observed ( positive ) edge eui ∈ E and an unobserved ( negative ) edge euj /∈ E. The optimization process thus suffers from the class-imbalance issue , because in practical scenarios the number of observed ( positive ) edges is always much smaller than that of the unobserved ( negative ) ones . The imbalance between eui ∈ E and euj /∈ E can be regarded as the edge-level imbalance issue . Pioneering works dealing with the class-imbalance problem can be categorized into two main families : using stationary sampling or using dynamic sampling . Approaches in the former family usually address the edge-level class-imbalance issue by under-sampling negative edges from a pre-defined stationary distribution ( Rendle et al. , 2009 ; Rendle and Freudenthaler , 2014 ) , or by over-sampling positive edges created through social connections ( Chen et al. , 2019 ) . However , they ignore that the class-imbalance issue also exists on the vertex side , because each vertex can appear in both positive and negative edges . Through some basic statistical analysis , we find that vertex degree has a positive impact on the vertex-level imbalance problem .
If we sample negative instances from a stationary distribution , popular vertexes with degree greater than the average vertex degree are under-sampled as negative samples , while “ cold-start ” vertexes with degree less than the average are over-sampled . Moreover , such samplers cannot capture the dynamics of the relative ranking order between positive and negative samples , as shown in Figure 1 ( a ) and 1 ( b ) . From Figure 1 ( a ) we can see that it is easy to find an order-violated item for pairwise loss optimization at the initial state , because there are many negative items ranking higher than the positive item . However , as the learning process moves forward , a massive number of negative items are distinguished well from the positive item , as shown in Figure 1 ( b ) . At this time , a large portion of the negative items are useless for pairwise loss optimization , because they already rank lower than the positive item . Recently , dynamic sampling approaches ( Weston et al. , 2011 ; Yuan et al. , 2016 ; Wang et al. , 2020 ) have shown their significant contribution to negative instance selection by considering the hardness of a sampled negative . However , existing dynamic methods have several drawbacks : 1 ) they lack a systematic understanding of their connection to the class-imbalance issue , leading to sampling candidates only from a uniform distribution ; 2 ) they have to find a violated negative sample by searching through massive numbers of candidates , causing high computational complexity ( over ten times higher than sampling from a stationary distribution ) . In this work , we aim at finding clues that can help to design a faster dynamic negative sampler for the personalized ranking task . We find that sampling from a uniform distribution can be regarded as a random walk with a transition probability matrix P over arbitrary node pairs in a fully connected item-item graph , as presented in Figure 1 ( c ) .
Intuitively , nodes ( items ) differ in their nature ( e.g. , degree , betweenness ) . A biased transition matrix P∗ might be more helpful for finding the desired negative items than a uniformly random P , as shown in Figure 1 ( d ) . Through theoretical analysis , we find that one of the potential solutions for deriving the biased transition process and walking with a biased transition matrix P∗ is to tackle the class-imbalance issue . To achieve this goal , it is essential to first dissect the impact of the class-imbalance issue . More specifically , we mainly investigate three questions : Q1 ) how is the class-imbalance problem reflected in current sampling-based pairwise ranking approaches ? Q2 ) what is the impact of the imbalance problem on learning an optimal pairwise ranking model ? Q3 ) how can we resolve the class-imbalance issue and design a faster dynamic sampling approach to boost ranking quality ? We answer the above questions with theoretical analysis in Section 3 . The brief summary is : for Q1 , if negative instances are sampled from a uniform distribution ( e.g. , in ( Rendle et al. , 2009 ) ) , vertexes with high degrees are under-sampled as negative samples , while “ cold-start ” vertexes with low degrees are over-sampled . For Q2 , we theoretically show that the class-imbalance issue results in a frequency-gathering phenomenon , where the learned embeddings of items with close popularity gather together , causing gradient vanishing at the output loss . Based on the above insights , for Q3 , we propose an efficient Vital Negative Sampler ( VINS ) , which explicitly considers both the edge- and vertex-level class-imbalance issues . In summary , our contributions in this work are as follows : • We point out the edge- and vertex-level imbalance problems raised by the pairwise learning loss , and provide theoretical analysis that the imbalance issue can lead to a frequency-gathering phenomenon and vanishing gradients at the output loss .
• To address the class-imbalance and vanishing gradient problems , we design an adaptive negative sampling method with a reject probability based on items ' degree differences . • Thorough experimental results demonstrate that the proposed method can speed up the training procedure by 30 % to 50 % for shallow and deep ranking models , compared with state-of-the-art dynamic sampling methods . 2 RELATED WORK . Pairwise comparison usually happens between an observed ( positive ) and an unobserved ( negative ) edge , when the interactions between users and items are represented as a bipartite graph . Such an idea results in a serious class-imbalance issue due to the pairwise comparison between a small set of interacted items ( positive , as the minority class ) and a very large set of all remaining items ( negative , as the majority class ) . Pioneering work proposed in ( Rendle et al. , 2009 ) presented an under-sampling approach via uniformly sampling a negative edge for a given positive edge . Following the idea in ( Rendle et al. , 2009 ) , ( Zhao et al. , 2014 ) proposed an over-sampling method by employing social theory to create synthetic positive instances . ( Ding et al. , 2019 ) augmented pairwise samples with view data . However , these sampling strategies ignore the fact that each item has its own properties , e.g. , degree , betweenness . ( Rendle and Freudenthaler , 2014 ) considered vertex properties and proposed to sample a negative instance from an exponential function over the order of vertex degree . Despite the effectiveness and efficiency of sampling from a stationary distribution ( e.g. , uniform , or a power function over vertex popularity ) , these methods ignore the impact of the relative order between positive and negative samples during the learning process , as shown in Figure 1 ( a ) and 1 ( b ) . Recently , dynamic sampling approaches ( Weston et al. , 2011 ; Yuan et al. , 2016 ; Chen et al.
, 2018), which aim at estimating the rank order of positive samples, have shown the significant benefit of selecting vital negative instances. As pioneering work, Weston et al. (2011) proposed the WARP loss, which pays less attention to well-learned positives and more attention to low-ranked ones. However, as training iterations grow, sampling a violating negative item becomes very difficult (Hsiao et al., 2014). WARP has inspired many recent works that estimate a rank-aware weight from a uniform distribution. As the state-of-the-art variant of the WARP loss, LFM-W (Yuan et al., 2016) extends WARP with a normalization term. However, estimating the rank-aware weight from a uniform distribution means LFM-W needs many steps to find a violating sample. Moreover, LFM-W may find sub-optimal negative samples because it does not consider the class-imbalance issue. Beyond ranking order, Wang et al. (2019) cast dynamic sampling as a min-max game. VINS also inherits the basic ideas of WARP, but modifies the target distribution and estimates it through an importance sampling method, after theoretically investigating the class-imbalance issue and its potential influence. LFM-W can be regarded as a special case of the proposed VINS under a proper setting. 3 CLASS-IMBALANCE ANALYSIS. Let G = (V, E) denote a user-item interaction graph, where the vertex set V = U ∪ I contains users U and items I, and e_ui ∈ E denotes an observed interaction (e.g., a click or purchase) between user u and item i. The relationship between user u and item i can be measured by a factorization-based score x_ui = P_u · P_i, where P_u = f(u|θ_u) ∈ R^d and P_i = g(i|θ_i) ∈ R^d are the representations of user u and item i generated by deep neural networks f(·) and g(·) with parameters θ_u and θ_i, respectively.
To learn vertex representations that can accurately infer users' preferences on items, pairwise ranking approaches usually regard the observed edges e_ui as positive pairs and all other combinations e_uj ∈ (U × I \ E) as negative ones. A set of triplets D = {(u, i, j) | e_ui ∈ E, e_uj ∈ (U × I \ E)} can then be constructed based on the general assumption that the induced relevance of an observed user-item pair should be larger than that of an unobserved one, i.e., x_ui > x_uj. To model this contrastive relation, one popular solution is the pairwise loss function

L(G) = Σ_{(u,i,j)∈D} w_ui · ℓ_uij(x_ui, x_uj),   (1)

where ℓ_uij(·) can be a hinge, logistic, or cross-entropy function that raises an effective loss for any triplet with an incorrect prediction (i.e., x_uj > x_ui) that violates the pairwise assumption, and w_ui is a weight factor reflecting how hard the given comparison sample is to discriminate. The optimization of Equation (1) involves an extreme class-imbalance issue because, in practical scenarios, the number of unobserved interactions e_uj ∉ E (negatives) is usually vastly larger than that of the observed e_ui ∈ E (positives). The imbalance between e_ui ∈ E and e_uj ∉ E in the pairwise loss can be regarded as the edge-level imbalance issue. Since the class-imbalance problem is caused by the majority of negative edges, under-sampling the majority is a practical solution (Rendle et al., 2009; Mikolov et al., 2013). Take the most popular strategy, under-sampling negative edges, as an example. For a given positive edge e_ui ∈ E, we can sample a negative edge by fixing user u ∈ U and then sampling one item j ∈ I with e_uj ∉ E, with replacement, from a static distribution π = {π(i), i ∈ I}, where π(i) = d_i^β, β ∈ [0, 1], is a weight function of the item degree d_i. We can then optimize the objective in Equation (1) with the constructed pairwise samples D̃ ⊂ D.
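As a concrete illustration, the static under-sampling strategy above (draw j with probability π(i) ∝ d_i^β, rejecting observed items) can be sketched in a few lines. The function names and toy data are illustrative, not from the paper:

```python
import numpy as np

def make_degree_sampler(degrees, beta=0.5, seed=0):
    """Static negative sampler with pi(i) proportional to d_i ** beta."""
    rng = np.random.default_rng(seed)
    weights = degrees.astype(float) ** beta
    probs = weights / weights.sum()

    def sample_negative(positive_items):
        # Draw with replacement until the item is not an observed positive.
        while True:
            j = int(rng.choice(len(degrees), p=probs))
            if j not in positive_items:
                return j

    return sample_negative, probs

def hinge_loss(x_ui, x_uj, margin=1.0):
    """Hinge variant of l_uij from Equation (1), with w_ui = 1."""
    return max(0.0, margin - (x_ui - x_uj))

# Toy item degrees; user u has interacted with item 0 only.
degrees = np.array([50, 10, 3, 1, 30])
sample_negative, probs = make_degree_sampler(degrees, beta=0.5)
j = sample_negative(positive_items={0})
```

With β = 0 this reduces to uniform sampling (as in Rendle et al., 2009), while β = 1 samples proportionally to item degree.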
In most pairwise ranking models, selecting effective pairwise comparison samples plays an indispensable role in boosting ranking performance. In the following, we present the challenges that the class-imbalance issue raises for selecting pairwise comparison samples, and how to address them with an adaptive sampling method.
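Before the formal analysis, the adaptive idea from the contributions (rejecting candidates based on items' degree differences) can be sketched. The sigmoid accept rule below is a hypothetical stand-in for the paper's actual reject probability, which is not specified in this section:

```python
import math
import random

def accept_prob(deg_neg, deg_pos, tau=5.0):
    # Hypothetical rule: accept more readily when the candidate negative
    # is more popular (higher degree) than the given positive item.
    return 1.0 / (1.0 + math.exp(-(deg_neg - deg_pos) / tau))

def degree_rejection_sample(candidates, degrees, deg_pos, seed=0):
    """Draw candidates until one passes the degree-based accept test."""
    rng = random.Random(seed)
    while True:
        j = rng.choice(candidates)
        if rng.random() < accept_prob(degrees[j], deg_pos):
            return j

degrees = {0: 50, 1: 2, 2: 7}
j = degree_rejection_sample([0, 1, 2], degrees, deg_pos=5)
```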
Pairwise learning-to-rank often suffers from a class-imbalance problem when constructing training data, where negative examples are much more frequent than positive ones. Negative example sampling is therefore an important differentiator for training and classifier performance. This paper proposes a negative sampling method that considers vertex degree: a rejection function samples with higher probability from high-degree vertices instead of using edge-level random sampling. It provides both theoretical analysis and experiments on real-world datasets to demonstrate the effectiveness of the proposed sampling approach.
SP:68c696fe7843b41d5944aeee1c4d39a2cf46412e
A Simple and Debiased Sampling Method for Personalized Ranking
1 INTRODUCTION. Offering personalized service to users stands out as an important task, for example, ranking the top-N items that a user may like. Solutions to such problems are usually designed on a bipartite graph G = (V, E), where the vertex set V = U ∪ I contains the user set U and the item set I, and E denotes the edge set. Each edge e_ui ∈ E denotes an observed interaction between user u and item i. Users' preference on items is modeled by pairwise loss functions under the assumption that items a user has interacted with are of more interest to that user than those without interactions. The loss function thus involves a pairwise comparison between an observed (positive) edge e_ui ∈ E and an unobserved (negative) edge e_uj ∉ E. The optimization process therefore suffers from a class-imbalance issue because, in practical scenarios, the number of observed (positive) edges is always much smaller than that of the unobserved (negative) ones. The imbalance between e_ui ∈ E and e_uj ∉ E can be regarded as the edge-level imbalance issue. Pioneering works dealing with the class-imbalance problem fall into two main families: stationary sampling and dynamic sampling. Approaches in the former family usually address the edge-level class imbalance by under-sampling negative edges from a pre-defined stationary distribution (Rendle et al., 2009; Rendle and Freudenthaler, 2014), or by over-sampling positive edges through instances created from social connections (Chen et al., 2019). However, they ignore that the class-imbalance issue also exists on the vertex side, because each vertex can appear in both positive and negative edges. Through basic statistical analysis, we find that vertex degree has a direct impact on the vertex-level imbalance problem.
If we sample negative instances from a stationary distribution, popular vertices with degree greater than the average vertex degree are under-sampled as negatives, while "cold-start" vertices with degree below the average are over-sampled. Moreover, stationary samplers cannot capture the dynamics of the relative ranking order between positive and negative samples, as shown in Figures 1(a) and 1(b). From Figure 1(a) we can see that it is easy to find an order-violating item for pairwise loss optimization at the initial state, because many negative items rank higher than the positive item. However, as learning proceeds, a massive number of negative items become well separated from the positive item, as shown in Figure 1(b). At this point, a large portion of the negative items is useless for pairwise loss optimization, because they already rank lower than the positive item. Recently, dynamic sampling approaches (Weston et al., 2011; Yuan et al., 2016; Wang et al., 2020) have shown significant benefit for negative instance selection by considering how hard a negative sample is. However, existing dynamic methods have several drawbacks: 1) they lack a systematic understanding of their connection to the class-imbalance issue, leading them to sample candidates only from a uniform distribution; 2) they have to find a violating negative sample by searching through massive numbers of candidates, incurring high computational complexity (over ten times higher than sampling from a stationary distribution). In this work, we aim to find clues that can help design a faster dynamic negative sampler for the personalized ranking task. We observe that sampling from a uniform distribution can be regarded as a random walk with a transition probability matrix P over arbitrary node pairs in a fully connected item-item graph, as presented in Figure 1(c).
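A minimal sketch of the uniform dynamic search those drawbacks refer to: candidates are drawn uniformly until one violates the current ranking, and the trial count grows as the model improves. The scorer and margin here are illustrative placeholders:

```python
import random

def find_violating_negative(score, u, i, n_items, rng, margin=1.0, max_trials=1000):
    """Uniformly search for a negative j with score(u, j) > score(u, i) - margin.

    Returns (j, trials); the growing trial count is what makes dynamic
    sampling expensive once most negatives already rank below the positive.
    """
    for trials in range(1, max_trials + 1):
        j = rng.randrange(n_items)
        if j != i and score(u, j) > score(u, i) - margin:
            return j, trials
    return None, max_trials  # no violator found within the budget

rng = random.Random(0)
score = lambda u, j: float(j)  # toy scorer: item index as its score
j, trials = find_violating_negative(score, u=0, i=2, n_items=10, rng=rng)
```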
This paper investigates personalized ranking from implicit feedback, where we leverage a set of positively labeled data (items with interactions such as clicks or purchases), treat all items with no explicit feedback as negative instances, and aim to learn a ranking model for recommendation. The idea is to minimize a loss over triplets (u, i, j) of a user, a positively labeled instance, and a negatively labeled instance, in order to learn embeddings for items and users. The main observation in this paper is that, in addition to the known imbalance at the edge level (the proportion of positive to negative instances), existing methods also suffer from a vertex-level imbalance problem: due to sampling, the number of times an item appears as positive versus negative is disproportionate. In particular, popular items with degree (positiveness) greater than the average item degree are under-sampled as negatives, while cold-start items with degree below the average are over-sampled as negatives. This in turn drives the norm of the learned embeddings for popular items toward infinity after a certain number of training iterations. The issue occurs even if one uses under-sampling or over-sampling to solve the edge-level imbalance. The observations are supported both theoretically and empirically.
This paper proposes to improve the negative sampling process for training pairwise ranking models in personalized recommender systems. Negative sampling plays an important role in this setting, as it can largely impact model performance in terms of both training efficiency and recommendation accuracy. The authors first analyze the vertex-level imbalance problem that exists in current sampling-based methods, i.e., popular items are unevenly sampled with respect to their chances of appearing as positive versus negative. They observe that this may cause an imbalanced distribution of embedding norms between popular and long-tailed items, which can harm training efficacy and efficiency. To cope with this problem, the paper proposes a simple but effective negative sampling method. The core idea is to choose a negative candidate with larger popularity than the given positive item. To maintain sample quality, the dynamic relative rank position of positive and negative items is also considered, which has already been proven useful in previous works. Specifically, the proposed VINS method is realized through a biased sampler with a reject probability, which cooperates with an item buffer to enable efficient sampling of informative instances. Empirical results on several real-world datasets demonstrate its superiority in both efficiency and effectiveness.
The magnitude vector of images
1 INTRODUCTION. The topology community has recently invested much effort in studying a newly introduced quantity called magnitude (Leinster, 2010). While it originates from category theory, where it can be seen as a generalization of the Euler characteristic to metric spaces, the magnitude of a metric space is most intuitively understood as an attempt to measure the effective size of a metric space. As a descriptive scalar, this quantity extends the set of other well-known descriptors such as the rank, diameter, or dimension. However, unlike those descriptors, the properties of magnitude are not yet well understood. Even though the metric space structure of a dataset is of the utmost importance for understanding, e.g., the regularisation behaviour of classifiers, magnitude has so far received little attention from the machine learning community. One can, however, investigate the magnitude of each data point separately, considering each data instance as its own metric space. Following this line of thought, magnitude vectors were introduced as a way to characterise the contribution of each data sample to the overall magnitude, such that the sum of the elements of the magnitude vector equals the magnitude. As shown in previous work, magnitude vectors can detect boundaries of a metric space, with boundary points contributing more to the magnitude (Bunch et al., 2021). In this work, we seek to advance research on magnitude in machine learning by addressing these challenges. Since the metric space structure of an entire dataset may not be the most useful object in machine learning, we instead consider the magnitude of each individual data point, endowing each point with a metric space structure and exploring its properties.
In particular, because of their ubiquity and large dimensionality, we focus our analysis on image datasets, where each individual image is seen as a metric space. We then extend previous results by investigating the theoretical properties associated with the magnitude vector in such a context. In addition, we study the properties of the magnitude vector for improving the adversarial robustness of existing neural network architectures for image classification. We propose a novel, fully differentiable, magnitude layer, which can serve as an input layer for any deep learning architecture. We show that it results in more robustness against several types of adversarial attacks, with an acceptable reduction in classification performance, paving the way for an exciting new direction in the creation of robust neural architectures. Moreover, since naive magnitude calculations may be computationally prohibitive for large images, we introduce a new algorithm that dramatically speeds up the computation of the magnitude vector. Leveraging the regular structure of images, this allows us to conveniently approximate magnitude vectors for large images with minimal error. Intractable computational runtime often stymies the applicability of magnitude in machine learning and therefore hinders further research on it; our algorithm opens the door to using magnitude efficiently in machine learning research. Equipped with the speed-up algorithm, we showcase possible use cases of magnitude vectors in machine learning in the realm of edge detection and adversarial robustness. Our contributions are summarized as follows: • We formalize the notion of magnitude vectors for images and investigate the impact of the choice of different metrics. • We introduce an algorithm that dramatically speeds up the computation of magnitude with little loss of accuracy, which removes a critical roadblock to using magnitude vectors in machine learning applications.
• We provide a theoretical framework to understand the edge detection capability of magnitude vectors and report supporting empirical evidence. • We demonstrate the capabilities of a novel, fully differentiable, magnitude layer for improving the adversarial robustness of computer vision architectures. 2 THEORY. In this section, we introduce essential notions of the theory of magnitude and magnitude vectors and develop the theoretical background that will help interpret the empirical results. 2.1 MATHEMATICAL BACKGROUND. We start by formally introducing the notion of a finite metric space. Definition 1. A metric space is an ordered pair $(B, d)$, where $B$ is a finite set and $d$ is a metric on $B$. We denote the cardinality of $B$ by $|B|$. In our application the set $B$ is a set of vectors $B \subset \mathbb{R}^n$ and the metric considered will be the $\ell_p$ norm. In order to define the magnitude of such a space we first define the similarity matrix. Definition 2. Given a finite metric space $M = (B, d)$, its similarity matrix is $\zeta_M$ with entries $\zeta_M(i, j) = e^{-d(B_i, B_j)}$ for $B_i, B_j \in B$. The similarity matrix can be seen as the weight matrix arising from an exponential kernel. We are now in a position to define the magnitude vector and the magnitude of a finite metric space. Definition 3. Given a finite metric space $M = (B, d)$ with $|B| = n$ and similarity matrix $\zeta_M$ with inverse $\zeta_M^{-1}$, the magnitude vector of element $B_i$ is given by $w_i = \sum_{j=1}^{n} \zeta_M^{-1}(i, j)$. Moreover, the magnitude of $M$, $\mathrm{mag}\,M$, is $\sum_{i,j=1}^{n} \zeta_M^{-1}(i, j) = \sum_{i=1}^{n} w_i$. Not every finite metric space has a magnitude. In particular, the magnitude is not defined when the similarity matrix is not invertible; the magnitude therefore characterizes the structure of a metric space to some extent. It should also be noted that the definition of the magnitude vector is reminiscent of optimising a support vector machine. This connection has been pointed out for the Euclidean norm by Bunch et al. (2021).
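Definitions 2 and 3 translate directly into a few lines of NumPy. The sketch below (function name is ours) computes pairwise $\ell_p$ distances, exponentiates them into the similarity matrix, and sums the rows of its inverse; the two-point check at the end uses the closed form $\mathrm{mag}\,M = 2/(1 + e^{-d})$ for two points at distance $d$.

```python
import numpy as np

def magnitude_vector(B, p=2):
    """Magnitude vector of a finite metric space (Definitions 2 and 3):
    w_i = sum_j zeta_M^{-1}(i, j) with zeta_M(i, j) = exp(-d(B_i, B_j))."""
    diff = B[:, None, :] - B[None, :, :]
    d = np.sum(np.abs(diff) ** p, axis=-1) ** (1.0 / p)  # pairwise l_p distances
    zeta = np.exp(-d)                                    # similarity matrix
    return np.linalg.inv(zeta).sum(axis=1)               # row sums of the inverse

# Two points at distance 1: the magnitude is 2 / (1 + e^{-1}).
B = np.array([[0.0], [1.0]])
w = magnitude_vector(B)
print(w.sum())  # ≈ 1.4621
```

For larger spaces one would solve $\zeta_M w = (1, \ldots, 1)^T$ rather than forming the explicit inverse.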
2.2 THEORETICAL RESULTS. 2.2.1 THE MAGNITUDE OF AN IMAGE. This work focuses on the analysis of magnitude on images, considering each individual image as its own metric space. We first define how we build such a metric space from an image and then prove the existence of magnitude on images. We refer to the metric spaces on images as image metric spaces and define them as follows. Definition 4. Let $I \in \mathbb{R}^{c \times n \times m}$ be an image with $c$ channels, height $n$ and width $m$. Let $V = \{ v_{ij} : i \in 1, \ldots, m;\ j \in 1, \ldots, n \}$ be the set of $c$-dimensional vectors of pixel values in the image. Then the image metric space $M(B, d)$ is given by a set of vectors $B \subset \mathbb{R}^{c+2}$ of the form $B = \{ (i, j, v_{ij}^{(1)}, \ldots, v_{ij}^{(c)})^T : \forall v_{ij} \in V \}$ together with a metric $d$ on $\mathbb{R}^{c+2}$. Informally, we put all vectors corresponding to pixel values on a grid, concatenate the position vector with the pixel value vector, and use the resulting vectors as the ground set $B$ for our finite metric space. Note that $|B| = m \times n$, so the number of points in the metric space can be quite large even for moderately-sized images, a potential limitation that we address in Section 2.3. We now turn to investigating when an image metric space has magnitude and, by extension, a magnitude vector. This is a priori not clear since, as we saw, the existence of magnitude depends on the properties of the metric space. From the definition of magnitude (Definition 3), it is readily seen that an image metric space $M$ has magnitude if and only if its similarity matrix $\zeta_M$ is invertible. As generic $n \times n$ matrices are invertible (i.e., subjecting any non-invertible matrix to a random perturbation will almost certainly result in an invertible matrix), we conclude that generic image metric spaces have magnitude. In fact, since the vectors $b \in B$ of the image metric space can be rescaled by a factor $t > 0$, we can define scaled metrics $d(\cdot, \cdot) \mapsto t\,d(\cdot, \cdot)$.
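The construction in Definition 4 amounts to stacking, for each pixel, its grid position with its channel values. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def image_to_ground_set(image):
    """Ground set B of Definition 4: one vector (i, j, v_ij^(1), ..., v_ij^(c))
    per pixel of an image of shape (c, n, m)."""
    c, n, m = image.shape
    ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    coords = np.stack([ii.ravel(), jj.ravel()], axis=1)  # pixel grid positions
    values = image.reshape(c, -1).T                      # c-dimensional pixel values
    return np.concatenate([coords, values], axis=1)      # |B| = n*m points in R^{c+2}

B = image_to_ground_set(np.zeros((3, 4, 5)))  # 3 channels, 4x5 image
print(B.shape)  # (20, 5)
```

The quadratic growth of the similarity matrix in $|B| = n m$ is exactly the cost issue the approximation of Section 2.3 targets.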
This rescaled metric space has magnitude for all but finitely many $t > 0$ (Leinster et al., 2017, Proposition 2.8). In other words, we can always find a scaling which gives rise to a magnitude vector. The univariate function of $t$ defined by this scaling is called the magnitude function. In general, we are interested in computing the magnitude of all images in a whole dataset, not only of a single image, so that we can compare their magnitude (vectors). Therefore, we would like theoretical guarantees that there exists a scaling such that every image in a dataset has a magnitude vector. Proposition 5. Let $M(B, d)$ be an image metric space. If $d$ is an $\ell_p$ norm with $0 < p < 2$, then every image metric space $M$ has magnitude. Proof. This is a special case of (Meckes, 2013, Theorem 3.6). The above proposition only applies to $\ell_p$ norms with $0 < p < 2$. In practice, however, we find that on the various datasets considered in this work, $\ell_p$ norms with $p = 4, 10$ and the Hamming distance also lead to invertible similarity matrices, and thus to the existence of magnitude. The image metric space exhibits substantial structure; in particular, there is an underlying regular subspace (the grid). To quantify this further, we use the notion of a product space. Lemma 6. Let $M_1(B_1, d_1)$ and $M_2(B_2, d_2)$ be two finite metric spaces with $d_1, d_2$ being $\ell_p$ norms. Their product metric space $M = M_1 \times M_2$ is a metric space with a metric $\rho_p : (B_1 \times B_2) \times (B_1 \times B_2) \to \mathbb{R}$ such that $\rho_p((x_1, y_1), (x_2, y_2)) = \left( d_1(x_1, x_2)^p + d_2(y_1, y_2)^p \right)^{1/p}$, $1 \le p \le \infty$. Proof. This is a special case of (Dovgoshey & Martio, 2009). Note that the product space formulation gives a lot of freedom, since every positively weighted combination of $d_1$ and $d_2$ is also a metric on the product space.
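The product metric of Lemma 6 is straightforward to compute. A small sketch for the case $d_1 = d_2 = \ell_p$ (for $p = 2$ it reduces to the Euclidean distance on the concatenated vectors, which the 3-4-5 check below illustrates):

```python
import numpy as np

def rho_p(x1, y1, x2, y2, p=2):
    """Product-space metric of Lemma 6 with d_1 = d_2 = l_p:
    rho_p((x1, y1), (x2, y2)) = (d_1(x1, x2)^p + d_2(y1, y2)^p)^(1/p)."""
    d1 = np.sum(np.abs(np.asarray(x1, float) - np.asarray(x2, float)) ** p) ** (1 / p)
    d2 = np.sum(np.abs(np.asarray(y1, float) - np.asarray(y2, float)) ** p) ** (1 / p)
    return (d1 ** p + d2 ** p) ** (1 / p)

# Positions (0,0) vs (3,0), brightness 0 vs 4: a 3-4-5 triangle for p = 2.
print(rho_p([0, 0], [0], [3, 0], [4]))  # 5.0
```

The weighted combination $\alpha_1 d_1 + \alpha_2 d_2$ mentioned after the lemma is obtained the same way, replacing the $p$-th power sum with a weighted sum.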
Magnitude vector on images based on harmonic analysis. We now present the main result of this subsection, an interpretation of the magnitude vector for images based on harmonic analysis. We consider grey scale images; however, our reasoning generalises to colour images. First, notice that a grey scale image can be seen as a discretisation of a surface in $\mathbb{R}^3$, i.e., $z = f(x, y)$, where $x, y$ are the pixel positions and $z$ is the brightness. In general, it is not clear what an "outlier" on a continuous surface or curve is. In this paper we define an outlier as a point on the surface where the gradient is large, i.e., where neighbouring points are further away w.r.t. some distance measure. In the image case, outliers are points where the brightness value differs substantially between neighbouring pixels. We can use this reasoning to define a filtration $F$ on the vectors of $B$, $b_{ij} = (i, j, v_{ij}^{(1)})^T$, namely $\emptyset \subseteq B^{(1)} \subseteq B^{(2)} \subseteq \cdots \subseteq B^{(K)} = B$, where $B^{(k)}$ contains the $b_{ij}$ with $v_{ij}^{(1)} < \delta^{(k)}$ for $k = 1, \ldots, K$ and the $\delta^{(k)}$ are increasing brightness thresholds. Due to the symmetry of the problem we also define a filtration $F'$ with criterion $v_{ij}^{(1)} \ge \delta^{(k)}$. Further, we define the projection onto $\mathbb{R}^2$, $p : \mathbb{R}^3 \to \mathbb{R}^2$, $p(b_{ij}) = (i, j)$. By considering the projections of each subset of $F$ (respectively $F'$), we break the problem down into successive boundary detection on compact subsets of $\mathbb{R}^2$. To extend this reasoning to colour images we can either consider each channel as a grey scale image or define multi-dimensional filtrations. A visual description of our reasoning is found in Figure 1. To investigate the effect of different metrics on boundary detection, we consider the behaviour of the weighting vector on the 2-dimensional grid. We closely follow the argument of Bunch et al.
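The sublevel-set filtration $F$ described above can be sketched in a few lines; the function name and the example thresholds are illustrative, not the paper's implementation.

```python
import numpy as np

def brightness_filtration(gray, thresholds):
    """Filtration F on the ground-set vectors b_ij = (i, j, v_ij):
    B^(k) keeps the pixels with brightness below delta^(k), so that
    B^(1) ⊆ B^(2) ⊆ ... ⊆ B^(K) = B for increasing thresholds."""
    n, m = gray.shape
    ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    B = np.stack([ii.ravel(), jj.ravel(), gray.ravel()], axis=1)
    return [B[B[:, 2] < delta] for delta in thresholds]

gray = np.array([[0.1, 0.9], [0.4, 0.6]])
levels = brightness_filtration(gray, thresholds=[0.5, 1.0])
print([len(level) for level in levels])  # [2, 4]
```

The complementary filtration $F'$ is obtained by flipping the comparison to `>=`.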
(2021), extending their results to the product space metric in the case where the $d_i$ are $\ell_p$ norms with $p = 1, 2$ and

$\rho((x_1, z_1), (x_2, z_2)) = \alpha_1 d_1(x_1, x_2) + \alpha_2 d_2(z_1, z_2)$ (1)

for some positive weights $\alpha_1, \alpha_2$. For images we can consider the $x_i$ as two-dimensional vectors indicating the position on the unit square and the $z_i$ as the corresponding brightness values. Consider a regular 2d grid and the equation defining a weighting on the grid points, $\zeta_M v = (1, \ldots, 1)^T$. Using a continuous analogue, this can be written as a convolution (Bunch et al., 2021)

$f \star v\,(x) = \int_{\mathbb{R}^2} f(x - y)\, v(y)\, dy = \int_{\mathbb{R}^2} e^{-\alpha d(x, y)}\, v(y)\, dy = \mathbb{I}_{[0,1]^2}(x)$, (2)

where $\mathbb{I}_{[0,1]^2}$ is the indicator function and $d$ is any translation invariant metric. Using the Fourier transform,

$\mathcal{F}(f)(\xi) = \int_{\mathbb{R}^2} e^{-i 2\pi x \cdot \xi} f(x)\, dx$, (3)

its well-known properties (Folland, 1999) and the convolution theorem, we can derive an intuitive understanding of the magnitude vector and the effects of specific metrics. In the case of $d(\cdot, \cdot)$ being the Euclidean ($\ell_2$) norm, it has been shown in (Bunch et al., 2021) that

$\frac{\Gamma(\frac{3}{2})}{\alpha \pi^{\frac{3}{2}}} \left( \alpha^2 - \sum_{i=1}^{2} \partial_i^2 \right)^{\frac{3}{2}} \mathbb{I}_{[0,1]^2} = v$, (4)

where we generalised the result of Bunch et al. (2021) to the unit square. We can interpret equation 4 as the weighting $v$ being constant in the interior of the unit square ($\partial_i^2$ is zero in the interior) and different from this constant on the boundary. In fact, the weighting will also be constant along the edges and at the corners. In the Euclidean case it has been shown (Bunch et al., 2021) that this intuitive understanding translates rigorously to compact subsets of $\mathbb{R}^n$ with $n$ odd using distribution theory. Guided by this, we continue our argument using the Fourier transform. If $d(\cdot, \cdot)$ is the $\ell_1$ norm, i.e.
the Manhattan or cityblock distance, we obtain a result similar to equation 4,

$\left[ \prod_{i=1}^{2} \left( \alpha^2 - \partial_i^2 \right) \right] \mathbb{I}_{[0,1]^2} = v$, (5)

in other words, the $\ell_1$ norm admits the same interpretation as $\ell_2$; however, this time the differential operator acts multiplicatively on each dimension. For the product space metric $\rho(\cdot, \cdot) = \alpha_1 d_1(\cdot, \cdot) + \alpha_2 d_2(\cdot, \cdot)$ with the $d_i$ being $\ell_p$ for $p = 1, 2$, it follows that the Fourier transform is a product of the single-metric results. Equipped with this insight, we can now explain the edge detection capabilities of the magnitude vector. Consider a filtration $F$ (and $F'$) on a grey scale image and choose a subset of vectors $B^{(i)}$ from the filtration. Apply the projection map $p$ to every vector in this subset. The transformed set is a grid with potentially "missing" grid points on the domain $[0, n] \times [0, m]$. From the results on the unit square and the discretisation of the $\partial$ operator, we expect a constant weighting vector except on the boundaries of the grid, i.e., on points adjacent to the missing grid points and on the boundaries of the domain. This procedure can be performed on every $B^{(i)} \in F$, and the final magnitude vector can be seen as a combination of the weightings of each step (see Figure 1). We find that our expectations from the theoretical result agree well with empirical results (see Appendix D). Moreover, we empirically find that the Hamming distance and $\ell_p$ norms for $p \neq 1, 2$ have similar properties.
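The prediction above, a near-constant weighting in the interior with larger values on the boundary, can be checked numerically on a small grid by solving $\zeta_M v = (1, \ldots, 1)^T$ under the $\ell_1$ metric. This is our own sanity-check sketch, not the paper's implementation:

```python
import numpy as np

def grid_magnitude_weights(n, m):
    """Magnitude vector of a regular n x m grid with unit spacing under the
    l1 (cityblock) metric, obtained by solving zeta_M v = (1, ..., 1)^T."""
    ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    pts = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    d = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=-1)  # l1 distances
    zeta = np.exp(-d)
    return np.linalg.solve(zeta, np.ones(n * m)).reshape(n, m)

w = grid_magnitude_weights(5, 5)
print(w[0, 0] > w[2, 2])  # True: corners carry more weight than the interior
```

Under $\ell_1$ the similarity matrix factorizes as a Kronecker product of one-dimensional similarity matrices, which is why all interior weights come out exactly equal in this sketch.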
This paper explores the application of the quantity "magnitude of a metric space" to images. Based on prior work, magnitude is a quantity originating from category theory, extended to metric spaces with surprising geometric properties (Leinster, 2017). This paper defines the magnitude vector as a vector-valued function of a metric space (Def. 3). By defining an image as a metric space, the magnitude of a single image is defined (Def. 4). The authors then state conditions under which the magnitude of an image is known to exist (Prop. 5). The paper then defines a filtration operation that computes the magnitude vector on multiple thresholded projections of the input. Section 2.3 introduces an approximate computation method for the magnitude, and Section 3.1 evaluates the efficiency and error of this approximation. Section 3.2 applies the method to edge detection and compares it to the Canny detector. Section 3.3 uses the magnitude for adversarial robustness and claims gains.
SP:bb1692c22d78937a20fdb97ff4d4ca57317e99ae
The authors consider a relatively new finite metric space quantity known as the "magnitude" and its applications within computer vision. The paper proceeds by introducing and defining the magnitude quantity before exploring some key properties. The contributions the authors outline are:
* a study of the relationship between 'outlier detection' and 'edge detection'
* the introduction of a 'magnitude layer' as an adversarial defence
* a new algorithm to speed up the computation of the magnitude vector
SP:bb1692c22d78937a20fdb97ff4d4ca57317e99ae
The magnitude vector of images
1 INTRODUCTION . The topology community has recently invested much effort in studying a newly introduced quantity called magnitude ( Leinster , 2010 ) . While it originates from category theory , where it can be seen as a generalization of Euler characteristics to metric spaces , the magnitude of a metric space is most intuitively understood as an attempt to measure the effective size of a metric space . As a descriptive scalar , this quantity extends the set of other well known descriptors such as the rank , diameter or dimension . However , unlike those descriptors , the properties of magnitude are not yet well understood . Even though the metric space structure of dataset is of the utmost importance to understand e.g . the regularisation behaviour of classifiers , magnitude has received little attention by the machine learning community so far . However , it turns out that one can instead investigate the magnitude of each data point separately , considering each data instance as its own metric space . Following this line of thought , magnitude vectors were introduced as a way to characterise the contribution of each data sample to the overall magnitude , such that the sum of the elements of the magnitude vector amounts to the magnitude . As shown in previous works , the magnitude vectors can detect boundaries of a metric space , with boundary points having a larger contribution to magnitude ( Bunch et al. , 2021 ) . In this work , we seek to advance the research about magnitude in machine learning by addressing these challenges . Since the metric space structure of an entire dataset may not be the most useful application in machine learning , we instead therefore consider the magnitude of each individual data point by endowing each of them with a metric space structure and explore its properties . 
In particular , because of their ubiquity and the large dimensionality , we focus our analysis on image datasets , where each individual image is seen as a metric space . We then extend previous results by investigating the theoretical properties associated with magnitude vector in such a context . In addition , we study the properties of the magnitude vector for improving the adversarial robustness of existing neural network architectures for image classification . We propose a novel , fully differentiable , magnitude layer , which can serve as an input layer for any deep learning architecture . We show that it results in more robustness for several types of adversarial attacks , with an acceptable reduction in classification performance , paving the way for a new exciting direction in the creation of robust neural architectures . Moreover , since naive magnitude calculations may be computationally prohibitive for large images , we introduce a new algorithm that dramatically speeds up the computation of the magnitude vector . Leveraging the regular structure of images , this allows to conveniently approximate magnitude vectors for large images with minimal error . Intractable computational runtime often stymies the applicability of magnitude in machine learning and therefore hinders further the research of it ; our algorithm opens the door to using magnitude efficiently in machine learning research . Equipped with the speed up algorithm , we showcase possible use cases of magnitude vectors in machine learning in the realm of in edge detection and adversarial robustness . Our contributions are summarized as follows : • We formalize the notion of magnitude vectors for images and investigate the impact of the choice of different metrics . • We introduce an algorithm that dramatically speeds up the computation of magnitude with little loss , which removes a critical roadblock in using magnitude vectors in machine learning applications . 
• We provide a theoretical framework to understand the edge detection capability of magnitude vectors and report supporting empirical evidence. • We demonstrate the capabilities of a novel, fully differentiable, magnitude layer for improving adversarial robustness of computer vision architectures. 2 THEORY. In this section, we introduce essential notions of the theory of magnitude and magnitude vectors and develop the theoretical background that will help interpret the empirical results. 2.1 MATHEMATICAL BACKGROUND. We start by formally introducing the notion of a finite metric space. Definition 1. A metric space is an ordered pair $(B, d)$, where $B$ is a finite set and $d$ is a metric on $B$. We denote the cardinality of $B$ by $|B|$. In our application the set $B$ is a set of vectors $B \subset \mathbb{R}^n$ and the metric considered will be the $\ell_p$ norm. In order to define the magnitude of such a space we first define the similarity matrix. Definition 2. Given a finite metric space $M = (B, d)$, its similarity matrix is $\zeta_M$ with entries $\zeta_M(i, j) = e^{-d(B_i, B_j)}$ for $B_i, B_j \in B$. The similarity matrix can be seen as the weights arising from an exponential kernel. We are now in a position to define the magnitude vector and the magnitude of a finite metric space. Definition 3. Given a finite metric space $M = (B, d)$ with $|B| = n$ and similarity matrix $\zeta_M$ with inverse $\zeta_M^{-1}$, the magnitude vector of element $B_i$ is given by $w_i = \sum_{j=1}^{n} \zeta_M^{-1}(i, j)$. Moreover, the magnitude of $M$, $\operatorname{mag} M$, is $\sum_{i,j=1}^{n} \zeta_M^{-1}(i, j) = \sum_{i=1}^{n} w_i$. Not every finite metric space has a magnitude. In particular, the magnitude is not defined when the similarity matrix is not invertible; the magnitude therefore characterizes the structure of a metric space to some extent. It should also be noted that the definition of the magnitude vector is reminiscent of optimising a support vector machine. This connection has been pointed out for the Euclidean norm by Bunch et al. (2021).
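Definition 3 translates directly into a few lines of linear algebra. The sketch below (the function name and the $\ell_p$ point-set interface are our own choices, not the authors' implementation) computes the magnitude vector by solving the linear system $\zeta_M w = (1, \dots, 1)^T$ instead of inverting $\zeta_M$ explicitly:

```python
import numpy as np

def magnitude_vector(points, p=2.0):
    """Magnitude vector of the finite metric space (points, l_p norm).

    points: (n, d) array. Returns (w, magnitude) where
    w_i = sum_j zeta^{-1}(i, j) and magnitude = sum_i w_i (Definition 3).
    """
    diff = np.abs(points[:, None, :] - points[None, :, :])
    D = (diff ** p).sum(axis=-1) ** (1.0 / p)   # pairwise l_p distances
    zeta = np.exp(-D)                           # similarity matrix (Definition 2)
    # Row sums of zeta^{-1} solve the linear system zeta w = (1, ..., 1)^T,
    # which is cheaper and more stable than forming the explicit inverse.
    w = np.linalg.solve(zeta, np.ones(len(points)))
    return w, w.sum()
```

As a sanity check, a single point always has magnitude 1, and a two-point space at distance $d$ has magnitude $2/(1 + e^{-d})$.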
2.2 THEORETICAL RESULTS. 2.2.1 THE MAGNITUDE OF AN IMAGE. This work focuses on the analysis of magnitude on images, considering each individual image as its own metric space. We first define how we build such a metric space from an image and then prove the existence of magnitude for images. We refer to the metric spaces on images as image metric spaces and define them as follows. Definition 4. Let $I \in \mathbb{R}^{c \times n \times m}$ be an image with $c$ channels, height $n$ and width $m$. Let $V = \{v_{ij} : i \in 1, \dots, m;\ j \in 1, \dots, n\}$ be the set of $c$-dimensional vectors of pixel values in the image. Then the image metric space $M(B, d)$ is given by a set of vectors $B \subset \mathbb{R}^{c+2}$ of the form $B = \{(i, j, v_{ij}^{(1)}, \dots, v_{ij}^{(c)})^T : \forall v_{ij} \in V\}$ together with a metric $d$ on $\mathbb{R}^{c+2}$. Informally, we put all vectors corresponding to pixel values on a grid, concatenate the position vector with the pixel value vector, and use the resulting vectors as the ground set $B$ for our finite metric space. Let us note that $|B| = m \times n$ and therefore the number of points in the metric space can be quite large even for moderately-sized images, a potential limitation that we address in Section 2.3. We now turn to investigating when an image metric space has magnitude and, by extension, a magnitude vector. This is a priori not clear since, as we saw, the existence of magnitude depends on the properties of the metric space. From the definition of magnitude (Definition 3), it is readily seen that an image metric space $M$ has magnitude if and only if its similarity matrix $\zeta_M$ is invertible. As generic $n \times n$ matrices are invertible (i.e., subjecting any non-invertible matrix to a random perturbation will almost certainly result in an invertible matrix), we conclude that generic image metric spaces have magnitude. In fact, since the vectors $b \in B$ of the image metric space can be rescaled by a factor $t > 0$, we can define scaled metrics $d(\cdot, \cdot) \mapsto t\,d(\cdot, \cdot)$.
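The construction in Definition 4 can be sketched as a short numpy routine that flattens a $c \times n \times m$ image into the ground set $B \subset \mathbb{R}^{c+2}$. The function name and the row-major indexing convention are our assumptions, not the paper's code:

```python
import numpy as np

def image_to_points(img):
    """Ground set B of the image metric space (Definition 4).

    img: (c, n, m) array. Returns an (n*m, c+2) array whose rows are
    (i, j, v_ij^(1), ..., v_ij^(c)): the pixel position concatenated
    with the pixel-value vector.
    """
    c, n, m = img.shape
    ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    coords = np.stack([ii.ravel(), jj.ravel()], axis=1)   # (n*m, 2) positions
    values = img.reshape(c, -1).T                          # (n*m, c) pixel values
    return np.concatenate([coords, values], axis=1).astype(float)
```

The resulting point set can then be fed to any finite-metric-space magnitude computation; note that $|B| = nm$ grows quadratically in the image side length, which is exactly the cost issue the paper's speed-up algorithm targets.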
This rescaled metric space has magnitude for all but finitely many $t > 0$ (Leinster et al., 2017, Proposition 2.8). In other words, we can always find a scaling which gives rise to a magnitude vector. The univariate function of $t$ defined by this scaling is called the magnitude function. In general, we are interested in computing the magnitude of all images in a whole dataset, and not only of a single image, so that we can compare their magnitude (vectors). Therefore, we would like theoretical guarantees that there exists a scaling such that every image in a dataset has a magnitude vector. Proposition 5. Let $M(B, d)$ be an image metric space. If $d$ is an $\ell_p$ norm with $0 < p < 2$, then every image metric space $M$ has magnitude. Proof. This is a special case of (Meckes, 2013, Theorem 3.6). The above proposition theoretically only applies to $\ell_p$ norms with $0 < p < 2$. However, in practice, we find that on the various datasets considered in this work, $\ell_p$ norms with $p = 4, 10$ and the Hamming distance also lead to invertible similarity matrices, and thus to the existence of magnitude. The image metric space exhibits substantial structure; in particular, there is an underlying regular subspace (the grid). To quantify this further, we use the notion of a product space. Lemma 6. Let $M_1(B_1, d_1)$ and $M_2(B_2, d_2)$ be two finite metric spaces with $d_1, d_2$ being $\ell_p$ norms. Their product metric space $M = M_1 \times M_2$ is a metric space with a metric $\rho_p : (B_1 \times B_2) \times (B_1 \times B_2) \to \mathbb{R}$ such that $\rho_p((x_1 \times y_1), (x_2 \times y_2)) = \left( d_1(x_1, x_2)^p + d_2(y_1, y_2)^p \right)^{1/p}$, $1 \le p \le \infty$. Proof. This is a special case of (Dovgoshey & Martio, 2009). Note that the product space formulation gives a lot of freedom, since every positively weighted combination of $d_1$ and $d_2$ is also a metric on the product space.
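Lemma 6's product metric, and the positively weighted combination mentioned at the end, can be sketched as two small helper functions (names and interfaces are our own, for illustration):

```python
def product_metric(x1, y1, x2, y2, d1, d2, p=2.0):
    """rho_p of Lemma 6: the l_p combination of the two component metrics
    d1 (on the first factor) and d2 (on the second factor)."""
    return (d1(x1, x2) ** p + d2(y1, y2) ** p) ** (1.0 / p)

def weighted_product_metric(x1, y1, x2, y2, d1, d2, a1=1.0, a2=1.0):
    """A positively weighted combination a1*d1 + a2*d2, which is also a
    metric on the product space (the freedom noted after Lemma 6)."""
    return a1 * d1(x1, x2) + a2 * d2(y1, y2)
```

For an image metric space this separates the pixel-position factor from the pixel-value factor, which is what the harmonic-analysis argument below exploits.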
Magnitude vector on images based on harmonic analysis. We now present the main result of this subsection, an interpretation of the magnitude vector for images based on harmonic analysis. For this we consider grey scale images; however, our reasoning generalises to colour images. First, notice that a grey scale image can be seen as a discretisation of a surface in $\mathbb{R}^3$, i.e., $z = f(x, y)$, where $x, y$ are the pixel positions and $z$ is the brightness. In general, it is not clear what an “outlier” on a continuous surface or curve is. In this paper we define an outlier as a point on the surface where the gradient is large, i.e., where neighbouring points are further away with respect to some distance measure. In the image case, outliers are points where the brightness value is substantially different between neighbouring pixels. We can use this reasoning to define a filtration $F$ on the vectors of $B$, $b_{ij} = (i, j, v_{ij}^{(1)})^T$, namely $\emptyset \subseteq B^{(1)} \subseteq B^{(2)} \subseteq \cdots \subseteq B^{(K)} = B$ such that $v_{ij}^{(1)} < \delta^{(k)}$ for $k = 1, \dots, K$, where the $\delta^{(k)}$ are different brightness thresholds. Due to the symmetry in the problem we also define $F'$ with the criterion $v_{ij}^{(1)} \ge \delta^{(k)}$. Further, we define the projection onto $\mathbb{R}^2$, $p : \mathbb{R}^3 \to \mathbb{R}^2$, $p(b_{ij}) = (i, j)$. By considering the projections of each subset of $F$ (resp. $F'$) we break the problem down into successive boundary detection on compact subsets of $\mathbb{R}^2$. To extend this reasoning to colour images we can either consider each channel as a grey scale image or define multi-dimensional filtrations. A visual description of our reasoning is found in Figure 1. To investigate the effect of different metrics on boundary detection, we consider the behaviour of the weighting vector on the 2-dimensional grid. We closely follow the argument of Bunch et al.
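The brightness filtration $F$ together with the projection $p(b_{ij}) = (i, j)$ amounts to taking sublevel sets of the grey-scale image and keeping only the grid positions. A minimal sketch (function name and threshold choice are illustrative):

```python
import numpy as np

def brightness_filtration(gray, thresholds):
    """Sublevel-set filtration F of a grey-scale image: for each threshold
    delta^(k), return the projected grid points {(i, j) : v_ij < delta^(k)}.

    gray: (n, m) array of brightness values; thresholds: increasing deltas.
    Returns a list of (num_points, 2) integer arrays, nested by construction.
    """
    subsets = []
    for delta in thresholds:
        ii, jj = np.nonzero(gray < delta)        # projection p(b_ij) = (i, j)
        subsets.append(np.stack([ii, jj], axis=1))
    return subsets
```

Each returned subset is a grid with "missing" points, on which the boundary-detection behaviour of the magnitude weighting derived below can be observed; the complementary filtration $F'$ would use `gray >= delta` instead.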
(2021), extending their results to the product space metric in the case where the $d_i$ are $\ell_p$ norms with $p = 1, 2$ and $\rho((x_1 \times z_1), (x_2 \times z_2)) = \alpha_1 d_1(x_1, x_2) + \alpha_2 d_2(z_1, z_2)$ (1) for some positive weights $\alpha_1, \alpha_2$. For images we can consider the $x_i$ as two-dimensional vectors indicating the position on the unit square and the $z_i$ as the corresponding brightness values. Consider a regular 2d grid and the equation defining a weighting on the grid points, $\zeta_M v = (1, \dots, 1)^T$. Using a continuous analogue, this can be written as a convolution (Bunch et al., 2021) $f \star v(x) = \int_{\mathbb{R}^2} f(x - y)\, v(y)\, dy = \int_{\mathbb{R}^2} e^{-\alpha d(x, y)}\, v(y)\, dy = I_{[0,1]^2}(x)$, (2) where $I_{[0,1]^2}$ is the indicator function and $d$ is any translation invariant metric. Using the Fourier transform, $\mathcal{F}(f)(\xi) = \int_{\mathbb{R}^2} e^{-i 2\pi x \cdot \xi} f(x)\, dx$, (3) its well-known properties (Folland, 1999) and the convolution theorem, we can derive an intuitive understanding of the magnitude vector and the effects of specific metrics. In the case of $d(\cdot, \cdot)$ being the Euclidean ($\ell_2$) norm, it has been shown in (Bunch et al., 2021) that $\frac{\Gamma(\frac{3}{2})}{\alpha \pi^{\frac{3}{2}}} \Big( \alpha^2 - \sum_{i=1}^{2} \partial_i^2 \Big)^{\frac{3}{2}} I_{[0,1]^2} = v$, (4) where we generalised the result of Bunch et al. (2021) to the unit square. We can interpret equation 4 as saying that the weighting $v$ is constant in the interior of the unit square ($\partial_i^2$ is zero in the interior) and differs from this constant on the boundary. In fact, the weighting will also be constant on the edges and corners. In the Euclidean case it has been shown (Bunch et al., 2021) that this intuitive understanding translates rigorously to compact subsets of $\mathbb{R}^n$ with $n$ odd using distribution theory. Guided by this, we continue our argument using the Fourier transform. If $d(\cdot, \cdot)$ is the $\ell_1$ norm, i.e.
the Manhattan or cityblock distance, we obtain a result similar to equation 4, $\Big[ \prod_{i=1}^{2} (\alpha^2 - \partial_i^2) \Big] I_{[0,1]^2} = v$, (5) in other words, the $\ell_1$ norm admits the same interpretation as $\ell_2$; however, this time the differential operator acts multiplicatively on each dimension. For the product space metric $\rho(\cdot, \cdot) = \alpha_1 d_1(\cdot, \cdot) + \alpha_2 d_2(\cdot, \cdot)$ with the $d_i$ being $\ell_p$ norms with $p = 1, 2$, it follows that the Fourier transform is a product of the single-metric results. Equipped with this insight, we can now explain the edge detection capabilities of the magnitude vector. Consider a filtration $F$ (and $F'$) on a grey scale image and choose a subset of vectors $B^{(i)}$ from the filtration. Apply the projection map $p$ to every vector in this subset. The transformed set is a grid with potentially “missing” grid points on the domain $[0, n] \times [0, m]$. From the results on the unit square and the discretisation of the $\partial$ operator, we expect a constant weighting vector except on the boundaries of the grid, i.e., on points adjacent to the missing grid points and on the boundaries of the domain. This procedure can be performed on every $B^{(i)} \in F$ and the final magnitude vector can be seen as a combination of the weightings of each step (see Figure 1). We find that our expectations from the theoretical result agree well with empirical results (see Appendix D). Moreover, we empirically find that the Hamming distance and $\ell_p$ norms for $p \neq 1, 2$ have similar properties.
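The boundary behaviour derived above is easy to verify numerically. Under the $\ell_1$ metric on a unit-spaced grid the similarity matrix factorises as a Kronecker product of one-dimensional similarity matrices ($e^{-(|\Delta i| + |\Delta j|)} = e^{-|\Delta i|} e^{-|\Delta j|}$), so the weight image is a rank-one product of 1d magnitude vectors, with corners carrying the largest weight and the interior a smaller, nearly constant one. A sketch (our own illustration, not the paper's implementation):

```python
import numpy as np

def grid_magnitude_weights(n, m, scale=1.0):
    """Magnitude vector of an n x m grid of pixel positions under the
    scaled l_1 metric t*d, returned reshaped as an (n, m) weight image."""
    ii, jj = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    pts = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    D = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=-1)  # l_1 distances
    zeta = np.exp(-scale * D)                                    # similarity matrix
    w = np.linalg.solve(zeta, np.ones(n * m))                    # zeta w = 1
    return w.reshape(n, m)

W = grid_magnitude_weights(5, 5)
# Corner weights exceed interior weights, matching the boundary-detection
# interpretation of equations 4 and 5.
```

For this $\ell_1$ grid the corner weight has the closed form $(1/(1+q))^2$ with $q = e^{-t}$, inherited from the 1d arithmetic-sequence case.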
This paper applies the machinery of magnitude vectors of metric spaces to extract information from individual images (which can themselves be seen as finite metric spaces with one point per pixel). After some theoretical considerations, the paper proposes a divide-and-conquer algorithm to approximate the magnitude vector of an image which is significantly faster than the naive inversion of an $N\times N$ matrix (where $N$ is the number of pixels in the image). Finally, the magnitude vector is shown to correlate with the result of traditional edge detectors (after hyper-parameter tuning) and an application to adversarial robustness is described.
Skill-based Meta-Reinforcement Learning
While deep reinforcement learning methods have shown impressive results in robot learning, their sample inefficiency makes the learning of complex, long-horizon behaviors with real robot systems infeasible. To mitigate this issue, meta-reinforcement learning methods aim to enable fast learning on novel tasks by learning how to learn. Yet, their application has been limited to short-horizon tasks with dense rewards. To enable learning long-horizon behaviors, recent works have explored leveraging prior experience in the form of offline datasets without reward or task annotations. While these approaches yield improved sample efficiency, millions of interactions with environments are still required to solve complex tasks. In this work, we devise a method that enables meta-learning on long-horizon, sparse-reward tasks, allowing us to solve unseen target tasks with orders of magnitude fewer environment interactions. Our core idea is to leverage prior experience extracted from offline datasets during meta-learning. Specifically, we propose to (1) extract reusable skills and a skill prior from offline datasets, (2) meta-train a high-level policy that learns to efficiently compose learned skills into long-horizon behaviors, and (3) rapidly adapt the meta-trained policy to solve an unseen target task. Experimental results on continuous control tasks in navigation and manipulation demonstrate that the proposed method can efficiently solve long-horizon novel target tasks by combining the strengths of meta-learning and the usage of offline datasets, while prior approaches in RL, meta-RL, and multi-task RL require substantially more environment interactions to solve the tasks. 1 INTRODUCTION. In recent years, deep reinforcement learning methods have achieved impressive results in robot learning (Gu et al., 2017; Andrychowicz et al., 2020; Kalashnikov et al., 2021).
Yet, existing approaches are sample inefficient, rendering the learning of complex behaviors through trial and error infeasible, especially on real robot systems. In contrast, humans are capable of effectively learning a variety of complex skills in only a few trials. This can be greatly attributed to our ability to learn how to learn new tasks quickly by efficiently utilizing previously acquired skills. Can machines likewise learn how to learn by efficiently utilizing learned skills like humans? Meta-reinforcement learning (meta-RL) holds the promise of allowing RL agents to acquire novel tasks with improved efficiency by learning to learn from a distribution of tasks (Finn et al., 2017; Rakelly et al., 2019). In spite of recent advances in the field, most existing meta-RL algorithms are restricted to short-horizon, dense-reward tasks. To facilitate efficient learning on long-horizon, sparse-reward tasks, recent works aim to leverage experience from prior tasks in the form of offline datasets without additional reward and task annotations (Lynch et al., 2020; Pertsch et al., 2020; Chebotar et al., 2021). While these methods can solve complex tasks with substantially improved sample efficiency over methods learning from scratch, millions of interactions with environments are still required to acquire long-horizon skills. In this work, we aim to take a step towards combining the capabilities of both learning how to quickly learn new tasks and leveraging prior experience in the form of unannotated offline data (see Figure 1). Specifically, we aim to devise a method that enables meta-learning on complex, long-horizon tasks and can solve unseen target tasks with orders of magnitude fewer environment interactions than prior works. We propose to leverage the offline experience by extracting reusable skills – short-term behaviors that can be composed to solve unseen long-horizon tasks.
We employ a hierarchical meta-learning scheme in which we meta-train a high-level policy to learn how to quickly reuse the extracted skills. To efficiently explore the learned skill space during meta-training, the high-level policy is guided by a skill prior which is also acquired from the offline experience data. We evaluate our method and prior approaches in deep RL, skill-based RL, meta-RL, and multi-task RL on two challenging continuous control environments: maze navigation and kitchen manipulation, which require long-horizon control and provide only sparse rewards. Experimental results show that our method can efficiently solve unseen tasks by exploiting meta-learning tasks and offline datasets, while prior approaches require substantially more samples or fail to solve the tasks. In summary, the main contributions of this paper are threefold: • To the best of our knowledge, this is the first work to combine meta-reinforcement learning algorithms with task-agnostic offline datasets that do not contain reward or task annotations. • We propose a method that combines meta-learning with offline data by extracting learned skills and a skill prior as well as meta-learning a hierarchical skill policy regularized by the skill prior. • We empirically show that our method is significantly more efficient at learning long-horizon sparse-reward tasks compared to prior methods in deep RL, skill-based RL, meta-RL, and multi-task RL. 2 RELATED WORK. Meta-Reinforcement Learning. Meta-RL approaches (Duan et al., 2016; Wang et al., 2017; Finn et al., 2017; Yu et al., 2018; Rothfuss et al., 2019; Gupta et al., 2018; Vuorio et al., 2018; Nagabandi et al., 2019; Clavera et al., 2019; 2018; Rakelly et al., 2019; Vuorio et al., 2019; Yang et al., 2019; Zintgraf et al., 2019; Humplik et al., 2019; Zintgraf et al., 2020; Liu et al.
, 2021) hold the promise of allowing learning agents to quickly adapt to novel tasks by learning to learn from a distribution of tasks. Despite the recent advances in the field, most existing meta-RL algorithms are limited to short-horizon, dense-reward tasks. In contrast, we aim to develop a method that can meta-learn to solve long-horizon tasks with sparse rewards by leveraging offline datasets. Offline datasets. Recently, many works have investigated the usage of offline datasets for agent training. In particular, the field of offline reinforcement learning (Levine et al., 2020; Siegel et al., 2020; Kumar et al., 2020; Yu et al., 2021) aims to devise methods that can perform RL fully offline from pre-collected data, without the need for environment interactions. However, these methods require target task reward annotations on the offline data for every new task that should be learned. These reward annotations can be challenging to obtain, especially if the offline data is collected from a diverse set of prior tasks. In contrast, our method is able to leverage offline datasets without any reward annotations. Offline Meta-RL. Another recent line of research aims to meta-learn from static, pre-collected datasets including reward annotations (Mitchell et al., 2021; Pong et al., 2021; Dorfman et al., 2021). After meta-training with the offline datasets, these works aim to quickly adapt to a new task with only a small amount of data from that new task. In contrast to the aforementioned offline RL methods, these works aim to adapt to unseen tasks and assume access to only limited data from the new tasks. However, in addition to reward annotations, these approaches often require that the offline training data is split into separate datasets for each training task, further limiting scalability. Skill-based Learning.
An alternative approach for leveraging offline data that does not require reward or task annotations is through the extraction of skills – reusable short-horizon behaviors. Methods for skill-based learning recombine these skills for learning unseen target tasks and converge substantially faster than methods that learn from scratch (Lee et al., 2018; Hausman et al., 2018; Sharma et al., 2020). When trained on diverse datasets these approaches can extract a wide repertoire of skills and learn complex, long-horizon tasks (Merel et al., 2020; Lynch et al., 2020; Pertsch et al., 2020; Ajay et al., 2021; Chebotar et al., 2021; Pertsch et al., 2021). Yet, although they are more efficient than training from scratch, they still require a large number of environment interactions to learn a new task. Our method instead combines skills extracted from offline data with meta-learning, leading to significantly improved sample efficiency. 3 PROBLEM FORMULATION AND PRELIMINARIES. Our approach builds on prior work on meta-learning and learning from offline datasets and aims to combine the best of both worlds. In the following we formalize our problem setup and briefly summarize relevant prior work. Problem Formulation. Following prior work on learning from large offline datasets (Lynch et al., 2020; Pertsch et al., 2020; 2021), we assume access to a dataset of state-action trajectories $D = \{s_t, a_t, \dots\}$ which is collected either across a wide variety of tasks or as “play data” with no particular task in mind. We thus refer to this dataset as task-agnostic. With a large number of data collection tasks, the dataset covers a wide variety of behaviors and can be used to accelerate learning on diverse tasks. Such data can be collected at scale, e.g., through autonomous exploration (Hausman et al., 2018; Sharma et al., 2020; Dasari et al., 2019), human teleoperation (Schaal et al., 2005; Gupta et al.
, 2019; Mandlekar et al., 2018; Lynch et al., 2020), or from previously trained agents (Fu et al., 2020; Gulcehre et al., 2020). We additionally assume access to a set of meta-training tasks $T = \{T_1, \dots, T_N\}$, where each task is represented as a Markov decision process (MDP) defined by a tuple $\{S, A, P, r, \rho, \gamma\}$ of states, actions, transition probability, reward, initial state distribution, and discount factor. Our goal is to leverage both the offline dataset $D$ and the meta-training tasks $T$ to accelerate the training of a policy $\pi(a|s)$ on a target task $T^*$ which is also represented as an MDP. Crucially, we do not assume that $T^*$ is part of the set of training tasks $T$, nor that $D$ contains demonstrations for solving $T^*$. Thus, we aim to design an algorithm that can leverage offline data and meta-training tasks for learning how to quickly compose known skills for solving an unseen target task. Next, we describe existing approaches that either leverage offline data or meta-training tasks to accelerate target task learning. Then, we describe how our approach takes advantage of the best of both worlds. Skill-based RL. One successful approach for leveraging task-agnostic datasets to accelerate the learning of unseen tasks is through the transfer of reusable skills, i.e., short-horizon behaviors that can be composed to solve long-horizon tasks. Prior work in skill-based RL called Skill-Prior RL (SPiRL, Pertsch et al. (2020)) proposes an effective way to implement this idea. Specifically, SPiRL uses a task-agnostic dataset to learn two models: (1) a skill policy $\pi(a|s, z)$ that decodes a latent skill representation $z$ into a sequence of executable actions and (2) a prior over latent skill variables $p(z|s)$ which can be leveraged to guide exploration in skill space.
SPiRL uses these skills for learning new tasks efficiently by training a high-level skill policy $\pi(z|s)$ that acts over the space of learned skills instead of primitive actions. The target task RL objective extends Soft Actor Critic (SAC, Haarnoja et al. (2018)), a popular off-policy RL algorithm, by guiding the high-level policy with the learned skill prior: $\max_\pi \sum_t \mathbb{E}_{(s_t, z_t) \sim \rho_\pi} \left[ r(s_t, z_t) - \alpha D_{KL}\big(\pi(z|s_t), p(z|s_t)\big) \right]$. (1) Here $D_{KL}$ denotes the Kullback-Leibler divergence between the policy and the skill prior, and $\alpha$ is a weighting coefficient. Off-Policy Meta-RL. Rakelly et al. (2019) introduced an off-policy meta-RL algorithm called probabilistic embeddings for actor-critic RL (PEARL) that leverages a set of training tasks $T$ to enable quick learning of new tasks. Specifically, PEARL leverages the meta-training tasks for learning a task encoder $q(e|c)$. This encoder takes in a small set of state-action-reward transitions $c$ and produces a task embedding $e$. This embedding is used to condition the actor $\pi(a|s, e)$ and critic $Q(s, a, e)$. In PEARL, actor, critic and task encoder are trained by jointly maximizing the obtained reward and the policy's entropy $\mathcal{H}$ (Haarnoja et al., 2018): $\max_\pi \mathbb{E}_{T \sim p_T,\, e \sim q(\cdot|c^T)} \left[ \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_{\pi|e}} \left[ r_T(s_t, a_t) + \alpha \mathcal{H}(\pi(a|s_t, e)) \right] \right]$. (2) Additionally, the task embedding output of the task encoder is regularized towards a constant prior distribution $p(e)$.
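The prior-regularized term in the SPiRL objective (Equation 1) has a closed form when both the high-level policy and the skill prior are diagonal Gaussians. A minimal numpy sketch of that term (helper names are our own; this is not the authors' implementation):

```python
import numpy as np

def kl_diag_gaussians(mu_q, std_q, mu_p, std_p):
    """D_KL( N(mu_q, diag(std_q^2)) || N(mu_p, diag(std_p^2)) ) for
    diagonal Gaussians, e.g. policy pi(z|s) against skill prior p(z|s)."""
    var_q, var_p = std_q ** 2, std_p ** 2
    return 0.5 * np.sum(
        var_q / var_p + (mu_p - mu_q) ** 2 / var_p - 1.0 + np.log(var_p / var_q)
    )

def regularized_return(rewards, kls, alpha=0.1):
    """Per-trajectory value of Equation 1: reward minus the alpha-weighted
    KL to the skill prior, summed over time steps."""
    return float(np.sum(np.asarray(rewards) - alpha * np.asarray(kls)))
```

The same KL helper applies to PEARL-style objectives where the entropy bonus is replaced or augmented by a divergence penalty; when the policy matches the prior exactly, the penalty vanishes and the objective reduces to the plain return.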
This paper combines skill-based RL with meta-RL using training tasks. The skill-based method (SPiRL, Pertsch et al. 2020) forms a skill embedding that represents $K$-step state-action trajectories. An action policy provides an action distribution conditioned on the currently-active skill, and a (higher-level) skill policy is trained using RL with the skill embedding as its action space. Simultaneously a skill prior conditioned on the starting state of a given skill is trained. Once trained, the skill embedding, the skill-conditioned action policy, and the skill prior are held fixed. The off-policy meta-RL method (PEARL, Rakelly et al. 2019) forms a task embedding that represents small sets of skills active during a given task. (This is how I understand it; the paper does not explicitly state this. Rakelly et al. do not use skills; their task embedding represents small sets of state-action-state-reward sequences.) A skill policy provides a skill distribution conditioned on the currently-active task. Its training is regularized using the above skill prior. Once trained, the task embedding is held fixed. Up to here, the system is trained in a task-agnostic manner. To learn a target task, the system further trains the skill policy, conditioned on the target task, again regularized using the skill prior.
SPiRL uses these skills for learning new tasks efficiently by training a high-level skill policy π ( z|s ) that acts over the space of learned skills instead of primitive actions . The target task RL objective extends Soft Actor Critic ( SAC , Haarnoja et al . ( 2018 ) ) , a popular off-policy RL algorithm , by guiding the high-level policy with the learned skill prior : max π ∑ t E ( st , zt ) ∼ρπ [ r ( st , zt ) − αDKL ( π ( z|st ) , p ( z|st ) ) ] . ( 1 ) Here DKL denotes the Kullback-Leibler divergence between the policy and skill prior , and α is a weighting coefficient . Off-Policy Meta-RL . Rakelly et al . ( 2019 ) introduced an off-policy meta-RL algorithm called probabilistic embeddings for actor-critic RL ( PEARL ) that leverages a set of training tasks T to enable quick learning of new tasks . Specifically , PEARL leverages the meta-training tasks for learning a task encoder q ( e|c ) . This encoder takes in a small set of state-action-reward transitions c and produces a task embedding e. This embedding is used to condition the actor π ( a|s , z ) and critic Q ( s , a , e ) . In PEARL , actor , critic and task encoder are trained by jointly maximizing the obtained reward and the policy ’ s entropyH ( Haarnoja et al. , 2018 ) : max π ET ∼pT , e∼q ( ·|cT ) [ ∑ t E ( st , at ) ∼ρπ|e [ rT ( st , at ) + αH ( π ( a|st , e ) ) ] ] . ( 2 ) Additionally , the task embedding output of the task encoder is regularized towards a constant prior distribution p ( e ) .
The paper proposes a meta-RL method for efficiently learning long-horizon tasks with sparse rewards. To do so, it builds on previous work, mainly SPiRL, to learn a set of skills and a skill prior from an offline dataset. A high-level policy that generates actions in the learned latent skill space is then trained with meta-RL; its goal is to learn to compose the learned skills so as to solve new tasks efficiently. Given a small set of target-task trajectories, the policy can adapt quickly to solve the target task. Experimentally, the proposed method is shown to be more sample-efficient on two tasks: maze navigation and kitchen manipulation.
Skill-based Meta-Reinforcement Learning
While deep reinforcement learning methods have shown impressive results in robot learning, their sample inefficiency makes the learning of complex, long-horizon behaviors with real robot systems infeasible. To mitigate this issue, meta-reinforcement learning methods aim to enable fast learning on novel tasks by learning how to learn. Yet, their application has been limited to short-horizon tasks with dense rewards. To enable learning long-horizon behaviors, recent works have explored leveraging prior experience in the form of offline datasets without reward or task annotations. While these approaches yield improved sample efficiency, millions of interactions with environments are still required to solve complex tasks. In this work, we devise a method that enables meta-learning on long-horizon, sparse-reward tasks, allowing us to solve unseen target tasks with orders of magnitude fewer environment interactions. Our core idea is to leverage prior experience extracted from offline datasets during meta-learning. Specifically, we propose to (1) extract reusable skills and a skill prior from offline datasets, (2) meta-train a high-level policy that learns to efficiently compose learned skills into long-horizon behaviors, and (3) rapidly adapt the meta-trained policy to solve an unseen target task. Experimental results on continuous control tasks in navigation and manipulation demonstrate that the proposed method can efficiently solve novel long-horizon target tasks by combining the strengths of meta-learning and the usage of offline datasets, while prior approaches in RL, meta-RL, and multi-task RL require substantially more environment interactions to solve the tasks. 1 INTRODUCTION. In recent years, deep reinforcement learning methods have achieved impressive results in robot learning (Gu et al., 2017; Andrychowicz et al., 2020; Kalashnikov et al., 2021).
Yet, existing approaches are sample inefficient, thus rendering the learning of complex behaviors through trial-and-error infeasible, especially on real robot systems. In contrast, humans are capable of effectively learning a variety of complex skills in only a few trials. This can be greatly attributed to our ability to learn how to learn new tasks quickly by efficiently utilizing previously acquired skills. Can machines likewise learn how to learn by efficiently utilizing learned skills like humans? Meta-reinforcement learning (meta-RL) holds the promise of allowing RL agents to acquire novel tasks with improved efficiency by learning to learn from a distribution of tasks (Finn et al., 2017; Rakelly et al., 2019). In spite of recent advances in the field, most existing meta-RL algorithms are restricted to short-horizon, dense-reward tasks. To facilitate efficient learning on long-horizon, sparse-reward tasks, recent works aim to leverage experience from prior tasks in the form of offline datasets without additional reward and task annotations (Lynch et al., 2020; Pertsch et al., 2020; Chebotar et al., 2021). While these methods can solve complex tasks with substantially improved sample efficiency over methods learning from scratch, millions of interactions with environments are still required to acquire long-horizon skills. In this work, we aim to take a step towards combining the capabilities of both learning how to quickly learn new tasks and leveraging prior experience in the form of unannotated offline data (see Figure 1). Specifically, we aim to devise a method that enables meta-learning on complex, long-horizon tasks and can solve unseen target tasks with orders of magnitude fewer environment interactions than prior works. We propose to leverage the offline experience by extracting reusable skills – short-horizon behaviors that can be composed to solve unseen long-horizon tasks.
We employ a hierarchical meta-learning scheme in which we meta-train a high-level policy to learn how to quickly reuse the extracted skills. To efficiently explore the learned skill space during meta-training, the high-level policy is guided by a skill prior which is also acquired from the offline experience data. We evaluate our method and prior approaches in deep RL, skill-based RL, meta-RL, and multi-task RL on two challenging continuous control environments: maze navigation and kitchen manipulation, which require long-horizon control and provide only sparse rewards. Experimental results show that our method can efficiently solve unseen tasks by exploiting meta-learning tasks and offline datasets, while prior approaches require substantially more samples or fail to solve the tasks. In summary, the main contributions of this paper are threefold:
• To the best of our knowledge, this is the first work to combine meta-reinforcement learning algorithms with task-agnostic offline datasets that do not contain reward or task annotations.
• We propose a method that combines meta-learning with offline data by extracting learned skills and a skill prior, as well as meta-learning a hierarchical skill policy regularized by the skill prior.
• We empirically show that our method is significantly more efficient at learning long-horizon, sparse-reward tasks compared to prior methods in deep RL, skill-based RL, meta-RL, and multi-task RL.
2 RELATED WORK. Meta-Reinforcement Learning. Meta-RL approaches (Duan et al., 2016; Wang et al., 2017; Finn et al., 2017; Yu et al., 2018; Rothfuss et al., 2019; Gupta et al., 2018; Vuorio et al., 2018; Nagabandi et al., 2019; Clavera et al., 2019; 2018; Rakelly et al., 2019; Vuorio et al., 2019; Yang et al., 2019; Zintgraf et al., 2019; Humplik et al., 2019; Zintgraf et al., 2020; Liu et al.
, 2021) hold the promise of allowing learning agents to quickly adapt to novel tasks by learning to learn from a distribution of tasks. Despite the recent advances in the field, most existing meta-RL algorithms are limited to short-horizon, dense-reward tasks. In contrast, we aim to develop a method that can meta-learn to solve long-horizon tasks with sparse rewards by leveraging offline datasets. Offline datasets. Recently, many works have investigated the usage of offline datasets for agent training. In particular, the field of offline reinforcement learning (Levine et al., 2020; Siegel et al., 2020; Kumar et al., 2020; Yu et al., 2021) aims to devise methods that can perform RL fully offline from pre-collected data, without the need for environment interactions. However, these methods require target-task reward annotations on the offline data for every new task that should be learned. These reward annotations can be challenging to obtain, especially if the offline data is collected from a diverse set of prior tasks. In contrast, our method is able to leverage offline datasets without any reward annotations. Offline Meta-RL. Another recent line of research aims to meta-learn from static, pre-collected datasets including reward annotations (Mitchell et al., 2021; Pong et al., 2021; Dorfman et al., 2021). After meta-training with the offline datasets, these works aim to quickly adapt to a new task with only a small amount of data from that new task. In contrast to the aforementioned offline RL methods, these works aim to adapt to unseen tasks and assume access to only limited data from the new tasks. However, in addition to reward annotations, these approaches often require that the offline training data is split into separate datasets for each training task, further limiting scalability. Skill-based Learning.
An alternative approach for leveraging offline data that does not require reward or task annotations is the extraction of skills – reusable short-horizon behaviors. Methods for skill-based learning recombine these skills for learning unseen target tasks and converge substantially faster than methods that learn from scratch (Lee et al., 2018; Hausman et al., 2018; Sharma et al., 2020). When trained on diverse datasets, these approaches can extract a wide repertoire of skills and learn complex, long-horizon tasks (Merel et al., 2020; Lynch et al., 2020; Pertsch et al., 2020; Ajay et al., 2021; Chebotar et al., 2021; Pertsch et al., 2021). Yet, although they are more efficient than training from scratch, they still require a large number of environment interactions to learn a new task. Our method instead combines skills extracted from offline data with meta-learning, leading to significantly improved sample efficiency. 3 PROBLEM FORMULATION AND PRELIMINARIES. Our approach builds on prior work for meta-learning and learning from offline datasets and aims to combine the best of both worlds. In the following, we formalize our problem setup and briefly summarize relevant prior work. Problem Formulation. Following prior work on learning from large offline datasets (Lynch et al., 2020; Pertsch et al., 2020; 2021), we assume access to a dataset of state-action trajectories D = {s_t, a_t, ...} which is collected either across a wide variety of tasks or as “play data” with no particular task in mind. We thus refer to this dataset as task-agnostic. With a large number of data collection tasks, the dataset covers a wide variety of behaviors and can be used to accelerate learning on diverse tasks. Such data can be collected at scale, e.g. through autonomous exploration (Hausman et al., 2018; Sharma et al., 2020; Dasari et al., 2019), human teleoperation (Schaal et al., 2005; Gupta et al.
, 2019; Mandlekar et al., 2018; Lynch et al., 2020), or from previously trained agents (Fu et al., 2020; Gulcehre et al., 2020). We additionally assume access to a set of meta-training tasks T = {T_1, ..., T_N}, where each task is represented as a Markov decision process (MDP) defined by a tuple {S, A, P, r, ρ, γ} of states, actions, transition probability, reward, initial state distribution, and discount factor. Our goal is to leverage both the offline dataset D and the meta-training tasks T to accelerate the training of a policy π(a|s) on a target task T*, which is also represented as an MDP. Crucially, we do not assume that T* is part of the set of training tasks T, nor that D contains demonstrations for solving T*. Thus, we aim to design an algorithm that can leverage offline data and meta-training tasks for learning how to quickly compose known skills to solve an unseen target task. Next, we describe existing approaches that either leverage offline data or meta-training tasks to accelerate target-task learning. Then, we describe how our approach takes advantage of the best of both worlds. Skill-based RL. One successful approach for leveraging task-agnostic datasets to accelerate the learning of unseen tasks is through the transfer of reusable skills, i.e. short-horizon behaviors that can be composed to solve long-horizon tasks. Prior work in skill-based RL called Skill-Prior RL (SPiRL, Pertsch et al. (2020)) proposes an effective way to implement this idea. Specifically, SPiRL uses a task-agnostic dataset to learn two models: (1) a skill policy π(a|s, z) that decodes a latent skill representation z into a sequence of executable actions, and (2) a prior over latent skill variables p(z|s) which can be leveraged to guide exploration in skill space.
SPiRL uses these skills to learn new tasks efficiently by training a high-level skill policy π(z|s) that acts over the space of learned skills instead of primitive actions. The target-task RL objective extends Soft Actor-Critic (SAC, Haarnoja et al. (2018)), a popular off-policy RL algorithm, by guiding the high-level policy with the learned skill prior:

$$\max_\pi \sum_t \mathbb{E}_{(s_t, z_t)\sim\rho_\pi}\big[ r(s_t, z_t) - \alpha D_{KL}\big(\pi(z|s_t),\, p(z|s_t)\big) \big]. \quad (1)$$

Here $D_{KL}$ denotes the Kullback-Leibler divergence between the policy and the skill prior, and α is a weighting coefficient. Off-Policy Meta-RL. Rakelly et al. (2019) introduced an off-policy meta-RL algorithm called probabilistic embeddings for actor-critic RL (PEARL) that leverages a set of training tasks T to enable quick learning of new tasks. Specifically, PEARL leverages the meta-training tasks to learn a task encoder q(e|c). This encoder takes in a small set of state-action-reward transitions c and produces a task embedding e. This embedding is used to condition the actor π(a|s, e) and critic Q(s, a, e). In PEARL, actor, critic, and task encoder are trained by jointly maximizing the obtained reward and the policy's entropy $\mathcal{H}$ (Haarnoja et al., 2018):

$$\max_\pi \mathbb{E}_{\mathcal{T}\sim p_\mathcal{T},\, e\sim q(\cdot|c_\mathcal{T})}\Big[ \sum_t \mathbb{E}_{(s_t, a_t)\sim\rho_{\pi|e}}\big[ r_\mathcal{T}(s_t, a_t) + \alpha \mathcal{H}\big(\pi(a|s_t, e)\big) \big] \Big]. \quad (2)$$

Additionally, the task-embedding output of the task encoder is regularized towards a constant prior distribution p(e).
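The prior-regularized objective of Eq. (1) is straightforward to sketch in code. The following is an illustrative, simplified example, not the authors' implementation: the diagonal-Gaussian form assumed for π(z|s) and p(z|s), and all function names and toy values, are our own assumptions.

```python
import numpy as np

def diag_gauss_kl(mu_q, log_std_q, mu_p, log_std_p):
    """KL( N(mu_q, sig_q^2) || N(mu_p, sig_p^2) ) for diagonal Gaussians,
    summed over the skill dimensions."""
    var_q, var_p = np.exp(2 * log_std_q), np.exp(2 * log_std_p)
    return np.sum(log_std_p - log_std_q
                  + (var_q + (mu_q - mu_p) ** 2) / (2 * var_p) - 0.5)

def regularized_return(rewards, kls, alpha=0.1):
    """Objective of Eq. (1): per-step reward minus the alpha-weighted KL
    between the high-level policy pi(z|s) and the skill prior p(z|s)."""
    return float(np.sum(np.asarray(rewards) - alpha * np.asarray(kls)))

# toy check: identical distributions give zero KL, so only rewards remain
kl = diag_gauss_kl(np.zeros(4), np.zeros(4), np.zeros(4), np.zeros(4))
obj = regularized_return([1.0, 1.0], [kl, kl], alpha=0.1)  # -> 2.0
```

With α = 0 the objective reduces to the plain return; a larger α keeps the high-level policy close to the skill prior, which is what SPiRL uses to guide exploration in skill space.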
This work considers a new setting where one can leverage both 1) offline "play" data that contains no reward or task labels, and 2) meta-training tasks, in order to quickly learn new tasks at meta-test time. To address this setting, the work proposes to learn skills from 1) using the SPiRL algorithm, and then to learn a hierarchical policy on top of the learned skills over the meta-training tasks 2) using PEARL. Empirically, the combination of SPiRL and PEARL outperforms both SPiRL and PEARL on their own on a 2D maze navigation task and on a robot kitchen task.
Characterising the Area Under the Curve Loss Function Landscape
1 INTRODUCTION. The area under the receiver operating characteristic (ROC) curve, the AUC, is a commonly used measure of the accuracy and reliability of a neural network classifier. However, for mathematical reasons, the AUC cannot be used as the loss function to be minimised during neural network training (Menon & Elkan, 2011). Instead, other functions such as cross-entropy are commonly employed. The stepwise nature of the AUC function is the reason that it is non-differentiable and hence cannot simply be optimised. However, the AUC can be approximated using surrogate losses such as the sigmoid function. This approach can lead to a formulation equivalent to the soft-AUC described by Calders & Jaroszewicz (2007), which is referred to here as appAUC. We find it intuitive to optimise a function that is as close as possible to the one used to evaluate the model. Our main contributions in this paper are therefore:
• To understand the use of different approximations to the AUC as the loss function employed in training a neural network
• To explore the organisation of the appAUC landscape and compare it to a ‘standard’ cross-entropy landscape
• To provide a theoretical foundation for the differences between appAUC and cross-entropy landscapes
• To outline how a loss function landscape analysis can be useful for loss function selection
To better understand the advantages and disadvantages of the approximated AUC loss function, we study the functional space, commonly referred to as the loss function landscape (LFL), using tools from the theoretical study of energy landscapes in molecular and condensed matter systems (Wales, 2003). The usefulness of the energy landscape approach has previously been demonstrated in the context of neural network LFLs (Ballard et al., 2017). We will employ methods from this approach to gain insights into geometric features of the LFL, including the number of minima, their curvatures, and their connectivity.
By repeatedly surveying large parts of the LFL, we are, with high probability, able to find the true global minimum. Additionally, due to a Metropolis criterion in the global optimisation approach described below, we do not get stuck in a local minimum and instead explore the full LFL, hence learning more about the functional surface. Instead of a single minimum, we aim to find a large number of minima that, together with transition states, provide a faithful coarse-grained representation of the loss function landscape. We believe that this approach will yield valuable insights into the use of appAUC as a loss function in neural networks. Various interesting questions arise from the use of an appAUC loss function. Besides a comparison of properties between landscapes, and the effects of hyperparameter changes, we are especially interested in the differences between appAUC and CE landscapes. Both loss functions, for the same neural network architecture, address the learning problem of finding a mapping f from input data to class label. Does this common foundation imply that minima of the CE loss function are also minima of the appAUC function, or at least that they are very similar to each other? We will show below that this condition does not hold, and explain why. Furthermore, to the best of our knowledge, there has been no previous research into the functional properties of AUC surrogates. We believe that quantifying inherent, geometric properties of loss functions will provide a more fundamental understanding of the applicability of particular loss functions to distinct machine learning problems. 1.1 RELATED WORK. This contribution lies at the intersection of three different areas of research: the general understanding and study of loss functions, the appAUC loss function in particular, and the study of loss function landscapes to understand neural networks.
Optimising the AUC as a loss function has been considered before (Cortes & Mohri, 2004). It has been shown that test AUC is improved when a loss function is chosen that is closer to the true AUC function (Yan et al., 2003). Nonetheless, optimising loss functions that approximate the AUC is rarely considered. One reason for this situation may be that the computational complexity is formally O(N²), or specifically in the binary classification case employed here, O(N_P N_N), where N_P is the number of positive data points and N_N the number of negative ones. Other reasons include the non-convexity and perceived complexity of the appAUC landscape, and importantly the fact that the exact AUC has zero derivatives almost everywhere (Ghanbari & Scheinberg, 2018). The loss function is one of the most critical choices in the design of a neural network, because it is the element that underlies the entire learning procedure. At first glance, it may appear that all loss functions solve the same problem for a given neural network, which led Rosasco et al. (2004) to question the practical differences between alternative loss functions. Yet, it is widely accepted that different loss functions allow us to optimise for specific properties (Janocha & Czarnecki, 2017). By studying the appAUC LFL, we can quantitatively address these important questions. The topology of the loss function has been a topic of interest for over 20 years (Hochreiter & Schmidhuber, 1997). Understanding the loss function for a given neural network can help to establish a fundamental interpretation of the ‘black-box’, as such networks are often described (Li et al., 2017). However, a key problem with studying the LFL is the large associated computational cost. Unlike the standard machine learning approach, where only a single minimum is located, the LFL approach attempts to sample a representative set of minima, which may often exceed 10,000 solutions (Ballard et al., 2017).
Computing the true number of minima for a loss function would require enhanced sampling techniques (Wales, 2013; Martiniani et al., 2016). Recently, others have looked into the LFL of overparameterised neural networks (Cooper, 2018) and shown that various properties can be exploited to improve accuracy and, importantly, robustness of neural networks (Baldassi et al., 2020; Chaudhari et al., 2019). Insights from the LFL have also been used to understand initialisation methods and their relative success (Fort & Scherlis, 2019). However, to the best of our knowledge, there has been no research so far into interpreting individual loss functions from the underlying LFL. In this work, we aim to address this knowledge gap and show that studying the LFL can provide important insights into our understanding and design of neural networks and the associated loss functions. 2 METHODS. 2.1 NEURAL NETWORK. For this initial survey, we consider neural networks with a single hidden layer. We denote the set of data points as D = (X, c), containing N := |D| elements. For a given classification problem with C classes, a data point d ∈ D has input features x^d and a known output class c_d. We use a nonlinear tanh activation function and convert the output values y_i, i ∈ C, into softmax probabilities via

$$p_i(\mathbf{W}; \mathbf{x}^d) = \frac{\exp\!\big[y_i(\mathbf{W}; \mathbf{x}^d)\big]}{\sum_{j=1}^{C} \exp\!\big[y_j(\mathbf{W}; \mathbf{x}^d)\big]},$$

where W denotes the weight vector containing all weights of the neural network. As a reference to compare with our appAUC loss function, we use a cross-entropy (CE) loss function, defined as:

$$\mathrm{CE}(\mathbf{W}; X) = -\frac{1}{N}\sum_{d=1}^{N} \ln\big(p_{c_d}(\mathbf{W}; \mathbf{x}^d)\big) + \lambda \mathbf{W}^2. \quad (1)$$

The λW² in equation 1 represents an L2 regularisation term, which eliminates zero Hessian eigenvalues and counteracts overfitting. We fix λ = 10⁻⁵ for a fair comparison between loss functions. 2.2 AREA UNDER THE CURVE.
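As a minimal illustrative sketch (not the authors' code; the helper names and toy logits below are our own assumptions), the softmax probabilities and the regularised CE loss of equation 1 can be written as:

```python
import numpy as np

def softmax(y):
    """Softmax probabilities p_i from network outputs y_i (Section 2.1)."""
    e = np.exp(y - np.max(y, axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(Y, labels, W, lam=1e-5):
    """Eq. (1): mean negative log-probability of the true class c_d,
    plus the L2 regularisation term lam * |W|^2."""
    P = softmax(Y)
    N = Y.shape[0]
    nll = -np.mean(np.log(P[np.arange(N), labels]))
    return nll + lam * np.sum(W ** 2)

# toy check: logits strongly favouring the true class give a small loss
Y = np.array([[5.0, 0.0], [0.0, 5.0]])
loss = ce_loss(Y, np.array([0, 1]), W=np.zeros(3))
```

For a perfectly confident and correct classifier the negative log-likelihood term vanishes, leaving only the λW² penalty.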
To evaluate a model such as the one described above, it is standard practice to consider the receiver operating characteristic (ROC) and calculate the area under the curve (AUC) (Hastie et al., 2009). For a given classifier, the ROC is a plot of the true positive ratio, T(P), against the false positive ratio, F(P). These quantities are defined by

$$T(\mathbf{W}; X, P) = \frac{\sum_{d=1}^{N} \delta(c_d - 1)\, \Theta\big(p_1(\mathbf{W}; \mathbf{x}^d) - P\big)}{\sum_{d=1}^{N} \delta(c_d - 1)}, \quad (2)$$

$$F(\mathbf{W}; X, P) = \frac{\sum_{d=1}^{N} \big(1 - \delta(c_d - 1)\big)\, \Theta\big(p_1(\mathbf{W}; \mathbf{x}^d) - P\big)}{\sum_{d=1}^{N} \big(1 - \delta(c_d - 1)\big)}, \quad (3)$$

where δ(c_d − 1) is the Kronecker delta and Θ(p_1 − P) is the Heaviside step function, defined as

$$\delta(c_d - 1) = \begin{cases} 1 & \text{if } c_d = 1, \\ 0 & \text{if } c_d \neq 1, \end{cases} \qquad \Theta(p_1 - P) = \begin{cases} 1 & \text{if } p_1 \geq P, \\ 0 & \text{if } p_1 < P, \end{cases} \quad (4)$$

and P is a parameter which acts as a cutoff probability for the neural network to predict that a given data point belongs to class 1 (p_1). Note that the choice of class 1 as the positive reading is arbitrary, and for a multi-class system any class may be chosen. The ROC curve is traced out by T and F as P varies from 0 to 1. For a perfect classifier, the ROC would simply be a horizontal line at T = 1 (i.e. ∀ P ≠ 0, T(P) = 1), and the area under the curve would then be 1. The AUC is therefore a measure of how close the model is to a perfect classifier. Formally, the AUC as a function of network weights W, parameterised by the data X, is given by

$$\mathrm{AUC} = \int_0^1 T(\mathbf{W}; X, P)\, \mathrm{d}F(\mathbf{W}; X, P) = \frac{1}{N_P N_N} \sum_p \sum_n \Theta\big(p_1(\mathbf{W}; \mathbf{x}^p) - p_1(\mathbf{W}; \mathbf{x}^n)\big), \quad (5)$$

where p labels positive data points (class 1) and n labels negative data points (not in class 1). 2.3 APPROXIMATED AUC: APPAUC LOSS FUNCTION. Most optimisation routines benefit from analytical derivatives of the function to be optimised. The function defined in equation 1 has smooth analytical derivatives, but equation 5 does not, because of the discontinuous step function. However, in a similar way to other approaches (e.g.
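The pairwise form on the right-hand side of equation 5 can be evaluated directly, without tracing the ROC curve. A minimal sketch under our own naming and toy scores (not the paper's code):

```python
import numpy as np

def auc_pairwise(p_pos, p_neg):
    """Eq. (5): average of Theta(p1(x^p) - p1(x^n)) over all N_P * N_N
    (positive, negative) pairs; Theta(0) = 1, matching Eq. (4)."""
    diffs = p_pos[:, None] - p_neg[None, :]       # N_P x N_N difference matrix
    return float(np.mean(np.heaviside(diffs, 1.0)))

p_pos = np.array([0.9, 0.8, 0.4])   # class-1 probabilities of positive points
p_neg = np.array([0.7, 0.3])        # class-1 probabilities of negative points
auc = auc_pairwise(p_pos, p_neg)    # 5 of the 6 pairs are ordered correctly
```

The double sum makes the formal O(N_P N_N) cost mentioned in Section 1.1 explicit: every positive score is compared against every negative score.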
(Calders & Jaroszewicz, 2007)), one can replace the discontinuous Θ function with an approximate, smooth, analytical surrogate function, which can then be differentiated and optimised. We write

$$\mathrm{AUC}(\mathbf{W}; X) \approx A(\mathbf{W}; X) \equiv \frac{1}{N_P N_N} \sum_p \sum_n \frac{1}{1 + \exp\!\big(-\beta\,(p_1(\mathbf{W}; \mathbf{x}^p) - p_1(\mathbf{W}; \mathbf{x}^n))\big)}, \quad (6)$$

where the Heaviside step function has been replaced with a smooth sigmoid function:

$$\Theta(z) \rightarrow \sigma(z) \equiv \frac{1}{1 + \exp(-\beta z)}, \quad (7)$$

with a parameter β that is discussed in detail in the Appendix. An important consideration for surrogate AUC loss functions is that they should not change the optimal solution when replacing the step function. Such surrogates are referred to as AUC-consistent (Agarwal, 2014). Charoenphakdee et al. (2019) show that the sigmoid is an AUC-consistent surrogate. The appAUC loss function in equation 6 is now differentiable and can hence be used for optimisation; we minimise the negative of equation 6, so that a minimum (i.e. lower loss) corresponds to a higher AUC. For reference, we include the analytical first and second derivatives of equation 6 in the Appendix.
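A sketch of the smooth surrogate in equation 6, with the step replaced by the sigmoid of equation 7. This is illustrative only; the choice β = 50 and the toy scores are our own assumptions, not values from the paper:

```python
import numpy as np

def app_auc(p_pos, p_neg, beta=50.0):
    """Eq. (6): appAUC, with the Heaviside step replaced by the sigmoid
    sigma(z) = 1 / (1 + exp(-beta * z)) of Eq. (7); differentiable in W."""
    diffs = p_pos[:, None] - p_neg[None, :]
    return float(np.mean(1.0 / (1.0 + np.exp(-beta * diffs))))

p_pos = np.array([0.9, 0.8, 0.4])
p_neg = np.array([0.7, 0.3])
approx = app_auc(p_pos, p_neg)   # close to the exact pairwise AUC of 5/6
```

Minimising −A(W; X) then drives the classifier towards a higher AUC; as β grows, the sigmoid approaches the step function and A approaches the exact AUC, at the cost of gradients that vanish away from the decision boundary.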
This paper studies AUC maximization in classification tasks, which aims to directly optimize the AUC metric instead of minimizing traditional classification losses such as the cross-entropy loss. The main contribution of this paper is an empirical study of an existing AUC maximization method termed approximate AUC. Specifically, the paper compares the landscape of the approximate AUC surrogate loss with that of the cross-entropy loss. The empirical results give an explanation of why AUC maximization can achieve higher test AUC compared to the conventional method that directly minimizes the cross-entropy loss.
Characterising the Area Under the Curve Loss Function Landscape
1 INTRODUCTION . The area under the curve of the receiver operating characteristic curve ( AUC ) is a commonly used method to evaluate the accuracy and reliability of a neural network classifier . However , for mathematical reasons , the AUC can not be used as as the loss function to be minimised during neural network training ( Menon & Elkan , 2011 ) . Instead , other functions such as cross-entropy are commonly employed . The stepwise nature of the AUC function is the reason that it is nondifferentiable and hence can not simply be optimised . However , the AUC can be approximated using surrogate losses such as the sigmoid function . This approach can lead to a formulation equivalent to the soft-AUC described by Calders & Jaroszewicz ( 2007 ) , which is referred to here as appAUC . We find it intuitive to optimise a function that it as close as possible to the one used to evaluate the model . Our main contributions in this paper are therefore : • To understand the use of different approximation to the AUC as the loss function employed in training of a neural network • To explore the organisation of the appAUC landscape and compare it to a ‘ standard ’ cross- entropy landscape • A theoretical foundation of the differences between appAUC and cross-entropy landscapes • To outline how a loss function landscape analysis can be useful for loss function selection To better understand the advantages and disadvantages of the approximated AUC loss function , we study the functional space , commonly referred to as LFL , using tools from the theoretical study of energy landscapes in molecular and condensed matter systems ( Wales , 2003 ) . The usefulness of the energy landscape approach has previously been demonstrated in the context of neural network LFLs ( Ballard et al. , 2017 ) . We will employ methods from this approach to gain insights about geometric features of the LFL , including the number of minima , their curvatures , and their connectivity . 
By repeatedly surveying large parts of the LFL , we are , with high probability , able to find the true global minimum . Additionally , due to a Metropolis criterion in the global optimisation approach described below , we do not get stuck in a local minimum , and explore the full LFL , hence learning more about the functional surface . Instead of a single minimum , we aim to find a large number of minima that , together with transition states , provide a faithful coarse-grained representation of the loss function landscape . We believe that this approach will yield valuable insights into the use of appAUC as a loss function in neural networks . Various interesting questions arise from the use of an appAUC loss function . Besides a comparison of properties between landscapes , and the effects of hyperparameter changes , we are especially interested in the differences between appAUC and CE landscapes . Both loss functions , for the same neural network architecture , address the learning problem of finding a mapping f from input data to class label . Does this common foundation imply that minima of the CE loss function are also minima of the appAUC function , or are they at least very similar to each other ? We will show below that this condition does not hold , and explain why . Furthermore , to the best of our knowledge , there has been no previous research into the functional properties of AUC surrogates . We believe that quantifying inherent , geometric properties of loss functions will provide a more fundamental understanding of the applicability of particular loss functions to distinct machine learning problems . 1.1 RELATED WORK . This contribution lies at the intersection of three different areas of research : general understanding and study of loss functions , the appAUC loss function in particular , and the study of loss function landscapes to understand neural networks . 
Optimising the AUC as a loss function has been considered before (Cortes & Mohri, 2004), and it has been shown that test AUC improves when a loss function is chosen that is closer to the true AUC function (Yan et al., 2003). Nonetheless, optimising loss functions that approximate the AUC is rarely considered. One reason may be that the computational complexity is formally $O(N^2)$, or specifically in the binary classification case employed here, $O(N_P N_N)$, where $N_P$ is the number of positive data points and $N_N$ the number of negative ones. Other reasons include the non-convexity and perceived complexity of the appAUC landscape, and, importantly, the fact that the exact AUC has zero derivatives almost everywhere (Ghanbari & Scheinberg, 2018). The loss function is one of the most critical choices in the design of a neural network, because it is the element that underlies the entire learning procedure. At first glance, it may appear that all loss functions solve the same problem for a given neural network, which led Rosasco et al. (2004) to question the practical differences between alternative loss functions. Yet, it is widely accepted that different loss functions allow us to optimise for specific properties (Janocha & Czarnecki, 2017). By studying the appAUC LFL, we can quantitatively address these important questions. The topology of the loss function has been a topic of interest for over 20 years (Hochreiter & Schmidhuber, 1997). Understanding the loss function for a given neural network can help to establish a fundamental interpretation of the ‘black box’, as such networks are often described (Li et al., 2017). However, a key problem with studying the LFL is the large associated computational cost. Unlike the standard machine learning approach, where only a single minimum is located, the LFL approach attempts to sample a representative set of minima, which may often exceed 10,000 solutions (Ballard et al., 2017).
Computing the true number of minima for a loss function would require enhanced sampling techniques (Wales, 2013; Martiniani et al., 2016). Recently, others have looked into the LFL of overparameterised neural networks (Cooper, 2018) and shown that various properties can be exploited to improve accuracy and, importantly, robustness of neural networks (Baldassi et al., 2020; Chaudhari et al., 2019). Insights from the LFL have also been used to understand initialisation methods and their relative success (Fort & Scherlis, 2019). However, to the best of our knowledge, there has been no research so far into interpreting individual loss functions from the underlying LFL. In this work, we aim to address this knowledge gap, and show that studying the LFL can provide important insights into our understanding and design of neural networks and the associated loss functions. 2 METHODS . 2.1 NEURAL NETWORK . For this initial survey, we consider neural networks with a single hidden layer. We denote the set of data points as $D = (X, c)$, containing $N := |D|$ elements. For a given classification problem with $C$ classes, a data point $d \in D$ has input features $x^d$ and a known output class $c^d$. We use a nonlinear tanh activation function and convert output values $y_i$, $i \in C$, into softmax probabilities by $p_i(W; x^d) = \exp[y_i(W; x^d)] / \sum_{j=1}^{C} \exp[y_j(W; x^d)]$, where $W$ denotes the weight vector containing all weights of the neural network. As a reference to compare with our appAUC loss function, we use a cross-entropy (CE) loss function, defined as

$$\mathrm{CE}(W; X) = -\frac{1}{N} \sum_{d=1}^{N} \ln\left(p_{c^d}(W; x^d)\right) + \lambda W^2. \quad (1)$$

The $\lambda W^2$ term in equation 1 represents an L2 regularisation, which eliminates zero Hessian eigenvalues and counteracts overfitting. We fix $\lambda = 10^{-5}$ for a fair comparison between loss functions. 2.2 AREA UNDER THE CURVE .
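As a concrete illustration of the Section 2.1 setup, the following is a minimal NumPy sketch of the single-hidden-layer network with tanh activation, the softmax probabilities, and the regularised CE loss of equation 1. The separate weight matrices and bias terms are my own assumption (the paper folds all weights into a single vector $W$), and the function names are illustrative, not the authors' code.

```python
import numpy as np

def forward(W1, b1, W2, b2, X):
    """Single hidden layer with tanh activation and softmax outputs p_i."""
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    y = h @ W2 + b2                          # raw outputs y_i
    y = y - y.max(axis=1, keepdims=True)     # shift for numerical stability
    e = np.exp(y)
    return e / e.sum(axis=1, keepdims=True)  # softmax probabilities

def ce_loss(W1, b1, W2, b2, X, c, lam=1e-5):
    """Cross-entropy with L2 regularisation, as in equation 1."""
    p = forward(W1, b1, W2, b2, X)
    N = len(c)
    nll = -np.mean(np.log(p[np.arange(N), c]))
    reg = lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    return nll + reg
```

The L2 term penalises all weights equally, which is what removes the zero Hessian eigenvalues mentioned above.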
To evaluate a model such as the one described above, it is standard practice to consider the receiver operating characteristic (ROC) and calculate the area under the curve (AUC) (Hastie et al., 2009). For a given classifier, the ROC is a plot of the true positive ratio, $T(P)$, against the false positive ratio, $F(P)$. These quantities are defined by

$$T(W; X, P) = \frac{\sum_{d=1}^{N} \delta(c^d - 1)\,\Theta(p_1(W; x^d) - P)}{\sum_{d=1}^{N} \delta(c^d - 1)}, \quad (2)$$

$$F(W; X, P) = \frac{\sum_{d=1}^{N} \left(1 - \delta(c^d - 1)\right)\Theta(p_1(W; x^d) - P)}{\sum_{d=1}^{N} \left(1 - \delta(c^d - 1)\right)}, \quad (3)$$

where $\delta(c^d - 1)$ is the Dirac delta function and $\Theta(p_1 - P)$ is the Heaviside step function, defined as

$$\delta(c^d - 1) = \begin{cases} 1 & \text{if } c^d = 1, \\ 0 & \text{if } c^d \neq 1, \end{cases} \qquad \Theta(p_1 - P) = \begin{cases} 1 & \text{if } p_1 \geq P, \\ 0 & \text{if } p_1 < P, \end{cases} \quad (4)$$

and $P$ is a parameter that acts as a cutoff probability for the neural network to predict that a given data point belongs to class 1 ($p_1$). Note that the choice of class 1 as the positive reading is arbitrary, and for a multi-class system any class may be chosen. The ROC curve is traced out by $T$ and $F$ as $P$ varies from 0 to 1. For a perfect classifier, the ROC would simply be a horizontal line at $T = 1$ (i.e. $T(P) = 1$ for all $P \neq 0$), and the area under the curve would then be 1. The AUC is therefore a measure of how close the model is to a perfect classifier. Formally, the AUC as a function of network weights $W$, parameterised by the data $X$, is given by

$$\mathrm{AUC} = \int_0^1 T(W; X, P)\, \mathrm{d}F(W; X, P) = \frac{1}{N_P N_N} \sum_p \sum_n \Theta\left(p_1(W; x^p) - p_1(W; x^n)\right), \quad (5)$$

where $p$ labels positive data points (class 1) and $n$ labels negative data points (not in class 1). 2.3 APPROXIMATED AUC : APPAUC LOSS FUNCTION . Most optimisation routines benefit from analytical derivatives of the function to be optimised. The function defined in equation 1 has smooth analytical derivatives, but equation 5 does not, because of the discontinuous step function. However, in a similar way to other approaches (e.g.
(Calders & Jaroszewicz, 2007)), one can replace the discontinuous $\Theta$ function with an approximate, smooth, analytical surrogate function, which can then be differentiated and optimised. We write

$$\mathrm{AUC}(W; X) \approx A(W; X) \equiv \frac{1}{N_P N_N} \sum_p \sum_n \frac{1}{1 + \exp\left(-\beta\left(p_1(W; x^p) - p_1(W; x^n)\right)\right)}, \quad (6)$$

where the Heaviside step function has been replaced with a smooth sigmoid function:

$$\Theta(z) \rightarrow \sigma(z) \equiv \frac{1}{1 + \exp(-\beta z)}, \quad (7)$$

with a sharpness parameter $\beta$ that is discussed in detail in the Appendix. An important consideration for surrogate AUC loss functions is that replacing the step function should not change the optimal solution. Surrogates with this property are referred to as AUC-consistent (Agarwal, 2014), and Charoenphakdee et al. (2019) show that the sigmoid is an AUC-consistent surrogate. The appAUC loss function in equation 6 is now differentiable and can hence be used for optimisation; we minimise the negative of equation 6, so that a lower loss corresponds to a higher AUC. For reference, we have included the analytical first and second derivatives of equation 6 in the Appendix.
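To make equations 5–7 concrete, here is a small NumPy sketch (the function names and the default $\beta$ are my own, illustrative choices) of both the exact pairwise AUC and its appAUC surrogate; the $O(N_P N_N)$ pairwise cost mentioned in Section 1.1 is explicit in the difference matrix.

```python
import numpy as np

def exact_auc(p_pos, p_neg):
    """AUC via equation 5: the fraction of (positive, negative) pairs in
    which the positive example receives the higher class-1 probability.
    Ties count as 1 under the convention Theta(0) = 1."""
    diffs = p_pos[:, None] - p_neg[None, :]   # N_P x N_N pairwise matrix
    return float(np.mean(diffs >= 0.0))

def app_auc(p_pos, p_neg, beta=10.0):
    """Smooth surrogate A(W;X) of equation 6: the Heaviside step is
    replaced by a sigmoid of sharpness beta (equation 7)."""
    diffs = p_pos[:, None] - p_neg[None, :]
    return float(np.mean(1.0 / (1.0 + np.exp(-beta * diffs))))

def app_auc_loss(p_pos, p_neg, beta=10.0):
    """Negative appAUC, so that a lower loss means a higher AUC."""
    return -app_auc(p_pos, p_neg, beta)
```

As $\beta$ grows the sigmoid approaches the step function, so `app_auc` converges to `exact_auc` while remaining differentiable everywhere.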
The paper studies the AUC loss by investigating its approximations and by comparing it with the cross-entropy loss. The landscapes of optimization based on the approximate AUC and on the cross-entropy loss are also studied, and numerical experiments are conducted.
The authors conducted empirical studies to visualize and investigate the landscape of the AUC (area under the curve) loss. AUC is critical for a wide range of applications, such as recommendation and online advertising, so this is an important problem.
Understanding Intrinsic Robustness Using Label Uncertainty
1 INTRODUCTION . Since the initial reports of adversarial examples against deep neural networks (Szegedy et al., 2014; Goodfellow et al., 2015), many defensive mechanisms have been proposed aiming to enhance the robustness of machine learning classifiers. Most have failed, however, against stronger adaptive attacks (Athalye et al., 2018; Tramer et al., 2020). PGD-based adversarial training (Mądry et al., 2018) and its variants (Zhang et al., 2019; Carmon et al., 2019) are among the few heuristic defenses that have not been broken so far, but these methods still fail to produce satisfactorily robust classifiers, even for classification tasks on benchmark datasets like CIFAR-10. Motivated by the empirical hardness of adversarially-robust learning, a line of theoretical work (Gilmer et al., 2018; Fawzi et al., 2018; Mahloujifar et al., 2019a; Shafahi et al., 2019) has argued that adversarial examples are unavoidable. In particular, these works proved that as long as the input distribution is concentrated with respect to the perturbation metric, adversarially robust classifiers do not exist. Recently, Mahloujifar et al. (2019b) and Prescott et al. (2021) generalized these results by developing empirical methods for measuring the concentration of arbitrary input distributions to derive an intrinsic robustness limit. (Appendix A provides a more thorough discussion of related work.) We argue that the standard concentration of measure problem, which was studied in all of the aforementioned works, is not sufficient to capture a realistic intrinsic robustness limit for a classification problem. In particular, the standard concentration function is defined as an inherent property of the input metric probability space that does not take the underlying label information into account.
We argue that such label information is essential for any supervised learning problem , including adversarially robust classification , so must be incorporated into intrinsic robustness limits . Contributions . We identify the insufficiency of the standard concentration of measure problem and demonstrate why it fails to capture a realistic intrinsic robustness limit ( Section 3 ) . Then , we introduce the notion of label uncertainty ( Definition 4.1 ) , which characterizes the average uncertainty of label assignments for an input region . We then incorporate label uncertainty in the standard concentration measure as an initial step towards a more realistic characterization of intrinsic robustness ( Section 4 ) . Experiments on the CIFAR-10 and CIFAR-10H ( Peterson et al. , 2019 ) datasets demonstrate that error regions induced by state-of-the-art classification models all have high label uncertainty ( Section 6.1 ) , which validates the proposed label uncertainty constrained concentration problem . By adapting the standard concentration estimation method in Mahloujifar et al . ( 2019b ) , we propose an empirical estimator for the label uncertainty constrained concentration function . We then theoretically study the asymptotic behavior of the proposed estimator and provide a corresponding heuristic algorithm for typical perturbation metrics ( Section 5 ) . We demonstrate that our method is able to produce a more accurate characterization of intrinsic robustness limit for benchmark datasets such as CIFAR-10 ( Section 6.2 ) than was possible using prior methods that do not consider labels . 
We also provide empirical evidence showing that both the clean and robust accuracies of state-of-the-art robust classification models are largely affected by the label uncertainty of the tested examples, suggesting that adding an abstain option based on label uncertainty is a promising avenue for improving adversarial robustness of deployed machine learning systems (Section 6.3). Notation. We use lowercase boldface letters to denote vectors and use $[k]$ to denote $\{1, 2, \ldots, k\}$. We use $I_n$ to denote the $n \times n$ identity matrix. For any set $A$, $|A|$ denotes its cardinality, $\mathrm{pow}(A)$ is the collection of all its measurable subsets, and $\mathbb{1}_A(\cdot)$ is the indicator function of $A$. Consider a metric probability space $(X, \mu, \Delta)$, where $\Delta : X \times X \rightarrow \mathbb{R}_{\geq 0}$ is a distance metric on $X$. Define the empirical measure of $\mu$ with respect to a data set $S$ sampled from $\mu$ as $\hat{\mu}_S(A) = \sum_{x \in S} \mathbb{1}_A(x) / |S|$. Denote by $B_\epsilon(x, \Delta)$ the ball around $x$ with radius $\epsilon$ measured by $\Delta$. The $\epsilon$-expansion of $A$ is defined as $A_\epsilon(\Delta) = \{x \in X : \exists\, x' \in B_\epsilon(x, \Delta) \cap A\}$. When $\Delta$ is free of context, we simply write $B_\epsilon(x) = B_\epsilon(x, \Delta)$ and $A_\epsilon = A_\epsilon(\Delta)$. 2 PRELIMINARIES . Adversarial Risk. Adversarial risk captures the vulnerability of a classifier against adversarial perturbations. In particular, we adopt the following adversarial risk definition, which has been studied in several previous works, such as Gilmer et al. (2018); Bubeck et al. (2019); Mahloujifar et al. (2019a;b); Zhang et al. (2020b); Prescott et al. (2021). Definition 2.1 (Adversarial Risk). Let $(X, \mu, \Delta)$ be a metric probability space of instances and $Y$ be the set of possible class labels. Assume $c : X \rightarrow Y$ is a concept function that gives each instance a label. For any classifier $f : X \rightarrow Y$ and $\epsilon \geq 0$, the adversarial risk of $f$ is defined as

$$\mathrm{AdvRisk}_\epsilon(f, c) = \Pr_{x \sim \mu}\left[\exists\, x' \in B_\epsilon(x) \text{ s.t. } f(x') \neq c(x')\right].$$

The adversarial robustness of $f$ is defined as $\mathrm{AdvRob}_\epsilon(f, c) = 1 - \mathrm{AdvRisk}_\epsilon(f, c)$.
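The empirical measure and the $\epsilon$-expansion from the notation above can be made concrete with a small sketch; the $\ell_2$ metric, the restriction to finite sets, and the helper names are my own assumptions for illustration.

```python
import numpy as np

def empirical_measure(S, in_A):
    """Empirical measure of a set A w.r.t. a sample S: the fraction of
    sample points for which the indicator function in_A is true."""
    return float(np.mean([in_A(x) for x in S]))

def in_expansion(x, A_points, eps):
    """Membership of x in the eps-expansion A_eps of a finite set A
    under the l2 metric: true iff some point of A lies in the ball
    B_eps(x), i.e. within distance eps of x."""
    d = np.linalg.norm(np.asarray(A_points) - np.asarray(x), axis=1)
    return bool(d.min() <= eps)
```

Note that every point of $A$ is trivially in its own $\epsilon$-expansion, since it lies at distance 0 from itself.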
When $\epsilon = 0$, adversarial risk equals the standard risk; namely, $\mathrm{AdvRisk}_0(f, c) = \mathrm{Risk}(f, c) := \Pr_{x \sim \mu}[f(x) \neq c(x)]$ holds for any classifier $f$. Other definitions of adversarial risk have been proposed, such as the one used in Mądry et al. (2018). These definitions are equivalent to the one we use, as long as small perturbations preserve the labels assigned by $c(\cdot)$. Intrinsic Robustness. The definition of intrinsic robustness was first introduced by Mahloujifar et al. (2019b) to capture the maximum adversarial robustness with respect to some set of classifiers. Definition 2.2 (Intrinsic Robustness). Consider the input metric probability space $(X, \mu, \Delta)$ and the set of labels $Y$. Let $c : X \rightarrow Y$ be a concept function that gives a label to each input. For any set of classifiers $F \subseteq \{f : X \rightarrow Y\}$ and $\epsilon \geq 0$, the intrinsic robustness with respect to $F$ is defined as

$$\mathrm{AdvRob}_\epsilon(F, c) = 1 - \inf_{f \in F}\left\{\mathrm{AdvRisk}_\epsilon(f, c)\right\} = \sup_{f \in F}\left\{\mathrm{AdvRob}_\epsilon(f, c)\right\}.$$

According to this definition, no classifier in $F$ has adversarial robustness higher than $\mathrm{AdvRob}_\epsilon(F, c)$ for the considered task. Prior works, including Gilmer et al. (2018); Mahloujifar et al. (2019a;b); Zhang et al. (2020b), selected $F$ in Definition 2.2 as the set of imperfect classifiers $F_\alpha = \{f : \mathrm{Risk}(f, c) \geq \alpha\}$, where $\alpha \in (0, 1)$ is a small constant that reflects the best classification error rates achieved by state-of-the-art methods. Concentration of Measure. Concentration of measure captures a ‘closeness’ property of a metric probability space of instances. More formally, it is defined by the concentration function. Definition 2.3 (Concentration Function). Let $(X, \mu, \Delta)$ be a metric probability space. For any $\alpha \in (0, 1)$ and $\epsilon \geq 0$, the concentration function of $(X, \mu, \Delta)$ is defined as

$$h(\mu, \alpha, \epsilon) = \inf_{E \in \mathrm{pow}(X)}\left\{\mu(E_\epsilon) : \mu(E) \geq \alpha\right\}.$$
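Definition 2.3 can be instantiated empirically over a finite family of candidate subsets. The sketch below is a crude illustration under an assumed $\ell_2$ metric and a user-supplied candidate family; it is not the concentration estimation method of Mahloujifar et al. (2019b), and the function names are my own.

```python
import numpy as np

def expansion_mask(X, mask, eps):
    """Boolean mask of sample points within l2 distance eps of the
    subset of X selected by `mask` (its empirical eps-expansion).
    Assumes the mask selects at least one point."""
    A = X[mask]
    d = np.linalg.norm(X[:, None, :] - A[None, :, :], axis=2)
    return d.min(axis=1) <= eps

def empirical_concentration(X, masks, eps, alpha):
    """Empirical h(mu, alpha, eps) over a finite candidate family:
    the smallest empirical measure of an eps-expansion among subsets
    whose empirical measure is at least alpha."""
    vals = [expansion_mask(X, m, eps).mean()
            for m in masks if m.mean() >= alpha]
    return float(min(vals)) if vals else 1.0
```

The quality of the estimate clearly depends on how expressive the candidate family of subsets is, which is exactly what the search procedures in the literature are designed for.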
The standard notion of concentration function considers a special case of Definition 2.3 with $\alpha = 1/2$ (e.g., Talagrand (1995)). For some special metric probability spaces, one can derive a closed-form solution of the concentration function. The Gaussian Isoperimetric Inequality (Borell, 1975; Sudakov & Tsirelson, 1974) characterizes the concentration function for the spherical Gaussian distribution and $\ell_2$-norm distance metric, and was generalized by Prescott et al. (2021) to other $\ell_p$ norms. 3 STANDARD CONCENTRATION IS INSUFFICIENT . We first explain a fundamental connection between the concentration of measure and the intrinsic robustness with respect to imperfect classifiers shown in previous work, and then argue that standard concentration fails to capture a realistic intrinsic robustness limit because it ignores data labels. Connecting Intrinsic Robustness with Concentration of Measure. Let $(X, \mu, \Delta)$ be the considered input metric probability space, $Y$ be the set of possible labels, and $c : X \rightarrow Y$ be the concept function that gives each input a label. Given parameters $0 < \alpha < 1$ and $\epsilon \geq 0$, the standard concentration problem can be cast as the following optimization problem:

$$\underset{E \in \mathrm{pow}(X)}{\text{minimize}}\ \mu(E_\epsilon) \quad \text{subject to}\ \mu(E) \geq \alpha. \quad (3.1)$$

For any classifier $f$, let $E_f = \{x \in X : f(x) \neq c(x)\}$ be its induced error region with respect to $c(\cdot)$. By connecting the risk of $f$ with the measure of $E_f$ and the adversarial risk of $f$ with the measure of the $\epsilon$-expansion of $E_f$, Mahloujifar et al. (2019a) proved that the standard concentration problem (3.1) is equivalent to the following optimization problem regarding risk and adversarial risk:

$$\underset{f}{\text{minimize}}\ \mathrm{AdvRisk}_\epsilon(f, c) \quad \text{subject to}\ \mathrm{Risk}(f, c) \geq \alpha.$$

To be more specific, the following lemma characterizes the connection between the standard concentration function and the intrinsic robustness limit with respect to the set of imperfect classifiers. Lemma 3.1 (Mahloujifar et al.
(2019a)). Let $\alpha \in (0, 1)$ and $F_\alpha = \{f : \mathrm{Risk}(f, c) \geq \alpha\}$ be the set of imperfect classifiers. For any $\epsilon \geq 0$, it holds that $\mathrm{AdvRob}_\epsilon(F_\alpha, c) = 1 - h(\mu, \alpha, \epsilon)$. Lemma 3.1 suggests that the concentration function of the input metric probability space, $h(\mu, \alpha, \epsilon)$, can be translated into an adversarial robustness upper bound that applies to any classifier with risk at least $\alpha$. If this upper bound is shown to be small, then one can conclude that it is impossible to learn an adversarially robust classifier, as long as the learned classifier has risk at least $\alpha$. Concentration without Labels Mischaracterizes Intrinsic Robustness. Despite the appealing relationship between concentration of measure and intrinsic robustness, we argue that solving the standard concentration problem is not enough to capture a meaningful intrinsic limit for adversarially robust classification. The standard concentration of measure problem (3.1), which aims to find the optimal subset that has the smallest $\epsilon$-expansion with regard to the input metric probability space $(X, \mu, \Delta)$, does not involve the concept function $c(\cdot)$ that determines the underlying class label of each input. Therefore, no matter how we assign the labels to the inputs, the concentration function $h(\mu, \alpha, \epsilon)$ will remain the same for the considered metric probability space. In sharp contrast, learning an adversarially-robust classifier depends on the joint distribution of both the inputs and the labels. Moreover, when the standard concentration function is translated into an intrinsic limit of adversarial robustness, it is defined with respect to the set of imperfect classifiers $F_\alpha$ (see Lemma 3.1). The only restriction imposed by $F_\alpha$ is that the classifier (or equivalently, the measure of the corresponding error region) has risk at least $\alpha$. This fails to consider whether the classifier is learnable under the given classification problem.
Therefore, the intrinsic robustness limit implied by standard concentration, $\mathrm{AdvRob}_\epsilon(F_\alpha, c)$, could be much higher than $\mathrm{AdvRob}_\epsilon(F_{\mathrm{learn}}, c)$, where $F_{\mathrm{learn}}$ denotes the set of classifiers that can be produced by some supervised learning method. Hence, it is not surprising that Mahloujifar et al. (2019b) found that the adversarial robustness attained by state-of-the-art robust training methods for several image benchmarks is much lower than the intrinsic robustness limit implied by standard concentration of measure. In this work, to obtain a more meaningful intrinsic robustness limit, we restrict the search space of the standard concentration problem (3.1) by considering both the underlying class labels and the learnability of the given classification problem. Gaussian Mixture Model. We further illustrate the insufficiency of standard concentration under a simple Gaussian mixture model. Let $X \subseteq \mathbb{R}^n$ be the input space and $Y = \{-1, +1\}$ be the label space. Assume all the inputs are first generated according to a mixture of two Gaussians, $x \sim \mu = \frac{1}{2}\mathcal{N}(-\theta, \sigma^2 I_n) + \frac{1}{2}\mathcal{N}(\theta, \sigma^2 I_n)$, then labeled by a concept function $c(x) = \mathrm{sgn}(\theta^\top x)$, where $\theta \in \mathbb{R}^n$ and $\sigma \in \mathbb{R}$ are given parameters (this concept function is also the Bayes optimal classifier, which best separates the two Gaussian clusters). Theorem 3.2, proven in Appendix C.1, characterizes the optimal solution to the standard concentration problem under this assumed model. Theorem 3.2. Consider the above Gaussian mixture model with the $\ell_2$ perturbation metric. The optimal solution to the standard concentration problem (3.1) is a halfspace, either $H_- = \{x \in X : \theta^\top x + b \cdot \|\theta\|_2 \leq 0\}$ or $H_+ = \{x \in X : \theta^\top x - b \cdot \|\theta\|_2 \geq 0\}$, where $b$ is a parameter depending on $\alpha$ and $\theta$ such that $\mu(H_-) = \mu(H_+) = \alpha$. Remark 3.3.
Theorem 3.2 suggests that for the Gaussian mixture model, the optimal subset achieving the smallest ε-expansion under the ℓ2-norm distance metric is a halfspace H, which is far away from the boundary between the two Gaussian classes for small α. When translated into the intrinsic robustness problem, the corresponding optimal classifier f has to be constructed by treating H as the only error region, or more precisely f(x) = c(x) if x ∉ H; f(x) ≠ c(x) otherwise. This optimally constructed classifier f, however, does not match our intuition of what a predictive classifier would do under the considered Gaussian mixture model. In particular, since all the inputs in H and their neighbours share the same class label and are also far away from the boundary, examples that fall into H should be easily classified correctly using simple decision rules, such as k-nearest neighbours or maximum margin, whereas examples close to the boundary should be more likely to be misclassified by supervisedly learned classifiers. This confirms our claim that standard concentration is not sufficient for capturing a meaningful intrinsic robustness limit.
The paper proposes to include label uncertainty (LU) in the formulation of the concentration of measure problem, which has been identified as a cause of adversarial vulnerability. It argues that the current formulation of intrinsic robustness based on concentration of measure is insufficient because it excludes label information, and it demonstrates a more accurate estimation of intrinsic robustness. Practically, in the original concentration of measure problem, a search algorithm finds the set $E$ with the smallest $\epsilon$-expansion measure $\mu(E_\epsilon)$ subject to the concentration constraint $\mu(E) \geq \alpha$, searching over a selected collection of subsets. The new formulation adds a label uncertainty constraint to this search: the label uncertainty of the set $E$ must be larger than $\gamma$.
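The constrained search described in the summary can be sketched in a few lines. Everything below — the function names, the toy candidate "regions", and their measures — is an illustrative placeholder, not the paper's actual implementation:

```python
def constrained_concentration(candidates, mu, mu_expand, label_unc, alpha, gamma):
    """Sketch of the label-uncertainty-constrained concentration search:
    among candidate error regions E, keep those with mu(E) >= alpha AND
    average label uncertainty LU(E) >= gamma, then return the one whose
    eps-expansion has the smallest measure.  All callables are placeholders."""
    feasible = [E for E in candidates if mu(E) >= alpha and label_unc(E) >= gamma]
    if not feasible:
        return None
    return min(feasible, key=mu_expand)

# Tiny illustration with three hand-made candidate regions:
regions = {
    "A": dict(mu=0.05, mu_eps=0.30, lu=0.1),  # concentrated but low label uncertainty
    "B": dict(mu=0.05, mu_eps=0.45, lu=0.4),  # high label uncertainty, larger expansion
    "C": dict(mu=0.02, mu_eps=0.20, lu=0.5),  # infeasible: mu(E) < alpha
}
best = constrained_concentration(
    regions,
    mu=lambda E: regions[E]["mu"],
    mu_expand=lambda E: regions[E]["mu_eps"],
    label_unc=lambda E: regions[E]["lu"],
    alpha=0.05, gamma=0.3)
print(best)  # region "A" is excluded by the LU constraint, so "B" wins
```

Note how the LU constraint changes the answer: without it, region "A" would minimize the expansion measure, which is exactly the gap between the standard and the label-uncertainty-constrained problems.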
SP:6b5fdc52826ed97fc5f1a873807ce6cc9ddeac4d
Understanding Intrinsic Robustness Using Label Uncertainty
1 INTRODUCTION. Since the initial reports of adversarial examples against deep neural networks (Szegedy et al., 2014; Goodfellow et al., 2015), many defensive mechanisms have been proposed aiming to enhance the robustness of machine learning classifiers. Most have failed, however, against stronger adaptive attacks (Athalye et al., 2018; Tramer et al., 2020). PGD-based adversarial training (Mądry et al., 2018) and its variants (Zhang et al., 2019; Carmon et al., 2019) are among the few heuristic defenses that have not been broken so far, but these methods still fail to produce satisfactorily robust classifiers, even for classification tasks on benchmark datasets like CIFAR-10. Motivated by the empirical hardness of adversarially robust learning, a line of theoretical works (Gilmer et al., 2018; Fawzi et al., 2018; Mahloujifar et al., 2019a; Shafahi et al., 2019) has argued that adversarial examples are unavoidable. In particular, these works proved that as long as the input distributions are concentrated with respect to the perturbation metric, adversarially robust classifiers do not exist. Recently, Mahloujifar et al. (2019b) and Prescott et al. (2021) generalized these results by developing empirical methods for measuring the concentration of arbitrary input distributions to derive an intrinsic robustness limit. (Appendix A provides a more thorough discussion of related work.) We argue that the standard concentration of measure problem, which was studied in all of the aforementioned works, is not sufficient to capture a realistic intrinsic robustness limit for a classification problem. In particular, the standard concentration function is defined as an inherent property of the input metric probability space that does not take the underlying label information into account.
We argue that such label information is essential for any supervised learning problem, including adversarially robust classification, so it must be incorporated into intrinsic robustness limits. Contributions. We identify the insufficiency of the standard concentration of measure problem and demonstrate why it fails to capture a realistic intrinsic robustness limit (Section 3). Then, we introduce the notion of label uncertainty (Definition 4.1), which characterizes the average uncertainty of label assignments for an input region. We then incorporate label uncertainty into the standard concentration measure as an initial step towards a more realistic characterization of intrinsic robustness (Section 4). Experiments on the CIFAR-10 and CIFAR-10H (Peterson et al., 2019) datasets demonstrate that error regions induced by state-of-the-art classification models all have high label uncertainty (Section 6.1), which validates the proposed label-uncertainty-constrained concentration problem. By adapting the standard concentration estimation method of Mahloujifar et al. (2019b), we propose an empirical estimator for the label-uncertainty-constrained concentration function. We then theoretically study the asymptotic behavior of the proposed estimator and provide a corresponding heuristic algorithm for typical perturbation metrics (Section 5). We demonstrate that our method is able to produce a more accurate characterization of the intrinsic robustness limit for benchmark datasets such as CIFAR-10 (Section 6.2) than was possible using prior methods that do not consider labels.
We also provide empirical evidence showing that both the clean and robust accuracies of state-of-the-art robust classification models are largely affected by the label uncertainty of the tested examples, suggesting that adding an abstain option based on label uncertainty is a promising avenue for improving the adversarial robustness of deployed machine learning systems (Section 6.3). Notation. We use lowercase boldface letters to denote vectors and use [k] to denote {1, 2, ..., k}. We use In to denote the n × n identity matrix. For any set A, |A| denotes its cardinality, pow(A) is the collection of all its measurable subsets, and 1A(·) is the indicator function of A. Consider a metric probability space (X, µ, ∆), where ∆ : X × X → R≥0 is a distance metric on X. Define the empirical measure of µ with respect to a dataset S sampled from µ as µ̂S(A) = Σx∈S 1A(x)/|S|. Denote by Bε(x, ∆) the ball of radius ε around x, measured by ∆. The ε-expansion of A is defined as Aε(∆) = {x ∈ X : ∃ x′ ∈ Bε(x, ∆) ∩ A}. When ∆ is free of context, we simply write Bε(x) = Bε(x, ∆) and Aε = Aε(∆). 2 PRELIMINARIES. Adversarial Risk. Adversarial risk captures the vulnerability of a classifier against adversarial perturbations. In particular, we adopt the following adversarial risk definition, which has been studied in several previous works, such as Gilmer et al. (2018); Bubeck et al. (2019); Mahloujifar et al. (2019a;b); Zhang et al. (2020b); Prescott et al. (2021). Definition 2.1 (Adversarial Risk). Let (X, µ, ∆) be a metric probability space of instances and Y be the set of possible class labels. Assume c : X → Y is a concept function that gives each instance a label. For any classifier f : X → Y and ε ≥ 0, the adversarial risk of f is defined as: AdvRiskε(f, c) = Pr x∼µ [∃ x′ ∈ Bε(x) s.t. f(x′) ≠ c(x′)]. The adversarial robustness of f is defined as: AdvRobε(f, c) = 1 − AdvRiskε(f, c).
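To make the ε-expansion and the empirical measure µ̂S concrete, here is a minimal stdlib-only sketch that estimates the measure of Aε over a 1-D sample. All names are ours, chosen for illustration:

```python
def eps_expansion_measure(S, A_indicator, eps, dist):
    """Empirical measure of the eps-expansion of a set A over a sample S:
    the fraction of points x in S for which some x' in S lies within
    distance eps of x and belongs to A.  (The expansion is restricted to
    the sample, in the spirit of the empirical estimator mu_hat_S.)"""
    count = 0
    for x in S:
        if any(A_indicator(xp) and dist(x, xp) <= eps for xp in S):
            count += 1
    return count / len(S)

# 1-D toy example: S = 101 evenly spaced points on [0, 1], A = [0, 0.2]
S = [i / 100 for i in range(101)]
in_A = lambda x: x <= 0.2
d = lambda a, b: abs(a - b)

print(eps_expansion_measure(S, in_A, 0.0, d))  # measure of A itself
print(eps_expansion_measure(S, in_A, 0.1, d))  # the expansion grows with eps
```

At ε = 0 the expansion of A is A itself, so the two printed values show exactly how much extra measure the ε-ball neighbourhood picks up.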
When ε = 0, adversarial risk equals the standard risk. Namely, AdvRisk0(f, c) = Risk(f, c) := Pr x∼µ [f(x) ≠ c(x)] holds for any classifier f. Other definitions of adversarial risk have been proposed, such as the one used in Mądry et al. (2018). These definitions are equivalent to the one we use, as long as small perturbations preserve the labels assigned by c(·). Intrinsic Robustness. The definition of intrinsic robustness was first introduced by Mahloujifar et al. (2019b) to capture the maximum adversarial robustness with respect to some set of classifiers: Definition 2.2 (Intrinsic Robustness). Consider the input metric probability space (X, µ, ∆) and the set of labels Y. Let c : X → Y be a concept function that gives a label to each input. For any set of classifiers F ⊆ {f : X → Y} and ε ≥ 0, the intrinsic robustness with respect to F is defined as: AdvRobε(F, c) = 1 − inf f∈F {AdvRiskε(f, c)} = sup f∈F {AdvRobε(f, c)}. According to the definition of intrinsic robustness, no classifier in F attains adversarial robustness higher than AdvRobε(F, c) for the considered task. Prior works, including Gilmer et al. (2018); Mahloujifar et al. (2019a;b); Zhang et al. (2020b), selected F in Definition 2.2 as the set of imperfect classifiers Fα = {f : Risk(f, c) ≥ α}, where α ∈ (0, 1) is set as a small constant that reflects the best classification error rates achieved by state-of-the-art methods. Concentration of Measure. Concentration of measure captures a ‘closeness’ property of a metric probability space of instances. More formally, it is defined by the concentration function: Definition 2.3 (Concentration Function). Let (X, µ, ∆) be a metric probability space. For any α ∈ (0, 1) and ε ≥ 0, the concentration function of (X, µ, ∆) is defined as: h(µ, α, ε) = inf E∈pow(X) {µ(Eε) : µ(E) ≥ α}.
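On a finite metric probability space the infimum in Definition 2.3 becomes a minimum over subsets, so h(µ, α, ε) can be computed by brute force. The helper below does exactly that; it is exponential in the number of points and purely illustrative, with all names ours:

```python
from itertools import combinations

def concentration(points, prob, alpha, eps, dist):
    """Brute-force h(mu, alpha, eps) on a finite metric probability space:
    the minimum of mu(E_eps) over all subsets E with mu(E) >= alpha."""
    def measure(S):
        return sum(prob[p] for p in S)
    def expansion(E):
        # eps-expansion: every point within distance eps of some point of E
        return {x for x in points if any(dist(x, y) <= eps for y in E)}
    best = 1.0
    for r in range(1, len(points) + 1):
        for E in combinations(points, r):
            if measure(E) >= alpha:
                best = min(best, measure(expansion(E)))
    return best

# Toy space: 5 equally likely points on a line
pts = [0, 1, 2, 3, 4]
mu = {p: 0.2 for p in pts}
h = concentration(pts, mu, alpha=0.2, eps=1.0, dist=lambda a, b: abs(a - b))
print(h)  # an endpoint singleton expands the least
```

On this toy space the minimizer is a singleton at an endpoint (its 1-expansion covers only one extra point), which already previews the Gaussian result below: optimal sets for the unconstrained problem sit at the extremes of the space.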
The standard notion of concentration function considers a special case of Definition 2.3 with α = 1/2 (e.g., Talagrand (1995)). For some special metric probability spaces, one can derive a closed-form expression for the concentration function. The Gaussian Isoperimetric Inequality (Borell, 1975; Sudakov & Tsirelson, 1974) characterizes the concentration function for the spherical Gaussian distribution and ℓ2-norm distance metric, and was generalized by Prescott et al. (2021) to other ℓp norms. 3 STANDARD CONCENTRATION IS INSUFFICIENT. We first explain a fundamental connection between the concentration of measure and the intrinsic robustness with respect to imperfect classifiers shown in previous work, and then argue that standard concentration fails to capture a realistic intrinsic robustness limit because it ignores data labels. Connecting Intrinsic Robustness with Concentration of Measure. Let (X, µ, ∆) be the considered input metric probability space, Y be the set of possible labels, and c : X → Y be the concept function that gives each input a label. Given parameters 0 < α < 1 and ε ≥ 0, the standard concentration problem can be cast into an optimization problem as follows: minimize E∈pow(X) µ(Eε) subject to µ(E) ≥ α. (3.1) For any classifier f, let Ef = {x ∈ X : f(x) ≠ c(x)} be its induced error region with respect to c(·). By connecting the risk of f with the measure of Ef and the adversarial risk of f with the measure of the ε-expansion of Ef, Mahloujifar et al. (2019a) proved that the standard concentration problem (3.1) is equivalent to the following optimization problem regarding risk and adversarial risk: minimize f AdvRiskε(f, c) subject to Risk(f, c) ≥ α. To be more specific, the following lemma characterizes the connection between the standard concentration function and the intrinsic robustness limit with respect to the set of imperfect classifiers: Lemma 3.1 (Mahloujifar et al.
(2019a)). Let α ∈ (0, 1) and Fα = {f : Risk(f, c) ≥ α} be the set of imperfect classifiers. For any ε ≥ 0, it holds that AdvRobε(Fα, c) = 1 − h(µ, α, ε). Lemma 3.1 suggests that the concentration function of the input metric probability space h(µ, α, ε) can be translated into an adversarial robustness upper bound that applies to any classifier with risk at least α. If this upper bound is shown to be small, then one can conclude that it is impossible to learn an adversarially robust classifier, as long as the learned classifier has risk at least α. Concentration without Labels Mischaracterizes Intrinsic Robustness. Despite the appealing relationship between concentration of measure and intrinsic robustness, we argue that solving the standard concentration problem is not enough to capture a meaningful intrinsic limit for adversarially robust classification. The standard concentration of measure problem (3.1), which aims to find the optimal subset that has the smallest ε-expansion with regard to the input metric probability space (X, µ, ∆), does not involve the concept function c(·) that determines the underlying class label of each input. Therefore, no matter how we assign the labels to the inputs, the concentration function h(µ, α, ε) will remain the same for the considered metric probability space. In sharp contrast, learning an adversarially robust classifier depends on the joint distribution of both the inputs and the labels. Moreover, when the standard concentration function is translated into an intrinsic limit of adversarial robustness, it is defined with respect to the set of imperfect classifiers Fα (see Lemma 3.1). The only restriction imposed by Fα is that the classifier (or, equivalently, the measure of the corresponding error region) has risk at least α. This fails to consider whether the classifier is learnable under the given classification problem.
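For a concrete instance of Lemma 3.1, the Gaussian Isoperimetric Inequality mentioned earlier gives a closed form for the standard Gaussian with the ℓ2 metric: h(µ, α, ε) = Φ(Φ⁻¹(α) + ε), attained by a halfspace, so AdvRobε(Fα, c) = 1 − Φ(Φ⁻¹(α) + ε). The stdlib-only sketch below checks this against a Monte Carlo estimate of a halfspace's ε-expansion; the bisection-based Φ⁻¹ is our own helper, included only for self-containedness:

```python
import math
import random

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF via bisection (illustrative helper)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if Phi(mid) < p else (lo, mid)
    return (lo + hi) / 2.0

alpha, eps = 0.01, 0.5
# Isoperimetric prediction: the eps-expansion of the optimal set of
# measure alpha (a halfspace) has measure Phi(Phi^-1(alpha) + eps).
predicted = Phi(Phi_inv(alpha) + eps)

# Monte Carlo check in 1-D: E = {x : x <= b} with Phi(b) = alpha; its
# l2 eps-expansion is {x : x <= b + eps}.
random.seed(0)
b = Phi_inv(alpha)
n = 200_000
hits = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) <= b + eps)
print(predicted, hits / n)  # the two estimates agree closely
```

Note how fast the bound tightens: even with α = 1%, an ℓ2 budget of ε = 0.5 already forces every α-risk classifier to have adversarial risk above 3%.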
Therefore, the intrinsic robustness limit implied by standard concentration, AdvRobε(Fα, c), could be much higher than AdvRobε(Flearn, c), where Flearn denotes the set of classifiers that can be produced by some supervised learning method. Hence, it is not surprising that Mahloujifar et al. (2019b) found that the adversarial robustness attained by state-of-the-art robust training methods for several image benchmarks is much lower than the intrinsic robustness limit implied by standard concentration of measure. In this work, to obtain a more meaningful intrinsic robustness limit, we restrict the search space of the standard concentration problem (3.1) by considering both the underlying class labels and the learnability of the given classification problem. Gaussian Mixture Model. We further illustrate the insufficiency of standard concentration under a simple Gaussian mixture model. Let X ⊆ R^n be the input space and Y = {−1, +1} be the label space. Assume all the inputs are first generated according to a mixture of two Gaussians: x ∼ µ = (1/2)N(−θ, σ²In) + (1/2)N(θ, σ²In), then labeled by the concept function c(x) = sgn(θ⊤x), where θ ∈ R^n and σ ∈ R are given parameters (this concept function is also the Bayes optimal classifier, which best separates the two Gaussian clusters). Theorem 3.2, proven in Appendix C.1, characterizes the optimal solution to the standard concentration problem under this assumed model. Theorem 3.2. Consider the above Gaussian mixture model with the ℓ2 perturbation metric. The optimal solution to the standard concentration problem (3.1) is a halfspace, either H− = {x ∈ X : θ⊤x + b·‖θ‖2 ≤ 0} or H+ = {x ∈ X : θ⊤x − b·‖θ‖2 ≥ 0}, where b is a parameter depending on α and θ such that µ(H−) = µ(H+) = α. Remark 3.3.
Theorem 3.2 suggests that for the Gaussian mixture model, the optimal subset achieving the smallest ε-expansion under the ℓ2-norm distance metric is a halfspace H, which is far away from the boundary between the two Gaussian classes for small α. When translated into the intrinsic robustness problem, the corresponding optimal classifier f has to be constructed by treating H as the only error region, or more precisely f(x) = c(x) if x ∉ H; f(x) ≠ c(x) otherwise. This optimally constructed classifier f, however, does not match our intuition of what a predictive classifier would do under the considered Gaussian mixture model. In particular, since all the inputs in H and their neighbours share the same class label and are also far away from the boundary, examples that fall into H should be easily classified correctly using simple decision rules, such as k-nearest neighbours or maximum margin, whereas examples close to the boundary should be more likely to be misclassified by supervisedly learned classifiers. This confirms our claim that standard concentration is not sufficient for capturing a meaningful intrinsic robustness limit.
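A one-dimensional instance (n = 1, θ = 1) makes Theorem 3.2 and Remark 3.3 tangible: the optimal set is a far-off halfspace whose ε-expansion is just a further shift, yet a trivially simple learned rule classifies its deep interior correctly. The choice b = 2 (which makes α ≈ 0.08) and all names below are ours, for illustration only:

```python
import random

random.seed(1)
theta, sigma, b, eps = 1.0, 1.0, 2.0, 0.5

def sample():
    """x ~ (1/2)N(-theta, sigma^2) + (1/2)N(theta, sigma^2), labeled sgn(theta*x)."""
    center = theta if random.random() < 0.5 else -theta
    x = random.gauss(center, sigma)
    return x, (1 if x >= 0 else -1)

xs = [sample() for _ in range(200_000)]

# Theorem 3.2 in one dimension: H- = {x : x <= -b}; its l2 eps-expansion
# is simply the shifted halfspace {x : x <= -b + eps}.
alpha = sum(1 for x, _ in xs if x <= -b) / len(xs)
alpha_eps = sum(1 for x, _ in xs if x <= -b + eps) / len(xs)

# Remark 3.3 intuition: a nearest-centroid rule (a stand-in for the
# "simple decision rules" of the remark) classifies the deep interior of
# H- correctly, so H- is an implausible error region for a learned model.
c_pos = sum(x for x, y in xs if y == 1) / sum(1 for _, y in xs if y == 1)
c_neg = sum(x for x, y in xs if y == -1) / sum(1 for _, y in xs if y == -1)
predict = lambda z: 1 if abs(z - c_pos) < abs(z - c_neg) else -1

print(alpha, alpha_eps)                    # the expansion grows the measure
print([predict(z) for z in (-3.0, -4.0)])  # deep H- points get label -1
```

The halfspace's measure grows from about 0.08 to about 0.16 under the 0.5-expansion, while every deep H− point is classified correctly by the simple rule — exactly the mismatch the remark points out.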
The paper improves the previous results on intrinsic robustness (i.e., an upper bound on the adversarial robustness over a set of classifiers) based on concentration of the data distribution, by incorporating a constraint on the label uncertainty of the error regions. Specifically, the paper tightens the upper bound on the adversarial robustness given a family of models with error rates $\ge \alpha$ by additionally requiring that the average label uncertainty be $\ge \gamma$. This requires label-uncertainty information for each data sample, so the paper leverages CIFAR-10H (CIFAR-10 with human-annotated uncertainty) in its experiments. Experimental results on CIFAR-10/CIFAR-10H show that the proposed upper bound is practically computable and improves the previous upper bound given the knowledge of CIFAR-10H.
This paper focuses on adversarial learning, which is one of the hottest topics in machine/deep learning. The authors are concerned that the standard concentration of measure problem in prior works cannot capture realistic intrinsic robustness well. In this paper, the information of class labels is therefore introduced into intrinsic robustness limits. Both theoretical analysis and experimental results are provided.
SP:6b5fdc52826ed97fc5f1a873807ce6cc9ddeac4d
GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
1 INTRODUCTION. Although deep learning models have achieved remarkable success in various applications (LeCun et al., 2015), they are vulnerable to adversarial examples (Biggio et al., 2013; Szegedy et al., 2013; Goodfellow et al., 2014) and semantic transformations (Hendrycks & Dietterich, 2019). This vulnerability can limit their application to many important tasks. For example, an autonomous driving system can be misled even by a small adversarial patch on a road marking (Jing et al., 2021). Compared with maliciously crafted adversarial examples, semantic transformations, such as rotation, translation, blur, and bad weather, are more practical in real-world scenarios. Such transformations do not damage the semantic features of images and can be easily recognized by humans, yet they still degrade the performance of deep learning models. Therefore, it is imperative to improve model robustness against semantic transformations. To develop more reliable machine learning systems, many efforts have been made to design defense techniques against adversarial attacks or semantic transformations. Existing defense methods can be categorized into empirical defenses and certified defenses. Adversarial training (AT) (Madry et al., 2017; Zhang et al., 2019a) is one of the most effective empirical defenses against ℓp-norm bounded adversarial examples. Moreover, methods based on data augmentation (Hendrycks et al., 2019; Wang et al., 2019; Calian et al., 2021) have been proposed to empirically improve performance under semantic transformations. However, the performance of empirical defenses is difficult to fully justify, and these defenses can be broken by new adaptive attacks (Athalye et al., 2018; Tramer et al., 2020).
In contrast, certified defenses aim to provide a certified region in which the model is theoretically safe under any attack or perturbation (Wong & Kolter, 2018; Cohen et al., 2019; Gowal et al., 2018; Zhang et al., 2019b). Along this line, developing certified defense methods is a crucial step towards reliable machine learning systems. Although certified defenses have achieved great success, most of them are limited to defending against ℓp-norm bounded attacks. However, the ℓp distance between an original image and its counterpart corrupted by a semantic transformation (e.g., translation, rotation) can be large even when the corruption is slight. Therefore, current methods are incapable of certifying robustness against such semantic perturbations. To solve this problem, several recent works (Fischer et al., 2020; Mohapatra et al., 2020; Li et al., 2021) attempt to extend certified defenses to several simple semantic corruptions, including translation, rotation, and Gaussian blur. However, these works do not scale to certifying robustness against complex and general semantic perturbations. First, deterministic certified defenses (Mohapatra et al., 2020) based on convex relaxation of the activation function require solving a complex optimization problem to compute the bound, which is computationally expensive. Second, probabilistic approaches based on randomized smoothing demand a handcrafted Lipschitz bound (Li et al., 2021), which is intractable for complicated semantic transformations. For example, many semantic transformations, such as glass blur and pixelation, have no closed-form expression or are black boxes that are hard to analyze theoretically, yet they are common in real-world scenarios. Therefore, it remains highly challenging to certify robustness against these complex and realistic semantic transformations.
To address the aforementioned challenges, we propose a generalized randomized smoothing framework (GSmooth). First, we provide a unified GSmooth framework for certifying general semantic transformations. Then we categorize the transformations into resolvable transformations (e.g., translation) and non-resolvable transformations (e.g., rotational blur), similarly to Li et al. (2021). As mentioned above, most non-resolvable transformations are complex, and existing methods cannot provide a certified radius for them. To handle this challenge, we propose to use an image-to-image translation neural network to approximate these transformations. Due to the strong capacity of neural networks, our method is flexible and scalable for modeling complex semantic transformations. By introducing an augmented noise in the layers of the surrogate model, we can theoretically provide the certified radius for the proxy neural network, which can then be used to certify the original transformations. Next, we provide theoretical analysis and an error bound for the approximation. Finally, we validate the effectiveness of our method on several publicly available datasets. Extensive experimental results demonstrate that our method is effective for certifying complex semantic transformations, including different types of blur and image quality corruptions.

2 RELATED WORK.

2.1 ATTACKS AND DEFENSES FOR SEMANTIC TRANSFORMATIONS. Unlike ℓp perturbations, which add small noise to every pixel of an image, semantic attacks or physical attacks are usually unrestricted. Brown et al. (2017); Song et al. (2018) use a small patch added to the image to mislead the classifier or the object detector. Engstrom et al. (2019; 2018); Xiao et al. (2018) construct adversarial examples using spatial transformations like rotation or translation.
Hendrycks & Dietterich (2019) show that a wide variety of semantic perturbations degrade the performance of many deep learning models. Many works (Cubuk et al., 2019; Hendrycks et al., 2019; 2020; Robey et al., 2020) propose diverse data augmentation techniques to enhance robustness under semantic perturbations. Calian et al. (2021) propose adversarial data augmentation, which can be viewed as adversarial training for defending against semantic perturbations. Beyond empirical defenses, several works (Mohapatra et al., 2020; Madry et al., 2017; Singh et al., 2019; Balunović et al., 2019) attempt to certify some simple geometric transformations. However, all of them are deterministic certification approaches, and their performance on realistic datasets is unsatisfactory.

2.2 RANDOMIZED SMOOTHING. Randomized smoothing is a certification method that originated from differential privacy (Lecuyer et al., 2019). Cohen et al. (2019) then improve the certified bound and apply it to large-scale deep neural networks and datasets. Yang et al. (2020) exhaustively analyze the robust radius using different noise distributions and norms. Hayes (2020); Yang et al. (2020) point out that randomized smoothing suffers from the curse of dimensionality for the ℓ∞ norm. Salman et al. (2019) adopt adversarial training to train smoothed classifiers and obtain better robustness guarantees. Li et al. (2021); Fischer et al. (2020) extend randomized smoothing to certify some simple semantic transformations, e.g., image translation and rotation. This shows that randomized smoothing can be generalized to certify more diverse attacks or corruptions. However, these methods are limited to simple semantic transformations whose mathematical properties are easy to analyze.

3 PROPOSED METHOD. In this section, we present the framework and theoretical analyses of our Generalized Randomized Smoothing (GSmooth).
We first introduce the basic notations. Then we divide the semantic transformations into resolvable and non-resolvable transformations, similarly to Li et al. (2021). Next, we introduce the details of GSmooth for these two types of semantic transformations, respectively. Finally, we show the theoretical insight and proof sketch of our main results.

3.1 NOTATIONS. Given an input x ∈ R^n and the label set Y = {1, 2, …, p}, we denote the classifier as f : R^n → [0, 1]^p, which outputs predicted probabilities over all p classes. The prediction of f is arg max_{i∈Y} f(x)_i, where f(·)_i denotes the i-th element of f(·). Let τ(θ, x) : R^m × R^n → R^n be a semantic transformation of the raw input x with parameter θ ∈ R^m. We define the smoothed classifier as

G(x) = E_{θ∼g}[f(τ(θ, x))],  (1)

which is the average prediction over samples drawn from a smoothing distribution g(θ) = exp(−ψ(θ)), where ψ(θ) is a smooth function from R^m → R. Let u be any vector with unit norm ‖u‖ = 1 and define the random variable γ_u = ⟨u, ∇ψ(δ)⟩, where δ ∼ g and ∇ is the gradient operator. The complementary CDF is ϕ_u(c) = P[γ_u > c] and the inverse complementary CDF is ϕ_u^{−1}(p) = inf{c | P(γ_u > c) ≤ p}. Following Yang et al. (2020), we define a function Φ as

Φ(p) = max_{‖u‖=1} E[γ_u · I{γ_u > ϕ_u^{−1}(p)}],  (2)

which will be used to express the certified radius. Let y_A = arg max_{i∈Y} G(x)_i be the label predicted by the smoothed classifier G(x), and let y_B = arg max_{i∈Y\y_A} G(x)_i be the runner-up class. When there is no ambiguity, we use G(x)_A to denote the probability of the top class G(x)_{y_A}, and likewise for G(x)_B.

3.2 CERTIFIED BOUND FOR RESOLVABLE SEMANTIC TRANSFORMATIONS.
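In practice, the smoothed classifier of Eq. (1) is estimated by Monte Carlo sampling. A minimal sketch follows, in which a toy brightness-shift transformation and a hypothetical threshold classifier stand in for the paper's actual models (all names and parameter values here are our own, for illustration):

```python
import random

def smoothed_classifier(f, tau, sample_g, x, n_samples=2000, n_classes=2):
    """Monte Carlo estimate of G(x) = E_{theta ~ g}[f(tau(theta, x))]  (Eq. 1)."""
    avg = [0.0] * n_classes
    for _ in range(n_samples):
        theta = sample_g()                       # draw a parameter theta ~ g
        probs = f(tau(theta, x))                 # classify the transformed input
        avg = [a + p / n_samples for a, p in zip(avg, probs)]
    return avg

# Toy instantiation (illustrative only):
tau = lambda theta, x: [xi + theta for xi in x]  # semantic transform: brightness shift
f = lambda x: [1.0, 0.0] if sum(x) / len(x) < 0.5 else [0.0, 1.0]  # threshold "classifier"
sample_g = lambda: random.gauss(0.0, 0.1)        # smoothing distribution g = N(0, 0.1^2)

random.seed(0)
G = smoothed_classifier(f, tau, sample_g, x=[0.2, 0.2, 0.2])
```

The returned G is a probability vector; its argmax gives y_A, and its top two entries give the G(x)_A and G(x)_B that enter the certification bounds.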
We first discuss the class of resolvable transformations: the composition of two transformations with parameters θ, ξ ∈ P ⊂ R^m is still a transformation with a new parameter γ(θ, ξ) ∈ P, where γ(·,·) : P × P → P is a function of these parameters. For resolvable semantic transformations, we have the following theorem.

Theorem 1. Let f(x) be any classifier and G(x) the smoothed classifier defined in Eq. (1). If there exists a function M(·,·) : P × P → R such that the transformation τ(·,·) satisfies

∂γ(θ, ξ)/∂ξ = (∂γ(θ, ξ)/∂θ) · M(θ, ξ),

and there exist two constants p_A, p_B satisfying G(x)_A ≥ p_A ≥ p_B ≥ G(x)_B, then y_A = arg max_{i∈Y} G(τ(ξ, x))_i holds for any ‖ξ‖ ≤ R, where

R = (1 / (2M*)) ∫_{p_B}^{p_A} 1/Φ(p) dp,  (3)

and M* = max_{ξ,θ∈P} ‖M(ξ, θ)‖.

Remark. The setting of Theorem 1 is similar to that of Li et al. (2021) for resolvable semantic transformations, but we adopt a different presentation and proof, which is easier to extend to our GSmooth framework for general semantic transformations. Specifically, we show two examples of the theorem: additive transformations and commutable transformations. A transformation is additive if τ(θ, τ(ξ, x)) = τ(ξ + θ, x) for any θ, ξ ∈ P; it is commutable if τ(θ, τ(ξ, x)) = τ(ξ, τ(θ, x)) for any θ, ξ ∈ P. For these two types of transformations, it is straightforward to verify the property required by Theorem 1. As an example, applying Theorem 1 with the isotropic Gaussian distribution g(θ) = N(0, σ²I) gives the certified radius

R = (σ/2)(Ψ(p_A) − Ψ(p_B)),  (4)

where Ψ is the inverse CDF of the standard Gaussian distribution. These two kinds of transformations include image translation and Gaussian blur, which are basic semantic transformations widely discussed in previous works (Li et al., 2021; Fischer et al., 2020). Certifying these simple transformations only requires applying translation or Gaussian blur to the sample and computing the average classification score under the noise distribution.
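For the isotropic Gaussian case, Eq. (4) can be evaluated directly from the standard normal inverse CDF. A minimal sketch (the function name is ours; the source does not define an implementation):

```python
from statistics import NormalDist

def gaussian_certified_radius(p_A, p_B, sigma):
    """Certified radius R = (sigma / 2) * (Psi(p_A) - Psi(p_B)) from Eq. (4),
    where Psi is the inverse CDF of the standard Gaussian."""
    Psi = NormalDist().inv_cdf
    return 0.5 * sigma * (Psi(p_A) - Psi(p_B))
```

For instance, with p_A = 0.9, p_B = 0.1, and σ = 1, the radius equals Ψ(0.9) ≈ 1.28, since Ψ(0.1) = −Ψ(0.9); the radius shrinks to zero as p_A and p_B approach each other, i.e., as the smoothed classifier's margin vanishes.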
This paper proposes a more generalized form of certified robustness and attempts to provide new results on applying randomized smoothing to semantic transformations such as different types of blurs or distortions. The main idea is to use an image-to-image neural network to approximate semantic transformations, and then certify robustness based on bounds on that neural network. The authors provide empirical results on standard datasets like MNIST and CIFAR showing that their method can achieve improved results on some transformations compared to prior work.
SP:99d920ad5153ed336087c3a05b7d8c0b06dc9020
The authors propose a randomized-smoothing-based certification algorithm for general semantic transformations. The key idea of the work is to use a neural surrogate for the semantic transformations and to add noise in the latent space of the surrogate for randomized smoothing. The neural surrogate appears to convert the non-linear, multiplicative transformation into an additive operation in the latent space, thus allowing randomized smoothing methods to be applied. Their approach decomposes the resolvable and non-resolvable parts of the semantic transformation by lifting the data and transformation parameters into a larger augmented latent space defined by an image-to-image network. The resolvable parts of the transform can then be certified, similarly to previous works (Yang'2021, Salman'2019), using the Lipschitzness of the smoothed transforms. The non-resolvable part of the transform, however, is assumed to be Lipschitz in the latent space. The authors provide theoretical and empirical evidence for GSmooth and show improvement over contemporary methods.
SP:99d920ad5153ed336087c3a05b7d8c0b06dc9020
GSmooth: Certified Robustness against Semantic Transformations via Generalized Randomized Smoothing
1 INTRODUCTION . Although deep learning models have achieved remarkable success on various applications ( LeCun et al. , 2015 ) , they are vulnerable to adversarial examples ( Biggio et al. , 2013 ; Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ) and semantic transformations ( Hendrycks & Dietterich , 2019 ) . The vulnerability of deep learning models can limit their applications on many important tasks . For example , the autonomous driving system can be misled even by a small adversarial patch on the road mark ( Jing et al. , 2021 ) . Compared with the maliciously crafted adversarial examples , semantic transformations are more practical in real-world scenarios , such as rotation , translation , blur , bad weather , and so on . Such transformations do not damage the semantic features of images and can be easily recognized by humans , but they also degrade the performance of deep learning models . Therefore , it is imperative to improve model robustness against semantic transformations . To develop more reliable machine learning systems , many efforts have been made to design defense techniques against adversarial attacks or semantic transformations . The existing defense methods can be categorized into empirical defenses and certified defenses . Adversarial training ( AT ) ( Madry et al. , 2017 ; Zhang et al. , 2019a ) is one of the most effective empirical defenses against ` p-norm bounded adversarial examples . Moreover , methods based on data augmentation ( Hendrycks et al. , 2019 ; Wang et al. , 2019 ; Calian et al. , 2021 ) have been proposed to empirically improve the performance under semantic transformations . However , the performance of empirical defenses is difficult to be fully justified and these defenses can be further broken by new adaptive attacks ( Athalye et al. , 2018 ; Tramer et al. , 2020 ) . 
In contrast , the certified defenses aim to theoretically provide a certified region where the model is theoretically safe under any attack or perturbation ( Wong & Kolter , 2018 ; Cohen et al. , 2019 ; Gowal et al. , 2018 ; Zhang et al. , 2019b ) . Along this line , developing certified defense methods is a crucial step towards reliable machine learning systems . Although certified defenses have achieved great success , most of them are limited to defend against ` p-norm bounded attacks . However , the ` p distance between the original image and its corrupted counterpart by a semantic transformation ( e.g. , translation , rotation ) would be large even when the corruption is slight . Therefore , the current methods are incapable of certifying robustness against such semantic perturbations . To solve this problem , several recent works ( Fischer et al. , 2020 ; Mohapatra et al. , 2020 ; Li et al. , 2021 ) attempt to extend the certified defenses to several simple semantic corruptions , including translation , rotation , and Gaussian blur . However , these works are not scalable to certify robustness against complex and general semantic perturbations . First , deterministic certified defenses ( Mohapatra et al. , 2020 ) based on convex relaxation for the activation function require solving a complex optimization problem for computing bound which is computationally expensive . Second , probabilistic approaches based on randomized smoothing also demand a handcrafted Lipschitz bound ( Li et al. , 2021 ) , which is intractable for complicated semantic transformations . For example , many semantic transformations such as glass blur and pixelate do not have a closed form expression or they are black boxes and hard to be analyzed theoretically , but they are common in real-world scenarios . Therefore , it is still highly challenging to certify robustness against these complex and realistic semantic transformations . 
To address the aforementioned challenges , we propose a generalized randomized smoothing framework ( GSmooth ) . First , we provide a unified framework of GSmooth for certifying general semantic transformations . Then we categorize the transformations into resolvable transformations ( e.g. , translation ) and non-resolvable transformations ( e.g. , rotational blur ) similar with Li et al . ( 2021 ) . As mentioned above , most non-resolvable transformations are complex and the existing methods can not provide their certified radius . To handle the challenge , we propose to use an image-to-image translation neural network to approximate all these transformations . Due to the strong capacity of neural networks , our method is flexible and scalable to model these complex semantic transformations . By introducing an augmented noise in the layers of the surrogate model , we can theoretically provide the certified radius for the proxy neural networks which can be used for certifying the original transformations . Next , we provide theoretical analysis and error bounding for the approximation . Finally , we validate the effectiveness of our methods on several publicly available datasets . Extensive experimental results demonstrate that our methods are effective for certifying complex semantic transformations including different types of blur or image quality corruptions . 2 RELATED WORK . 2.1 ATTACKS AND DEFENSES FOR SEMANTIC TRANSFORMATIONS . Unlike ` p perturbation which adds small noise to every pixel of an image , semantic attacks or physical attacks are usually unrestricted . Brown et al . ( 2017 ) ; Song et al . ( 2018 ) use a small patch added to the image to mislead the classifier or the object detector . Engstrom et al . ( 2019 ; 2018 ) ; Xiao et al . ( 2018 ) construct adversarial examples using spatial transformations like rotation or translation . 
Hendrycks & Dietterich ( 2019 ) show that a wide variety of semantic perturbations degrade the performance for many deep learning models . Many works ( Cubuk et al. , 2019 ; Hendrycks et al. , 2019 ; 2020 ; Robey et al. , 2020 ) propose diverse data augmentation techniques to enhance robustness under semantic perturbations . Calian et al . ( 2021 ) propose adversarial data augmentation that can be viewed as adversarial training for defending semantic perturbations . Beyond empirical defenses , several works ( Mohapatra et al. , 2020 ; Madry et al. , 2017 ; Singh et al. , 2019 ; Balunović et al. , 2019 ) attempt to certify some simple geometric transformations . However , all of them belong to deterministic certification approaches and their performance on realistic datasets are unsatisfactory . 2.2 RANDOMIZED SMOOTHING . Randomized smoothing is a novel certification method originated from differential privacy ( Lecuyer et al. , 2019 ) . Cohen et al . ( 2019 ) then improve the certified bound and apply it to large scale deep neural networks and datasets . Yang et al . ( 2020 ) exhaustively analyze the robust radius by using different noise distribution and norms . Hayes ( 2020 ) ; Yang et al . ( 2020 ) point out that randomized smoothing suffers from curse of dimensionality for the l∞ norm . Salman et al . ( 2019 ) adopt adversarial training to train smoothed classifiers to obtain better robustness guarantees . Li et al . ( 2021 ) ; Fischer et al . ( 2020 ) extend randomized smoothing to certify some simple semantic transformations , e.g. , image translation and rotation . It shows that randomized smoothing could be generalized to certify more diverse attacks or corruptions . However , their methods are limited to simple semantic transformations , which are easy to analyze their mathematical properties . 3 PROPOSED METHOD . In this section , we present the framework and theoretical analyses of our Generalized Randomized Smoothing ( GSmooth ) . 
We first introduce the basic notation. Then we divide semantic transformations into resolvable and non-resolvable transformations, similar to Li et al. (2021). Next, we introduce the details of GSmooth for these two types of semantic transformations, respectively. Finally, we give the theoretical insight and a proof sketch of our main results. 3.1 NOTATIONS . Given inputs x ∈ R^n and labels Y = {1, 2, ..., p}, we denote the classifier as f(x): R^n → [0, 1]^p, which outputs predicted probabilities over all p classes. The prediction of f is arg max_{i∈Y} f(x)_i, where f(·)_i denotes the i-th element of f(·). Let τ(θ, x): R^m × R^n → R^n be a semantic transformation of the raw input x with parameter θ ∈ R^m. We define the smoothed classifier as

G(x) = E_{θ∼g}[f(τ(θ, x))],    (1)

which is the average prediction over samples drawn from a smoothing distribution g(θ) = exp(−ψ(θ)), where ψ(θ): R^m → R is a smooth function. Let u with ||u|| = 1 be any unit-norm vector, and let γ_u = ⟨u, ∇ψ(δ)⟩ be a random variable, where δ ∼ g and ∇ is the gradient operator. The complementary CDF is φ_u(c) = P[γ_u > c], and the inverse complementary CDF is φ_u^{−1}(p) = inf{c | P(γ_u > c) ≤ p}. Following Yang et al. (2020), we define the function

Φ(p) = max_{||u||=1} E[γ_u · I{γ_u > φ_u^{−1}(p)}],    (2)

which will be used to express the certified radius. Let y_A = arg max_{i∈Y} G(x)_i be the label predicted by the smoothed classifier G(x), and let y_B = arg max_{i∈Y\{y_A}} G(x)_i be the runner-up class. Without causing confusion, we use G(x)_A to denote the probability of the top class G(x)_{y_A}; likewise for G(x)_B. 3.2 CERTIFIED BOUND FOR RESOLVABLE SEMANTIC TRANSFORMATIONS .
We first discuss a class of transformations that are resolvable: the composition of two transformations with parameters θ, ξ ∈ P ⊂ R^m is itself a transformation with a new parameter γ(θ, ξ) ∈ P, where γ(·, ·): P × P → P is a function of these parameters. For resolvable semantic transformations, we have the following theorem.

Theorem 1. Let f(x) be any classifier and G(x) be the smoothed classifier defined in Eq. (1). Suppose there exists a function M(·, ·): P × P → R such that the transformation τ(·, ·) satisfies

∂γ(θ, ξ)/∂ξ = (∂γ(θ, ξ)/∂θ) M(θ, ξ),

and there exist two constants p_A, p_B satisfying G(x)_A ≥ p_A > p_B ≥ G(x)_B. Then y_A = arg max_{i∈Y} G(τ(ξ, x))_i holds for any ||ξ|| ≤ R, where

R = (1 / (2M*)) ∫_{p_B}^{p_A} (1 / Φ(p)) dp,    (3)

and M* = max_{ξ,θ∈P} ||M(ξ, θ)||.

Remark. The setting of Theorem 1 is similar to that of Li et al. (2021) for resolvable semantic transformations, but we adopt a different presentation and proof that extends more easily to our GSmooth framework for general semantic transformations. Specifically, we highlight two instances of the theorem: additive transformations and commutable transformations. A transformation is additive if τ(θ, τ(ξ, x)) = τ(ξ + θ, x) for any θ, ξ ∈ P; it is commutable if τ(θ, τ(ξ, x)) = τ(ξ, τ(θ, x)) for any θ, ξ ∈ P. For both types, it is straightforward to verify that the property required by Theorem 1 holds. As an example, applying Theorem 1 with the isotropic Gaussian distribution g(θ) = N(0, σ²I) gives the certified radius

R = (σ/2)(Ψ(p_A) − Ψ(p_B)),    (4)

where Ψ is the inverse CDF of the standard Gaussian distribution. These two kinds of transformations include image translation and Gaussian blur, which are basic semantic transformations widely discussed in previous works (Li et al., 2021; Fischer et al., 2020). Certifying these simple transformations only requires applying translation or Gaussian blur to the sample and averaging the classification score under the noise distribution.
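For the Gaussian case, Eq. (4) is directly computable from estimates of the top-class and runner-up probabilities. Below is a minimal sketch using the standard-normal inverse CDF Ψ; the function name and example values are ours, not the paper's:

```python
from statistics import NormalDist

def certified_radius_gaussian(p_a, p_b, sigma):
    """Eq. (4): R = (sigma / 2) * (Psi(p_a) - Psi(p_b)), where Psi is the
    inverse CDF of the standard Gaussian. Assumes p_a >= p_b."""
    inv_cdf = NormalDist().inv_cdf  # standard-normal inverse CDF
    return 0.5 * sigma * (inv_cdf(p_a) - inv_cdf(p_b))

# Example: estimated top-class probability 0.9, runner-up 0.05, sigma = 0.5.
R = certified_radius_gaussian(0.9, 0.05, 0.5)
```

As expected from Eq. (4), the radius grows with the gap between p_A and p_B and scales linearly with the smoothing scale σ.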
This paper proposed GSmooth, a generalized randomized smoothing method for semantic transformations. The main technical contributions are: (1) introducing an image-to-image translation network to provide a unified framework for analyzing non-resolvable semantic transformations; (2) theoretical proofs of the certified radius and of the approximation error introduced by the image-to-image translation network; (3) empirical performance superior to existing methods on most transformations. More importantly, the method can certify many new transformations that are hard to analyze with existing methods.
Zero-Shot Self-Supervised Learning for MRI Reconstruction
1 INTRODUCTION . Magnetic resonance imaging (MRI) is a non-invasive, radiation-free medical imaging modality that provides excellent soft tissue contrast for diagnostic purposes. However, lengthy acquisition times remain a limitation of MRI. Accelerated MRI techniques acquire fewer measurements at a sub-Nyquist rate, and use redundancies in the acquisition system or the images to remove the resulting aliasing artifacts during reconstruction. In clinical MRI systems, multi-coil receivers are used during data acquisition. Parallel imaging (PI) is the most clinically used method for accelerated MRI, and exploits the redundancies between these coils for reconstruction (Pruessmann et al., 1999; Griswold et al., 2002). Compressed sensing (CS) is another conventional accelerated MRI technique that exploits the compressibility of images in sparsifying transform domains (Lustig et al., 2007), and is commonly used in combination with PI. However, PI and CS may suffer from noise and residual artifacts at high acceleration rates (Robson et al., 2008; Sandino et al., 2020). Recently, deep learning (DL) methods have emerged as an alternative for accelerated MRI due to their improved reconstruction quality compared to conventional approaches (Hammernik et al., 2018; Knoll et al., 2020b). In particular, physics-guided deep learning reconstruction (PG-DLR) approaches have gained interest due to their robustness and improved performance (Hammernik et al., 2018; Aggarwal et al., 2019; Schlemper et al., 2018; Hosseini et al., 2020b). PG-DLR explicitly incorporates the physics of the data acquisition system into the neural network via a procedure known as algorithm unrolling (Monga et al., 2021). This is done by unrolling iterative optimization algorithms that alternate between data consistency (DC) and regularization steps for a fixed number of iterations.
Subsequently, PG-DLR methods are trained in a supervised manner using large databases of fully-sampled measurements (Hammernik et al., 2018; Aggarwal et al., 2019). More recently, self-supervised learning has shown that reconstruction quality similar to supervised PG-DLR can be achieved while training on a database of only undersampled measurements (Yaman et al., 2020). While such database learning strategies offer improved reconstruction quality, acquisition of large datasets may often be infeasible. In some MRI applications involving time-varying physiological processes, dynamic information such as time courses of signal changes, contrast uptake, or breathing patterns may differ substantially between subjects, making it difficult to generate high-quality databases of sufficient size for the aforementioned strategies. Furthermore, database training in general raises concerns about robustness and generalization (Eldar et al., 2017; Knoll et al., 2020c). In MRI reconstruction, this may manifest itself when there are mismatches between training and test datasets in terms of image contrast, sampling pattern, SNR, vendor, and anatomy. While it is imperative to have high-quality reconstructions that can be used to correctly identify lesions/disease for every individual, the fastMRI transfer track challenge shows that pretrained models fail to generalize when applied to patients/scans with different distributions or acquisition parameters, with potential for misdiagnosis (Muckley et al., 2021). Finally, training datasets may lack examples of rare and/or subtle pathologies, increasing the risk of generalization failure (Knoll et al., 2019; 2020c). In this work, we tackle these challenges associated with database training, and propose a zero-shot self-supervised learning (ZS-SSL) approach, which performs subject-specific training of PG-DLR without any external training database.
Succinctly, ZS-SSL partitions the acquired measurements into three types of disjoint sets, which are respectively used only in the PG-DLR neural network, in defining the training loss, and in establishing a stopping strategy to avoid overfitting. Thus, our training is both self-supervised and self-validated. In cases where a database-pretrained network is available, ZS-SSL leverages transfer learning (TL) for improved reconstruction quality and reduced computational complexity. Our contributions can be summarized as follows:
• We propose a zero-shot self-supervised method for learning subject-specific DL MRI reconstruction from a single undersampled dataset without any external training database.
• We provide a well-defined methodology for determining a stopping criterion to avoid overfitting, in contrast to other single-image training approaches (Ulyanov et al., 2018).
• We apply the proposed zero-shot learning approach to knee and brain MRI datasets, and show its efficacy in removing residual aliasing and banding artifacts compared to supervised database learning.
• We show that ZS-SSL can be combined with TL when a database-pretrained model is available, to reduce computational costs.
• We show that our zero-shot learning strategies address robustness and generalizability issues of trained supervised models under changes in sampling pattern, acceleration rate, contrast, SNR, and anatomy at inference time.
2 BACKGROUND AND RELATED WORK . 2.1 ACCELERATED MRI ACQUISITION MODEL . In MRI, raw measurement data is acquired in the frequency domain, also known as k-space. In current clinical MRI systems, multiple receiver coils are used, each sensitive to different parts of the volume. In practice, MRI is accelerated by taking fewer measurements, which are characterized by an undersampling mask that specifies the acquired locations in k-space.
For a multi-coil MRI acquisition, the forward model is given as

y_i = P_Ω F C_i x + n_i,  i ∈ {1, ..., n_c},    (1)

where x is the underlying image, y_i is the acquired data for the i-th coil, P_Ω is the masking operator for undersampling pattern Ω, F is the Fourier transform, C_i is a diagonal matrix characterizing the i-th coil sensitivity, n_i is the measurement noise for the i-th coil, and n_c is the number of coils (Pruessmann et al., 1999). This system can be concatenated across the coil dimension for a compact representation

y_Ω = E_Ω x + n,    (2)

where y_Ω is the acquired undersampled measurements across all coils and E_Ω is the forward encoding operator that concatenates P_Ω F C_i across i ∈ {1, ..., n_c}. The general inverse problem for accelerated MRI is given as

arg min_x ||y_Ω − E_Ω x||₂² + R(x),    (3)

where the ||y_Ω − E_Ω x||₂² term enforces consistency with the acquired data (DC) and R(·) is a regularizer. 2.2 PG-DLR WITH ALGORITHM UNROLLING . Several optimization methods are available for solving the inverse problem in (3) (Fessler, 2020). Variable splitting via a quadratic penalty is one such approach that decouples the DC and regularizer units. It introduces an auxiliary variable z that is constrained to be equal to x, and (3) is reformulated as an unconstrained problem with a quadratic penalty

arg min_{x,z} ||y_Ω − E_Ω x||₂² + µ||x − z||₂² + R(z),    (4)

where µ is the penalty parameter. The optimization problem in (4) is then solved via

z^(i) = arg min_z µ||x^(i−1) − z||₂² + R(z),    (5a)
x^(i) = arg min_x ||y_Ω − E_Ω x||₂² + µ||x − z^(i)||₂²,    (5b)

where z^(i) is an intermediate variable and x^(i) is the desired image at iteration i. In PG-DLR, an iterative algorithm as in (5a) and (5b) is unrolled for a fixed number of iterations (Liang et al., 2020). The regularizer sub-problem in Eq. (5a) is implicitly solved with neural networks, and the DC sub-problem in Eq.
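As a concrete illustration of Eqs. (1)-(2), the multi-coil forward operator can be sketched with NumPy; the function names, array shapes, and orthonormal FFT normalization are our assumptions, not the paper's implementation:

```python
import numpy as np

def forward_multicoil(x, coil_sens, mask):
    """Noise-free undersampled multi-coil forward model y_i = P_Omega F C_i x (Eq. (1)).

    x         : complex image, shape (H, W)
    coil_sens : coil sensitivity maps C_i, shape (nc, H, W)
    mask      : boolean undersampling mask P_Omega over k-space, shape (H, W)
    Returns k-space data of shape (nc, H, W), with unacquired entries zeroed.
    """
    coil_images = coil_sens * x[None, ...]            # C_i x (diagonal sensitivity)
    kspace = np.fft.fft2(coil_images, norm="ortho")   # F C_i x
    return kspace * mask[None, ...]                   # P_Omega F C_i x

# Toy usage: 2 coils, 8x8 image, acquire every other k-space column.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
sens = rng.standard_normal((2, 8, 8)) + 1j * rng.standard_normal((2, 8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[:, ::2] = True
y = forward_multicoil(x, sens, mask)
```

Stacking the `nc` masked coil k-spaces into one vector corresponds to the concatenated operator E_Ω of Eq. (2).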
(5b) is solved via linear methods such as gradient descent (Hammernik et al., 2018) or conjugate gradient (CG) (Aggarwal et al., 2019). There have been numerous works on PG-DLR for accelerated MRI (Schlemper et al., 2018; Hammernik et al., 2018; Aggarwal et al., 2019; Liang et al., 2020; Yaman et al., 2020). These works mostly differ in the algorithms used for DC and the neural networks employed in the regularizer units. However, all of them require a large database of training samples. 2.3 SUPERVISED LEARNING FOR PG-DLR . In supervised PG-DLR, training is performed using a database of fully-sampled reference data. Let y^n_ref be the fully-sampled k-space for subject n and f(y^n_Ω, E^n_Ω; θ) be the output of the unrolled network for undersampled k-space y^n_Ω, where the network is parameterized by θ. End-to-end training minimizes (Knoll et al., 2020b; Yaman et al., 2020)

min_θ (1/N) Σ_{n=1}^{N} L(y^n_ref, E^n_full f(y^n_Ω, E^n_Ω; θ)),    (6)

where N is the number of samples in the training database, E^n_full is the fully-sampled encoding operator that transforms the network output to k-space, and L(·, ·) is a loss function. 2.4 SELF-SUPERVISED LEARNING FOR PG-DLR . Unlike supervised learning, self-supervised learning enables training without fully-sampled data by utilizing only the acquired undersampled measurements (Yaman et al., 2020). A masking approach is used for self-supervision in this setting, where a subset Λ ⊂ Ω is set aside for checking prediction performance and loss calculation, while the remaining points Θ = Ω\Λ are used in the DC units of the PG-DLR network. End-to-end training is performed using the loss function

min_θ (1/N) Σ_{n=1}^{N} L(y^n_Λ, E^n_Λ(f(y^n_Θ, E^n_Θ; θ))).    (7)

3 ZERO-SHOT SELF-SUPERVISED LEARNING FOR PG-DLR .
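The alternation in Eqs. (5a)-(5b) that PG-DLR unrolls can be sketched as follows, with a crude moving-average filter standing in for the learned regularizer network and a few gradient-descent steps for the DC sub-problem; all names, shapes, and hyperparameters here are illustrative assumptions:

```python
import numpy as np

def unrolled_recon(y, E, E_adj, denoiser, mu=0.1, n_unroll=5, n_dc=10, lr=0.5):
    """Unrolled variable splitting, Eqs. (5a)-(5b).

    E, E_adj : forward encoding operator and its adjoint (callables)
    denoiser : placeholder for the regularizer network of Eq. (5a)
    """
    x = E_adj(y)                                   # zero-filled initialization
    for _ in range(n_unroll):
        z = denoiser(x)                            # (5a): regularizer unit
        for _ in range(n_dc):                      # (5b): gradient descent on DC
            grad = E_adj(E(x) - y) + mu * (x - z)
            x = x - lr * grad
    return x

# Toy 1-D example: E keeps half the Fourier coefficients.
n = 32
mask = np.zeros(n, dtype=bool)
mask[::2] = True
E = lambda v: np.fft.fft(v, norm="ortho") * mask
E_adj = lambda k: np.fft.ifft(k * mask, norm="ortho")
denoiser = lambda v: np.convolve(v.real, np.ones(3) / 3, mode="same")  # crude smoother

x_true = np.sin(2 * np.pi * np.arange(n) / n)
y = E(x_true)
x_hat = unrolled_recon(y, E, E_adj, denoiser)
```

In actual PG-DLR, the `denoiser` is a trained neural network and the DC step may instead use conjugate gradient, as noted above.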
As discussed in Section 1, the lack of large datasets in numerous MRI applications, as well as the robustness and generalizability issues of pretrained models, pose a challenge for the clinical translation of DL reconstruction methods. Hence, subject-specific reconstruction is desirable in clinical practice, since it is critical to achieve a reconstruction quality that can be used to correctly diagnose every patient. While the conventional self-supervised masking strategy of (Yaman et al., 2020) can be applied for subject-specific learning, it leads to overfitting unless training is stopped early (Hosseini et al., 2020a). This is similar to other single-image learning strategies, such as the deep image prior (DIP) or zero-shot super-resolution (Ulyanov et al., 2018; Shocher et al., 2018). DIP-type approaches show that an untrained neural network can successfully perform instance-specific image restoration tasks such as denoising, super-resolution, and inpainting without any training data. However, such DIP-type techniques require early stopping to avoid overfitting, which is typically done with a manual heuristic selection (Ulyanov et al., 2018; Hosseini et al., 2020a; Darestani et al., 2021). While this may work in a research setting, a well-defined automated early stopping criterion is critical to fully harness the potential of subject-specific DL MRI reconstruction in practice. Early stopping regularization in the database-trained setting is conventionally motivated through the bias-variance trade-off, in which a validation set is used as a proxy for the generalization error to identify the stopping criterion. By the same motivation, a validation set can aid in devising a stopping criterion, but this has not been feasible in existing zero-shot learning approaches, which either use all acquired measurements (Ulyanov et al., 2018; Senouf et al.
, 2019) or partition them into two sets for training and defining the loss (Hosseini et al., 2020a). Hence, existing zero-shot learning techniques lack a validation set with which to identify a stopping criterion. ZS-SSL Formulation and Training: We propose a new ZS-SSL partitioning framework to enable subject-specific self-supervised training and validation with a well-defined stopping criterion. We define the following partition of the available measurement locations Ω from a single scan:

Ω = Θ ⊔ Λ ⊔ Γ,    (8)

where ⊔ denotes a disjoint union, i.e., Θ, Λ and Γ are pairwise disjoint (Figure 1). Similar to Section 2.4, Θ is used in the DC units of the unrolled network, and Λ is used to define a k-space loss for the self-supervision of the network. The third partition Γ is a set of acquired k-space indices set aside for defining a k-space validation loss. Thus, ZS-SSL training is both self-supervised and self-validated. In general, since zero-shot learning approaches perform training using a single dataset, generating multiple data pairs from this single dataset is necessary to self-supervise the neural network (Quan et al., 2020). Hence, we generate multiple (Θ, Λ) pairs from the acquired locations Ω of the single scan. In ZS-SSL, this is achieved by fixing the k-space validation partition Γ ⊂ Ω and performing the retrospective masking on Ω\Γ multiple times. Formally, Ω\Γ is partitioned K times such that

Ω\Γ = Θ_k ⊔ Λ_k,  k ∈ {1, ..., K},    (9)

where Λ_k, Θ_k and Γ are pairwise disjoint, i.e., Ω = Γ ⊔ Θ_k ⊔ Λ_k, ∀k. ZS-SSL training minimizes

min_θ (1/K) Σ_{k=1}^{K} L(y_{Λ_k}, E_{Λ_k}(f(y_{Θ_k}, E_{Θ_k}; θ))).

In the proposed ZS-SSL, this is supplemented by a k-space self-validation loss, which tests the generalization performance of the trained network on the k-space validation partition Γ.
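The masking procedure of Eqs. (8)-(9) can be sketched as follows; the uniform random selection and the fraction values are our assumptions (the paper's masking details may differ):

```python
import numpy as np

def zs_ssl_partitions(mask, gamma_frac=0.2, lam_frac=0.4, K=4, seed=0):
    """ZS-SSL masking: fix a validation set Gamma inside Omega (Eq. (8)), then split
    Omega \\ Gamma into (Theta_k, Lambda_k) pairs K times (Eq. (9)).

    mask : boolean array marking the acquired k-space locations Omega.
    Returns the Gamma mask and a list of K (Theta_k, Lambda_k) boolean mask pairs.
    """
    rng = np.random.default_rng(seed)
    omega = np.flatnonzero(mask)                       # acquired locations Omega
    gamma_idx = rng.choice(omega, size=int(gamma_frac * omega.size), replace=False)
    gamma = np.zeros_like(mask)
    gamma.flat[gamma_idx] = True
    rest = np.setdiff1d(omega, gamma_idx)              # Omega \ Gamma
    pairs = []
    for _ in range(K):
        lam_idx = rng.choice(rest, size=int(lam_frac * rest.size), replace=False)
        lam = np.zeros_like(mask)
        lam.flat[lam_idx] = True
        theta = mask & ~gamma & ~lam                   # Theta_k = Omega \ (Gamma u Lambda_k)
        pairs.append((theta, lam))
    return gamma, pairs

# Toy usage: every other column of a 16x16 k-space grid is acquired.
mask = np.zeros((16, 16), dtype=bool)
mask[:, ::2] = True
gamma, pairs = zs_ssl_partitions(mask)
```

Each (Θ_k, Λ_k) pair yields one self-supervised training example, while Γ stays fixed for the validation loss described next.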
For the l-th epoch, where the learned network weights are specified by θ^(l), this validation loss is given by

L(y_Γ, E_Γ(f(y_{Ω\Γ}, E_{Ω\Γ}; θ^(l)))).    (10)

Note that in (10), the network output is calculated by applying the DC units on Ω\Γ = Θ ⊔ Λ, i.e., all acquired points outside of Γ, to better assess the generalization performance. Our key motivation is that while the training loss will decrease over epochs, the k-space validation loss will start increasing once overfitting sets in. Thus, we monitor the loss in (10) during training to define an early stopping criterion that avoids overfitting. Let L be the epoch at which training is stopped. At inference time, the network output is then calculated as f(y_Ω, E_Ω; θ^(L)), i.e., all acquired points are used to compute the network output. ZS-SSL with Transfer Learning (TL): While pretrained models are very efficient at reconstructing new unseen measurements from similar MRI scan protocols, their performance degrades significantly when acquisition parameters vary (Muckley et al., 2021). Moreover, retraining a new model on a large database for each acquisition parameter (sampling/contrast/anatomy/acceleration) may be very computationally expensive (Knoll et al., 2019). Hence, TL has been used to re-train DL models pre-trained on large databases to reconstruct MRI data with different characteristics (Knoll et al., 2019). However, such transfer still requires another, often smaller, database for re-training. In contrast, in the presence of pre-trained models, ZS-SSL can be combined with TL, referred to as ZS-SSL-TL, to reconstruct a single slice/instance with different characteristics by using the weights of the pre-trained model for initialization. Thus, ZS-SSL-TL ensures that the pretrained model is adapted for each patient/subject, while facilitating faster convergence and reduced reconstruction time.
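The self-validated early stopping based on Eq. (10) amounts to tracking the k-space validation loss across epochs and stopping once it turns upward; a minimal sketch, where the exact patience rule is our assumption:

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the index of the best epoch according to a per-epoch validation loss,
    stopping the scan once the loss has not improved for `patience` epochs in a row.
    A sketch of the self-validation idea; the paper's exact criterion may differ."""
    best, best_epoch, bad = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, bad = loss, epoch, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_epoch

# The validation loss turns up after epoch 3, so training would stop there
# and theta^(L) with L = 3 would be used at inference.
losses = [1.0, 0.8, 0.6, 0.5, 0.55, 0.6, 0.7]
L = early_stop_epoch(losses)
```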
In order to eliminate the need for large training sets, one can consider a transition from (1) fully-supervised to (2) self-supervised methods, and then from (2) self-supervised methods to (3) single-instance reconstruction methods. In the context of accelerated MRI reconstruction, which is considered in this work, these categories translate into models trained based on (1) access to a fully-sampled dataset, (2) access to a dataset of under-sampled measurements, and (3) access to only one under-sampled measurement. The paper targets (3), i.e., it proposes a zero-shot learning approach for accelerated MRI reconstruction. The algorithm is based on the idea proposed in [1], combined with building a dataset from the given sample in order to perform self-supervised training on the synthesized dataset. To prevent overfitting, the authors propose a method for automatic early stopping. [1] Yaman, Burhaneddin, et al. "Self-supervised physics-based deep learning MRI reconstruction without fully-sampled data." 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, 2020.
Zero-Shot Self-Supervised Learning for MRI Reconstruction
1 INTRODUCTION . Magnetic resonance imaging ( MRI ) is a non-invasive , radiation-free medical imaging modality that provides excellent soft tissue contrast for diagnostic purposes . However , lengthy acquisition times in MRI remain a limitation . Accelerated MRI techniques acquire fewer measurements at a subNyquist rate , and use redundancies in the acquisition system or the images to remove the resulting aliasing artifacts during reconstruction . In clinical MRI systems , multi-coil receivers are used during data acquisition . Parallel imaging ( PI ) is the most clinically used method for accelerated MRI , and exploits the redundancies between these coils for reconstruction ( Pruessmann et al. , 1999 ; Griswold et al. , 2002 ) . Compressed sensing ( CS ) is another conventional accelerated MRI technique that exploits the compressibility of images in sparsifying transform domains ( Lustig et al. , 2007 ) , and is commonly used in combination with PI . However , PI and CS may suffer from noise and residual artifacts at high acceleration rates ( Robson et al. , 2008 ; Sandino et al. , 2020 ) . Recently , deep learning ( DL ) methods have emerged as an alternative for accelerated MRI due to their improved reconstruction quality compared to conventional approaches ( Hammernik et al. , 2018 ; Knoll et al. , 2020b ) . Particularly , physics-guided deep learning reconstruction ( PG-DLR ) approaches have gained interest due to their robustness and improved performance ( Hammernik et al. , 2018 ; Aggarwal et al. , 2019 ; Schlemper et al. , 2018 ; Hosseini et al. , 2020b ) . PG-DLR explicitly incorporates the physics of the data acquisition system into the neural network via a procedure known as algorithm unrolling ( Monga et al. , 2021 ) . This is done by unrolling iterative optimization algorithms that alternate between data consistency ( DC ) and regularization steps for a fixed number of iterations . 
Subsequently , PG-DLR methods are trained in a supervised manner using large databases of fully-sampled measurements ( Hammernik et al. , 2018 ; Aggarwal et al. , 2019 ) . More recently , self-supervised learning has shown that reconstruction quality similar to supervised PG-DLR can be achieved while training on a database of only undersampled measurements ( Yaman et al. , 2020 ) . While such database learning strategies offer improved reconstruction quality , acquisition of large datasets may often be infeasible . In some MRI applications involving time-varying physiological processes , dynamic information such as time courses of signal changes , contrast uptake or breathing patterns may differ substantially between subjects , making it difficult to generate high-quality databases of sufficient size for the aforementioned strategies . Furthermore , database training , in general , brings along concerns about robustness and generalization ( Eldar et al. , 2017 ; Knoll et al. , 2020c ) . In MRI reconstruction , this may exhibit itself when there are mismatches between training and test datasets in terms of image contrast , sampling pattern , SNR , vendor , and anatomy . While it is imperative to have high-quality reconstructions that can be used to correctly identify lesions/disease for every individual , the fastMRI transfer track challenge shows that pretrained models fail to generalize when applied to patients/scans with different distribution or acquisition parameters , with potential for misdiagnosis ( Muckley et al. , 2021 ) . Finally , training datasets may lack examples of rare and/or subtle pathologies , increasing the risk of generalization failure ( Knoll et al. , 2019 ; 2020c ) . In this work , we tackle these challenges associated with database training , and propose a zero-shot self-supervised learning ( ZS-SSL ) approach , which performs subject-specific training of PG-DLR without any external training database . 
Succinctly , ZS-SSL partitions the acquired measurements into three types of disjoint sets , which are respectively used only in the PG-DLR neural network , in defining the training loss , and in establishing a stopping strategy to avoid overfitting . Thus , our training is both self-supervised and self-validated . In cases where a database-pretrained network is available , ZS-SSL leverages transfer learning ( TL ) for improved reconstruction quality and reduced computational complexity . Our contributions can be summarized as follows : • We propose a zero-shot self-supervised method for learning subject-specific DL MRI reconstruction from a single undersampled dataset without any external training database . • We provide a well-defined methodology for determining stopping criterion to avoid overfitting in contrast to other single-image training approaches ( Ulyanov et al. , 2018 ) . • We apply the proposed zero-shot learning approach to knee and brain MRI datasets , and show its efficacy in removing residual aliasing and banding artifacts compared to supervised database learning . • We show our ZS-SSL can be combined with with TL in cases when a database-pretrained model is available to reduce computational costs . • We show that our zero-shot learning strategies address robustness and generalizability issues of trained supervised models in terms of changes in sampling pattern , acceleration rate , contrast , SNR , and anatomy at inference time . 2 BACKGROUND AND RELATED WORK . 2.1 ACCELERATED MRI ACQUISITION MODEL . In MRI , raw measurement data is acquired in the frequency domain , also known as k-space . In current clinical MRI systems , multiple receiver coils are used , where each is sensitive to different parts of the volume . In practice , MRI is accelerated by taking fewer measurements , which are characterized by an undersampling mask that specifies the acquired locations in k-space . 
For a multi-coil MRI acquisition , the forward model is given as yi = PΩFCix + ni , i ∈ { 1 , . . . , nc } , ( 1 ) where x is the underlying image , yi is the acquired data for the ith coil , PΩ is the masking operator for undersampling pattern Ω , F is the Fourier transform , Ci is a diagonal matrix characterizing the ith coil sensitivity , ni is measurement noise for ith coil , and nc is the number of coils ( Pruessmann et al. , 1999 ) . This system can be concatenated across the coil dimension for a compact representation yΩ = EΩx + n , ( 2 ) where yΩ is the acquired undersampled measurements across all coils , EΩ is the forward encoding operator that concatenates PΩFCi across i ∈ { 1 , . . . , nc } . The general inverse problem for accelerated MRI is given as arg min x ‖yΩ −EΩx‖22 +R ( x ) , ( 3 ) where the ‖yΩ−EΩx‖22 term enforces consistency with acquired data ( DC ) andR ( · ) is a regularizer . 2.2 PG-DLR WITH ALGORITHM UNROLLING . Several optimization methods are available for solving the inverse problem in ( 3 ) ( Fessler , 2020 ) . Variable-splitting via quadratic penalty is one such approach that decouples the DC and regularizer units . It introduces an auxiliary variable z that is constrained to be equal to x , and ( 3 ) is reformulated as an unconstrained problem with a quadratic penalty arg min x , z ‖yΩ −EΩx‖22 + µ‖x− z‖22 +R ( z ) , ( 4 ) where µ is the penalty parameter . The optimization problem in ( 4 ) is then solved via z ( i ) = arg min z µ‖x ( i−1 ) − z‖22 +R ( z ) , ( 5a ) x ( i ) = arg min x ‖yΩ −EΩx‖22 + µ‖x− z ( i ) ‖22 , ( 5b ) where z ( i ) is an intermediate variable and x ( i ) is the desired image at iteration i . In PG-DLR , an iterative algorithm , as in ( 5a ) and ( 5b ) is unrolled for a fixed number of iterations ( Liang et al. , 2020 ) . The regularizer sub-problem in Eq . ( 5a ) is implicitly solved with neural networks and the DC sub-problem in Eq . 
( 5b ) is solved via linear methods such as gradient descent ( Hammernik et al. , 2018 ) or conjugate gradient ( CG ) ( Aggarwal et al. , 2019 ) . There have been numerous works on PG-DLR for accelerated MRI ( Schlemper et al. , 2018 ; Hammernik et al. , 2018 ; Aggarwal et al. , 2019 ; Liang et al. , 2020 ; Yaman et al. , 2020 ) . Most of these works vary from each other on the algorithms used for DC and neural networks employed in the regularizer units . However , all these works require a large database of training samples . 2.3 SUPERVISED LEARNING FOR PG-DLR . In supervised PG-DLR , training is performed using a database of fully-sampled reference data . Let ynref be the fully-sampled k-space for subject n and f ( y n Ω , E n Ω ; θ ) be the output of the unrolled network for under-sampled k-space ynΩ , where the network is parameterized by θ. End-to-end training minimizes ( Knoll et al. , 2020b ; Yaman et al. , 2020 ) min θ 1 N N∑ n=1 L ( ynref , Enfullf ( ynΩ , EnΩ ; θ ) ) , ( 6 ) where N is the number of samples in the training database , Enfull is the fully-sampled encoding operator that transform network output to k-space and L ( · , · ) is a loss function . 2.4 SELF-SUPERVISED LEARNING FOR PG-DLR . Unlike supervised learning , self-supervised learning enables training without fully-sampled data by only utilizing acquired undersampled measurements ( Yaman et al. , 2020 ) . A masking approach is used for self-supervision in this setting , where a subset Λ ⊂ Ω is set aside for checking prediction performance/loss calculation , while the remainder of points Θ = Ω\Λ are used in the DC units of the PG-DLR network . End-to-end training is performed using the loss function min θ 1 N N∑ n=1 L ( ynΛ , E n Λ ( f ( ynΘ , E n Θ ; θ ) ) ) . ( 7 ) 3 ZERO-SHOT SELF-SUPERVISED LEARNING FOR PG-DLR . 
As discussed in Section 1 , lack of large datasets in numerous MRI applications , as well as robustness and generalizability issues of pretrained models pose a challenge for the clinical translation of DL reconstruction methods . Hence , subject-specific reconstruction is desirable in clinical practice , since it is critical to achieve a reconstruction quality that can be used for correctly diagnosing every patient . While the conventional self-supervised masking strategy , as in ( Yaman et al. , 2020 ) can be applied for subject-specific learning , it leads to overfitting unless the training is stopped early ( Hosseini et al. , 2020a ) . This is similar to other single-image learning strategies , such as the deep image prior ( DIP ) or zero-shot super-resolution ( Ulyanov et al. , 2018 ; Shocher et al. , 2018 ) . DIP-type approaches shows that an untrained neural network can successfully perform instance-specific image restoration tasks such as denoising , super-resolution , inpainting without any training data . However , such DIPtype techniques requires an early stopping for avoiding over-fitting , which is typically done with a manual heuristic selection ( Ulyanov et al. , 2018 ; Hosseini et al. , 2020a ; Darestani et al. , 2021 ) . While this may work in a research setting , having a well-defined automated early stopping criterion is critical to fully harness the potential of subject-specific DL MRI reconstruction in practice . Early stopping regularization in database-trained setting is conventionally motivated through the bias-variance trade-off , in which a validation set is used as a proxy for the generalization error to identify the stopping criterion . Using the same bias-variance trade-off motivation , having a validation set can aid in devising a stopping criterion , but this has not been feasible in existing zero-shot learning approaches , which either use all acquired measurements ( Ulyanov et al. , 2018 ; Senouf et al. 
, 2019) or partition them into two sets for training and defining the loss (Hosseini et al., 2020a). Hence, existing zero-shot learning techniques lack a validation set to identify the stopping criterion.

ZS-SSL Formulation and Training: We propose a new ZS-SSL partitioning framework to enable subject-specific self-supervised training and validation with a well-defined stopping criterion. We define the following partition of the available measurement locations from a single scan, $\Omega$:
$$\Omega = \Theta \sqcup \Lambda \sqcup \Gamma, \qquad (8)$$
where $\sqcup$ denotes a disjoint union, i.e., $\Theta$, $\Lambda$ and $\Gamma$ are pairwise disjoint (Figure 1). Similar to Section 2.4, $\Theta$ is used in the DC units of the unrolled network, and $\Lambda$ is used to define a k-space loss for the self-supervision of the network. The third partition $\Gamma$ is a set of acquired k-space indices set aside for defining a k-space validation loss. Thus, ZS-SSL training is both self-supervised and self-validated. In general, since zero-shot learning approaches perform training using a single dataset, generating multiple data pairs from this single dataset is necessary to self-supervise the neural network (Quan et al., 2020). Hence, we generate multiple $(\Theta, \Lambda)$ pairs from the acquired locations $\Omega$ of the single scan. In ZS-SSL, this is achieved by fixing the k-space validation partition $\Gamma \subset \Omega$, and performing the retrospective masking on $\Omega \setminus \Gamma$ multiple times. Formally, $\Omega \setminus \Gamma$ is partitioned $K$ times such that
$$\Omega \setminus \Gamma = \Theta_k \sqcup \Lambda_k, \quad k \in \{1, \dots, K\}, \qquad (9)$$
where $\Lambda_k$, $\Theta_k$ and $\Gamma$ are pairwise disjoint, i.e., $\Omega = \Gamma \sqcup \Theta_k \sqcup \Lambda_k, \ \forall k$. ZS-SSL training minimizes
$$\min_\theta \frac{1}{K}\sum_{k=1}^{K} \mathcal{L}\left(y_{\Lambda_k},\, E_{\Lambda_k} f(y_{\Theta_k}, E_{\Theta_k}; \theta)\right).$$
In the proposed ZS-SSL, this is supplemented by a k-space self-validation loss, which tests the generalization performance of the trained network on the k-space validation partition $\Gamma$.
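The partitioning in (8)-(9) — fix Γ once, then re-split Ω \ Γ into K pairs (Θ_k, Λ_k) — can be sketched as follows (a minimal NumPy illustration with made-up sizes; not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 128
omega = np.sort(rng.choice(n, size=64, replace=False))  # acquired locations Ω

# Fix the k-space validation partition Γ ⊂ Ω once (Eq. 8) ...
gamma = rng.choice(omega, size=12, replace=False)
rest = np.setdiff1d(omega, gamma)                       # Ω \ Γ

# ... then retrospectively mask Ω \ Γ into (Θ_k, Λ_k) pairs K times (Eq. 9).
K = 5
pairs = []
for _ in range(K):
    lam_k = rng.choice(rest, size=rest.size // 4, replace=False)
    theta_k = np.setdiff1d(rest, lam_k)
    pairs.append((theta_k, lam_k))

# Each triple (Γ, Θ_k, Λ_k) is pairwise disjoint and covers Ω.
for theta_k, lam_k in pairs:
    covered = np.sort(np.concatenate([gamma, theta_k, lam_k]))
    assert np.array_equal(covered, omega)
```

Each pair then plays the role of (Θ, Λ) in one term of the K-fold training loss, while Γ stays untouched for validation.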
For the $l$th epoch, where the learned network weights are specified by $\theta^{(l)}$, this validation loss is given by
$$\mathcal{L}\left(y_\Gamma,\, E_\Gamma f(y_{\Omega\setminus\Gamma}, E_{\Omega\setminus\Gamma}; \theta^{(l)})\right). \qquad (10)$$
Note that in (10), the network output is calculated by applying the DC units on $\Omega\setminus\Gamma = \Theta \sqcup \Lambda$, i.e., all acquired points outside of $\Gamma$, to better assess its generalization performance. Our key motivation is that while the training loss will decrease over epochs, the k-space validation loss will start increasing once overfitting sets in. Thus, we monitor the loss in (10) during training to define an early stopping criterion that avoids overfitting. Let $L$ be the epoch at which training is stopped. Then at inference time, the network output is calculated as $f(y_\Omega, E_\Omega; \theta^{(L)})$, i.e., all acquired points are used to calculate the network output.

ZS-SSL with Transfer Learning (TL): While pretrained models are very efficient in reconstructing new unseen measurements from similar MRI scan protocols, their performance degrades significantly when acquisition parameters vary (Muckley et al., 2021). Moreover, retraining a new model on a large database for each set of acquisition parameters (sampling, contrast, anatomy, acceleration) may be very computationally expensive (Knoll et al., 2019). Hence, TL has been used to re-train DL models pre-trained on large databases to reconstruct MRI data with different characteristics (Knoll et al., 2019). However, such transfer still requires another, often smaller, database for re-training. In contrast, in the presence of pre-trained models, ZS-SSL can be combined with TL, referred to as ZS-SSL-TL, to reconstruct a single slice/instance with different characteristics by using the weights of the pre-trained model for initialization. Thus, ZS-SSL-TL ensures that the pretrained model is adapted for each patient/subject, while facilitating faster convergence and reduced reconstruction time.
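The stopping rule built on (10) amounts to tracking the k-space validation loss per epoch and keeping the weights from its minimum. A minimal sketch (our own patience-based variant for illustration; the paper's exact criterion may differ):

```python
def stopping_epoch(val_losses, patience=3):
    """Return the epoch L whose weights θ^(L) are kept for inference: the
    epoch with the lowest k-space validation loss, stopping once no
    improvement is seen for `patience` consecutive epochs."""
    best, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch

# Typical behavior: the validation loss falls, then rises as the
# subject-specific network starts to overfit.
curve = [1.00, 0.60, 0.40, 0.35, 0.37, 0.41, 0.48, 0.60]
L = stopping_epoch(curve)  # epoch of the minimum: 3
```

In the paper's setting, `val_losses[l]` would be the value of (10) computed at epoch l; the training losses on the (Θ_k, Λ_k) pairs keep decreasing while this curve turns upward.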
This article proposes a "zero-shot" method for MRI reconstruction, which is a well-studied inverse problem. The method is based on the ideas of the deep image prior, i.e. the ability of correctly sized neural networks to learn the structure of a single image sufficiently well for denoising tasks. This self-supervised learned network can then be used in a plug-and-play architecture to solve the inverse problem with a variational approach, i.e. as if the learned denoiser were a Total Variation (TV) minimiser. Their denoiser can be improved in a transfer learning approach to benefit from a more complex network trained on images similar to those at hand, and fine-tuned on the image to be reconstructed. The authors go on to apply their plug-and-play architecture to solve the MRI reconstruction problem. They provide experiments and comparisons.
SP:eb17e2fcdfe38299e77c87eeeff3ddeb9b21f65a
Zero-Shot Self-Supervised Learning for MRI Reconstruction
1 INTRODUCTION. Magnetic resonance imaging (MRI) is a non-invasive, radiation-free medical imaging modality that provides excellent soft tissue contrast for diagnostic purposes. However, lengthy acquisition times remain a limitation of MRI. Accelerated MRI techniques acquire fewer measurements at a sub-Nyquist rate, and use redundancies in the acquisition system or the images to remove the resulting aliasing artifacts during reconstruction. In clinical MRI systems, multi-coil receivers are used during data acquisition. Parallel imaging (PI) is the most clinically used method for accelerated MRI, and exploits the redundancies between these coils for reconstruction (Pruessmann et al., 1999; Griswold et al., 2002). Compressed sensing (CS) is another conventional accelerated MRI technique that exploits the compressibility of images in sparsifying transform domains (Lustig et al., 2007), and is commonly used in combination with PI. However, PI and CS may suffer from noise and residual artifacts at high acceleration rates (Robson et al., 2008; Sandino et al., 2020). Recently, deep learning (DL) methods have emerged as an alternative for accelerated MRI due to their improved reconstruction quality compared to conventional approaches (Hammernik et al., 2018; Knoll et al., 2020b). In particular, physics-guided deep learning reconstruction (PG-DLR) approaches have gained interest due to their robustness and improved performance (Hammernik et al., 2018; Aggarwal et al., 2019; Schlemper et al., 2018; Hosseini et al., 2020b). PG-DLR explicitly incorporates the physics of the data acquisition system into the neural network via a procedure known as algorithm unrolling (Monga et al., 2021). This is done by unrolling iterative optimization algorithms that alternate between data consistency (DC) and regularization steps for a fixed number of iterations.
Subsequently, PG-DLR methods are trained in a supervised manner using large databases of fully-sampled measurements (Hammernik et al., 2018; Aggarwal et al., 2019). More recently, self-supervised learning has shown that reconstruction quality similar to supervised PG-DLR can be achieved while training on a database of only undersampled measurements (Yaman et al., 2020). While such database learning strategies offer improved reconstruction quality, acquisition of large datasets may often be infeasible. In some MRI applications involving time-varying physiological processes, dynamic information such as time courses of signal changes, contrast uptake or breathing patterns may differ substantially between subjects, making it difficult to generate high-quality databases of sufficient size for the aforementioned strategies. Furthermore, database training, in general, raises concerns about robustness and generalization (Eldar et al., 2017; Knoll et al., 2020c). In MRI reconstruction, this may manifest itself when there are mismatches between training and test datasets in terms of image contrast, sampling pattern, SNR, vendor, and anatomy. While it is imperative to have high-quality reconstructions that can be used to correctly identify lesions/disease for every individual, the fastMRI transfer track challenge shows that pretrained models fail to generalize when applied to patients/scans with a different distribution or different acquisition parameters, with potential for misdiagnosis (Muckley et al., 2021). Finally, training datasets may lack examples of rare and/or subtle pathologies, increasing the risk of generalization failure (Knoll et al., 2019; 2020c). In this work, we tackle these challenges associated with database training, and propose a zero-shot self-supervised learning (ZS-SSL) approach, which performs subject-specific training of PG-DLR without any external training database.
Succinctly, ZS-SSL partitions the acquired measurements into three types of disjoint sets, which are respectively used only in the PG-DLR neural network, in defining the training loss, and in establishing a stopping strategy to avoid overfitting. Thus, our training is both self-supervised and self-validated. In cases where a database-pretrained network is available, ZS-SSL leverages transfer learning (TL) for improved reconstruction quality and reduced computational complexity. Our contributions can be summarized as follows:
• We propose a zero-shot self-supervised method for learning subject-specific DL MRI reconstruction from a single undersampled dataset without any external training database.
• We provide a well-defined methodology for determining a stopping criterion to avoid overfitting, in contrast to other single-image training approaches (Ulyanov et al., 2018).
• We apply the proposed zero-shot learning approach to knee and brain MRI datasets, and show its efficacy in removing residual aliasing and banding artifacts compared to supervised database learning.
• We show that ZS-SSL can be combined with TL in cases where a database-pretrained model is available, to reduce computational costs.
• We show that our zero-shot learning strategies address robustness and generalizability issues of trained supervised models in terms of changes in sampling pattern, acceleration rate, contrast, SNR, and anatomy at inference time.

2 BACKGROUND AND RELATED WORK.

2.1 ACCELERATED MRI ACQUISITION MODEL. In MRI, raw measurement data is acquired in the frequency domain, also known as k-space. In current clinical MRI systems, multiple receiver coils are used, where each is sensitive to different parts of the volume. In practice, MRI is accelerated by taking fewer measurements, which are characterized by an undersampling mask that specifies the acquired locations in k-space.
For a multi-coil MRI acquisition, the forward model is given as
$$y_i = P_\Omega F C_i x + n_i, \quad i \in \{1, \dots, n_c\}, \qquad (1)$$
where $x$ is the underlying image, $y_i$ is the acquired data for the $i$th coil, $P_\Omega$ is the masking operator for undersampling pattern $\Omega$, $F$ is the Fourier transform, $C_i$ is a diagonal matrix characterizing the $i$th coil sensitivity, $n_i$ is the measurement noise for the $i$th coil, and $n_c$ is the number of coils (Pruessmann et al., 1999). This system can be concatenated across the coil dimension for a compact representation
$$y_\Omega = E_\Omega x + n, \qquad (2)$$
where $y_\Omega$ is the acquired undersampled measurements across all coils, and $E_\Omega$ is the forward encoding operator that concatenates $P_\Omega F C_i$ across $i \in \{1, \dots, n_c\}$. The general inverse problem for accelerated MRI is given as
$$\arg\min_x \|y_\Omega - E_\Omega x\|_2^2 + \mathcal{R}(x), \qquad (3)$$
where the $\|y_\Omega - E_\Omega x\|_2^2$ term enforces consistency with acquired data (DC) and $\mathcal{R}(\cdot)$ is a regularizer.

2.2 PG-DLR WITH ALGORITHM UNROLLING. Several optimization methods are available for solving the inverse problem in (3) (Fessler, 2020). Variable splitting via a quadratic penalty is one such approach that decouples the DC and regularizer units. It introduces an auxiliary variable $z$ that is constrained to be equal to $x$, and (3) is reformulated as an unconstrained problem with a quadratic penalty
$$\arg\min_{x, z} \|y_\Omega - E_\Omega x\|_2^2 + \mu\|x - z\|_2^2 + \mathcal{R}(z), \qquad (4)$$
where $\mu$ is the penalty parameter. The optimization problem in (4) is then solved via
$$z^{(i)} = \arg\min_z \mu\|x^{(i-1)} - z\|_2^2 + \mathcal{R}(z), \qquad (5a)$$
$$x^{(i)} = \arg\min_x \|y_\Omega - E_\Omega x\|_2^2 + \mu\|x - z^{(i)}\|_2^2, \qquad (5b)$$
where $z^{(i)}$ is an intermediate variable and $x^{(i)}$ is the desired image at iteration $i$. In PG-DLR, an iterative algorithm, as in (5a) and (5b), is unrolled for a fixed number of iterations (Liang et al., 2020). The regularizer sub-problem in Eq. (5a) is implicitly solved with neural networks and the DC sub-problem in Eq.
(5b) is solved via linear methods such as gradient descent (Hammernik et al., 2018) or conjugate gradient (CG) (Aggarwal et al., 2019). There have been numerous works on PG-DLR for accelerated MRI (Schlemper et al., 2018; Hammernik et al., 2018; Aggarwal et al., 2019; Liang et al., 2020; Yaman et al., 2020). These works differ mainly in the algorithms used for DC and the neural networks employed in the regularizer units. However, all of them require a large database of training samples.

2.3 SUPERVISED LEARNING FOR PG-DLR. In supervised PG-DLR, training is performed using a database of fully-sampled reference data. Let $y^n_{\mathrm{ref}}$ be the fully-sampled k-space for subject $n$ and $f(y^n_\Omega, E^n_\Omega; \theta)$ be the output of the unrolled network for under-sampled k-space $y^n_\Omega$, where the network is parameterized by $\theta$. End-to-end training minimizes (Knoll et al., 2020b; Yaman et al., 2020)
$$\min_\theta \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}\left(y^n_{\mathrm{ref}},\, E^n_{\mathrm{full}}\, f(y^n_\Omega, E^n_\Omega; \theta)\right), \qquad (6)$$
where $N$ is the number of samples in the training database, $E^n_{\mathrm{full}}$ is the fully-sampled encoding operator that transforms the network output to k-space, and $\mathcal{L}(\cdot,\cdot)$ is a loss function.

2.4 SELF-SUPERVISED LEARNING FOR PG-DLR. Unlike supervised learning, self-supervised learning enables training without fully-sampled data by utilizing only the acquired undersampled measurements (Yaman et al., 2020). A masking approach is used for self-supervision in this setting, where a subset $\Lambda \subset \Omega$ is set aside for the loss calculation, while the remaining points $\Theta = \Omega \setminus \Lambda$ are used in the DC units of the PG-DLR network. End-to-end training is performed using the loss function
$$\min_\theta \frac{1}{N}\sum_{n=1}^{N} \mathcal{L}\left(y^n_\Lambda,\, E^n_\Lambda\, f(y^n_\Theta, E^n_\Theta; \theta)\right). \qquad (7)$$

3 ZERO-SHOT SELF-SUPERVISED LEARNING FOR PG-DLR.
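To make the operators in (1)-(2) concrete, here is a toy 1D multi-coil encoding operator E_Ω and its adjoint (a NumPy sketch with invented sizes and random coil profiles, verified only through the adjoint identity ⟨u, E_Ω x⟩ = ⟨E_Ω^H u, x⟩; not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sizes: n-pixel 1D image, nc coils, 16 sampled k-space locations.
n, nc = 32, 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)          # image
coils = rng.standard_normal((nc, n)) + 1j * rng.standard_normal((nc, n))
omega = np.sort(rng.choice(n, size=16, replace=False))            # pattern Ω

def E(x):
    """Forward encoding operator E_Ω: coil weighting C_i, unitary FFT F,
    then undersampling P_Ω, stacked over coils (Eqs. 1-2)."""
    kspace = np.fft.fft(coils * x[None, :], axis=1) / np.sqrt(n)
    return kspace[:, omega]

def E_adj(y):
    """Adjoint E_Ω^H: zero-fill to full k-space, inverse FFT, coil combine."""
    kspace = np.zeros((nc, n), dtype=complex)
    kspace[:, omega] = y
    imgs = np.fft.ifft(kspace, axis=1) * np.sqrt(n)
    return np.sum(np.conj(coils) * imgs, axis=0)

y_omega = E(x)  # acquired undersampled multi-coil measurements y_Ω

# Adjoint check: <u, E x> should equal <E^H u, x> for any test vector u.
u = rng.standard_normal(y_omega.shape) + 1j * rng.standard_normal(y_omega.shape)
lhs = np.vdot(u, E(x))
rhs = np.vdot(E_adj(u), x)
```

The adjoint pair (E, E^H) is exactly what gradient-descent or CG data-consistency updates for (5b) are built from.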
As discussed in Section 1, the lack of large datasets in numerous MRI applications, as well as robustness and generalizability issues of pretrained models, pose a challenge for the clinical translation of DL reconstruction methods. Hence, subject-specific reconstruction is desirable in clinical practice, since it is critical to achieve a reconstruction quality that can be used to correctly diagnose every patient. While the conventional self-supervised masking strategy, as in (Yaman et al., 2020), can be applied for subject-specific learning, it leads to overfitting unless training is stopped early (Hosseini et al., 2020a). This is similar to other single-image learning strategies, such as the deep image prior (DIP) or zero-shot super-resolution (Ulyanov et al., 2018; Shocher et al., 2018). DIP-type approaches show that an untrained neural network can successfully perform instance-specific image restoration tasks such as denoising, super-resolution, and inpainting without any training data. However, such DIP-type techniques require early stopping to avoid overfitting, which is typically done with a manual heuristic selection (Ulyanov et al., 2018; Hosseini et al., 2020a; Darestani et al., 2021). While this may work in a research setting, a well-defined automated early stopping criterion is critical to fully harness the potential of subject-specific DL MRI reconstruction in practice. Early stopping regularization in the database-trained setting is conventionally motivated through the bias-variance trade-off, in which a validation set is used as a proxy for the generalization error to identify the stopping criterion. By the same motivation, a validation set can aid in devising a stopping criterion, but this has not been feasible in existing zero-shot learning approaches, which either use all acquired measurements (Ulyanov et al., 2018; Senouf et al.
, 2019) or partition them into two sets for training and defining the loss (Hosseini et al., 2020a). Hence, existing zero-shot learning techniques lack a validation set to identify the stopping criterion.

ZS-SSL Formulation and Training: We propose a new ZS-SSL partitioning framework to enable subject-specific self-supervised training and validation with a well-defined stopping criterion. We define the following partition of the available measurement locations from a single scan, $\Omega$:
$$\Omega = \Theta \sqcup \Lambda \sqcup \Gamma, \qquad (8)$$
where $\sqcup$ denotes a disjoint union, i.e., $\Theta$, $\Lambda$ and $\Gamma$ are pairwise disjoint (Figure 1). Similar to Section 2.4, $\Theta$ is used in the DC units of the unrolled network, and $\Lambda$ is used to define a k-space loss for the self-supervision of the network. The third partition $\Gamma$ is a set of acquired k-space indices set aside for defining a k-space validation loss. Thus, ZS-SSL training is both self-supervised and self-validated. In general, since zero-shot learning approaches perform training using a single dataset, generating multiple data pairs from this single dataset is necessary to self-supervise the neural network (Quan et al., 2020). Hence, we generate multiple $(\Theta, \Lambda)$ pairs from the acquired locations $\Omega$ of the single scan. In ZS-SSL, this is achieved by fixing the k-space validation partition $\Gamma \subset \Omega$, and performing the retrospective masking on $\Omega \setminus \Gamma$ multiple times. Formally, $\Omega \setminus \Gamma$ is partitioned $K$ times such that
$$\Omega \setminus \Gamma = \Theta_k \sqcup \Lambda_k, \quad k \in \{1, \dots, K\}, \qquad (9)$$
where $\Lambda_k$, $\Theta_k$ and $\Gamma$ are pairwise disjoint, i.e., $\Omega = \Gamma \sqcup \Theta_k \sqcup \Lambda_k, \ \forall k$. ZS-SSL training minimizes
$$\min_\theta \frac{1}{K}\sum_{k=1}^{K} \mathcal{L}\left(y_{\Lambda_k},\, E_{\Lambda_k} f(y_{\Theta_k}, E_{\Theta_k}; \theta)\right).$$
In the proposed ZS-SSL, this is supplemented by a k-space self-validation loss, which tests the generalization performance of the trained network on the k-space validation partition $\Gamma$.
For the $l$th epoch, where the learned network weights are specified by $\theta^{(l)}$, this validation loss is given by
$$\mathcal{L}\left(y_\Gamma,\, E_\Gamma f(y_{\Omega\setminus\Gamma}, E_{\Omega\setminus\Gamma}; \theta^{(l)})\right). \qquad (10)$$
Note that in (10), the network output is calculated by applying the DC units on $\Omega\setminus\Gamma = \Theta \sqcup \Lambda$, i.e., all acquired points outside of $\Gamma$, to better assess its generalization performance. Our key motivation is that while the training loss will decrease over epochs, the k-space validation loss will start increasing once overfitting sets in. Thus, we monitor the loss in (10) during training to define an early stopping criterion that avoids overfitting. Let $L$ be the epoch at which training is stopped. Then at inference time, the network output is calculated as $f(y_\Omega, E_\Omega; \theta^{(L)})$, i.e., all acquired points are used to calculate the network output.

ZS-SSL with Transfer Learning (TL): While pretrained models are very efficient in reconstructing new unseen measurements from similar MRI scan protocols, their performance degrades significantly when acquisition parameters vary (Muckley et al., 2021). Moreover, retraining a new model on a large database for each set of acquisition parameters (sampling, contrast, anatomy, acceleration) may be very computationally expensive (Knoll et al., 2019). Hence, TL has been used to re-train DL models pre-trained on large databases to reconstruct MRI data with different characteristics (Knoll et al., 2019). However, such transfer still requires another, often smaller, database for re-training. In contrast, in the presence of pre-trained models, ZS-SSL can be combined with TL, referred to as ZS-SSL-TL, to reconstruct a single slice/instance with different characteristics by using the weights of the pre-trained model for initialization. Thus, ZS-SSL-TL ensures that the pretrained model is adapted for each patient/subject, while facilitating faster convergence and reduced reconstruction time.
This submission deals with MR image reconstruction in a context where the raw data is under-sampled. This data (which is in the Fourier domain) can be under-sampled to accelerate the imaging exam and thus improve clinical workflow, making this a very relevant research topic. The submission relies on recent deep learning models that explicitly incorporate some physical aspects of MR image acquisition (multiple coils, coil sensitivities and the Fourier transform) and achieve MRI reconstruction based on supervised training examples. The training examples do not need to be (under-sampled / fully sampled) pairs. Indeed, some entries of under-sampled data can be deleted to create an input for which correct reconstruction of the deleted entries can be required through a computable loss term.
SP:eb17e2fcdfe38299e77c87eeeff3ddeb9b21f65a
MAML is a Noisy Contrastive Learner
1 INTRODUCTION. Humans can learn from very few samples. They can readily establish their cognition and understanding of novel tasks, environments, or domains even with very limited experience in the corresponding circumstances. Meta-learning, a subfield of machine learning, aims at equipping machines with such a capacity to effectively accommodate new scenarios (Vilalta & Drissi, 2002; Grant et al., 2018). Machines learn to extract task-agnostic information so that their performance on unseen tasks can be improved (Hospedales et al., 2020). One highly influential meta-learning algorithm is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), which has inspired numerous follow-up extensions (Nichol et al., 2018; Rajeswaran et al., 2019; Liu et al., 2019; Finn et al., 2019; Jamal & Qi, 2019; Javed & White, 2019). MAML estimates a set of model parameters such that adapting the model to a new task requires only a few updates to those parameters. We take the few-shot classification task as an example to review the algorithmic procedure of MAML. A few-shot classification problem refers to classifying samples from some classes (i.e. query data) after seeing a few examples per class (i.e. support data). In a meta-learning scenario, we consider a distribution of tasks, where each task is a few-shot classification problem and different tasks have different target classes. MAML aims to meta-train the base-model on training tasks (i.e., the meta-training dataset) and evaluate the performance of the base-model on testing tasks sampled from a held-out unseen dataset (i.e. the meta-testing dataset). In meta-training, MAML follows a bi-level optimization scheme composed of the inner loop and the outer loop, as shown in Appendix A (please refer to Section 2 for a detailed definition). In the inner loop (also known as fast adaptation), the base-model θ is updated to θ′ using the support set.
In the outer loop, a loss is evaluated on θ′ using the query set, and its gradient is computed with respect to θ to update the base-model. Since the outer loop requires computing the gradient of a gradient (as the inner-loop update is included in the entire computation graph), this variant is called second-order MAML (SOMAML). To avoid computing the Hessian matrix, Finn et al. (2017) propose first-order MAML (FOMAML), which uses the gradient computed with respect to the inner-loop-updated parameters θ′ to update the base-model. The widely accepted intuition behind MAML is that the models are encouraged to learn general-purpose representations which are broadly applicable not only to the seen tasks but also to novel tasks (Finn et al., 2017; Raghu et al., 2020; Goldblum et al., 2020). Raghu et al. (2020) confirm this perspective by showing that during fast adaptation, the majority of the change occurs in the final linear layers, while the convolution layers (as the feature encoder) remain almost static. This implies that models trained with MAML learn a good feature representation and only have to change the linear mapping from features to outputs during fast adaptation. Similar ideas of freezing the feature extractor during the inner loop have also been explored (Lee et al., 2019; Bertinetto et al., 2019; Liu et al., 2020), and have been held as an assumption in theoretical works (Du et al., 2021; Tripuraneni et al., 2020; Chua et al., 2021). While this intuition sounds satisfactory, we step further and ask the following fundamental questions: (1) In what sense does MAML guide any model to learn general-purpose representations? (2) How do the inner and outer loops in the training mechanism of MAML collaboratively prompt models to achieve this? (3) What is the role of the support and query data, and how do they, if at all, interact with each other?
In this paper, we answer these questions and give new insights into the working mechanism of MAML, which turns out to be closely connected to supervised contrastive learning (SCL)1. Here, we provide a sketch of our analysis in Figure 1. We consider a setting of (a) a 5-way 1-shot paradigm of few-shot learning, (b) the mean square error (MSE) between the one-hot encoding of the ground-truth label and the outputs as the objective function, and (c) MAML with a single inner-loop update. At the beginning of the inner loop, we set the linear layer w0 to zero. Then, the inner loop update of w0 is equivalent to adding the support features to w0. In the outer loop, the output of a query sample q1 is actually the inner product between the query feature ϕ(q1) and all support features (the learning rate is omitted for now). As the ground truth is a one-hot vector, the encoder is trained either to minimize the inner product between the query features and the support features (when they are from different classes, as shown in the green box), or to pull the inner product between the query features and the support features toward 1 (when they have the same label, as shown in the red box). Therefore, the inner loop and the outer loop together manifest an SCL objective. In particular, as the vanilla implementation of MAML uses non-zero (random) initialization for the linear layer, we will show that such initialization leads to a noisy SCL objective which impedes training. In this paper, we first review a formal definition of SCL, present the more general case of MAML with cross-entropy loss in classification, and show that the underlying learning protocol of vanilla MAML is an interfered SCL (Section 2). We then experimentally verify the supervised contrastiveness of MAML, and propose to mitigate the interference with our simple but effective technique of zero initialization and the zeroing trick (cf. Section 3).
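The Figure 1 argument can be reproduced numerically: with a zero-initialized head and one inner-loop MSE step, the query logits are exactly (scaled) inner products between the query feature and the support features. A minimal NumPy illustration (our own toy, with random vectors standing in for the encoder features ϕ(·)):

```python
import numpy as np

rng = np.random.default_rng(3)

# 5-way 1-shot toy task: random vectors stand in for encoder features ϕ(·).
n_way, n_f = 5, 16
support_feats = rng.standard_normal((n_way, n_f))  # ϕ(s_m), one per class
targets = np.eye(n_way)                            # one-hot labels t_m
query_feat = rng.standard_normal(n_f)              # ϕ(q_1)

# One inner-loop step on the MSE loss sum_m ||ϕ(s_m)^T w - t_m||^2,
# starting from the zero-initialized head w^0 = 0.
eta = 0.5
w0 = np.zeros((n_f, n_way))
grad = 2 * support_feats.T @ (support_feats @ w0 - targets)
w1 = w0 - eta * grad  # = 2η Σ_m ϕ(s_m) t_m^T, one support feature per column

# The query's kth logit is the (scaled) inner product between the query
# feature and the class-k support feature: the contrastive pairing of Fig. 1.
logits = query_feat @ w1
inner_products = support_feats @ query_feat
```

A random non-zero w0 would add a support-independent term to each logit, which is precisely the interference the zeroing trick removes.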
In summary, our main contributions are three-fold:
• We show that MAML is implicitly an SCL algorithm in classification, and that the noise comes from the randomly initialized linear layer and the cross-task interaction, interfering with the training of the encoder.
• We verify the inherent contrastiveness of MAML based on a cosine similarity analysis.
• We experimentally validate our analysis and show that applying the zeroing trick induces a notable improvement in testing accuracy during training. We also show that during meta-testing, a pronounced increase in accuracy occurs when the zeroing trick is applied.

2 WHY MAML IS IMPLICITLY A NOISY SUPERVISED CONTRASTIVE ALGORITHM?

2.1 PRELIMINARY: SUPERVISED CONTRASTIVE LEARNING. In this work, we aim to bridge MAML and supervised contrastive learning (SCL) and attribute the success of MAML to SCL's capacity to learn good representations. Thus, we briefly introduce SCL. [1: We use the term supervised contrastiveness to refer to the setting of using ground-truth label information to differentiate positive samples from negative samples (Khosla et al., 2020). This setting is different from (unsupervised/self-supervised) contrastive learning.] Supervised contrastive learning, proposed by Khosla et al. (2020), is a generalization of several metric learning algorithms, such as the triplet loss and N-pair loss (Schroff et al., 2015; Sohn, 2016), and has shown the best classification performance compared to SimCLR and cross-entropy training. In Khosla et al. (2020), SCL is described as "contrasts the set of all samples from the same class as positives against the negatives from the remainder of the batch" and "embeddings from the same class are pulled closer together than embeddings from different classes." For a sample s, label information is leveraged to indicate positive samples (i.e., samples having the same label as sample s) and negative samples (i.e.
, samples having different labels from sample s). The loss of SCL is designed to increase the similarity (or decrease the metric distance) of embeddings of positive samples and to decrease the similarity (or increase the metric distance) of embeddings of negative samples (Khosla et al., 2020). In essence, SCL combines supervised learning and contrastive learning, and differs from supervised learning in that the loss contains a measurement of the similarity or metric distance between the embedding of a sample and the embeddings of its positive/negative sample pairs. Now we give a formal definition of SCL. Consider a set of $N$ samples drawn from an $n$-class dataset. Let $i \in I = \{1, \dots, N\}$ be the index of an arbitrary sample, $A(i) = I \setminus \{i\}$, $P(i)$ be the set of indices of all positive samples of sample $i$, and $N(i) = A(i) \setminus P(i)$ be the set of indices of all negative samples of sample $i$. Let $z_i$ denote the embedding of sample $i$.

Definition 1. Let $M_{sim}$ be a measurement of similarity (e.g., inner product, cosine similarity). Training algorithms that adopt a loss of the following form belong to SCL:
$$\mathcal{L}_{SCL} = \sum_i \sum_{p \in P(i)} c^-_{p,i} M_{sim}(z_i, z_p) + \sum_i \sum_{n \in N(i)} c^+_{n,i} M_{sim}(z_i, z_n) + c, \qquad (1)$$
where $c^-_{p,i} < 0$ and $c^+_{n,i} > 0$ for all $n$, $p$ and $i$; and $c$ is a constant independent of the samples. We further define that a training algorithm that follows Eq. (1), but with either (a) $c^+_{n,i} < 0$ for some $n$, $i$, or (b) $c$ a constant dependent on the samples, belongs to noisy SCL.

2.2 PROBLEM SETUP. We provide a detailed derivation to show that MAML is implicitly a noisy SCL algorithm, where we adopt few-shot classification as the example application. In this section, we focus on the meta-training period. Consider drawing a batch of tasks $\{T_1, \dots, T_{N_{batch}}\}$ from a meta-training task distribution $D$.
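Definition 1 can be instantiated directly. The sketch below (our simplification: inner product as M_sim and unit coefficients c^-_{p,i} = -1, c^+_{n,i} = +1, c = 0) computes the SCL loss of Eq. (1):

```python
import numpy as np

def scl_loss(z, labels):
    """Eq. (1) with M_sim = inner product, c^-_{p,i} = -1, c^+_{n,i} = +1,
    c = 0. z: (N, d) embeddings; labels: (N,) integer class labels."""
    sim = z @ z.T  # M_sim(z_i, z_j) for all pairs
    loss = 0.0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if j == i:
                continue
            if labels[j] == labels[i]:
                loss -= sim[i, j]  # positives: reward high similarity
            else:
                loss += sim[i, j]  # negatives: penalize high similarity
    return loss

# Two classes with identical, mutually orthogonal unit embeddings per class:
# the four ordered positive pairs contribute -1 each, all negatives 0.
z = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
labels = np.array([0, 0, 1, 1])
print(scl_loss(z, labels))  # -> -4.0
```

Minimizing this loss pulls same-class embeddings together and pushes different-class embeddings apart, which is the behavior the paper attributes to MAML's outer loop.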
Each task $T_n$ contains a support set $S_n$ and a query set $Q_n$, where $S_n = \{(s_m, t_m)\}_{m=1}^{N_{way} \times N_{shot}}$, $Q_n = \{(q_m, u_m)\}_{m=1}^{N_{way} \times N_{query}}$, $s_m, q_m \in \mathbb{R}^{N_{in}}$ are data samples, and $t_m, u_m \in \{1, \dots, N_{way}\}$ are labels. We denote by $N_{way}$ the number of classes in each task, and by $\{N_{shot}, N_{query}\}$ respectively the number of support and query samples per class. The architecture of our base-model comprises a convolutional encoder $\phi: \mathbb{R}^{N_{in}} \to \mathbb{R}^{N_f}$ (parameterized by $\varphi$), a fully connected linear head $w \in \mathbb{R}^{N_f \times N_{way}}$, and a softmax output layer, where $N_f$ is the dimension of the feature space. We denote the $k$th column of $w$ as $w_k$. Note that the base-model parameters $\theta$ consist of $\varphi$ and $w$. As shown in Appendix A, both FOMAML and SOMAML adopt a training strategy comprising the inner loop and the outer loop. At the beginning of a meta-training iteration, we sample $N_{batch}$ tasks. For each task $T_n$, we perform inner loop updates using the inner loop loss (cf. Eq. (2)) evaluated on the support data, and then evaluate the outer loop loss (cf. Eq. (3)) on the updated base-model using the query data. In the $i$th step of the inner loop, the parameters $\{\varphi^{i-1}, w^{i-1}\}$ are updated to $\{\varphi^i, w^i\}$ using the multi-class cross-entropy loss evaluated on the support set $S_n$:
$$\mathcal{L}_{\{\varphi^{i-1}, w^{i-1}\}, S_n} = \mathbb{E}_{(s,t) \sim S_n} \sum_{j=1}^{N_{way}} \mathbb{1}_{j=t} \left[ -\log \frac{\exp\left(\phi^{i-1}(s)^\top w_j^{i-1}\right)}{\sum_{k=1}^{N_{way}} \exp\left(\phi^{i-1}(s)^\top w_k^{i-1}\right)} \right]. \qquad (2)$$
After $N_{step}$ inner loop updates, we compute the outer loop loss using the query data $Q_n$:
$$\mathcal{L}_{\{\varphi^{N_{step}}, w^{N_{step}}\}, Q_n} = \mathbb{E}_{(q,u) \sim Q_n} \left[ -\log \frac{\exp\left(\phi^{N_{step}}(q)^\top w_u^{N_{step}}\right)}{\sum_{k=1}^{N_{way}} \exp\left(\phi^{N_{step}}(q)^\top w_k^{N_{step}}\right)} \right]. \qquad (3)$$
Then, we sum up the outer loop losses of all tasks, and perform gradient descent to update the base-model's initial parameters $\{\varphi^0, w^0\}$.
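Under the EFIL assumption introduced below, the bi-level scheme of Eqs. (2)-(3) reduces to updating only the head w in the inner loop. A minimal NumPy sketch of one inner step and the outer evaluation (our own toy: a fixed tanh map stands in for the frozen encoder, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

n_way, n_f, eta = 5, 16, 0.1
phi = np.tanh                                   # frozen "encoder" (EFIL)
support = rng.standard_normal((n_way, n_f))     # N_shot = 1 sample per class
s_labels = np.arange(n_way)
query = rng.standard_normal((n_way, n_f))
q_labels = np.arange(n_way)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def ce_loss_and_grad(w, xs, ys):
    """Cross-entropy of Eqs. (2)/(3) and its gradient w.r.t. the head w."""
    loss, grad = 0.0, np.zeros_like(w)
    for x, y in zip(xs, ys):
        p = softmax(phi(x) @ w)
        loss -= np.log(p[y])
        grad += np.outer(phi(x), p - np.eye(n_way)[y])
    return loss / len(xs), grad / len(xs)

# Inner loop (Eq. 2): one gradient step on the support set from w^0 = 0.
w0 = np.zeros((n_f, n_way))
_, g = ce_loss_and_grad(w0, support, s_labels)
w1 = w0 - eta * g

# Outer loss (Eq. 3): evaluated on the query set with the adapted head w^1.
outer_loss, _ = ce_loss_and_grad(w1, query, q_labels)

# Before adaptation (w^0 = 0), every softmax is uniform, so the query
# loss is exactly log(N_way).
pre_loss, _ = ce_loss_and_grad(w0, query, q_labels)
```

In full MAML the outer gradient would also flow into the encoder parameters; here the encoder is frozen, which is exactly the simplification the paper's derivation exploits.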
To show the supervised contrastiveness entailed in MAML, we adopt the assumption that the Encoder $\phi$ is Frozen during the Inner Loop (the EFIL assumption); we discuss the validity of this assumption in Section 2.6. Without loss of generality, we consider training models with MAML with $N_{batch} = 1$ and $N_{step} = 1$, and we discuss the generalized version in Section 2.6. For simplicity, the $k$th element of the model output, $\frac{\exp(\phi(s)^\top w^0_k)}{\sum_{j=1}^{N_{way}} \exp(\phi(s)^\top w^0_j)}$ (respectively $\frac{\exp(\phi(q)^\top w^1_k)}{\sum_{j=1}^{N_{way}} \exp(\phi(q)^\top w^1_j)}$), of sample s (respectively q) is denoted as $s_k$ (respectively $q_k$).
This work shows that, in the setting of few-shot classification, MAML is a (noisy) contrastive learner. Complementary theoretical and experimental results are provided. The theoretical results also lead to a (to my knowledge) new proposal, namely to zero out the linear layer after each MAML outer loop update. In their experiments, the authors show that making this small change to the MAML algorithm can lead to meaningful improvements in performance.
SP:4c1957c7170ce4fa16d5b7efddc0bf156a6ea9f1
MAML is a Noisy Contrastive Learner
1 INTRODUCTION. Humans can learn from very few samples. They can readily establish their cognition and understanding of novel tasks, environments, or domains even with very limited experience in the corresponding circumstances. Meta-learning, a subfield of machine learning, aims at equipping machines with such a capacity to effectively accommodate new scenarios (Vilalta & Drissi, 2002; Grant et al., 2018). Machines learn to extract task-agnostic information so that their performance on unseen tasks can be improved (Hospedales et al., 2020). One highly influential meta-learning algorithm is Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017), which has inspired numerous follow-up extensions (Nichol et al., 2018; Rajeswaran et al., 2019; Liu et al., 2019; Finn et al., 2019; Jamal & Qi, 2019; Javed & White, 2019). MAML estimates a set of model parameters such that adapting the model to a new task only requires a few updates to those parameters. We take the few-shot classification task as an example to review the algorithmic procedure of MAML. A few-shot classification problem refers to classifying samples from some classes (i.e., query data) after seeing a few examples per class (i.e., support data). In a meta-learning scenario, we consider a distribution of tasks, where each task is a few-shot classification problem and different tasks have different target classes. MAML aims to meta-train the base-model on training tasks (i.e., the meta-training dataset) and to evaluate the performance of the base-model on testing tasks sampled from a held-out unseen dataset (i.e., the meta-testing dataset). In meta-training, MAML follows a bi-level optimization scheme composed of the inner loop and the outer loop, as shown in Appendix A (please refer to Section 2 for a detailed definition). In the inner loop (also known as fast adaptation), the base-model θ is updated to θ′ using the support set.
In the outer loop, a loss is evaluated on θ′ using the query set, and its gradient is computed with respect to θ to update the base-model. Since the outer loop requires computing the gradient of a gradient (as the update in the inner loop is included in the entire computation graph), this is called second-order MAML (SOMAML). To avoid computing the Hessian matrix, Finn et al. (2017) propose first-order MAML (FOMAML), which uses the gradient computed with respect to the inner-loop-updated parameters θ′ to update the base-model. The widely accepted intuition behind MAML is that the models are encouraged to learn general-purpose representations which are broadly applicable not only to the seen tasks but also to novel tasks (Finn et al., 2017; Raghu et al., 2020; Goldblum et al., 2020). Raghu et al. (2020) confirm this perspective by showing that during fast adaptation, the majority of changes are made in the final linear layers, while the convolution layers (as the feature encoder) remain almost static. This implies that models trained with MAML learn a good feature representation and only have to change the linear mapping from features to outputs during fast adaptation. Similar ideas of freezing the feature extractor during the inner loop have also been explored (Lee et al., 2019; Bertinetto et al., 2019; Liu et al., 2020), and have been held as an assumption in theoretical works (Du et al., 2021; Tripuraneni et al., 2020; Chua et al., 2021). While this intuition sounds satisfactory, we step further and ask the following fundamental questions: (1) In what sense does MAML guide a model to learn general-purpose representations? (2) How do the inner and outer loops in the training mechanism of MAML collaborate to achieve this? (3) What is the role of the support and query data, and how do they, if at all, interact with each other?
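The FOMAML/SOMAML distinction can be illustrated on a one-dimensional quadratic task loss (a toy of my own choosing, not the paper's model): SOMAML multiplies the query gradient by the Jacobian of the inner update, while FOMAML simply drops that factor.

```python
# Toy tasks (illustrative assumption): support loss L_S(t) = (t - a)^2,
# query loss L_Q(t) = (t - b)^2, one inner-loop step with rate alpha.
def maml_grads(theta, a, b, alpha=0.1):
    g_support = 2 * (theta - a)           # inner-loop gradient at theta
    theta_p = theta - alpha * g_support   # fast adaptation: theta -> theta'
    g_query = 2 * (theta_p - b)           # outer-loop gradient at theta'
    fomaml = g_query                      # first-order: ignore d(theta')/d(theta)
    somaml = (1 - 2 * alpha) * g_query    # second-order: chain rule through inner step
    return fomaml, somaml

fo, so = maml_grads(theta=0.0, a=1.0, b=-1.0)
```

Here $d\theta'/d\theta = 1 - \alpha L_S''(\theta) = 1 - 2\alpha$, so the two gradients differ exactly by that curvature-dependent factor; for neural networks this factor is the Hessian-dependent term that FOMAML avoids computing.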
In this paper, we answer these questions and give new insights into the working mechanism of MAML, which turns out to be closely connected to supervised contrastive learning (SCL)1. Here, we provide a sketch of our analysis in Figure 1. We consider a setting of (a) a 5-way 1-shot paradigm of few-shot learning, (b) the mean square error (MSE) between the one-hot encoding of the groundtruth label and the outputs as the objective function, and (c) MAML with a single inner-loop update. At the beginning of the inner loop, we set the linear layer w0 to zero. Then, the inner loop update of w0 is equivalent to adding the support features to w0. In the outer loop, the output for a query sample q1 is simply the inner product between the query feature ϕ(q1) and all support features (the learning rate is omitted for now). As the groundtruth is a one-hot vector, the encoder is trained either to minimize the inner product between the query features and the support features (when they are from different classes, as shown in the green box), or to pull the inner product between the query features and the support features to 1 (when they have the same label, as shown in the red box). Therefore, the inner loop and the outer loop together manifest an SCL objective. In particular, as the vanilla implementation of MAML uses a non-zero (random) initialization for the linear layer, we will show that such initialization leads to a noisy SCL objective that impedes training. In this paper, we first review a formal definition of SCL, present the more general case of MAML with cross-entropy loss in classification, and show the underlying learning protocol of vanilla MAML to be an interfered SCL in Section 2. We then experimentally verify the supervised contrastiveness of MAML, and propose to mitigate the interference with our simple but effective technique of zero-initialization and the zeroing trick (cf. Section 3).
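The sketch above (zero-initialized head, linear output, MSE, one inner step) can be written out in a few lines of NumPy; the function names, learning rate, and toy features are illustrative assumptions.

```python
import numpy as np

def inner_step_mse(feats_s, labels_s, n_way, lr=0.5):
    """With w0 = 0, linear output, and MSE loss, the gradient of
    sum_s ||phi(s)^T w - onehot(t)||^2 at w0 is -2 * onehot_k(t) * phi(s),
    so one inner step makes column k of w a scaled sum of the class-k
    support features."""
    w1 = np.zeros((feats_s.shape[1], n_way))
    for f, t in zip(feats_s, labels_s):
        w1[:, t] += 2 * lr * f        # column t accumulates support feature f
    return w1

feats_s = np.array([[1.0, 0.0], [0.0, 1.0]])   # one support sample per class
w1 = inner_step_mse(feats_s, labels_s=[0, 1], n_way=2)
q = np.array([0.8, 0.6])                        # a query feature phi(q)
out = q @ w1   # = 2*lr * [phi(q).phi(s_0), phi(q).phi(s_1)]
```

With `lr=0.5` the query output is exactly the vector of inner products between the query feature and each class's support feature, which is why pushing the output toward a one-hot target is a contrastive objective on the features.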
In summary, our main contributions are three-fold: • We show that MAML is implicitly an SCL algorithm in classification and that the noise comes from the randomly initialized linear layer and the cross-task interaction, interfering with the training of the encoder. • We verify the inherent contrastiveness of MAML based on a cosine similarity analysis. • We experimentally validate our analysis and show that applying the zeroing trick induces a notable improvement in testing accuracy during training. We also show that during meta-testing, a pronounced increase in accuracy occurs when the zeroing trick is applied. 2 WHY IS MAML IMPLICITLY A NOISY SUPERVISED CONTRASTIVE ALGORITHM? 2.1 PRELIMINARY: SUPERVISED CONTRASTIVE LEARNING. In this work, we aim to bridge MAML and supervised contrastive learning (SCL) and attribute the success of MAML to SCL's capacity for learning good representations. Thus, we briefly introduce SCL. 1We use the term supervised contrastiveness to refer to the setting of using groundtruth label information to differentiate positive samples from negative samples (Khosla et al., 2020). This setting is different from (unsupervised/self-supervised) contrastive learning. Supervised contrastive learning, proposed by Khosla et al. (2020), is a generalization of several metric learning algorithms, such as the triplet loss and N-pair loss (Schroff et al., 2015; Sohn, 2016), and has shown the best classification performance compared to SimCLR and cross-entropy training. In Khosla et al. (2020), SCL is described as "contrasts the set of all samples from the same class as positives against the negatives from the remainder of the batch" and "embeddings from the same class are pulled closer together than embeddings from different classes." For a sample s, label information is leveraged to indicate positive samples (i.e., samples having the same label as sample s) and negative samples (i.e.
, samples having different labels from sample s). The loss of SCL is designed to increase the similarity (or decrease the metric distance) of embeddings of positive samples and to decrease the similarity (or increase the metric distance) of embeddings of negative samples (Khosla et al., 2020). In essence, SCL combines supervised learning and contrastive learning, and differs from supervised learning in that the loss contains a measurement of the similarity or metric distance between the embedding of a sample and the embeddings of its positive/negative sample pairs. We now give a formal definition of SCL. Consider a set of N samples drawn from an n-class dataset. Let i ∈ I = {1, ..., N} be the index of an arbitrary sample, let A(i) = I \ {i}, let P(i) be the set of indices of all positive samples of sample i, and let N(i) = A(i) \ P(i) be the set of indices of all negative samples of sample i. Let $z_i$ denote the embedding of sample i. Definition 1 Let $M_{sim}$ be a measurement of similarity (e.g., inner product, cosine similarity). Training algorithms that adopt a loss of the following form belong to SCL:

$$\mathcal{L}_{SCL} = \sum_{i} \sum_{p \in P(i)} c^{-}_{p,i} M_{sim}(z_i, z_p) + \sum_{i} \sum_{n \in N(i)} c^{+}_{n,i} M_{sim}(z_i, z_n) + c \qquad (1)$$

where $c^{-}_{p,i} < 0$ and $c^{+}_{n,i} > 0$ for all n, p, and i, and c is a constant independent of the samples. We further define that a training algorithm that follows Eq. (1), but with either (a) $c^{+}_{n,i} < 0$ for some n, i, or (b) c being a constant dependent on the samples, belongs to noisy SCL. 2.2 PROBLEM SETUP. We provide a detailed derivation to show that MAML is implicitly a noisy SCL algorithm, adopting few-shot classification as the example application. In this section, we focus on the meta-training period. Consider drawing a batch of tasks $\{T_1, \ldots, T_{N_{batch}}\}$ from a meta-training task distribution D.
Each task $T_n$ contains a support set $S_n$ and a query set $Q_n$, where $S_n = \{(s_m, t_m)\}_{m=1}^{N_{way} \times N_{shot}}$, $Q_n = \{(q_m, u_m)\}_{m=1}^{N_{way} \times N_{query}}$, $s_m, q_m \in \mathbb{R}^{N_{in}}$ are data samples, and $t_m, u_m \in \{1, \ldots, N_{way}\}$ are labels. We denote by $N_{way}$ the number of classes in each task, and by $\{N_{shot}, N_{query}\}$ the number of support and query samples per class, respectively. The architecture of our base-model comprises a convolutional encoder $\phi : \mathbb{R}^{N_{in}} \to \mathbb{R}^{N_f}$ (parameterized by $\varphi$), a fully connected linear head $w \in \mathbb{R}^{N_f \times N_{way}}$, and a softmax output layer, where $N_f$ is the dimension of the feature space. We denote the $k$th column of $w$ as $w_k$. Note that the base-model parameters $\theta$ consist of $\varphi$ and $w$. As shown in Appendix A, both FOMAML and SOMAML adopt a training strategy comprising the inner loop and the outer loop. At the beginning of a meta-training iteration, we sample $N_{batch}$ tasks. For each task $T_n$, we perform inner loop updates using the inner loop loss (cf. Eq. (2)) evaluated on the support data, and then evaluate the outer loop loss (cf. Eq. (3)) on the updated base-model using the query data. In the $i$th step of the inner loop, the parameters $\{\varphi^{i-1}, w^{i-1}\}$ are updated to $\{\varphi^{i}, w^{i}\}$ using the multi-class cross-entropy loss evaluated on the support dataset $S_n$:

$$\mathcal{L}_{\{\varphi^i, w^i\}, S_n} = \mathbb{E}_{(s,t) \sim S_n} \sum_{j=1}^{N_{way}} \mathbb{1}_{j=t} \left[ -\log \frac{\exp(\phi^i(s)^\top w^i_j)}{\sum_{k=1}^{N_{way}} \exp(\phi^i(s)^\top w^i_k)} \right] \qquad (2)$$

After $N_{step}$ inner loop updates, we compute the outer loop loss using the query data $Q_n$:

$$\mathcal{L}_{\{\varphi^{N_{step}}, w^{N_{step}}\}, Q_n} = \mathbb{E}_{(q,u) \sim Q_n} \left[ -\log \frac{\exp(\phi^{N_{step}}(q)^\top w^{N_{step}}_u)}{\sum_{k=1}^{N_{way}} \exp(\phi^{N_{step}}(q)^\top w^{N_{step}}_k)} \right] \qquad (3)$$

Then, we sum up the outer loop losses of all tasks and perform gradient descent to update the base-model's initial parameters $\{\varphi^0, w^0\}$.
To show the supervised contrastiveness entailed in MAML, we adopt the assumption that the Encoder $\phi$ is Frozen during the Inner Loop (the EFIL assumption); we discuss the validity of this assumption in Section 2.6. Without loss of generality, we consider training models with MAML with $N_{batch} = 1$ and $N_{step} = 1$, and we discuss the generalized version in Section 2.6. For simplicity, the $k$th element of the model output, $\frac{\exp(\phi(s)^\top w^0_k)}{\sum_{j=1}^{N_{way}} \exp(\phi(s)^\top w^0_j)}$ (respectively $\frac{\exp(\phi(q)^\top w^1_k)}{\sum_{j=1}^{N_{way}} \exp(\phi(q)^\top w^1_j)}$), of sample s (respectively q) is denoted as $s_k$ (respectively $q_k$).
The paper analyzes MAML algorithms. Assuming that in the inner loop the encoder is fixed and only the last linear layer is updated, the authors analyze the gradient updates and loss terms in the inner and outer loops (Sec. 2.3). Through this effort, they claim that there are noisy supervised contrastive terms in the outer loop loss (Eqns. (7) and (8)). They further claim that there are additional interference terms which may degrade the performance of MAML at the beginning of training, when the linear layer weights are largely random. To overcome this, they propose a simple zeroing trick: zeroing the linear layer weights after each outer loop update, essentially removing the interference terms (Eqns. (9) and (10)). They conduct experiments to support the contrastiveness in MAML and the performance improvement from the zeroing trick.
SP:4c1957c7170ce4fa16d5b7efddc0bf156a6ea9f1
MAML is a Noisy Contrastive Learner
In this paper, a new view of MAML under few-shot learning is proposed. The main result is that, under the assumption that the inner loop updates are only applied to the top linear layer, MAML actually performs supervised contrastive learning (SCL). This SCL view shows that MAML learns a feature transformation that makes intra-class feature distances small while keeping inter-class feature distances large. The zeroing trick is proposed based on this result, showing performance gains in the experiments.
SP:4c1957c7170ce4fa16d5b7efddc0bf156a6ea9f1
Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
T ) when the compression error is bounded over the course of training . We provide specific requirements for convergence with common compression techniques , such as quantization and top-k sparsification . Finally , we experimentally show compression can reduce communication by over 90 % without a significant decrease in accuracy over VFL without compression . 1 INTRODUCTION . Federated Learning ( McMahan et al. , 2017 ) is a distributed machine learning approach that has become of much interest in both theory ( Li et al. , 2020 ; Wang et al. , 2019 ; Liu et al. , 2020 ) and practice ( Bonawitz et al. , 2019 ; Rieke et al. , 2020 ; Lim et al. , 2020 ) in recent years . Naive distributed learning algorithms may require frequent exchanges of large amounts of data , which can lead to slow training performance ( Lin et al. , 2020 ) . Further , participants may be globally distributed , with high latency network connections . To mitigate these factors , Federated Learning algorithms aim to be communication-efficient by design . Methods such as local updates ( Moritz et al. , 2016 ; Liu et al. , 2019 ) , where parties train local parameters for multiple iterations without communication , and message compression ( Stich et al. , 2018 ; Wen et al. , 2017 ; Karimireddy et al. , 2019 ) reduce message frequency and size , respectively , with little impact on training performance . Federated Learning methods often target the case where the data among parties is distributed horizontally : each party ’ s data shares the same features but parties hold data corresponding to different sample IDs . This is known as Horizontal Federated Learning ( HFL ) ( Yang et al. , 2019 ) . However , there are several application areas where data is partitioned in a vertical manner : the parties store data on the same sample IDs but different feature spaces . 
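As a back-of-envelope illustration of the headline communication savings (the numbers below are hypothetical and are not the paper's experimental settings):

```python
# Per round, each party uploads a d-dimensional float32 embedding.
d, rounds, parties = 512, 10_000, 3
bits_uncompressed = d * 32 * rounds * parties

# Quantizing every entry to 2 bits (an illustrative compressor choice):
bits_quantized = d * 2 * rounds * parties

reduction = 1 - bits_quantized / bits_uncompressed
assert reduction > 0.9   # over 90% fewer bits, matching the headline claim
```

Any compressor that shrinks per-entry cost from 32 bits to a few bits yields this kind of saving; the paper's contribution is showing when such compression still permits convergence.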
An example of a vertically partitioned setting is a hospital, bank, and insurance company seeking to train a model to predict something of mutual interest, such as customer credit score. Each of these institutions may have data on the same individuals but store medical history, financial transactions, and vehicle accident reports, respectively. These features must remain local to the institutions due to privacy concerns, rules and regulations (e.g., GDPR, HIPAA), and/or communication network limitations. In such a scenario, Vertical Federated Learning (VFL) methods must be employed. Although VFL is less well-studied than HFL, there has been growing interest in VFL algorithms recently (Hu et al., 2019; Gu et al., 2021; Cha et al., 2021), and VFL algorithms have important applications including risk prediction, smart manufacturing, and discovery of pharmaceuticals (Kairouz et al., 2021). Typically in VFL, each party trains a local embedding function that maps raw data features to a meaningful vector representation, or embedding, for prediction tasks. For example, a neural network can be an embedding function for mapping the text of an online article to a vector space for classification (Koehrsen, 2018). Referring to Figure 1a, suppose Party 1 is a hospital with medical data features x1. The hospital computes its embedding h1(θ1; x1) for the features by feeding x1 through a neural network. The other parties (the bank and insurance company) compute embeddings for their features, then all parties share the embeddings in a private manner (e.g., homomorphic encryption, secure multi-party computation, or secure aggregation). The embeddings are then combined in a server model θ0 to determine the final loss of the global model. A server model (or fusion network) captures the complicated interactions of embeddings and is often a complex, non-linear model (Gu et al., 2019; Nie et al.
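The forward pass just described — each party embeds its own feature block and the server fuses the embeddings — can be sketched as follows; the tanh encoders, the shapes, and the single-output linear server model are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# M parties, N shared samples, each party holding d_feat private features.
M, N, d_feat, d_emb = 3, 4, 5, 2
X_parts = [rng.normal(size=(N, d_feat)) for _ in range(M)]    # vertical blocks
W_parts = [rng.normal(size=(d_feat, d_emb)) for _ in range(M)]
W_server = rng.normal(size=(M * d_emb, 1))

def party_embedding(m):
    """h_m(theta_m; x_m): party m's local encoder applied to its block."""
    return np.tanh(X_parts[m] @ W_parts[m])

# Parties share only embeddings (never raw features); the server fuses them.
H = np.concatenate([party_embedding(m) for m in range(M)], axis=1)
logits = H @ W_server          # server (fusion) model output per sample
```

Only the `d_emb`-dimensional embeddings cross institutional boundaries, which is what makes their size — and hence their compressibility — the communication bottleneck.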
, 2021; Han et al., 2021). In practice, embeddings can be very large, sometimes requiring terabytes of communication over the course of training. Motivated by this, we propose Compressed Vertical Federated Learning (C-VFL), a general framework for communication-efficient Federated Learning over vertically partitioned data. In our algorithm, parties communicate compressed embeddings periodically, and the parties and the server each run block-coordinate descent for multiple local iterations, in parallel, using stochastic gradients to update their local parameters. C-VFL is the first theoretically verified VFL algorithm that applies embedding compression. Unlike in HFL algorithms, C-VFL compresses embeddings rather than gradients. Previous work has proven convergence for HFL algorithms with gradient compression (Stich et al., 2018; Wen et al., 2017; Karimireddy et al., 2019). However, no previous work analyzes the convergence requirements for VFL algorithms that use embedding compression. Embeddings are parameters in the partial derivatives calculated at each party. The effect of compression error on the resulting partial derivatives may be complex; therefore, the analysis in previous work on gradient compression in HFL does not apply to compression in VFL. In our work, we prove that, under a diminishing compression error, C-VFL converges at a rate of $O(1/\sqrt{T})$, which is comparable to previous VFL algorithms that do not employ compression. We also analyze common compressors, such as quantization and sparsification, in C-VFL and provide bounds on their compression parameters to ensure convergence. C-VFL also generalizes previous work by supporting an arbitrary server model. Previous work in VFL has either only analyzed an arbitrary server model without local updates (Chen et al., 2020), or analyzed local updates with a linear server model (Liu et al., 2019; Zhang et al., 2020; Das & Patterson, 2021).
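One concrete embedding compressor of the kind analyzed is top-k sparsification; here is a minimal sketch (the function name and parameters are my own, for illustration):

```python
import numpy as np

def top_k(h, k):
    """Top-k sparsification of an embedding: keep the k largest-magnitude
    entries, zero the rest. Only the k surviving (index, value) pairs need
    to be transmitted."""
    out = np.zeros_like(h)
    idx = np.argsort(np.abs(h))[-k:]   # indices of the k largest magnitudes
    out[idx] = h[idx]
    return out

h = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
c = top_k(h, k=2)              # only the entries -3.0 and 2.0 survive
err = np.linalg.norm(h - c)    # compression error that must stay bounded
```

The quantity `err` is exactly the per-message compression error whose boundedness (or decay) the convergence analysis requires.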
C-VFL is designed with an arbitrary server model, allowing support for more complex prediction tasks than those supported by previous VFL algorithms. We summarize our main contributions in this work.
1. We introduce C-VFL with an arbitrary compression scheme. Our algorithm generalizes previous work in VFL by including both an arbitrary server model and multiple local iterations.
2. We prove convergence of C-VFL to a fixed point on non-convex objectives at a rate of O(1/√T) for a fixed step size when the compression error is bounded over the course of training. We also prove that the algorithm convergence error goes to zero for a diminishing step size if the compression error diminishes as well. Our work provides novel analysis for the effect of compressing embeddings on convergence in a VFL algorithm. Our analysis also applies to Split Learning when uploads to the server are compressed.
3. We provide convergence bounds on parameters in common compressors that can be used in C-VFL. In particular, we examine scalar quantization (Bennett, 1948), lattice vector quantization (Zamir & Feder, 1996), and top-k sparsification (Lin et al., 2018).
4. We evaluate our algorithm by training LSTMs on the MIMIC-III dataset and CNNs on the ModelNet10 dataset. We empirically show how C-VFL can reduce the number of bits sent by over 90% compared to VFL with no compression, without a significant loss in accuracy of the final model.
Related Work. Richtárik & Takáč (2016); Hardy et al. (2017) were the first works to propose Federated Learning algorithms for vertically partitioned data. Chen et al. (2020); Romanini et al. (2021) propose the inclusion of an arbitrary server model in a VFL algorithm. However, these works do not consider multiple local iterations, and thus communicate at every iteration. Liu et al.
(2019), Feng & Yu (2020), and Das & Patterson (2021) all propose different VFL algorithms with local iterations for vertically partitioned data but do not consider an arbitrary server model. In contrast to previous works, our work addresses a vertical scenario, an arbitrary server model, local iterations, and message compression. Message compression is a common topic in HFL scenarios, where participants exchange gradients determined by their local datasets. Methods of gradient compression in HFL include scalar quantization (Bernstein et al., 2018), vector quantization (Shlezinger et al., 2021), and top-k sparsification (Shi et al., 2019). In C-VFL, compressed embeddings are shared, rather than compressed gradients. Analysis in previous work on gradient compression in HFL does not apply to compression in VFL, as the effect of embedding compression error on each party's partial derivatives may be complex. No prior work has analyzed the impact of compression on convergence in VFL.
Outline. In Section 2, we provide the problem formulation and our assumptions. Section 3 presents the details of C-VFL. In Section 4, we present our main theoretical results. Our experimental results are given in Section 5. Finally, we conclude in Section 6.
2 PROBLEM FORMULATION
We present our problem formulation and notation to be used in the rest of the paper. We let ‖a‖ be the 2-norm of a vector a, and ‖A‖_F be the Frobenius norm of a matrix A. We consider a set of M parties {1, . . . , M} and a server. The dataset X ∈ R^{N×D} is vertically partitioned a priori across the M parties, where N is the number of data samples and D is the number of features. The i-th row of X corresponds to a data sample x^i. For each sample x^i, a party m holds a disjoint subset of the features, denoted x^i_m, so that x^i = [x^i_1, . . . , x^i_M]. For each x^i, there is a corresponding label y^i. Let y ∈ R^{N×1} be the vector of all sample labels.
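The notation just defined can be made concrete with a toy data matrix; the shapes and split points below are illustrative choices, not from the paper.

```python
import numpy as np

N, D, M = 6, 7, 3                        # samples, total features, parties
X = np.arange(N * D).reshape(N, D)       # dataset X in R^{N x D}
y = np.ones((N, 1))                      # labels y in R^{N x 1}

# vertical a priori partition: party m holds a disjoint feature block X_m
splits = [3, 5]                          # feature widths per party: 3, 2, 2
Xm = np.split(X, splits, axis=1)         # Xm[m] is X_m in R^{N x D_m}

# row i of X is sample x^i; each party holds its slice x^i_m
i = 4
xi_parts = [Xm[m][i] for m in range(M)]  # x^i_m held by each party
xi = np.concatenate(xi_parts)            # x^i = [x^i_1, ..., x^i_M]
```

Concatenating the party slices recovers the full sample, which is exactly the decomposition x^i = [x^i_1, . . . , x^i_M] used throughout the analysis.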
We let X_m ∈ R^{N×D_m} be the local dataset of a party m, where the i-th row corresponds to data features x^i_m. We assume that the server and all parties have a copy of the labels y. For scenarios where the labels are private and only present at a single party, the label holder can provide enough information for the parties to compute gradients for some classes of model architectures (Liu et al., 2019). Each party m holds a set of model parameters θ_m as well as a local embedding function h_m(·). The server holds a set of parameters θ_0 called the server model and a loss function l(·) that combines the embeddings h_m(θ_m; x^i_m) from all parties. Our objective is as follows:

minimize_Θ F(Θ; X; y) := (1/N) ∑_{i=1}^{N} l(θ_0, h_1(θ_1; x^i_1), . . . , h_M(θ_M; x^i_M); y^i)   (1)

where Θ = [θ_0^T, . . . , θ_M^T]^T is the global model. An example of a global model Θ is in Figure 1a. For simplicity, we let m = 0 refer to the server, and define h_0(θ_0; x^i) := θ_0 for all x^i, where h_0(·) is equivalent to the identity function. Let h_m(θ_m; x^i_m) ∈ R^{P_m} for m = 0, . . . , M, where P_m is the size of the m-th embedding. Let ∇_m F(Θ; X; y) := (1/N) ∑_{i=1}^{N} ∇_{θ_m} l(θ_0, h_1(θ_1; x^i_1), . . . , h_M(θ_M; x^i_M); y^i) be the partial derivatives for parameters θ_m. Let X^B and y^B be the set of samples and labels corresponding to a randomly sampled mini-batch B of size B. We let the stochastic partial derivatives for parameters θ_m be ∇_m F_B(Θ; X; y) := (1/B) ∑_{x^i, y^i ∈ X^B, y^B} ∇_{θ_m} l(θ_0, h_1(θ_1; x^i_1), . . . , h_M(θ_M; x^i_M); y^i). We may drop X and y from F(·) and F_B(·). With a minor abuse of notation, we let h_m(θ_m; X^B_m) := {h_m(θ_m; x^{B_1}_m), . . . , h_m(θ_m; x^{B_B}_m)} be the set of all party-m embeddings associated with mini-batch B, where B_i is the i-th sample in the mini-batch B. We let ∇_m F_B(Θ) and ∇_m F_B(θ_0, h_1(θ_1; X^B_1), . . . , h_M(θ_M; X^B_M)) be equivalent, and use them interchangeably.
Assumption 1.
Smoothness: There exist positive constants L < ∞ and L_m < ∞, for m = 0, . . . , M, such that for all Θ_1, Θ_2, the objective function satisfies ‖∇F(Θ_1) − ∇F(Θ_2)‖ ≤ L ‖Θ_1 − Θ_2‖ and ‖∇_m F_B(Θ_1) − ∇_m F_B(Θ_2)‖ ≤ L_m ‖Θ_1 − Θ_2‖.
Assumption 2. Unbiased gradients: For m = 0, . . . , M, for a randomly selected mini-batch B, the stochastic partial derivatives are unbiased, i.e., E_B ∇_m F_B(Θ) = ∇_m F(Θ).
Assumption 3. Bounded variance: For m = 0, . . . , M, there exist constants σ_m < ∞ such that the variance of the stochastic partial derivatives is bounded as: E_B ‖∇_m F(Θ) − ∇_m F_B(Θ)‖² ≤ σ²_m / B for a randomly selected mini-batch B of size B.
Assumption 1 bounds how fast the gradient and stochastic partial derivatives can change. Assumptions 2 and 3 require that the stochastic partial derivatives are unbiased estimators of the true partial derivatives with bounded variance. Assumptions 1–3 are common assumptions in convergence analysis of gradient-based algorithms (Tsitsiklis et al., 1986; Nguyen et al., 2018; Bottou et al., 2018). We note Assumptions 2–3 are similar to the IID assumptions in HFL convergence analysis. However, in VFL settings, all parties store identical sample IDs but different subsets of features. Hence, there is no equivalent notion of a non-IID distribution in VFL.
Assumption 4. Bounded Hessian: There exist positive constants H_m for m = 0, . . . , M such that for all Θ, the second partial derivatives of F_B with respect to h_m(θ_m; X^B_m) satisfy: ‖∇²_{h_m(θ_m; X^B_m)} F_B(Θ)‖_F ≤ H_m for any mini-batch B.
Assumption 5. Bounded Embedding Gradients: There exist positive constants G_m for m = 0, . . . , M such that for all θ_m, the stochastic embedding gradients are bounded by: ‖∇_{θ_m} h_m(θ_m; X^B_m)‖_F ≤ G_m for any mini-batch B.
Since we assume the gradient of the loss is Lipschitz continuous (Assumption 1), we know the Hessian of F is bounded.
Assumption 4 strengthens this assumption slightly to also bound the Hessian over any mini-batch. Assumption 5 bounds the magnitude of the partial derivatives with respect to embeddings. This embedding gradient bound is necessary to ensure convergence in the presence of embedding compression error (see appendix for details).
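As a concrete instance of objective (1), the sketch below builds a small tanh embedding network for each party and a non-linear server head with a logistic loss. All shapes, activation choices, and the particular loss are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, P = 8, 3, 5                    # samples, parties, embedding size P_m
D = [2, 3, 4]                        # feature count D_m per party
Xs = [rng.normal(size=(N, d)) for d in D]
y = rng.choice([-1.0, 1.0], size=N)  # labels y^i in {-1, +1}

# party embedding functions h_m(theta_m; x^i_m): a one-layer tanh network each
thetas = [0.1 * rng.normal(size=(d, P)) for d in D]
embed = lambda m: np.tanh(Xs[m] @ thetas[m])   # N x P embedding matrix for party m

# "server model" theta_0: a logit computed from the concatenated embeddings
theta0 = 0.1 * rng.normal(size=M * P)

def F():
    """Objective (1): mean of l(theta_0, h_1, ..., h_M; y^i) over all N samples."""
    z = np.concatenate([embed(m) for m in range(M)], axis=1) @ theta0
    return np.mean(np.log1p(np.exp(-y * z)))    # logistic loss per sample, averaged
```

Note how no party ever needs another party's raw features X_m to evaluate F: only the embedding matrices cross party boundaries, which is what makes compressing them attractive.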
There is growing interest in vertical federated learning, where each party stores only a subset of features, due to its various important applications. This paper studies how to make vertical federated learning more communication-efficient. It proposes applying compression to the embeddings that are shared periodically. Compression has been well studied for reducing the network traffic of gradient synchronization, but there has been little study of compressing embeddings. This paper shows that convergence is guaranteed if the compression errors diminish at the same rate as the learning rate. The experimental results show that the vector quantizer achieves the same accuracy as the full-precision counterpart while significantly reducing the communication cost.
Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
T) when the compression error is bounded over the course of training. We provide specific requirements for convergence with common compression techniques, such as quantization and top-k sparsification. Finally, we experimentally show compression can reduce communication by over 90% without a significant decrease in accuracy over VFL without compression.
1 INTRODUCTION
Federated Learning (McMahan et al., 2017) is a distributed machine learning approach that has attracted much interest in both theory (Li et al., 2020; Wang et al., 2019; Liu et al., 2020) and practice (Bonawitz et al., 2019; Rieke et al., 2020; Lim et al., 2020) in recent years. Naive distributed learning algorithms may require frequent exchanges of large amounts of data, which can lead to slow training performance (Lin et al., 2020). Further, participants may be globally distributed, with high-latency network connections. To mitigate these factors, Federated Learning algorithms aim to be communication-efficient by design. Methods such as local updates (Moritz et al., 2016; Liu et al., 2019), where parties train local parameters for multiple iterations without communication, and message compression (Stich et al., 2018; Wen et al., 2017; Karimireddy et al., 2019) reduce message frequency and size, respectively, with little impact on training performance. Federated Learning methods often target the case where the data among parties is distributed horizontally: each party's data shares the same features, but parties hold data corresponding to different sample IDs. This is known as Horizontal Federated Learning (HFL) (Yang et al., 2019). However, there are several application areas where data is partitioned in a vertical manner: the parties store data on the same sample IDs but different feature spaces.
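Two of the compression techniques mentioned above, scalar quantization and top-k sparsification, can be sketched in a few lines. These are generic textbook versions offered as illustrations, not the paper's exact schemes, and lattice vector quantization is omitted for brevity.

```python
import numpy as np

def scalar_quantize(v, bits=4):
    """Uniform scalar quantization: round each entry to one of 2**bits levels."""
    lo, hi = float(v.min()), float(v.max())
    step = (hi - lo) / (2 ** bits - 1) if hi > lo else 1.0
    return lo + np.round((v - lo) / step) * step

def top_k_sparsify(v, k):
    """Keep the k largest-magnitude entries of a vector, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
emb = rng.normal(size=100)           # stand-in for one party's embedding vector
q = scalar_quantize(emb, bits=4)     # per-entry error is at most half a step
s = top_k_sparsify(emb, k=10)        # only 10 of 100 entries need to be transmitted
```

Both compressors trade a bounded, controllable error against a large reduction in bits sent, which is exactly the quantity the convergence analysis later tracks.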
SP:eed3ca7c501eda882c92acd0d8cd3a725030c97c
Compressed-VFL: Communication-Efficient Learning with Vertically Partitioned Data
T ) when the compression error is bounded over the course of training . We provide specific requirements for convergence with common compression techniques , such as quantization and top-k sparsification . Finally , we experimentally show compression can reduce communication by over 90 % without a significant decrease in accuracy over VFL without compression . 1 INTRODUCTION . Federated Learning ( McMahan et al. , 2017 ) is a distributed machine learning approach that has become of much interest in both theory ( Li et al. , 2020 ; Wang et al. , 2019 ; Liu et al. , 2020 ) and practice ( Bonawitz et al. , 2019 ; Rieke et al. , 2020 ; Lim et al. , 2020 ) in recent years . Naive distributed learning algorithms may require frequent exchanges of large amounts of data , which can lead to slow training performance ( Lin et al. , 2020 ) . Further , participants may be globally distributed , with high latency network connections . To mitigate these factors , Federated Learning algorithms aim to be communication-efficient by design . Methods such as local updates ( Moritz et al. , 2016 ; Liu et al. , 2019 ) , where parties train local parameters for multiple iterations without communication , and message compression ( Stich et al. , 2018 ; Wen et al. , 2017 ; Karimireddy et al. , 2019 ) reduce message frequency and size , respectively , with little impact on training performance . Federated Learning methods often target the case where the data among parties is distributed horizontally : each party ’ s data shares the same features but parties hold data corresponding to different sample IDs . This is known as Horizontal Federated Learning ( HFL ) ( Yang et al. , 2019 ) . However , there are several application areas where data is partitioned in a vertical manner : the parties store data on the same sample IDs but different feature spaces . 
An example of a vertically partitioned setting includes a hospital , bank , and insurance company seeking to train a model to predict something of mutual interest , such as customer credit score . Each of these institutions may have data on the same individuals but store medical history , financial transactions , and vehicle accident reports , respectively . These features must remain local to the institutions due to privacy concerns , rules and regulations ( e.g. , GDPR , HIPAA ) , and/or communication network limitations . In such a scenario , Vertical Federated Learning ( VFL ) methods must be employed . Although VFL is less well-studied than HFL , there has been a growing interest in VFL algorithms recently ( Hu et al. , 2019 ; Gu et al. , 2021 ; Cha et al. , 2021 ) , and VFL algorithms have important applications including risk prediction , smart manufacturing , and discovery of pharmaceuticals ( Kairouz et al. , 2021 ) . Typically in VFL , each party trains a local embedding function that maps raw data features to a meaningful vector representation , or embedding , for prediction tasks . For example , a neural network can be an embedding function for mapping the text of an online article to a vector space for classification ( Koehrsen , 2018 ) . Referring to Figure 1a , suppose Party 1 is a hospital with medical data features x1 . The hospital computes its embedding h1 ( θ1 ; x1 ) for the features by feeding x1 through a neural network . The other parties ( the bank and insurance company ) , compute embeddings for their features , then all parties share the embeddings in a private manner ( e.g. , homomorphic encryption , secure multi-party computation , or secure aggregation ) . The embeddings are then combined in a server model θ0 to determine the final loss of the global model . A server model ( or fusion network ) captures the complicated interactions of embeddings and is often a complex , non-linear model ( Gu et al. , 2019 ; Nie et al. 
, 2021 ; Han et al. , 2021 ) . Embeddings can be very large , in practice , sometimes requiring terabytes of communication over the course of training . Motivated by this , we propose Compressed Vertical Federated Learning ( C-VFL ) , a general framework for communication-efficient Federated Learning over vertically partitioned data . In our algorithm , parties communicate compressed embeddings periodically , and the parties and the server each run block-coordinate descent for multiple local iterations , in parallel , using stochastic gradients to update their local parameters . C-VFL is the first theoretically verified VFL algorithm that applies embedding compression . Unlike in HFL algorithms , C-VFL compresses embeddings rather than gradients . Previous work has proven convergence for HFL algorithms with gradient compression ( Stich et al. , 2018 ; Wen et al. , 2017 ; Karimireddy et al. , 2019 ) . However , no previous work analyzes the convergence requirements for VFL algorithms that use embedding compression . Embeddings are parameters in the partial derivatives calculated at each party . The effect of compression error on the resulting partial derivatives may be complex ; therefore , the analysis in previous work on gradient compression in HFL does not apply to compression in VFL . In our work , we prove that , under a diminishing compression error , C-VFL converges at a rate of O ( 1√ T ) , which is comparable to previous VFL algorithms that do not employ compression . We also analyze common compressors , such as quantization and sparsification , in C-VFL and provide bounds on their compression parameters to ensure convergence . C-VFL also generalizes previous work by supporting an arbitrary server model . Previous work in VFL has either only analyzed an arbitrary server model without local updates ( Chen et al. , 2020 ) , or analyzed local updates with a linear server model ( Liu et al. , 2019 ; Zhang et al. , 2020 ; Das & Patterson , 2021 ) . 
C-VFL is designed with an arbitrary server model , allowing support for more complex prediction tasks than those supported by previous VFL algorithms . We summarize our main contributions in this work . 1 . We introduce C-VFL with an arbitrary compression scheme . Our algorithm generalizes previous work in VFL by including both an arbitrary server model and multiple local iterations . 2 . We prove convergence of C-VFL to a fixed point on non-convex objectives at a rate ofO ( 1√ T ) for a fixed step size when the compression error is bounded over the course of training . We also prove that the algorithm convergence error goes to zero for a diminishing step size if the compression error diminishes as well . Our work provides novel analysis for the effect of compressing embeddings on convergence in a VFL algorithm . Our analysis also applies to Split Learning when uploads to the server are compressed . 3 . We provide convergence bounds on parameters in common compressors that can be used in CVFL . In particular , we examine scalar quantization ( Bennett , 1948 ) , lattice vector quantization ( Zamir & Feder , 1996 ) , and top-k sparsification ( Lin et al. , 2018 ) . 4 . We evaluate our algorithm by training LSTMs on the MIMIC-III dataset and CNNs on the ModelNet10 dataset . We empirically show how C-VFL can reduce the number of bits sent by over 90 % compared to VFL with no compression without a significant loss in accuracy of the final model . Related Work . Richtárik & Takác ( 2016 ) ; Hardy et al . ( 2017 ) were the first works to propose Federated Learning algorithms for vertically partitioned data . Chen et al . ( 2020 ) ; Romanini et al . ( 2021 ) propose the inclusion of an arbitrary server model in a VFL algorithm . However , these works do not consider multiple local iterations , and thus communicate at every iteration . Liu et al . 
( 2019 ) , Feng & Yu ( 2020 ) , and Das & Patterson ( 2021 ) all propose different VFL algorithms with local iterations for vertically partitioned data but do not consider an arbitrary server model . In contrast to previous works , our work addresses a vertical scenario , an arbitrary server model , local iterations , and message compression . Message compression is a common topic in HFL scenarios , where participants exchange gradients determined by their local datasets . Methods of gradient compression in HFL include scalar quantization ( Bernstein et al. , 2018 ) , vector quantization ( Shlezinger et al. , 2021 ) , and top-k sparsification ( Shi et al. , 2019 ) . In C-VFL , compressed embeddings are shared , rather than compressed gradients . Analysis in previous work on gradient compression in HFL does not apply to compression in VFL , as the effect of embedding compression error on each party ’ s partial derivatives may be complex . No prior work has analyzed the impact of compression on convergence in VFL . Outline . In Section 2 , we provide the problem formulation and our assumptions . Section 3 presents the details of C-VFL . In Section 4 , we present our main theoretical results . Our experimental results are given in Section 5 . Finally , we conclude in Section 6 . 2 PROBLEM FORMULATION . We present our problem formulation and notation to be used in the rest of the paper . We let ‖a‖ be the 2-norm of a vector a , and let ‖A‖F be the Frobenius norm of a matrix A . We consider a set of M parties { 1 , . . . , M } and a server . The dataset X ∈ RN×D is vertically partitioned a priori across the M parties , where N is the number of data samples and D is the number of features . The i-th row of X corresponds to a data sample xi . For each sample xi , a party m holds a disjoint subset of the features , denoted xim , so that x i = [ xi1 , . . . , x i M ] . For each x i , there is a corresponding label yi . Let y ∈ RN×1 be the vector of all sample labels . 
We let X_m ∈ R^{N×D_m} be the local dataset of party m, where the i-th row corresponds to the data features x^i_m. We assume that the server and all parties have a copy of the labels y. For scenarios where the labels are private and only present at a single party, the label holder can provide enough information for the parties to compute gradients for some classes of model architectures (Liu et al., 2019). Each party m holds a set of model parameters θ_m as well as a local embedding function h_m(·). The server holds a set of parameters θ_0, called the server model, and a loss function l(·) that combines the embeddings h_m(θ_m; x^i_m) from all parties. Our objective is as follows:

minimize_Θ F(Θ; X; y) := (1/N) Σ_{i=1}^N l(θ_0, h_1(θ_1; x^i_1), ..., h_M(θ_M; x^i_M); y^i)    (1)

where Θ = [θ_0^T, ..., θ_M^T]^T is the global model. An example of a global model Θ is in Figure 1a. For simplicity, we let m = 0 refer to the server, and define h_0(θ_0; x^i) := θ_0 for all x^i, where h_0(·) is equivalent to the identity function. Let h_m(θ_m; x^i_m) ∈ R^{P_m} for m = 0, ..., M, where P_m is the size of the m-th embedding. Let ∇_m F(Θ; X; y) := (1/N) Σ_{i=1}^N ∇_{θ_m} l(θ_0, h_1(θ_1; x^i_1), ..., h_M(θ_M; x^i_M); y^i) be the partial derivatives for parameters θ_m. Let X^B and y^B be the set of samples and labels corresponding to a randomly sampled mini-batch B of size B. We let the stochastic partial derivatives for parameters θ_m be ∇_m F_B(Θ; X; y) := (1/B) Σ_{x^i, y^i ∈ X^B, y^B} ∇_{θ_m} l(θ_0, h_1(θ_1; x^i_1), ..., h_M(θ_M; x^i_M); y^i). We may drop X and y from F(·) and F_B(·). With a minor abuse of notation, we let h_m(θ_m; X^B_m) := {h_m(θ_m; x^{B_1}_m), ..., h_m(θ_m; x^{B_B}_m)} be the set of all party-m embeddings associated with mini-batch B, where B_i is the i-th sample in the mini-batch B. We let ∇_m F_B(Θ) and ∇_m F_B(θ_0, h_1(θ_1; X^B_1), ..., h_M(θ_M; X^B_M)) be equivalent, and use them interchangeably. Assumption 1.
Smoothness: There exist positive constants L < ∞ and L_m < ∞, for m = 0, ..., M, such that for all Θ_1, Θ_2, the objective function satisfies ‖∇F(Θ_1) − ∇F(Θ_2)‖ ≤ L‖Θ_1 − Θ_2‖ and ‖∇_m F_B(Θ_1) − ∇_m F_B(Θ_2)‖ ≤ L_m‖Θ_1 − Θ_2‖. Assumption 2. Unbiased gradients: For m = 0, ..., M, for a randomly selected mini-batch B, the stochastic partial derivatives are unbiased, i.e., E_B ∇_m F_B(Θ) = ∇_m F(Θ). Assumption 3. Bounded variance: For m = 0, ..., M, there exist constants σ_m < ∞ such that the variance of the stochastic partial derivatives is bounded as: E_B ‖∇_m F(Θ) − ∇_m F_B(Θ)‖^2 ≤ σ_m^2 / B for a randomly selected mini-batch B of size B. Assumption 1 bounds how fast the gradient and stochastic partial derivatives can change. Assumptions 2 and 3 require that the stochastic partial derivatives are unbiased estimators of the true partial derivatives with bounded variance. Assumptions 1–3 are common assumptions in convergence analysis of gradient-based algorithms (Tsitsiklis et al., 1986; Nguyen et al., 2018; Bottou et al., 2018). We note Assumptions 2–3 are similar to the IID assumptions in HFL convergence analysis. However, in VFL settings, all parties store identical sample IDs but different subsets of features. Hence, there is no equivalent notion of a non-IID distribution in VFL. Assumption 4. Bounded Hessian: There exist positive constants H_m for m = 0, ..., M such that for all Θ, the second partial derivatives of F_B with respect to h_m(θ_m; X^B_m) satisfy: ‖∇²_{h_m(θ_m; X^B_m)} F_B(Θ)‖_F ≤ H_m for any mini-batch B. Assumption 5. Bounded Embedding Gradients: There exist positive constants G_m for m = 0, ..., M such that for all θ_m, the stochastic embedding gradients are bounded by: ‖∇_{θ_m} h_m(θ_m; X^B_m)‖_F ≤ G_m for any mini-batch B. Since we assume the gradient of the loss is Lipschitz continuous (Assumption 1), we know the Hessian of F is bounded.
Assumption 4 strengthens this assumption slightly to also bound the Hessian over any mini-batch. Assumption 5 bounds the magnitude of the partial derivatives with respect to the embeddings. This embedding-gradient bound is necessary to ensure convergence in the presence of embedding compression error (see the appendix for details).
This paper proposes a communication-efficient training algorithm for vertical federated learning (VFL). To save communication, it compresses the embeddings (on the clients) and the parameters (on the server), and it additionally performs multiple local iterations between communication rounds. It then proves a convergence rate of O(1/√T) for the proposed algorithm. Finally, it provides empirical results for validation.
Generate, Annotate, and Learn: Generative Models Advance Self-Training and Knowledge Distillation
1 INTRODUCTION. Unlabeled data is abundant in the real world, but task-specific unlabeled data within the scope of a given machine learning problem can be challenging to find. For instance, one cannot easily find in-domain unlabeled data conforming to the input distribution of a specific Natural Language Processing (NLP) task from the GLUE benchmark (Wang et al., 2019b). Some NLP tasks require an input comprising a pair of sentences with a particular relationship between them. Moreover, classification datasets typically represent a tailored distribution of text and only include a limited number of class labels. If task-specific unlabeled data were available, one could adopt self-training (Yarowsky, 1995) to automatically annotate unlabeled data with pseudo labels to improve the accuracy and robustness of classifiers (Xie et al., 2020; Carmon et al., 2019b). In addition, one can use knowledge distillation (Hinton et al., 2015) on fresh task-specific unlabeled data to more effectively compress deep neural networks and ensembles (Buciluǎ et al., 2006; Chen et al., 2020c). When task-specific unlabeled examples do not exist, one can try to retrieve them from a large and diverse open-domain dataset. For instance, Du et al. (2020) have used nearest-neighbor retrieval to harvest in-domain unlabeled text from the internet, leading to a successful application of self-training and knowledge distillation to certain NLP tasks. While retrieval can indeed help to find in-domain data for problems with simple inputs, it is not practical for problems with complex input schemes, e.g., sentence pairs with certain relations and tabular data. Accordingly, self-training and retrieval-based methods have not been widely adopted for NLP tasks, e.g., on the GLUE benchmark.
This paper presents a deceptively simple and general framework called "generate, annotate, and learn (GAL)" to help advance semi-supervised learning and knowledge distillation on various applications that do not come with unlabeled data. We advocate for the use of language models to synthesize unlabeled task-specific data, in lieu of real unlabeled data. We build on recent advances in text generation (Radford et al., 2019; Gao et al., 2021), and use powerful generative models to synthesize unlabeled text and tables. Then, we use state-of-the-art classifiers to annotate generated unlabeled data with pseudo labels. Finally, we combine labeled data with pseudo-labeled data to train more effective classifiers or for the purpose of knowledge distillation (KD). We motivate GAL by making connections to empirical and vicinal risk minimization (Vapnik, 1992; Chapelle et al., 2001), and demonstrate its utility by presenting empirical results on a wide range of applications. Our key contributions include:

• We propose a simple way to advance SSL, KD, and few-shot learning on NLP by using language models to synthesize large amounts of task-specific unlabeled data.
• We link GAL to empirical and vicinal risk minimization, helping explain why GAL works and why synthetic samples from class-conditional language models are not as effective.
• We systematically dissect GAL and study the key components leading to its success.
• GAL establishes a new SoTA for a single 6-layer transformer on the GLUE test set.
• GAL improves prompt-based few-shot learning, providing an average improvement of 1.3% on four 4-shot learning NLP tasks.
• GAL advances self-training for tabular tasks, outperforming XGBoost on 2 out of 4 tasks.

2 RELATED WORK. There has been a surge of interest in improving accuracy and label efficiency of classifiers via: 1. Self-supervised pretraining on open-domain unlabeled data in a task-agnostic way (Peters et al.
, 2018; Devlin et al., 2019; Chen et al., 2020b), 2. Self-training using domain-specific unlabeled data in a task-specific way (Rosenberg et al., 2005; McClosky et al., 2006; Xie et al., 2020). While self-supervised learning can be applied to a broad distribution of unlabeled data, self-training requires unlabeled data that at least can be annotated using the same set of class labels available for the downstream task (Oliver et al., 2018). For instance, if one is interested in training a classifier to distinguish images of cats and dogs, self-training with images of aircraft is likely not helpful, but it is conceivable that self-supervised learning with images of aircraft can still help. A growing body of recent work suggests that perhaps self-supervised pretraining and self-training are compatible and can be combined to achieve the best semi-supervised learning performance (Chen et al., 2020c; Du et al., 2020). We corroborate the existing evidence by showing gains from generative self-training. Semi-supervised learning (SSL) has a long and rich history in machine learning (Cooper & Freeman, 1970; McLachlan & Ganesalingam, 1982; Riloff, 1996; Chapelle et al., 2009; Van Engelen & Hoos, 2020). One of the oldest families of SSL algorithms is self-training, a.k.a. self-learning or self-labeling (Scudder, 1965; Fralick, 1967; Agrawala, 1970; Yarowsky, 1995). Self-training encourages knowledge transfer between a teacher and a student model in such a way that the student can outperform the teacher. Specifically, one leverages the teacher's knowledge to annotate unlabeled data with so-called pseudo labels, and the student learns from a mixture of pseudo- and human-labeled data. Self-training has recently seen renewed interest across vision and NLP applications (Yalniz et al., 2019; Xie et al., 2020; Zoph et al., 2020; Du et al., 2020).
Recent work aims to combine self-training and consistency regularization to develop powerful SSL algorithms. The key idea is to ensure that the predictions of a classifier on unlabeled examples are robust to strong augmentations (Berthelot et al., 2019a; Sohn et al., 2020; Xie et al., 2019). We build on prior work and investigate the use of synthetic data within the broad family of self-training methods. Recent theoretical work analyzes self-training for linear models, often under the assumption that the data distribution is (nearly) Gaussian (Carmon et al., 2019a; Raghunathan et al., 2020; Chen et al., 2020d; Kumar et al., 2020a; Oymak & Gulcu, 2020). Wei et al. (2021) prove that, under "expansion" and "class separation" assumptions, self-training can lead to more accurate neural network classifiers. We present a theoretical framing of GAL in terms of empirical and vicinal risk minimization (Vapnik, 1992; Chapelle et al., 2001). An important family of related work uses generative models for SSL by learning features that are useful for both generation and discrimination (e.g., Chen et al., 2020a; Odena, 2016; Dai et al., 2017). For instance, Kingma et al. (2014) approach SSL by viewing missing class labels as a set of latent variables and use variational inference to impute missing labels as well as other factors of variation. By contrast, our work does not learn features using generative models and keeps the generative and discriminative processes separate. This offers more flexibility and allows GAL to use self-supervised pretraining methods that are not fully generative. Our work is closely related to recent work on the uses of generative models for data augmentation (Norouzi et al., 2020; Yang et al., 2020). Unlike Norouzi et al. (2020), we do not use instance-based generative models. Yang et al.
(2020) propose a complex scheme, including data relabeling, data filtering, and two-stage training, to utilize synthetic data. By contrast, we show that a simple mixture of the original data and synthetic data can provide sizable gains. Furthermore, we show a broader use of generative models for KD and few-shot learning, in addition to tabular tasks. Knowledge Distillation (KD) (Buciluǎ et al., 2006; Hinton et al., 2015) uses a procedure similar to self-training to distill the knowledge of an expressive teacher model into a smaller student model. In contrast, self-distillation (Furlanello et al., 2018; Zhang et al., 2019a; Mobahi et al., 2020) uses teacher and student models of equal size, hoping to iteratively refine class labels. Previous work uses unlabeled data (Buciluǎ et al., 2006) and adversarial training (Wang et al., 2018) to improve KD. We demonstrate that synthetic data generated by unconditional generative models can improve KD on NLP, outperforming strong KD baselines, which often add more complexity and additional hyper-parameters (e.g., Sun et al., 2019a; Jiao et al., 2019; Xu et al., 2020; Rashid et al., 2021). Advanced generative models are able to generate realistic images and text (Karras et al., 2017; Brock et al., 2019; Karras et al., 2019; Radford et al., 2019; Brown et al., 2020). The quality of synthetic samples has improved to the extent that deepfake detection has become an important research topic itself (Zellers et al., 2019; Dolhansky et al., 2019). Recent work has aimed to utilize class-conditional generative models to help improve supervised learning (Antoniou et al., 2017; Bowles et al., 2018; Zhang et al., 2019b; Kumar et al., 2020b; Gao et al., 2020).
However, Ravuri & Vinyals (2019) have shown that images generated by state-of-the-art class-conditional generative models fall short of improving ImageNet classification accuracy, despite strong sample quality scores (Salimans et al., 2016; Heusel et al., 2017). Similarly, Kumar et al. (2020b) find that it is difficult for sentences generated by label-conditioned GPT-2 (Radford et al., 2019) to retain the semantics or pragmatics of a specified category, which leads to poor performance on downstream tasks. We discuss why class-conditional generative models are hardly effective for supervised learning, and instead focus on unconditional generative models. 3 BACKGROUND ON SELF-TRAINING. Given a labeled dataset L = {(x_i, y_i)}_{i=1}^N and an unlabeled dataset U = {x_j}_{j=1}^M, we summarize the general family of SSL algorithms known as self-training as: 1. First, an initial model denoted f_1 is trained using supervised learning on the labeled dataset L. 2. Then, at iteration t, one adopts f_t as the teacher model to annotate the unlabeled dataset U with pseudo labels. Optionally, one uses a selection method to pick a subset S_t ⊆ {(x_j, f_t(x_j))}_{j=1}^M of pseudo-labeled examples. 3. A student model f_{t+1} is trained to optimize a classification loss on the combination of L and S_t:

ℓ_{t+1} = E_{(x, y) ∼ L ∪ S_t} H(y, f_{t+1}(x)),    (1)

where H(q, p) = −q^⊤ log p is the softmax cross-entropy loss, and y is assumed to be a one-hot vector (original labels) or a vector of class probabilities (soft pseudo labels). 4. Self-training iterations are repeated T times or until performance plateaus. Many different variants of the basic self-training algorithm discussed above exist in the literature.
These variants differ in the type of pseudo labels used, the selection strategy to filter pseudo-labeled examples, the speed at which f_t is replaced with f_{t+1}, the choice of data augmentation strategy in the teacher and student models, and the weighting of the two datasets in the objective (Berthelot et al., 2019b;a; Xie et al., 2020; Sohn et al., 2020; Du et al., 2020). An important design choice is the type of pseudo labels used. One can simply use soft class probabilities predicted by a teacher f_t (Du et al., 2020), sharpened class probabilities (Berthelot et al., 2019b), or hard labels (a one-hot vector that is zero except at argmax f_t(x)) (Lee et al., 2013). Another important consideration is the selection strategy to retain a subset of pseudo-labeled examples. FixMatch (Sohn et al., 2020) uses a hyper-parameter τ to select examples on which the teacher model has a certain level of confidence, i.e.,

S_t = {(x, f_t(x)) | x ∈ U and max(f_t(x)) ≥ τ}.    (2)

NoisyStudent (Xie et al., 2020) also uses a form of confidence filtering but ensures that the class labels in the selected subset are balanced. In principle, any method for out-of-distribution detection (Hendrycks & Gimpel, 2016) can be adopted for filtering pseudo-labeled examples. We adopt the simplest variant of self-training and limit hyper-parameter tuning to a bare minimum.
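The selection rule in Eq. (2) keeps only those unlabeled examples on which the teacher's maximum class probability clears the threshold τ. A minimal sketch, with the probability matrix and τ = 0.9 chosen purely for illustration:

```python
import numpy as np

def confidence_filter(probs, tau=0.9):
    """FixMatch-style selection: keep example j iff max_c p_t(c | x_j) >= tau.

    probs: (M, C) array of teacher class probabilities over the unlabeled set.
    Returns indices of retained examples and their hard pseudo labels.
    """
    conf = probs.max(axis=1)
    keep = np.flatnonzero(conf >= tau)
    return keep, probs[keep].argmax(axis=1)

probs = np.array([[0.95, 0.05],   # confident -> kept, pseudo label 0
                  [0.60, 0.40],   # uncertain -> filtered out
                  [0.08, 0.92]])  # confident -> kept, pseudo label 1
keep, labels = confidence_filter(probs, tau=0.9)
assert keep.tolist() == [0, 2]
assert labels.tolist() == [0, 1]
```

Raising τ trades fewer, cleaner pseudo labels against coverage of the unlabeled set; NoisyStudent's class balancing would add a per-class cap on top of this rule.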
The authors propose a new framework of data augmentation based on generative models. The idea is to generate new samples and then annotate them with pseudo labels to improve a student model. The authors provide an extreme experiment where the sentences are composed of raw numerical data extracted from UCI datasets. The authors give a very clear formalization and discussion bridging data generation and semi-supervised learning. Then, the authors provide a large experimental section that shows the value of the GAL approach, in particular for small models but also for larger ones. The experimental section is very strong, investigating several interesting ablations. The last section (5.5) seems very promising for the future, in particular for data2text applications.
This paper identifies a key challenge in self-training as the lack of in-domain data (inputs x). To overcome this challenge, the paper proposes to "generate" in-domain data using large self-supervised LMs. The rest of the process, annotation and learning, follows typical self-training, and this paper adopts learning with soft targets as in Du et al. The paper examines the proposed approach (GAL) on the GLUE benchmark for KD (Table 1), SSL on full data (Tables 2, 3), and prompt-based few-shot learning (Table 4), and also examines GAL on tabular tasks (Table 7). On KD, GAL usually shows the largest improvement in the first iteration and shows minor fluctuations in performance over further iterations (Tables 2, 7).
SP:b8423ad70e89d2e5ddd2bd9ad5ce80cfa99c0368
Generate, Annotate, and Learn: Generative Models Advance Self-Training and Knowledge Distillation
1 INTRODUCTION . Unlabeled data is abundant in the real world , but task-specific unlabeled data within the scope of a given machine learning problem can be challenging to find . For instance , one can not easily find in-domain unlabeled data conforming to the input distribution of a specific Natural Language Processing ( NLP ) task from the GLUE benchmark ( Wang et al. , 2019b ) . Some NLP tasks require an input comprising a pair of sentences with a particular relationship between them . Moreover , classification datasets typically represent a tailored distribution of text and only include a limited number of class labels . If task-specific unlabeled data were available , one could adopt self-training ( Yarowsky , 1995 ) to automatically annotate unlabeled data with pseudo labels to improve accuracy and robustness of classifiers ( Xie et al. , 2020 ; Carmon et al. , 2019b ) . In addition , one can use knowledge distillation ( Hinton et al. , 2015 ) on fresh task-specific unlabeled data to more effectively compress deep neural networks and ensembles ( Buciluǎ et al. , 2006 ; Chen et al. , 2020c ) . When task-specific unlabeled examples do not exist , one can try to retrieve them from a large and diverse open-domain dataset . For instance , Du et al . ( 2020 ) have used nearest neighbor retrieval to harvest in-domain unlabeled text from the internet , leading to a successful application of selftraining and knowledge distillation to certain NLP tasks . While retrieval can indeed help to find in-domain data for problems with simple inputs , it is not practical for problems with complex input schemes , e.g. , sentence pairs with certain relations and tabular data . Accordingly , self-training and retrieval-based methods have not been widely adopted for NLP tasks , e.g. , on the GLUE benchmark . 
This paper presents a deceptively simple and general framework called “ generate , annotate , and learn ( GAL ) ” to help advance semi-supervised learning and knowledge distillation on various applications that do not come with unlabeled data . We advocate for the use of language models to synthesize unlabeled task-specific data , in lieu of real unlabeled data . We build on recent advances in text generation ( Radford et al. , 2019 ; Gao et al. , 2021 ) , and use powerful generative models to synthesize unlabeled text and tables . Then , we use state-of-the-art classifiers to annotate generated unlabeled data with pseudo labels . Finally , we combine labeled data with pseudo labeled data to train more effective classifiers or for the purpose of knowledge distillation ( KD ) . We motivate GAL by making connections to empirical and vicinal risk minimization ( Vapnik , 1992 ; Chapelle et al. , 2001 ) , and demonstrate its utility by presenting empirical results on a wide range of applications . Our key contributions include : • We propose a simple way to advance SSL , KD , and few-shot learning on NLP by using language models to synthesize large amounts of task-specific unlabeled data . • We link GAL to empirical and vicinal risk minimization , helping explain why GAL works and why synthetic samples from class-conditional language models are not as effective . • We systematically dissect GAL and study the key components leading to its success . • GAL establishes a new SoTA for a single 6-layer transformer on the GLUE test set . • GAL improves prompt-based few-shot learning , providing an average improvement of 1.3 % on four 4-shot learning NLP tasks . • GAL advances self-training for tabular tasks , outperforming XGBoost on 2 out of 4 tasks . 2 RELATED WORK . There has been a surge of interest in improving accuracy and label efficiency of classifiers via : 1 . Self-Supervised pretraining on open-domain unlabeled data in a task-agnostic way ( Peters et al. 
, 2018 ; Devlin et al. , 2019 ; Chen et al. , 2020b ) , 2 . Self-Training using domain-specific unlabeled data in a task-specific way ( Rosenberg et al. , 2005 ; McClosky et al. , 2006 ; Xie et al. , 2020 ) . While self-supervised learning can be applied to a broad distribution of unlabeled data , self-training requires unlabeled data that at least can be annotated using the same set of class labels available for the downstream task ( Oliver et al. , 2018 ) . For instance , if one is interested in training a classifier to distinguish images of cats and dogs , self-training with images of aircraft is likely not helpful , but it is conceivable that self-supervised learning with images of aircraft can still help . A growing body of recent work suggests that perhaps self-supervised pretraining and self-training are compatible and can be combined to achieve the best semi-supervised learning performance ( Chen et al. , 2020c ; Du et al. , 2020 ) . We corroborate the existing evidence by showing gains from generative self-training . Semi-supervised learning ( SSL ) has a long and rich history in machine learning ( Cooper & Freeman , 1970 ; McLachlan & Ganesalingam , 1982 ; Riloff , 1996 ; Chapelle et al. , 2009 ; Van Engelen & Hoos , 2020 ) . One of the oldest families of SSL algorithms is self-training , a.k.a . self-learning or self-labeling ( Scudder , 1965 ; Fralick , 1967 ; Agrawala , 1970 ; Yarowsky , 1995 ) . Self-training encourages knowledge transfer between a teacher and a student model in such a way that the student can outperform the teacher . Specifically , one leverages the teacher ’ s knowledge to annotate unlabeled data with so-called pseudo labels , and the student learns from a mixture of pseudo- and human-labeled data . Self-training has recently seen renewed interest across vision and NLP applications ( Yalniz et al. , 2019 ; Xie et al. , 2020 ; Zoph et al. , 2020 ; Du et al. , 2020 ) . 
Recent work aims to combine self-training and consistency regularization to develop powerful SSL algorithms . The key idea is to ensure that the predictions of a classifier on unlabeled examples are robust to strong augmentations ( Berthelot et al. , 2019a ; Sohn et al. , 2020 ; Xie et al. , 2019 ) . We build on prior work and investigate the use of synthetic data within the broad family of self-training methods . Recent theoretical work analyzes self-training for linear models , often under the assumption that the data distribution is ( nearly ) Gaussian ( Carmon et al. , 2019a ; Raghunathan et al. , 2020 ; Chen et al. , 2020d ; Kumar et al. , 2020a ; Oymak & Gulcu , 2020 ) . Wei et al . ( 2021 ) prove that , under “ expansion ” and “ class separation ” assumptions , self-training can lead to more accurate neural network classifiers . We present a theoretical framing of GAL in terms of empirical and vicinal risk minimization ( Vapnik , 1992 ; Chapelle et al. , 2001 ) . An important family of related work uses generative models for SSL by learning features that are useful for both generation and discrimination ( e.g. , Chen et al. , 2020a ; Odena , 2016 ; Dai et al. , 2017 ) . For instance , Kingma et al . ( 2014 ) approach SSL by viewing missing class labels as a set of latent variables and use variational inference to impute missing labels as well as other factors of variation . By contrast , our work does not learn features using generative models and keeps the generative and discriminative processes separate . This offers more flexibility and allows GAL to use self-supervised pretraining methods that are not fully generative . Our work is closely related to recent work on the uses of generative models for data augmentation ( Norouzi et al. , 2020 ; Yang et al. , 2020 ) . Unlike Norouzi et al . ( 2020 ) , we do not use instance-based generative models . Yang et al . 
( 2020 ) propose a complex scheme , including data relabeling , data filtering , and two-stage training , to utilize synthetic data . By contrast , we show that a simple mixture of the original data and synthetic data can provide sizable gains . Furthermore , we show a broader use of generative models for KD and few-shot learning , in addition to tabular tasks . Knowledge Distillation ( KD ) ( Buciluǎ et al. , 2006 ; Hinton et al. , 2015 ) uses a procedure similar to self-training to distill knowledge of an expressive teacher model into a smaller student model . In contrast , self-distillation ( Furlanello et al. , 2018 ; Zhang et al. , 2019a ; Mobahi et al. , 2020 ) uses teacher and student models of equal size , hoping to iteratively refine class labels . Previous work uses unlabeled data ( Buciluǎ et al. , 2006 ) and adversarial training ( Wang et al. , 2018 ) to improve KD . We demonstrate that synthetic data generated by unconditional generative models can improve KD on NLP , outperforming strong KD baselines , which often add more complexity and additional hyper-parameters ( e.g. , Sun et al. , 2019a ; Jiao et al. , 2019 ; Xu et al. , 2020 ; Rashid et al. , 2021 ) . Advanced generative models are able to generate realistic images and text ( Karras et al. , 2017 ; Brock et al. , 2019 ; Karras et al. , 2019 ; Radford et al. , 2019 ; Brown et al. , 2020 ) . The quality of synthetic samples has improved to the extent that deep fake detection has become an important research topic itself ( Zellers et al. , 2019 ; Dolhansky et al. , 2019 ) . Recent work has aimed to utilize class-conditional generative models to help improve supervised learning ( Antoniou et al. , 2017 ; Bowles et al. , 2018 ; Zhang et al. , 2019b ; Kumar et al. , 2020b ; Gao et al. , 2020 ) . 
However , Ravuri & Vinyals ( 2019 ) have shown that images generated by state-of-the-art class-conditional generative models fall short of improving ImageNet classification accuracy , despite strong sample quality scores ( Salimans et al. , 2016 ; Heusel et al. , 2017 ) . Similarly , Kumar et al . ( 2020b ) find that it is difficult for sentences generated by label-conditioned GPT-2 ( Radford et al. , 2019 ) to retain the semantics or pragmatics of a specified category , which leads to poor performance on downstream tasks . We discuss why class-conditional generative models are hardly effective for supervised learning , and instead , focus on unconditional generative models . 3 BACKGROUND ON SELF-TRAINING . Given a labeled dataset $L = \{(x_i, y_i)\}_{i=1}^{N}$ and an unlabeled dataset $U = \{x_j\}_{j=1}^{M}$ , we summarize the general family of SSL algorithms known as self-training as : 1 . First , an initial model denoted $f_1$ is trained using supervised learning on the labeled dataset $L$ . 2 . Then , at iteration $t$ , one adopts $f_t$ as the teacher model to annotate the unlabeled dataset $U$ using pseudo labels . Optionally , one uses a selection method to pick a subset $S_t \subseteq \{(x_j, f_t(x_j))\}_{j=1}^{M}$ of pseudo labeled examples . 3 . A student model $f_{t+1}$ is trained to optimize a classification loss on the combination of $L$ and $S_t$ : $\ell_{t+1} = \mathbb{E}_{(x,y)\sim(L \cup S_t)}\, H(y, f_{t+1}(x))$ , ( 1 ) where $H(q, p) = -q^\top \log p$ is the softmax cross entropy loss , and $y$ is assumed to be a one-hot vector ( original labels ) or a vector of class probabilities ( soft pseudo labels ) . 4 . Self-training iterations are repeated $T$ times or until performance plateaus . Many different variants of the basic self-training algorithm discussed above exist in the literature . 
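The objective in Eq. ( 1 ) trains the student on a mixture of one-hot labels and soft pseudo labels under a single cross entropy $H(q, p) = -q^\top \log p$. A minimal numpy sketch of this loss (the toy logits and targets below are illustrative values, not from the paper):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(q, p, eps=1e-12):
    """H(q, p) = -q^T log p, averaged over the batch.
    q: target class probabilities (one-hot labels or soft pseudo labels).
    p: predicted class probabilities."""
    return -np.sum(q * np.log(p + eps), axis=-1).mean()

# Toy batch: two human-labeled (one-hot) targets and two teacher soft targets,
# mixed into a single objective as in Eq. (1).
labeled_targets = np.array([[1.0, 0.0], [0.0, 1.0]])
pseudo_targets = np.array([[0.9, 0.1], [0.2, 0.8]])
targets = np.concatenate([labeled_targets, pseudo_targets])

logits = np.array([[2.0, -1.0], [-0.5, 1.5], [1.0, 0.0], [0.0, 1.0]])
loss = cross_entropy(targets, softmax(logits))
```

With soft targets every class contributes to the gradient, which is the variant the paper adopts following Du et al.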
These variants differ in the type of pseudo labels used , the selection strategy to filter pseudo labeled examples , the speed at which $f_t$ is replaced with $f_{t+1}$ , the choice of data augmentation strategy in the teacher and student models , and the weighting of the two datasets in the objective ( Berthelot et al. , 2019b ; a ; Xie et al. , 2020 ; Sohn et al. , 2020 ; Du et al. , 2020 ) . An important design choice is the type of pseudo labels used . One can simply use soft class probabilities predicted by a teacher $f_t$ ( Du et al. , 2020 ) , sharpened class probabilities ( Berthelot et al. , 2019b ) , or hard labels ( a one-hot vector that is zero except at $\arg\max f_t(x)$ ) ( Lee et al. , 2013 ) . Another important consideration is the selection strategy to retain a subset of pseudo-labeled examples . FixMatch ( Sohn et al. , 2020 ) uses a hyper-parameter $\tau$ to select examples on which the teacher model has a certain level of confidence , i.e. , $S_t = \{ (x, f_t(x)) \mid x \in U \text{ and } \max(f_t(x)) \ge \tau \}$ . ( 2 ) NoisyStudent ( Xie et al. , 2020 ) also uses a form of confidence filtering but ensures that the class labels in the selected subset are balanced . In principle , any method for out-of-distribution detection ( Hendrycks & Gimpel , 2016 ) can be adopted for filtering pseudo-labeled examples . We adopt the simplest variant of self-training and limit hyper-parameter tuning to a bare minimum .
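The self-training steps of Section 3 with FixMatch-style confidence filtering (Eq. ( 2 )) can be sketched as follows. The nearest-centroid `fit` / `predict_proba` stubs are hypothetical placeholders standing in for any probabilistic classifier, not the paper's models:

```python
import numpy as np

def confidence_filter(probs, tau=0.9):
    """FixMatch-style selection (Eq. 2): keep unlabeled examples whose
    maximum predicted class probability reaches the threshold tau."""
    return probs.max(axis=1) >= tau

def self_train(fit, predict_proba, X_lab, y_lab, X_unlab, tau=0.9, rounds=3):
    """Generic self-training skeleton (steps 1-4 of Section 3)."""
    model = fit(X_lab, y_lab)                      # step 1: initial teacher f_1
    for _ in range(rounds):                        # step 4: iterate
        probs = predict_proba(model, X_unlab)      # step 2: pseudo labels
        keep = confidence_filter(probs, tau)       # optional selection S_t
        X_mix = np.concatenate([X_lab, X_unlab[keep]])
        y_mix = np.concatenate([y_lab, probs[keep].argmax(axis=1)])  # hard labels
        model = fit(X_mix, y_mix)                  # step 3: student f_{t+1}
    return model

# Hypothetical toy classifier: nearest-centroid "fit" with a softmax over
# negative distances as "predict_proba".
def fit(X, y):
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict_proba(model, X):
    d = np.linalg.norm(X[:, None, :] - model[None, :, :], axis=2)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

X_lab = np.array([[0.0, 0.0], [1.0, 1.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.1, 0.1], [0.9, 0.9]])
model = self_train(fit, predict_proba, X_lab, y_lab, X_unlab, tau=0.5, rounds=2)
```

Swapping `confidence_filter` for a class-balanced variant would give a NoisyStudent-style selection instead.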
The paper provides a novel framework for advancing semi-supervised learning, knowledge distillation, and few-shot learning for NLP and tabular data. The framework focuses on self-training and knowledge distillation by generating synthetic data with an unconditional generative model. The authors conduct extensive experiments spanning NLP, tabular data, and computer vision datasets, discussing the benefits and limitations of their proposed methods. They also conduct an extensive literature review for each of the components used in their framework.
Riemannian Manifold Embeddings for Straight-Through Estimator
1 INTRODUCTION . Neural networks can handle many complex tasks due to their large number of trainable parameters and strong nonlinear capabilities ( Krizhevsky et al. , 2012 ) . However , the massive model sizes and computation hinder the application of neural networks on mobile and miniaturized devices , which naturally come with constraints on computing power and resources . Neural network quantization is considered an efficient solution for inference that reduces the number of parameters and optimizes the computation by reducing the bit width of weights or activations ( Courbariaux et al. , 2016 ; Li et al. , 2016 ; Zhu et al. , 2016 ) . Existing neural network quantization methods can be roughly divided into two categories : “ STE ” and “ Non-STE ” methods . Most of the quantization methods adopted by QNNs belong to the former , i.e . there is always a non-differentiable quantization function during training . The role of the STE is to penetrate this non-differentiable quantization function and pass the gradients in backpropagation ( Hinton , 2012 ) , e.g . DeepShift ( Elhoushi et al. , 2019 ) , INT8 ( Zhu et al. , 2020 ) , AQE ( Chen et al. , 2020 ) , etc . “ Non-STE ” methods maintain feasible quantization during training , which does not need to apply the STE directly to all full-precision weights . For example , Zhou et al . ( 2017 ) iteratively divided the weights into two groups until all parameters are quantized , where the first group is directly quantized and fixed ; the second group needs to be retrained to make up for the decrease in accuracy caused by quantization of the first group . Louizos et al . ( 2019 ) introduced a differentiable quantizer that can transform the continuous distributions of weights and activations to categorical distributions with gradient-based optimization . However , “ Non-STE ” methods suffer from heavy hyper-parameter tuning in the training process . 
By comparison , “ STE ” methods are more widely adopted for quantized models owing to their simplicity and versatility . To approximate $\nabla_W$ w.r.t . $W$ and achieve stable quantization training , Courbariaux et al . ( 2016 ) binarized the neural networks with gradients approximated using the STE : $\nabla_W = \nabla_{\hat W} \circ \mathbb{I}$ , where $\frac{\partial \hat W}{\partial W} = \mathbb{I} = \begin{cases} 1 & \text{if } |W| \le 1 \\ 0 & \text{otherwise} \end{cases}$ . However , the naive application of the STE inevitably brings the gradient mismatch problem . To overcome this challenge , Zhou et al . ( 2016 ) proposed to transform $W$ to $\tilde W$ : $\tilde W = \frac{\tanh(W)}{\max(|\tanh(W)|)}$ , and then quantize it using a quantization function $Q(\cdot)$ . During backpropagation , the gradient $\nabla_W$ w.r.t . $W$ can be computed using the chain rule : $\nabla_W = \nabla_{Q(\tilde W)} \, \frac{1 - \tanh^2(W)}{\max(|\tanh(W)|)}$ . Based on this work , Chen et al . ( 2019 ) proposed to learn $\nabla_W$ with fully-connected neural networks or LSTMs , replacing this gradient via a meta quantizer $M_\phi$ parameterized by $\phi$ across layers : $\nabla_W = M_\phi\big(\nabla_{Q(\tilde W)} , Q(\tilde W)\big) \, \frac{\partial \tilde W}{\partial W}$ . However , such methods not only add many additional parameters , but also increase the difficulty of training quantized models . In this paper , we introduce Manifold Quantization ( ManiQuant ) to train a quantized model via embedding Riemannian manifolds for the STE . As shown in Figure 1 , we treat the parameter space of a quantized model as a Riemannian manifold , which alleviates the gradient mismatch problem caused by non-differentiable quantization and helps quantized models achieve more stable convergence and better performance . The main contributions of this work are three-fold : First , we propose to use Fisher Information Matrix embedding to alleviate the gradient mismatch problem . Second , we define a novel Hyperbolic divergence by a convex function with geometric structure . 
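The clipped-identity STE of Courbariaux et al. and the tanh-reparameterized gradient of Zhou et al. described above can be illustrated numerically. This is a minimal numpy sketch of both rules, not the authors' implementation:

```python
import numpy as np

def binarize_forward(W):
    """Forward pass: sign quantization (non-differentiable)."""
    return np.sign(W)

def ste_backward(grad_What, W):
    """Backward pass with the straight-through estimator:
    dW_hat/dW is approximated by the clipped identity I{|W| <= 1},
    so the incoming gradient passes through only where |W| <= 1."""
    return grad_What * (np.abs(W) <= 1.0)

def tanh_reparam_grad(grad_Q, W):
    """Zhou et al. (2016)-style chain rule through
    W_tilde = tanh(W) / max(|tanh(W)|)."""
    return grad_Q * (1.0 - np.tanh(W) ** 2) / np.max(np.abs(np.tanh(W)))

W = np.array([-2.0, -0.5, 0.3, 1.5])
What = binarize_forward(W)
grad_What = np.ones_like(W)          # incoming gradient w.r.t. W_hat

grad_W_ste = ste_backward(grad_What, W)      # zeroed where |W| > 1
grad_W_tanh = tanh_reparam_grad(grad_What, W)  # smoothly attenuated instead
```

The tanh rule never hard-zeros a gradient; it attenuates large weights smoothly, which is one way the literature softens the mismatch that the plain STE introduces.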
With the constraint of the Hyperbolic divergence , we introduce a weak curvature manifold that forms the background of ManiQuant . Third , based on the second contribution , we propose to use weak curvature metric embeddings for the STE as the weak curvature gradient to approximate $\nabla_W$ without extra parameters or training complexity . 2 RELATED WORK . 2.1 GENERAL GRADIENT . For a neural network with $l$ layers , the full-precision weight matrix of each layer is denoted $W_i$ . By defining the operation $\mathrm{vec}(\cdot)$ that vectorizes matrices by stacking their columns together , the total parameter vector of the neural network can be denoted as $\theta = [\mathrm{vec}(W_1)^\top , \mathrm{vec}(W_2)^\top , \ldots , \mathrm{vec}(W_l)^\top]^\top$ . The parameter vector is given as a column vector . Let $\theta \in \mathbb{R}^n$ be a parameter space on which a loss function $L$ associated with the weights is well-defined . It is relatively easy to express the general gradient in training : $\nabla_\theta L = \frac{\partial L}{\partial \theta}$ , ( 1 ) which is the steepest descent method in Euclidean space with an orthonormal coordinate . Note that the negative gradient represents the direction of steepest descent . When the Euclidean space is considered , the Euclidean divergence between two sufficiently close points $\theta$ and $\theta'$ is actually defined by default : $D_E[\theta : \theta'] = \frac{1}{2} \sum_i (\theta_i - \theta'_i)^2$ , ( 2 ) which is half of the square of the Euclidean distance identified by the Euclidean metric $\delta_{ij}$ , so that $ds^2 = 2 D_E[\theta : \theta + d\theta] = \sum_i (d\theta_i)^2 = \sum_{i,j} \delta_{ij} \, d\theta_i \, d\theta_j$ . ( 3 ) 2.2 NATURAL GRADIENT . The general gradient only considers the parameter update along the gradient direction and does not involve a metric tensor for the parameter space of the problem . Now , we attach a Riemannian metric $G_{ij}(\theta)$ to the parameter space to form a Riemannian manifold $(M , G_{ij})$ ( Amari , 1998 ; 2016 ) . 
In that case , the steepest descent direction also depends on the quadratic form introduced by a small incremental vector $d\theta$ that connects $\theta$ and $\theta + d\theta$ , given by $ds^2 = \sum_{i,j} G_{ij}(\theta) \, d\theta_i \, d\theta_j$ , ( 4 ) where $d\theta_i$ is the $i$-th component of $d\theta$ . Under the constraint $ds^2$ , the steepest descent direction is toward the optimization goal of $L(\theta + d\theta)$ . Intuitively , it measures the Kullback-Leibler ( KL ) divergence between two distributions $p(x|\theta)$ and $p(x|\theta + d\theta)$ of the network model , which is equivalent to the interval between two adjacent points on a Riemannian manifold . The KL divergence under the Riemannian metric is well approximated through the Fisher Information Matrix ( FIM ) via the second-order Taylor expansion of the KL divergence ( see Appendix B.3 ) ( Ba et al. , 2016 ) : $ds^2 = 2 D_{KL}[p(x|\theta) : p(x|\theta + d\theta)] \approx d\theta^\top \, \mathbb{E}_{p(x|\theta)}\big[\nabla \log p(x|\theta) \, \nabla \log p(x|\theta)^\top\big] \, d\theta$ . ( 5 ) We see that the FIM is equal to the negative expected Hessian of the log likelihood . Furthermore , Amari deduced that the Riemannian metric is given by the FIM ( Amari , 1998 ) . By the Lagrangian form , we have the natural gradient ( see Appendix C.1 ) $\tilde\nabla_\theta L = F^{-1}(\theta) \frac{\partial L}{\partial \theta}$ , where $F(\theta) = \mathbb{E}_{p(x|\theta)}\big[\nabla \log p(x|\theta) \, \nabla \log p(x|\theta)^\top\big]$ , ( 6 ) which is the steepest descent method in a Riemannian space , and $F(\theta)$ is the FIM with respect to the parameter vector ( the natural gradient is also known as the Riemannian gradient ) . Empirically , the immediate application of the FIM is as a drop-in replacement for the Hessian in second-order optimization algorithms . Note that $\tilde\nabla_\theta L$ reduces to $\nabla_\theta L$ when $G_{ij}(\theta)$ is equal to the Euclidean metric $\delta_{ij}$ , i.e . the identity matrix $I$ . 3 MANIFOLD QUANTIZATION . Considering a quantized model , for each layer we denote the quantized weight matrix by $\hat W_i$ to distinguish it from the full-precision weight matrix $W_i$ . Similarly , we define the quantized parameter vector as $\hat\theta$ , analogous to the full-precision parameter vector $\theta$ . 
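The natural gradient of Eq. ( 6 ) preconditions the plain gradient by the inverse FIM, which can be estimated empirically as the average outer product of per-sample score vectors. A toy numpy sketch, where the random scores are stand-ins for the scores of a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-sample score vectors g_k = d log p(x_k | theta) / d theta; the empirical
# FIM (Eq. 6) is their average outer product F = E[g g^T]. Random values here
# are purely illustrative.
scores = rng.normal(size=(256, 3))            # 256 samples, 3 parameters
F = scores.T @ scores / scores.shape[0]       # Monte-Carlo FIM estimate
F += 1e-4 * np.eye(3)                         # damping to keep F invertible

grad = np.array([0.5, -1.0, 0.25])            # plain gradient dL/dtheta
nat_grad = np.linalg.solve(F, grad)           # F^{-1} grad without an explicit inverse
```

Using `np.linalg.solve` instead of forming `F^{-1}` explicitly is the standard numerically safer choice; the O(n^3) cost of this step is exactly the bottleneck discussed at the end of Section 3.1.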
During training of a QNN , the quantization function $Q(\cdot)$ is a one-to-one mapping from full-precision values to quantized values , which can be expressed as $\hat\theta = Q(\theta)$ . In practice , to train a neural network with low bit-width , the quantization process needs to be carefully designed , as it plays a vital role in the final performance ; the other parts , e.g . the mapping from an input pattern $\hat a_{i-1}$ to an output $\hat a_i$ , can still be computed in the same way as in full-precision neural networks : $s_i = \hat W_i \hat a_{i-1}$ , $\hat a_i = Q(f_i \circ s_i)$ , ( 7 ) where $f_i$ is a non-linear function applied element-wise . To distinguish from the full-precision model , we use the notation “ ˆ ” to represent quantities obtained through a quantization function in a quantized model , whether the weight matrix $\hat W_i$ or the activation vector $\hat a_i$ . 3.1 FISHER INFORMATION MATRIX EMBEDDING FOR STE . The concept of the natural gradient is closely related to the FIM and the KL divergence ( see Appendix B.3 ) . Since the KL divergence is intrinsic , the natural gradient is also intrinsically invariant under parameter transformations . By viewing the FIM as an $l \times l$ block matrix , where $l$ denotes the number of layers in a neural network , the natural gradient formula can be introduced to alleviate the gradient mismatch problem when training a QNN and updating its parameters . Lemma 1 In the Riemannian manifold defined by the KL divergence , the Fisher Information Matrix embedding for the Straight-Through Estimator is written as : $\tilde\nabla_\theta L \stackrel{\mathrm{STE}}{=} F^{-1}(\hat\theta) \, \nabla_{\hat\theta} L \circ \mathbb{I}_{|\theta| \le 1} = \mathbb{E}^{-1}\Big[\tfrac{d \log p(x|\hat\theta)}{d\hat\theta} \tfrac{d \log p(x|\hat\theta)}{d\hat\theta}^\top\Big] \nabla_{\hat\theta} L \circ \mathbb{I}_{|\theta| \le 1} = \mathbb{E}^{-1}\Big[\mathrm{vec}\big(\tfrac{\partial L}{\partial \hat\theta}\big) \, \mathrm{vec}\big(\tfrac{\partial L}{\partial \hat\theta}\big)^\top\Big] \nabla_{\hat\theta} L \circ \mathbb{I}_{|\theta| \le 1} . ( 8 ) Furthermore , $F^{-1}(\hat\theta)$ can be expressed as $\begin{bmatrix} \mathbb{E}\big[\mathrm{vec}(\tfrac{\partial L}{\partial \hat W_1}) \mathrm{vec}(\tfrac{\partial L}{\partial \hat W_1})^\top\big] & \cdots & \mathbb{E}\big[\mathrm{vec}(\tfrac{\partial L}{\partial \hat W_1}) \mathrm{vec}(\tfrac{\partial L}{\partial \hat W_l})^\top\big] \\ \vdots & \ddots & \vdots \\ \mathbb{E}\big[\mathrm{vec}(\tfrac{\partial L}{\partial \hat W_l}) \mathrm{vec}(\tfrac{\partial L}{\partial \hat W_1})^\top\big] & \cdots & \mathbb{E}\big[\mathrm{vec}(\tfrac{\partial L}{\partial \hat W_l}) \mathrm{vec}(\tfrac{\partial L}{\partial \hat W_l})^\top\big] \end{bmatrix}^{-1}$ ( 9 ) where $\mathrm{vec}(\tfrac{\partial L}{\partial \hat W_i})$ can be represented by the gradient error $\tfrac{\partial L}{\partial a_i}$ . Proof . 
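Operationally, Lemma 1 amounts to preconditioning the quantized-parameter gradient with the inverse empirical FIM and then applying the STE indicator mask $\mathbb{I}_{|\theta| \le 1}$. A toy numpy sketch of this reading of Eq. ( 8 ), with random gradients standing in for a real model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                      # toy parameter count

# Per-sample gradients stand in for vec(dL/dtheta_hat); the empirical FIM is
# their average outer product (Eq. 9, collapsed to a single block here).
G = rng.normal(size=(128, n))
F = G.T @ G / G.shape[0] + 1e-4 * np.eye(n)   # damped for invertibility

theta = np.array([0.2, -1.5, 0.8, 1.1])       # full-precision parameters
grad_qhat = G.mean(axis=0)                    # gradient w.r.t. quantized parameters

# Eq. (8): natural gradient composed with the STE mask I{|theta| <= 1}.
mask = (np.abs(theta) <= 1.0).astype(float)
nat_grad_ste = np.linalg.solve(F, grad_qhat) * mask
```

Parameters outside the clipping region (indices 1 and 3 above) receive zero update, exactly as under the plain STE, while the surviving components are rescaled by the Fisher geometry.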
The proof combines the STE with Appendix C.1 . Considering that the gradient propagation needs to span the quantized neurons and graphs , we still need to use the STE to update the gradients of QNNs in the learning procedure : $\mathrm{vec}\big(\tfrac{\partial L}{\partial \hat W_i}\big) = \hat a_{i-1}^\top \otimes \big(\tfrac{\partial L}{\partial a_i} \circ f'_i(s_i)\big) \stackrel{\mathrm{STE}}{=} \hat a_{i-1}^\top \otimes \tfrac{\partial L}{\partial \hat a_i} \circ \big(f'_i(s_i) \, \mathbb{I}_{|a_i| \le 1}\big)$ , ( 10 ) where $\otimes$ denotes the Kronecker product ( since the sizes of $\hat a_{i-1}$ and $\tfrac{\partial L}{\partial a_i}$ may differ in each layer , the Kronecker product is necessary ) . Relying on the KL divergence , we regard the parameter space as a Riemannian manifold rather than a Euclidean space in the quantization procedure . For neural networks with a scale of one million or more parameters , the time complexity of inverting the FIM , a component of natural gradients , is $O(n^3)$ ( Povey et al. , 2014 ) . Previously , several works computed the natural gradient efficiently : Roux et al . ( 2008 ) decomposed the FIM into multiple diagonal blocks , where each diagonal block is approximated by a low-rank matrix ; Bastian et al . ( 2011 ) also used the idea of diagonal blocks by constructing a diagonal block corresponding to each weight matrix ; Martens & Grosse ( 2015 ) proposed to approximate the FIM through the Kronecker product of two smaller matrices to improve computational efficiency . Even so , FIM embedding for the STE with complex decomposition methods is still not suitable for large-scale tasks , and the computational cost is large compared to the standard STE .
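The Kronecker-product form in Eq. ( 10 ) rests on the identity $\mathrm{vec}(\delta a^\top) = a \otimes \delta$ for a linear layer $s = W a$ with column-stacking $\mathrm{vec}(\cdot)$, which a quick numpy check confirms on toy vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=3)        # layer input a_{i-1}
delta = rng.normal(size=2)    # backpropagated error dL/ds_i

# For s = W a, the weight gradient is dL/dW = delta a^T; vectorizing by
# stacking columns gives vec(delta a^T) = a (kron) delta, the form used
# in Eq. (10).
grad_W = np.outer(delta, a)                  # shape (2, 3)
vec_colstack = grad_W.flatten(order="F")     # column-stacking vec(.)
kron_form = np.kron(a, delta)
```

This is also why the Kronecker product appears in the K-FAC approximation of Martens & Grosse cited above: the FIM blocks built from such gradients factor into Kronecker products of two much smaller matrices.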
This paper proposes an alternative to straight-through estimator by considering the geometry of the likelihood. The proposed method can be viewed as a natural gradient descent algorithm on the Riemannian manifold. Compared to STE, the proposed estimator seems to penalize the quantities that are far from the quantization boundary. Slightly better accuracy results are reported for training 1-bit weight and activation neural networks.
SP:0e92cc56d37e6de4f8574a28a9f94d85423730ac
The main motivation of this paper is the gradient mismatch problem, which emerges from using the Straight-Through Estimator (STE) in training quantized neural networks and leads to an unstable training process. To address it, the authors introduce Manifold Quantization (ManiQuant), which embeds Riemannian manifolds into the STE. Specifically, ManiQuant relates the gradient mismatch problem to Fisher information, which can then be exploited to alleviate it. Given the high cost of inverting the Fisher information matrix, the authors also present a simpler alternative based on a Hyperbolic divergence and a weak curvature manifold.
SP:0e92cc56d37e6de4f8574a28a9f94d85423730ac
Riemannian Manifold Embeddings for Straight-Through Estimator
1 INTRODUCTION. Neural networks can handle many complex tasks due to their large number of trainable parameters and strong nonlinear capabilities (Krizhevsky et al., 2012). However, their massive model sizes and computational costs hinder the application of neural networks on mobile and miniaturized devices, which naturally come with constraints on computing power and resources. Neural network quantization is considered an efficient solution for inference: it reduces the number of effective parameters and the computational cost by lowering the bit width of weights or activations (Courbariaux et al., 2016; Li et al., 2016; Zhu et al., 2016). Existing neural network quantization methods can be roughly divided into two categories: "STE" and "Non-STE" methods. Most quantization methods adopted by QNNs belong to the former, i.e. there is always a non-differentiable quantization function during training, and the role of STE is to penetrate this non-differentiable function and pass the gradients in backpropagation (Hinton, 2012), e.g. DeepShift (Elhoushi et al., 2019), INT8 (Zhu et al., 2020), AQE (Chen et al., 2020), etc. "Non-STE" methods instead maintain a feasible quantization scheme during training, so that STE need not be applied directly to all full-precision weights. For example, Zhou et al. (2017) iteratively divided the weights into two groups until all parameters are quantized: the first group is directly quantized and fixed, while the second group is retrained to compensate for the accuracy drop caused by quantizing the first. Louizos et al. (2019) introduced a differentiable quantizer that transforms the continuous distributions of weights and activations into categorical distributions with gradient-based optimization. However, "Non-STE" methods suffer from heavy hyper-parameter tuning during training.
In comparison, "STE" methods are the more widely used choice for quantized models because of their simplicity and versatility. To approximate $\nabla_W$ w.r.t. $W$ and obtain stable quantized training, Courbariaux et al. (2016) binarized neural networks with gradients approximated by STE:
$$\nabla_W = \nabla_{\hat W} \circ \mathbb{I}_{|W|\le 1}, \qquad \frac{\partial \hat W}{\partial W} = \mathbb{I}_{|W|\le 1} = \begin{cases} 1 & \text{if } |W| \le 1 \\ 0 & \text{otherwise.} \end{cases}$$
However, this naive application of STE inevitably brings the gradient mismatch problem. To overcome this challenge, Zhou et al. (2016) proposed to transform $W$ to $\tilde W$:
$$\tilde W = \frac{\tanh(W)}{\max(|\tanh(W)|)},$$
and then quantize it using a quantization function $Q(\cdot)$. During backpropagation, the gradient $\nabla_W$ w.r.t. $W$ is computed by the chain rule:
$$\nabla_W = \nabla_{Q(\tilde W)} \, \frac{1-\tanh^2(W)}{\max(|\tanh(W)|)}.$$
Based on this work, Chen et al. (2019) proposed to learn $\nabla_W$ with fully-connected networks or LSTMs, replacing this gradient via a meta quantizer $M_\phi$, parameterized by $\phi$ and shared across layers:
$$\nabla_W = M_\phi\!\left(\nabla_{Q(\tilde W)}, Q(\tilde W)\right) \frac{\partial \tilde W}{\partial W}.$$
However, such methods not only add many additional parameters but also increase the difficulty of training quantized models. In this paper, we introduce Manifold Quantization (ManiQuant), which trains a quantized model by embedding Riemannian manifolds into STE. As illustrated in Figure 1, we treat the parameter space of a quantized model as a Riemannian manifold, which alleviates the gradient mismatch caused by non-differentiable quantization and helps quantized models achieve more stable convergence and better performance. The main contributions of this work are three-fold. First, we propose Fisher Information Matrix embedding to alleviate the gradient mismatch problem. Second, we define a novel Hyperbolic divergence via a convex function with geometric structure.
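The binarization and STE backward pass described above can be sketched in a few lines. This is a minimal illustrative implementation, not the paper's code; the tanh reparameterization of Zhou et al. (2016) is included for completeness.

```python
import numpy as np

def quantize_sign(w):
    """Binarize weights to {-1, +1}, as in Courbariaux et al. (2016)."""
    return np.where(w >= 0, 1.0, -1.0)

def ste_backward(grad_q, w):
    """Straight-Through Estimator backward pass: copy the gradient w.r.t.
    the quantized weights, masked to the clipping region |w| <= 1."""
    return grad_q * (np.abs(w) <= 1.0)

def dorefa_transform(w):
    """Tanh reparameterization of Zhou et al. (2016): tanh(W)/max|tanh(W)|."""
    t = np.tanh(w)
    return t / np.max(np.abs(t))

w = np.array([-1.5, -0.3, 0.2, 2.0])
g = np.full(4, 0.5)            # upstream gradient w.r.t. quantized weights
print(quantize_sign(w))        # [-1. -1.  1.  1.]
print(ste_backward(g, w))      # [0.  0.5 0.5 0. ]
```

Gradients for the weights outside the clipping region are zeroed, which is exactly the source of the mismatch the paper targets.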
With the constraint of the Hyperbolic divergence, we introduce a weak curvature manifold that forms the background of ManiQuant. Third, building on the second contribution, we propose weak curvature metric embeddings for STE, i.e. a weak curvature gradient that approximates $\nabla_W$ without extra parameters or training complexity.
2 RELATED WORK. 2.1 GENERAL GRADIENT. For a neural network with $l$ layers, the full-precision weight matrix of layer $i$ is denoted $W_i$. Defining the operator $\mathrm{vec}(\cdot)$ that vectorizes a matrix by stacking its columns, the total parameter vector of the network is the column vector $\theta = [\mathrm{vec}(W_1)^\top, \mathrm{vec}(W_2)^\top, \ldots, \mathrm{vec}(W_l)^\top]^\top$. Let $\theta \in \mathbb{R}^n$ be a parameter space on which a loss function $L$ is well-defined. The general gradient used in training is
$$\nabla_\theta L = \frac{\partial L}{\partial \theta}, \qquad (1)$$
which yields the steepest descent method in Euclidean space with an orthonormal coordinate system; the negative gradient is the direction of steepest descent. In the Euclidean case, the Euclidean divergence between two sufficiently close points $\theta$ and $\theta'$ is implicitly assumed:
$$D_E[\theta:\theta'] = \frac{1}{2}\sum_i (\theta_i - \theta'_i)^2, \qquad (2)$$
which is half the squared Euclidean distance induced by the Euclidean metric $\delta_{ij}$, so that
$$ds^2 = 2D_E[\theta : \theta + d\theta] = \sum_i (d\theta_i)^2 = \sum_{i,j}\delta_{ij}\, d\theta_i\, d\theta_j. \qquad (3)$$
2.2 NATURAL GRADIENT. The general gradient updates parameters along the gradient direction without involving a metric tensor for the parameter space of the problem. We now attach a Riemannian metric $G_{ij}(\theta)$ to the parameter space to form a Riemannian manifold $(M, G_{ij})$ (Amari, 1998; 2016).
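The column-stacking vectorization and Euclidean divergence of Eqs. (1)-(3) can be sketched directly; the weight values below are arbitrary illustrative numbers.

```python
import numpy as np

def vec(M):
    return M.flatten(order="F")    # vec(.) stacks the columns of M

W1 = np.array([[1., 2.], [3., 4.]])
W2 = np.array([[5.], [6.]])
theta = np.concatenate([vec(W1), vec(W2)])   # theta = [vec(W1)^T, vec(W2)^T]^T
print(theta)                                  # [1. 3. 2. 4. 5. 6.]

def euclidean_divergence(t1, t2):
    return 0.5 * np.sum((t1 - t2) ** 2)       # D_E[t1 : t2], Eq. (2)

# ds^2 = 2 * D_E for a small uniform increment dtheta = 0.1
print(round(euclidean_divergence(theta, theta + 0.1), 10))   # 0.03
```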
In that case, the steepest descent direction also depends on the quadratic form induced by a small increment $d\theta$ connecting $\theta$ and $\theta + d\theta$:
$$ds^2 = \sum_{i,j} G_{ij}(\theta)\, d\theta_i\, d\theta_j, \qquad (4)$$
where $d\theta_i$ is a component of $d\theta$. Under the constraint of fixed $ds^2$, the steepest descent direction is the one that most decreases $L(\theta + d\theta)$. Intuitively, $ds^2$ measures the Kullback-Leibler (KL) divergence between the two model distributions $p(x|\theta)$ and $p(x|\theta + d\theta)$, which corresponds to the interval between two adjacent points on a Riemannian manifold. Under the Riemannian metric, the KL divergence is well approximated through the Fisher Information Matrix (FIM) via a second-order Taylor expansion (see Appendix B.3) (Ba et al., 2016):
$$ds^2 = 2 D_{KL}\!\left[p(x|\theta) : p(x|\theta + d\theta)\right] \approx d\theta^\top\, \mathbb{E}_{p(x|\theta)}\!\left[\nabla \log p(x|\theta)\, \nabla \log p(x|\theta)^\top\right] d\theta. \qquad (5)$$
The FIM equals the negative expected Hessian of the log likelihood. Furthermore, Amari deduced that the Riemannian metric is given by the FIM (Amari, 1998). By the Lagrangian form, we obtain the natural gradient (see Appendix C.1)
$$\tilde\nabla_\theta L = F^{-1}(\theta)\,\frac{\partial L}{\partial \theta}, \quad \text{where } F(\theta) = \mathbb{E}_{p(x|\theta)}\!\left[\nabla \log p(x|\theta)\, \nabla \log p(x|\theta)^\top\right], \qquad (6)$$
which is the steepest descent method in a Riemannian space; $F(\theta)$ is the FIM of the parameter vector (the natural gradient is also known as the Riemannian gradient). Empirically, an immediate application of the FIM is as a drop-in replacement for the Hessian in second-order optimization algorithms. Note that $\tilde\nabla_\theta L$ reduces to $\nabla_\theta L$ when $G_{ij}(\theta)$ equals the Euclidean metric $\delta_{ij}$, i.e. the identity matrix $I$.
3 MANIFOLD QUANTIZATION. Considering each layer of a quantized model, we write the quantized weight matrix as $\hat W_i$ to distinguish it from the full-precision weight matrix $W_i$. Analogously, we define the quantized parameter vector $\hat\theta$ corresponding to the full-precision parameter vector $\theta$.
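As a toy sketch of the natural gradient of Eq. (6), one can estimate the FIM empirically from per-sample scores of a simple two-parameter Gaussian model and solve for $F^{-1} g$; the loss gradient `g` below is an arbitrary illustrative value, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=200_000)

# Per-sample score: gradient of log N(x | mu, sigma^2) w.r.t. (mu, sigma)
score = np.stack([(x - mu) / sigma**2,
                  ((x - mu)**2 - sigma**2) / sigma**3], axis=1)

F = score.T @ score / len(x)      # empirical E[grad log p  grad log p^T]
g = np.array([0.4, 0.4])          # some Euclidean gradient of a loss
nat_g = np.linalg.solve(F, g)     # natural gradient F^{-1} g, Eq. (6)
print(F)                          # close to [[1/sigma^2, 0], [0, 2/sigma^2]]
```

For this model the exact FIM is $\mathrm{diag}(1/\sigma^2,\, 2/\sigma^2)$, so the natural gradient rescales each coordinate of $g$ by the local curvature, unlike the plain Euclidean gradient.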
During the training of a QNN, the quantization function $Q(\cdot)$ is a one-to-one mapping from full-precision to quantized values, written $\hat\theta = Q(\theta)$. When training a network at low bit width, the design of the quantization itself plays a vital role in the final performance; the remaining parts, e.g. the mapping from an input pattern $\hat a_{i-1}$ to an output $\hat a_i$, can be carried over from full-precision networks:
$$s_i = \hat W_i\, \hat a_{i-1}, \qquad \hat a_i = Q\!\left(f_i(s_i)\right), \qquad (7)$$
where $f_i$ is a non-linear function applied element-wise. To distinguish from the full-precision model, the hat notation " ˆ " denotes a quantity passed through a quantization function, whether the weight matrix $\hat W_i$ or the activation vector $\hat a_i$.
3.1 FISHER INFORMATION MATRIX EMBEDDING FOR STE. The natural gradient is closely related to the FIM and the KL divergence (see Appendix B.3). Since the KL divergence is intrinsic, the natural gradient is likewise invariant under parameter transformations. Viewing the FIM as an $l \times l$ block matrix, where $l$ denotes the number of layers, the natural gradient formula can be introduced to alleviate the gradient mismatch problem when training a QNN and updating its parameters.
Lemma 1 In the Riemannian manifold defined by the KL divergence, the Fisher Information Matrix embedding for the Straight-Through Estimator is given by
$$\tilde\nabla_\theta L \overset{\mathrm{STE}}{=} F^{-1}(\hat\theta)\, \nabla_{\hat\theta} L \circ \mathbb{I}_{|\theta|\le 1} = \mathbb{E}\!\left[\frac{d \log p(x|\hat\theta)}{d\hat\theta}\,\frac{d \log p(x|\hat\theta)}{d\hat\theta}^{\!\top}\right]^{-1} \nabla_{\hat\theta} L \circ \mathbb{I}_{|\theta|\le 1} = \mathbb{E}\!\left[\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat\theta}\right)\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat\theta}\right)^{\!\top}\right]^{-1} \nabla_{\hat\theta} L \circ \mathbb{I}_{|\theta|\le 1}. \qquad (8)$$
Furthermore, $F^{-1}(\hat\theta)$ can be expressed blockwise as
$$\begin{bmatrix} \mathbb{E}\!\left[\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_1}\right)\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_1}\right)^{\!\top}\right] & \cdots & \mathbb{E}\!\left[\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_1}\right)\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_l}\right)^{\!\top}\right] \\ \vdots & \ddots & \vdots \\ \mathbb{E}\!\left[\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_l}\right)\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_1}\right)^{\!\top}\right] & \cdots & \mathbb{E}\!\left[\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_l}\right)\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_l}\right)^{\!\top}\right] \end{bmatrix}^{-1}, \qquad (9)$$
where $\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_i}\right)$ can be expressed through the gradient error $\frac{\partial L}{\partial a_i}$. Proof.
The proof combines STE with Appendix C.1. Since gradient propagation must span the quantized neurons and graphs, we still use STE to update the gradients of the QNN during learning:
$$\mathrm{vec}\!\left(\frac{\partial L}{\partial \hat W_i}\right) = \hat a_{i-1}^{\top} \otimes \left(\frac{\partial L}{\partial a_i} \circ f'_i(s_i)\right) \overset{\mathrm{STE}}{=} \hat a_{i-1}^{\top} \otimes \left(\frac{\partial L}{\partial \hat a_i} \circ \left(f'_i(s_i)\, \mathbb{I}_{|a_i|\le 1}\right)\right), \qquad (10)$$
where $\otimes$ denotes the Kronecker product (since the sizes of $\hat a_{i-1}$ and $\partial L/\partial a_i$ may differ in each layer, the Kronecker product is necessary). Relying on the KL divergence, we treat the parameter space as a Riemannian manifold rather than a Euclidean space throughout the quantization procedure. For neural networks with a million or more parameters, the time complexity of inverting the FIM, a component of the natural gradient, is $O(n^3)$ (Povey et al., 2014). Several prior works compute the natural gradient more efficiently: Roux et al. (2008) decomposed the FIM into multiple diagonal blocks, each approximated by a low-rank matrix; Bastian et al. (2011) likewise used diagonal blocks, each corresponding to one weight matrix; and Martens & Grosse (2015) approximated the FIM by the Kronecker product of two smaller matrices to improve computational efficiency. Even so, FIM embedding for STE with such decomposition methods remains unsuitable for large-scale tasks, and its computational cost is high compared to standard STE.
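The per-layer gradient structure behind Eqs. (9)-(10) can be checked numerically. This sketch uses the column-stacking convention for $\mathrm{vec}(\cdot)$, so the Kronecker ordering may differ from the row-vector form in the text; the batch of per-sample gradients is synthetic, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
vec = lambda M: M.flatten(order="F")   # stack columns

a = rng.normal(size=3)                 # layer input a_{i-1}
c = rng.normal(size=2)                 # backpropagated error dL/ds_i
dL_dW = np.outer(c, a)                 # for s = W a, L = c^T s: dL/dW = c a^T
assert np.allclose(vec(dL_dW), np.kron(a, c))   # vec(c a^T) = a (x) c

# One diagonal block of the empirical FIM, E[vec(dL/dW) vec(dL/dW)^T],
# averaged over a small batch of synthetic per-sample gradients
grads = [np.kron(rng.normal(size=3), rng.normal(size=2)) for _ in range(4)]
F_block = sum(np.outer(gr, gr) for gr in grads) / len(grads)
print(F_block.shape)                   # (6, 6) = (2*3, 2*3)
```

Each block grows quadratically in the layer's parameter count, which is why the paper turns to cheaper approximations than inverting the full matrix of Eq. (9).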
This paper proposes to improve the gradient of the quantization operator in quantization-aware training (QAT). It points out that the Euclidean assumption behind the general gradient (as used by the Straight-Through Estimator) cannot reflect the curvature of the loss surface, and instead revises the gradient to a natural gradient based on manifold learning. Due to the large cost of the Fisher Information Matrix (FIM) required for the natural gradient, it proposes to approximate it by a weak curvature embedding.
SP:0e92cc56d37e6de4f8574a28a9f94d85423730ac
Video Forgery Detection Using Multiple Cues on Fusion of EfficientNet and Swin Transformer
1 INTRODUCTION. Video synthesis techniques based on deep learning Karras et al. (2017; 2018); Thies et al. (2019) have lowered the threshold for manipulating videos, and high-quality deepfake videos that are indistinguishable to the human eye are increasingly available Deepfakes (2020); Faceswap (2019); Fakeapp (2020). Although this technology has been applied to Cineflex, Crytek, and driverless systems with positive effects on society, it is inevitably abused by malicious users to forge indistinguishable false information, which seriously erodes public trust in government and society, causes irreparable social problems, and can even pose a serious threat to politics. Many deepfake detection methods Wang et al. (2019); Kim et al. (2021); Jeon et al. (2020) have been proposed to address this problem, and significant progress has been made. Earlier works explore forgery patterns based on hand-crafted features Ferrara et al. (2012); Pan et al. (2012); Cozzolino et al. (2015) and classify by zooming in on the subtle differences between real and forged images, while most recent learning-based approaches Cozzolino et al. (2017); Chollet (2017a) frame the problem as binary classification of an image or video: a neural network first extracts global image features, which are then fed to a classifier to distinguish genuine from faked images. However, as deep generative methods grow more refined, the generated forged videos are becoming increasingly difficult to discriminate, especially low-quality ones with only a few subtle visual artifacts. Most existing detection methods still focus on specific artifacts, leading to significant challenges for the generalization and effectiveness of detection systems in realistic scenarios.
This creates an urgent need to discover subtle cues that are more critical for detection, and then to design appropriate modules on top of efficient backbone networks that enhance these artifact features and combine them effectively for forgery detection. After systematic analysis, we find three fundamental reasons for the low generalization of existing models: first, the difficulty of capturing general artifact cues and the limitations of datasets in quantity and quality; second, the difficulty of selecting a suitable network model for each kind of feature extraction; third, the failure to utilize the extracted features fully and effectively. In terms of feature cues, we find that although generated videos look realistic with few texture artifacts, the frequency domain, especially the high frequencies, still reveals significant differences, even for low-quality videos; many forgery methods aim only at defects in human vision, and few target the frequency domain. We also find that texture information located in the shallow layers of the detection network, rather than higher-level semantic information, is particularly critical for detection. To eliminate the effect of luminance in RGB on texture feature extraction, we convert the images from RGB to YCbCr, remove the luminance (Y) channel, and combine the Cb and Cr channels with the high-frequency components. In addition, we observe that the optical flow of a real video changes significantly while that of a deepfake video changes little; coupled with the temporal consistency of consecutive frames in real video, we therefore choose high frequency, texture, and optical flow as the feature cues for video forgery detection.
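The luminance-removal step can be sketched as follows. This is a minimal illustration using the standard ITU-R BT.601 conversion constants; the paper does not specify its exact conversion, so treat the coefficients as an assumption.

```python
import numpy as np

def rgb_to_cbcr(img):
    """Convert an 8-bit RGB image (H, W, 3) to YCbCr (BT.601 constants)
    and keep only the chrominance channels (Cb, Cr), dropping Y."""
    r, g, b = (img[..., k].astype(float) for k in range(3))
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([cb, cr], axis=-1)

white = np.full((1, 1, 3), 255, dtype=np.uint8)
print(np.round(rgb_to_cbcr(white), 6))   # [[[128. 128.]]] -- grey carries no chroma
```

Greyscale pixels map to the neutral chroma value 128, so any remaining structure in (Cb, Cr) reflects colour information independent of brightness, which is the point of dropping Y.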
For features such as high frequency and texture, which require attention to local details, we choose EfficientNet-B5 Tan & Le (2019), a CNN that excels at local feature extraction and offers higher accuracy and better performance than other CNNs, as one backbone network. For features such as optical flow, which reflect inter-frame temporality, we choose Swin Transformer Liu et al. (2021), a Transformer that excels at global representation, handles images at all scales, is computationally simple and flexible, and shows excellent performance compared with other Transformers, as the other backbone network. In addition, we introduce an attention mechanism and a new loss function for EfficientNet-B5 to extract more robust face features. In summary, we propose ENST, a novel multiple-cue video forgery detection model that combines high-frequency, low-level texture, and optical flow cues on a fusion of EfficientNet-B5 and Swin Transformer, fully integrating multiple key features and using advanced backbone networks for extraction. To demonstrate the effectiveness of ENST, we conduct extensive experiments on FaceForensics++ Rössler et al. (2019) and Celeb-DF (v2) Li et al. (2019), together with an ablation study comparing various settings. Comprehensive experiments indicate that ENST is highly competitive. The contributions of this paper are as follows: 1. We investigate current video forgery detection methods in depth and systematically, identify three reasons for their poor generalization ability, discover several artifact cues for forgery detection, and use these cues to design an efficient network structure addressing those reasons. 2.
We propose a novel network structure, ENST, which exploits the local feature extraction ability of EfficientNet-B5 and the global relationship modeling ability of Swin Transformer, integrating them so that they complement each other during data processing. In addition, we introduce an attention module and a new loss function for EfficientNet-B5 to enhance its feature extraction. 3. Our experiments on FaceForensics++ and Celeb-DF (v2) demonstrate that ENST achieves superior classification performance and generalization compared to other state-of-the-art methods. The rest of this paper is organized as follows. Section 2 presents related work on video forgery detection and the motivation for the proposed ENST. Section 3 details the proposed model. Section 4 reports the experimental results and analysis. Section 5 concludes the paper.
2 RELATED WORK. 2.1 VIDEO FAKE CUES. In recent years, various deep learning forgery methods have produced synthetic face images or videos Suwajanakorn et al. (2017; 2015) that have caused a degree of harm to society. Deep forgery detection has therefore received extensive attention in the vision community, and many detection methods have been proposed. Some methods focus on mining specific artifacts generated by the forgery process, such as color-space Li et al. (2018); McCloskey & Albright (2018) and shape cues. Many deep learning methods Marra et al. (2018); Yu et al. (2018); Zhou et al. (2018) use deep neural networks to extract high-level semantic information from the spatial domain and then classify a given image or video. Other methods convert the image from the spatial domain to the frequency domain Stuchi et al. (2017); Franzen (2018), capturing information that is valuable for forgery detection. Stuchi et al.
(2017) employ a fixed set of filters to extract different ranges of frequency information and then adopt a fully connected layer to obtain classification results. Durall et al. (2019) extract frequency-domain information using the discrete Fourier transform (DFT) and average the amplitudes over different frequency bands. Other methods extract statistical features, such as spatial textures Pan et al. (2009); Conotter (2011) and transform-domain coefficient distributions Chen et al. (2009); Lyu & Farid (2005). Video-level detection methods exploit differences in the temporal dimension of video frames Sabir et al. (2019); Guera & Delp (2018), such as Wang & Dantcheva (2020) using spatial and motion information, and Chintha et al. (2020); Amerini et al. (2020); Caldelli et al. (2021) using optical flow for video forgery detection. While the above methods are limited to specific cues, we take full advantage of these cues jointly. 2.2 DEEPFAKE DETECTION STRUCTURES. Many deep learning models extract features and discover forgery patterns directly, without targeting a specific forgery cue or artifact, to classify real and fake images or videos and detect deep forgeries created by arbitrary methods. EfficientNet Tan & Le (2019) achieves higher accuracy by combining a compound model-scaling method with neural architecture search to jointly optimize network depth, network width, and input image resolution, giving it higher accuracy and better performance than other advanced CNNs. Each layer of Swin Transformer computes self-attention within shifted windows, enabling cross-window connectivity at all image scales. Swin Transformer is flexible and efficient, with only linear complexity with respect to image size.
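The DFT-based high-frequency extraction mentioned above (Stuchi et al., 2017; Durall et al., 2019) can be sketched with a hard high-pass mask in the spectrum; the `cutoff` fraction is a hypothetical parameter, not a value from any of the cited papers.

```python
import numpy as np

def high_frequency(img, cutoff=0.25):
    """Zero out spectral components within `cutoff` of the DC term
    (as a fraction of the half-spectrum) and transform back."""
    f = np.fft.fftshift(np.fft.fft2(img))          # centre the spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist > cutoff * min(h, w) / 2           # hard high-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

flat = np.ones((32, 32))
print(np.abs(high_frequency(flat)).max())          # ~0: no high frequencies
```

A constant image is annihilated while sharp structure survives, which is why high-pass residues expose generator artifacts that are invisible in the raw pixels.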
Convolution modules in CNNs are good at extracting local details, and as the layers deepen, the receptive field widens, making them well suited to image processing; but to capture global information, a detection model often has to stack many convolutional layers. Attention in Transformers, by contrast, is good at grasping the whole but requires a large amount of training data. Conformer Peng et al. (2021) takes a parallel and complementary approach, combining a CNN branch and a Transformer branch to preserve the features extracted by both, achieving performance beyond CNN and ViT Dosovitskiy et al. (2020) at comparable parameter complexity.
3 PROPOSED METHOD. In this section, we outline the principles and details of the different parts of the proposed method following the data-processing pipeline. It is challenging for most methods to extract sufficiently comprehensive features with suitable network models and then integrate them effectively for forgery detection. Inspired by Conformer Peng et al. (2021), we choose two of the most efficient models among CNNs and Transformers, EfficientNet-B5 and Swin Transformer, as baselines and combine them for video forgery detection. EfficientNet-B5 extracts local features of frames, such as high-frequency and texture features, exploiting its strength at focusing on local detail. Swin Transformer extracts artifact features from the optical-flow maps between frames to obtain a global representation, using the optical-flow features that reflect the temporal consistency between frames. We design a parallel, interactive deep forgery detection architecture, ENST, to learn a latent representation that distinguishes real from forged faces; the pipeline of ENST is shown in Figure 1. 3.1 PRE-PROCESSING. We first extract the frames of the input video, and then use Multitask Cascaded CNNs (MTCNN) Zhang et al.
(2016) to detect and extract the faces present in each frame, crop the face in each frame, and resize it to 224×224 pixels. We then normalize the crops to zero mean and unit variance to obtain the extracted faces. After that, we operate on single frames first: the face in frame i is fed to the two branches after this basic feature extraction.
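The resize-and-normalize step can be sketched as below. This is a hedged stand-in: `preprocess_face` and its nearest-neighbour resize are illustrative only, since the paper uses MTCNN face crops and does not specify the interpolation method.

```python
import numpy as np

def preprocess_face(face, size=224):
    """Nearest-neighbour resize to size x size, then normalize the crop
    to zero mean and unit variance (a small epsilon avoids division by 0)."""
    h, w = face.shape[:2]
    ys = np.arange(size) * h // size               # nearest source rows
    xs = np.arange(size) * w // size               # nearest source cols
    resized = face[ys][:, xs].astype(float)
    return (resized - resized.mean()) / (resized.std() + 1e-8)

face = np.random.rand(180, 150, 3)                 # stand-in for an MTCNN crop
out = preprocess_face(face)
print(out.shape)                                   # (224, 224, 3)
```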
The authors propose a deepfake detection method built on EfficientNet-B5 and Swin Transformer, together with a new loss function and an attention mechanism for EfficientNet-B5. The proposed method demonstrates improved accuracy and generalizability compared to the baselines.
SP:6043ac6b1b3188287938e4bc366a90c0f7652633
Video Forgery Detection Using Multiple Cues on Fusion of EfficientNet and Swin Transformer
1 INTRODUCTION . Video synthesis techniques based on deep learning Karras et al . ( 2017 ; 2018 ) ; Thies et al . ( 2019 ) have reduced the threshold for people to manipulate videos , and the high-quality deepfake videos that are indistinguishable to the human eye are increasingly available Deepfakes ( 2020 ) ; Faceswap ( 2019 ) ; Fakeapp ( 2020 ) . Although this technology has been applied to Cineflex , Crytek , and driverless technology , which has had a positive effect on society . Inevitably , it is illegally abused by some malicious users to forge indistinguishable false information , which seriously affects the level of public trust in the government and society , causes irreparable social problems , and can even pose a serious threat to politics . Many deepfake detection methods Wang et al . ( 2019 ) ; Kim et al . ( 2021 ) ; Jeon et al . ( 2020 ) have been proposed to address this problem and significant progress has been made . Previous works explore forgery patterns based on manual features Ferrara et al . ( 2012 ) ; Pan et al . ( 2012 ) ; Cozzolino et al . ( 2015 ) and then classify them by zooming in on the subtle differences between the real and forged images . While most of the recent learning-based approaches Cozzolino et al . ( 2017 ) ; Chollet ( 2017a ) define the problem as a binary classification of an image or video with a process of first extracting the global features of the image using a neural network and then feeding them to a classifier to distinguish between genuine and faked images . However , with the increasing refinement of deep generative methods , the generated forged videos are becoming increasingly difficult to discriminate , especially some low-quality ones with only a few subtle visual artifacts . Most existing detection methods still focus on specific artifacts , leading to significant challenges in the generalization and effectiveness of detection systems in realistic scenarios . 
This creates an urgent need to discover subtle cues that are more critical for detection , and then to extract and incorporate appropriate modules with efficient backbone networks to enhance these artifact features and combine them effectively for forgery detection . After systematic analysis , we find three fundamental reasons for the low generalization of the model . The first is the difficulty of capturing general artifact cues and the limitations of the dataset in terms of quantity and quality . The second is the limitations of selecting a suitable network model for specific feature extraction . The third is the limitations of fully and effectively utilizing the extracted features . In terms of feature cues , we find that although the generated videos are realistic with few artifacts in the texture , the frequency domain , especially at high frequency , still reveals significant differences , even for low-quality videos , and many forgery methods are designed to generate videos that are slightly defective to human vision , with few methods targeting the frequency domain . In addition , it is found that texture information located at the shallow level of the detection network rather than the higher-level semantic information is particularly critical for detection . To eliminate the effect of luminance in RGB on texture feature extraction , we convert the images from RGB to YCbCr and remove the luminance ( Y ) channel , and combine Cb and Cr channels with high frequencies . In addition , we find a characteristic that the optical flow of the real video changes significantly while the optical flow of the deepfake video has little change , and coupled with the temporal consistency of the continuous frames of the real video , we choose the high frequency , texture , and optical flow as the feature cues for video forgery detection . 
For features such as high frequency and texture , which require attention to local details , we choose EfficientNet-B5 Tan & Le ( 2019 ) , a CNN class network that is superior in local feature extraction , as one of the backbone networks , which has higher accuracy and performance compared with other CNNs . For features such as optical flow , which reflect the inter-frame temporality , we choose Swin Transformer Liu et al . ( 2021 ) , a Transformer class network that is superior in global representation , as another backbone network , which applies to all scales of images , simple and flexible in computation , and shows excellent performance compared with other Transformers . In addition , we introduce the attention mechanism and a new loss function for EfficientNet-B5 to extract more robust face features . In summary , in this paper , we propose a novel multiple-cue video forgery detection model ENST using a combination of high frequency , low-level texture , and optical flow cues , based on the fusion of EfficientNet-B5 and Swin Transformer , which fully integrates multiple key features and uses an advanced backbone network for extraction . To demonstrate the effectiveness of ENST , we conduct extensive experiments on FaceForensics++ Rssler et al . ( 2019 ) and Celeb-DF ( v2 ) Li et al . ( 2019 ) , as well as a comparison of various settings in an ablation study . Comprehensive experiments indicate that ENST has excellent competitive properties . The contributions of this paper are as follows : 1 . We investigate the current video forgery detection methods in-depth and systematically , and find three reasons for the poor generalization ability of these methods , as well as discover several artifact cues in forgery detection , and use these cues to design an efficient network structure for the above reasons . 2 . 
We propose a novel network structure , ENST , which exploits the local feature extraction ability of EfficientNet-B5 and the global relationship modelling ability of Swin Transformer , integrating the two so that they complement each other during data processing . In addition , we introduce an attention module and a new loss function for EfficientNet-B5 to enhance its feature extraction . 3 . Our experiments on FaceForensics++ and Celeb-DF ( v2 ) demonstrate that ENST achieves superior classification performance and generalization compared with other state-of-the-art methods . The rest of this paper is organized as follows . Section 2 presents related work in video forgery detection and describes the motivation for the proposed ENST . Section 3 details the proposed model . Section 4 describes the experimental results and analysis . Section 5 concludes the paper . 2 RELATED WORK . 2.1 VIDEO FAKE CUES . In recent years , various deep learning forgery methods have produced synthetic images or videos of faces Suwajanakorn et al . ( 2017 ; 2015 ) that have caused some degree of harm to society . The problem of deepfake detection has received extensive attention in the vision community , and many detection methods have been proposed . Some methods focus on mining specific artifacts generated by the forgery process , such as color-space Li et al . ( 2018 ) ; McCloskey & Albright ( 2018 ) and shape cues . Many deep learning methods Marra et al . ( 2018 ) ; Yu et al . ( 2018 ) ; Zhou et al . ( 2018 ) use deep neural networks to extract high-level semantic information from the spatial domain and then classify a given image or video . Some methods convert the image from the spatial domain to the frequency domain Stuchi et al . ( 2017 ) ; Franzen ( 2018 ) , capturing information that is valuable for forgery detection . Stuchi et al .
( 2017 ) employ a fixed set of filters to extract different ranges of frequency information and then adopt a fully connected layer to obtain classification results . Durall et al . ( 2019 ) extract frequency-domain information using the DFT and average the amplitudes over different frequency bands . Other methods extract statistical features , such as features from spatial textures Pan et al . ( 2009 ) ; Conotter ( 2011 ) and transform-domain coefficient distributions Chen et al . ( 2009 ) ; Lyu & Farid ( 2005 ) . Video detection methods can exploit differences in the temporal dimension across video frames Sabir et al . ( 2019 ) ; Güera & Delp ( 2018 ) ; for example , Wang & Dantcheva ( 2020 ) use spatial and motion information , and Chintha et al . ( 2020 ) ; Amerini et al . ( 2020 ) ; Caldelli et al . ( 2021 ) use optical flow for video forgery detection . While each of the above methods is limited to specific cues , we take full advantage of several of them . 2.2 DEEPFAKE DETECTION STRUCTURES . Many deep learning models have been proposed to extract features and discover forgery patterns directly , without targeting a specific forgery cue or artifact , so as to classify real and fake images or videos and detect forgeries created by arbitrary methods . EfficientNet Tan & Le ( 2019 ) achieves higher accuracy by combining a compound model-scaling method with neural architecture search to jointly optimize network depth , network width , and input image resolution , yielding higher accuracy and efficiency than other advanced CNNs . Each layer of Swin Transformer computes self-attention within local windows , with shifted windows enabling cross-window connections at all image scales . Swin Transformer is flexible and efficient , with complexity linear in image size .
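The averaging of DFT amplitudes over frequency bands attributed to Durall et al. above can be sketched with a short azimuthal average. This is an illustrative numpy-only reconstruction, not their exact implementation; grayscale input and the function name `radial_spectrum` are our assumptions.

```python
import numpy as np

def radial_spectrum(img):
    """Azimuthally averaged amplitude spectrum of a 2-D grayscale image:
    2-D DFT -> shift zero frequency to the center -> average |F| over
    rings of equal integer radius, giving one amplitude per frequency band."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)  # radial band index per pixel
    n_bands = r.max() + 1
    sums = np.bincount(r.ravel(), weights=amp.ravel(), minlength=n_bands)
    counts = np.bincount(r.ravel(), minlength=n_bands)
    return sums / counts

spec = radial_spectrum(np.random.default_rng(0).random((64, 64)))
print(spec.shape)  # one averaged amplitude per radial frequency band
```

High-frequency forgery artifacts show up as deviations in the tail of this 1-D curve, which is what makes it a convenient detection feature.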
The convolution modules in a CNN are good at extracting local details , and as layers get deeper the receptive field widens , which suits image processing ; but to capture global information a detection model often has to stack many convolutional layers . Attention in a Transformer , by contrast , is good at grasping the whole but requires a large amount of training data . Conformer Peng et al . ( 2021 ) uses a parallel and complementary approach , combining CNN and Transformer branches so as to preserve the features extracted by both , and achieves performance beyond CNN and ViT Dosovitskiy et al . ( 2020 ) at comparable parameter complexity . 3 PROPOSED METHOD . In this section , we describe the principles and details of the proposed method , following the order of data processing . It is challenging for most methods to extract sufficiently comprehensive features with suitable network models and then integrate them effectively for forgery detection . Inspired by Conformer Peng et al . ( 2021 ) , we choose two of the most efficient models among CNNs and Transformers , EfficientNet-B5 and Swin Transformer , as baselines and combine them for video forgery detection . EfficientNet-B5 , which focuses well on local features , extracts local features of frames such as high-frequency and texture features . Swin Transformer extracts artifact features from the optical flow maps between frames to obtain a global representation , where the optical flow features reflect the temporal consistency between frames . We design a parallel interactive deep forgery detection architecture , ENST , to learn a latent representation that distinguishes real from forged faces ; the pipeline of ENST is shown in Figure 1 . 3.1 PRE-PROCESSING . We first extract the frames of the input video , and then use Multitask Cascaded CNNs ( MTCNN ) Zhang et al .
( 2016 ) to detect the faces present in each frame , crop them , and resize the crops to 224×224 pixels . We then normalize the crops to zero mean and unit variance to obtain the extracted faces . After that , we operate on single frames : the face in frame i is fed to the two branches after the basic feature extraction process .
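The resize-and-standardize tail of this pipeline can be sketched as follows. MTCNN itself is not reimplemented here; we assume a face crop is already given, and nearest-neighbour resizing is our stand-in for whatever interpolation the paper actually uses.

```python
import numpy as np

def resize_nearest(img, size=224):
    """Nearest-neighbour resize of an HxWxC image to size x size
    (a simple stand-in for the paper's unspecified interpolation)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def standardize(img):
    """Normalize a face crop to zero mean and unit variance."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

# Pretend this is an MTCNN face crop of arbitrary size.
face = np.random.default_rng(0).integers(0, 256, (300, 260, 3), dtype=np.uint8)
x = standardize(resize_nearest(face))
print(x.shape)  # -> (224, 224, 3), with mean ~0 and std ~1
```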
This paper addresses the problem of video deepfake detection. The idea is to rely on high-frequency, texture, and optical flow cues to gain generalization. To this end, a dual-branch detection approach is proposed: low-level features are extracted by EfficientNet-B5, which also includes an attention mechanism, while temporal inconsistencies are captured by Swin Transformer. Experiments are carried out on FaceForensics++ and Celeb-DF (v2) and show better generalization ability with respect to the state of the art.
SP:6043ac6b1b3188287938e4bc366a90c0f7652633
Video Forgery Detection Using Multiple Cues on Fusion of EfficientNet and Swin Transformer
1 INTRODUCTION . Video synthesis techniques based on deep learning Karras et al . ( 2017 ; 2018 ) ; Thies et al . ( 2019 ) have lowered the threshold for manipulating videos , and high-quality deepfake videos that are indistinguishable to the human eye are increasingly available Deepfakes ( 2020 ) ; Faceswap ( 2019 ) ; Fakeapp ( 2020 ) . Although this technology has been applied to Cineflex , Crytek , and driverless technology , with positive effects on society , it is inevitably abused by malicious users to forge indistinguishable false information , which seriously erodes public trust in government and society , causes irreparable social problems , and can even pose a serious threat to politics . Many deepfake detection methods Wang et al . ( 2019 ) ; Kim et al . ( 2021 ) ; Jeon et al . ( 2020 ) have been proposed to address this problem , and significant progress has been made . Earlier works explore forgery patterns based on hand-crafted features Ferrara et al . ( 2012 ) ; Pan et al . ( 2012 ) ; Cozzolino et al . ( 2015 ) and classify by zooming in on the subtle differences between real and forged images . Most recent learning-based approaches Cozzolino et al . ( 2017 ) ; Chollet ( 2017a ) define the problem as binary classification of an image or video : a neural network first extracts global features of the image , which are then fed to a classifier to distinguish genuine from fake . However , with the increasing refinement of deep generative methods , forged videos are becoming increasingly difficult to discriminate , especially low-quality ones with only a few subtle visual artifacts . Most existing detection methods still focus on specific artifacts , leading to significant challenges in the generalization and effectiveness of detection systems in realistic scenarios .
This paper proposes a new video forgery detection method (ENST) that combines multiple forgery cues, including high frequency, low-level texture, and optical flow. ENST employs a two-branch network: an EfficientNet-B5 branch for high-frequency and texture information and a Swin Transformer branch for optical flow information. The motivation behind this design is that forgery traces are more likely to appear in texture regions where human eyes cannot easily catch them, and that the optical flow of forged videos varies little compared with real videos.
SP:6043ac6b1b3188287938e4bc366a90c0f7652633
Diverse Client Selection for Federated Learning via Submodular Maximization
1 INTRODUCTION . Federated learning ( FL ) involves collaboratively training a machine learning model across a large number of clients while keeping client data local . Recent approaches to this problem repeatedly alternate between device-local ( stochastic ) gradient descent steps and server-side aggregation of the clients ' model updates ( McMahan et al. , 2017 ) . In cross-device settings , a server and its model usually serve several thousand devices , so the communication between clients and the server can be costly and slow , forming a huge impediment to FL ' s viability . One property of the collection of clients that can mitigate these problems , however , is often not exploited : redundancy . Specifically , many clients might provide similar , and thus redundant , gradient information for updating the server model , so transmitting all such updates to the server wastes communication and computational resources . How best to select a representative and more informative client set while adhering to practical constraints in federated learning is still an open challenge . Although several selection criteria have been investigated in recent literature , e.g. , sampling clients with probabilities proportional to their local dataset size ( McMahan et al. , 2017 ) , sampling clients with larger update norm with higher probability ( Chen et al. , 2020 ) , and selecting clients with higher losses ( Balakrishnan et al. , 2020 ; Cho et al. , 2020 ) , the redundancy and similarity of the clients ' updates sent to the server is not represented or exploited in these approaches . In particular , communicating multiple clients ' updates to the server may cause statistical and system inefficiency if too many of them are too similar to each other . The commonly studied modular score/probability for each individual client is incapable of capturing information that is a property of a group of clients .
Ideally , a diverse set of clients would be selected , thereby increasing the impact of under-represented clients that contribute different information and thereby improving fairness ; this , in fact , is a topic of increasing interest ( Mohri et al. , 2019 ; Cho et al. , 2020 ; Dennis et al. , 2021 ) . In this paper , we introduce diversity to client selection in FL , namely a strategy to measure how well a selected subset of clients represents the whole when aggregated on the server . Specifically , in each communication round , we aim to find a subset whose aggregated model update approximates the aggregate update over all clients . By doing this , we limit the variance that subset selection introduces into the model updates across rounds , which could otherwise slow the learning process . Inspired by the CRAIG method of coreset selection for efficient machine learning training ( Mirzasoleiman et al. , 2020 ) , we derive an upper bound on the approximation error as a supermodular set function ( in particular , the min-form of the facility location function ( Cornuéjols et al. , 1977 ) ) evaluated on the selected subset . We can then apply submodular maximization ( Fujishige , 2005 ; Iyer et al. , 2013 ; Wei et al. , 2014 ) on a complementary submodular function to ( approximately ) minimize the error upper bound . We employ greedy selection ( Nemhauser et al. , 1978 ) of a subset of clients according to the marginal gain of the submodular function to achieve a solution with a provable approximation guarantee ( Conforti & Cornuejols , 1984 ) . By integrating diverse client selection into the most commonly studied FL scheme , Federated Averaging ( FedAvg ) ( McMahan et al. , 2017 ) , we propose DivFL , which applies global model aggregation over a selected subset of clients after multiple local steps on every client .
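The FedAvg server step that DivFL builds on is a p_k-weighted average of the clients' models. A minimal numpy sketch, with all values illustrative and the helper name `fedavg_aggregate` ours:

```python
import numpy as np

def fedavg_aggregate(client_models, p):
    """Server-side FedAvg step: weighted average of client model vectors,
    with weights p_k summing to 1 (e.g. fractions of training samples)."""
    p = np.asarray(p, dtype=np.float64)
    assert np.isclose(p.sum(), 1.0)
    return np.einsum('k,kd->d', p, np.asarray(client_models, dtype=np.float64))

# Three clients with 2-parameter models; p_k proportional to local dataset size.
models = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
sizes = np.array([10, 30, 60])
w = fedavg_aggregate(models, sizes / sizes.sum())
print(w)  # -> [1.3 1.5]
```

DivFL replaces the full sum over all clients with a weighted sum over the selected subset, which is what motivates the approximation bound derived below in Section 3.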
We present a theoretical convergence analysis of DivFL and show its tolerance to heterogeneity of the data distributions across clients and to large numbers of local steps . Our method differs from the CRAIG method in that selection is performed based on model updates ( involving multiple local epochs at the clients ) . In addition , our approach allows for partial device participation , where the server does not have access to all data at any communication round , as is standard in FL ( McMahan et al. , 2017 ) . In experiments , we compare DivFL with other client selection approaches on both a synthetic dataset and FEMNIST , where our method excels in convergence , fairness , and learning efficiency . 2 BACKGROUND AND RELATED WORK . We consider a typical federated learning objective : $\min_w f(w) = \sum_{k=1}^{N} p_k F_k(w)$ , where for each client $k \in [N]$ , $p_k$ is a pre-defined weight ( such that $\sum_{k=1}^{N} p_k = 1$ ) that can be set to $1/N$ or to the fraction of training samples , and $F_k$ is the client-specific empirical loss . While there are various possible modeling approaches , we consider this canonical objective of fitting a single global model to the non-identically distributed data across all clients ( McMahan et al. , 2017 ) . Client Selection in Federated Learning . Client sampling is a critical problem , particularly for cross-device settings where it is prohibitive to communicate with all devices . Two common ( or default ) strategies are ( a ) sampling clients based on the number of local data points and uniformly averaging the model updates , and ( b ) sampling clients uniformly at random and aggregating the model updates with weights proportional to the local sample counts ( Li et al. , 2020 ) . There is also recent work proposing advanced sampling techniques to incorporate dynamic systems constraints , accelerate the convergence of federated optimization , or obtain a better model with higher accuracy ( Nishio & Yonetani , 2019 ; Ribero & Vikalo , 2020 ; Cho et al.
, 2020 ; Lai et al. , 2020 ) . We investigate client selection through the lens of encouraging client diversity at each communication round , which largely remains unexplored in previous work . ( Following conventions , we use the term ' client ' for the problem of client selection ; throughout the paper , we use ' devices ' and ' clients ' interchangeably . ) The closest client selection method to ours is based on clustering ( e.g. , selecting representative clients from separate clusters ( Dennis et al. , 2021 ) ) . We note that performing ( private ) clustering in federated settings is still an open problem , and our method can be viewed as a soft version of dynamic clustering at each round ( discussed in the next paragraph ) . The benefit of gradient ( or model ) diversity has been demonstrated in other related contexts , such as scaling up mini-batch stochastic gradient descent ( SGD ) ( Yin et al. , 2018 ) . Enforcing sample or gradient diversity during optimization also implicitly places more emphasis on under-represented sub-populations of clients , and can promote fairness defined as representation disparity ( Hashimoto et al. , 2018 ) . Similar to previous work ( e.g. , Cho et al. , 2020 ; Balakrishnan et al. , 2020 ) , we observe in Section 5 that our approach yields fairer solutions across the network . Diverse Subset Selection via Submodularity . Modular scores have been widely studied for subset selection in machine learning and federated learning , e.g. , a utility score for each sample or client , often measured by the loss . However , the diversity of a subset cannot be fully captured by such modular scores , since there is no interaction between scores . Diversity is often well modeled by a diminishing-returns property , i.e. , the ( marginal ) gain an element brings to a subset diminishes as more elements are added to the subset .
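The diminishing-returns property can be checked numerically on a concrete submodular function. The sketch below evaluates the facility location function F(S) = sum_k max_{i in S} sim(k, i) (the function used later in the paper) on toy data; the similarity construction and function names are our illustrative assumptions.

```python
import numpy as np

def facility_location(S, sim):
    """F(S) = sum_k max_{i in S} sim[k, i], with F(empty set) = 0.
    A canonical monotone submodular function: an element v helps a small
    set A at least as much as it helps any superset B of A."""
    if not S:
        return 0.0
    return float(sim[:, list(S)].max(axis=1).sum())

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))
d = np.linalg.norm(X[:, None] - X[None], axis=-1)
sim = d.max() - d  # nonnegative similarity: large when two points are close

# Diminishing returns: for A subset of B and v outside B,
# F(A + v) - F(A) >= F(B + v) - F(B).
A, B, v = [0], [0, 1, 2], 3
gain_A = facility_location(A + [v], sim) - facility_location(A, sim)
gain_B = facility_location(B + [v], sim) - facility_location(B, sim)
print(gain_A >= gain_B)  # -> True
```

The inequality holds for any similarity matrix, which is exactly why greedy maximization of this function comes with an approximation guarantee.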
There exists a rich and expressive family of functions that are natural for measuring diversity and have the diminishing-returns property : given a finite ground set $V$ of size $n$ , any subsets $A \subseteq B \subseteq V$ , and any $v \notin B$ , a set function $F : 2^V \to \mathbb{R}$ is submodular if $F(A \cup \{v\}) - F(A) \ge F(B \cup \{v\}) - F(B)$ . ( 1 ) This says that $v$ is no less valuable to the smaller set $A$ than to the larger set $B$ . The marginal gain of $v$ conditioned on $A$ is denoted $f(v \mid A) \triangleq f(A \cup \{v\}) - f(A)$ and reflects the importance of $v$ to $A$ . Submodular functions ( Fujishige , 2005 ) have been widely used as diversity models ( Lin & Bilmes , 2011 ; Batra et al. , 2012 ; Prasad et al. , 2014 ; Gillenwater et al. , 2012 ; Bilmes & Bai , 2017 ) . Maximizing a submodular function usually encourages diversity and reduces the redundancy of a subset . This property has been utilized for data selection in active learning ( Guillory & Bilmes , 2011 ) , curriculum learning ( Zhou & Bilmes , 2018 ) , mini-batch partitioning ( Wang et al. , 2019 ) , gradient approximation ( Mirzasoleiman et al. , 2020 ) , etc . Although the number of possible subsets $A$ of size $k$ is $\binom{n}{k}$ , so enumerating them all to find the maximum is intractable , submodularity admits fast approximate algorithms ( Nemhauser et al. , 1978 ; Minoux , 1978 ; Mirzasoleiman et al. , 2015 ) that find an approximately optimal $A$ with provable bounds ( Nemhauser et al. , 1978 ; Conforti & Cornuejols , 1984 ) . Despite its success in data selection , submodularity has not been explored for client selection in federated learning . Encouraging diversity amongst the local gradients ( or model updates ) of selected clients can effectively reduce redundant communication and promote fairness . Moreover , the FL setting raises several new challenges , e.g.
, ( 1 ) it is unclear which submodular function to optimize and in which space to measure the similarity/diversity between clients ; ( 2 ) what convergence guarantee can be obtained under practical assumptions such as heterogeneity among clients ; and ( 3 ) what are the effects of outdated client selection due to communication constraints ? 3 DIVERSE CLIENT SELECTION . In this section , we introduce " federated averaging with diverse client selection " ( DivFL ) , a method that incorporates diverse client selection into the most widely studied FL scheme , federated averaging ( FedAvg ) . We first derive a combinatorial objective for client selection via an approximation of the full communication from all clients , which naturally takes the form of a facility location function in gradient space that can be optimized by submodular maximization . We then present the standard greedy algorithm that optimizes this objective by selecting a diverse subset of clients at every communication round . 3.1 APPROXIMATION OF FULL COMMUNICATION . We aim to find a subset $S$ of clients whose aggregated gradient approximates the full aggregation over all $N$ clients $V = [N]$ . To formulate this problem , we follow the logic of Mirzasoleiman et al . ( 2020 ) . Given a subset $S$ , we define a mapping $\sigma : V \to S$ such that the gradient information $\nabla F_k(v_k)$ from client $k$ is approximated by the gradient information from a selected client $\sigma(k) \in S$ . For $i \in S$ , let $C_i \triangleq \{k \in V \mid \sigma(k) = i\}$ be the set of clients approximated by client $i$ , and let $\gamma_i \triangleq |C_i|$ . The full aggregated gradient can be written as $$\sum_{k \in [N]} \nabla F_k(v_k) = \sum_{k \in [N]} \big[ \nabla F_k(v_k) - \nabla F_{\sigma(k)}(v_{\sigma(k)}) \big] + \sum_{k \in S} \gamma_k \nabla F_k(v_k) . \quad (2)$$ Subtracting the second term from both sides , taking norms , and applying the triangle inequality , we obtain an upper bound on the error of approximating the aggregated gradient by $S$ , i.e.
, $$\Big\| \sum_{k \in [N]} \nabla F_k(v_k) - \sum_{k \in S} \gamma_k \nabla F_k(v_k) \Big\| \le \sum_{k \in [N]} \big\| \nabla F_k(v_k) - \nabla F_{\sigma(k)}(v_{\sigma(k)}) \big\| . \quad (3)$$ The above inequality holds for any feasible mapping $\sigma$ , since the left-hand side does not depend on $\sigma$ . We can therefore take the minimum of the right-hand side over $\sigma(k)$ for all $k \in [N]$ , i.e. , $$\Big\| \sum_{k \in [N]} \nabla F_k(v_k) - \sum_{k \in S} \gamma_k \nabla F_k(v_k) \Big\| \le \sum_{k \in [N]} \min_{i \in S} \big\| \nabla F_k(v_k) - \nabla F_i(v_i) \big\| \triangleq G(S) . \quad (4)$$ The right-hand side provides a relaxed objective $G(S)$ for minimizing the approximation error on the left-hand side . Minimizing $G(S)$ ( or maximizing $\bar{G}$ , a constant minus $G$ ) equals maximizing a well-known submodular function , the facility location function ( Cornuéjols et al. , 1977 ) . To restrict the communication cost , we limit the number of selected clients to at most $K$ , i.e. , $|S| \le K$ . This yields a submodular maximization problem under a cardinality constraint , which is NP-hard , but a solution with a $1 - e^{-1}$ approximation bound can be achieved via the greedy algorithm ( Nemhauser et al. , 1978 ) .
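The selection step described above can be sketched end to end on toy gradients: greedily grow S to shrink G(S), then recover the mapping sigma and the weights gamma_i. This is an illustrative sketch of the objective only (plain greedy, no lazy evaluation); the variable and function names are ours.

```python
import numpy as np

def G(S, dist):
    """G(S) = sum_k min_{i in S} ||grad_k - grad_i|| (the bound in Eq. 4)."""
    return float(dist[:, list(S)].min(axis=1).sum())

def select_clients(grads, K):
    """Greedily add the client that most decreases G(S), then derive the
    induced mapping sigma (each client -> nearest selected client) and
    the aggregation weights gamma_i = |C_i|."""
    dist = np.linalg.norm(grads[:, None] - grads[None], axis=-1)
    S = []
    for _ in range(K):
        rest = [v for v in range(len(grads)) if v not in S]
        S.append(min(rest, key=lambda v: G(S + [v], dist)))
    sigma = np.array(S)[dist[:, S].argmin(axis=1)]
    gamma = {i: int((sigma == i).sum()) for i in S}
    return S, gamma

rng = np.random.default_rng(0)
grads = rng.standard_normal((10, 5))          # toy per-client model updates
S, gamma = select_clients(grads, K=3)
approx = sum(gamma[i] * grads[i] for i in S)  # weighted aggregate over S only
err = np.linalg.norm(grads.sum(axis=0) - approx)
print(S, gamma, round(err, 3))
```

By construction the aggregation error `err` never exceeds G(S), which is exactly the guarantee inequality (4) provides.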
The paper studies the problem of improving the efficiency of federated learning by selecting a subset of clients whose gradients are representative. This is formulated as a submodular maximization problem, and the authors analyze a greedy algorithm for it. The method is evaluated through experiments on synthetic as well as real datasets.
SP:16dd5b0ec5e070f0875e405655998c20c5d6429c
Diverse Client Selection for Federated Learning via Submodular Maximization
1 INTRODUCTION. Federated learning (FL) involves collaboratively training a machine learning model across a large number of clients while keeping client data local. Recent approaches to this problem repeatedly alternate between device-local (stochastic) gradient descent steps and server-side aggregation of the clients’ model updates (McMahan et al., 2017). In cross-device settings, a server and its model usually serve several thousands of devices, so the communication between clients and the server can be costly and slow, forming a major impediment to FL’s viability. One property of the collection of clients that can mitigate these problems, however, is often not exploited: redundancy. Specifically, many clients might provide similar, and thus redundant, gradient information for updating the server model, so transmitting all such updates to the server wastes communication and computational resources. How best to select a representative and more informative client set while adhering to practical constraints in federated learning is still an open challenge. Although several selection criteria have been investigated in recent literature, e.g., sampling clients with probabilities proportional to their local dataset sizes (McMahan et al., 2017), preferentially sampling clients with larger update norms (Chen et al., 2020), and selecting clients with higher losses (Balakrishnan et al., 2020; Cho et al., 2020), the redundancy and similarity of the clients’ updates sent to the server is neither represented nor exploited in these approaches. In particular, communicating multiple clients’ updates to the server may cause statistical and system inefficiency if too many of them are too similar to each other. The commonly studied modular score/probability for each individual client is incapable of capturing information as a property of a group of clients.
Ideally, a diverse set of clients would be selected, thereby increasing the impact of under-represented clients that contribute different information, and thereby improving fairness. This, in fact, is a topic of increasing interest (Mohri et al., 2019; Cho et al., 2020; Dennis et al., 2021). In this paper, we introduce diversity to client selection in FL, namely a strategy to measure how well a selected subset of clients can represent the whole when aggregated on the server. Specifically, in each communication round, we aim to find a subset whose aggregated model update approximates the aggregate update over all clients. By doing this, we aim to limit the variance that subset selection introduces in the model updates across rounds, which could otherwise slow the learning process. Inspired by the CRAIG method of coreset selection for efficient machine learning training (Mirzasoleiman et al., 2020), we derive an upper bound on the approximation error as a supermodular set function (in particular, the min-form of the facility location function (Cornuéjols et al., 1977)) evaluated on the selected subset. We can then apply submodular maximization (Fujishige, 2005; Iyer et al., 2013; Wei et al., 2014) to a complementary submodular function to (approximately) minimize the error upper bound. We employ greedy selection (Nemhauser et al., 1978) of a subset of clients according to the marginal gain of the submodular function to achieve a solution with a provable approximation guarantee (Conforti & Cornuejols, 1984). By integrating diverse client selection into the most commonly studied FL scheme, i.e., Federated Averaging (FedAvg) (McMahan et al., 2017), we propose DivFL, which applies global model aggregation over a selected subset of clients after multiple local steps on every client.
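The round structure just described (multiple local steps on each client, then server aggregation over a selected subset) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the least-squares loss, client data, and fixed selected subset are all made-up stand-ins.

```python
import numpy as np

def local_steps(w, data, lr=0.1, steps=5):
    """A few local gradient steps on a least-squares loss (toy stand-in for F_k)."""
    X, y = data
    for _ in range(steps):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def divfl_round(w, client_data, selected, p):
    """One communication round: selected clients run local steps,
    then the server aggregates their updates with normalized weights p_k."""
    updates = [local_steps(w, client_data[k]) - w for k in selected]
    a = np.array([p[k] for k in selected])
    a = a / a.sum()
    return w + sum(ai * ui for ai, ui in zip(a, updates))

rng = np.random.default_rng(0)
client_data = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(8)]
p = [1 / 8] * 8                 # uniform weights p_k = 1/N
w = np.zeros(3)
for _ in range(10):
    selected = [0, 3, 5]        # placeholder; Section 3 replaces this with greedy selection
    w = divfl_round(w, client_data, selected, p)
```

The only difference from plain FedAvg in this sketch is that `selected` would come from the diversity-driven selection rule rather than random sampling.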
We present a theoretical convergence analysis of DivFL and show its tolerance to heterogeneity of data distributions across clients and to large numbers of local steps. Note that our method differs from the CRAIG method in that selection is performed based on model updates (involving multiple epochs at the clients). In addition, our approach allows for partial device participation, where the server does not have access to all data at any communication round, as is standard in FL (McMahan et al., 2017). In experiments, we compare DivFL with other client selection approaches on both a synthetic dataset and FEMNIST, wherein our method excels on convergence, fairness, and learning efficiency. 2 BACKGROUND AND RELATED WORK. We consider a typical federated learning objective: $\min_w f(w) = \sum_{k=1}^N p_k F_k(w)$, where for each client $k \in [N]$, $p_k$ is a pre-defined weight (such that $\sum_{k=1}^N p_k = 1$) that can be set to $\frac{1}{N}$ or the fraction of training samples, and $F_k$ is the client-specific empirical loss. While there are various possible modeling approaches, we consider this canonical objective of fitting a single global model to the non-identically distributed data across all clients (McMahan et al., 2017). Client Selection in Federated Learning. Client sampling is a critical problem, particularly for cross-device settings where it is prohibitive to communicate with all devices. Two common (or default) strategies are (a) sampling the clients based on the number of local data points and uniformly averaging the model updates, and (b) sampling the clients uniformly at random and aggregating the model updates with weights proportional to the local sample counts (Li et al., 2020). There is also recent work proposing advanced sampling techniques to incorporate dynamic systems constraints, accelerate the convergence of federated optimization, or obtain a better model with higher accuracy (Nishio & Yonetani, 2019; Ribero & Vikalo, 2020; Cho et al.
, 2020; Lai et al., 2020). We investigate client selection through the lens of encouraging client diversity at each communication round, which largely remains unexplored in previous work. The closest client selection method to ours is based on clustering (e.g., selecting representative clients from separate clusters (Dennis et al., 2021)). We note that performing (private) clustering in federated settings is still an open problem, and our method can be viewed as a soft version of dynamic clustering at each round (discussed in the next paragraph). The benefits of gradient (or model) diversity have been demonstrated in other related contexts, such as scaling up mini-batch stochastic gradient descent (SGD) (Yin et al., 2018). Enforcing sample or gradient diversity during optimization also implicitly places more emphasis on the under-represented sub-population of clients, and can promote fairness defined as representation disparity (Hashimoto et al., 2018). Similar to previous work (e.g., Cho et al., 2020; Balakrishnan et al., 2020), we observe that our approach yields more fair solutions across the network in Section 5. Diverse Subset Selection via Submodularity. Modular scores have been widely studied for subset selection in machine learning and federated learning, e.g., a utility score for each sample or client, often measured by the loss. However, the diversity of a subset cannot be fully captured by such modular scores since there is no score interaction. (Following convention, we use the term “client” for the problem of client selection; throughout the paper, we use “devices” and “clients” interchangeably.) Diversity is often well modeled by a diminishing-returns property, i.e., the (marginal) gain an element brings to a subset diminishes as more elements are added to the subset.
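As a small numerical illustration of this diminishing-returns property (formalized as submodularity in Eq. (1) below), consider the facility location function $F(S) = \sum_k \max_{i \in S} \mathrm{sim}(k, i)$, the same family of functions the method later optimizes. The similarity matrix here is made-up; the check confirms that an element's marginal gain to a small set is at least its gain to a larger set.

```python
import numpy as np

def facility_location(S, sim):
    """F(S) = sum_k max_{i in S} sim[k, i]; a classic submodular diversity model."""
    if not S:
        return 0.0
    return float(np.max(sim[:, sorted(S)], axis=1).sum())

# Made-up pairwise similarities between 5 hypothetical clients.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
sim = X @ X.T
sim = sim - sim.min()  # shift so all similarities are non-negative

A, B, v = {0}, {0, 1, 2}, 3  # A ⊆ B and v ∉ B
gain_A = facility_location(A | {v}, sim) - facility_location(A, sim)
gain_B = facility_location(B | {v}, sim) - facility_location(B, sim)
assert gain_A >= gain_B - 1e-9  # diminishing returns: F(v|A) >= F(v|B)
```

The inequality holds for any similarity matrix, because per row the gain of $v$ is $\max(0, \mathrm{sim}(k, v) - \max_{i \in S} \mathrm{sim}(k, i))$, which can only shrink as $S$ grows.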
There exists a rich and expressive family of functions that are natural for measuring diversity and that all have the diminishing-returns property: given a finite ground set $V$ of size $n$, any subsets $A \subseteq B \subseteq V$, and any $v \notin B$, a set function $F : 2^V \to \mathbb{R}$ is submodular if $$F(A \cup \{v\}) - F(A) \ge F(B \cup \{v\}) - F(B). \qquad (1)$$ This implies that $v$ is no less valuable to the smaller set $A$ than to the larger set $B$. The marginal gain of $v$ conditioned on $A$ is denoted $F(v \mid A) \triangleq F(A \cup \{v\}) - F(A)$ and reflects the importance of $v$ to $A$. Submodular functions (Fujishige, 2005) have been widely used as diversity models (Lin & Bilmes, 2011; Batra et al., 2012; Prasad et al., 2014; Gillenwater et al., 2012; Bilmes & Bai, 2017). Maximizing a submodular function usually encourages the diversity and reduces the redundancy of a subset. This property has been utilized for data selection in active learning (Guillory & Bilmes, 2011), curriculum learning (Zhou & Bilmes, 2018), mini-batch partitioning (Wang et al., 2019), gradient approximation (Mirzasoleiman et al., 2020), etc. Although the number of possible subsets of size $k$ is $\binom{n}{k}$, enumerating them all to find the maximum is intractable. Thanks to submodularity, fast approximate algorithms (Nemhauser et al., 1978; Minoux, 1978; Mirzasoleiman et al., 2015) exist to find an approximately optimal subset with provable bounds (Nemhauser et al., 1978; Conforti & Cornuejols, 1984). Despite its success in data selection, submodularity has not been explored for client selection in federated learning. Encouraging diversity amongst local gradients (or model updates) of selected clients can effectively reduce redundant communication and promote fairness. Moreover, it raises several new challenges in the FL setting, e.g.
, (1) it is unclear which submodular function to optimize and in which space to measure the similarity/diversity between clients; (2) what convergence guarantees can be obtained under practical assumptions such as heterogeneity among clients; and (3) what are the effects of outdated client selection due to communication constraints. 3 DIVERSE CLIENT SELECTION. In this section, we introduce “federated averaging with diverse client selection” (or DivFL), a method that incorporates diverse client selection into the most widely studied FL scheme, federated averaging (FedAvg). We first derive a combinatorial objective for client selection via an approximation of the full communication from all clients, which naturally morphs into a facility location function in the gradient space that can be optimized by submodular maximization. We then present the standard greedy algorithm that optimizes this objective by selecting a diverse subset of clients at every communication round. 3.1 APPROXIMATION OF FULL COMMUNICATION. We aim to find a subset $S$ of clients whose aggregated gradient approximates the full aggregation over all $N$ clients $V = [N]$. To formulate this problem, we follow the logic in Mirzasoleiman et al. (2020). Given a subset $S$, we define a mapping $\sigma : V \to S$ such that the gradient information $\nabla F_k(v_k)$ from client $k$ is approximated by the gradient information from a selected client $\sigma(k) \in S$. For $i \in S$, let $C_i \triangleq \{k \in V \mid \sigma(k) = i\}$ be the set of clients approximated by client $i$, and $\gamma_i \triangleq |C_i|$. The full aggregated gradient can be written as $$\sum_{k \in [N]} \nabla F_k(v_k) = \sum_{k \in [N]} \big[ \nabla F_k(v_k) - \nabla F_{\sigma(k)}(v_{\sigma(k)}) \big] + \sum_{k \in S} \gamma_k \nabla F_k(v_k). \qquad (2)$$ Subtracting the second term from both sides, taking norms, and applying the triangle inequality, we obtain an upper bound on the approximation of the aggregated gradient by $S$, i.e.
$$\bigg\| \sum_{k \in [N]} \nabla F_k(v_k) - \sum_{k \in S} \gamma_k \nabla F_k(v_k) \bigg\| \le \sum_{k \in [N]} \big\| \nabla F_k(v_k) - \nabla F_{\sigma(k)}(v_{\sigma(k)}) \big\|. \qquad (3)$$ The above inequality holds for any feasible mapping $\sigma$ since the left-hand side does not depend on $\sigma$. So we can take the minimum of the right-hand side w.r.t. $\sigma(k)$, $\forall k \in [N]$, i.e., $$\bigg\| \sum_{k \in [N]} \nabla F_k(v_k) - \sum_{k \in S} \gamma_k \nabla F_k(v_k) \bigg\| \le \sum_{k \in [N]} \min_{i \in S} \big\| \nabla F_k(v_k) - \nabla F_i(v_i) \big\| \triangleq G(S). \qquad (4)$$ The right-hand side provides a relaxed objective $G(S)$ for minimizing the approximation error on the left-hand side. Minimizing $G(S)$ (or maximizing $\bar{G}$, defined as a constant minus $G(S)$) is equivalent to maximizing a well-known submodular function, the facility location function (Cornuéjols et al., 1977). To restrict the communication cost, we limit the number of selected clients to at most $K$, i.e., $|S| \le K$. This yields a submodular maximization problem under a cardinality constraint, which is NP-hard, but a solution within a $1 - e^{-1}$ factor of optimal can be achieved via the greedy algorithm (Nemhauser et al., 1978).
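The greedy selection just described can be sketched as follows: starting from the empty set, each step adds the client whose inclusion most decreases $G(S)$ from Eq. (4) (equivalently, gives the largest facility location marginal gain). The per-client gradients below are made-up stand-ins; this is an illustrative sketch of the standard greedy algorithm, not the authors' implementation, and it omits accelerations such as lazy evaluation (Minoux, 1978).

```python
import numpy as np

def G(S, grads):
    """Relaxed objective from Eq. (4): sum over clients of the distance
    to the nearest selected client's gradient."""
    return sum(min(np.linalg.norm(grads[k] - grads[i]) for i in S)
               for k in range(len(grads)))

def greedy_select(grads, K):
    """Greedily pick K clients minimizing G(S), i.e., maximizing the
    facility location marginal gain at each step."""
    S = set()
    while len(S) < K:
        best = min((k for k in range(len(grads)) if k not in S),
                   key=lambda k: G(S | {k}, grads))
        S.add(best)
    return S

rng = np.random.default_rng(1)
grads = rng.normal(size=(10, 4))  # toy per-client gradients
S = greedy_select(grads, K=3)
print(sorted(S))  # indices of the K diverse clients
```

In a full round, the server would then aggregate only the (weighted) updates of the clients in `S`, as in Eq. (2).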
This paper studies the selection of clients for federated learning under the assumption of full participation via submodular function maximization. Specifically, the goal is to obtain a subset of the clients such that the aggregation of the gradients of their loss functions approximates the full aggregated gradient. The authors show that the error defined by the difference between both aggregations (the full one and the one defined by the subset) can be upper bounded by a supermodular function. Therefore, minimizing this function (subject to a single cardinality constraint, which is motivated by communication costs) gives an upper bound for the problem of minimizing the error. The minimization of this particular supermodular function can be equivalently posed as the maximization of a submodular function, which has been previously studied in the literature (e.g. [1] below). Given this, the authors propose a variant of the federated averaging scheme, called *DivFL*, which uses the standard greedy algorithm to select $K$ clients to communicate the current value $w_t$. They show that under the “gradient approximation error” assumption, the error between $w_t$ and $w^*$ decreases as $O(1/t)+O(\epsilon)$, where $\epsilon$ is a parameter of the assumption. Finally, they test their scheme on synthetic and real datasets against other methods in the literature. [1] Ryan Gomes and Andreas Krause. Budgeted Nonparametric Learning from Data Streams, 2015.