| paper_name | text | summary | paper_id |
|---|---|---|---|
Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift | 1 INTRODUCTION. Time-series forecasting plays a significant role in various daily problems such as health care, economics, and traffic engineering (Che et al., 2018; Bauer et al., 2016; Zhang et al., 2017). Recently, time-series forecasting models have achieved competitive performance on these problems by overcoming several challenges, e.g., long-term forecasting (Zhou et al., 2021) and missing value imputation (Tang et al., 2020). However, these time-series forecasting models often severely suffer from statistical properties that change over time, widely known as the distribution shift problem; for example, the mean and variance of the time-series data distribution vary over time. Accordingly, the input sequences to the forecasting models have different underlying distributions from each other. Train and test data are usually divided into the data before and after a specific point in time, so the distributions of the two often hardly overlap, which is commonly known as a cause of model performance degradation. Thus, we assume that if we normalize every sequence in the data to have identical mean and variance and provide the transformed data to the model, the discrepancy in the distributions will be reduced, thereby improving model performance. However, if the input to the model is shifted and scaled, the prediction of the model can be shifted and scaled as much as the input; since this will differ from the ground-truth distribution, the accuracy of the model will decrease. We further hypothesize that if the shifted and scaled model output can be explicitly returned to its original distribution, we can alleviate the discrepancy in the data distribution during the forward pass of the model while also allowing its prediction to follow the original distribution.
Here, reconstructing the original distribution can be achieved by applying denormalization to the output of the model as the reverse of the normalization applied to the input data. Inspired by this, we propose a simple yet effective normalization method, reversible instance normalization (RevIN), which first normalizes the input sequences and then denormalizes the model output sequences to solve the time-series forecasting task under the distribution shift problem. RevIN is symmetrically structured to return the model output to the original distribution in the denormalization layer by scaling and shifting the model output as much as the input data was shifted and scaled in the normalization layer. Adding only a normalization layer and a denormalization layer on the input and output of the model, respectively, RevIN can be applied to any model without modifying the model architecture. We empirically find that RevIN effectively alleviates the distribution discrepancy within the data and significantly improves model performance when applied to various state-of-the-art time-series forecasting methods. To verify the effectiveness of RevIN, we conduct extensive quantitative evaluations with various baselines: Informer (Zhou et al., 2021), N-BEATS (Oreshkin et al., 2020), and SCINet (Liu et al., 2021). We also present an in-depth analysis of the behavior of the model, including verification of the assumptions behind reversible instance normalization (see Section 3.2). RevIN is a flexible, end-to-end trainable layer that can be applied to arbitrarily chosen layers, effectively suppressing non-stationary information (the mean and standard deviation of an instance) at one layer and restoring it at another layer in a virtually symmetric position.
Despite its remarkable performance, there has been no work on generalizing and expanding instance-wise normalization-and-denormalization as a flexibly applicable, trainable layer in the time-series domain. Recently, deep learning-based time-series forecasting approaches, such as Informer (Zhou et al., 2021) and N-BEATS (Oreshkin et al., 2020), have shown outstanding performance in time-series forecasting. However, they overlooked the importance of normalization, merely applying simple global preprocessing to the model input without further exploration and expecting their end-to-end deep learning model to fill that role. Despite the simplicity of our method, there have been no cases of using such techniques in modern deep-learning-based time-series forecasting approaches (Zhou et al., 2021; Liu et al., 2021; Oreshkin et al., 2020). In this sense, we highlight the importance of an appropriate normalization method in deep-learning-based time-series approaches; we propose a carefully designed, deep-learning-friendly module for time-series forecasting by combining the method with a learnable affine transformation, which has been widely adopted in recent deep-learning-based normalization work (Ulyanov et al., 2016). In summary, our contributions are as follows: • We propose a simple and effective normalization-and-denormalization method for time-series, called RevIN, symmetrically structured to remove and restore the statistical information of a time-series instance; it is a generally applicable layer for arbitrary deep neural networks with negligible cost and significantly improves performance by reducing the distribution discrepancy within the data. • By adding RevIN to the baselines, we achieve state-of-the-art performance on four large-scale real-world datasets by a significant margin.
• We conduct extensive evaluations of RevIN with quantitative analysis and qualitative visualizations to verify its effectiveness in addressing the distribution shift problem. 2 RELATED WORK. Time-series forecasting. Time-series forecasting methods are mainly categorized into three distinct approaches: (1) statistical methods, (2) hybrid methods, and (3) deep learning-based methods. As an example of statistical models, exponential smoothing forecasting (Holt, 2004; Winters, 1960) is a well-established benchmark for predicting future values. Statistical models have several advantages, e.g., interpretability and theoretical guarantees. To boost performance, recent work proposed a hybrid model (Smyl, 2020) that incorporates a deep learning module into a statistical model. It achieved better performance than the statistical methods in the M4 time-series forecasting competition. Deep learning-based methods basically follow the sequence-to-sequence framework to model time-series forecasting. Initially, deep learning-based models utilized variations of recurrent neural networks (RNNs). However, to overcome the limitation of the restricted receptive field, several studies utilize advanced techniques, such as dilation and attention modules. For instance, SCINet (Liu et al., 2021) and Informer (Zhou et al., 2021) modified sequence-to-sequence-based models to improve performance on long sequences. However, most previous deep learning-based models are hard to interpret compared to statistical models. Thus, inspired by statistical models, N-BEATS (Oreshkin et al., 2020) designed an interpretable layer for time-series forecasting by encouraging the model to explicitly learn trend, seasonality, and residual components. This model shows superior performance on the M4 competition datasets. Distribution shift.
Although there are various models for time-series forecasting, they often suffer from non-stationary time-series, where the data distribution changes over time. Domain adaptation (DA) (Tzeng et al., 2017; Ganin et al., 2016; Wang et al., 2018) and domain generalization (DG) (Wang et al., 2021; Li et al., 2018; Muandet et al., 2013) are common solutions for alleviating distribution shift. A domain adaptation algorithm attempts to reduce the distribution gap between source and target domains. A domain generalization algorithm relies only on the source domain and hopes to generalize to the target domain. Both DA and DG share the common objective of bridging the gap between source and target distributions. However, in non-stationary time-series, defining a domain is not straightforward, since the data distribution shifts over time. Recently, Du et al. (2021) proposed Adaptive RNNs (AdaRNNs) to handle distribution shift problems in non-stationary time-series data. AdaRNN first characterizes the distribution information by splitting the training data into periods; it then matches the distributions of the discovered periods to generalize the model. However, unlike AdaRNNs, which are computationally expensive, RevIN is simple yet effective and model-agnostic, since the method can easily be adopted in any deep-learning model. 3 PROPOSED METHOD. In this section, we propose reversible instance normalization (RevIN) for alleviating the distribution shift problem in time-series, which causes discrepancies among the input sequences of the training data, as well as a discrepancy between the train and test data distributions. Section 3.1 describes RevIN in detail, and Section 3.2 discusses how RevIN alleviates the distribution discrepancy in time-series through qualitative visualization. 3.1 REVERSIBLE INSTANCE NORMALIZATION.
We consider the multivariate time-series forecasting task in discrete time, with a sliding window of size $T_x$ for the input data. Here, $N$, $K$, $T_x$, and $T_y$ denote the number of sequences, the number of variables, the input sequence length, and the prediction length, respectively. Let $X = \{x^{(i)}\}_{i=1}^{N}$ and $Y = \{y^{(i)}\}_{i=1}^{N}$ denote the input data and the corresponding targets, respectively. In RevIN, the input sequence length $T_x$ and the prediction length $T_y$ can differ, since the observations are normalized and denormalized across the temporal dimension, as explained below. Given an input sequence $x^{(i)} \in \mathbb{R}^{K \times T_x}$, we aim to solve the time-series forecasting problem, which is to predict the subsequent values $y^{(i)} \in \mathbb{R}^{K \times T_y}$. Our proposed method, RevIN, symmetrically transforms the input data $x^{(i)}$ and the prediction output $\tilde{y}^{(i)}$ of the network, as illustrated in Fig. 2. First, we normalize the input data $x^{(i)}$ with the instance-specific mean and standard deviation, which is widely accepted as instance normalization (Ulyanov et al., 2016). The mean and standard deviation are computed for every instance $x^{(i)}_{k\cdot}$ of the input data (Fig. 2 (a-3)) as

$$\mathbb{E}_t\big[x^{(i)}_{kt}\big] = \frac{1}{T_x}\sum_{j=1}^{T_x} x^{(i)}_{kj} \quad \text{and} \quad \mathrm{Var}\big[x^{(i)}_{kt}\big] = \frac{1}{T_x}\sum_{j=1}^{T_x} \Big(x^{(i)}_{kj} - \mathbb{E}_t\big[x^{(i)}_{kt}\big]\Big)^2. \tag{1}$$

Using these statistics, we normalize the input data $x^{(i)}$ as (Fig. 2 (a-1))

$$\hat{x}^{(i)}_{kt} = \gamma_k \left( \frac{x^{(i)}_{kt} - \mathbb{E}_t\big[x^{(i)}_{kt}\big]}{\sqrt{\mathrm{Var}\big[x^{(i)}_{kt}\big] + \epsilon}} \right) + \beta_k, \tag{2}$$

where $\gamma, \beta \in \mathbb{R}^{K}$ are learnable affine parameter vectors. The normalized sequences have the same mean and variance, so the non-stationary information is reduced. As a result, the denormalization layer allows the model to accurately predict the local dynamics within the sequence while receiving inputs with consistent distributions in terms of mean and variance. The model then receives the transformed data $\hat{x}^{(i)}$ as input and forecasts the future values.
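The statistics and normalization of Equations 1-2 can be sketched in a few lines of NumPy. The function name, array shapes, and the `eps` constant below are our own illustrative choices, not the authors' released code; in a real model `gamma` and `beta` would be trainable parameters.

```python
import numpy as np

def revin_normalize(x, gamma, beta, eps=1e-5):
    """Instance normalization of Eq. 1-2.

    x: batch of input windows, shape (N, K, Tx).
    gamma, beta: affine parameter vectors of shape (K,).
    Returns the normalized batch and the per-instance statistics,
    which must be saved for the later denormalization step.
    """
    mean = x.mean(axis=-1, keepdims=True)    # E_t[x_kt], per instance and variable (Eq. 1)
    var = x.var(axis=-1, keepdims=True)      # Var[x_kt] (Eq. 1)
    x_hat = (x - mean) / np.sqrt(var + eps)  # standardize over the temporal axis
    x_hat = gamma[None, :, None] * x_hat + beta[None, :, None]  # affine transform (Eq. 2)
    return x_hat, mean, var
```

With `gamma = 1` and `beta = 0`, every window is mapped to (approximately) zero mean and unit variance regardless of its original level and scale, which is exactly the reduction of non-stationary information described above.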
However, the normalized input has different statistics from the original distribution, and by observing only the normalized input $\hat{x}^{(i)}$, it is difficult to capture the original distribution of the input $x^{(i)}$. Thus, to make this easier for the model, we explicitly restore the non-stationary properties removed from the input data to the model output by reversing the normalization step (Fig. 2 (a-3)). Such a denormalization step can also return the model output to the original time-series scale (Ogasawara et al., 2010). Accordingly, we denormalize the model output $\tilde{y}^{(i)}$ by applying the inverse of the normalization in Eq. 2 (Fig. 2 (a-2)) as

$$\hat{y}^{(i)}_{kt} = \sqrt{\mathrm{Var}\big[x^{(i)}_{kt}\big] + \epsilon} \cdot \left( \frac{\tilde{y}^{(i)}_{kt} - \beta_k}{\gamma_k} \right) + \mathbb{E}_t\big[x^{(i)}_{kt}\big], \tag{3}$$

where the same statistics used in the normalization step in Eq. (2) are used for the scaling and shifting. Now, $\hat{y}^{(i)}$ is the final prediction of the model instead of $\tilde{y}^{(i)}$. Simply added to the input and output of a network, RevIN can easily be applied to any network with negligible cost while effectively improving time-series forecasting performance. Additionally, each step of RevIN, normalization and denormalization, can be applied to the activations of any intermediate layers of the network. However, our method is most effective when applied to the layers of a symmetric encoder-decoder structure. In a typical time-series forecasting model, the boundary between the encoder and the decoder is often unclear. Therefore, we apply normalization and denormalization to the input and output of the forecasting network, which can be interpreted as an encoder-decoder structure generating subsequent values given the input data. Additional analysis on selecting the layers to which RevIN is applied is provided in Section 4.2.3.
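Correspondingly, the denormalization of Eq. 3 reuses the input-window statistics to undo Eq. 2 on the prediction. Below is a minimal NumPy sketch with our own naming, not the authors' code; the inverse affine transform is applied before rescaling, mirroring Eq. 3:

```python
import numpy as np

def revin_denormalize(y_tilde, mean, var, gamma, beta, eps=1e-5):
    """Denormalization of Eq. 3.

    y_tilde: network output, shape (N, K, Ty).
    mean, var: statistics of the *input* window (Eq. 1), shape (N, K, 1),
    so the prediction is returned to the input's original level and scale.
    """
    y = (y_tilde - beta[None, :, None]) / gamma[None, :, None]  # undo the affine transform
    return np.sqrt(var + eps) * y + mean                        # restore scale and level
```

Because Eq. 3 is the exact inverse of Eq. 2 with the same statistics, normalizing a window and denormalizing it again recovers the original values up to floating-point error; this is the "reversible" property the layer's name refers to.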
| The authors propose a "reversible instance normalization" as an input pre and post processing procedure to improve the forecasting of any given base model - targeted at addressing the distribution shift that is common in time series data - e.g., time series are typically non-stationary. This works by normalizing each time window input to a (deep learning) forecast model, applying the model on the normalized data, then unnormalizing the predictions to get the final predictions. The normalization is done by subtracting the mean and dividing by the std. dev. in a current (input instance) window, followed by scaling and shifting by learnable, shared cross-instance (global), scaling and shifting parameters per input feature / variable. The authors perform extensive experiments to show the proposed approach significantly improves the base metric score results for 3 recent, state-of-the-art forecasting algorithms across several datasets, and further that it significantly improves over other normalization approaches, and helps align distributions between train and test windows. | SP:549b4b2a88e8b13656ee6bd9425fe1d2be77b334 |
Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks | Spiking Neural Networks (SNNs) have gained great attention due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware. As the most effective method to obtain deep SNNs, ANN-SNN conversion has achieved performance comparable to ANNs on large-scale datasets. Despite this, it requires long time-steps to match the firing rates of SNNs to the activations of ANNs. As a result, the converted SNN suffers from severe performance degradation at short time-steps, which hampers the practical application of SNNs. In this paper, we theoretically analyze the ANN-SNN conversion error and derive the estimated activation function of SNNs. We then propose the quantization clip-floor-shift activation function to replace the ReLU activation function in source ANNs, which better approximates the activation function of SNNs. We prove that the expected conversion error between SNNs and ANNs is zero, enabling us to achieve high-accuracy and ultra-low-latency SNNs. We evaluate our method on the CIFAR-10/100 and ImageNet datasets and show that it outperforms state-of-the-art ANN-SNN conversion and directly trained SNNs in both accuracy and time-steps. To the best of our knowledge, this is the first time high-performance ANN-SNN conversion with ultra-low latency (4 time-steps) has been explored. 1 INTRODUCTION. Spiking neural networks (SNNs) are biologically plausible neural networks based on the dynamic characteristics of biological neurons (McCulloch & Pitts, 1943; Izhikevich, 2003). As the third generation of artificial neural networks (Maass, 1997), SNNs have attracted great attention due to their distinctive properties over deep analog neural networks (ANNs) (Roy et al., 2019). Each neuron transmits discrete spikes to convey information when its potential exceeds a threshold.
For most SNNs, the spiking neurons accumulate the current of the last layer as the output over T inference time-steps. The binarized activation has enabled dedicated neuromorphic computing hardware (Pei et al., 2019; DeBole et al., 2019; Davies et al., 2018). This kind of hardware has excellent advantages in temporal resolution and energy budget. Existing work has shown the potential for tremendous energy saving with considerably fast inference (Stöckl & Maass, 2021). In addition to efficiency advantages, the learning algorithms of SNNs have improved by leaps and bounds in recent years. The performance of SNNs trained by backpropagation through time and by ANN-SNN conversion techniques has gradually become comparable to ANNs on large-scale datasets (Fang et al., 2021; Rueckauer et al., 2017). Both techniques benefit from the setting of the SNN inference time. Setting longer time-steps in backpropagation can make the gradients of surrogate functions more reliable (Wu et al., 2018; Lee et al., 2016; Neftci et al., 2019; Zenke & Vogels, 2021). However, the price is enormous resource consumption during training. Existing platforms such as TensorFlow and PyTorch, based on CUDA, have limited optimization for SNN training. In contrast, ANN-SNN conversion usually depends on a longer inference time to achieve accuracy comparable to the original ANN (Sengupta et al., 2019), because it is based on the equivalence of the ReLU activation and the integrate-and-fire model's firing rate (Cao et al., 2015). Although a longer inference time can further reduce the conversion error, it also hampers the practical application of SNNs on neuromorphic chips. The dilemma of ANN-SNN conversion is that there exists a remaining potential in the conversion theory that is hard to eliminate in a few time-steps (Rueckauer et al., 2016).
Although many methods have been proposed to improve the conversion accuracy, such as weight normalization (Diehl et al., 2015), threshold rescaling (Sengupta et al., 2019), soft-reset (Han & Roy, 2020), and threshold shift (Deng & Gu, 2020), the tens to hundreds of time-steps required in the baseline works are still unbearable. To obtain high-performance SNNs with ultra-low latency (e.g., 4 time-steps), we list the critical errors in ANN-SNN conversion and provide solutions for each error. Our main contributions are summarized as follows: • We go deeper into the errors in ANN-SNN conversion and ascribe them to clipping error, quantization error, and unevenness error. We find that unevenness error, which is caused by changes in the timing of arriving spikes and has been neglected in previous works, can induce more or fewer spikes than expected. • We propose the quantization clip-floor-shift activation function to replace the ReLU activation function in source ANNs, which better approximates the activation function of SNNs. We prove that the expected conversion error between SNNs and ANNs is zero, indicating that we can achieve high-performance converted SNNs at ultra-low time-steps. • We evaluate our method on the CIFAR-10, CIFAR-100, and ImageNet datasets. Compared with both ANN-SNN conversion and backpropagation training methods, the proposed method exceeds state-of-the-art accuracy with fewer time-steps. For example, we reach 91.18% top-1 accuracy on CIFAR-10 with an unprecedented 2 time-steps. 2 PRELIMINARIES. In this section, we first briefly review the neuron models for SNNs and ANNs. Then we introduce the basic framework for ANN-SNN conversion. Neuron model for ANNs. For ANNs, the computations of analog neurons can be simplified as the combination of a linear transformation and a non-linear mapping:

$$a^l = h(W^l a^{l-1}), \quad l = 1, 2, \dots, M, \tag{1}$$

where the vector $a^l$ denotes the output of all neurons in the $l$-th layer, $W^l$ denotes the weight matrix between layer $l$ and layer $l-1$, and $h(\cdot)$ is the ReLU activation function. Neuron model for SNNs. Similar to previous works (Cao et al., 2015; Diehl et al., 2015; Han et al., 2020), we consider the Integrate-and-Fire (IF) model for SNNs. If the IF neurons in the $l$-th layer receive the input $x^{l-1}(t)$ from the last layer, the temporal potential of the IF neurons can be defined as:

$$m^l(t) = v^l(t-1) + W^l x^{l-1}(t), \tag{2}$$

where $m^l(t)$ and $v^l(t)$ represent the membrane potential before and after the trigger of a spike at time-step $t$, and $W^l$ denotes the weights in the $l$-th layer. As soon as any element $m^l_i(t)$ of $m^l(t)$ exceeds the firing threshold $\theta^l$, the neuron elicits a spike and updates the membrane potential $v^l_i(t)$. To avoid information loss, we use the "reset-by-subtraction" mechanism (Rueckauer et al., 2017; Han et al., 2020) instead of the "reset-to-zero" mechanism, which means the membrane potential $v^l_i(t)$ is reduced by the threshold value $\theta^l$ if the neuron fires. Based on the threshold-triggered firing mechanism and the "reset-by-subtraction" of the membrane potential after firing discussed above, we can write the update rule of the membrane potential as:

$$s^l(t) = H\big(m^l(t) - \boldsymbol{\theta}^l\big), \tag{3}$$
$$v^l(t) = m^l(t) - s^l(t)\,\theta^l. \tag{4}$$

Here $s^l(t)$ refers to the output spikes of all neurons in layer $l$ at time $t$, an element of which equals 1 if there is a spike and 0 otherwise. $H(\cdot)$ is the Heaviside step function, and $\boldsymbol{\theta}^l$ is the vector of the firing threshold $\theta^l$. Similar to Deng & Gu (2020), we suppose that a postsynaptic neuron in the $l$-th layer receives an unweighted postsynaptic potential $\theta^l$ if the presynaptic neuron in the $(l-1)$-th layer fires a spike, that is:

$$x^l(t) = s^l(t)\,\theta^l. \tag{5}$$

ANN-SNN conversion.
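Equations 2-5 translate directly into a per-time-step update. The sketch below is a hypothetical helper of our own (using `>=` for the Heaviside trigger and NumPy arrays for a single layer), illustrating the IF dynamics with reset-by-subtraction:

```python
import numpy as np

def if_step(v, x_prev, W, theta):
    """One time-step of an IF layer with reset-by-subtraction.

    v: membrane potential v^l(t-1); x_prev: input x^{l-1}(t); W: weights W^l.
    Returns v^l(t) and the unweighted output x^l(t) = s^l(t) * theta (Eq. 5).
    """
    m = v + W @ x_prev                  # Eq. 2: accumulate input current
    s = (m >= theta).astype(float)      # Eq. 3: spike when the threshold is reached
    v_new = m - s * theta               # Eq. 4: subtract the threshold, keep the remainder
    return v_new, s * theta             # Eq. 5: unweighted postsynaptic potential
```

As a sanity check, feeding a constant input of 0.75 with W = 1 and θ = 1 for T = 4 steps produces 3 spikes, so the firing rate (3/4 = 0.75) matches the analog input, illustrating the rate equivalence that ANN-SNN conversion relies on.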
The key idea of ANN-SNN conversion is to map the activation value of an analog neuron in the ANN to the firing rate (or average postsynaptic potential) of a spiking neuron in the SNN. Specifically, we can get the potential update equation by combining Equation 2 – Equation 4:

$$v^l(t) - v^l(t-1) = W^l x^{l-1}(t) - s^l(t)\,\theta^l. \tag{6}$$

Equation 6 describes the basic function of spiking neurons used in ANN-SNN conversion. By summing Equation 6 over time 1 to $T$ and dividing both sides by $T$, we have:

$$\frac{v^l(T) - v^l(0)}{T} = W^l\,\frac{\sum_{i=1}^{T} x^{l-1}(i)}{T} - \frac{\sum_{i=1}^{T} s^l(i)\,\theta^l}{T}. \tag{7}$$

If we use $\phi^{l-1}(T) = \frac{\sum_{i=1}^{T} x^{l-1}(i)}{T}$ to denote the average postsynaptic potential during the period from 0 to $T$ and substitute Equation 5 into Equation 7, then we get:

$$\phi^l(T) = W^l \phi^{l-1}(T) - \frac{v^l(T) - v^l(0)}{T}. \tag{8}$$

Equation 8 describes the relationship between the average postsynaptic potentials of neurons in adjacent layers. Note that $\phi^l(T) \geq 0$. If we set the initial potential $v^l(0)$ to zero and neglect the remaining term $\frac{v^l(T)}{T}$ when the number of simulation time-steps $T$ is long enough, the converted SNN has nearly the same activation function as the source ANN (Equation 1). However, a high $T$ causes long inference latency, which hampers the practical application of SNNs. Therefore, this paper aims to implement high-performance ANN-SNN conversion with extremely low latency. 3 CONVERSION ERROR ANALYSIS. In this section, we analyze the conversion error between the source ANN and the converted SNN in each layer in detail. In the following, we assume that both the ANN and the SNN receive the same input from layer $l-1$, that is, $a^{l-1} = \phi^{l-1}(T)$, and then analyze the error in layer $l$. For simplicity, we use $z^l = W^l \phi^{l-1}(T) = W^l a^{l-1}$ to denote the weighted input from layer $l-1$ for both the ANN and the SNN.
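Equation 8 is an exact identity for any input spike train, which can be checked numerically. The simulation below is our own construction (with an arbitrary weight and a random binary input train); it accumulates Eq. 6 over T steps and compares both sides of Eq. 8:

```python
import numpy as np

T, theta = 8, 1.0
W = np.array([[0.7]])
rng = np.random.default_rng(1)
x_prev = (rng.random((T, 1)) < 0.6).astype(float) * theta  # x^{l-1}(t): spikes from layer l-1

v = np.zeros(1)                      # initial potential v^l(0) = 0
outputs = []
for t in range(T):
    m = v + W @ x_prev[t]            # Eq. 2
    s = (m >= theta).astype(float)   # Eq. 3
    v = m - s * theta                # Eq. 4
    outputs.append(s * theta)        # Eq. 5

phi_prev = x_prev.mean(axis=0)       # phi^{l-1}(T): average input potential
phi = np.mean(outputs, axis=0)       # phi^l(T): left-hand side of Eq. 8
rhs = W @ phi_prev - v / T           # right-hand side of Eq. 8, since v^l(0) = 0
```

`phi` and `rhs` agree to machine precision: summing Eq. 6 telescopes the potentials, leaving only the residual term $v^l(T)/T$, which vanishes as T grows.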
The absolute conversion error is exactly the output of the converted SNN minus the output of the ANN:

$$\mathrm{Err}^l = \phi^l(T) - a^l = z^l - \frac{v^l(T) - v^l(0)}{T} - h(z^l), \tag{9}$$

where $h(z^l) = \mathrm{ReLU}(z^l)$. It can be seen from Equation 9 that the conversion error is nonzero if $v^l(T) - v^l(0) \neq 0$ and $z^l > 0$. In fact, the conversion error is caused by three factors. Clipping error. The output $\phi^l(T)$ of the SNN is in the range $[0, \theta^l]$, as $\phi^l(T) = \frac{\sum_{i=1}^{T} x^l(i)}{T} = \frac{\sum_{i=1}^{T} s^l(i)}{T}\theta^l$ (see Equation 5). However, the output $a^l$ of the ANN is in a much larger range $[0, a^l_{max}]$, where $a^l_{max}$ denotes the maximum value of $a^l$. As illustrated in Figure 1a, $a^l$ can be mapped to $\phi^l(T)$ by the following equation:

$$\phi^l(T) = \mathrm{clip}\left(\frac{\theta^l}{T}\left\lfloor \frac{a^l T}{\lambda^l} \right\rfloor,\ 0,\ \theta^l\right). \tag{10}$$

Here the clip function sets the upper bound $\theta^l$ and the lower bound 0, $\lfloor\cdot\rfloor$ denotes the floor function, and $\lambda^l$ represents the actual maximum value of the output $a^l$ that is mapped to the maximum value $\theta^l$ of $\phi^l(T)$. Considering that nearly 99.9% of the activations $a^l$ in the ANN lie in the range $[0, \frac{a^l_{max}}{3}]$, Rueckauer et al. (2016) suggested choosing $\lambda^l$ according to the 99.9th percentile of the activations. The activations between $\lambda^l$ and $a^l_{max}$ in the ANN are mapped to the same value $\theta^l$ in the SNN, which causes a conversion error called clipping error. Quantization error (flooring error). The output spikes $s^l(t)$ are discrete events; thus $\phi^l(T)$ is discrete with quantization resolution $\frac{\theta^l}{T}$ (see Equation 10). When mapping $a^l$ to $\phi^l(T)$, there exists unavoidable quantization error. For example, as illustrated in Figure 1a, the activations of the ANN in the range $[\frac{\lambda^l}{T}, \frac{2\lambda^l}{T})$ are mapped to the same value $\frac{\theta^l}{T}$ of the SNN. Unevenness error. Unevenness error is caused by the unevenness of the input spikes. If the timing of the arriving spikes changes, the output firing rates may change, which causes conversion error. There are two situations: more spikes than expected or fewer spikes than expected.
To see this, in the source ANN, suppose that two analog neurons in layer $l-1$ are connected to an analog neuron in layer $l$ with weights 2 and $-2$, and the output vector $a^{l-1}$ of the neurons in layer $l-1$ is $[0.6, 0.4]$. Besides, in the converted SNN, suppose that the two spiking neurons in layer $l-1$ fire 3 spikes and 2 spikes in 5 time-steps ($T = 5$), respectively, and the threshold $\theta^{l-1} = 1$. Thus, $\phi^{l-1}(T) = \frac{\sum_{i=1}^{T} s^{l-1}(i)}{T}\theta^{l-1} = [0.6, 0.4]$. Even though $\phi^{l-1}(T) = a^{l-1}$ and the weights are the same for the ANN and the SNN, $\phi^l(T)$ can differ from $a^l$ if the timing of the arriving spikes changes. According to Equation 1, the ANN output is $a^l = W^l a^{l-1} = [2, -2]\,[0.6, 0.4]^\top = 0.4$. As for the SNN, supposing that the threshold $\theta^l = 1$, there are three possible output firing rates, which are illustrated in Figure 1 (b)-(d). If the two presynaptic neurons fire at $t = 1, 3, 5$ and $t = 2, 4$ (red bars), respectively, with weights 2 and $-2$, the postsynaptic neuron will fire two spikes at $t = 1, 3$ (red bars), and $\phi^l(T) = \frac{\sum_{i=1}^{T} s^l(i)}{T}\theta^l = 0.4 = a^l$. However, if the presynaptic neurons fire at $t = 1, 2, 3$ and $t = 4, 5$, respectively, the postsynaptic neuron will fire four spikes at $t = 1, 2, 3, 4$, and $\phi^l(T) = 0.8 > a^l$. If the presynaptic neurons fire at $t = 3, 4, 5$ and $t = 1, 2$, respectively, the postsynaptic neuron will fire only one spike at $t = 5$, and $\phi^l(T) = 0.2 < a^l$. Note that the clipping error and quantization error have been discussed in Li et al. (2021). There exist interdependencies among the above three kinds of errors. Specifically, the unevenness error degenerates to the quantization error if $v^l(T)$ is in the range $[0, \theta^l]$. Assuming that the potential $v^l(T)$ falls into $[0, \theta^l]$ enables us to estimate the activation function of SNNs while ignoring the effect of the unevenness error.
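The three timing scenarios above can be reproduced with a short simulation. The helper below is our own illustration of the same two-input IF neuron (weights 2 and −2, θ = 1, reset by subtraction):

```python
def output_rate(times_a, times_b, T=5, theta=1.0, w=(2.0, -2.0)):
    """Firing rate phi^l(T) of the postsynaptic neuron when the two
    presynaptic neurons spike at the given time-steps (1-indexed)."""
    v, n_spikes = 0.0, 0
    for t in range(1, T + 1):
        # Eq. 2: accumulate the weighted inputs at this time-step
        v += w[0] * (t in times_a) + w[1] * (t in times_b)
        if v >= theta:          # Eq. 3: threshold crossing
            v -= theta          # Eq. 4: reset by subtraction
            n_spikes += 1
    return n_spikes * theta / T # average postsynaptic potential (Eq. 5)
```

All three input trains carry the same presynaptic rates (3/5 and 2/5), yet the output rate is 0.4, 0.8, or 0.2 depending only on the spike timing, which is exactly the unevenness error described above.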
Therefore, an estimate of the output value $\phi^l(T)$ of a converted SNN can be formulated with the combination of the clip function and the floor function, that is:

$$\phi^l(T) \approx \theta^l\, \mathrm{clip}\left(\frac{1}{T}\left\lfloor \frac{z^l T + v^l(0)}{\theta^l} \right\rfloor,\ 0,\ 1\right). \tag{11}$$

The detailed derivation is in the Appendix. With the help of this estimate of the SNN output, the estimated conversion error $\widetilde{\mathrm{Err}}^l$ can be derived from Equation 9:

$$\widetilde{\mathrm{Err}}^l = \theta^l\, \mathrm{clip}\left(\frac{1}{T}\left\lfloor \frac{z^l T + v^l(0)}{\theta^l} \right\rfloor,\ 0,\ 1\right) - h(z^l) \approx \mathrm{Err}^l. \tag{12}$$

| This paper proposes a quantization-neural-network-to-spiking-neural-network (QNN2SNN) conversion method. The authors first analyze the conversion error between the ANN and the SNN. Then they construct an ANN with quantized activations so that the error can be eliminated. Both theoretical and empirical results are presented in this paper. | SP:4f9c3ed91f44326e3bddf14223779d7f5fa07954 |
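The clip-floor estimate of Eq. 11 in the entry above is a one-liner. The sketch below uses our own naming and also shows the role of the initial potential $v^l(0)$, which the proposed clip-floor-shift activation exploits as a shift term:

```python
import numpy as np

def estimated_phi(z, T, theta, v0):
    """Estimated SNN output of Eq. 11:
    phi^l(T) ~= theta * clip(floor((z*T + v0)/theta) / T, 0, 1)."""
    return theta * np.clip(np.floor((z * T + v0) / theta) / T, 0.0, 1.0)
```

With $v^l(0) = \theta^l/2$, the staircase is shifted by half a quantization step, which centers the rounding; this is the intuition behind the shift in the proposed activation function.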
Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks | Spiking Neural Networks ( SNNs ) have gained great attraction due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware . As the most effective method to get deep SNNs , ANN-SNN conversion has achieved comparable performance as ANNs on large-scale datasets . Despite this , it requires long time-steps to match the firing rates of SNNs to the activation of ANNs . As a result , the converted SNN suffers severe performance degradation problems with short time-steps , which hamper the practical application of SNNs . In this paper , we theoretically analyze ANN-SNN conversion error and derive the estimated activation function of SNNs . Then we propose the quantization clipfloor-shift activation function to replace the ReLU activation function in source ANNs , which can better approximate the activation function of SNNs . We prove that the expected conversion error between SNNs and ANNs is zero , enabling us to achieve high-accuracy and ultra-low-latency SNNs . We evaluate our method on CIFAR-10/100 and ImageNet datasets , and show that it outperforms the stateof-the-art ANN-SNN and directly trained SNNs in both accuracy and time-steps . To the best of our knowledge , this is the first time to explore high-performance ANN-SNN conversion with ultra-low latency ( 4 time-steps ) . 1 INTRODUCTION . Spiking neural networks ( SNNs ) are biologically plausible neural networks based on the dynamic characteristic of biological neurons ( McCulloch & Pitts , 1943 ; Izhikevich , 2003 ) . As the third generation of artificial neural networks ( Maass , 1997 ) , SNNs have attracted great attention due to their distinctive properties over deep analog neural networks ( ANNs ) ( Roy et al. , 2019 ) . Each neuron transmits discrete spikes to convey information when exceeding a threshold . 
For most SNNs , the spiking neurons will accumulate the current of the last layer as the output within T inference time steps . The binarized activation has rendered dedicated hardware of neuromorphic computing ( Pei et al. , 2019 ; DeBole et al. , 2019 ; Davies et al. , 2018 ) . This kind of hardware has excellent advantages in temporal resolution and energy budget . Existing work has shown the potential of tremendous energy saving with considerably fast inference ( Stöckl & Maass , 2021 ) . In addition to efficiency advantages , the learning algorithm of SNNs has been improved by leaps and bounds in recent years . The performance of SNNs trained by backpropagation through time and ANN-SNN conversion techniques has gradually been comparable to ANNs on large-scale datasets ( Fang et al. , 2021 ; Rueckauer et al. , 2017 ) . Both techniques benefit from the setting of SNN inference time . Setting longer time-steps in backpropagation can make the gradient of surrogate functions more reliable ( Wu et al. , 2018 ; Lee et al. , 2016 ; Neftci et al. , 2019 ; Zenke & Vogels , 2021 ) . However , the price is enormous resource consumption during training . Existing platforms such as TensorFlow and PyTorch based on CUDA have limited optimization for SNN training . In contrast , ANN-SNN conversion usually depends on a longer inference time to get comparable accuracy as the original ANN ( Sengupta et al. , 2019 ) because it is based on the equivalence of ReLU activation and integrate-and-fire model ’ s firing rate ( Cao et al. , 2015 ) . Although longer inference time can further reduce the conversion error , it also hampers the practical application of SNNs on neuromorphic chips . The dilemma of ANN-SNN conversion is that there exists a remaining potential in the conversion theory , which is hard to be eliminated in a few time steps ( Rueckauer et al. , 2016 ) . 
Although many methods have been proposed to improve conversion accuracy, such as weight normalization (Diehl et al., 2015), threshold rescaling (Sengupta et al., 2019), soft-reset (Han & Roy, 2020) and threshold shift (Deng & Gu, 2020), the tens to hundreds of time-steps required by baseline works are still unbearable. To obtain high-performance SNNs with ultra-low latency (e.g., 4 time-steps), we list the critical errors in ANN-SNN conversion and provide solutions for each error. Our main contributions are summarized as follows: • We go deeper into the errors in ANN-SNN conversion and ascribe them to clipping error, quantization error, and unevenness error. We find that unevenness error, which is caused by changes in the timing of arriving spikes and has been neglected in previous works, can induce more or fewer spikes than expected. • We propose the quantization clip-floor-shift activation function to replace the ReLU activation function in source ANNs, which better approximates the activation function of SNNs. We prove that the expected conversion error between SNNs and ANNs is zero, indicating that we can achieve high-performance converted SNNs at ultra-low time-steps. • We evaluate our method on the CIFAR-10, CIFAR-100, and ImageNet datasets. Compared with both ANN-SNN conversion and backpropagation training methods, the proposed method exceeds state-of-the-art accuracy with fewer time-steps. For example, we reach 91.18% top-1 accuracy on CIFAR-10 with an unprecedented 2 time-steps. 2 PRELIMINARIES . In this section, we first briefly review the neuron models for SNNs and ANNs. Then we introduce the basic framework for ANN-SNN conversion. Neuron model for ANNs. For ANNs, the computation of an analog neuron can be simplified as the combination of a linear transformation and a non-linear mapping: a^l = h(W^l a^{l−1}), l = 1, 2, ...
, M, (1) where the vector a^l denotes the output of all neurons in the l-th layer, W^l denotes the weight matrix between layer l and layer l−1, and h(·) is the ReLU activation function. Neuron model for SNNs. Similar to previous works (Cao et al., 2015; Diehl et al., 2015; Han et al., 2020), we consider the Integrate-and-Fire (IF) model for SNNs. If the IF neurons in the l-th layer receive the input x^{l−1}(t) from the last layer, the temporal potential of the IF neurons can be defined as: m^l(t) = v^l(t−1) + W^l x^{l−1}(t), (2) where m^l(t) and v^l(t) represent the membrane potential before and after the trigger of a spike at time-step t, and W^l denotes the weight in the l-th layer. As soon as any element m^l_i(t) of m^l(t) exceeds the firing threshold θ^l, the neuron elicits a spike and updates the membrane potential v^l_i(t). To avoid information loss, we use the "reset-by-subtraction" mechanism (Rueckauer et al., 2017; Han et al., 2020) instead of the "reset-to-zero" mechanism, which means the membrane potential v^l_i(t) is subtracted by the threshold value θ^l if the neuron fires. Based on the threshold-triggered firing mechanism and the "reset-by-subtraction" of the membrane potential after firing discussed above, we can write the update rule of the membrane potential as: s^l(t) = H(m^l(t) − θ^l), (3) v^l(t) = m^l(t) − s^l(t) θ^l. (4) Here s^l(t) refers to the output spikes of all neurons in layer l at time t, whose elements equal 1 if there is a spike and 0 otherwise. H(·) is the Heaviside step function, and θ^l is the vector of the firing threshold θ^l. Similar to Deng & Gu (2020), we suppose that a postsynaptic neuron in the l-th layer receives an unweighted postsynaptic potential θ^l if the presynaptic neuron in the (l−1)-th layer fires a spike, that is: x^l(t) = s^l(t) θ^l. (5) ANN-SNN conversion.
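The IF dynamics of Equations 2-5 can be sketched for a single neuron as follows. This is a minimal NumPy illustration, not the authors' implementation; it assumes the input has already been weighted, i.e., each entry of `inputs` is W^l x^{l−1}(t) for one time-step.

```python
import numpy as np

def if_neuron(inputs, theta):
    """Simulate a single integrate-and-fire neuron with reset-by-subtraction.

    inputs: weighted input current W^l x^{l-1}(t) at each time-step.
    theta:  firing threshold θ^l.
    Returns the binary spike train s^l(t) of Equations 3-4.
    """
    v = 0.0                         # membrane potential v^l(0)
    spikes = []
    for x in inputs:
        m = v + x                   # Equation 2: potential before firing
        s = 1 if m >= theta else 0  # Equation 3: Heaviside step H(m - θ)
        v = m - s * theta           # Equation 4: reset by subtraction
        spikes.append(s)
    return np.array(spikes)

# Average input 0.5 over T = 5 steps yields firing rate 2/5; the residual
# membrane potential accounts for the difference (cf. Equation 8 below).
print(if_neuron([0.7, 0.1, 0.5, 0.9, 0.3], 1.0))
```

Note that because of the reset-by-subtraction, unspent potential carries over between time-steps rather than being discarded, which is what makes the firing rate track the time-averaged input.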
The key idea of ANN-SNN conversion is to map the activation value of an analog neuron in the ANN to the firing rate (or average postsynaptic potential) of a spiking neuron in the SNN. Specifically, we can get the potential update equation by combining Equations 2-4: v^l(t) − v^l(t−1) = W^l x^{l−1}(t) − s^l(t) θ^l. (6) Equation 6 describes the basic function of spiking neurons used in ANN-SNN conversion. By summing Equation 6 from time 1 to T and dividing both sides by T, we have: (v^l(T) − v^l(0)) / T = W^l (Σ_{i=1}^T x^{l−1}(i)) / T − (Σ_{i=1}^T s^l(i) θ^l) / T. (7) If we use φ^{l−1}(T) = (Σ_{i=1}^T x^{l−1}(i)) / T to denote the average postsynaptic potential during the period from 0 to T and substitute Equation 5 into Equation 7, we get: φ^l(T) = W^l φ^{l−1}(T) − (v^l(T) − v^l(0)) / T. (8) Equation 8 describes the relationship between the average postsynaptic potentials of neurons in adjacent layers. Note that φ^l(T) ⩾ 0. If we set the initial potential v^l(0) to zero and neglect the remaining term v^l(T)/T when the number of simulation time-steps T is long enough, the converted SNN has nearly the same activation function as the source ANN (Equation 1). However, a high T causes long inference latency, which hampers the practical application of SNNs. Therefore, this paper aims to implement high-performance ANN-SNN conversion with extremely low latency. 3 CONVERSION ERROR ANALYSIS . In this section, we analyze in detail the conversion error between the source ANN and the converted SNN in each layer. In the following, we assume that both the ANN and the SNN receive the same input from layer l−1, that is, a^{l−1} = φ^{l−1}(T), and then analyze the error in layer l. For simplicity, we use z^l = W^l φ^{l−1}(T) = W^l a^{l−1} to denote the weighted input from layer l−1 for both the ANN and the SNN.
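Equation 8 follows exactly from telescoping Equation 6 over t = 1..T, which a short simulation can confirm. The scalar weight, threshold, and input pattern below are invented for illustration; this is a sketch, not the authors' code.

```python
import numpy as np

T, theta, w = 5, 1.0, 0.7                      # invented scalar layer: one weight W^l
x_pre = np.array([1.0, 0.0, 1.0, 1.0, 0.0])    # unweighted inputs x^{l-1}(t) from layer l-1

v, spikes = 0.0, []
for t in range(T):
    m = v + w * x_pre[t]                       # Equation 2
    s = 1.0 if m >= theta else 0.0             # Equation 3
    v = m - s * theta                          # Equation 4 (reset by subtraction)
    spikes.append(s)

phi_pre = x_pre.sum() / T                      # φ^{l-1}(T): average input potential
phi_post = theta * sum(spikes) / T             # φ^l(T): average postsynaptic potential

# Equation 8: φ^l(T) = W^l φ^{l-1}(T) - (v^l(T) - v^l(0)) / T  holds exactly.
assert abs(phi_post - (w * phi_pre - (v - 0.0) / T)) < 1e-9
print(phi_pre, phi_post, v)
```

Because the identity is exact at any T, all of the conversion error at small T is hidden in the residual term (v^l(T) − v^l(0))/T, which is precisely what the error analysis in Section 3 dissects.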
The absolute conversion error is exactly the output of the converted SNN minus the output of the ANN: Err^l = φ^l(T) − a^l = z^l − (v^l(T) − v^l(0)) / T − h(z^l), (9) where h(z^l) = ReLU(z^l). It can be seen from Equation 9 that the conversion error is nonzero if v^l(T) − v^l(0) ≠ 0 and z^l > 0. In fact, the conversion error is caused by three factors. Clipping error. The output φ^l(T) of the SNN is in the range [0, θ^l], as φ^l(T) = (Σ_{i=1}^T x^l(i)) / T = (Σ_{i=1}^T s^l(i)) / T · θ^l (see Equation 5). However, the output a^l of the ANN is in a much larger range [0, a^l_max], where a^l_max denotes the maximum value of a^l. As illustrated in Figure 1a, a^l can be mapped to φ^l(T) by the following equation: φ^l(T) = clip( (θ^l / T) ⌊ a^l T / λ^l ⌋, 0, θ^l ). (10) Here the clip function sets the upper bound θ^l and the lower bound 0, ⌊·⌋ denotes the floor function, and λ^l represents the actual maximum value of the output a^l that is mapped to the maximum value θ^l of φ^l(T). Considering that nearly 99.9% of the activations a^l in the ANN are in the range [0, a^l_max/3], Rueckauer et al. (2016) suggested choosing λ^l according to the 99.9th percentile of the activations. The activations between λ^l and a^l_max in the ANN are mapped to the same value θ^l in the SNN, which causes a conversion error called the clipping error. Quantization error (flooring error). The output spikes s^l(t) are discrete events, thus φ^l(T) is discrete with quantization resolution θ^l/T (see Equation 10). When mapping a^l to φ^l(T), there is an unavoidable quantization error. For example, as illustrated in Figure 1a, the activations of the ANN in the range [λ^l/T, 2λ^l/T) are mapped to the same value θ^l/T of the SNN. Unevenness error. The unevenness error is caused by the unevenness of the input spikes. If the timing of the arriving spikes changes, the output firing rates may change, which causes a conversion error. There are two situations: more spikes than expected or fewer spikes than expected.
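The clip-floor mapping of Equation 10, and the two errors it induces, can be illustrated numerically. This is a sketch rather than the authors' code; θ^l = λ^l = 1 and T = 4 are arbitrary choices for the example.

```python
import numpy as np

def snn_output_estimate(a, theta, lam, T):
    """Map ANN activations a^l to the values reachable by φ^l(T) (Equation 10).

    theta: SNN threshold θ^l; lam: activation value λ^l that maps to θ^l;
    T: number of time-steps.
    """
    return np.clip(theta / T * np.floor(a * T / lam), 0.0, theta)

a = np.array([0.10, 0.30, 0.95, 1.70])
print(snn_output_estimate(a, theta=1.0, lam=1.0, T=4))
# 1.70 > λ saturates at θ (clipping error); any activations inside one bin of
# width λ/T collapse to the same output (quantization / flooring error).
```

Increasing T shrinks the bin width λ^l/T and so the quantization error, which is exactly why naive conversion needs long time-steps.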
To see this, in the source ANN, suppose that two analog neurons in layer l−1 are connected to an analog neuron in layer l with weights 2 and −2, and the output vector a^{l−1} of the neurons in layer l−1 is [0.6, 0.4]. Besides, in the converted SNN, suppose that the two spiking neurons in layer l−1 fire 3 spikes and 2 spikes in 5 time-steps (T = 5), respectively, and the threshold θ^{l−1} = 1. Thus, φ^{l−1}(T) = (Σ_{i=1}^T s^{l−1}(i)) / T · θ^{l−1} = [0.6, 0.4]. Even though φ^{l−1}(T) = a^{l−1} and the weights are the same for the ANN and the SNN, φ^l(T) can differ from a^l if the timing of the arriving spikes changes. According to Equation 1, the ANN output a^l = W^l a^{l−1} = [2, −2][0.6, 0.4]^T = 0.4. As for the SNN, supposing that the threshold θ^l = 1, there are three possible output firing rates, which are illustrated in Figure 1(b)-(d). If the two presynaptic neurons fire at t = 1, 3, 5 and t = 2, 4 (red bars), respectively, with weights 2 and −2, the postsynaptic neuron will fire two spikes at t = 1, 3 (red bars), and φ^l(T) = (Σ_{i=1}^T s^l(i)) / T · θ^l = 0.4 = a^l. However, if the presynaptic neurons fire at t = 1, 2, 3 and t = 4, 5, respectively, the postsynaptic neuron will fire four spikes at t = 1, 2, 3, 4, and φ^l(T) = 0.8 > a^l. If the presynaptic neurons fire at t = 3, 4, 5 and t = 1, 2, respectively, the postsynaptic neuron will fire only one spike at t = 5, and φ^l(T) = 0.2 < a^l. Note that the clipping error and the quantization error were proposed in Li et al. (2021). There exists interdependence between the above three kinds of errors. Specifically, the unevenness error degenerates to the quantization error if v^l(T) is in the range [0, θ^l]. Assuming that the potential v^l(T) falls into [0, θ^l] enables us to estimate the activation function of SNNs while ignoring the effect of the unevenness error.
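The worked example above (weights 2 and −2, θ^l = 1, T = 5) can be reproduced with a direct IF simulation; the three spike-timing scenarios yield φ^l(T) = 0.4, 0.8 and 0.2 even though the presynaptic rates are identical. This is a minimal NumPy sketch of that example, not code from the paper.

```python
import numpy as np

def postsyn_rate(s_pre, w, theta=1.0):
    """Average postsynaptic potential φ^l(T) of one IF neuron (reset-by-subtraction)
    driven by presynaptic spike trains s_pre (shape: n_pre x T) with weights w."""
    T = s_pre.shape[1]
    v, n_spikes = 0.0, 0
    for t in range(T):
        m = v + float(w @ s_pre[:, t]) * theta  # Equation 2 with x = s θ (Equation 5)
        s = 1 if m >= theta else 0
        v = m - s * theta
        n_spikes += s
    return theta * n_spikes / T

w = np.array([2.0, -2.0])
# Identical firing rates (3/5 and 2/5) but different spike timing:
even  = np.array([[1, 0, 1, 0, 1], [0, 1, 0, 1, 0]])  # t = 1,3,5 and t = 2,4
early = np.array([[1, 1, 1, 0, 0], [0, 0, 0, 1, 1]])  # t = 1,2,3 and t = 4,5
late  = np.array([[0, 0, 1, 1, 1], [1, 1, 0, 0, 0]])  # t = 3,4,5 and t = 1,2
print(postsyn_rate(even, w), postsyn_rate(early, w), postsyn_rate(late, w))
```

Only the first timing pattern matches the ANN output 0.4; clustering the excitatory spikes early overshoots to 0.8, and clustering them late undershoots to 0.2, which is the unevenness error in action.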
Therefore, an estimate of the output value φ^l(T) of a converted SNN can be formulated as a combination of the clip function and the floor function, that is: φ^l(T) ≈ θ^l clip( (1/T) ⌊ (z^l T + v^l(0)) / θ^l ⌋, 0, 1 ). (11) The detailed derivation is in the Appendix. With the help of this estimate of the SNN output, the estimated conversion error Ẽrr^l can be derived from Equation 9: Ẽrr^l = θ^l clip( (1/T) ⌊ (z^l T + v^l(0)) / θ^l ⌋, 0, 1 ) − h(z^l) ≈ Err^l. (12) | This paper proposes a quantization clip-floor-shift activation function to replace the ReLU activation function in ANNs, so as to better approximate the activation function of SNNs. The authors also prove that the expected error of ANN-SNN conversion can be reduced to 0 by using this method. The reported results on the CIFAR-10/100 and ImageNet datasets show that this work achieves state-of-the-art accuracy with fewer time-steps. | SP:4f9c3ed91f44326e3bddf14223779d7f5fa07954
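Equation 11 is a one-liner in code. The sketch below is an illustration with scalar inputs; the particular values (θ^l = 1, v^l(0) = θ^l/2, T = 5) are chosen only to show how a nonzero initial potential shifts the floor bins, which is the intuition behind the proposed clip-floor-shift activation.

```python
import numpy as np

def phi_estimate(z, theta, v0, T):
    """Estimated SNN output φ^l(T) from Equation 11, for weighted input z^l,
    threshold θ^l, initial potential v^l(0) and T time-steps."""
    return theta * np.clip(np.floor((z * T + v0) / theta) / T, 0.0, 1.0)

# With v0 = θ/2 the bins are shifted by half a quantization step, so z = 0.4
# is recovered exactly at T = 5; negative inputs clip to 0, like ReLU.
print(phi_estimate(0.4, theta=1.0, v0=0.5, T=5), phi_estimate(-0.3, 1.0, 0.5, 5))
```

Subtracting h(z^l) from this estimate gives the estimated conversion error of Equation 12 directly.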
Context-invariant, multi-variate time series representations | 1 INTRODUCTION . Many modern applications in the physical and virtual world are equipped with sensors that measure the state of the application, its sub-components, and the environment. Examples can be found in the Internet-of-Things (IoT) or in the DevOps/AIOps space, such as monitoring wind turbines or cloud-based applications (Lu et al., 2009; Lohrmann & Kao, 2011; Nedelkoski et al., 2019; Li et al., 2020; Krupitzer et al., 2020). Leveraging such time series to identify abnormal appliance behaviour is appealing (see Figures 1b, 1c for an overview of time series anomaly types), yet certain characteristics of these time series make them difficult to model with existing representation learning techniques. First, time series corpora are often highly multi-variate, as illustrated in Figure 1a. Each appliance has several sensors associated with it that measure both exogenous signals from the environment as well as endogenous signals from the internal state of the appliance. Examples of exogenous variables include user behaviour/traffic in a web-based application or physical measurements such as temperature in an IoT context. Conversely, endogenous variables could include the CPU usage or the vibrations of a machine. Increased (application-internal) network traffic is expected with higher user load, and higher ambient temperatures naturally result in an elevated temperature of a wind turbine. It is however important to understand when an application deviates from such expected patterns and exhibits unexpected behaviour relative to its environment. We call such effects contextual anomalies. In addition, a defining characteristic of such time series corpora is the sparsity and noisiness of their associated labels. A label could indicate time spans when an application was in an atypical state. This sparsity may be due to diverse reasons ranging from practical (e.g.
, data integration or cost of labelling) to fundamental (internal system failures may be exceedingly rare). Noisiness stems from the fact that failures are often subjective and human insight is needed, or alarms come from rule-based systems that are themselves overly noisy (Bogatinovski et al., 2021; Wu & Keogh, 2021). Hence, unsupervised or self-supervised representations of time series are needed that take the characteristics of such modern time series corpora into account. However, while the field of representation learning for sequential data has received considerable attention in domains such as natural language processing (NLP) (Lan et al., 2020; Mikolov et al., 2013; Fang et al., 2020; Jaiswal et al., 2021), similar work in the numerical time series domain remains rare. (As a footnote: each of these sensors may also measure multiple statistics of the signal, e.g., min, max, avg, std.) Specifically, rich, multi-purpose representations facilitating down-stream applications are common in NLP. Instead, feature extraction methods mostly dominate for time series (Lubba et al., 2019; Christ et al., 2017), with Franceschi et al. (2019) providing a notable exception based on temporal convolution networks (TCN). The main contribution of our paper is the extension of this TCN-based approach to cater for the aforementioned complications. We summarize our contributions as follows: 1. We propose context-invariant embeddings that allow to identify representations of time series that are invariant to the exogenous variables. We achieve this by adapting domain adversity (Ganin et al., 2016) to the time series domain. 2. We extend the TCN (Franceschi et al., 2019) model with (i) modern contrastive losses that we lift to the time series domain for the first time, (ii) data augmentation techniques, and (iii) considering time series simultaneously at multiple resolutions. 3.
We conduct an empirical study in which we show the effectiveness of our approach. We provide a semi-synthetic DevOps data set that we contribute to the research community, and consider an under-explored wind turbine dataset (apart from classical synthetic and physical datasets). Our quantitative results show that context-invariant embeddings indeed represent time series data such that contextual anomalies can be identified in a label-effective way. The qualitative results show that the embeddings allow us to navigate complex data sets in an explorative manner (e.g., considering nearest/farthest neighbours of interesting time series snippets). 2 REPRESENTATION LEARNING WITH CONTEXT INVARIANCE . To motivate our approach, consider a simplified system where under normal operation a single endogenous variable y depends instantaneously on a single exogenous signal x via a function y = g(x) + ε, where ε is a noise term. The ideal signal for detecting contextual anomalies (those that break this relation) is the residual δ := y − g(x). Under normal operation this signal carries no information about the exogenous variable, and thresholding the magnitude of this residual signal can detect anomalies. Our approach is motivated by this setup but extends it to more complex situations where (i) the exogenous and endogenous variables can be multivariate, (ii) the relation is stochastic and highly non-linear, and (iii) it may depend on the history of the system state. In this case we cannot simply compute a "residual signal", but instead we can try to learn unsupervised representations that are invariant to the exogenous variable. This means the embeddings should be independent of the driving signal as long as the endogenous variables respond in a typical manner, which captures some aspect of the residual signal of the toy example. In the following sections we formalize this intuition further and show that context invariance indeed helps detect such anomalies.
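The toy residual detector described above can be sketched end-to-end. This is a synthetic illustration with an invented dependency g and noise level, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal operation: endogenous y responds to exogenous x via y = g(x) + ε.
g = lambda x: 2.0 * x + 1.0                   # invented dependency
x = rng.uniform(0.0, 1.0, 200)
y = g(x) + rng.normal(0.0, 0.05, 200)

# Contextual anomaly: y takes a perfectly typical value, but for the wrong context.
y[120] = g(0.9)
x[120] = 0.1

residual = np.abs(y - g(x))                   # magnitude of δ = y - g(x)
detected = np.flatnonzero(residual > 4 * residual.std())
print(detected)
```

Thresholding y alone would miss index 120 since its value lies well inside the normal range; only the residual exposes the broken x-to-y relation, which is the property the context-invariant embeddings aim to generalize.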
Let Z = {z_i ∈ D^T}_{i=1}^N be a set of N equally spaced time series z_i of length at most T ∈ ℕ, where D is a domain of numerical values. We do not assume time series to be of equal length. (Indeed, time series corpora typically consist of equally-spaced time series, e.g., time series with measurements in 1-min, 5-min or 10-min intervals; this allows us to reason at multiple resolutions.) We assume a decomposition Z = X ∪ Y such that time series in X allow to predict time series in Y. We call X the set of environmental/exogenous time series and Y the set of internal/endogenous time series. We assume that it is possible to predict Y from X, but we make no assumption on causality. The goal of this paper is to map sub-series of Z into a high-dimensional embedding space R^M which preserves loosely defined properties such as: "normal" time series are close to each other and far away from "abnormal" states. This facilitates down-stream tasks such as time series classification or anomaly detection in a label-sparse setting. In particular, our definition of "normal" should be context-invariant, that is, only changes in the dependency structure between Y and X should result in large distances in the embedding space. For these tasks, a limited number of labels is available that allows to identify a time span of abnormal behaviour. Typically, the amount of labels is such that a supervised approach is prohibitive and even evaluation may be a challenge. Our representation learning approach consists of two main components: a predictor network g that ties the endogenous and exogenous time series together (either by predicting endogenous from exogenous variables, or vice versa), and an embedding network f which learns embeddings using contrastive losses. We can combine both in multiple ways. One extreme is a two-step approach where we learn embeddings on the residuals of the predictor network.
The other extreme is an end-to-end approach, where we learn embeddings such that the distance between (multivariate) time series is adjusted based on the exogenous variables in a domain-adversarial way. Figure 2 depicts the main components of our approach. For the predictor network, we mainly resort to standard models, so we focus our exposition on the main (novel) components in the following. 2.1 CONTRASTIVE, SELF-SUPERVISED LEARNING OF A MULTI-RESOLUTION TCN NETWORK . The basic building block of our embedding network architecture (Franceschi et al., 2019) consists of stacked temporal dilated causal convolutions (Bai et al., 2018). We have multiple such networks, one per time resolution. We illustrate in Figure 4 the effect of aggregation on the input time series. To obtain a consolidated representation, the concatenated representations are mapped through a neural network. These multi-resolution representations allow the network to encode patterns that are more pronounced in the higher resolutions of the time series in a way that is more effective than an encoder which only operates on a single resolution. We choose resolutions manually as the natural granularities corresponding to the base frequency of the time series we consider in our empirical studies (e.g., seconds, minutes, hours). Similar to Franceschi et al. (2019), we rely on a contrastive, self-supervised learning approach to train the embedding network. This crucially relies on a loss function and a careful selection of positive (a, b)_p ∈ (X, Y), reference (c, d)_c ∈ (X, Y) and negative (x, y)_n ∈ (X, Y) time series snippets on which to compute the loss terms (depicted in Figure 2). Similar time series should be close to each other and dissimilar time series distant from each other in the embedding space.
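The dilated causal convolutions underlying the TCN blocks can be illustrated in a few lines. This is a didactic NumPy sketch of a single channel and a single filter, not the actual architecture.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution: the output at t only sees x[t], x[t-d], x[t-2d], ...

    x: (T,) input series; w: (k,) filter taps.  Left-pads with zeros so the
    output keeps length T and never peeks into the future."""
    k, pad = len(w), (len(w) - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(8, dtype=float)
print(causal_dilated_conv(x, np.array([1.0, -1.0]), dilation=1))  # first difference
print(causal_dilated_conv(x, np.array([1.0, -1.0]), dilation=2))  # lag-2 difference
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is what lets one encoder per resolution cover long windows cheaply.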
For an embedding network f_W with parameters W, the loss function takes the following general form: min_W dist( f_W((c, d)_c), f_W((a, b)_p) ) + max_W dist( f_W((x, y)_n), f_W((a, b)_p) ). (1) We choose (a, b)_p, (c, d)_c such that (c, d)_c ⊇ (a, b)_p, while (x, y)_n is such that x ∩ c ≈ ∅, y ∩ d ≈ ∅ (e.g., time snippets at different times and from different elements in the batch). Note further that (x, y)_n is constructed to explicitly break the dependency structure in Z by choosing x to be the exogenous variables at a different time than y. During training, we further augment the examples randomly before feeding them to the TCN network. In particular, we apply random jittering, scaling, flipping direction, 2d rotation around a center, permuting random segments, magnitude or time warping (Um et al., 2017), and window slicing or warping (Guennec et al., 2016). Equation (1) is designed to support a variety of contrastive loss functions. Apart from the loss discussed in Franceschi et al. (2019), we rely on other, more recent losses, which we describe briefly in the following. These losses, in particular the latter two, aim to avoid collapse of the embeddings while taking practical considerations (e.g., the size of the batch) into account. SimCLR (Chen et al., 2020) takes two random windows z_A and z_B of a time series and encodes them to get two representations h_A and h_B. It then maximizes the similarity between these two representations from the same time series and the dissimilarity to the other representations in the batch, using the Normalized Temperature-Scaled Cross-Entropy loss (Sohn, 2016) as the distance in (1). Formally, for temperature parameter τ and mini-batch size N, for each pair we define the loss as: ℓ_{A,B} = − log [ exp(dot(h_A, h_B)/τ) / Σ_{n=1, n≠A}^{2N} exp(dot(h_A, h_n)/τ) ], (2) where dot is the dot-product between ℓ2-normalized vectors.
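Equation 2, the NT-Xent loss, can be sketched as follows. This is a minimal NumPy version for a batch of 2N embeddings arranged in adjacent pairs; it is an illustration, not the authors' code, and τ = 0.5 is an arbitrary default.

```python
import numpy as np

def nt_xent(H, tau=0.5):
    """NT-Xent loss of Equation 2 for 2N embeddings H, where rows 2i and 2i+1
    are the two encoded windows of time series i."""
    H = H / np.linalg.norm(H, axis=1, keepdims=True)   # ℓ2-normalize
    sim = H @ H.T / tau                                # dot(h_A, h_n) / τ
    np.fill_diagonal(sim, -np.inf)                     # exclude n = A from the sum
    log_soft = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    two_n = len(H)
    pos = [i + 1 if i % 2 == 0 else i - 1 for i in range(two_n)]
    return -np.mean(log_soft[np.arange(two_n), pos])   # -log of Equation 2, averaged

# Perfectly aligned pairs with orthogonal negatives give
# loss = -log(e^{1/τ} / (e^{1/τ} + 6)) for 2N = 8.
H = np.repeat(np.eye(8)[:4], 2, axis=0)
print(nt_xent(H))
```

The denominator runs over all 2N − 1 other rows of the batch, which is why the loss benefits from large batches, the limitation the next two losses address.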
This contrastive loss benefits from a larger N, which might not be feasible in the time series setting, and thus we also explore other losses. Barlow Twins (Zbontar et al., 2021) is a loss operating on two batches of embeddings of different windows from the same respective time series, Z_A and Z_B. It computes the cross-correlation matrix along the batch dimension and stores the result in a square matrix C. The final loss then encourages the diagonal terms of this matrix to be close to 1 and the off-diagonal terms to be close to 0. Formally, ℓ = Σ_i (1 − C_ii)² + λ Σ_i Σ_{j≠i} C_ij², (3) where λ > 0 trades off the contribution of the first and second term of the loss. Intuitively, this decorrelation reduces the redundancy between the output embeddings, forcing them to contain non-redundant information about the time series. Unlike SimCLR, MoCo (He et al., 2020) uses two encoders to obtain representations for the two random windows from the same time series. The representations from the second, momentum encoder are preserved in a queue. During training, positive pairs in (2) are constructed from the current batch, while negative pairs (the denominator of (2)) are constructed from the queue of embeddings. The second encoder is updated by linear interpolation of the two encoders, with a momentum-based moving average of their weights during training. By using a queue with a slowly changing encoder, this loss attempts to construct a large and consistent set of embeddings which better samples the continuous high-dimensional space, independently of the batch size. | The paper presents a model for learning representations of time series. It addresses, in particular, the context of multivariate time series having exogenous and endogenous dimensions, where the goal is to learn good representations independently of the context (exogenous dimensions). This work is an extension of existing work, the additions being the multivariate aspect and the context-invariant aspect.
The context-invariance problem is addressed via an adversarial learning mechanism. Experiments are conducted on synthetic and real datasets. | SP:44ab69acefac1dba66c4849ea3eea5e7d3285710
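For concreteness, the Barlow Twins objective of Equation 3 above can be sketched as follows. This is a minimal NumPy version; λ = 5e-3 is an illustrative default rather than a value taken from the paper, and the per-dimension batch standardization follows the usual formulation of the loss.

```python
import numpy as np

def barlow_twins(ZA, ZB, lam=5e-3):
    """Barlow Twins loss of Equation 3: cross-correlate two embedding batches
    along the batch dimension; drive diagonal terms of C to 1, off-diagonal to 0."""
    ZA = (ZA - ZA.mean(0)) / ZA.std(0)   # standardize each dimension over the batch
    ZB = (ZB - ZB.mean(0)) / ZB.std(0)
    C = ZA.T @ ZB / len(ZA)              # cross-correlation matrix
    on_diag = ((1.0 - np.diag(C)) ** 2).sum()
    off_diag = (C ** 2).sum() - (np.diag(C) ** 2).sum()
    return on_diag + lam * off_diag

# Identical, already-decorrelated views achieve zero loss; swapping the two
# embedding dimensions of one view is penalized on the diagonal.
ZA = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
print(barlow_twins(ZA, ZA), barlow_twins(ZA, ZA[:, ::-1]))
```

Because the loss is computed from a cross-correlation rather than pairwise similarities over the batch, it does not rely on a large pool of negatives, which is why it suits the small-batch time series setting discussed above.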
Context-invariant, multi-variate time series representations | 1 INTRODUCTION . Many modern applications in the physical and virtual world are equipped with sensors that measure the state of the application , its sub-components , and the environment . Examples can be found in the Internet-of-Things ( IoT ) or in the DevOps/AIOPs space like monitoring wind turbines or cloudbased applications ( Lu et al. , 2009 ; Lohrmann & Kao , 2011 ; Nedelkoski et al. , 2019 ; Li et al. , 2020 ; Krupitzer et al. , 2020 ) . Leveraging such time series to identify abnormal appliance behaviour is appealing ( see Figures 1b,1c for an overview of time series anomaly types ) , yet certain characteristics of these time series make them difficult to model with existing representation learning techniques . First , time series corpora are often highly multi-variate as illustrated in Figure 1a . Each appliance has several sensors1 associated with it that measure both exogenous signals from the environment as well as endogenous signals from the internal state of the appliance . Examples for exogenous variables include user behaviour/traffic in a web-based application or physical measurements such as temperature in an IoT context . Conversely , endogenous variables could include the CPU usage or the vibrations of a machine . Increased ( application-internal ) network traffic is expected with higher user load , and higher ambient temperatures naturally result in elevated temperature of a wind turbine . It is however important to understand when an application deviates from such expected patterns and exhibits unexpected behaviour relative to its environment . We call such effects contextual anomalies . In addition , a defining characteristic of such time series corpora is the sparsity and noisiness of their associated labels . A label could indicate time spans when an application was in an a-typical state . This sparsity may be due to diverse reasons ranging from practical ( e.g. 
, data integration or cost of labelling ) to fundamental ( internal system failures may be exceedingly rare ) . Noisiness stems from the fact that failures are often subjective and human insight is needed or alarms come from rulebased systems that are themselves overly noisy ( Bogatinovski et al. , 2021 ; Wu & Keogh , 2021 ) . Hence , unsupervised or self-supervised representations of time series are needed that take the characteristics of such modern time series corpora into account . However , while the field of representation learning for sequential data has received considerable attention in domains s.a. natural language processing ( NLP ) ( Lan et al. , 2020 ; Mikolov et al. , 2013 ; Fang et al. , 2020 ; Jaiswal et al. , 2021 ) , 1Each of these sensors may also measure multiple statistics of the signal ( e.g. , min , max , avg , std ) . similar work in the numerical time series domain remains rare . Specifically , rich , multi-purpose representations facilitating down-stream applications are common in NLP . Instead , feature extraction methods mostly dominate for time series ( Lubba et al. , 2019 ; Christ et al. , 2017 ) with Franceschi et al . ( 2019 ) providing a notable exception based on temporal convolution networks ( TCN ) . The main contribution of our paper is the extension of this TCN-based approach to cater for the aforementioned complications . We summarize our contributions as follows : 1 . We propose context-invariant embeddings that allow to identify representations of time series that are invariant to the exogenous variables . We achieve this by adapting domain adversity ( Ganin et al. , 2016 ) to the time series domain . 2 . We extend the TCN ( Franceschi et al. , 2019 ) model with ( i ) modern contrastive losses that we lift to the time series domain for the first time , ( ii ) data augmentation techniques , and ( iii ) considering time series simultaneously at multiple resolutions.2 3 . 
We conduct an empirical study in which we show the effectiveness of our approach . We provide a semi-synthetic DevOps data set that we contribute to the research community and consider an under-explored wind turbine dataset ( apart from classical synthetic and physical datasets ) . Our quantitative results show that context-invariant embeddings indeed represent time series data such that contextual anomalies can be identified in a label-effective way . The qualitative results show that the embeddings allow us to navigate complex data sets in an explorative manner ( e.g. , considering nearest/farthest neighbours of interesting time series snippets ) . 2 REPRESENTATION LEARNING WITH CONTEXT INVARIANCE . To motivate our approach , consider a simplified system where under normal operation a single endogenous variable y depends instantaneously on a single exogenous signal x via a function y = g ( x ) + ε , where ε is a noise term . The ideal signal to detect contextual anomalies ( those that break this relation ) is the residual δ : = y − g ( x ) . Under normal operation this signal carries no information about the exogenous variable and thresholding the magnitude of this residual signal can detect anomalies . Our approach is motivated by this setup but extends it to more complex situations where ( i ) exogenous & endogenous variables can be multivariate , ( ii ) the relation stochastic & highly non-linear , and ( iii ) may depend on the history of the system state . In this case we can not simply compute a ” residual signal ” , but instead we can try to learn unsupervised representations that are invariant to the exogenous variable . This means the embeddings should be independent of the driving signal as long as the endogenous variables respond in a typical manner , which captures some aspect of the residual signal of the toy example . In the following sections we formalize this intuition further and show that context invariance indeed helps detect such anomalies . 
Let Z = { zi ∈ D T } Ni=1 be a set of N equally spaced time series zi of length at most T ∈ N where D is a domain of numerical values . We do not assume time series to be of equal length . We assume 2Indeed , time series corpora typically consist of equally-spaced time series ( e.g. , time series with measurements in 1-min , 5-min or 10-min intervals ) . This allows us to reason at multiple resolutions . a decomposition Z = X ∪ Y such that time series in X allow to predict time series in Y . We call X the set of environmental/exogenous time series and Y the set of internal/endogenous time series . We assume that it is possible to predict Y from X , but we make no assumption on causality . The goal of this paper is to map sub-series of Z into a high-dimensional embedding space RM which preserves loosely defined properties such as : “ normal ” time series are close to each other and far away from “ abnormal ” states . This facilitates down-stream tasks such as time series classification or anomaly detection in a label sparse setting . In particular , our definition of “ normal ” should be context-invariant , that is , only changes in the dependency structure between Y and X should result in large distances in the embedding space . For these tasks , a limited number of labels is available that allows to identify a time span of abnormal behaviour . Typically , the amount of labels is such that a supervised approach is prohibitive and even evaluation may be a challenge . Our representation learning approach consists of two main components : a predictor network g that ties the endogenous and exogenous time series together ( either by predicting endogenous from exogenous variables , or vice-versa ) , and , an embedding network f which learns embeddings using contrastive losses . We can combine both in multiple ways . One extreme is a two-step approach where we learn embeddings on the residuals of the predictor network . 
The other extreme is an end-to-end approach , where we learn embeddings such that the distance between ( multivariate ) time series is adjusted based on the exogenous variables in a domain-adversarial way . Figure 2 depicts the main components of our approach . For the predictor network , we mainly resort to standard models , so we focus our exposition on the main ( novel ) components in the following . 2.1 CONTRASTIVE , SELF-SUPERVISED LEARNING OF A MULTI-RESOLUTION TCN NETWORK . The basic building block of our embedding network architecture ( Franceschi et al. , 2019 ) consists of stacked temporal dilated causal convolutions ( Bai et al. , 2018 ) . We have multiple such networks , one per time resolution . We illustrate in Figure 4 the effect of aggregation on the input time series . To obtain a consolidated representation , the concatenated representations are mapped through a neural network . These multi-resolution representations allow the network to encode patterns that are more pronounced in the higher resolutions of the time series in a way that is more effective than an encoder which only operates on a single resolution . We choose resolutions manually as the natural granularities corresponding to the base frequency of the time series we consider in our empirical studies ( e.g. , seconds , minutes , hours ) . Similar to ( Franceschi et al. , 2019 ) , we rely on a contrastive , self-supervised learning approach to train the embedding network . This crucially relies on a loss function and a careful selection of positive ( a , b )_p ∈ ( X , Y ) , reference ( c , d )_c ∈ ( X , Y ) and negative ( x , y )_n ∈ ( X , Y ) time series snippets on which to compute the loss terms ( depicted in Figure 2 ) . Similar time series should be close to each other and dissimilar time series distant from each other in the embedding space .
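A dilated causal convolution and the per-resolution aggregation described above can be sketched in a few lines of numpy; the kernel values, the mean-pooling downsampler and the two-layer depth below are hypothetical choices for illustration, not the paper's architecture.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """1-D dilated causal convolution: the output at step t sees only
    x[t], x[t-d], x[t-2d], ... . x: (T,) signal, w: (k,) kernel.
    The input is zero-padded on the left so the output length equals T."""
    k = len(w)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def multi_resolution_embed(x, resolutions=(1, 5)):
    """Downsample the series at each resolution (mean pooling), run a small
    dilated causal conv stack on each, and concatenate the last outputs."""
    w1, w2 = np.array([0.5, 0.5]), np.array([1.0, -1.0])
    feats = []
    for r in resolutions:
        xr = x[: len(x) // r * r].reshape(-1, r).mean(axis=1)  # downsample by r
        h = dilated_causal_conv(xr, w1, dilation=1)
        h = dilated_causal_conv(h, w2, dilation=2)
        feats.append(h[-1])  # last-step feature per resolution
    return np.array(feats)
```

By construction the output at step t depends only on inputs at or before t, which is the causality property required for processing streaming time series.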
For an embedding network f_W with parameters W , the loss function takes the following general form : min_W dist ( f_W ( ( c , d )_c ) , f_W ( ( a , b )_p ) ) + max_W dist ( f_W ( ( x , y )_n ) , f_W ( ( a , b )_p ) ) ( 1 ) We choose ( a , b )_p , ( c , d )_c such that ( c , d )_c ⊇ ( a , b )_p , while ( x , y )_n is such that x ∩ c ≈ ∅ , y ∩ d ≈ ∅ ( e.g. , time snippets at different times and from different elements in the batch ) . Note further that ( x , y )_n is constructed to explicitly break the dependency structure in Z by choosing x to be the exogenous variables at a different time than y . During training , we further augment the examples randomly before feeding them to the TCN network . In particular we apply random jittering , scaling , flipping direction , 2d rotation around a center , permuting random segments , magnitude or time warping ( Um et al. , 2017 ) and window slicing or warping ( Guennec et al. , 2016 ) . Equation ( 1 ) is designed to support a variety of contrastive loss functions . Apart from the loss discussed in Franceschi et al . ( 2019 ) , we rely on other , more recent losses which we describe briefly in the following . These losses , in particular the latter two , aim to avoid collapse of the embeddings while taking practical considerations ( e.g. , the size of the batch ) into account . SimCLR ( Chen et al. , 2020 ) takes two random windows z_A and z_B of a time series and encodes them to get two representations h_A and h_B . It then maximizes the similarity between these two representations from the same time series and the dissimilarity to the other representations in the batch using the Normalized Temperature-Scaled Cross-Entropy loss ( Sohn , 2016 ) as the distance in ( 1 ) . Formally , for a temperature parameter τ and mini-batch size N , for each pair we define the loss as : ℓ_{A,B} = − log [ exp ( dot ( h_A , h_B ) / τ ) / Σ^{2N}_{n=1 , n ≠ A} exp ( dot ( h_A , h_n ) / τ ) ] , ( 2 ) where dot is the dot-product between ℓ2-normalized vectors .
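The NT-Xent term of Eq. (2) can be sketched directly in numpy; the helper below is a hypothetical minimal version (not the authors' code), where H holds the 2N embeddings of a batch and (i, j) index a positive pair.

```python
import numpy as np

def nt_xent(H, i, j, tau=0.5):
    """NT-Xent term l_{i,j} of Eq. (2). H: (2N, M) embeddings from two
    augmented windows per series; (i, j) is a positive pair of row indices."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)  # l2-normalize rows
    sims = Hn @ Hn[i]                                  # dot products with anchor i
    logits = np.exp(sims / tau)
    denom = logits.sum() - logits[i]                   # exclude the n == i self-pair
    return -np.log(logits[j] / denom)
```

In the usual SimCLR convention, rows k and k+N would hold the two augmented views of series k, and the total loss averages this term over all positive pairs in both directions.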
This contrastive loss benefits from a larger N , which might not be feasible in the time series setting , and thus we also explore other losses . Barlow Twins ( Zbontar et al. , 2021 ) is a loss operating on two batches of different windows from the same respective time series embeddings , Z_A and Z_B . It computes the cross-correlation matrix along the batch dimension and stores the result in a square matrix C . The final loss then encourages the diagonal terms in this matrix to be close to 1 and the off-diagonal terms to be close to 0 . Formally , ℓ = Σ_i ( 1 − C_ii )^2 + λ Σ_i Σ_{j ≠ i} C_ij^2 , ( 3 ) where λ > 0 trades off the contribution of the first and second term in the loss . Intuitively , this decorrelation reduces the redundancy between the output embeddings , forcing them to contain non-redundant information about the time series . Unlike SimCLR , MoCo ( He et al. , 2020 ) uses two encoders to obtain representations for the two random windows from the same time series . The representations from the 2nd , momentum encoder are preserved in a queue . During training , positive pairs in ( 2 ) are constructed from the current batch while negative pairs ( the denominator of ( 2 ) ) are constructed from the queue of embeddings . The 2nd encoder is updated by linear interpolation of the two encoders , i.e. , a momentum-based moving average of their weights , during training . By using a queue with a slowly changing encoder , this loss attempts to construct large and consistent sets of embeddings which better sample the continuous high-dimensional space , independently of the batch size .
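The Barlow Twins loss of Eq. (3) is likewise straightforward to sketch; the per-feature standardization step and the default λ below are common conventions assumed for illustration, not taken from the paper.

```python
import numpy as np

def barlow_twins(ZA, ZB, lam=0.005):
    """Barlow Twins loss of Eq. (3). ZA, ZB: (batch, M) embeddings of two
    window sets; the cross-correlation matrix C is computed along the batch
    dimension on standardized features."""
    A = (ZA - ZA.mean(0)) / ZA.std(0)
    B = (ZB - ZB.mean(0)) / ZB.std(0)
    C = A.T @ B / len(ZA)                        # (M, M) cross-correlation
    on_diag = ((1.0 - np.diag(C)) ** 2).sum()    # diagonal pulled towards 1
    off_diag = (C ** 2).sum() - (np.diag(C) ** 2).sum()  # off-diagonal towards 0
    return on_diag + lam * off_diag
```

When the two batches are perfectly aligned views, the diagonal of C is exactly 1 and only the small redundancy penalty remains, which is what makes the loss robust to small batch sizes.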
Then the authors introduce 3 cases: normal behavior, anomalies where both endogenous & exogenous are impacted, and partial anomalies. The encoded representation of a contextualized signal should be close to its temporal neighbors. Then, the authors tackle prediction & classification tasks on M5 and electricity datasets. The experimental part is a little bit confusing, as the datasets used in the different experiments are not the same. Generally speaking, results are very difficult to interpret. | SP:44ab69acefac1dba66c4849ea3eea5e7d3285710
Spike-inspired rank coding for fast and accurate recurrent neural networks | 1 INTRODUCTION . Neuromorphic computing is the study and use of computational mechanisms of biological neural networks in mathematical models , software simulations , and hardware emulations , both as a tool for neuroscience , and as a possible path towards improved machine intelligence ( Indiveri , 2021 ) . In fact , much of the recent progress in machine learning ( ML ) is attributed to artificial neural networks ( ANNs ) , which share certain characteristics with biological neural networks . These biological analogies of ANN models include a connectionist graph-like structure ( Rosenblatt , 1958 ) , parallel computing over multiple synaptic weights and neurons ( Indiveri & Horiuchi , 2011 ) , and the non-von Neumann collocation of memory and processing at each synapse and neuron ( Sebastian et al. , 2020 ) . On the other hand , state-of-the-art ( SOTA ) ANNs for ML often miss several other neuromorphic mechanisms that are fundamental in biological neural systems . A characteristic example is that of “ spikes ” , i.e . the short stereotypical pulses that biological neurons emit to communicate ( Maass , 1997 ; Ponulak & Kasinski , 2011 ) . This is a principal neuromorphic feature , characterizing the brain and a category of bio-plausible models known as spiking neural networks ( SNNs ) , but not the most successful ANNs , suggesting unexploited potential and limited understanding of spike-based approaches . Despite the stereotypical , unmodulated shape of spikes , spiking neurons can encode continuous values , for example in their average firing rate ( Brette , 2015 ) , which is abstracted into the continuous activation of conventional artificial neurons ( Pfeiffer & Pfeil , 2018 ) . Perhaps more interestingly , spikes can also carry information in their specific timing , i.e . through temporal coding that modulates when spikes are fired ( Brette , 2015 ) .
Partly because individual spikes can encode information sparsely and rapidly , biological nervous systems and spiking neuromorphic systems can be extremely energy-efficient and fast in the processing of their input stimuli ( Qiao et al. , 2015 ; Davies et al. , 2018 ; Zhou et al. , 2021 ; Yin et al. , 2021 ) . This efficiency and speed are key motivations for much of the research on SNNs for potential applications in ML and inference . Moreover , owing to neuronal and synaptic dynamics , SNNs are more powerful computational models than certain ANNs in theory ( Maass , 1997 ; Moraitis et al. , 2020 ) . In practice , SNNs have recently surpassed conventional ANNs in accuracy in particular ML tasks , by virtue of short-term synaptic plasticity ( Leng et al. , 2018 ; Moraitis et al. , 2020 ) . Spike coding itself can also add computational power to SNNs by increasing the dimensionality of neuronal responses ( Izhikevich , 2006 ; Moraitis et al. , 2018 ) . However , in practical ML terms , firstly , spike coding poses difficulties to precise modelling and training that require ad hoc mitigation ( Mostafa , 2017 ; Pauli et al. , 2018 ; Pfeiffer & Pfeil , 2018 ; Woźniak et al. , 2020 ; Comşa et al. , 2021 ; Zhang et al. , 2021 ) . Secondly , SNNs are particularly difficult to analyse mathematically and rigorous ML-theoretic spiking models are scarce ( Nessler et al. , 2013 ; Moraitis et al. , 2020 ) . Thirdly , they require unconventional neuronal models , which do not fully benefit from the mature theoretical and practical toolbox of conventional ANNs ( Bellec et al. , 2018 ; Woźniak et al. , 2020 ; Comşa et al. , 2021 ) . As a result , the efficiency and speed benefits of temporal coding for ML have been hard to demonstrate with SOTA accuracy in real-world tasks . For instance , very recent literature on temporal coding with SNNs ( Comşa et al. , 2021 ; Zhang et al. , 2021 ; Zhou et al. , 2021 ; Mirsadeghi et al. , 2021 ; Göltz et al. 
, 2021 ) uses rank coding ( RC ) , i.e . the temporal scheme where the first output neuron to spike encodes the network 's inferred label ( Thorpe & Gautrais , 1998 ) , and applies it to speed up inference on tasks such as hand-written digit recognition ( MNIST ) ( Lecun et al. , 1998 ) . However , when applied ( Zhou et al. , 2021 ) to more difficult datasets such as Imagenet ( Deng et al. , 2009 ) , the accuracy is significantly lower than in non-spiking versions of the same network ( Szegedy et al. , 2015 ) . Moreover , in these demonstrations there are no directly measured benefits compared to non-spiking ANNs . Even though temporal coding is usually not described in terms comparable with non-spiking ANNs , there are in fact ANN architectures with certain analogies to RC , when viewed from a particular angle . Namely , in an SNN that receives a sequence example of several input steps , e.g . in a sequence-classification task , one implication of RC is that the computation for each sequence example can be halted after the first output neuron has spiked and produced an inferred label , even if several steps of the input sequence still remain unseen . Therefore , this is an adaptive type of processing that dynamically chooses the time and computation to be dedicated to each sequence . From this perspective , RC is related to ANN techniques such as Self-Delimiting Neural Networks ( Schmidhuber , 2012 ) , Adaptive Computation Time ( Graves , 2016 ) , and PonderNet ( Banino et al. , 2021 ) , which are also concerned with when computation should halt . However , these methods do not aim to adaptively reduce the processed length of an input sequence as RC does , but rather to adaptively add timesteps of processing to each step of the input sequence . A possibly more deeply related method is that of adaptive early-exit inference ( Laskaridis et al. , 2021 ) .
In this case , during the forward propagation of an input through the layers of a deep neural network at inference time , the activations of an early layer are forwarded directly to a classifier head , skipping the layers that remain higher in the hierarchy , if that early layer crosses a confidence threshold . In certain cases , early exit has been applied to sequence classification with recurrent neural networks ( RNNs ) , where the threshold-crossing dictates when in the sequence of an input example the network should output its inference , indeed saving in terms of time and computation ( Dennis et al. , 2018 ; Tan et al. , 2021 ) . This timing decision based on the first crossing of a threshold is similar to inference with RC . However , these early-exit models were not specifically trained to learn a rank code . It is conceivable that this mismatch between training and inference is suboptimal . In addition , this non-RC training of early-exit inference models does not apply the computational savings and speed benefits also to the training process . Taking together the limitations and benefits of SNNs and of other approaches , what is needed is a strategy that introduces aspects of RC into conventional , non-spiking ANNs , including during training . This would potentially reap the speed and efficiency benefits of this neuromorphic scheme , without abandoning the real-world usability and performance of ANNs . In addition , these insights into neural coding could feed back to neuroscience . Here we describe and demonstrate such a strategy for temporal coding in deep learning and inference with ANNs . The general concept is simple . Namely , even though ANN activations are continuous-valued and therefore neurons do not have to rely on time to encode information as in SNNs , ANNs too could time their outputs and , importantly , they could learn to do so .
To achieve this in an RNN such as long short-term memory ( LSTM ) ( Hochreiter & Schmidhuber , 1997 ) , we back-propagate through time ( BPTT ) from a strategic and early time step in each training example 's sequence . The time step is decided by the rank-one , i.e . first , output neuron activation to cross a threshold . As a result of this Rank Coding ( RC ) during training , the network learns not only to minimize the loss , but implicitly also to optimize its outputs ' timing , reducing time to insight as well as the computational demands of training and inference . In our experiments , we provide several demonstrations , with advantages compared to SNNs from the literature as well as compared to conventional ANNs , including in MNIST classification and speech recognition . Moreover , we show that our method could be applied to SNNs directly as well . 2 DEEP LEARNING OF RANK CODING .
Algorithm 1 RC-training
Given : a training set of N example sequences S_i = { x_i0 , ... , x_iT } and corresponding labels y_i ; an RNN R ; and a threshold θ .
1 : i = 0
2 : while i < N do ▷ iterate over training examples
3 :   i++ ; t = 0 ; T_i = duration of sequence S_i
4 :   t_sp,i ← T_i ▷ latest possible first “ spike ”
5 :   while t < t_sp,i do ▷ iterate through input sequence steps until first spike
6 :     t++
7 :     activation ŷ_it = R_t ( S_i )
8 :     for all output neurons ŷ_j do
9 :       if ŷ_jit ≥ θ then ▷ inferred label = j
10 :        t_sp,i ← t ▷ rank-one spike time
11 :        ŷ_i = ŷ_it ▷ activation at t_sp is considered as R 's overall output from S_i
12 :        BPTT ( R , Loss ( ŷ_it , y_i ) ) ▷ BPTT from t_sp only
13 :        break ▷ done with this sequence
Algorithm 1 shows the RC-training process . The network decides the rank-one spike timing t_sp of its outputs based on its output-layer activations and a threshold θ ( line 10 ) . The loss function can be a common one such as cross-entropy . There is no explicit parametrization on time , and only one instantaneous output ŷ_it is used in the loss ( line 12 ) .
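The control flow of Algorithm 1 can be sketched as follows; the toy recurrent readout with a tanh state update and softmax outputs is a hypothetical stand-in for the RNN R, and the BPTT step is reduced to computing the instantaneous loss at t_sp.

```python
import numpy as np

def rc_forward(seq, W, theta):
    """Run a toy recurrent readout over `seq` (T, d) and return the rank-one
    'spike' time t_sp and the output y_hat at that step. If no output crosses
    the threshold theta, t_sp defaults to the last step (line 4 of Alg. 1)."""
    h = np.zeros(W.shape[0])
    tsp, yhat = len(seq) - 1, None
    for t, x in enumerate(seq):
        h = np.tanh(0.5 * h + W @ x)          # toy recurrent state update
        y = np.exp(h) / np.exp(h).sum()       # softmax output activations
        if t == len(seq) - 1 or (y >= theta).any():
            tsp, yhat = t, y
            break                              # halt at the first threshold crossing
    return tsp, yhat

def rc_loss(yhat, label):
    """Cross-entropy on the single instantaneous output used in line 12."""
    return -np.log(yhat[label])
```

A real implementation would replace the toy readout with an LSTM and invoke the framework's autodiff from the recorded t_sp; the point here is the halting logic that selects the BPTT step.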
Therefore , RC-training does not explicitly optimize the timing t_sp of the network 's outputs . Thus , it is not obvious that learning of the timing aspect can emerge . Nevertheless , timing is indeed optimized , albeit implicitly , as seen in what follows . With random initialization , the activations are nearly uniformly distributed across output neurons , and are smaller than the threshold , throughout the sequence . Without an earlier crossing of the threshold , the algorithm applies BPTT from the last step of the sequence ( line 4 : t_sp = T ) . As training advances , minimizing the error between the outputs and the labels minimizes the entropy of the output distribution , i.e . causes the activations at the end of each sequence example to be concentrated around one of the output neurons , such that the maximum activation ŷ^max_tsp is maximized . Through BPTT from that last time step , credit is assigned to earlier time steps as well . Progressively through training , this causes outputs to cross the threshold earlier and earlier , under the condition that relevant , credit-assigned input signal does exist earlier . This conditional acceleration causes the optimization of output timing . This is also shown mathematically in Appendix A . Importantly , the insight that it is through the minimization of entropy ( Eq . 4 ) that timing is minimized gives us access to a mechanism for balancing between minimizing the loss and minimizing the timing . Specifically , we introduce to the loss function a regularization term , weighted by a hyperparameter β , such that minimization of the loss rewards the entropy H of the outputs ŷ_tsp : L_RC ( ŷ_tsp , y ) = L ( ŷ_tsp , y ) − β H ( ŷ_tsp ) . ( 1 ) RC-inference after RC-training is similar to Algorithm 1 , but the backward pass ( line 12 ) is not applied .
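A minimal sketch of the regularized loss (1), assuming cross-entropy for L and Shannon entropy for H:

```python
import numpy as np

def rc_regularized_loss(yhat, label, beta):
    """Eq. (1): cross-entropy at t_sp minus beta times the output entropy.
    A larger beta > 0 rewards higher-entropy (more cautious, later-spiking)
    outputs, trading accuracy against speed."""
    ce = -np.log(yhat[label])              # L(y_hat, y): cross-entropy
    H = -(yhat * np.log(yhat)).sum()       # Shannon entropy of the outputs
    return ce - beta * H
```

With a uniform output distribution over C classes, both terms equal log C, so the loss vanishes at beta = 1; this is one way to see how beta balances the two objectives.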
It should be noted that the inference stage on its own , performed in this manner , where a threshold decides the timing of the output , is a version of what has been called early exit or early inference . In our implementation , the expectation is that the model learns to encode information in the rank order of its outputs ' timing , because that timing is integrated into the learning process . In this sense , at inference , the model does not merely exit early , but it does so through an underlying learned rank code ( RC ) . Our experimental demonstrations confirmed that the loss is minimized through RC-training , although BPTT 's application time-step varies between training examples , and that timing is also minimized down to an optimal floor , as the conditional arguments above ( and Eq . 8 in the Appendix ) predict . | The authors introduce a new way to train RNNs using rank order coding (ROC). With ROC the label is given by the first readout unit to reach a threshold. As soon as this happens, the processing is stopped, and BPTT is used from that particular time step, using the predictions at that particular time step and the ground truth. This will encourage the neuron with the right label to be as active as possible at that particular time step, and thus its threshold will tend to be reached earlier in the future. This is desirable, as the latency of the decision will decrease. Furthermore, the speed-accuracy trade-off is tunable by varying the threshold. The authors validate their idea using LSTMs on two toy problems, and then on MNIST and on the Google Speech Command dataset. | SP:7fa47de279e72f0782efd67722919321badcb022
Spike-inspired rank coding for fast and accurate recurrent neural networks | 1 INTRODUCTION . Neuromorphic computing is the study and use of computational mechanisms of biological neural networks in mathematical models , software simulations , and hardware emulations , both as a tool for neuroscience , and as a possible path towards improved machine intelligence ( Indiveri , 2021 ) . In fact , much of the recent progress in machine learning ( ML ) is attributed to artificial neural networks ( ANNs ) , which share certain characteristics with biological neural networks . These biological analogies of ANN models include a connectionist graph-like structure ( Rosenblatt , 1958 ) , parallel computing over multiple synaptic weights and neurons ( Indiveri & Horiuchi , 2011 ) , and the non-von Neumann collocation of memory and processing at each synapse and neuron ( Sebastian et al. , 2020 ) . On the other hand , state-of-the-art ( SOTA ) ANNs for ML often miss several other neuromorphic mechanisms that are fundamental in biological neural systems . A characteristic example is that of “ spikes ” , i.e . the short stereotypical pulses that biological neurons emit to communicate ( Maass , 1997 ; Ponulak & Kasinski , 2011 ) . This is a principal neuromorphic feature , characterizing the brain and a category of bio-plausible models known as spiking neural networks ( SNNs ) , but not the most successful ANNs , suggesting unexploited potential and limited understanding of spike-based approaches . Despite the stereotypical , unmodulated shape of spikes , spiking neurons can encode continuous values , for example in their average firing rate ( Brette , 2015 ) , which is abstracted into the continuous activation of conventional artificial neurons ( Pfeiffer & Pfeil , 2018 ) . Perhaps more interestingly , spikes can also carry information in their specific timing , i.e . through temporal coding that modulates when spikes are fired ( Brette , 2015 ) .
Partly because individual spikes can encode information sparsely and rapidly , biological nervous systems and spiking neuromorphic systems can be extremely energy-efficient and fast in the processing of their input stimuli ( Qiao et al. , 2015 ; Davies et al. , 2018 ; Zhou et al. , 2021 ; Yin et al. , 2021 ) . This efficiency and speed are key motivations for much of the research on SNNs for potential applications in ML and inference . Moreover , owing to neuronal and synaptic dynamics , SNNs are more powerful computational models than certain ANNs in theory ( Maass , 1997 ; Moraitis et al. , 2020 ) . In practice , SNNs have recently surpassed conventional ANNs in accuracy in particular ML tasks , by virtue of short-term synaptic plasticity ( Leng et al. , 2018 ; Moraitis et al. , 2020 ) . Spike coding itself can also add computational power to SNNs by increasing the dimensionality of neuronal responses ( Izhikevich , 2006 ; Moraitis et al. , 2018 ) . However , in practical ML terms , firstly , spike coding poses difficulties to precise modelling and training that require ad hoc mitigation ( Mostafa , 2017 ; Pauli et al. , 2018 ; Pfeiffer & Pfeil , 2018 ; Woźniak et al. , 2020 ; Comşa et al. , 2021 ; Zhang et al. , 2021 ) . Secondly , SNNs are particularly difficult to analyse mathematically and rigorous ML-theoretic spiking models are scarce ( Nessler et al. , 2013 ; Moraitis et al. , 2020 ) . Thirdly , they require unconventional neuronal models , which do not fully benefit from the mature theoretical and practical toolbox of conventional ANNs ( Bellec et al. , 2018 ; Woźniak et al. , 2020 ; Comşa et al. , 2021 ) . As a result , the efficiency and speed benefits of temporal coding for ML have been hard to demonstrate with SOTA accuracy in real-world tasks . For instance , very recent literature on temporal coding with SNNs ( Comşa et al. , 2021 ; Zhang et al. , 2021 ; Zhou et al. , 2021 ; Mirsadeghi et al. , 2021 ; Göltz et al. 
, 2021 ) uses rank coding ( RC ) , i.e . the temporal scheme where the first output neuron to spike encodes the network 's inferred label ( Thorpe & Gautrais , 1998 ) , and applies it to speed up inference on tasks such as hand-written digit recognition ( MNIST ) ( Lecun et al. , 1998 ) . However , when applied ( Zhou et al. , 2021 ) to more difficult datasets such as Imagenet ( Deng et al. , 2009 ) , the accuracy is significantly lower than in non-spiking versions of the same network ( Szegedy et al. , 2015 ) . Moreover , in these demonstrations there are no directly measured benefits compared to non-spiking ANNs . Even though temporal coding is usually not described in terms comparable with non-spiking ANNs , there are in fact ANN architectures with certain analogies to RC , when viewed from a particular angle . Namely , in an SNN that receives a sequence example of several input steps , e.g . in a sequence-classification task , one implication of RC is that the computation for each sequence example can be halted after the first output neuron has spiked and produced an inferred label , even if several steps of the input sequence still remain unseen . Therefore , this is an adaptive type of processing that dynamically chooses the time and computation to be dedicated to each sequence . From this perspective , RC is related to ANN techniques such as Self-Delimiting Neural Networks ( Schmidhuber , 2012 ) , Adaptive Computation Time ( Graves , 2016 ) , and PonderNet ( Banino et al. , 2021 ) , which are also concerned with when computation should halt . However , these methods do not aim to adaptively reduce the processed length of an input sequence as RC does , but rather to adaptively add timesteps of processing to each step of the input sequence . A possibly more deeply related method is that of adaptive early-exit inference ( Laskaridis et al. , 2021 ) .
In this case , during the forward propagation of an input through the layers of a deep neural network at inference time , the activations of an early layer are forwarded directly to a classifier head , skipping the layers that remain higher in the hierarchy , if that early layer crosses a confidence threshold . In certain cases , early exit has been applied to sequence classification with recurrent neural networks ( RNNs ) , where the threshold-crossing dictates when in the sequence of an input example the network should output its inference , indeed saving in terms of time and computation ( Dennis et al. , 2018 ; Tan et al. , 2021 ) . This timing decision based on the first crossing of a threshold is similar to inference with RC . However , these early-exit models were not specifically trained to learn a rank code . It is conceivable that this mismatch between training and inference is suboptimal . In addition , this non-RC training of early-exit inference models does not apply the computational savings and speed benefits also to the training process . Taking together the limitations and benefits of SNNs and of other approaches , what is needed is a strategy that introduces aspects of RC into conventional , non-spiking ANNs , including during training . This would potentially reap the speed and efficiency benefits of this neuromorphic scheme , without abandoning the real-world usability and performance of ANNs . In addition , these insights into neural coding could feed back to neuroscience . Here we describe and demonstrate such a strategy for temporal coding in deep learning and inference with ANNs . The general concept is simple . Namely , even though ANN activations are continuous-valued and therefore neurons do not have to rely on time to encode information as in SNNs , ANNs too could time their outputs and , importantly , they could learn to do so .
To achieve this in an RNN such as long short-term memory ( LSTM ) ( Hochreiter & Schmidhuber , 1997 ) , we back-propagate through time ( BPTT ) from a strategic and early time step in each training example 's sequence . The time step is decided by the rank-one , i.e . first , output neuron activation to cross a threshold . As a result of this Rank Coding ( RC ) during training , the network learns not only to minimize the loss , but implicitly also to optimize its outputs ' timing , reducing time to insight as well as the computational demands of training and inference . In our experiments , we provide several demonstrations , with advantages compared to SNNs from the literature as well as compared to conventional ANNs , including in MNIST classification and speech recognition . Moreover , we show that our method could be applied to SNNs directly as well . 2 DEEP LEARNING OF RANK CODING .
Algorithm 1 RC-training
Given : a training set of N example sequences S_i = { x_i0 , ... , x_iT } and corresponding labels y_i ; an RNN R ; and a threshold θ .
1 : i = 0
2 : while i < N do ▷ iterate over training examples
3 :   i++ ; t = 0 ; T_i = duration of sequence S_i
4 :   t_sp,i ← T_i ▷ latest possible first “ spike ”
5 :   while t < t_sp,i do ▷ iterate through input sequence steps until first spike
6 :     t++
7 :     activation ŷ_it = R_t ( S_i )
8 :     for all output neurons ŷ_j do
9 :       if ŷ_jit ≥ θ then ▷ inferred label = j
10 :        t_sp,i ← t ▷ rank-one spike time
11 :        ŷ_i = ŷ_it ▷ activation at t_sp is considered as R 's overall output from S_i
12 :        BPTT ( R , Loss ( ŷ_it , y_i ) ) ▷ BPTT from t_sp only
13 :        break ▷ done with this sequence
Algorithm 1 shows the RC-training process . The network decides the rank-one spike timing t_sp of its outputs based on its output-layer activations and a threshold θ ( line 10 ) . The loss function can be a common one such as cross-entropy . There is no explicit parametrization on time , and only one instantaneous output ŷ_it is used in the loss ( line 12 ) .
Therefore , RC-training does not explicitly optimize the timing t_sp of the network 's outputs . Thus , it is not obvious that learning of the timing aspect can emerge . Nevertheless , timing is indeed optimized , albeit implicitly , as seen in what follows . With random initialization , the activations are nearly uniformly distributed across output neurons , and are smaller than the threshold , throughout the sequence . Without an earlier crossing of the threshold , the algorithm applies BPTT from the last step of the sequence ( line 4 : t_sp = T ) . As training advances , minimizing the error between the outputs and the labels minimizes the entropy of the output distribution , i.e . causes the activations at the end of each sequence example to be concentrated around one of the output neurons , such that the maximum activation ŷ^max_tsp is maximized . Through BPTT from that last time step , credit is assigned to earlier time steps as well . Progressively through training , this causes outputs to cross the threshold earlier and earlier , under the condition that relevant , credit-assigned input signal does exist earlier . This conditional acceleration causes the optimization of output timing . This is also shown mathematically in Appendix A . Importantly , the insight that it is through the minimization of entropy ( Eq . 4 ) that timing is minimized gives us access to a mechanism for balancing between minimizing the loss and minimizing the timing . Specifically , we introduce to the loss function a regularization term , weighted by a hyperparameter β , such that minimization of the loss rewards the entropy H of the outputs ŷ_tsp : L_RC ( ŷ_tsp , y ) = L ( ŷ_tsp , y ) − β H ( ŷ_tsp ) . ( 1 ) RC-inference after RC-training is similar to Algorithm 1 , but the backward pass ( line 12 ) is not applied .
It should be noted that the inference stage on its own , performed in this manner , where a threshold decides the timing of the output , is a version of what has been called early exit or early inference . In our implementation , the expectation is that the model learns to encode information in the rank order of its outputs ' timing , because that timing is integrated into the learning process . In this sense , at inference , the model does not merely exit early , but it does so through an underlying learned rank code ( RC ) . Our experimental demonstrations confirmed that the loss is minimized through RC-training , although BPTT 's application time-step varies between training examples , and that timing is also minimized down to an optimal floor , as the conditional arguments above ( and Eq . 8 in the Appendix ) predict . | The authors propose a method for fast and efficient classification of sequential data. The guiding principle is that for some data modalities it is not necessary to see the whole sequence in order to make a fairly certain classification. Their model reduces inference time by learning a rank code that is inspired by spiking neural networks. Reported results show improved inference times in two toy sequence classification tasks, temporal MNIST, and in Google Speech Commands classification (compared to models without optimizing timing of inference through learning a rank code). Increasing inference speed comes with a minimal decrease in accuracy, the authors, however, introduce and show the effectiveness of a regularization term that allows for tuning of this speed-accuracy trade-off. | SP:7fa47de279e72f0782efd67722919321badcb022
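The early-exit RC-inference described above can be sketched as a pure function of the per-step output activations; the fallback to the final-step argmax when no neuron crosses the threshold mirrors line 4 of Algorithm 1 (names and signature are illustrative assumptions).

```python
import numpy as np

def rc_inference(outputs, theta):
    """Early-exit RC-inference. `outputs`: (T, C) per-step output activations.
    Returns (label, steps_used): the label of the first neuron to cross theta;
    if none does, the argmax at the final step after consuming all T steps."""
    for t, y in enumerate(outputs):
        if (y >= theta).any():
            return int(np.argmax(y)), t + 1   # halt: rank-one 'spike'
    return int(np.argmax(outputs[-1])), len(outputs)
```

Raising theta trades later, more confident decisions against earlier, cheaper ones, which is the tunable speed-accuracy trade-off noted in the review.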
Physics Informed Convex Artificial Neural Networks (PICANNs) for Optimal Transport based Density Estimation | 1 INTRODUCTION . Optimal Mass Transport ( OMT ) is a well-studied problem with a variety of applications in a diverse set of fields , ranging from physics to computer vision and in particular statistics and data science . In this paper we propose a new framework for the estimation of the solution to the L2-optimal transport problem between two densities . Our algorithm is based on Brenier ’ s theorem and builds on recent developments of input convex neural networks and physics-informed neural networks for solving PDEs . Before we describe the contributions of the article in more detail , we will briefly summarize the motivation of our investigations and recent developments in the field . Density estimation and random sampling : The density estimation problem is to estimate a smooth probability density based on a discrete finite set of observations . In traditional parametric density estimation techniques , the data is assumed to be drawn from a known parametric family of distributions . One of the most ubiquitous parametric techniques is Gaussian Mixture Modeling ( McLachlan & Basford , 1988 ) . Nonparametric techniques were first proposed by Fix & Hodges ( 1951 ) ( Silverman & Jones ( 1989 ) ) to move away from rigid distributional assumptions . The most widely used approach is kernel density estimation , which dates back to Rosenblatt ( 1952a ) and Parzen ( 1962 ) . Many challenges remain regarding the implementation and practical performance of kernel density estimators , including in particular the bandwidth selection and the lack of local adaptivity , resulting in a large sensitivity to outliers ( Loader et al. , 1999 ) . These problems are particularly exacerbated in high dimensions with the curse of dimensionality . Recently , diffeomorphic transformation-based algorithms have been proposed to tackle this problem ( Dinh et al.
, 2017 ; Marzouk et al. , 2016 ; Younes , 2020 ; Bauer et al. , 2017 ) . The basic concept of transformation-based algorithms is to find a diffeomorphic mapping between a reference probability distribution and the unknown target distribution , from which the data is drawn . Consequently , transformation-based density estimation leads at the same time to an efficient generative model , as new samples from the estimated density can be generated at a low cost by sampling from the reference density and transforming the samples by the estimated transformation . The fundamental problem in diffeomorphic transformation-based approaches is how to estimate and select the transformation : from a theoretical point of view there exists an infinite set of transformations that map two given probability densities onto each other . Recently , several deep learning methods have been devised for this task , where Normalizing Flows ( NF ) stand out among these methods . Examples of such models include Real NVP ( Dinh et al. , 2017 ) , Masked Autoregressive Flows ( Papamakarios et al. , 2017 ) , iResNets ( Behrmann et al. , 2019 ) , Flow++ ( Ho et al. , 2019 ) and Glow ( Kingma & Dhariwal , 2018 ) . For a review of the vast NF literature , we refer to the overview article ( Kobyzev et al. , 2020 ) . Although these methods have been shown to perform well in density estimation applications , the interpretability of the obtained transformation is less clear , e.g . in Real NVP ( Dinh et al. , 2017 ) , the solution selection is obtained by restricting the transformations to the class of diffeomorphisms with triangular Jacobians that are easy to invert , which is closely related to the Knothe-Rosenblatt rearrangement ( Knothe , 1957 ; Rosenblatt , 1952b ) . Optimal mass transport : Optimal mass transport , on the other hand , formulates the transport map selection as the minimizer of a cost function ( Villani , 2008 ; 2003 ) .
The optimal transportation cost induces a metric structure , the Wasserstein metric , on the space of probability densities and is sometimes referred to as the Earth Mover ’ s Distance . This theory , which dates back to 1781 , was originally formulated by the French mathematician Gaspard Monge ( 1781 ) . The difficulty in applying this framework to the proposed density estimation problem lies in solving the corresponding optimization problem , which in a dimension greater than one is highly non trivial . The fully discrete OMT problem ( optimal assignment problem ) can be solved using linear programming and can be approximated by the Sinkhorn algorithm ( Cuturi , 2013a ; Papadakis , 2015 ) . However , these algorithms do not lead to a continuous transformation map and thus can ’ t be used for the proposed diffeomorphic density estimation and generative modelling . Previous algorithmic solutions for the continuous OMT problem include fluid mechanics-based approaches ( Benamou & Brenier , 2000 ) , finite element or finite difference-based methods ( Benamou et al. , 2010 ; Benamou & Duval , 2019 ) and steepest descent-based energy minimization approaches ( Angenent et al. , 2003 ; Carlier et al. , 2010 ; Loeper & Rapetti , 2005 ) . In recent years , several deep learning methods have been deployed for solving the OMT problem . In these methods , the OMT problem is typically embedded in the loss function for the neural network model . Recent work by Makkuva et al . ( 2020 ) proposed to approximate the OMT map as the solution of min-max optimization using input convex neural networks ( ICNN ) , see Amos et al . ( 2017 ) . The min-max nature of this algorithm arises from the need to train an ICNN to represent a convex function and the conjugate of the convex function . Building upon this approach , Korotin et al . ( 2019 ) imposed a cyclic regularisation that converts the min-max optimization problem to a standard minimization problem . 
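The Sinkhorn approximation to the fully discrete problem mentioned above can be sketched in a few lines ( a minimal illustration of our own ; `a` and `b` are the two discrete marginals and `C` the cost matrix , and , as noted in the text , this yields a discrete coupling rather than the continuous transformation map needed here ) :

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Entropy-regularized discrete OT (Cuturi, 2013): alternately rescales
    the kernel K = exp(-C / eps) until the coupling has marginals a and b."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # coupling matrix P
```

For two uniform marginals on the same support and a squared-distance cost , the returned coupling concentrates near the diagonal , as expected .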
This change results in a faster converging algorithm that scales well to higher dimensions and also prevents convergence to local saddle points and instabilities during training , as is the case in the min-max algorithm . Another class of neural networks which have been proposed to solve OMT problems are Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) . GANs are defined through a min-max game of two neural networks where one of the networks tries to generate new samples from a data distribution , while the other network judges whether these generated samples originate from the data population or not . Later , Gulrajani et al . ( 2017 ) proposed using the Wasserstein-1 distance in GANs instead of the Jensen-Shannon divergence between the generated distribution and the data distribution as in the original formulation . They demonstrated that this new loss function leads to better stability of the training of networks , attributed to the Wasserstein metric being well defined even when the two distributions do not share the same support . Contributions : In this paper , we propose a different deep learning-based framework to approximate the optimal transport maps . The approach we present relies on Brenier ’ s celebrated theorem ( Brenier , 1991 ) , thereby reducing the optimal transport problem to that of solving a partial differential equation : a Monge-Ampere type equation . We frame this PDE in the recently developed paradigm of Physics Informed Neural Networks ( PINNs ) ( Raissi et al. , 2017 ) . Similar to other deep learning-based algorithms , our framework directly inherits the dimensional scalability of neural networks ( Shin et al. , 2020 ) , which traditional finite element or finite difference methods for solving PDEs do not possess . Brenier ’ s theorem further states that the optimal transport map is given by the gradient of a convex function : the Brenier potential .
To incorporate this information in our PINN approach , we parameterize the Brenier potential using an ICNN , thereby guaranteeing its convexity . We test the accuracy of our OMT solver on numerous synthetic examples for which analytical solutions are known . Our experiments show that our algorithm indeed approximates the true solution well , even in high dimensions . To further quantify the performance of the new framework , we compare it to two other deep learning-based algorithms , for which we guided the selection by the results of the recent benchmarking paper by Korotin et al . ( 2021 ) , in which they evaluate the methods presented in Seguy et al . ( 2017 ) ; Nhan Dam et al . ( 2019 ) ; Taghvaei & Jalali ( 2019 ) ; Makkuva et al . ( 2020 ) ; Liu et al . ( 2019 ) ; Mallasto et al . ( 2019 ) ; Korotin et al . ( 2019 ) . We restricted our comparison to the algorithms of Makkuva et al . ( 2020 ) and Korotin et al . ( 2019 ) , as these two showed the best performance in this benchmark . Our results showed that the newly proposed method significantly outperforms these methods in terms of accuracy . As an explicit application of our solution of OMT , we focus on the density estimation problem . In synthetic examples , we show that we can estimate the true density based on a limited number of samples . We compare the results of the proposed method to four other density estimation algorithms : the two OMT based algorithms mentioned above , and two methods from the family of normalizing flows : RealNVP ( Dinh et al . ( 2016 ) ) and iResNet ( Behrmann et al . ( 2019 ) ) . Finally , we demonstrate how our framework can be combined with a traditional autoencoder to obtain a generative framework . In accordance with the best practices for reproducible research , we are providing an open-source version of the code , which is publicly available on GitHub . 2 OMT USING DEEP LEARNING .
In this section , we will present our framework for solving the Optimal Mass Transport ( OMT ) problem . Our approach will combine methods of deep learning with the celebrated theorem of Brenier , which reduces the solution of the OMT problem to solving a Monge-Ampere type equation . To be more precise , we will tackle this problem by embedding the Monge-Ampere equation into the broadly applicable concept of Physics Informed Neural Networks . 2.1 MATHEMATICAL BACKGROUND OF OMT . We start by summarizing the mathematical background of OMT , including a description of Brenier ’ s theorem . For more information we refer to the vast literature on OMT , see e.g. , Villani ( 2003 ; 2008 ) . Let Ω be a convex and bounded domain of Rn and let dx denote the standard measure on Rn . For simplicity , we restrict our presentation to the set P ( Ω ) of all absolutely continuous measures on Ω , i.e. , P ( Ω ) ∋ µ = fdx with f ∈ L1 ( Ω ) , such that ∫ Ω fdx = 1 . From here on , we will identify the measure µ with its density function f . We aim to minimize the cost of transporting a density µ to a density ν using a ( transport ) map T , which leads to the so-called Monge Optimal Transport Problem . We will consider only the special case of a quadratic cost function as Brenier ’ s theorem , which forms the basis of our algorithm , is only true for this type of cost function . Definition 2.1 ( L2-Monge Optimal Transport Problem ) Given µ , ν ∈ P ( Ω ) , minimize M ( T ) = ∫ Ω ‖x − T ( x ) ‖2dµ ( x ) over all µ-measurable maps T : Ω → Ω subject to ν = T∗µ . We will call an optimal T an optimal transport map . Here , the constraint is formulated in terms of the push forward action of a measurable map T : Ω → Ω , which is defined via T∗µ ( B ) = µ ( T−1 ( B ) ) , for every measurable set B ⊂ Ω . By a change of coordinates , the constraint T∗µ = T∗ ( fdx ) = ν = gdx can thus be reduced to the equation f ( x ) = g ( T ( x ) ) |det ( DT ( x ) ) | .
( 1 ) This equation can also be expressed via the pullback action as µ = T∗ν := ( T−1 ) ∗ν . The existence and uniqueness of an optimal transport map is not guaranteed in general . We will see that , in our situation , i.e. , for absolutely continuous measures , existence and uniqueness are indeed guaranteed . First , we will introduce a more general formulation of the Monge problem , called the Kantorovich formulation . To this end , we define the space of all transport plans Π ( µ , ν ) , i.e. , of all measures on the product space Ω × Ω , such that the first marginal is µ and the second marginal is ν . Then we have : Definition 2.2 ( L2-Kantorovich ’ s Optimal Transport Problem ) Given µ , ν ∈ P ( Ω ) , minimize K ( π ) = ∫ Ω×Ω ‖x − y‖2dπ ( x , y ) over all π ∈ Π ( µ , ν ) . Note that the L2-Wasserstein metric W2 ( µ , ν ) between µ and ν is defined as the square root of the infimum of K. We will now formulate Brenier ’ s theorem , which guarantees the existence of an optimal transport map and will be the central building block of our algorithm : Theorem 2.3 ( Brenier ( 1991 ) ) Let µ , ν ∈ P ( Ω ) . Then there exists a unique optimal transport plan π∗ ∈ Π ( µ , ν ) , which is given by π∗ = ( id × ∇u ) ∗µ where T = ∇u is the gradient of a convex function u that pushes µ forward to ν , i.e. , ( ∇u ) ∗µ = ν . The inverse T−1 is also given by the gradient of a convex function that is the Legendre transform of the convex function u . Thus , Brenier ’ s Theorem guarantees the existence and the uniqueness of the optimal transport map of the OMT problem . Consequently , we can determine this optimal transport map by solving for the function u in the form of a Monge-Ampère equation : det ( D2u ( x ) ) · g ( ∇u ( x ) ) = f ( x ) ( 2 ) where D2 is the Hessian , µ = fdx and ν = gdx . We obtain equation 2 directly from equation 1 using the constraint that T = ∇u as required by Brenier ’ s theorem . We will also refer to this map as the Brenier map .
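Both equation 1 and equation 2 can be checked numerically on a one-dimensional Gaussian example where the Brenier potential is known in closed form ( an illustration of our own , not from the paper ; in 1D the two identities coincide , since DT = D2u ) :

```python
import numpy as np

def normal_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# Convex Brenier potential u(x) = 1.5 x^2 - 2 x, so T(x) = u'(x) = 3 x - 2
# pushes mu = N(0, 1) forward to nu = N(-2, 9), and u''(x) = 3 everywhere.
x = np.linspace(-3.0, 3.0, 13)
f = normal_pdf(x, 0.0, 1.0)                # source density f
g_at_T = normal_pdf(3 * x - 2, -2.0, 3.0)  # target density g evaluated at T(x)

# Change of variables (Eq. 1) / Monge-Ampere (Eq. 2): det(D^2 u) g(grad u) = f
assert np.allclose(3.0 * g_at_T, f)
```

For Gaussians the monotone affine map is exactly the Brenier map , which is why this check holds pointwise .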
This map is a diffeomorphism as it is the gradient of a strictly convex function . In general , the Monge–Ampère equation is a nonlinear second-order partial differential equation . If we limit ourselves to convex solutions , then the differential equation is elliptic . If the two densities f and g are smooth ( C∞ ) , positive and absolutely continuous with respect to each other , then a unique smooth convex solution is guaranteed to exist . See e.g . Forzani & Maldonado ( 2004 ) ; Bakelman ( 1983 ) ; De Philippis & Figalli ( 2013 ) for further details on existence and regularity of solutions to this equation . Using methods of classical numerical analysis , Brenier ’ s theorem has been used e.g . in Peyré et al . ( 2019 ) to obtain a numerical framework for the continuous OMT problem . In the following section we will propose a new discretization to this problem , which will make use of recent advances in deep learning . | This paper introduces a new method for density estimation that is based on the idea of optimal mass transport (OMT), i.e., finding some function that transports a probability density into another probability density while minimizing the transportation cost. To do so, the authors leverage Brenier's theorem which guarantees the existence and uniqueness of the optimal transport map. As a consequence of this theorem, the optimal transport map of the OMT problem can be obtained as the solution of a special nonlinear partial differential equation (PDE). The authors then show that the approximate solution can be computed with the help of physics informed neural networks (PINNs). The approach is demonstrated for several canonical examples. | SP:3a7dfa4251b64bfa4e0fef36a4dcecce97e3031c |
| The paper presented an input convex neural networks (ICNN)-based methods to approximate the optimal transport maps. The learning objective is based on the Monge-Ampere equation, i.e. zero loss corresponds to the solution to the Monge-Ampere equation. Physics Informed Neural Networks (PINNs) are adopted to minimise the loss. The author(s) also extended the work to density estimation. The problem is interesting as estimation of optimal maps has a wide range of applications. Experiment results look promising when compared with other PINNs in one experiment setup, but only qualitative results are provided for the density estimation experiment with no comparison of other baselines. | SP:3a7dfa4251b64bfa4e0fef36a4dcecce97e3031c |
Churn Reduction via Distillation | In real-world systems , models are frequently updated as more data becomes available , and in addition to achieving high accuracy , the goal is to also maintain a low difference in predictions compared to the base model ( i.e . predictive “ churn ” ) . If model retraining results in vastly different behavior , then it could cause negative effects in downstream systems , especially if this churn can be avoided with limited impact on model accuracy . In this paper , we show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn . We then show that distillation performs strongly for low churn training against a number of recent baselines on a wide range of datasets and model architectures , including fully-connected networks , convolutional networks , and transformers . 1 INTRODUCTION Deep neural networks ( DNNs ) have had profound success at solving some of the most challenging machine learning problems . While much of the focus has been on attaining state-of-the-art predictive performance , comparatively little effort has gone towards improving other practical aspects . One such important practical aspect is reducing unnecessary predictive churn with respect to a base model . We define predictive churn as the difference in the predictions of a model relative to a base model on the same datapoints . In a production system , models are often continuously released through an iterative improvement process which cycles through launching a model , collecting additional data and researching ways to improve the current model , and proposing a candidate model to replace the current version of the model serving in production .
In order to validate a candidate model, it often needs to be compared to the production model through live A/B tests (it is known that offline performance alone is not sufficient, especially if these models are used as part of a larger system where the offline and online metrics may not perfectly align (Deng et al., 2013; Beel et al., 2013)). Live experiments are costly: they often require human evaluations when the candidate and production models disagree in order to know which model was correct (Theocharous et al., 2015; Deng & Shi, 2016). Therefore, minimizing unnecessary predictive churn can have a significant impact on the cost of the launch cycle. It has been observed that training DNNs can be very noisy due to a variety of factors, including random initialization (Glorot & Bengio, 2010), mini-batch ordering (Loshchilov & Hutter, 2015), data augmentation and processing (Santurkar et al., 2018; Shorten & Khoshgoftaar, 2019), and hardware (Turner & Nowotny, 2015; Bhojanapalli et al., 2021); in other words, running the same procedure multiple times can lead to models with a surprising number of disagreeing predictions, even though all of them can have very high accuracy (Bahri & Jiang, 2021). While the stability of the training procedure is a separate problem from lowering predictive churn, such instability can further exacerbate the issue and underscores the difficulty of the problem. Knowledge distillation (Hinton et al., 2015), which involves having a teacher model and mixing its predictions with the original labels, has proved to be a useful tool in deep learning. In this paper, we show that, surprisingly, it is not only an effective tool for churn reduction when using the base model as the teacher, but is also mathematically aligned with learning under a constraint on the churn. Thus, in addition to providing a strong method for low-churn training, we also provide new insight into distillation.
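To make the notion concrete, predictive churn between a candidate model and a base model can be computed as the fraction of test points on which their hard predictions disagree. The sketch below is illustrative and not from the paper; the function and array names are hypothetical.

```python
import numpy as np

def predictive_churn(base_probs: np.ndarray, cand_probs: np.ndarray) -> float:
    """Fraction of examples where the hard (argmax) predictions of the
    candidate model differ from those of the base model.

    base_probs, cand_probs: arrays of shape (n_examples, n_classes)
    holding each model's predicted class probabilities.
    """
    base_pred = base_probs.argmax(axis=1)
    cand_pred = cand_probs.argmax(axis=1)
    return float(np.mean(base_pred != cand_pred))

# Toy example: two 3-class models that disagree on 1 of 4 examples.
base = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.3, 0.3, 0.4],
                 [0.2, 0.5, 0.3]])
cand = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.5, 0.3, 0.2],  # disagrees: class 0 vs class 2
                 [0.1, 0.6, 0.3]])
print(predictive_churn(base, cand))  # 0.25
```

Note that both models can have identical accuracy while still exhibiting high churn, which is why churn is tracked as a separate metric.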
Our contributions are as follows: • We show theoretically an equivalence between the low-churn training objective (i.e., minimizing a loss function subject to a churn constraint with respect to the base model) and using knowledge distillation with the base model as the teacher. • We show that distillation performs strongly in a wide range of experiments against a number of baselines that have been considered for churn reduction. • Our distillation approach is similar to a previous method called "anchor" (Fard et al., 2016), which trains on the true labels instead of the distilled labels for the examples incorrectly predicted by the base model. Our proposal outperforms it by a surprising amount. We present both theoretical and experimental results showing that the modification of anchor relative to distillation actually hurts performance. 2 RELATED WORKS Prediction Churn. There are few works that address low-churn training with respect to a base model. Fard et al. (2016) proposed an anchor loss, which is similar to distillation when the base model's prediction agrees with the original label, and uses a scaled version of the original label otherwise. In our empirical evaluation, we find that this procedure performs considerably worse than distillation. Cotter et al. (2019a); Goh et al. (2016) use constrained optimization by adding a constraint on the churn. We use some of the theoretical insights from that work to show an equivalence between distillation and the constrained optimization problem. Thus, we are able to bypass the added complexity of using constrained optimization (Cotter et al., 2019b) in favor of distillation, which is a simpler and more robust method. A related but different notion of churn that has been studied is one where the goal is to reduce training instability. Anil et al. (2018) noted that co-distillation is an effective method.
Bahri & Jiang (2021) propose a locally adaptive variant of label smoothing, and Bhojanapalli et al. (2021) propose entropy regularizers and a variant of co-distillation. We tested many of the baselines proposed in these papers, adapted them to our notion of churn, and found that they were not effective at reducing predictive churn w.r.t. a base model. Distillation. Distillation (Ba & Caruana, 2013; Hinton et al., 2015), first proposed to transfer knowledge from larger networks to smaller ones, has become immensely popular. Applications include learning from noisy labels (Li et al., 2017), model compression (Polino et al., 2018), adversarial robustness (Papernot et al., 2016), DNNs with logic rules (Hu et al., 2016), visual relationship detection (Yu et al., 2017), reinforcement learning (Rusu et al., 2015), domain adaptation (Asami et al., 2017), and privacy (Lopez-Paz et al., 2015). Our work adds to the list of applications in which distillation is effective. The theoretical motivation of distillation, however, is less established. Lopez-Paz et al. (2015) studied distillation as learning using privileged information (Vapnik & Izmailov, 2015). Phuong & Lampert (2019) establish fast convergence of the expected risk of a distillation-trained linear classifier. Foster et al. (2019) provide a generalization bound for the student under the assumption that it learns a model close to the teacher. Dong et al. (2019) argued that distillation has a similar effect to early stopping. Mobahi et al. (2020) showed an equivalence to increasing the regularization strength for kernel methods. Menon et al. (2020) establish a bias-variance trade-off for the student. Our analysis provides a new theoretical perspective on its relationship to churn reduction. 3 DISTILLATION FOR CONSTRAINING CHURN We are interested in a multiclass classification problem with an instance space $\mathcal{X}$ and a label space $[m] = \{1, \ldots, m\}$. Let $D$ denote the underlying data distribution over instances and labels, and $D_X$ the corresponding marginal distribution over $\mathcal{X}$. Let $\Delta_m$ denote the $(m-1)$-dimensional simplex with $m$ coordinates. We will use $p : \mathcal{X} \to \Delta_m$ to denote the underlying conditional class probabilities, where $p_y(x) = \mathbb{P}(Y = y \mid X = x)$. We assume that we are provided a base classifier $g : \mathcal{X} \to \Delta_m$ that predicts a vector of probabilities $g(x) \in \Delta_m$ for any instance $x$. Our goal is then to learn a new classifier $h : \mathcal{X} \to \Delta_m$, constraining it to have low predictive churn against $g$. We measure the classification performance of a classifier $h$ using a loss function $\ell : [m] \times \Delta_m \to \mathbb{R}_+$ that maps a label $y \in [m]$ and a prediction $h(x) \in \Delta_m$ to a non-negative number $\ell(y, h(x))$, and denote the classification risk by $R(h) := \mathbb{E}_{(x,y) \sim D}[\ell(y, h(x))]$. We would ideally like to define predictive churn as the fraction of examples on which $h$ and $g$ disagree. For the purpose of designing a tractable algorithm, we will instead work with a softer notion of churn, which evaluates the divergence between their output distributions. To this end, we use a measure of divergence $d : \Delta_m \times \Delta_m \to \mathbb{R}_+$, and denote the expected churn between $h$ and $g$ by $C(h) := \mathbb{E}_{x \sim D_X}[d(g(x), h(x))]$. We then seek to minimize the classification risk of $h$, subject to the expected churn being within an allowed limit $\epsilon > 0$:

$$\min_{h : \mathcal{X} \to \Delta_m} R(h) \quad \text{s.t.} \quad C(h) \le \epsilon. \qquad (1)$$

We consider loss and divergence functions that are defined in terms of a scoring function $\phi : \Delta_m \to \mathbb{R}_+^m$ that maps a distribution to an $m$-dimensional score. Specifically, we will consider scoring functions $\phi$ that are strictly proper (Gneiting & Raftery, 2007; Williamson et al., 2016), i.e., for which, given any distribution $u \in \Delta_m$, the conditional risk $\mathbb{E}_{y \sim u}[\phi_y(v)]$ is uniquely minimized by $v = u$. The following are general loss and divergence functions derived from $\phi$:

$$\ell_\phi(y, v) := \phi_y(v); \qquad d_\phi(u, v) := \sum_{y \in [m]} u_y \left(\phi_y(v) - \phi_y(u)\right). \qquad (2)$$

For example,
cross-entropy, KL-divergence, and the squared loss are special cases of this formulation. 3.1 BAYES-OPTIMAL CLASSIFIER We show below that for the loss and divergence functions defined in (2), the optimal feasible classifier for the constrained problem in (1) is a convex combination of the class probability function $p$ and the base classifier $g$. Proposition 1. Let $(\ell, d)$ be defined as in (2) for a strictly proper scoring function $\phi$. Suppose $\phi(u)$ is strictly convex in $u$. Then there exists $\lambda^* \in [0, 1]$ such that the following is an optimal feasible classifier for (1): $h^*(x) = \lambda^* p(x) + (1 - \lambda^*) g(x)$. Furthermore, if $u \cdot \phi(u)$ is $\alpha$-strongly concave over $u \in \Delta_m$ w.r.t. the $L_q$-norm, then $\lambda^* \le \sqrt{2\epsilon / \left(\alpha\, \mathbb{E}_x\left[\|p(x) - g(x)\|_q^2\right]\right)}$. The strong concavity condition in Proposition 1 is satisfied by the cross-entropy loss and KL-divergence for $\alpha = 1$ with the $L_1$-norm, and by the squared loss and $L_2$-distance for $\alpha = 2$ with the $L_2$-norm. The bound suggests that the mixing coefficient $\lambda^*$ depends on how close the base classifier is to the class probability function $p$.

Algorithm 1: Distillation-based Churn Reduction
1: Inputs: training sample $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, grid of mixing coefficients $\Lambda = \{\lambda_1, \ldots, \lambda_L\}$, base classifier $g$, constraint slack $\epsilon > 0$
2: Train a classifier $h_k$ for each $\lambda_k \in \Lambda$ by minimizing the distilled loss in (4): $h_k \in \mathrm{argmin}_{h \in \mathcal{H}}\, \hat{L}_{\lambda_k}(h)$
3: Find a convex combination of $h_1, \ldots, h_L$ by solving the following convex program in $L$ variables: $\min_{h \in \mathrm{co}(h_1, \ldots, h_L)} \hat{R}(h)$ s.t. $\hat{C}(h) \le \epsilon$, and return the solution $\hat{h}$

3.2 DISTILLATION-BASED APPROACH Proposition 1 directly motivates the use of a distillation-based approach for solving the churn-constrained optimization problem in (1).
We propose treating the base classifier $g$ as a teacher model, mixing the training labels $y$ with scores from the teacher $g(x)$, and minimizing a classification loss against the transformed labels:

$$L_\lambda(h) = \mathbb{E}_{(x,y) \sim D}\left[\left(\lambda e_y + (1 - \lambda)\, g(x)\right) \cdot \phi(h(x))\right], \qquad (3)$$

where $e_y \in \{0, 1\}^m$ denotes a one-hot encoding of the label $y \in [m]$ and $\phi$ is a strictly proper scoring function. It is straightforward to show that when $\lambda = \lambda^*$, the optimal classifier for the above distillation loss takes the same form as in Proposition 1, i.e., $h^*(x) = \lambda^* p(x) + (1 - \lambda^*) g(x)$. While the optimal mixing parameter $\lambda^*$ is unknown, we propose treating it as a hyper-parameter and tuning it to reach the desired level of churn. In practice, we do not have direct access to the distribution $D$ and need to work with a sample $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ drawn from $D$. To this end, we define the empirical risk and the empirical churn as follows:

$$\hat{R}(h) = \frac{1}{n} \sum_{i=1}^n \ell_\phi(y_i, h(x_i)); \qquad \hat{C}(h) = \frac{1}{n} \sum_{i=1}^n d_\phi(g(x_i), h(x_i)),$$

where $\ell_\phi$ and $d_\phi$ are defined as in (2) for a scoring function $\phi$. Our proposal is then to solve the following empirical risk minimization problem over a hypothesis class $\mathcal{H} \subset \{h : \mathcal{X} \to \Delta_m\}$ for different values of the coefficient $\lambda_k$ chosen from a finite grid $\{\lambda_1, \ldots, \lambda_L\} \subset [0, 1]$:

$$h_k \in \mathrm{argmin}_{h \in \mathcal{H}}\, \hat{L}_{\lambda_k}(h) := \frac{1}{n} \sum_{i=1}^n \left(\lambda_k e_{y_i} + (1 - \lambda_k)\, g(x_i)\right) \cdot \phi(h(x_i)). \qquad (4)$$

To construct the final classifier, we find a convex combination of the $L$ classifiers $h_1, \ldots, h_L$ that minimizes $\hat{R}(h)$ while satisfying the constraint $\hat{C}(h) \le \epsilon$, and return an ensemble of the $L$ classifiers. The overall procedure is outlined in Algorithm 1, where we denote the set of convex combinations of classifiers $h_1, \ldots, h_L$ by $\mathrm{co}(h_1, \ldots, h_L) = \{h : x \mapsto \sum_{j=1}^L \alpha_j h_j(x) \mid \alpha \in \Delta_L\}$. The post-processing step in Algorithm 1 amounts to solving a simple convex program in $L$ variables.
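The empirical distilled objective in (4) can be sketched in a few lines with the cross-entropy scorer $\phi_y(u) = -\log u_y$ (the scorer used in the paper's experiments). The snippet below evaluates the loss on fixed prediction arrays rather than running a full training loop; all variable names are illustrative.

```python
import numpy as np

def distilled_loss(probs, labels, teacher_probs, lam):
    """Empirical distilled loss (4): the average over examples of
    (lam * e_y + (1 - lam) * g(x)) . phi(h(x)), with phi_y(u) = -log(u_y).

    probs:         (n, m) student predictions h(x_i)
    labels:        (n,)   integer labels y_i
    teacher_probs: (n, m) base-model predictions g(x_i)
    lam:           mixing coefficient in [0, 1]
    """
    n, m = probs.shape
    one_hot = np.eye(m)[labels]                        # e_{y_i}
    targets = lam * one_hot + (1 - lam) * teacher_probs
    return float(np.mean(np.sum(targets * (-np.log(probs)), axis=1)))

# lam = 1 recovers plain cross-entropy on the hard labels;
# lam = 0 is pure distillation towards the base model.
h = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2]])  # student predictions
g = np.array([[0.6, 0.3, 0.1], [0.1, 0.7, 0.2]])  # base-model predictions
y = np.array([0, 1])
print(distilled_loss(h, y, g, 0.5))
```

Because the mixed target is itself a distribution over classes, this is an ordinary cross-entropy loss against soft labels, so it drops into standard training pipelines with no extra machinery.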
This is needed for technical reasons in our theoretical results, specifically, to translate a dual-optimal solution to (1) into a primal-feasible solution. In practice, however, we do not construct an ensemble, and instead simply return the single classifier that achieves the least empirical risk while satisfying the churn constraint. In our experiments, we use the cross-entropy loss for training, i.e., we set $\phi_y(u) = -\log(u_y)$. 4 THEORETICAL GUARANTEES We provide optimality and feasibility guarantees for the proposed algorithm and also explain why our approach is better suited for optimizing accuracy (subject to a churn constraint) compared to the previous churn-reduction method of Fard et al. (2016). 4.1 OPTIMALITY AND FEASIBILITY GUARANTEES We now show that the classifier $\hat{h}$ returned by Algorithm 1 approximately satisfies the churn constraint, while achieving a risk close to that of the optimal feasible classifier in $\mathcal{H}$. This result assumes that we are provided with generalization bounds for the classification risk and the churn. Theorem 2. Let the scoring function $\phi : \Delta_m \to \mathbb{R}_+^m$ be convex, with $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$. Let the set of classifiers $\mathcal{H}$ be convex, with the base classifier $g \in \mathcal{H}$. Suppose $C$ and $R$ enjoy the following generalization bounds: for any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over the draw of $S \sim D^n$, for any $h \in \mathcal{H}$, $|R(h) - \hat{R}(h)| \le \Delta_R(n, \delta)$ and $|C(h) - \hat{C}(h)| \le \Delta_C(n, \delta)$, for some $\Delta_R(n, \delta)$ and $\Delta_C(n, \delta)$ that are decreasing in $n$ and approach $0$ as $n \to \infty$. Let $\tilde{h}$ be an optimal feasible classifier in $\mathcal{H}$, i.e., $C(\tilde{h}) \le \epsilon$ and $R(\tilde{h}) \le R(h)$ for all classifiers $h$ for which $C(h) \le \epsilon$. Let $\hat{h}$ be the classifier returned by Algorithm 1 with $\Lambda = \left\{\max\left\{\frac{\epsilon}{2B}, u\right\} \,\middle|\, u \in \left\{\frac{1}{L}, \frac{2}{L}, \ldots, 1\right\}\right\}$ for some $L \in \mathbb{N}_+$. For any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over the draw of $S \sim D^n$:

Optimality: $R(\hat{h}) \le R(\tilde{h}) + O\left(\left(1 + \frac{2B}{\epsilon}\right)\left(\Delta_R(n, \delta) + \Delta_C(n, \delta) + \frac{B}{L}\right)\right)$,

Feasibility: $C(\hat{h}) \le \epsilon + \Delta_C(n, \delta)$.
In practice, we expect the churn metric to generalize better than the classification risk, i.e., $\Delta_C(n, \delta)$ to be smaller than $\Delta_R(n, \delta)$. This is because the classification risk is computed on "hard" labels $y \in [m]$ from the training sample, whereas the churn metric is computed on "soft" labels $g(x) \in \Delta_m$ from the base model. The traditional view of distillation (Hinton et al., 2015) suggests that the soft labels from a teacher model come with confidence scores for each example, and thus allow the student to generalize well to unseen examples. A similar view is posed by Menon et al. (2020), who argue that the soft labels from the teacher have "lower variance" than the hard labels from the training sample, and therefore aid in better generalization of the student. See Appendix B for explicit bounds on $\Delta_C(n, \delta)$ and $\Delta_R(n, \delta)$. In fact, for certain base classifiers $g$, generalizing well on "churn" can have the additional benefit of improving classification performance, as shown by the proposition below. Proposition 3. Let $(\ell, d)$ be defined as in (2) for a strictly proper scoring function $\phi$. Suppose $\phi(u)$ is strictly convex in $u$, $\Phi$-Lipschitz w.r.t. the $L_1$-norm for each $y \in [m]$, and $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$. Let $\lambda^*$ be the optimal mixing coefficient defined in Proposition 1. Let $\Delta_C(n, \delta)$ be the churn generalization bound defined in Theorem 2. Let $\tilde{h}$ be an optimal feasible classifier in $\mathcal{H}$ and $\hat{h}$ be the classifier returned by Algorithm 1. Then for any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over the draw of $S \sim D^n$: $R(\hat{h}) - R(\tilde{h}) \le \epsilon + \Delta_C(n, \delta) + (B + \Phi \lambda^*)\, \mathbb{E}_{x \sim D_X}\left[\|p(x) - g(x)\|_1\right]$. This result bounds the excess classification risk in terms of the churn generalization bound and the expected difference between the base classifier $g$ and the underlying class probability function $p$. When the base classifier is close to $p$, low values of $\Delta_C(n, \delta)$ result in low classification risk.
4.2 ADVANTAGE OVER ANCHOR LOSS We next compare our distillation loss in (3) with the earlier anchor loss of Fard et al. (2016), which uses the base model's prediction only when it agrees with the original label, and uses a scaled version of the original label otherwise. While originally proposed for churn reduction with binary labels, we provide below an analogous version of this loss for the multiclass setup:

$$L_{\mathrm{anc}}(h) = \mathbb{E}_{(x,y) \sim D}\left[a \cdot \phi(h(x))\right], \qquad (5)$$

where $a = \alpha g(x) + (1 - \alpha) e_y$ if $y = \mathrm{argmax}_k\, g_k(x)$, and $a = \eta e_y$ otherwise, for hyper-parameters $\alpha, \eta \in [0, 1]$ and a strictly proper scoring function $\phi$. Here, we have used argmax to denote ties being broken in favor of the larger class. While this helps us simplify the exposition, our results can easily be extended to a version of the loss which includes ties. The anchor loss does not take into account the confidence with which the base model disagrees with the sampled label $y$. For example, if the base model predicts near-equal probabilities for all classes, but happens to assign a slightly higher probability to a class different from $y$, the anchor loss would still completely ignore the base model's score (even though it might be the case that all the labels are indeed equally likely to occur). In some cases, this selective use of the teacher labels can result in a biased objective and may hurt the classifier's accuracy. To see this, consider an ideal scenario where the base model predicts the true conditional probabilities $p(x)$ and the student hypothesis class is universal. In this case, minimizing the churn w.r.t. the base model has the effect of maximizing classification accuracy, i.e., a classifier that has zero churn w.r.t. the base model also produces the least classification error. However, as shown below, even in this ideal setup, minimizing the anchor loss may result in a classifier different from the base model. Proposition 4.
When $g(x) = p(x)$, $\forall x$, for any given $\lambda \in [0, 1]$, the minimizer of the distillation loss in (3) over all classifiers $h$ is given by $h^*(x) = p(x)$, whereas the minimizer of the anchor loss in (5) is given by $h_j^*(x) = z_j / \sum_j z_j$, where $z_j = \alpha\, p_j(x)^2 + (1 - \alpha)\, p_j(x)$ if $j = \mathrm{argmax}_k\, p_k(x)$, and $z_j = \left(\eta + \alpha \max_k p_k(x)\right) p_j(x)$ otherwise. Unless $\alpha = 0$ and $\eta = 1$ (which amounts to completely ignoring the base model) or the base model makes hard predictions on all points, i.e., $p_j(x) \in \{0, 1\}$, $\forall x$, the anchor loss encourages scores that differ from the base model $p$. For example, when $\alpha = \eta = 1$ (and the base model makes soft predictions), the anchor loss has the effect of down-weighting the label that the base model is most confident about, and as a result encourages lower scores on that label and higher scores on all other labels. While one can indeed tweak the two hyper-parameters to reduce the gap between the learned classifier and the base model, our proposal requires only one hyper-parameter $\lambda$, which represents an intuitive trade-off between the one-hot and teacher labels. In fact, irrespective of the choice of $\lambda$, the classifier that minimizes our distillation loss in Proposition 4 mimics the base model $p$ exactly, and as a result achieves both zero churn and optimal accuracy. We shall see in the next section that even on real-world datasets, where the base classifier does not necessarily make predictions close to the true class probabilities (and where the student hypothesis class is not necessarily universal and is of limited capacity), our proposal performs substantially better than the anchor loss in minimizing churn at a given accuracy. Figure 3 provides a further ablation study, effectively interpolating between the anchor and distillation methods, and provides evidence that using the true (hard) label instead of the teacher (soft) label can steadily degrade performance.
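To make the contrast concrete, here is one way to construct the per-example training targets implied by the two losses (our reading of (3) and (5); the helper names are illustrative). When the base model only barely prefers a class different from y, the anchor target discards the teacher entirely, while the distilled target still uses its soft scores:

```python
import numpy as np

def distilled_target(g_x, y, lam, m):
    """Mixed label from the distillation loss (3): lam * e_y + (1 - lam) * g(x)."""
    e_y = np.eye(m)[y]
    return lam * e_y + (1 - lam) * g_x

def anchor_target(g_x, y, alpha, eta, m):
    """Label from the anchor loss (5): use the teacher only when its
    argmax agrees with y; otherwise fall back to a scaled hard label."""
    e_y = np.eye(m)[y]
    if y == int(np.argmax(g_x)):
        return alpha * g_x + (1 - alpha) * e_y
    return eta * e_y

# Base model is nearly uniform but slightly prefers class 1, while y = 0:
g_x = np.array([0.33, 0.34, 0.33])
print(distilled_target(g_x, y=0, lam=0.5, m=3))           # keeps the soft scores
print(anchor_target(g_x, y=0, alpha=0.5, eta=1.0, m=3))   # ignores them entirely
```

This discontinuity at the agree/disagree boundary is exactly the confidence-blindness discussed above: the anchor target treats a 0.34-vs-0.33 disagreement the same as a 0.99-vs-0.01 one.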
5 EXPERIMENTS We now show empirically that distillation is an effective method to train models for both accuracy and low churn. We test our method across a large number of datasets and neural network architectures. 5.1 SETUP Datasets and architectures: The following are the datasets we use in our experiments, along with the associated model architectures: • 12 OpenML datasets using fully-connected neural networks. • 10 MNIST variants, SVHN, CIFAR10, and 40 CelebA tasks using convolutional networks. • CIFAR10 and CIFAR100 with ResNet-50, ResNet-101, and ResNet-152. • The IMDB dataset using a transformer network. For each architecture (besides ResNet), we use 5 different sizes. For the fully-connected network, we use a simple network with one hidden layer of 10, 10^2, 10^3, 10^4, or 10^5 units, which we call fcn-x where x is the respective size of the hidden layer. For the convolutional neural network, we start with the LeNet5 architecture (LeCun et al., 1998) and scale the number of hidden units by a factor of x for x = 1, 2, 4, 8, 16, which we call ConvNet-x for the respective x. Finally, we use the basic transformer architecture from the Keras tutorial (Keras, 2020) and scale the number of hidden units by x for x = 1, 2, 4, 8, 16, which we call Transformer-x for the respective x. Code for the models in Keras can be found in the Appendix. For each dataset, we use the standard train/test split if available; otherwise, we fix a random train/test split with ratio 2:1. Setup: For each dataset and neural network, we randomly select from the training set 1000 initial examples, 100 validation examples, and a batch of 1000 examples, and train an initial model on the initial set using the Adam optimizer with default settings, early stopping (i.e., stop when there is no improvement in the validation loss after 5 epochs), and the default random initialization, and use that model as the base model.
Then, for each baseline, we train on the combined initial set and batch (2000 datapoints), again using the Adam optimizer with default settings and the same early stopping scheme, and calculate the accuracy and the churn against the base model on the test set. We average across 100 runs and provide the error bands in the Appendix. For all the datasets except the OpenML datasets, we also have results for the case of 10000 initial examples, 1000 validation examples, and a batch of 1000. We also show results for the case of 100 initial examples, 1000 validation examples, and a batch of 1000 for all of the datasets. Due to space, we show those results in the Appendix. We ran our experiments on a cloud environment. For each run, we used an NVIDIA V100 GPU, which took up to several days to finish all 100 trials. Baselines: We test our method against the following baselines. (1) Cold start, where we train the model from scratch with the default initializer. (2) Warm start, where we initialize the model's parameters to those of the base model before training. (3) Shrink-perturb (Ash & Adams, 2019), a method designed to improve warm-starting by initializing the model's weights to $\alpha \cdot \theta_{\mathrm{base}} + (1 - \alpha) \cdot \theta_{\mathrm{init}}$ before training, where $\theta_{\mathrm{base}}$ are the weights of the base model, $\theta_{\mathrm{init}}$ is a randomly initialized model, and $\alpha$ is a hyperparameter we tune across {0.1, 0.2, ..., 0.9}. (4) Mixup (Zhang et al., 2017) (a baseline suggested for a different notion of churn (Bahri & Jiang, 2021)), which trains on convex combinations of pairs of datapoints. We search over its hyperparameter $\alpha \in$ {0.1, ..., 0.9}, as defined in Zhang et al. (2017). (5) Label smoothing (Szegedy et al., 2016), which was suggested by Bahri & Jiang (2021) for the variance notion of churn, and proceeds by training on a convex combination of the original labels and the base model's soft predictions.
We tune across the convex combination weight $\alpha \in$ {0.1, 0.2, ..., 0.9}. (6) Co-distillation (Anil et al., 2018), which was proposed for the variance notion of churn, where we train two warm-started networks simultaneously on a loss that is a convex combination of the original loss and a loss on the difference between their predictions. We tune across the convex combination weight $\alpha \in$ {0.1, 0.2, ..., 0.9}. (7) Anchor (Fard et al., 2016), which, as noted in Section 4.2, proceeds by optimizing the cross-entropy loss on a modified label: we use the label $\alpha g(x) + (1 - \alpha) e_y$ when the base model $g$ agrees with the true label $y$, and $\eta e_y$ otherwise. We tune across $\alpha \in$ {0.1, 0.2, ..., 0.9} and $\eta \in$ {0.5, 0.7, 1}. For distillation, we tune the trade-off parameter $\lambda$ across {0.1, 0.2, ..., 0.9}. Metric: All of the methods produce a model that we evaluate for both accuracy and churn with respect to the base model on the test set. We consider the hard notion of churn, which measures the average difference in hard predictions w.r.t. the base classifier on a test set. We will see later that there is often a trade-off between accuracy and churn, and in an effort to produce a single metric for quantitative evaluation, we propose the churn at cold accuracy metric, defined as follows. Each baseline produces a set of models (one for each hyperparameter setting). We take the churn and accuracy averaged across the 100 runs and choose the model with the lowest churn that is at least as accurate as the cold-start model (it is possible that no such model exists for a given method). This way, we can identify the method that delivers the lowest churn while still performing at least as well as if we had trained on the updated dataset in a vanilla manner. We believe this metric is practically relevant, as a practitioner is unlikely to accept a reduction in accuracy in order to reduce churn.
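The churn-at-cold-accuracy selection described above can be sketched as follows: among a method's hyperparameter settings (each with run-averaged accuracy and churn), keep only those at least as accurate as cold start, and report the lowest churn among them. This is an illustrative reconstruction, not the authors' code:

```python
def churn_at_cold_accuracy(results, cold_accuracy):
    """results: list of (accuracy, churn) pairs, one per hyperparameter
    setting, each already averaged over runs. Returns the lowest churn
    among settings that match or beat the cold-start accuracy, or None
    if no such setting exists for this method."""
    eligible = [churn for acc, churn in results if acc >= cold_accuracy]
    return min(eligible) if eligible else None

# Example: the 0.04-churn setting is ignored because it loses accuracy.
settings = [(0.91, 0.10), (0.92, 0.07), (0.89, 0.04)]
print(churn_at_cold_accuracy(settings, cold_accuracy=0.90))  # 0.07
```

Returning None when no setting reaches the cold-start accuracy mirrors the caveat in the text that some methods may have no qualifying model at all.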
5.2 RESULTS The detailed results for the following experiments can be found in the Appendix. Given space constraints, we only provide a high-level summary in this section. OpenML datasets with fully-connected networks: In Table 1 we show the results for the OpenML datasets using the fcn-1000 network. We see that distillation performs well across the board, and for the other fully-connected network sizes, distillation is the best in the majority of cases (84% of the time for initial batch size 1000 and 52% of the time for initial batch size 100). MNIST variants, SVHN, and CIFAR10 with convolutional networks: In Table 2, we show the results for 10 MNIST variants, SVHN, and CIFAR10 using convnet-4. We see that distillation performs strongly across the board. We found that distillation performs best in 84% of combinations of dataset and network. When we increase the initial sample size to 10000 and keep the batch size fixed at 1000, we find that label smoothing starts becoming competitive with distillation: distillation is best 64% of the time, and label smoothing wins by a small margin in all other cases. We only saw this phenomenon for a handful of the MNIST variants, which suggests that label smoothing may be especially effective in these situations. When we decreased the initial sample size to 100 and kept the batch size the same, we found that distillation was best 48% of the time, with anchor being the second-best method, winning 24% of the time. For SVHN and CIFAR10, distillation performs the best on all 10 of the 10 combinations. If we increase the initial sample size to 10000 and keep the batch size fixed at 1000, distillation still performs the best in all 10 of the 10 combinations. If we decrease the initial sample size to 100 and keep the same batch size, distillation performs the best on 8 of the 10 combinations.
CelebA with convolutional networks: Across all 200 combinations of task and network, distillation performs the best 79% of the time. Moreover, if we increase the initial sample size to 10000 and keep the batch size fixed at 1000, distillation is even better, performing the best 91.5% of the time. If we decrease the initial sample size to 100, then distillation is best 96% of the time. CIFAR10 and CIFAR100 with ResNet: Due to the computational costs, we only run these experiments for an initial sample size of 1000. In all cases (across ResNet-50, ResNet-101, and ResNet-152), we see that distillation outperforms the other baselines. IMDB with transformer network: We experimented with initial batch sizes of 100, 1000, and 10000. We found that distillation performed the best the majority of the time; the only notable weak performance was in some instances where no baseline was able to reach the accuracy of the cold-start method. In Figure 2 we show the Pareto frontiers of the various baselines, plotting the cost of each method as we vary the trade-off between accuracy and churn. We see that not only does distillation do well on churn, but it performs the best at any trade-off between churn and accuracy for the cases shown. 6 CONCLUSION We have proposed knowledge distillation as a new practical solution to churn reduction, and provided both theoretical and empirical justification for the approach. One direction for future work is investigating the interplay between churn and fairness. One way this connection may arise is that, in trying to lower churn, some slices of the data may be disproportionately affected. Another is if the teacher itself contains harmful biases, in which case it makes sense to control what kind of churn should be minimized. Further work may also include combining distillation with other methods to achieve even stronger results.
Reproducibility Statement: All details of the experimental setup are in the main text, along with descriptions of the baselines and the hyperparameters swept. Code can be found in the Appendix. All proofs are in the Appendix. REFERENCES Shivani Agarwal. Surrogate regret bounds for bipartite ranking via strongly proper losses. The Journal of Machine Learning Research, 15(1):1653–1674, 2014. Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E. Dahl, and Geoffrey E. Hinton. Large scale distributed neural network training through online distillation. arXiv preprint arXiv:1804.03235, 2018. Taichi Asami, Ryo Masumura, Yoshikazu Yamaguchi, Hirokazu Masataki, and Yushi Aono. Domain adaptation of DNN acoustic models using knowledge distillation. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5185–5189. IEEE, 2017. Jordan T. Ash and Ryan P. Adams. On warm-starting neural network training. arXiv preprint arXiv:1910.08475, 2019. Lei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? arXiv preprint arXiv:1312.6184, 2013. Dara Bahri and Heinrich Jiang. Locally adaptive label smoothing for predictive churn. arXiv preprint arXiv:2102.05140, 2021. Joeran Beel, Marcel Genzmehr, Stefan Langer, Andreas Nürnberger, and Bela Gipp. A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation. In Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation, pp. 7–14, 2013. Srinadh Bhojanapalli, Kimberly Wilber, Andreas Veit, Ankit Singh Rawat, Seungyeon Kim, Aditya Menon, and Sanjiv Kumar. On the reproducibility of neural network predictions. arXiv preprint arXiv:2102.03349, 2021. Andrew Cotter, Heinrich Jiang, Maya R. Gupta, Serena Wang, Taman Narayan, Seungil You, and Karthik Sridharan.
Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals. Journal of Machine Learning Research, 20(172):1–59, 2019a. Andrew Cotter, Heinrich Jiang, and Karthik Sridharan. Two-player games for efficient non-convex constrained optimization. In Algorithmic Learning Theory, pp. 300–332. PMLR, 2019b. Alex Deng and Xiaolin Shi. Data-driven metric development for online controlled experiments: Seven lessons learned. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 77–86, 2016. Alex Deng, Ya Xu, Ron Kohavi, and Toby Walker. Improving the sensitivity of online controlled experiments by utilizing pre-experiment data. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, pp. 123–132, 2013. Bin Dong, Jikai Hou, Yiping Lu, and Zhihua Zhang. Distillation ≈ early stopping? Harvesting dark knowledge utilizing anisotropic information retrieval for overparameterized neural network. arXiv preprint arXiv:1910.01255, 2019. Mahdi Milani Fard, Quentin Cormier, Kevin Canini, and Maya Gupta. Launch and iterate: Reducing prediction churn. In Advances in Neural Information Processing Systems, pp. 3179–3187, 2016. Dylan J. Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, and Karthik Sridharan. Hypothesis set stability and generalization. arXiv preprint arXiv:1904.04755, 2019. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010. Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
Gabriel Goh, Andrew Cotter, Maya Gupta, and Michael P Friedlander. Satisfying real-world goals with dataset constraints. In Advances in Neural Information Processing Systems, pp. 2415–2423, 2016.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318, 2016.
Keras. Keras documentation: Text classification with transformer, 2020. URL https://keras.io/examples/nlp/text_classification_with_transformer/.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, and Li-Jia Li. Learning from noisy labels with distillation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1910–1918, 2017.
David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, and Vladimir Vapnik. Unifying distillation and privileged information. arXiv preprint arXiv:1511.03643, 2015.
Ilya Loshchilov and Frank Hutter. Online batch selection for faster training of neural networks. arXiv preprint arXiv:1511.06343, 2015.
Aditya Krishna Menon, Ankit Singh Rawat, Sashank J Reddi, Seungyeon Kim, and Sanjiv Kumar. Why distillation helps: a statistical perspective. arXiv preprint arXiv:2005.10419, 2020.
Hossein Mobahi, Mehrdad Farajtabar, and Peter L Bartlett. Self-distillation amplifies regularization in Hilbert space. arXiv preprint arXiv:2002.05715, 2020.
Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597. IEEE, 2016.
Mary Phuong and Christoph Lampert. Towards understanding knowledge distillation. In International Conference on Machine Learning, pp. 5142–5151. PMLR, 2019.
Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668, 2018.
Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.
Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? arXiv preprint arXiv:1805.11604, 2018.
Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1–48, 2019.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826, 2016.
Georgios Theocharous, Philip S Thomas, and Mohammad Ghavamzadeh. Ad recommendation systems for life-time value optimization. In Proceedings of the 24th International Conference on World Wide Web, pp. 1305–1310, 2015.
James P Turner and Thomas Nowotny. Estimating numerical error in neural network simulations on graphics processing units. BMC Neuroscience, 16(198), 2015.
Vladimir Vapnik and Rauf Izmailov. Learning using privileged information: similarity control and knowledge transfer. J. Mach. Learn. Res., 16(1):2023–2049, 2015.
Robert C Williamson, Elodie Vernet, and Mark D Reid. Composite multiclass losses. Journal of Machine Learning Research, 17:1–52, 2016.
Ruichi Yu, Ang Li, Vlad I Morariu, and Larry S Davis. Visual relationship detection with internal and external linguistic knowledge distillation.
In Proceedings of the IEEE International Conference on Computer Vision, pp. 1974–1982, 2017.
Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.

A PROOFS

A.1 PROOF OF PROPOSITION 1

Proposition (Restated). Let $(\ell, d)$ be defined as in (2) for a strictly proper scoring function $\phi$. Suppose $\phi(u)$ is strictly convex in $u$. Then there exists $\lambda^* \in [0, 1]$ such that the following is an optimal-feasible classifier for (1): $h^*(x) = \lambda^* p(x) + (1 - \lambda^*) g(x)$. Furthermore, if $u \cdot \phi(u)$ is $\alpha$-strongly concave over $u \in \Delta_m$ w.r.t. the $L_q$-norm, then
$$\lambda^* \le \sqrt{\frac{2\epsilon}{\alpha\, \mathbb{E}_x\big[\|p(x) - g(x)\|_q^2\big]}}.$$

Proof. Let $h^*$ denote an optimal feasible solution for (1). We first note that
$$R(h) = \mathbb{E}_{x,y}[\ell(y, h(x))] = \mathbb{E}_x\big[\mathbb{E}_{y|x}[\ell(y, h(x))]\big] = \mathbb{E}_x\Big[\sum_{i \in [m]} p_i(x)\, \phi_i(h(x))\Big]$$
and
$$C(h) = \mathbb{E}_x\Big[\sum_{i \in [m]} g_i(x)\, \big(\phi_i(h(x)) - \phi_i(g(x))\big)\Big].$$
Because $\phi_i$ is strictly convex in its argument, both $R(h)$ and $C(h)$ are strictly convex in $h$. In other words, for any $\alpha \in [0, 1]$ and classifiers $h_1, h_2$, $R(\alpha h_1 + (1 - \alpha) h_2) < \alpha R(h_1) + (1 - \alpha) R(h_2)$, and similarly for $C$. Furthermore, because $C(g) = 0 < \epsilon$, the constraint is strictly feasible, and hence strong duality holds for (1) (as a result of Slater's condition being satisfied). Therefore (1) can be equivalently formulated as a max-min problem:
$$\max_{\mu \in \mathbb{R}_+}\; \min_h\; R(h) + \mu\, C(h),$$
for which there exists a $\mu^* \in \mathbb{R}_+$ such that $(\mu^*, h^*)$ is a saddle point. The strict convexity of $R(h)$ and $C(h)$ gives us that $h^*$ is the unique minimizer of $R(h) + \mu^* C(h)$. Setting $\lambda^* = \frac{1}{1 + \mu^*}$, we equivalently have that $h^*$ is a unique minimizer of the weighted objective $\lambda^* R(h) + (1 - \lambda^*) C(h)$. We next show that the minimizer $h^*$ is of the required form.
Expanding $R$ and $C$, we have:
$$\lambda^* R(h) + (1 - \lambda^*) C(h) = \mathbb{E}_x\Big[\sum_{i \in [m]} \big(\lambda^* p_i(x) + (1 - \lambda^*) g_i(x)\big)\, \phi_i(h(x)) - (1 - \lambda^*) g_i(x)\, \phi_i(g(x))\Big]$$
$$= \mathbb{E}_x\Big[\sum_{i \in [m]} \big(\lambda^* p_i(x) + (1 - \lambda^*) g_i(x)\big)\, \phi_i(h(x))\Big] + \text{a term independent of } h$$
$$= \mathbb{E}_x\Big[\sum_{i \in [m]} \bar{p}_i(x)\, \phi_i(h(x))\Big] + \text{a term independent of } h, \quad (6)$$
where $\bar{p}(x) = \lambda^* p(x) + (1 - \lambda^*) g(x)$. Note that it suffices to minimize (6) point-wise, i.e., to choose $h^*$ so that the term within the expectation, $\sum_{i \in [m]} \bar{p}_i(x)\, \phi_i(h(x))$, is minimized for each $x$. For a fixed $x$, the inner term is minimized when $h^*(x) = \bar{p}(x)$. This is because of our assumption that $\phi$ is a strictly proper scoring function, i.e., for any distribution $u$, the weighted loss $\sum_i u_i \phi_i(v)$ is uniquely minimized by $v = u$. Therefore (6) is minimized by $h^*(x) = \bar{p}(x) = \lambda^* p(x) + (1 - \lambda^*) g(x)$.

To bound $\lambda^*$, we use a result from Williamson et al. (2016); Agarwal (2014) to lower bound $C(h)$ in terms of the norm difference $\|h(x) - g(x)\|_q$. Define $Q(u) = \inf_{v \in \Delta_m} u \cdot \phi(v)$. Because $\phi$ is a proper scoring function, the infimum is attained at $v = u$. Therefore $Q(u) = u \cdot \phi(u)$, which recall is assumed to be strongly concave. Also, note that $Q(u) = \inf_{v \in \Delta_m} u \cdot \phi(v)$ is an infimum of "linear" functions in $u$, and therefore $\nabla Q(u) = \phi(u)$ is a super-differential for $Q$ at $u$. See Proposition 7 in Williamson et al. (2016) for more details. We now re-write $C(h)$ in terms of $Q$ and lower bound it using the strong concavity property:
$$C(h) = \mathbb{E}_x\big[g(x) \cdot \big(\phi(h(x)) - \phi(g(x))\big)\big] = \mathbb{E}_x\big[h(x) \cdot \phi(h(x)) + (g(x) - h(x)) \cdot \phi(h(x)) - g(x) \cdot \phi(g(x))\big]$$
$$= \mathbb{E}_x\big[Q(h(x)) + (g(x) - h(x)) \cdot \nabla Q(h(x)) - Q(g(x))\big] \ge \mathbb{E}_x\Big[\frac{\alpha}{2}\, \|h(x) - g(x)\|_q^2\Big],$$
where the last step uses the fact that $Q$ is $\alpha$-strongly concave over $u \in \Delta_m$ w.r.t. the $L_q$-norm. Since the optimal scorer $h^*$ satisfies the churn constraint $C(h^*) \le \epsilon$, we have from the above bound
$$\mathbb{E}_x\Big[\frac{\alpha}{2}\, \|h^*(x) - g(x)\|_q^2\Big] \le \epsilon.$$
Substituting for $h^*$, we have:
$$\mathbb{E}_x\Big[\frac{(\lambda^*)^2 \alpha}{2}\, \|p(x) - g(x)\|_q^2\Big] \le \epsilon, \quad \text{or} \quad (\lambda^*)^2 \le \frac{2\epsilon}{\alpha\, \mathbb{E}_x\big[\|p(x) - g(x)\|_q^2\big]},$$
which gives us the desired bound on $\lambda^*$.

A.2 PROOF OF THEOREM 2

Theorem (Restated). Let the scoring function $\phi: \Delta_m \to \mathbb{R}_+^m$ be convex, and $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$. Let the set of classifiers $H$ be convex, with the base classifier $g \in H$. Suppose $C$ and $R$ enjoy the following generalization bounds: for any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over draw of $S \sim D^n$, for any $h \in H$,
$$|R(h) - \hat{R}(h)| \le \Delta_R(n, \delta); \quad |C(h) - \hat{C}(h)| \le \Delta_C(n, \delta),$$
for some $\Delta_R(n, \delta)$ and $\Delta_C(n, \delta)$ that are decreasing in $n$ and approach 0 as $n \to \infty$. Let $\tilde{h}$ be an optimal-feasible classifier in $H$, i.e., $C(\tilde{h}) \le \epsilon$ and $R(\tilde{h}) \le R(h)$ for all classifiers $h$ for which $C(h) \le \epsilon$. Let $\hat{h}$ be the classifier returned by Algorithm 1 with $\Lambda = \big\{\max\{\tfrac{\epsilon}{\epsilon + 2B}, u\} \,\big|\, u \in \{\tfrac{1}{L}, \tfrac{2}{L}, \ldots, 1\}\big\}$ for some $L \in \mathbb{N}_+$. For any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over draw of $S \sim D^n$,
Optimality: $R(\hat{h}) \le R(\tilde{h}) + O\big(\big(1 + \tfrac{2B}{\epsilon}\big)\big(\Delta_R(n, \delta) + \Delta_C(n, \delta) + \tfrac{B}{L}\big)\big)$,
Feasibility: $C(\hat{h}) \le \epsilon + \Delta_C(n, \delta)$.

We first note that because $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$, both $\hat{R}(h) < B$ and $\hat{C}(h) < B$. Also, because $\phi_i$ is convex, both $\hat{R}(h)$ and $\hat{C}(h)$ are convex in $h$. In other words, for any $\alpha \in [0, 1]$ and classifiers $h_1, h_2$, $\hat{R}(\alpha h_1 + (1 - \alpha) h_2) \le \alpha \hat{R}(h_1) + (1 - \alpha) \hat{R}(h_2)$, and similarly for $\hat{C}$. Furthermore, the objective in (4) can be decomposed into a convex combination of the empirical risk and churn:
$$\hat{L}_\lambda(h) = \frac{1}{n} \sum_{i=1}^n \big(\lambda e_{y_i} + (1 - \lambda) g(x_i)\big) \cdot \phi(h(x_i)) = \lambda \hat{R}(h) + (1 - \lambda) \hat{C}(h) + \frac{1 - \lambda}{n} \sum_{i=1}^n g(x_i) \cdot \phi(g(x_i)).$$
Therefore minimizing $\hat{L}_\lambda(h)$ is equivalent to minimizing the Lagrangian function
$$\tilde{L}_\lambda(h) = \lambda \hat{R}(h) + (1 - \lambda)\big(\hat{C}(h) - \epsilon\big) \quad (7)$$
over $h$. Moreover, each $h_k$ minimizes $\tilde{L}_{\lambda_k}(h)$. We also note that the churn-constrained optimization problem in (1) can be posed as a Lagrangian game between a player that seeks to minimize the above Lagrangian over $h$ and a player that seeks to maximize the Lagrangian over $\lambda$.
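As a concrete illustration of how Algorithm 1's multiplier grid $\Lambda = \{\max\{\epsilon/(\epsilon + 2B), u\} \mid u \in \{1/L, \ldots, 1\}\}$ can be constructed, here is a minimal sketch. The function name `make_lambda_grid` and the demo values are ours, not from the paper's released code:

```python
def make_lambda_grid(eps: float, B: float, L: int) -> list[float]:
    """Discretization of [eps / (eps + 2B), 1]: each u = k/L is clipped
    from below at eps / (eps + 2B), keeping every candidate lambda in
    the range where a bounded equilibrium multiplier is guaranteed."""
    lo = eps / (eps + 2.0 * B)
    # Deduplicate clipped values, then return them in ascending order.
    return sorted({max(lo, k / L) for k in range(1, L + 1)})

# With eps = 2 and B = 1, the lower clip is 2 / 4 = 0.5, so the first
# half of the grid collapses onto 0.5.
grid = make_lambda_grid(eps=2.0, B=1.0, L=10)
```

Each $\lambda_k$ in the returned grid would then index one minimizer $h_k$ of the Lagrangian $\tilde{L}_{\lambda_k}$.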
The next two lemmas show that Algorithm 1 can be seen as finding an approximate equilibrium of this two-player game.

Lemma 5. Let the assumptions on $\phi$ and $H$ in Theorem 2 hold. Let $\hat{h}$ be the classifier returned by Algorithm 1 when $\Lambda$ is set to the discretization $\Lambda = \big\{\max\{\tfrac{\epsilon}{\epsilon + 2B}, u\} \,\big|\, u \in \{\tfrac{1}{L}, \ldots, 1\}\big\}$ of the range $[\tfrac{\epsilon}{\epsilon + 2B}, 1]$ for some $L \in \mathbb{N}_+$. Then there exists a bounded Lagrange multiplier $\bar{\lambda} \in [\tfrac{\epsilon}{\epsilon + 2B}, 1]$ such that $(\hat{h}, \bar{\lambda})$ forms an equilibrium of the Lagrangian min-max game:
$$\bar{\lambda} \hat{R}(\hat{h}) + (1 - \bar{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big) = \min_{h \in \mathrm{co}(h_1, \ldots, h_L)} \bar{\lambda} \hat{R}(h) + (1 - \bar{\lambda})\big(\hat{C}(h) - \epsilon\big)$$
$$\max_{\lambda \in [0, 1]} (1 - \lambda)\big(\hat{C}(\hat{h}) - \epsilon\big) = (1 - \bar{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big).$$

Proof. The classifier $\hat{h}$ returned by Algorithm 1 is a solution to the following constrained optimization problem over the convex hull of the classifiers $h_1, \ldots, h_L$:
$$\min_{h \in \mathrm{co}(h_1, \ldots, h_L)} \hat{R}(h) \quad \text{s.t.} \quad \hat{C}(h) \le \epsilon.$$
Consequently, there exists a $\bar{\lambda} \in [0, 1]$ such that:
$$\bar{\lambda} \hat{R}(\hat{h}) + (1 - \bar{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big) = \min_{h \in \mathrm{co}(h_1, \ldots, h_L)} \bar{\lambda} \hat{R}(h) + (1 - \bar{\lambda})\big(\hat{C}(h) - \epsilon\big) \quad (8)$$
$$\max_{\lambda \in [0, 1]} (1 - \lambda)\big(\hat{C}(\hat{h}) - \epsilon\big) = (1 - \bar{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big). \quad (9)$$
To see this, note that the KKT conditions (along with the convexity of $\hat{R}$ and $\hat{C}$) give us that there exists a Lagrange multiplier $\bar{\mu} \ge 0$ such that
$$\hat{h} \in \operatorname*{argmin}_{h \in \mathrm{co}(h_1, \ldots, h_L)} \hat{R}(h) + \bar{\mu}\big(\hat{C}(h) - \epsilon\big) \quad \text{(stationarity)}$$
$$\bar{\mu}\big(\hat{C}(\hat{h}) - \epsilon\big) = 0 \quad \text{(complementary slackness)}.$$
When $\hat{C}(\hat{h}) < \epsilon$, $\bar{\mu} = 0$, and so (8) and (9) are satisfied for $\bar{\lambda} = 1$. When $\hat{C}(\hat{h}) = \epsilon$, then (8) and (9) are satisfied for $\bar{\lambda} = \frac{1}{1 + \bar{\mu}}$.

It remains to show that $\bar{\lambda} \in [\tfrac{\epsilon}{\epsilon + 2B}, 1]$. For this, we first show that there exists an $h' \in \mathrm{co}(h_1, \ldots, h_L)$ such that $\hat{C}(h') \le \epsilon/2$. To see why, pick $h'$ to be the minimizer of the Lagrangian $\tilde{L}_\lambda(h)$ over all $h \in H$ for $\lambda = \tfrac{\epsilon}{\epsilon + 2B}$. Because $\tilde{L}_\lambda(h') \le \tilde{L}_\lambda(g) \le \lambda B - (1 - \lambda)\epsilon$, where $g$ is the base classifier that we have assumed is in $H$, it follows that $\hat{C}(h') \le \frac{\lambda}{1 - \lambda} B \le \epsilon/2$. Next, by combining (8) and (9), we have
$$\bar{\lambda} \hat{R}(\hat{h}) + \max_{\lambda \in [0, 1]} (1 - \lambda)\big(\hat{C}(\hat{h}) - \epsilon\big) = \min_{h \in \mathrm{co}(h_1, \ldots, h_L)} \bar{\lambda} \hat{R}(h) + (1 - \bar{\lambda})\big(\hat{C}(h) - \epsilon\big).$$
Lower bounding the LHS by setting $\lambda = 1$ and upper bounding the RHS by setting $h = h'$, we get:
$$\bar{\lambda} \hat{R}(\hat{h}) \le \bar{\lambda} \hat{R}(h') - (1 - \bar{\lambda})\frac{\epsilon}{2},$$
which gives us:
$$\frac{\epsilon}{2} \le \bar{\lambda}\Big(\frac{\epsilon}{2} + \hat{R}(h') - \hat{R}(\hat{h})\Big) \le \bar{\lambda}\Big(\frac{\epsilon}{2} + B\Big).$$
Hence $\bar{\lambda} \ge \frac{\epsilon}{\epsilon + 2B}$, which completes the proof.

Lemma 6. Let $\hat{h}$ be the classifier returned by Algorithm 1 when $\Lambda$ is set to the discretization $\Lambda = \big\{\max\{\tfrac{\epsilon}{\epsilon + 2B}, u\} \,\big|\, u \in \{\tfrac{1}{L}, \ldots, 1\}\big\}$ of the range $[\tfrac{\epsilon}{\epsilon + 2B}, 1]$ for some $L \in \mathbb{N}_+$. Fix $\delta \in (0, 1)$. Suppose $R$ and $C$ satisfy the generalization bounds in Theorem 2 with error bounds $\Delta_R(n, \delta)$ and $\Delta_C(n, \delta)$ respectively. Then there exists a bounded Lagrange multiplier $\hat{\lambda} \in [\tfrac{\epsilon}{\epsilon + 2B}, 1]$ such that $(\hat{h}, \hat{\lambda})$ forms an approximate equilibrium for the Lagrangian min-max game, i.e., w.p. $\ge 1 - \delta$ over draw of sample $S \sim D^n$,
$$\hat{\lambda} R(\hat{h}) + (1 - \hat{\lambda})\big(C(\hat{h}) - \epsilon\big) \le \min_{h \in H} \hat{\lambda} R(h) + (1 - \hat{\lambda})\big(C(h) - \epsilon\big) + O\big(\Delta_R(n, \delta) + \Delta_C(n, \delta) + B/L\big) \quad (10)$$
and
$$\max_{\lambda \in [0, 1]} (1 - \lambda)\big(C(\hat{h}) - \epsilon\big) \le (1 - \hat{\lambda})\big(C(\hat{h}) - \epsilon\big) + O\big(\Delta_C(n, \delta) + B/L\big). \quad (11)$$

Proof. We have from Lemma 5 that there exists $\bar{\lambda} \in [\tfrac{\epsilon}{\epsilon + 2B}, 1]$ such that
$$\bar{\lambda} \hat{R}(\hat{h}) + (1 - \bar{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big) = \min_{h \in \mathrm{co}(h_1, \ldots, h_L)} \bar{\lambda} \hat{R}(h) + (1 - \bar{\lambda})\big(\hat{C}(h) - \epsilon\big) \quad (12)$$
$$\max_{\lambda \in [0, 1]} (1 - \lambda)\big(\hat{C}(\hat{h}) - \epsilon\big) = (1 - \bar{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big). \quad (13)$$
Algorithm 1 works with a discretization $\Lambda = \big\{\max\{\tfrac{\epsilon}{\epsilon + 2B}, u\} \,\big|\, u \in \{\tfrac{1}{L}, \ldots, 1\}\big\}$ of the range $[\tfrac{\epsilon}{\epsilon + 2B}, 1]$. Allowing $\hat{\lambda}$ to denote the closest value to $\bar{\lambda}$ in this set, we have from (12):
$$\hat{\lambda} \hat{R}(\hat{h}) + (1 - \hat{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big) \le \min_{h \in \mathrm{co}(h_1, \ldots, h_L)} \hat{\lambda} \hat{R}(h) + (1 - \hat{\lambda})\big(\hat{C}(h) - \epsilon\big) + \frac{4B}{L} = \min_{h \in H} \hat{\lambda} \hat{R}(h) + (1 - \hat{\lambda})\big(\hat{C}(h) - \epsilon\big) + \frac{4B}{L}, \quad (14)$$
where the last step follows from the fact that $\mathrm{co}(h_1, \ldots, h_L) \subseteq H$ and each $h_k$ was chosen to minimize $\lambda_k \hat{R}(h) + (1 - \lambda_k)\big(\hat{C}(h) - \epsilon\big)$ for $\lambda_k \in \Lambda$. Similarly, we have from (13):
$$\max_{\lambda \in [0, 1]} (1 - \lambda)\big(\hat{C}(\hat{h}) - \epsilon\big) \le (1 - \hat{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big) + \frac{B}{L}. \quad (15)$$
What remains is to apply the generalization bounds for $R$ and $C$ to (14) and (15). We first bound the LHS of (14).
We have with probability at least $1 - \delta$ over draw of $S \sim D^n$:
$$\hat{\lambda} \hat{R}(\hat{h}) + (1 - \hat{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big) \ge \hat{\lambda} R(\hat{h}) + (1 - \hat{\lambda})\big(C(\hat{h}) - \epsilon\big) - \hat{\lambda}\, \Delta_R(n, \delta) - (1 - \hat{\lambda})\, \Delta_C(n, \delta)$$
$$\ge \hat{\lambda} R(\hat{h}) + (1 - \hat{\lambda})\big(C(\hat{h}) - \epsilon\big) - \Delta_R(n, \delta) - \Delta_C(n, \delta), \quad (16)$$
where the last step uses the fact that $0 \le \hat{\lambda} \le 1$. For the RHS, we have with the same probability:
$$\min_{h \in H} \big\{\hat{\lambda} \hat{R}(h) + (1 - \hat{\lambda})\big(\hat{C}(h) - \epsilon\big)\big\} + 4B/L \le \min_{h \in H} \big\{\hat{\lambda} R(h) + (1 - \hat{\lambda})\big(C(h) - \epsilon\big) + 4B/L + \hat{\lambda}\, \Delta_R(n, \delta) + (1 - \hat{\lambda})\, \Delta_C(n, \delta)\big\}$$
$$\le \min_{h \in H} \big\{\hat{\lambda} R(h) + (1 - \hat{\lambda})\big(C(h) - \epsilon\big)\big\} + 4B/L + \Delta_R(n, \delta) + \Delta_C(n, \delta), \quad (17)$$
where we again use $0 \le \hat{\lambda} \le 1$. Combining (14) with (16) and (17) completes the proof for the first part of the lemma. Applying the generalization bounds to (15), we have with the same probability:
$$B/L \ge \max_{\lambda \in [0, 1]} (1 - \lambda)\big(\hat{C}(\hat{h}) - \epsilon\big) - (1 - \hat{\lambda})\big(\hat{C}(\hat{h}) - \epsilon\big)$$
$$\ge \max_{\lambda \in [0, 1]} \big\{(1 - \lambda)\big(C(\hat{h}) - \epsilon\big) - (1 - \lambda)\, \Delta_C(n, \delta)\big\} - (1 - \hat{\lambda})\big(C(\hat{h}) - \epsilon\big) - (1 - \hat{\lambda})\, \Delta_C(n, \delta)$$
$$\ge \max_{\lambda \in [0, 1]} (1 - \lambda)\big(C(\hat{h}) - \epsilon\big) - (1 - \hat{\lambda})\big(C(\hat{h}) - \epsilon\big) - 2\Delta_C(n, \delta),$$
which completes the proof for the second part of the lemma.

We are now ready to prove Theorem 2.

Proof of Theorem 2. To show optimality, we combine (10) and (11) and get:
$$\hat{\lambda} R(\hat{h}) + \max_{\lambda \in [0, 1]} (1 - \lambda)\big(C(\hat{h}) - \epsilon\big) \le \min_{h \in H} \hat{\lambda} R(h) + (1 - \hat{\lambda})\big(C(h) - \epsilon\big) + O\big(\Delta_R(n, \delta) + \Delta_C(n, \delta) + B/L\big).$$
We then lower bound the LHS by setting $\lambda = 1$ and upper bound the RHS by setting $h$ to the optimal feasible solution $\tilde{h}$, giving us:
$$\hat{\lambda} R(\hat{h}) \le \hat{\lambda} R(\tilde{h}) + (1 - \hat{\lambda})(0) + O\big(\Delta_R(n, \delta) + \Delta_C(n, \delta) + \tfrac{B}{L}\big).$$
Dividing both sides by $\hat{\lambda}$,
$$R(\hat{h}) \le R(\tilde{h}) + \frac{1}{\hat{\lambda}}\, O\big(\Delta_R(n, \delta) + \Delta_C(n, \delta) + \tfrac{B}{L}\big).$$
Lower bounding $\hat{\lambda}$ by $\tfrac{\epsilon}{\epsilon + 2B}$ gives us the desired optimality result. The feasibility result directly follows from the fact that Algorithm 1 chooses an $\hat{h}$ that satisfies the empirical churn constraint $\hat{C}(\hat{h}) \le \epsilon$, and from the generalization bound for $C$.

A.3 PROOF OF PROPOSITION 3

Proposition (Restated). Let $(\ell, d)$ be defined as in (2) for a strictly proper scoring function $\phi$.
Suppose $\phi(u)$ is strictly convex in $u$, $\Phi$-Lipschitz w.r.t. the $L_1$-norm for each $y \in [m]$, and $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$. Let $\lambda^*$ be the optimal mixing coefficient defined in Proposition 1. Let $\Delta_C(n, \delta)$ be the churn generalization bound defined in Theorem 2. Let $\tilde{h}$ be an optimal feasible classifier in $H$ and $\hat{h}$ be the classifier returned by Algorithm 1. Then for any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over draw of $S \sim D^n$:
$$R(\hat{h}) - R(\tilde{h}) \le \epsilon + \Delta_C(n, \delta) + (B + \Phi \lambda^*)\, \mathbb{E}_{x \sim D_X}\big[\|p(x) - g(x)\|_1\big].$$

Proof. Let $h^*$ be the Bayes optimal classifier, i.e., the optimal-feasible classifier over all classifiers (not just those in $H$). We have:
$$R(\hat{h}) - R(\tilde{h}) \le R(\hat{h}) - R(h^*)$$
$$= \mathbb{E}_{x \sim D_X}\big[\mathbb{E}_{y|x}\big[e_y \cdot \phi(\hat{h}(x))\big]\big] - \mathbb{E}_{x \sim D_X}\big[\mathbb{E}_{y|x}\big[e_y \cdot \phi(h^*(x))\big]\big]$$
$$= \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \phi(\hat{h}(x))\big] - \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \phi(h^*(x))\big]$$
$$= \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] - \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \big(\phi(h^*(x)) - \phi(g(x))\big)\big]$$
$$\le \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + \Big|\mathbb{E}_{x \sim D_X}\Big[\sum_{y \in [m]} p_y(x)\big(\phi_y(h^*(x)) - \phi_y(g(x))\big)\Big]\Big|$$
$$\le \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + \mathbb{E}_{x \sim D_X}\Big[\sum_{y \in [m]} p_y(x)\, \big|\phi_y(h^*(x)) - \phi_y(g(x))\big|\Big]$$
$$\le \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + \Phi\, \mathbb{E}_{x \sim D_X}\Big[\sum_{y \in [m]} p_y(x)\, \|h^*(x) - g(x)\|_1\Big]$$
$$\le \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + \Phi\, \mathbb{E}_{x \sim D_X}\big[\|h^*(x) - g(x)\|_1\big],$$
where the second-last step follows from Jensen's inequality, and the last step uses the Lipschitz assumption on $\phi_y$.
We further have:
$$R(\hat{h}) - R(\tilde{h}) \le \mathbb{E}_{x \sim D_X}\big[g(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + \mathbb{E}_{x \sim D_X}\big[(p(x) - g(x)) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + \Phi\, \mathbb{E}_{x \sim D_X}\big[\|h^*(x) - g(x)\|_1\big]$$
$$\le \mathbb{E}_{x \sim D_X}\big[g(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + \mathbb{E}_{x \sim D_X}\big[\|p(x) - g(x)\|_1\, \|\phi(\hat{h}(x)) - \phi(g(x))\|_\infty\big] + \Phi\, \mathbb{E}_{x \sim D_X}\big[\|h^*(x) - g(x)\|_1\big]$$
$$\le \mathbb{E}_{x \sim D_X}\big[g(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + B\, \mathbb{E}_{x \sim D_X}\big[\|p(x) - g(x)\|_1\big] + \Phi\, \mathbb{E}_{x \sim D_X}\big[\|h^*(x) - g(x)\|_1\big]$$
$$\le \mathbb{E}_{x \sim D_X}\big[g(x) \cdot \big(\phi(\hat{h}(x)) - \phi(g(x))\big)\big] + B\, \mathbb{E}_{x \sim D_X}\big[\|p(x) - g(x)\|_1\big] + \lambda^* \Phi\, \mathbb{E}_{x \sim D_X}\big[\|p(x) - g(x)\|_1\big]$$
$$= C(\hat{h}) + (B + \lambda^* \Phi)\, \mathbb{E}_{x \sim D_X}\big[\|p(x) - g(x)\|_1\big],$$
where the second step applies Hölder's inequality to each $x$, the third step follows from the boundedness assumption on $\phi$, and the fourth step uses the characterization $h^*(x) = \lambda^* p(x) + (1 - \lambda^*) g(x)$ for $\lambda^* \in [0, 1]$ from Proposition 1 (so that $h^*(x) - g(x) = \lambda^*(p(x) - g(x))$). Applying Theorem 2 to the churn $C(\hat{h})$ completes the proof.

A.4 PROOF OF PROPOSITION 4

Proposition (Restated). When $g(x) = p(x)$, $\forall x$, for any given $\lambda \in [0, 1]$, the minimizer of the distillation loss in (3) over all classifiers $h$ is given by $h^*(x) = p(x)$, whereas the minimizer of the anchor loss in (5) is given by $h^*_j(x) = \frac{z_j}{\sum_j z_j}$, where
$$z_j = \begin{cases} \alpha\, p_j^2(x) + (1 - \alpha)\, p_j(x) & \text{if } j = \operatorname{argmax}_k p_k(x) \\ \big(\epsilon + \alpha \max_k p_k(x)\big)\, p_j(x) & \text{otherwise.} \end{cases}$$

Proof. For the first part, we expand (3) with $g(x) = p(x)$, and have for any $\lambda \in [0, 1]$:
$$L_\lambda(h) = \mathbb{E}_{(x, y) \sim D}\big[\big(\lambda e_y + (1 - \lambda) p(x)\big) \cdot \phi(h(x))\big] \quad (18)$$
$$= \lambda\, \mathbb{E}_{(x, y) \sim D}\big[e_y \cdot \phi(h(x))\big] + (1 - \lambda)\, \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \phi(h(x))\big]$$
$$= \lambda\, \mathbb{E}_{x \sim D_X}\big[\mathbb{E}_{y|x}[e_y] \cdot \phi(h(x))\big] + (1 - \lambda)\, \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \phi(h(x))\big]$$
$$= \lambda\, \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \phi(h(x))\big] + (1 - \lambda)\, \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \phi(h(x))\big] = \mathbb{E}_{x \sim D_X}\big[p(x) \cdot \phi(h(x))\big]. \quad (19)$$
For a fixed $x$, the inner term in (19) is minimized when $h^*(x) = p(x)$. This is because of our assumption that $\phi$ is a strictly proper scoring function, i.e., for any distribution $u$, the weighted loss $\sum_i u_i \phi_i(v)$ is uniquely minimized by $v = u$. Therefore (19) is minimized by $h^*(x) = p(x)$, $\forall x$.
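The mixed target in (18), $\lambda e_y + (1 - \lambda) p(x)$, is exactly the quantity the training code in Appendix D constructs (there the teacher's coefficient is called `distill_alpha`, which plays the role of $1 - \lambda$). A minimal numpy sketch of this target construction; the function name `distillation_targets` is ours, not from the paper's code:

```python
import numpy as np

def distillation_targets(y_onehot: np.ndarray, teacher_probs: np.ndarray,
                         lam: float) -> np.ndarray:
    """Mixed targets lambda * e_y + (1 - lambda) * p(x).
    Each row remains a valid probability distribution because it is a
    convex combination of two distributions."""
    return lam * y_onehot + (1.0 - lam) * teacher_probs

# Two examples, two classes: one-hot labels mixed with teacher scores.
y = np.array([[1.0, 0.0], [0.0, 1.0]])
p = np.array([[0.8, 0.2], [0.3, 0.7]])
targets = distillation_targets(y, p, lam=0.5)
```

Training against `targets` with a strictly proper loss such as cross-entropy is what makes the population minimizer the mixture $\bar{p}(x)$ of Proposition 1.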
For the second part, we expand (5) with $g(x) = p(x)$, and have $L_{\mathrm{anc}}(h) = \mathbb{E}_{(x, y) \sim D}[a \cdot \phi(h(x))]$, where
$$a = \begin{cases} \alpha\, p(x) + (1 - \alpha)\, e_y & \text{if } y = \operatorname{argmax}_k p_k(x) \\ \epsilon\, e_y & \text{otherwise.} \end{cases}$$
For a given $x$, let us denote $j_x = \operatorname{argmax}_k p_k(x)$. We then have:
$$L_{\mathrm{anc}}(h) = \mathbb{E}_{(x, y) \sim D}\big[\big(\mathbb{1}(y = j_x)\big(\alpha p(x) + (1 - \alpha) e_y\big) + \mathbb{1}(y \ne j_x)\, \epsilon\, e_y\big) \cdot \phi(h(x))\big]$$
$$= \mathbb{E}_{x \sim D_X}\Big[\mathbb{E}_{y|x}\big[\big(\mathbb{1}(y = j_x)\big(\alpha p(x) + (1 - \alpha) e_y\big) + \mathbb{1}(y \ne j_x)\, \epsilon\, e_y\big) \cdot \phi(h(x))\big]\Big]$$
$$= \mathbb{E}_{x \sim D_X}\Big[\sum_k p_k(x)\big(\mathbb{1}(k = j_x)\big(\alpha p(x) + (1 - \alpha) e_k\big) + \mathbb{1}(k \ne j_x)\, \epsilon\, e_k\big) \cdot \phi(h(x))\Big]$$
$$= \mathbb{E}_{x \sim D_X}\Big[p_{j_x}(x)\, \big(\alpha p(x) + (1 - \alpha) e_{j_x}\big) \cdot \phi(h(x)) + \epsilon \sum_{k \ne j_x} p_k(x)\, \phi_k(h(x))\Big]$$
$$= \mathbb{E}_{x \sim D_X}\Big[p_{j_x}(x)\, \big(\alpha p_{j_x}(x) + (1 - \alpha)\big)\, \phi_{j_x}(h(x)) + p_{j_x}(x) \sum_{k \ne j_x} \alpha\, p_k(x)\, \phi_k(h(x)) + \epsilon \sum_{k \ne j_x} p_k(x)\, \phi_k(h(x))\Big]$$
$$= \mathbb{E}_{x \sim D_X}\Big[p_{j_x}(x)\, \big(\alpha p_{j_x}(x) + (1 - \alpha)\big)\, \phi_{j_x}(h(x)) + \big(\alpha p_{j_x}(x) + \epsilon\big) \sum_{k \ne j_x} p_k(x)\, \phi_k(h(x))\Big]$$
$$= \mathbb{E}_{x \sim D_X}\big[\tilde{p}(x) \cdot \phi(h(x))\big], \quad (20)$$
where
$$\tilde{p}_s(x) = \begin{cases} \alpha\, p_s^2(x) + (1 - \alpha)\, p_s(x) & \text{if } s = j_x \\ \big(\alpha\, p_{j_x}(x) + \epsilon\big)\, p_s(x) & \text{otherwise} \end{cases} = \begin{cases} \alpha\, p_s^2(x) + (1 - \alpha)\, p_s(x) & \text{if } s = \operatorname{argmax}_k p_k(x) \\ \big(\alpha \max_k p_k(x) + \epsilon\big)\, p_s(x) & \text{otherwise.} \end{cases}$$
For a fixed $x$, the inner term in (20) is minimized when $h^*(x) = \frac{1}{Z(x)} \tilde{p}(x)$, where $Z(x) = \sum_k \tilde{p}_k(x)$. This follows from the fact that for a fixed $x$, the minimizer of the inner term $\tilde{p}(x) \cdot \phi(h(x))$ is the same as the minimizer of the scaled term $\frac{1}{Z(x)} \tilde{p}(x) \cdot \phi(h(x))$, and from $\phi$ being a strictly proper scoring function. This completes the proof.

B GENERALIZATION BOUNDS

We adapt a result from Menon et al. (2020) to provide generalization bounds for the classification risk and churn in terms of the empirical variance of the loss and churn values.

Proposition 7 (Generalization bound for classification risk). Let the scoring function $\phi: \Delta_m \to \mathbb{R}_+^m$ be bounded. Let $V_\phi \subseteq \mathbb{R}^X$ denote the class of loss functions $v(x, y) = \ell_\phi(y, h(x)) = \phi_y(h(x))$ induced by classifiers $h \in H$. Let $M^R_n = N_\infty(\tfrac{1}{n}, V_\phi, 2n)$ denote the uniform $L_\infty$ covering number for $V_\phi$. Fix $\delta \in (0, 1)$. Then with probability $\ge 1 - \delta$ over draw of $S \sim D^n$, for any $h \in H$:
$$R(h) \le \hat{R}(h) + O\Bigg(\sqrt{\frac{V^R_n(h)\, \log(M^R_n / \delta)}{n}} + \frac{\log(M^R_n / \delta)}{n}\Bigg),$$
where $V^R_n(h)$ denotes the empirical variance of the loss computed on the $n$ examples $\{e_{y_i}^\top \phi(h(x_i))\}_{i=1}^n$.

Proposition 8 (Generalization bound for churn). Let the scoring function $\phi: \Delta_m \to \mathbb{R}_+^m$ be bounded. For base classifier $g$, let $U_\phi \subseteq \mathbb{R}^X$ denote the corresponding class of divergence functions $u(x) = d_\phi(h(x), g(x)) = g(x)^\top\big(\phi(h(x)) - \phi(g(x))\big)$ induced by classifiers $h \in H$. Let $M^C_n = N_\infty(\tfrac{1}{n}, U_\phi, 2n)$ denote the uniform $L_\infty$ covering number for $U_\phi$. Fix $\delta \in (0, 1)$. Then with probability $\ge 1 - \delta$ over draw of $S \sim D^n$, for any $h \in H$:
$$C(h) \le \hat{C}(h) + O\Bigg(\sqrt{\frac{V^C_n(h)\, \log(M^C_n / \delta)}{n}} + \frac{\log(M^C_n / \delta)}{n}\Bigg),$$
where $V^C_n(h)$ denotes the empirical variance of the churn values computed on the $n$ examples $\{g(x_i)^\top(\phi(h(x_i)) - \phi(g(x_i)))\}_{i=1}^n$.

The term $V^C_n$ captures the variance in the labels provided by the base model $g$. When this term is low (which is what we would expect in practice from a base model that makes soft predictions), the churn metric enjoys a tight generalization bound. In contrast, the classification risk is evaluated on one-hot training labels, and the variance term $V^R_n$ there is not impacted by the base model's scores.

C DEFINITIONS OF NETWORK ARCHITECTURES USED

C.1 FULLY CONNECTED NETWORK

FCN-x refers to the following model with size set to "x". In other words, it is a simple fully connected network with one hidden layer of x units.

def get_fcn(n_columns, num_classes=10, size=100, weight_init=None):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_columns,)),
        tf.keras.layers.Dense(size, activation=tf.nn.relu),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.categorical_accuracy])
    return model

C.2 CONVOLUTIONAL NETWORK

Convnet-x refers to the following model with size set to "x". Convnet-1 is based on the LeNet-5 architecture of LeCun et al.
(1998).

def get_convnet(input_shape=(28, 28, 3), size=1, num_classes=2, weight_init=None):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Conv2D(
        filters=16 * size, kernel_size=(5, 5), padding="same",
        activation="relu", input_shape=input_shape))
    model.add(tf.keras.layers.MaxPool2D(strides=2))
    model.add(tf.keras.layers.Conv2D(
        filters=24 * size, kernel_size=(5, 5), padding="valid",
        activation="relu"))
    model.add(tf.keras.layers.MaxPool2D(strides=2))
    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(128 * size, activation="relu"))
    model.add(tf.keras.layers.Dense(84, activation="relu"))
    model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.categorical_accuracy])
    return model

C.3 TRANSFORMER

Transformer-x refers to the following model with size set to "x". It is based on the Keras tutorial on text classification (https://keras.io/examples/nlp/text_classification_with_transformer/, licensed under the Apache License, Version 2.0).
def get_transformer(maxlen, size=1, num_classes=2, weight_init=None):

    class TransformerBlock(tf.keras.layers.Layer):
        def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1, weight_init=None):
            super(TransformerBlock, self).__init__()
            self.att = tf.keras.layers.MultiHeadAttention(
                num_heads=num_heads, key_dim=embed_dim)
            self.ffn = tf.keras.Sequential([
                tf.keras.layers.Dense(ff_dim, activation="relu"),
                tf.keras.layers.Dense(embed_dim),
            ])
            self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
            self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

        def call(self, inputs, training):
            attn_output = self.att(inputs, inputs)
            # attn_output = self.dropout1(attn_output, training=training)
            out1 = self.layernorm1(inputs + attn_output)
            ffn_output = self.ffn(out1)
            return self.layernorm2(out1 + ffn_output)

    class TokenAndPositionEmbedding(tf.keras.layers.Layer):
        def __init__(self, maxlen, vocab_size, embed_dim):
            super(TokenAndPositionEmbedding, self).__init__()
            self.token_emb = tf.keras.layers.Embedding(
                input_dim=vocab_size, output_dim=embed_dim)
            self.pos_emb = tf.keras.layers.Embedding(
                input_dim=maxlen, output_dim=embed_dim)

        def call(self, x):
            maxlen = tf.shape(x)[-1]
            positions = tf.range(start=0, limit=maxlen, delta=1)
            positions = self.pos_emb(positions)
            x = self.token_emb(x)
            return x + positions

    embed_dim = 32 * size  # Embedding size for each token
    num_heads = 2 * size  # Number of attention heads
    ff_dim = 32 * size  # Hidden layer size in the feed-forward network inside the transformer

    inputs = tf.keras.layers.Input(shape=(maxlen,))
    embedding_layer = TokenAndPositionEmbedding(maxlen, 20000, embed_dim)
    x = embedding_layer(inputs)
    transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim, weight_init)
    x = transformer_block(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.categorical_accuracy])
    return model

D MODEL TRAINING CODE

def model_trainer(get_model, X_train, y_train, X_test, y_test,
                  weight_init=None, validation_data=None, warm=True,
                  mixup_alpha=-1, codistill_alpha=-1, distill_alpha=-1,
                  anchor_alpha=-1, anchor_eps=-1):
    model = get_model()
    if weight_init is not None and warm:
        model.set_weights(weight_init)
    if FLAGS.loss == "squared":
        model.compile(
            optimizer=tf.keras.optimizers.Adam(),
            loss=tf.keras.losses.MeanSquaredError(),
            metrics=[tf.keras.metrics.categorical_accuracy])
    callback = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)
    history = None
    if distill_alpha >= 0:
        # Distillation: mix the base model's soft predictions with the labels.
        original_model = get_model()
        original_model.set_weights(weight_init)
        y_pred = original_model.predict(X_train)
        y_use = distill_alpha * y_pred + (1 - distill_alpha) * y_train
        history = model.fit(x=X_train, y=y_use, epochs=FLAGS.n_epochs,
                            callbacks=[callback], validation_data=validation_data)
    elif anchor_alpha >= 0 and anchor_eps >= 0:
        # Anchor loss: mix targets only where the base model is correct;
        # down-weight the label by anchor_eps elsewhere.
        original_model = get_model()
        original_model.set_weights(weight_init)
        y_pred = original_model.predict(X_train)
        y_pred_hard = np.argmax(y_pred, axis=1)
        y_hard = np.argmax(y_train, axis=1)
        correct = (y_pred_hard == y_hard)
        correct = np.tile(correct, (y_train.shape[1], 1))
        correct = np.transpose(correct)
        correct = correct.reshape(y_train.shape)
        y_use = np.where(correct,
                         anchor_alpha * y_pred + (1 - anchor_alpha) * y_train,
                         y_train * anchor_eps)
        history = model.fit(x=X_train, y=y_use, epochs=FLAGS.n_epochs,
                            callbacks=[callback], validation_data=validation_data)
    elif mixup_alpha >= 0:
        training_generator = deep_utils.MixupGenerator(
            X_train, y_train, alpha=mixup_alpha)()
        history = model.fit(x=training_generator,
                            validation_data=validation_data,
                            steps_per_epoch=int(X_train.shape[0] / 32),
                            epochs=FLAGS.n_epochs, callbacks=[callback])
    elif codistill_alpha >= 0:
        # Co-distillation: train the student and teacher jointly.
        teacher_model = get_model()
        if weight_init is not None and warm:
            teacher_model.set_weights(weight_init)
        val_losses = []
        optimizer = tf.keras.optimizers.Adam()
        global_step = 0
        alpha = 0
        codistillation_warmup_steps = 0
        for epoch in range(FLAGS.n_epochs):
            X_train_, y_train_ = sklearn.utils.shuffle(X_train, y_train)
            batch_size = 32
            for i in range(int(X_train_.shape[0] / batch_size)):
                if global_step >= codistillation_warmup_steps:
                    alpha = codistill_alpha
                else:
                    alpha = 0.
                with tf.GradientTape() as tape:
                    X_batch = X_train_[i * 32:(i + 1) * 32, :]
                    y_batch = y_train_[i * 32:(i + 1) * 32, :]
                    prob_student = model(X_batch, training=True)
                    prob_teacher = teacher_model(X_batch, training=True)
                    loss = deep_utils.compute_loss(prob_student, prob_teacher,
                                                   y_batch, alpha)
                trainable_weights = (model.trainable_weights +
                                     teacher_model.trainable_weights)
                grads = tape.gradient(loss, trainable_weights)
                optimizer.apply_gradients(zip(grads, trainable_weights))
                global_step += 1
            # Manual early stopping on the validation cross-entropy.
            val_preds = model.predict(validation_data[0])
            val_loss = np.sum(deep_utils.cross_entropy(
                validation_data[1].astype("float32"), val_preds))
            val_losses.append(val_loss)
            if len(val_losses) > 3 and min(val_losses[-3:]) > val_losses[-4]:
                break
    else:
        history = model.fit(X_train, y_train, epochs=FLAGS.n_epochs,
                            callbacks=[callback], validation_data=validation_data)
    y_pred_train = model.predict(X_train)
    y_pred_test = model.predict(X_test)
    return y_pred_train, y_pred_test, model.get_weights()

E ADDITIONAL EXPERIMENTAL RESULTS

E.1 ADDITIONAL OPENML RESULTS

E.1.1 INITIAL SAMPLE 100, BATCH SIZE 1000, VALIDATION SIZE 100

In Tables 3 and 4, we show
the churn at cold accuracy metric across network sizes (fcn-10, fcn-100, fcn-1000, fcn-10000, fcn-100000). Table 5 shows the standard error bars. They are obtained by fixing the dataset and model, taking the 100 accuracy and churn results from each baseline, and calculating the standard error, i.e., the standard deviation of the mean estimate. We then report the average standard error across the baselines. We see that distillation is the best 52% of the time.

E.1.2 INITIAL SAMPLE 1000, BATCH SIZE 1000, VALIDATION SIZE 100

In Tables 6 and 7, we show the churn at cold accuracy metric across network sizes (fcn-10, fcn-100, fcn-1000, fcn-10000, fcn-100000). We see that distillation consistently performs strongly across datasets and network sizes. Table 8 shows the standard error bars. We see that distillation is the best 84% of the time.

E.2 ADDITIONAL MNIST VARIANT RESULTS

E.2.1 INITIAL SAMPLE SIZE 100, BATCH SIZE 1000, VALIDATION SIZE 100

We show full results in Table 9. We see that distillation is the best for 24 out of the 50 combinations of dataset and network. Error bands can be found in Table 10.

E.2.2 INITIAL SAMPLE SIZE 1000, BATCH SIZE 1000, VALIDATION SIZE 100

We show full results in Table 11. We see that distillation is the best for 42 out of the 50 combinations of dataset and network. Error bands can be found in Table 12.

E.2.3 INITIAL SAMPLE SIZE 10000, BATCH SIZE 1000, VALIDATION SIZE 1000

We show full results in Table 13. We see that in this setting, label smoothing becomes competitive with distillation, with either of them being the best. Distillation is the best for 32 out of the 50 combinations of dataset and network, losing marginally to label smoothing in the other cases. See Table 14 for error bands.
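The standard-error computation described above (the standard deviation of the mean over repeated runs) can be sketched in a few lines; the helper name `standard_error` and the demo values are ours:

```python
import numpy as np

def standard_error(results) -> float:
    """Standard error of the mean over repeated runs: the sample
    standard deviation (ddof=1) divided by sqrt(number of runs)."""
    results = np.asarray(results, dtype=float)
    return float(results.std(ddof=1) / np.sqrt(len(results)))

# Five hypothetical accuracy results from repeated retraining runs.
runs = [0.90, 0.92, 0.91, 0.89, 0.93]
se = standard_error(runs)
```

In the experiments this would be applied per (dataset, model, baseline) to the 100 accuracy and churn results, and the per-baseline values averaged.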
E.3 ADDITIONAL SVHN AND CIFAR RESULTS

E.3.1 INITIAL SAMPLE 100, BATCH SIZE 1000, VALIDATION SIZE 100

Results are in Table 15, where we see that distillation is best on 8 out of 10 combinations of dataset and network. Error bands can be found in Table 16.

E.3.2 INITIAL SAMPLE 1000, BATCH SIZE 1000, VALIDATION SIZE 100

The results can be found in Table 17. We include the error bands in Table 18. Distillation is best in all combinations.

E.3.3 INITIAL SAMPLE 10000, BATCH SIZE 1000, VALIDATION SIZE 1000

Results are in Table 19, where we see that distillation is best on all combinations of dataset and network. Error bands can be found in Table 20.

E.4 ADDITIONAL CELEBA RESULTS

E.4.1 INITIAL SAMPLE 100, BATCH SIZE 1000, VALIDATION SIZE 100

Tables 21, 22, 23, and 24 show the performance on CelebA tasks when we instead use an initial sample size of 100. We see that across the 200 combinations of task and network, distillation is the best 192 times, or 96% of the time. The error bands can be found in Table 25.

E.4.2 INITIAL SAMPLE 1000, BATCH SIZE 1000, VALIDATION SIZE 100

We show additional CelebA results for initial sample 1000 and batch size 1000 in Tables 26, 27, 28, and 29, which show performance for each dataset across convnet-1, convnet-2, convnet-4, convnet-8, and convnet-16. This gives us 40 × 5 = 200 results, of which distillation performs the best in 158 settings, or 79% of the time. The error bands can be found in Table 30.

E.4.3 INITIAL SAMPLE SIZE 10000, BATCH SIZE 1000, VALIDATION SIZE 1000

Tables 31, 32, 33, and 34 show the performance on CelebA tasks when we instead use an initial sample size of 10000. We see that across the 200 combinations of task and network, distillation is the best 183 times, or 91.5% of the time. The error bands can be found in Table 35.

E.5 CIFAR10 AND CIFAR100 ON RESNET

Results can be found in Table 36. We see that distillation outperforms in every case.
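The per-combination win counts reported throughout this section (e.g., best in 158 out of 200 settings, or 79%) amount to tallying, for each (dataset, network) pair, which method attains the best score. A small sketch with hypothetical numbers; `win_rate`, the method names, and the demo scores are ours, not from the paper:

```python
def win_rate(results: dict, method: str) -> float:
    """Fraction of (dataset, network) combinations on which `method`
    attains the lowest churn among all compared methods.
    `results` maps each combination to a dict of {method: churn}."""
    wins = sum(1 for scores in results.values()
               if min(scores, key=scores.get) == method)
    return wins / len(results)

demo = {
    ("cifar10", "convnet-1"): {"distill": 0.05, "cold": 0.12, "warm": 0.09},
    ("cifar10", "convnet-2"): {"distill": 0.06, "cold": 0.10, "warm": 0.04},
}
rate = win_rate(demo, "distill")
```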
E.6 ADDITIONAL IMDB RESULTS

In Table 37, we show the results for the IMDB dataset and transformer networks for initial sample sizes of 100, 1000, and 10000 with the batch size fixed at 1000. The error bands can be found in Table 38. We see that for an initial sample size of 100, distillation performs poorly for the smaller networks, as the process of distillation hurts performance with a weak teacher trained on only 100 examples, but it performs well for the larger networks. For initial sample sizes of 1000 and 10000, distillation is the clear winner, losing in only one instance. We show the full Pareto frontiers and cost curves in Figure 4.
Churn Reduction via Distillation

In real-world systems, models are frequently updated as more data becomes available, and in addition to achieving high accuracy, the goal is to also maintain a low difference in predictions compared to the base model (i.e., predictive "churn"). If model retraining results in vastly different behavior, then it could cause negative effects in downstream systems, especially if this churn can be avoided with limited impact on model accuracy. In this paper, we show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn. We then show that distillation performs strongly for low churn training against a number of recent baselines on a wide range of datasets and model architectures, including fully-connected networks, convolutional networks, and transformers.

1 INTRODUCTION

Deep neural networks (DNNs) have had profound success at solving some of the most challenging machine learning problems. While much of the focus has been on attaining state-of-the-art predictive performance, comparatively little effort has gone towards improving other aspects. One such important practical aspect is reducing unnecessary predictive churn with respect to a base model. We define predictive churn as the difference in the predictions of a model relative to a base model on the same datapoints. In a production system, models are often continuously released through an iterative improvement process which cycles through launching a model, collecting additional data, researching ways to improve the current model, and proposing a candidate model to replace the version currently serving in production.
In order to validate a candidate model, it often needs to be compared to the production model through live A/B tests (it is known that offline performance alone is not sufficient, especially if these models are used as part of a larger system where the offline and online metrics may not perfectly align (Deng et al., 2013; Beel et al., 2013)). Live experiments are costly: they often require human evaluations when the candidate and production models disagree in order to know which model was correct (Theocharous et al., 2015; Deng & Shi, 2016). Therefore, minimizing unnecessary predictive churn can have a significant impact on the cost of the launch cycle. It has been observed that training DNNs can be very noisy due to a variety of factors including random initialization (Glorot & Bengio, 2010), mini-batch ordering (Loshchilov & Hutter, 2015), data augmentation and processing (Santurkar et al., 2018; Shorten & Khoshgoftaar, 2019), and hardware (Turner & Nowotny, 2015; Bhojanapalli et al., 2021). In other words, running the same procedure multiple times can lead to models with a surprising number of disagreeing predictions, even though all of them can have very high accuracy (Bahri & Jiang, 2021). While the stability of the training procedure is a separate problem from lowering predictive churn, such instability can further exacerbate the issue and underscores the difficulty of the problem. Knowledge distillation (Hinton et al., 2015), which involves training against a mixture of a teacher model's predictions and the original labels, has proved to be a useful tool in deep learning. In this paper, we show that, surprisingly, it is not only an effective tool for churn reduction when using the base model as the teacher, but is also mathematically aligned with learning under a constraint on the churn. Thus, in addition to providing a strong method for low churn training, we also provide new insight into distillation.
Our contributions are as follows:

• We show theoretically an equivalence between the low churn training objective (i.e., minimize a loss function subject to a churn constraint with respect to the base model) and knowledge distillation with the base model as the teacher.

• We show that distillation performs strongly in a wide range of experiments against a number of baselines that have been considered for churn reduction.

• Our distillation approach is similar to a previous method called "anchor" (Fard et al., 2016), which trains on the true labels instead of the distilled labels for the examples that the base model predicts incorrectly. Our proposal outperforms it by a surprising amount. We present both theoretical and experimental results showing that this modification of anchor relative to distillation actually hurts performance.

2 RELATED WORKS

Prediction Churn. There are few works that address low churn training with respect to a base model. Fard et al. (2016) proposed an anchor loss, which is similar to distillation when the base model's prediction agrees with the original label, and uses a scaled version of the original label otherwise. In our empirical evaluation, we find that this procedure performs considerably worse than distillation. Cotter et al. (2019a) and Goh et al. (2016) use constrained optimization by adding a constraint on the churn. We use some of the theoretical insights from that work to show an equivalence between distillation and the constrained optimization problem. Thus, we are able to bypass the added complexity of constrained optimization (Cotter et al., 2019b) in favor of distillation, which is a simpler and more robust method. A related but different notion of churn that has been studied is where the goal is to reduce training instability. Anil et al. (2018) noted that co-distillation is an effective method.
Bahri & Jiang (2021) propose a locally adaptive variant of label smoothing, and Bhojanapalli et al. (2021) propose entropy regularizers and a variant of co-distillation. We tested many of the baselines proposed in these papers, adapted them to our notion of churn, and found that they were not effective at reducing predictive churn w.r.t. a base model.

Distillation. Distillation (Ba & Caruana, 2013; Hinton et al., 2015), first proposed to transfer knowledge from larger networks to smaller ones, has become immensely popular. Applications include learning from noisy labels (Li et al., 2017), model compression (Polino et al., 2018), adversarial robustness (Papernot et al., 2016), DNNs with logic rules (Hu et al., 2016), visual relationship detection (Yu et al., 2017), reinforcement learning (Rusu et al., 2015), domain adaptation (Asami et al., 2017), and privacy (Lopez-Paz et al., 2015). Our work adds to the list of applications in which distillation is effective. The theoretical motivation of distillation, however, is less established. Lopez-Paz et al. (2015) studied distillation as learning using privileged information (Vapnik & Izmailov, 2015). Phuong & Lampert (2019) establish fast convergence of the expected risk of a distillation-trained linear classifier. Foster et al. (2019) provide a generalization bound for the student under the assumption that it learns a model close to the teacher. Dong et al. (2019) argued that distillation has a similar effect to early stopping. Mobahi et al. (2020) showed an equivalence to increasing the regularization strength for kernel methods. Menon et al. (2020) establish a bias-variance trade-off for the student. Our analysis provides a new theoretical perspective on its relationship to churn reduction.

3 DISTILLATION FOR CONSTRAINING CHURN

We are interested in a multiclass classification problem with an instance space $\mathcal{X}$ and a label space $[m] = \{1, \ldots, m\}$. Let $D$ denote the underlying data distribution over instances and labels, and $D_X$ denote the corresponding marginal distribution over $\mathcal{X}$. Let $\Delta_m$ denote the $(m-1)$-dimensional simplex with $m$ coordinates. We will use $p : \mathcal{X} \to \Delta_m$ to denote the underlying conditional-class probabilities, where $p_y(x) = \mathbb{P}(Y = y \mid X = x)$. We assume that we are provided a base classifier $g : \mathcal{X} \to \Delta_m$ that predicts a vector of probabilities $g(x) \in \Delta_m$ for any instance $x$. Our goal is then to learn a new classifier $h : \mathcal{X} \to \Delta_m$, constraining it to have low predictive churn against $g$. We measure the classification performance of a classifier $h$ using a loss function $\ell : [m] \times \Delta_m \to \mathbb{R}_+$ that maps a label $y \in [m]$ and prediction $h(x) \in \Delta_m$ to a non-negative number $\ell(y, h(x))$, and denote the classification risk by $R(h) := \mathbb{E}_{(x,y) \sim D}[\ell(y, h(x))]$. We would ideally like to define predictive churn as the fraction of examples on which $h$ and $g$ disagree. For the purpose of designing a tractable algorithm, we will instead work with a softer notion of churn, which evaluates the divergence between their output distributions. To this end, we use a measure of divergence $d : \Delta_m \times \Delta_m \to \mathbb{R}_+$, and denote the expected churn between $h$ and $g$ by $C(h) := \mathbb{E}_{x \sim D_X}[d(g(x), h(x))]$. We then seek to minimize the classification risk for $h$, subject to the expected churn being within an allowed limit $\epsilon > 0$:

$$\min_{h : \mathcal{X} \to \Delta_m} R(h) \quad \text{s.t.} \quad C(h) \le \epsilon. \qquad (1)$$

We consider loss and divergence functions that are defined in terms of a scoring function $\phi : \Delta_m \to \mathbb{R}_+^m$ that maps a distribution to an $m$-dimensional score. Specifically, we will consider scoring functions $\phi$ that are strictly proper (Gneiting & Raftery, 2007; Williamson et al., 2016), i.e., for which, given any distribution $u \in \Delta_m$, the conditional risk $\mathbb{E}_{y \sim u}[\phi_y(v)]$ is uniquely minimized by $v = u$. The following are general loss and divergence functions derived from $\phi$:

$$\ell_\phi(y, v) := \phi_y(v); \qquad d_\phi(u, v) := \sum_{y \in [m]} u_y \big(\phi_y(v) - \phi_y(u)\big). \qquad (2)$$

E.g.,
cross-entropy, KL-divergence, and squared loss are special cases of this formulation.

3.1 BAYES-OPTIMAL CLASSIFIER

We show below that for the loss and divergence functions defined in (2), the optimal-feasible classifier for the constrained problem in (1) is a convex combination of the class probability function $p$ and the base classifier $g$.

Proposition 1. Let $(\ell, d)$ be defined as in (2) for a strictly proper scoring function $\phi$. Suppose $\phi(u)$ is strictly convex in $u$. Then there exists $\lambda^* \in [0, 1]$ such that the following is an optimal-feasible classifier for (1): $h^*(x) = \lambda^* p(x) + (1 - \lambda^*) g(x)$. Furthermore, if $u \cdot \phi(u)$ is $\alpha$-strongly concave over $u \in \Delta_m$ w.r.t. the $L_q$-norm, then $\lambda^* \le \sqrt{2\epsilon / \big(\alpha\, \mathbb{E}_x\big[\|p(x) - g(x)\|_q^2\big]\big)}$.

The strong concavity condition in Proposition 1 is satisfied by the cross-entropy loss and KL-divergence for $\alpha = 1$ with the $L_1$-norm, and by the squared loss and $L_2$-distance for $\alpha = 2$ with the $L_2$-norm. The bound suggests that the mixing coefficient $\lambda^*$ depends on how close the base classifier is to the class probability function $p$.

Algorithm 1 Distillation-based Churn Reduction
1: Inputs: Training sample $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, grid of mixing coefficients $\Lambda = \{\lambda_1, \ldots, \lambda_L\}$, base classifier $g$, constraint slack $\epsilon > 0$
2: Train a classifier $h_k$ for each $\lambda_k \in \Lambda$ by minimizing the distilled loss in (4): $h_k \in \arg\min_{h \in \mathcal{H}} \hat{L}_{\lambda_k}(h)$
3: Find a convex combination of $h_1, \ldots, h_L$ by solving the following convex program in $L$ variables: $\min_{h \in \mathrm{co}(h_1, \ldots, h_L)} \hat{R}(h)$ s.t. $\hat{C}(h) \le \epsilon$, and return the solution $\hat{h}$

3.2 DISTILLATION-BASED APPROACH

Proposition 1 directly motivates the use of a distillation-based approach for solving the churn-constrained optimization problem in (1).
We propose treating the base classifier $g$ as a teacher model, mixing the training labels $y$ with scores from the teacher $g(x)$, and minimizing a classification loss against the transformed labels:

$$L_\lambda(h) = \mathbb{E}_{(x,y) \sim D}\big[\big(\lambda e_y + (1 - \lambda) g(x)\big) \cdot \phi(h(x))\big], \qquad (3)$$

where $e_y \in \{0, 1\}^m$ denotes a one-hot encoding of the label $y \in [m]$ and $\phi$ is a strictly proper scoring function. It is straightforward to show that when $\lambda = \lambda^*$, the optimal classifier for the above distillation loss takes the same form as in Proposition 1, i.e., $h^*(x) = \lambda^* p(x) + (1 - \lambda^*) g(x)$. While the optimal mixing parameter $\lambda^*$ is unknown, we propose treating it as a hyper-parameter and tuning it to reach the desired level of churn. In practice, we do not have direct access to the distribution $D$ and will need to work with a sample $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ drawn from $D$. To this end, we define the empirical risk and the empirical churn as follows:

$$\hat{R}(h) = \frac{1}{n} \sum_{i=1}^{n} \ell_\phi(y_i, h(x_i)); \qquad \hat{C}(h) = \frac{1}{n} \sum_{i=1}^{n} d_\phi(g(x_i), h(x_i)),$$

where $\ell_\phi$ and $d_\phi$ are defined as in (2) for a scoring function $\phi$. Our proposal is then to solve the following empirical risk minimization problem over a hypothesis class $\mathcal{H} \subset \{h : \mathcal{X} \to \Delta_m\}$ for different values of the coefficient $\lambda_k$ chosen from a finite grid $\{\lambda_1, \ldots, \lambda_L\} \subset [0, 1]$:

$$h_k \in \arg\min_{h \in \mathcal{H}} \hat{L}_{\lambda_k}(h) := \frac{1}{n} \sum_{i=1}^{n} \big(\lambda_k e_{y_i} + (1 - \lambda_k) g(x_i)\big) \cdot \phi(h(x_i)). \qquad (4)$$

To construct the final classifier, we find a convex combination of the $L$ classifiers $h_1, \ldots, h_L$ that minimizes $\hat{R}(h)$ while satisfying the constraint $\hat{C}(h) \le \epsilon$, and return an ensemble of the $L$ classifiers. The overall procedure is outlined in Algorithm 1, where we denote the set of convex combinations of classifiers $h_1, \ldots, h_L$ by $\mathrm{co}(h_1, \ldots, h_L) = \{h : x \mapsto \sum_{j=1}^{L} \alpha_j h_j(x) \mid \alpha \in \Delta_L\}$. The post-processing step in Algorithm 1 amounts to solving a simple convex program in $L$ variables.
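The distilled objective in (4) with the cross-entropy scoring function $\phi_y(u) = -\log(u_y)$ amounts to a cross-entropy loss against labels mixed from one-hot targets and teacher probabilities. The following is a minimal numpy sketch of that computation (our own illustration, not the authors' implementation; the array names are assumptions):

```python
import numpy as np

def distilled_targets(y, g_probs, lam, m):
    """Mix one-hot labels e_y with teacher scores g(x): lam*e_y + (1-lam)*g(x)."""
    e_y = np.eye(m)[y]  # (n, m) one-hot encodings of the hard labels
    return lam * e_y + (1.0 - lam) * g_probs

def distilled_loss(h_probs, y, g_probs, lam):
    """Empirical distilled loss (4) with phi_y(u) = -log(u_y) (cross-entropy)."""
    m = h_probs.shape[1]
    targets = distilled_targets(y, g_probs, lam, m)
    return -np.mean(np.sum(targets * np.log(h_probs + 1e-12), axis=1))
```

At $\lambda = 1$ this reduces to ordinary cross-entropy on the hard labels, and at $\lambda = 0$ to pure distillation against the teacher; in practice one would sweep $\lambda$ over a grid as in Algorithm 1.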
This post-processing step is needed for technical reasons in our theoretical results, specifically, to translate a dual-optimal solution to (1) into a primal-feasible solution. In practice, however, we do not construct an ensemble, and instead simply return the single classifier that achieves the least empirical risk while satisfying the churn constraint. In our experiments, we use the cross-entropy loss for training, i.e., set $\phi_y(u) = -\log(u_y)$.

4 THEORETICAL GUARANTEES

We provide optimality and feasibility guarantees for the proposed algorithm and also explain why our approach is better-suited for optimizing accuracy (subject to a churn constraint) compared to the previous churn-reduction method of Fard et al. (2016).

4.1 OPTIMALITY AND FEASIBILITY GUARANTEES

We now show that the classifier $\hat{h}$ returned by Algorithm 1 approximately satisfies the churn constraint, while achieving a risk close to that of the optimal-feasible classifier in $\mathcal{H}$. This result assumes that we are provided with generalization bounds for the classification risk and churn.

Theorem 2. Let the scoring function $\phi : \Delta_m \to \mathbb{R}_+^m$ be convex, and $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$. Let the set of classifiers $\mathcal{H}$ be convex, with the base classifier $g \in \mathcal{H}$. Suppose $C$ and $R$ enjoy the following generalization bounds: for any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over draw of $S \sim D^n$, for any $h \in \mathcal{H}$, $|R(h) - \hat{R}(h)| \le \Delta_R(n, \delta)$ and $|C(h) - \hat{C}(h)| \le \Delta_C(n, \delta)$, for some $\Delta_R(n, \delta)$ and $\Delta_C(n, \delta)$ that are decreasing in $n$ and approach 0 as $n \to \infty$. Let $\tilde{h}$ be an optimal-feasible classifier in $\mathcal{H}$, i.e., $C(\tilde{h}) \le \epsilon$ and $R(\tilde{h}) \le R(h)$ for all classifiers $h$ for which $C(h) \le \epsilon$. Let $\hat{h}$ be the classifier returned by Algorithm 1 with $\Lambda = \big\{\max\big\{\tfrac{\epsilon}{\epsilon + 2B}, u\big\} \,\big|\, u \in \{\tfrac{1}{L}, \tfrac{2}{L}, \ldots, 1\}\big\}$ for some $L \in \mathbb{N}_+$. For any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over draw of $S \sim D^n$:

Optimality: $R(\hat{h}) \le R(\tilde{h}) + O\big(\big(1 + \tfrac{2B}{\epsilon}\big)\big(\Delta_R(n, \delta) + \Delta_C(n, \delta) + \tfrac{B}{L}\big)\big)$,

Feasibility: $C(\hat{h}) \le \epsilon + \Delta_C(n, \delta)$.
In practice, we expect the churn metric to generalize better than the classification risk, i.e., for $\Delta_C(n, \delta)$ to be smaller than $\Delta_R(n, \delta)$. This is because the classification risk is computed on "hard" labels $y \in [m]$ from the training sample, whereas the churn metric is computed on "soft" labels $g(x) \in \Delta_m$ from the base model. The traditional view of distillation (Hinton et al., 2015) suggests that the soft labels from a teacher model come with confidence scores for each example, and thus allow the student to generalize well to unseen new examples. A similar view is also posed by Menon et al. (2020), who argue that the soft labels from the teacher have "lower variance" than the hard labels from the training sample, and therefore aid in better generalization of the student. See Appendix B for explicit bounds on $\Delta_C(n, \delta)$ and $\Delta_R(n, \delta)$. In fact, for certain base classifiers $g$, generalizing well on "churn" can have the additional benefit of improving classification performance, as shown by the proposition below.

Proposition 3. Let $(\ell, d)$ be defined as in (2) for a strictly proper scoring function $\phi$. Suppose $\phi(u)$ is strictly convex in $u$, $\Phi$-Lipschitz w.r.t. the $L_1$-norm for each $y \in [m]$, and $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$. Let $\lambda^*$ be the optimal mixing coefficient defined in Proposition 1. Let $\Delta_C(n, \delta)$ be the churn generalization bound defined in Theorem 2. Let $\tilde{h}$ be an optimal-feasible classifier in $\mathcal{H}$ and $\hat{h}$ be the classifier returned by Algorithm 1. Then for any $\delta \in (0, 1)$, w.p. $\ge 1 - \delta$ over draw of $S \sim D^n$:

$$R(\hat{h}) - R(\tilde{h}) \le \epsilon + \Delta_C(n, \delta) + (B + \Phi \lambda^*)\, \mathbb{E}_{x \sim D_X}\big[\|p(x) - g(x)\|_1\big].$$

This result bounds the excess classification risk in terms of the churn generalization bound and the expected difference between the base classifier $g$ and the underlying class probability function $p$. When the base classifier is close to $p$, low values of $\Delta_C(n, \delta)$ result in low classification risk.
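As a quick sanity check of the definitions in (2) (our own sketch, not from the paper): with the cross-entropy scoring function $\phi_y(w) = -\log(w_y)$, the soft churn divergence $d_\phi$ used throughout this section is exactly the KL divergence, which we can verify numerically:

```python
import numpy as np

def d_phi(u, v):
    """Divergence from (2) with phi_y(w) = -log(w_y):
    d(u, v) = sum_y u_y * (phi_y(v) - phi_y(u))."""
    return float(np.sum(u * (-np.log(v) + np.log(u))))

def kl(u, v):
    """Standard KL divergence KL(u || v)."""
    return float(np.sum(u * np.log(u / v)))

# Hypothetical base-model and new-model output distributions.
u = np.array([0.7, 0.2, 0.1])
v = np.array([0.5, 0.3, 0.2])
```

The divergence is zero exactly when the two output distributions coincide, which is why driving $\hat{C}(h)$ to zero drives the model towards the base model's predictions.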
4.2 ADVANTAGE OVER ANCHOR LOSS

We next compare our distillation loss in (3) with the previous anchor loss of Fard et al. (2016), which uses the base model's prediction only when it agrees with the original label, and uses a scaled version of the original label otherwise. While originally proposed for churn reduction with binary labels, we provide below an analogous version of this loss for a multiclass setup:

$$L_{\mathrm{anc}}(h) = \mathbb{E}_{(x,y) \sim D}\big[a \cdot \phi(h(x))\big], \qquad (5)$$

where

$$a = \begin{cases} \alpha g(x) + (1 - \alpha) e_y & \text{if } y = \arg\max_k g_k(x) \\ \eta e_y & \text{otherwise,} \end{cases}$$

for hyper-parameters $\alpha, \eta \in [0, 1]$ and a strictly proper scoring function $\phi$. Here, we have used $\arg\max$ to denote ties being broken in favor of the larger class. While this helps us simplify the exposition, our results can be easily extended to a version of the loss which includes ties. The anchor loss does not take into account the confidence with which the base model disagrees with the sampled label $y$. For example, if the base model predicts near-equal probabilities for all classes, but happens to assign a slightly higher probability to a class different from $y$, the anchor loss would still completely ignore the base model's score (even though it might be the case that all the labels are indeed equally likely to occur). In some cases, this selective use of the teacher labels can result in a biased objective and may hurt the classifier's accuracy. To see this, consider an ideal scenario where the base model predicts the true conditional probabilities $p(x)$ and the student hypothesis class is universal. In this case, minimizing the churn w.r.t. the base model has the effect of maximizing classification accuracy, i.e., a classifier that has zero churn w.r.t. the base model also produces the least classification error. However, as shown below, even in this ideal setup, minimizing the anchor loss may result in a classifier different from the base model.

Proposition 4.
When $g(x) = p(x)$, $\forall x$, for any given $\lambda \in [0, 1]$, the minimizer of the distillation loss in (3) over all classifiers $h$ is given by $h^*(x) = p(x)$, whereas the minimizer of the anchor loss in (5) is given by $h^*_j(x) = \frac{z_j}{\sum_j z_j}$, where

$$z_j = \begin{cases} \alpha p_j^2(x) + (1 - \alpha) p_j(x) & \text{if } j = \arg\max_k p_k(x) \\ \big(\eta + \alpha \max_k p_k(x)\big) p_j(x) & \text{otherwise.} \end{cases}$$

Unless $\alpha = 0$ and $\eta = 1$ (which amounts to completely ignoring the base model) or the base model makes hard predictions on all points, i.e., $p_j(x) \in \{0, 1\}$, $\forall x$, the anchor loss encourages scores that differ from the base model $p$. For example, when $\alpha = \eta = 1$ (and the base model makes soft predictions), the anchor loss has the effect of down-weighting the label that the base model is most confident about, and as a result, encourages lower scores on that label and higher scores on all other labels. While one can indeed tweak the two hyper-parameters to reduce the gap between the learned classifier and the base model, our proposal requires only one hyper-parameter $\lambda$, which represents an intuitive trade-off between the one-hot and teacher labels. In fact, irrespective of the choice of $\lambda$, the classifier that minimizes our distillation loss in Proposition 4 mimics the base model $p$ exactly, and as a result, achieves both zero churn and optimal accuracy. We shall see in the next section that even on real-world datasets, where the base classifier does not necessarily make predictions close to the true class probabilities (and where the student hypothesis class is not necessarily universal and may be of limited capacity), our proposal performs substantially better than the anchor loss in minimizing churn at a particular accuracy. Figure 3 provides a further ablation study, effectively interpolating between the anchor and distillation methods, and provides evidence that using the true (hard) label instead of the teacher (soft) label can steadily degrade performance.
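Proposition 4 can be illustrated numerically. The sketch below (our own, with a hypothetical $p$) evaluates the closed-form anchor-loss minimizer and checks that with $\alpha = \eta = 1$ the base model's top label is down-weighted, while $\alpha = 0$, $\eta = 1$ recovers $p$ exactly:

```python
import numpy as np

def anchor_minimizer(p, alpha, eta):
    """Closed-form minimizer of the anchor loss (5) when g = p (Proposition 4)."""
    j_max = int(np.argmax(p))
    z = (eta + alpha * p[j_max]) * p                          # "otherwise" branch
    z[j_max] = alpha * p[j_max] ** 2 + (1 - alpha) * p[j_max]  # argmax branch
    return z / z.sum()

p = np.array([0.5, 0.3, 0.2])          # hypothetical true class probabilities
h_anchor = anchor_minimizer(p, alpha=1.0, eta=1.0)  # biased away from p
h_ignore = anchor_minimizer(p, alpha=0.0, eta=1.0)  # ignores base model: recovers p
```

With $p = (0.5, 0.3, 0.2)$ and $\alpha = \eta = 1$, the minimizer assigns the top label a score of $0.25 < 0.5$, matching the down-weighting effect described above, while the distillation minimizer would return $p$ itself for any $\lambda$.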
5 EXPERIMENTS

We now show empirically that distillation is an effective method to train models for both accuracy and low churn. We test our method across a large number of datasets and neural network architectures.

5.1 SETUP

Datasets and architectures: The following are the datasets we use in our experiments, along with the associated model architectures:

• 12 OpenML datasets using fully-connected neural networks.

• 10 MNIST variants, SVHN, CIFAR10, and 40 CelebA tasks using convolutional networks.

• CIFAR10 and CIFAR100 with ResNet-50, ResNet-101, and ResNet-152.

• IMDB dataset using a transformer network.

For each architecture (besides ResNet), we use 5 different sizes. For the fully-connected network, we use a simple network with one hidden layer of 10, 10^2, 10^3, 10^4, and 10^5 units, which we call fcn-x where x is the respective size of the hidden layer. For the convolutional neural network, we start with the LeNet5 architecture (LeCun et al., 1998) and scale the number of hidden units by a factor of x for x = 1, 2, 4, 8, 16, which we call ConvNet-x for the respective x. Finally, we use the basic transformer architecture from the Keras tutorial (Keras, 2020) and scale the number of hidden units by x for x = 1, 2, 4, 8, 16, which we call Transformer-x for the respective x. Code for the models in Keras can be found in the Appendix. For each dataset, we use the standard train/test split if available; otherwise, we fix a random train/test split with ratio 2:1.

Setup: For each dataset and neural network, we randomly select from the training set 1000 initial examples, 100 validation examples, and a batch of 1000 examples, and train an initial model on the initial set using the Adam optimizer with default settings, early stopping (i.e., stop when there is no improvement in the validation loss after 5 epochs), and default random initialization, and use that model as the base model.
Then, for each baseline, we train on the combined initial set and batch (2000 datapoints), again using the Adam optimizer with default settings and the same early stopping scheme, and calculate the accuracy and churn against the base model on the test set. We average across 100 runs and provide the error bands in the Appendix. For all the datasets except the OpenML datasets, we also have results for the case of 10000 initial examples, 1000 validation examples, and a batch of 1000. We also show results for the case of 100 initial examples, 1000 validation examples, and a batch of 1000 for all of the datasets. Due to space, we show those results in the Appendix. We ran our experiments on a cloud environment. For each run, we used an NVIDIA V100 GPU, which took up to several days to finish all 100 trials.

Baselines: We test our method against the following baselines. (1) Cold start, where we train the model from scratch with the default initializer. (2) Warm start, where we initialize the model's parameters to those of the base model before training. (3) Shrink-perturb (Ash & Adams, 2019), a method designed to improve warm-starting by initializing the model's weights to $\alpha \cdot \theta_{\mathrm{base}} + (1 - \alpha) \cdot \theta_{\mathrm{init}}$ before training, where $\theta_{\mathrm{base}}$ are the weights of the base model, $\theta_{\mathrm{init}}$ is a randomly initialized model, and $\alpha$ is a hyperparameter we tune across {0.1, 0.2, ..., 0.9}. (4) Mixup (Zhang et al., 2017) (a baseline suggested for a different notion of churn (Bahri & Jiang, 2021)), which trains on convex combinations of pairs of datapoints. We search over its hyperparameter $\alpha \in$ {0.1, ..., 0.9}, as defined in Zhang et al. (2017). (5) Label smoothing (Szegedy et al., 2016), which was suggested by Bahri & Jiang (2021) for the variance notion of churn, and proceeds by training on a convex combination of the original labels and the base model's soft predictions.
We tune across the convex combination weight $\alpha \in$ {0.1, 0.2, ..., 0.9}. (6) Co-distillation (Anil et al., 2018), which was proposed for the variance notion of churn, where we simultaneously train two warm-started networks on a loss that is a convex combination of the original loss and a loss on the difference between their predictions. We tune across the convex combination weight $\alpha \in$ {0.1, 0.2, ..., 0.9}. (7) Anchor (Fard et al., 2016), which, as noted in Section 4.2, proceeds by optimizing the cross-entropy loss on a modified label: we use the label $\alpha g(x) + (1 - \alpha) e_y$ when the base model $g$ agrees with the true label $y$, and $\eta e_y$ otherwise. We tune across $\alpha \in$ {0.1, 0.2, ..., 0.9} and $\eta \in$ {0.5, 0.7, 1}. For distillation, we tune the trade-off parameter $\lambda$ across {0.1, 0.2, ..., 0.9}.

Metric: All of the methods produce a model that we evaluate for both accuracy and churn with respect to the base model on the test set. We consider the hard notion of churn, which measures the average difference in hard predictions w.r.t. the base classifier on a test set. We will see later that there is often a trade-off between accuracy and churn, and in an effort to produce one metric for quantitative evaluation, we propose the churn at cold accuracy metric, which is defined as follows. Each baseline produces a set of models (one for each hyperparameter setting). We take the averaged churn and accuracy across the 100 runs and choose the model with the lowest churn that is at least as accurate as the cold-start model (it is possible that no such model exists for that method). This way, we can identify the method that delivers the lowest churn while still performing at least as well as if we trained on the updated dataset in a vanilla manner. We believe this metric is practically relevant, as a practitioner is unlikely to accept a reduction in accuracy in order to reduce churn.
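Both evaluation quantities above are straightforward to compute from hard predictions and per-setting (accuracy, churn) pairs; the following is our own minimal sketch of the evaluation, with hypothetical inputs, not the authors' code:

```python
import numpy as np

def hard_churn(preds_new, preds_base):
    """Hard churn: fraction of test examples where the new model's hard
    prediction disagrees with the base model's."""
    return float(np.mean(preds_new != preds_base))

def churn_at_cold_accuracy(results, cold_accuracy):
    """Churn at cold accuracy: among a method's hyperparameter settings,
    given as (accuracy, churn) pairs, return the lowest churn whose accuracy
    is at least the cold-start accuracy; None if no setting qualifies."""
    feasible = [churn for acc, churn in results if acc >= cold_accuracy]
    return min(feasible) if feasible else None
```

The None case corresponds to the situation noted above where a method has no model at least as accurate as the cold-start model.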
5.2 RESULTS

The detailed results for the following experiments can be found in the Appendix. Given space constraints, we only provide a high-level summary in this section.

OpenML datasets with fully-connected networks: In Table 1 we show the results for the OpenML datasets using the fcn-1000 network. We see that distillation performs well across the board, and for the other fully-connected network sizes, distillation is the best in the majority of cases (84% of the time for initial sample size 1000 and 52% of the time for initial sample size 100).

MNIST variants, SVHN, and CIFAR10 with convolutional networks: In Table 2, we show the results for 10 MNIST variants, SVHN, and CIFAR10 using convnet-4. We see that distillation performs strongly across the board. We found that distillation performs best in 84% of combinations of dataset and network. When we increase the initial sample size to 10000 and keep the batch size fixed at 1000, we found that label smoothing starts becoming competitive with distillation: distillation is best 64% of the time, and label smoothing wins by a small margin all other times. We only saw this phenomenon for a handful of the MNIST variants, which suggests that label smoothing may be especially effective in these situations. When we decreased the initial sample size to 100 and kept the batch size the same, we found that distillation was best 48% of the time, with Anchor being the second best method, winning 24% of the time. For SVHN and CIFAR10, distillation performs the best on all 10 of the 10 combinations. If we increased the initial sample size to 10000 and kept the batch size fixed at 1000, then distillation still performs the best in all 10 out of 10 combinations. If we decreased the initial sample size to 100 and kept the same batch size, then distillation performs the best on 8 out of the 10 combinations.
CelebA with convolutional networks: Across all 200 combinations of task and network, distillation performs the best 79% of the time. Moreover, if we increased the initial sample size to 10000 and kept the batch size fixed at 1000, distillation is even better, performing the best 91.5% of the time. If we decreased the initial sample size to 100, then distillation is best 96% of the time.

CIFAR10 and CIFAR100 with ResNet: Due to the computational costs, we only run these experiments for initial sample size 1000. In all cases (across ResNet-50, ResNet-101, and ResNet-152), we see that distillation outperforms the other baselines.

IMDB with transformer network: We experimented with initial sample sizes of 100, 1000, and 10000. We found that distillation performed the best the majority of the time; the only notable weak performance was in some instances where no baseline was able to reach the accuracy of the cold-start method. In Figure 2 we show the Pareto frontiers of the various baselines, as well as the cost of each method as we vary the trade-off between accuracy and churn. We see that not only does distillation do well on churn, but it performs the best at any trade-off between churn and accuracy for the cases shown.

6 CONCLUSION

We have proposed knowledge distillation as a new practical solution to churn reduction, and provided both theoretical and empirical justifications for the approach. Some directions for future work include investigating the interplay between churn and fairness. One way this connection may arise is that in trying to lower churn, some slices of the data may be disproportionately affected. Another way is if the teacher itself contains harmful biases, in which case it makes sense to control what kind of churn should be minimized. Further work may also include combining distillation with other methods to achieve even stronger results.
Reproducibility Statement: All details of the experimental setup are in the main text, along with descriptions of the baselines and the hyperparameters swept across. Code can be found in the Appendix. All proofs are in the Appendix.

REFERENCES

Shivani Agarwal. Surrogate regret bounds for bipartite ranking via strongly proper losses. The Journal of Machine Learning Research, 15(1):1653–1674, 2014.

Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E Dahl, and Geoffrey E Hinton. Large scale distributed neural network training through online distillation. arXiv preprint arXiv:1804.03235, 2018.

Taichi Asami, Ryo Masumura, Yoshikazu Yamaguchi, Hirokazu Masataki, and Yushi Aono. Domain adaptation of DNN acoustic models using knowledge distillation. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5185–5189. IEEE, 2017.

Jordan T Ash and Ryan P Adams. On warm-starting neural network training. arXiv preprint arXiv:1910.08475, 2019.

Lei Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? arXiv preprint arXiv:1312.6184, 2013.

Dara Bahri and Heinrich Jiang. Locally adaptive label smoothing for predictive churn. arXiv preprint arXiv:2102.05140, 2021.

Joeran Beel, Marcel Genzmehr, Stefan Langer, Andreas Nürnberger, and Bela Gipp. A comparative analysis of offline and online evaluations and discussion of research paper recommender system evaluation. In Proceedings of the International Workshop on Reproducibility and Replication in Recommender Systems Evaluation, pp. 7–14, 2013.

Srinadh Bhojanapalli, Kimberly Wilber, Andreas Veit, Ankit Singh Rawat, Seungyeon Kim, Aditya Menon, and Sanjiv Kumar. On the reproducibility of neural network predictions. arXiv preprint arXiv:2102.03349, 2021.

Andrew Cotter, Heinrich Jiang, Maya R Gupta, Serena Wang, Taman Narayan, Seungil You, and Karthik Sridharan.
Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals. Journal of Machine Learning Research, 20(172):1–59, 2019a.

Andrew Cotter, Heinrich Jiang, and Karthik Sridharan. Two-player games for efficient non-convex constrained optimization. In Algorithmic Learning Theory, pp. 300–332. PMLR, 2019b.

Alex Deng and Xiaolin Shi. Data-driven metric development for online controlled experiments: Seven lessons learned. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 77–86, 2016.

Alex Deng, Ya Xu, Ron Kohavi, and Toby Walker. Improving the sensitivity of online controlled experiments by utilizing pre-experiment data. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, pp. 123–132, 2013.

Bin Dong, Jikai Hou, Yiping Lu, and Zhihua Zhang. Distillation ≈ early stopping? Harvesting dark knowledge utilizing anisotropic information retrieval for overparameterized neural network. arXiv preprint arXiv:1910.01255, 2019.

Mahdi Milani Fard, Quentin Cormier, Kevin Canini, and Maya Gupta. Launch and iterate: Reducing prediction churn. In Advances in Neural Information Processing Systems, pp. 3179–3187, 2016.

Dylan J Foster, Spencer Greenberg, Satyen Kale, Haipeng Luo, Mehryar Mohri, and Karthik Sridharan. Hypothesis set stability and generalization. arXiv preprint arXiv:1904.04755, 2019.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256. JMLR Workshop and Conference Proceedings, 2010.

Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
Gabriel Goh, Andrew Cotter, Maya Gupta, and Michael P Friedlander. Satisfying real-world goals with dataset constraints. In Advances in Neural Information Processing Systems, pp. 2415–2423, 2016.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. Harnessing deep neural networks with logic rules. arXiv preprint arXiv:1603.06318, 2016.

Keras. Keras documentation: Text classification with transformer, 2020. URL https://keras.io/examples/nlp/text_classification_with_transformer/.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, and Li-Jia Li. Learning from noisy labels with distillation. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1910–1918, 2017.

David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, and Vladimir Vapnik. Unifying distillation and privileged information. arXiv preprint arXiv:1511.03643, 2015.

Ilya Loshchilov and Frank Hutter. Online batch selection for faster training of neural networks. arXiv preprint arXiv:1511.06343, 2015.

Aditya Krishna Menon, Ankit Singh Rawat, Sashank J Reddi, Seungyeon Kim, and Sanjiv Kumar. Why distillation helps: a statistical perspective. arXiv preprint arXiv:2005.10419, 2020.

Hossein Mobahi, Mehrdad Farajtabar, and Peter L Bartlett. Self-distillation amplifies regularization in Hilbert space. arXiv preprint arXiv:2002.05715, 2020.

Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597. IEEE, 2016.
Mary Phuong and Christoph Lampert. Towards understanding knowledge distillation. In International Conference on Machine Learning, pp. 5142–5151. PMLR, 2019.

Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668, 2018.

Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.

Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? arXiv preprint arXiv:1805.11604, 2018.

Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):1–48, 2019.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826, 2016.

Georgios Theocharous, Philip S Thomas, and Mohammad Ghavamzadeh. Ad recommendation systems for life-time value optimization. In Proceedings of the 24th International Conference on World Wide Web, pp. 1305–1310, 2015.

James P Turner and Thomas Nowotny. Estimating numerical error in neural network simulations on graphics processing units. BMC Neuroscience, 16(198), 2015.

Vladimir Vapnik and Rauf Izmailov. Learning using privileged information: similarity control and knowledge transfer. Journal of Machine Learning Research, 16(1):2023–2049, 2015.

Robert C Williamson, Elodie Vernet, and Mark D Reid. Composite multiclass losses. Journal of Machine Learning Research, 17:1–52, 2016.

Ruichi Yu, Ang Li, Vlad I Morariu, and Larry S Davis. Visual relationship detection with internal and external linguistic knowledge distillation.
In Proceedings of the IEEE International Conference on Computer Vision, pp. 1974–1982, 2017.

Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.

A PROOFS

A.1 PROOF OF PROPOSITION 1

Proposition (Restated). Let $(\ell, d)$ be defined as in (2) for a strictly proper scoring function $\phi$. Suppose $\phi(u)$ is strictly convex in $u$. Then there exists $\lambda^* \in [0,1]$ such that the following is an optimal-feasible classifier for (1):
$$h^*(x) = \lambda^* p(x) + (1-\lambda^*) g(x).$$
Furthermore, if $u \cdot \phi(u)$ is $\alpha$-strongly concave over $u \in \Delta_m$ w.r.t. the $L_q$-norm, then
$$\lambda^* \le \sqrt{\frac{2\epsilon}{\alpha\, \mathbb{E}_x\big[\|p(x) - g(x)\|_q^2\big]}}.$$

Proof. Let $h^*$ denote an optimal feasible solution for (1). We first note that
$$R(h) = \mathbb{E}_{x,y}[\ell(y, h(x))] = \mathbb{E}_x\big[\mathbb{E}_{y|x}[\ell(y, h(x))]\big] = \mathbb{E}_x\Big[\sum_{i \in [m]} p_i(x)\, \phi_i(h(x))\Big]$$
and
$$C(h) = \mathbb{E}_x\Big[\sum_{i \in [m]} g_i(x)\big(\phi_i(h(x)) - \phi_i(g(x))\big)\Big].$$
Because $\phi_i$ is strictly convex in its argument, both $R(h)$ and $C(h)$ are strictly convex in $h$: for any $\alpha \in [0,1]$ and classifiers $h_1, h_2$, $R(\alpha h_1 + (1-\alpha) h_2) < \alpha R(h_1) + (1-\alpha) R(h_2)$, and similarly for $C$. Furthermore, because $C(g) = 0 < \epsilon$, the constraint is strictly feasible, and hence strong duality holds for (1) (as a result of Slater's condition being satisfied). Therefore (1) can be equivalently formulated as a max-min problem:
$$\max_{\mu \in \mathbb{R}_+}\; \min_h\; R(h) + \mu C(h),$$
for which there exists a $\mu^* \in \mathbb{R}_+$ such that $(\mu^*, h^*)$ is a saddle point. The strict convexity of $R(h)$ and $C(h)$ gives us that $h^*$ is the unique minimizer of $R(h) + \mu^* C(h)$. Setting $\lambda^* = \frac{1}{1+\mu^*}$, we equivalently have that $h^*$ is the unique minimizer of the weighted objective $\lambda^* R(h) + (1-\lambda^*) C(h)$. We next show that the minimizer $h^*$ is of the required form.
Expanding $R$ and $C$, we have:
$$\lambda^* R(h) + (1-\lambda^*) C(h) = \mathbb{E}_x\Big[\sum_{i\in[m]} \big(\lambda^* p_i(x) + (1-\lambda^*) g_i(x)\big)\phi_i(h(x)) - (1-\lambda^*) g_i(x)\phi_i(g(x))\Big]$$
$$= \mathbb{E}_x\Big[\sum_{i\in[m]} \big(\lambda^* p_i(x) + (1-\lambda^*) g_i(x)\big)\phi_i(h(x))\Big] + \text{a term independent of } h$$
$$= \mathbb{E}_x\Big[\sum_{i\in[m]} \bar p_i(x)\, \phi_i(h(x))\Big] + \text{a term independent of } h, \qquad (6)$$
where $\bar p(x) = \lambda^* p(x) + (1-\lambda^*) g(x)$. Note that it suffices to minimize (6) point-wise, i.e., to choose $h^*$ so that the term within the expectation, $\sum_{i\in[m]} \bar p_i(x)\phi_i(h(x))$, is minimized for each $x$. For a fixed $x$, the inner term is minimized when $h^*(x) = \bar p(x)$. This is because of our assumption that $\phi$ is a strictly proper scoring function, i.e., for any distribution $u$, the weighted loss $\sum_i u_i \phi_i(v)$ is uniquely minimized by $v = u$. Therefore (6) is minimized by $h^*(x) = \bar p(x) = \lambda^* p(x) + (1-\lambda^*) g(x)$.

To bound $\lambda^*$, we use a result from Williamson et al. (2016); Agarwal (2014) to lower bound $C(h)$ in terms of the norm difference $\|h(x) - g(x)\|_q$. Define $Q(u) = \inf_{v \in \Delta_m} u \cdot \phi(v)$. Because $\phi$ is a proper scoring function, the infimum is attained at $v = u$. Therefore $Q(u) = u \cdot \phi(u)$, which recall is assumed to be strongly concave. Also, note that $Q(u) = \inf_{v\in\Delta_m} u\cdot\phi(v)$ is an infimum of "linear" functions in $u$, and therefore $\nabla Q(u) = \phi(u)$ is a super-differential for $Q$ at $u$; see Proposition 7 in Williamson et al. (2016) for more details. We now re-write $C(h)$ in terms of $Q$ and lower bound it using the strong concavity property:
$$C(h) = \mathbb{E}_x\big[g(x)\cdot\big(\phi(h(x)) - \phi(g(x))\big)\big]$$
$$= \mathbb{E}_x\big[h(x)\cdot\phi(h(x)) + (g(x) - h(x))\cdot\phi(h(x)) - g(x)\cdot\phi(g(x))\big]$$
$$= \mathbb{E}_x\big[Q(h(x)) + (g(x) - h(x))\cdot\nabla Q(h(x)) - Q(g(x))\big]$$
$$\ge \mathbb{E}_x\Big[\frac{\alpha}{2}\|h(x) - g(x)\|_q^2\Big],$$
where the last step uses the fact that $Q$ is $\alpha$-strongly concave over $u \in \Delta_m$ w.r.t. the $L_q$-norm. Since the optimal scorer $h^*$ satisfies the churn constraint $C(h^*) \le \epsilon$, we have from the above bound
$$\mathbb{E}_x\Big[\frac{\alpha}{2}\|h^*(x) - g(x)\|_q^2\Big] \le \epsilon.$$
Substituting for $h^*$, we have:
$$\mathbb{E}_x\Big[\frac{(\lambda^*)^2\,\alpha}{2}\|p(x) - g(x)\|_q^2\Big] \le \epsilon, \quad\text{or}\quad (\lambda^*)^2 \le \frac{2\epsilon}{\alpha\,\mathbb{E}_x\big[\|p(x)-g(x)\|_q^2\big]},$$
which gives us the desired bound on $\lambda^*$.

A.2 PROOF OF THEOREM 2

Theorem (Restated). Let the scoring function $\phi: \Delta_m \to \mathbb{R}_+^m$ be convex, and $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$. Let the set of classifiers $\mathcal{H}$ be convex, with the base classifier $g \in \mathcal{H}$. Suppose $C$ and $R$ enjoy the following generalization bounds: for any $\delta \in (0,1)$, w.p. $\ge 1-\delta$ over draw of $S \sim D^n$, for any $h \in \mathcal{H}$,
$$|R(h) - \hat R(h)| \le \Delta_R(n, \delta); \qquad |C(h) - \hat C(h)| \le \Delta_C(n, \delta),$$
for some $\Delta_R(n,\delta)$ and $\Delta_C(n,\delta)$ that are decreasing in $n$ and approach 0 as $n \to \infty$. Let $\tilde h$ be an optimal-feasible classifier in $\mathcal{H}$, i.e., $C(\tilde h) \le \epsilon$ and $R(\tilde h) \le R(h)$ for all classifiers $h$ for which $C(h) \le \epsilon$. Let $\hat h$ be the classifier returned by Algorithm 1 with $\Lambda = \big\{\max\{\tfrac{\epsilon}{\epsilon+2B}, u\}\,\big|\, u \in \{\tfrac1L, \tfrac2L, \ldots, 1\}\big\}$ for some $L \in \mathbb{N}_+$. For any $\delta \in (0,1)$, w.p. $\ge 1-\delta$ over draw of $S \sim D^n$:

Optimality: $R(\hat h) \le R(\tilde h) + O\big(\big(1 + \tfrac{2B}{\epsilon}\big)\big(\Delta_R(n,\delta) + \Delta_C(n,\delta) + \tfrac{B}{L}\big)\big)$,

Feasibility: $C(\hat h) \le \epsilon + \Delta_C(n,\delta)$.

We first note that because $\|\phi(z)\|_\infty < B$, $\forall z \in \Delta_m$, both $\hat R(h) < B$ and $\hat C(h) < B$. Also, because $\phi_i$ is convex, both $\hat R(h)$ and $\hat C(h)$ are convex in $h$: for any $\alpha \in [0,1]$ and classifiers $h_1, h_2$, $\hat R(\alpha h_1 + (1-\alpha)h_2) \le \alpha \hat R(h_1) + (1-\alpha)\hat R(h_2)$, and similarly for $\hat C$. Furthermore, the objective in (4) can be decomposed into a convex combination of the empirical risk and churn:
$$\hat L_\lambda(h) = \frac1n \sum_{i=1}^n \big(\lambda e_{y_i} + (1-\lambda) g(x_i)\big)\cdot\phi(h(x_i)) = \lambda \hat R(h) + (1-\lambda)\hat C(h) + \frac{1-\lambda}{n}\sum_{i=1}^n g(x_i)\cdot\phi(g(x_i)).$$
Therefore minimizing $\hat L_\lambda(h)$ is equivalent to minimizing the Lagrangian function
$$\tilde L_\lambda(h) = \lambda \hat R(h) + (1-\lambda)\big(\hat C(h) - \epsilon\big) \qquad (7)$$
over $h$. Moreover, each $h_k$ minimizes $\tilde L_{\lambda_k}(h)$. We also note that the churn-constrained optimization problem in (1) can be posed as a Lagrangian game between a player that seeks to minimize the above Lagrangian over $h$ and a player that seeks to maximize the Lagrangian over $\lambda$.
The next two lemmas show that Algorithm 1 can be seen as finding an approximate equilibrium of this two-player game.

Lemma 5. Let the assumptions on $\phi$ and $\mathcal{H}$ in Theorem 2 hold. Let $\hat h$ be the classifier returned by Algorithm 1 when $\Lambda$ is set to the discretization $\Lambda = \big\{\max\{\tfrac{\epsilon}{\epsilon+2B}, u\}\,\big|\, u \in \{\tfrac1L, \ldots, 1\}\big\}$ of the range $[\tfrac{\epsilon}{\epsilon+2B}, 1]$ for some $L \in \mathbb{N}_+$. Then there exists a bounded Lagrange multiplier $\bar\lambda \in [\tfrac{\epsilon}{\epsilon+2B}, 1]$ such that $(\hat h, \bar\lambda)$ forms an equilibrium of the Lagrangian min-max game:
$$\bar\lambda \hat R(\hat h) + (1-\bar\lambda)(\hat C(\hat h) - \epsilon) = \min_{h \in \mathrm{co}(h_1,\ldots,h_L)} \bar\lambda \hat R(h) + (1-\bar\lambda)(\hat C(h) - \epsilon)$$
$$\max_{\lambda \in [0,1]} (1-\lambda)(\hat C(\hat h) - \epsilon) = (1-\bar\lambda)(\hat C(\hat h) - \epsilon).$$

Proof. The classifier $\hat h$ returned by Algorithm 1 is a solution to the following constrained optimization problem over the convex hull of the classifiers $h_1, \ldots, h_L$:
$$\min_{h \in \mathrm{co}(h_1,\ldots,h_L)} \hat R(h) \quad \text{s.t.} \quad \hat C(h) \le \epsilon.$$
Consequently, there exists a $\bar\lambda \in [0,1]$ such that:
$$\bar\lambda \hat R(\hat h) + (1-\bar\lambda)(\hat C(\hat h) - \epsilon) = \min_{h\in\mathrm{co}(h_1,\ldots,h_L)} \bar\lambda \hat R(h) + (1-\bar\lambda)(\hat C(h)-\epsilon) \qquad (8)$$
$$\max_{\lambda\in[0,1]} (1-\lambda)(\hat C(\hat h) - \epsilon) = (1-\bar\lambda)(\hat C(\hat h)-\epsilon). \qquad (9)$$
To see this, note that the KKT conditions (along with the convexity of $\hat R$ and $\hat C$) give us that there exists a Lagrange multiplier $\bar\mu \ge 0$ such that
$$\hat h \in \mathop{\mathrm{argmin}}_{h\in\mathrm{co}(h_1,\ldots,h_L)} \hat R(h) + \bar\mu(\hat C(h) - \epsilon) \quad \text{(stationarity)}$$
$$\bar\mu(\hat C(\hat h) - \epsilon) = 0 \quad \text{(complementary slackness)}.$$
When $\hat C(\hat h) < \epsilon$, $\bar\mu = 0$, and so (8) and (9) are satisfied for $\bar\lambda = 1$. When $\hat C(\hat h) = \epsilon$, then (8) and (9) are satisfied for $\bar\lambda = \frac{1}{1+\bar\mu}$.

It remains to show that $\bar\lambda \in [\tfrac{\epsilon}{\epsilon+2B}, 1]$. For this, we first show that there exists an $h' \in \mathrm{co}(h_1,\ldots,h_L)$ such that $\hat C(h') \le \epsilon/2$. To see why, pick $h'$ to be the minimizer of the Lagrangian $\tilde L_\lambda(h)$ over all $h \in \mathcal{H}$ for $\lambda = \tfrac{\epsilon}{\epsilon+2B}$ (note that this value of $\lambda$ belongs to $\Lambda$, so $h'$ is one of the $h_k$ and hence lies in $\mathrm{co}(h_1,\ldots,h_L)$). Because $\tilde L_\lambda(h') \le \tilde L_\lambda(g) \le \lambda B - (1-\lambda)\epsilon$, where $g$ is the base classifier that we have assumed is in $\mathcal{H}$, it follows that $\hat C(h') \le \frac{\lambda}{1-\lambda} B \le \epsilon/2$. Next, by combining (8) and (9), we have
$$\bar\lambda \hat R(\hat h) + \max_{\lambda\in[0,1]}(1-\lambda)(\hat C(\hat h)-\epsilon) = \min_{h\in\mathrm{co}(h_1,\ldots,h_L)} \bar\lambda \hat R(h) + (1-\bar\lambda)(\hat C(h)-\epsilon).$$
Lower bounding the LHS by setting $\lambda = 1$ and upper bounding the RHS by setting $h = h'$, we get:
$$\bar\lambda \hat R(\hat h) \le \bar\lambda \hat R(h') - (1-\bar\lambda)\frac{\epsilon}{2},$$
which gives us:
$$\epsilon/2 \le \bar\lambda\big(\epsilon/2 + \hat R(h') - \hat R(\hat h)\big) \le \bar\lambda(\epsilon/2 + B).$$
Hence $\bar\lambda \ge \frac{\epsilon}{\epsilon+2B}$, which completes the proof.

Lemma 6. Let $\hat h$ be the classifier returned by Algorithm 1 when $\Lambda$ is set to the discretization $\Lambda = \big\{\max\{\tfrac{\epsilon}{\epsilon+2B}, u\}\,\big|\, u \in \{\tfrac1L, \ldots, 1\}\big\}$ of the range $[\tfrac{\epsilon}{\epsilon+2B}, 1]$ for some $L \in \mathbb{N}_+$. Fix $\delta \in (0,1)$. Suppose $R$ and $C$ satisfy the generalization bounds in Theorem 2 with error bounds $\Delta_R(n,\delta)$ and $\Delta_C(n,\delta)$ respectively. Then there exists a bounded Lagrange multiplier $\hat\lambda \in [\tfrac{\epsilon}{\epsilon+2B}, 1]$ such that $(\hat h, \hat\lambda)$ forms an approximate equilibrium for the Lagrangian min-max game, i.e., w.p. $\ge 1-\delta$ over draw of sample $S \sim D^n$,
$$\hat\lambda R(\hat h) + (1-\hat\lambda)(C(\hat h)-\epsilon) \le \min_{h\in\mathcal{H}} \hat\lambda R(h) + (1-\hat\lambda)(C(h)-\epsilon) + O\big(\Delta_R(n,\delta) + \Delta_C(n,\delta) + B/L\big) \qquad (10)$$
and
$$\max_{\lambda\in[0,1]} (1-\lambda)(C(\hat h)-\epsilon) \le (1-\hat\lambda)(C(\hat h)-\epsilon) + O\big(\Delta_C(n,\delta) + B/L\big). \qquad (11)$$

Proof. We have from Lemma 5 that there exists $\bar\lambda \in [\tfrac{\epsilon}{\epsilon+2B}, 1]$ such that
$$\bar\lambda \hat R(\hat h) + (1-\bar\lambda)(\hat C(\hat h)-\epsilon) = \min_{h\in\mathrm{co}(h_1,\ldots,h_L)} \bar\lambda \hat R(h) + (1-\bar\lambda)(\hat C(h)-\epsilon) \qquad (12)$$
$$\max_{\lambda\in[0,1]} (1-\lambda)(\hat C(\hat h)-\epsilon) = (1-\bar\lambda)(\hat C(\hat h)-\epsilon). \qquad (13)$$
Algorithm 1 works with the discretization $\Lambda$ of the range $[\tfrac{\epsilon}{\epsilon+2B}, 1]$. Letting $\hat\lambda$ denote the closest value to $\bar\lambda$ in this set, we have from (12):
$$\hat\lambda \hat R(\hat h) + (1-\hat\lambda)(\hat C(\hat h)-\epsilon) \le \min_{h\in\mathrm{co}(h_1,\ldots,h_L)} \hat\lambda \hat R(h) + (1-\hat\lambda)(\hat C(h)-\epsilon) + \frac{4B}{L} = \min_{h\in\mathcal{H}} \hat\lambda\hat R(h) + (1-\hat\lambda)(\hat C(h)-\epsilon) + \frac{4B}{L}, \qquad (14)$$
where the last step follows from the fact that $\mathrm{co}(h_1,\ldots,h_L) \subseteq \mathcal{H}$ and each $h_k$ was chosen to minimize $\lambda_k \hat R(h) + (1-\lambda_k)(\hat C(h)-\epsilon)$ for $\lambda_k \in \Lambda$. Similarly, we have from (13):
$$\max_{\lambda\in[0,1]} (1-\lambda)(\hat C(\hat h)-\epsilon) \le (1-\hat\lambda)(\hat C(\hat h)-\epsilon) + \frac{B}{L}. \qquad (15)$$
What remains is to apply the generalization bounds for $R$ and $C$ to (14) and (15). We first bound the LHS of (14).
We have with probability at least $1-\delta$ over draw of $S \sim D^n$:
$$\hat\lambda\hat R(\hat h) + (1-\hat\lambda)(\hat C(\hat h)-\epsilon) \ge \hat\lambda R(\hat h) + (1-\hat\lambda)(C(\hat h)-\epsilon) - \hat\lambda\Delta_R(n,\delta) - (1-\hat\lambda)\Delta_C(n,\delta)$$
$$\ge \hat\lambda R(\hat h) + (1-\hat\lambda)(C(\hat h)-\epsilon) - \Delta_R(n,\delta) - \Delta_C(n,\delta), \qquad (16)$$
where the last step uses the fact that $0 \le \hat\lambda \le 1$. For the RHS, we have with the same probability:
$$\min_{h\in\mathcal{H}}\big\{\hat\lambda\hat R(h) + (1-\hat\lambda)(\hat C(h)-\epsilon)\big\} + 4B/L \le \min_{h\in\mathcal{H}}\big\{\hat\lambda R(h) + (1-\hat\lambda)(C(h)-\epsilon) + 4B/L + \hat\lambda\Delta_R(n,\delta) + (1-\hat\lambda)\Delta_C(n,\delta)\big\}$$
$$\le \min_{h\in\mathcal{H}}\big\{\hat\lambda R(h) + (1-\hat\lambda)(C(h)-\epsilon)\big\} + 4B/L + \Delta_R(n,\delta) + \Delta_C(n,\delta), \qquad (17)$$
where we again use $0 \le \hat\lambda \le 1$. Combining (14) with (16) and (17) completes the proof for the first part of the lemma. Applying the generalization bounds to (15), we have with the same probability:
$$B/L \ge \max_{\lambda\in[0,1]}(1-\lambda)(\hat C(\hat h)-\epsilon) - (1-\hat\lambda)(\hat C(\hat h)-\epsilon)$$
$$\ge \max_{\lambda\in[0,1]}\big\{(1-\lambda)(C(\hat h)-\epsilon) - (1-\lambda)\Delta_C(n,\delta)\big\} - (1-\hat\lambda)(C(\hat h)-\epsilon) - (1-\hat\lambda)\Delta_C(n,\delta)$$
$$\ge \max_{\lambda\in[0,1]}(1-\lambda)(C(\hat h)-\epsilon) - (1-\hat\lambda)(C(\hat h)-\epsilon) - 2\Delta_C(n,\delta),$$
which completes the proof for the second part of the lemma.

We are now ready to prove Theorem 2.

Proof of Theorem 2. To show optimality, we combine (10) and (11) and get:
$$\hat\lambda R(\hat h) + \max_{\lambda\in[0,1]}(1-\lambda)(C(\hat h)-\epsilon) \le \min_{h\in\mathcal{H}} \hat\lambda R(h) + (1-\hat\lambda)(C(h)-\epsilon) + O\big(\Delta_R(n,\delta)+\Delta_C(n,\delta)+B/L\big).$$
We then lower bound the LHS of this inequality by setting $\lambda = 1$ and upper bound the RHS by setting $h$ to the optimal feasible solution $\tilde h$ (for which $C(\tilde h) - \epsilon \le 0$), giving us:
$$\hat\lambda R(\hat h) \le \hat\lambda R(\tilde h) + O\big(\Delta_R(n,\delta)+\Delta_C(n,\delta)+\tfrac{B}{L}\big).$$
Dividing both sides by $\hat\lambda$:
$$R(\hat h) \le R(\tilde h) + \frac{1}{\hat\lambda}\, O\big(\Delta_R(n,\delta)+\Delta_C(n,\delta)+\tfrac{B}{L}\big).$$
Lower bounding $\hat\lambda$ by $\frac{\epsilon}{\epsilon+2B}$ gives us the desired optimality result. The feasibility result directly follows from the fact that Algorithm 1 chooses an $\hat h$ that satisfies the empirical churn constraint $\hat C(\hat h) \le \epsilon$, and from the generalization bound for $C$.

A.3 PROOF OF PROPOSITION 3

Proposition (Restated). Let $(\ell, d)$ be defined as in (2) for a strictly proper scoring function $\phi$.
Suppose $\phi(u)$ is strictly convex in $u$, $\Phi$-Lipschitz w.r.t. the $L_1$-norm for each $y \in [m]$, and $\|\phi(z)\|_\infty < B$, $\forall z\in\Delta_m$. Let $\lambda^*$ be the optimal mixing coefficient defined in Proposition 1. Let $\Delta_C(n,\delta)$ be the churn generalization bound defined in Theorem 2. Let $\tilde h$ be an optimal feasible classifier in $\mathcal{H}$ and $\hat h$ be the classifier returned by Algorithm 1. Then for any $\delta\in(0,1)$, w.p. $\ge 1-\delta$ over draw of $S\sim D^n$:
$$R(\hat h) - R(\tilde h) \le \epsilon + \Delta_C(n,\delta) + (B + \Phi\lambda^*)\,\mathbb{E}_{x\sim D_X}\big[\|p(x)-g(x)\|_1\big].$$

Proof. Let $h^*$ be the Bayes optimal classifier, i.e., the optimal-feasible classifier over all classifiers (not just those in $\mathcal{H}$). We have:
$$R(\hat h) - R(\tilde h) \le R(\hat h) - R(h^*)$$
$$= \mathbb{E}_{x\sim D_X}\big[\mathbb{E}_{y|x}[e_y\cdot\phi(\hat h(x))]\big] - \mathbb{E}_{x\sim D_X}\big[\mathbb{E}_{y|x}[e_y\cdot\phi(h^*(x))]\big]$$
$$= \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\phi(\hat h(x))\big] - \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\phi(h^*(x))\big]$$
$$= \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] - \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\big(\phi(h^*(x))-\phi(g(x))\big)\big]$$
$$\le \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + \Big|\mathbb{E}_{x\sim D_X}\Big[\sum_{y\in[m]} p_y(x)\big(\phi_y(h^*(x))-\phi_y(g(x))\big)\Big]\Big|$$
$$\le \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + \mathbb{E}_{x\sim D_X}\Big[\sum_{y\in[m]} p_y(x)\,\big|\phi_y(h^*(x))-\phi_y(g(x))\big|\Big]$$
$$\le \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + \Phi\,\mathbb{E}_{x\sim D_X}\Big[\sum_{y\in[m]} p_y(x)\,\|h^*(x)-g(x)\|_1\Big]$$
$$\le \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + \Phi\,\mathbb{E}_{x\sim D_X}\big[\|h^*(x)-g(x)\|_1\big],$$
where the second-last step follows from Jensen's inequality, and the last step uses the Lipschitz assumption on $\phi_y$.
We further have:
$$R(\hat h)-R(\tilde h) \le \mathbb{E}_{x\sim D_X}\big[g(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + \mathbb{E}_{x\sim D_X}\big[(p(x)-g(x))\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + \Phi\,\mathbb{E}_{x\sim D_X}\big[\|h^*(x)-g(x)\|_1\big]$$
$$\le \mathbb{E}_{x\sim D_X}\big[g(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + \mathbb{E}_{x\sim D_X}\big[\|p(x)-g(x)\|_1\,\|\phi(\hat h(x))-\phi(g(x))\|_\infty\big] + \Phi\,\mathbb{E}_{x\sim D_X}\big[\|h^*(x)-g(x)\|_1\big]$$
$$\le \mathbb{E}_{x\sim D_X}\big[g(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + B\,\mathbb{E}_{x\sim D_X}\big[\|p(x)-g(x)\|_1\big] + \Phi\,\mathbb{E}_{x\sim D_X}\big[\|h^*(x)-g(x)\|_1\big]$$
$$\le \mathbb{E}_{x\sim D_X}\big[g(x)\cdot\big(\phi(\hat h(x))-\phi(g(x))\big)\big] + B\,\mathbb{E}_{x\sim D_X}\big[\|p(x)-g(x)\|_1\big] + \lambda^*\Phi\,\mathbb{E}_{x\sim D_X}\big[\|p(x)-g(x)\|_1\big]$$
$$= C(\hat h) + (B+\lambda^*\Phi)\,\mathbb{E}_{x\sim D_X}\big[\|p(x)-g(x)\|_1\big],$$
where the second step applies Hölder's inequality to each $x$, the third step follows from the boundedness assumption on $\phi$, and the fourth step uses the characterization $h^*(x) = \lambda^* p(x) + (1-\lambda^*)g(x)$ for $\lambda^* \in [0,1]$ from Proposition 1, so that $\|h^*(x)-g(x)\|_1 = \lambda^*\|p(x)-g(x)\|_1$. Applying Theorem 2 to the churn $C(\hat h)$ completes the proof.

A.4 PROOF OF PROPOSITION 4

Proposition (Restated). When $g(x) = p(x)$, $\forall x$, for any given $\lambda \in [0,1]$, the minimizer of the distillation loss in (3) over all classifiers $h$ is given by $h^*(x) = p(x)$, whereas the minimizer of the anchor loss in (5) is given by $h^*_j(x) = \frac{z_j}{\sum_j z_j}$, where
$$z_j = \begin{cases} \alpha\, p_j^2(x) + (1-\alpha)\,p_j(x) & \text{if } j = \mathrm{argmax}_k\, p_k(x), \\ \big(\epsilon + \alpha \max_k p_k(x)\big)\, p_j(x) & \text{otherwise.} \end{cases}$$

Proof. For the first part, we expand (3) with $g(x) = p(x)$, and have for any $\lambda\in[0,1]$:
$$L_\lambda(h) = \mathbb{E}_{(x,y)\sim D}\big[(\lambda e_y + (1-\lambda)p(x))\cdot\phi(h(x))\big] \qquad (18)$$
$$= \lambda\,\mathbb{E}_{(x,y)\sim D}\big[e_y\cdot\phi(h(x))\big] + (1-\lambda)\,\mathbb{E}_{x\sim D_X}\big[p(x)\cdot\phi(h(x))\big]$$
$$= \lambda\,\mathbb{E}_{x\sim D_X}\big[\mathbb{E}_{y|x}[e_y]\cdot\phi(h(x))\big] + (1-\lambda)\,\mathbb{E}_{x\sim D_X}\big[p(x)\cdot\phi(h(x))\big]$$
$$= \lambda\,\mathbb{E}_{x\sim D_X}\big[p(x)\cdot\phi(h(x))\big] + (1-\lambda)\,\mathbb{E}_{x\sim D_X}\big[p(x)\cdot\phi(h(x))\big] = \mathbb{E}_{x\sim D_X}\big[p(x)\cdot\phi(h(x))\big]. \qquad (19)$$
For a fixed $x$, the inner term in (19) is minimized when $h^*(x) = p(x)$. This is because of our assumption that $\phi$ is a strictly proper scoring function, i.e., for any distribution $u$, the weighted loss $\sum_i u_i\phi_i(v)$ is uniquely minimized by $v=u$. Therefore (19) is minimized by $h^*(x) = p(x)$, $\forall x$.
For the second part, we expand (5) with $g(x) = p(x)$, and have:
$$L_{\mathrm{anc}}(h) = \mathbb{E}_{(x,y)\sim D}\big[a\cdot\phi(h(x))\big], \quad\text{where}\quad a = \begin{cases} \alpha\, p(x) + (1-\alpha)\,e_y & \text{if } y = \mathrm{argmax}_k\, p_k(x), \\ \epsilon\, e_y & \text{otherwise.}\end{cases}$$
For a given $x$, let us denote $j_x = \mathrm{argmax}_k\, p_k(x)$. We then have:
$$L_{\mathrm{anc}}(h) = \mathbb{E}_{(x,y)\sim D}\big[\big(\mathbf{1}(y=j_x)(\alpha p(x) + (1-\alpha)e_y) + \mathbf{1}(y\ne j_x)\,\epsilon\, e_y\big)\cdot\phi(h(x))\big]$$
$$= \mathbb{E}_{x\sim D_X}\Big[\sum_k p_k(x)\big(\mathbf{1}(k=j_x)(\alpha p(x)+(1-\alpha)e_k) + \mathbf{1}(k\ne j_x)\,\epsilon\, e_k\big)\cdot\phi(h(x))\Big]$$
$$= \mathbb{E}_{x\sim D_X}\Big[p_{j_x}(x)\big(\alpha p(x)+(1-\alpha)e_{j_x}\big)\cdot\phi(h(x)) + \epsilon\sum_{k\ne j_x} p_k(x)\,\phi_k(h(x))\Big]$$
$$= \mathbb{E}_{x\sim D_X}\Big[p_{j_x}(x)\big(\alpha p_{j_x}(x)+(1-\alpha)\big)\phi_{j_x}(h(x)) + p_{j_x}(x)\sum_{k\ne j_x}\alpha p_k(x)\,\phi_k(h(x)) + \epsilon\sum_{k\ne j_x} p_k(x)\,\phi_k(h(x))\Big]$$
$$= \mathbb{E}_{x\sim D_X}\Big[p_{j_x}(x)\big(\alpha p_{j_x}(x)+(1-\alpha)\big)\phi_{j_x}(h(x)) + \big(\alpha p_{j_x}(x)+\epsilon\big)\sum_{k\ne j_x} p_k(x)\,\phi_k(h(x))\Big]$$
$$= \mathbb{E}_{x\sim D_X}\big[\tilde p(x)\cdot\phi(h(x))\big], \qquad (20)$$
where
$$\tilde p_s(x) = \begin{cases}\alpha\, p_s^2(x) + (1-\alpha)\,p_s(x) & \text{if } s = j_x = \mathrm{argmax}_k\, p_k(x),\\ \big(\alpha\max_k p_k(x) + \epsilon\big)\, p_s(x) & \text{otherwise.}\end{cases}$$
For a fixed $x$, the inner term in (20) is minimized when $h^*(x) = \frac{1}{Z(x)}\tilde p(x)$, where $Z(x) = \sum_k \tilde p_k(x)$. This follows from the fact that, for a fixed $x$, the minimizer of the inner term $\tilde p(x)\cdot\phi(h(x))$ is the same as the minimizer of the scaled term $\frac{1}{Z(x)}\tilde p(x)\cdot\phi(h(x))$, and from $\phi$ being a strictly proper scoring function. This completes the proof.

B GENERALIZATION BOUNDS

We adapt a result from Menon et al. (2020) to provide generalization bounds for the classification risk and churn in terms of the empirical variance of the loss and churn values.

Proposition 7 (Generalization bound for classification risk). Let the scoring function $\phi: \Delta_m \to \mathbb{R}_+^m$ be bounded. Let $\mathcal{V}_\phi \subseteq \mathbb{R}^{\mathcal{X}}$ denote the class of loss functions $v(x,y) = \ell_\phi(y, h(x)) = \phi_y(h(x))$ induced by classifiers $h\in\mathcal{H}$. Let $M_n^R = \mathcal{N}_\infty(\tfrac1n, \mathcal{V}_\phi, 2n)$ denote the uniform $L_\infty$ covering number for $\mathcal{V}_\phi$. Fix $\delta\in(0,1)$. Then with probability $\ge 1-\delta$ over draw of $S\sim D^n$, for any $h\in\mathcal{H}$:
$$R(h) \le \hat R(h) + O\Bigg(\sqrt{\frac{\mathbb{V}_n^R(h)\,\log(M_n^R/\delta)}{n}} + \frac{\log(M_n^R/\delta)}{n}\Bigg),$$
where $\mathbb{V}_n^R(h)$ denotes the empirical variance of the loss computed on the $n$ examples $\{e_{y_i}^\top\phi(h(x_i))\}_{i=1}^n$.

Proposition 8 (Generalization bound for churn). Let the scoring function $\phi: \Delta_m\to\mathbb{R}_+^m$ be bounded. For base classifier $g$, let $\mathcal{U}_\phi \subseteq \mathbb{R}^{\mathcal{X}}$ denote the corresponding class of divergence functions $u(x) = d_\phi(h(x), g(x)) = g(x)^\top\big(\phi(h(x)) - \phi(g(x))\big)$ induced by classifiers $h\in\mathcal{H}$. Let $M_n^C = \mathcal{N}_\infty(\tfrac1n, \mathcal{U}_\phi, 2n)$ denote the uniform $L_\infty$ covering number for $\mathcal{U}_\phi$. Fix $\delta\in(0,1)$. Then with probability $\ge 1-\delta$ over draw of $S\sim D^n$, for any $h\in\mathcal{H}$:
$$C(h) \le \hat C(h) + O\Bigg(\sqrt{\frac{\mathbb{V}_n^C(h)\,\log(M_n^C/\delta)}{n}} + \frac{\log(M_n^C/\delta)}{n}\Bigg),$$
where $\mathbb{V}_n^C(h)$ denotes the empirical variance of the churn values computed on the $n$ examples $\{g(x_i)^\top(\phi(h(x_i)) - \phi(g(x_i)))\}_{i=1}^n$.

The term $\mathbb{V}_n^C$ captures the variance in the labels provided by the base model $g$. When this term is low (which is what we would expect in practice from a base model that makes soft predictions), the churn metric enjoys a tight generalization bound. In contrast, the classification risk is evaluated on one-hot training labels, and the variance term $\mathbb{V}_n^R$ there is not impacted by the base model's scores.

C DEFINITIONS OF NETWORK ARCHITECTURES USED

C.1 FULLY CONNECTED NETWORK

FCN-x refers to the following model with size set to "x". In other words, it is a simple fully connected network with one hidden layer of x units.

def get_fcn(n_columns, num_classes=10, size=100, weight_init=None):
  model = tf.keras.Sequential([
      tf.keras.layers.Input(shape=(n_columns,)),
      tf.keras.layers.Dense(size, activation=tf.nn.relu),
      tf.keras.layers.Dense(num_classes, activation="softmax"),
  ])
  model.compile(
      optimizer=tf.keras.optimizers.Adam(),
      loss=tf.keras.losses.CategoricalCrossentropy(),
      metrics=[tf.keras.metrics.categorical_accuracy])
  return model

C.2 CONVOLUTIONAL NETWORK

Convnet-x refers to the following model with size set to "x". Convnet-1 is based on the LeNet-5 architecture of LeCun et al.
(1998).

def get_convnet(input_shape=(28, 28, 3), size=1, num_classes=2, weight_init=None):
  model = tf.keras.Sequential()
  model.add(tf.keras.layers.Conv2D(
      filters=16 * size, kernel_size=(5, 5), padding="same",
      activation="relu", input_shape=input_shape))
  model.add(tf.keras.layers.MaxPool2D(strides=2))
  model.add(tf.keras.layers.Conv2D(
      filters=24 * size, kernel_size=(5, 5), padding="valid",
      activation="relu"))
  model.add(tf.keras.layers.MaxPool2D(strides=2))
  model.add(tf.keras.layers.Flatten())
  model.add(tf.keras.layers.Dense(128 * size, activation="relu"))
  model.add(tf.keras.layers.Dense(84, activation="relu"))
  model.add(tf.keras.layers.Dense(num_classes, activation="softmax"))
  model.compile(
      optimizer=tf.keras.optimizers.Adam(),
      loss=tf.keras.losses.CategoricalCrossentropy(),
      metrics=[tf.keras.metrics.categorical_accuracy])
  return model

C.3 TRANSFORMER

Transformer-x refers to the following model with size set to "x". It is based on the Keras tutorial on text classification (https://keras.io/examples/nlp/text_classification_with_transformer/, licensed under the Apache License, Version 2.0).
def get_transformer(maxlen, size=1, num_classes=2, weight_init=None):

  class TransformerBlock(tf.keras.layers.Layer):

    def __init__(self, embed_dim, num_heads, ff_dim, rate=0.1,
                 weight_init=None):
      super(TransformerBlock, self).__init__()
      self.att = tf.keras.layers.MultiHeadAttention(
          num_heads=num_heads, key_dim=embed_dim)
      self.ffn = tf.keras.Sequential([
          tf.keras.layers.Dense(ff_dim, activation="relu"),
          tf.keras.layers.Dense(embed_dim),
      ])
      self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
      self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)

    def call(self, inputs, training):
      attn_output = self.att(inputs, inputs)
      # attn_output = self.dropout1(attn_output, training=training)
      out1 = self.layernorm1(inputs + attn_output)
      ffn_output = self.ffn(out1)
      return self.layernorm2(out1 + ffn_output)

  class TokenAndPositionEmbedding(tf.keras.layers.Layer):

    def __init__(self, maxlen, vocab_size, embed_dim):
      super(TokenAndPositionEmbedding, self).__init__()
      self.token_emb = tf.keras.layers.Embedding(
          input_dim=vocab_size, output_dim=embed_dim)
      self.pos_emb = tf.keras.layers.Embedding(
          input_dim=maxlen, output_dim=embed_dim)

    def call(self, x):
      maxlen = tf.shape(x)[-1]
      positions = tf.range(start=0, limit=maxlen, delta=1)
      positions = self.pos_emb(positions)
      x = self.token_emb(x)
      return x + positions

  embed_dim = 32 * size  # Embedding size for each token.
  num_heads = 2 * size   # Number of attention heads.
  ff_dim = 32 * size     # Hidden layer size in the feed-forward network inside the transformer.

  inputs = tf.keras.layers.Input(shape=(maxlen,))
  embedding_layer = TokenAndPositionEmbedding(maxlen, 20000, embed_dim)
  x = embedding_layer(inputs)
  transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim,
                                       weight_init=weight_init)
  x = transformer_block(x)
  x = tf.keras.layers.GlobalAveragePooling1D()(x)
  outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
  model = tf.keras.Model(inputs=inputs, outputs=outputs)
  model.compile(
      optimizer=tf.keras.optimizers.Adam(),
      loss=tf.keras.losses.CategoricalCrossentropy(),
      metrics=[tf.keras.metrics.categorical_accuracy])
  return model

D MODEL TRAINING CODE

def model_trainer(get_model, X_train, y_train, X_test, y_test,
                  weight_init=None, validation_data=None, warm=True,
                  mixup_alpha=-1, codistill_alpha=-1, distill_alpha=-1,
                  anchor_alpha=-1, anchor_eps=-1):
  model = get_model()
  if weight_init is not None and warm:
    model.set_weights(weight_init)
  if FLAGS.loss == "squared":
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),
        loss=tf.keras.losses.MeanSquaredError(),
        metrics=[tf.keras.metrics.categorical_accuracy])
  callback = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)
  history = None
  if distill_alpha >= 0:
    # Distillation: train on a convex combination of the original (teacher)
    # model's predictions and the ground-truth labels.
    original_model = get_model()
    original_model.set_weights(weight_init)
    y_pred = original_model.predict(X_train)
    y_use = distill_alpha * y_pred + (1 - distill_alpha) * y_train
    history = model.fit(x=X_train, y=y_use, epochs=FLAGS.n_epochs,
                        callbacks=[callback], validation_data=validation_data)
  elif anchor_alpha >= 0 and anchor_eps >= 0:
    # Anchor: mix in the teacher's predictions only where the teacher is
    # correct; scale down the labels by anchor_eps elsewhere.
    original_model = get_model()
    original_model.set_weights(weight_init)
    y_pred = original_model.predict(X_train)
    y_pred_hard = np.argmax(y_pred, axis=1)
    y_hard = np.argmax(y_train, axis=1)
    correct = (y_pred_hard == y_hard)
    correct = np.tile(correct, (y_train.shape[1], 1))
    correct = np.transpose(correct)
    correct = correct.reshape(y_train.shape)
    y_use = np.where(correct,
                     anchor_alpha * y_pred + (1 - anchor_alpha) * y_train,
                     y_train * anchor_eps)
    history = model.fit(x=X_train, y=y_use, epochs=FLAGS.n_epochs,
                        callbacks=[callback], validation_data=validation_data)
  elif mixup_alpha >= 0:
    training_generator = deep_utils.MixupGenerator(
        X_train, y_train, alpha=mixup_alpha)()
    history = model.fit(x=training_generator,
                        validation_data=validation_data,
                        steps_per_epoch=int(X_train.shape[0] / 32),
                        epochs=FLAGS.n_epochs, callbacks=[callback])
  elif codistill_alpha >= 0:
    teacher_model = get_model()
    if weight_init is not None and warm:
      teacher_model.set_weights(weight_init)
    val_losses = []
    optimizer = tf.keras.optimizers.Adam()
    global_step = 0
    alpha = 0
    codistillation_warmup_steps = 0
    for epoch in range(FLAGS.n_epochs):
      X_train_, y_train_ = sklearn.utils.shuffle(X_train, y_train)
      batch_size = 32
      for i in range(int(X_train_.shape[0] / batch_size)):
        if global_step >= codistillation_warmup_steps:
          alpha = codistill_alpha
        else:
          alpha = 0.
        with tf.GradientTape() as tape:
          X_batch = X_train_[i * 32:(i + 1) * 32, :]
          y_batch = y_train_[i * 32:(i + 1) * 32, :]
          prob_student = model(X_batch, training=True)
          prob_teacher = teacher_model(X_batch, training=True)
          loss = deep_utils.compute_loss(prob_student, prob_teacher,
                                         y_batch, alpha)
        trainable_weights = (model.trainable_weights +
                             teacher_model.trainable_weights)
        grads = tape.gradient(loss, trainable_weights)
        optimizer.apply_gradients(zip(grads, trainable_weights))
        global_step += 1
      val_preds = model.predict(validation_data[0])
      val_loss = np.sum(deep_utils.cross_entropy(
          validation_data[1].astype("float32"), val_preds))
      val_losses.append(val_loss)
      if len(val_losses) > 3 and min(val_losses[-3:]) > val_losses[-4]:
        break
  else:
    history = model.fit(X_train, y_train, epochs=FLAGS.n_epochs,
                        callbacks=[callback], validation_data=validation_data)
  y_pred_train = model.predict(X_train)
  y_pred_test = model.predict(X_test)
  return y_pred_train, y_pred_test, model.get_weights()

E ADDITIONAL EXPERIMENTAL RESULTS

E.1 ADDITIONAL OPENML RESULTS

E.1.1 INITIAL SAMPLE 100, BATCH SIZE 1000, VALIDATION SIZE 100

In Tables 3 and 4, we show
the churn at cold accuracy metric across network sizes (fcn-10, fcn-100, fcn-1000, fcn-10000, fcn-100000). Table 5 shows the standard error bars. They are obtained by fixing the dataset and model, taking the 100 accuracy and churn results from each baseline, and calculating the standard error, i.e., the standard deviation of the mean. We then report the average standard error across the baselines. We see that distillation is the best 52% of the time.

E.1.2 INITIAL SAMPLE 1000, BATCH SIZE 1000, VALIDATION SIZE 100

In Tables 6 and 7, we show the churn at cold accuracy metric across network sizes (fcn-10, fcn-100, fcn-1000, fcn-10000, fcn-100000). We see that distillation consistently performs strongly across datasets and network sizes. Table 8 shows the standard error bars. We see that distillation is the best 84% of the time.

E.2 ADDITIONAL MNIST VARIANT RESULTS

E.2.1 INITIAL SAMPLE SIZE 100, BATCH SIZE 1000, VALIDATION SIZE 100

We show full results in Table 9. We see that distillation is the best for 24 out of the 50 combinations of dataset and network. Error bands can be found in Table 10.

E.2.2 INITIAL SAMPLE SIZE 1000, BATCH SIZE 1000, VALIDATION SIZE 100

We show full results in Table 11. We see that distillation is the best for 42 out of the 50 combinations of dataset and network. Error bands can be found in Table 12.

E.2.3 INITIAL SAMPLE SIZE 10000, BATCH SIZE 1000, VALIDATION SIZE 1000

We show full results in Table 13. We see that in this situation, label smoothing becomes competitive with distillation, with either of them being the best. Distillation is the best for 32 out of the 50 combinations of dataset and network, losing marginally to label smoothing in the other cases. See Table 14 for error bands.
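The error-bar computation described above — the standard error over 100 repeated runs, i.e., the standard deviation of the mean — can be sketched as follows. This is a minimal illustration; the accuracy array below is a random placeholder, not the paper's data:

```python
import numpy as np

def standard_error(results):
    """Standard error of the mean: sample std divided by sqrt(n)."""
    results = np.asarray(results, dtype=float)
    return results.std(ddof=1) / np.sqrt(len(results))

# Placeholder: 100 accuracy results for one fixed (dataset, model, baseline).
rng = np.random.default_rng(0)
accuracies = 0.9 + 0.01 * rng.standard_normal(100)

se = standard_error(accuracies)
print(se)
```

The tables then report this quantity averaged over the baselines for each fixed dataset and model.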
E.3 ADDITIONAL SVHN AND CIFAR RESULTS

E.3.1 INITIAL SAMPLE 100, BATCH SIZE 1000, VALIDATION SIZE 100

Results are in Table 15, where we see that distillation is best on 8 out of 10 combinations of dataset and network. Error bands can be found in Table 16.

E.3.2 INITIAL SAMPLE 1000, BATCH SIZE 1000, VALIDATION SIZE 100

The results can be found in Table 17. We include the error bands in Table 18. Distillation is best in all combinations.

E.3.3 INITIAL SAMPLE 10000, BATCH SIZE 1000, VALIDATION SIZE 1000

Results are in Table 19, where we see that distillation is best on all combinations of dataset and network. Error bands can be found in Table 20.

E.4 ADDITIONAL CELEBA RESULTS

E.4.1 INITIAL SAMPLE 100, BATCH SIZE 1000, VALIDATION SIZE 100

Tables 21, 22, 23, and 24 show the performance on CelebA tasks when we instead use an initial sample size of 100. We see that across the 200 combinations of task and network, distillation is the best 192 times, or 96% of the time. The error bands can be found in Table 25.

E.4.2 INITIAL SAMPLE 1000, BATCH SIZE 1000, VALIDATION SIZE 100

We show additional CelebA results for initial sample 1000 and batch size 1000 in Tables 26, 27, 28, and 29, which show performance for each dataset across convnet-1, convnet-2, convnet-4, convnet-8, and convnet-16. This gives us 40 × 5 = 200 results, of which distillation performs the best in 158 settings, or 79% of the time. The error bands can be found in Table 30.

E.4.3 INITIAL SAMPLE SIZE 10000, BATCH SIZE 1000, VALIDATION SIZE 1000

Tables 31, 32, 33, and 34 show the performance on CelebA tasks when we instead use an initial sample size of 10000. We see that across the 200 combinations of task and network, distillation is the best 183 times, or 91.5% of the time. The error bands can be found in Table 35.

E.5 CIFAR10 AND CIFAR100 ON RESNET

Results can be found in Table 36. We see that distillation outperforms in every case.
E.6 ADDITIONAL IMDB RESULTS

In Table 37, we show the results for the IMDB dataset and transformer networks for initial sample sizes of 100, 1000, and 10000, with the batch size fixed at 1000. The error bands can be found in Table 38. We see that for an initial sample size of 100, distillation performs poorly for the smaller networks, since distilling from a weak teacher trained on only 100 examples hurts performance, but it performs well for the larger networks. For initial sample sizes of 1000 and 10000, distillation is the clear winner, losing in only one instance. We show the full Pareto frontiers and cost curves in Figure 4. | This paper explores the relationship between the low-churn problem and distillation. The authors show that there is an equivalence between those two methods and that distillation performs particularly well on low-churn dataset tasks. The authors propose a novel churn reduction algorithm based on distillation which involves the training of a classifier by minimizing a distilled loss and solving a convex program. The authors provide theoretical guarantees for the proposed algorithm and explain the advantages of the proposed approach compared to the anchor loss, another churn-reduction method. The authors empirically validate their approach on 12 OpenML datasets, 10 MNIST variants, CIFAR10, CIFAR100 and IMDB. | SP:08c1cbc07686b1f7b7fa01246b72929858c3846a |
Brain insights improve RNNs' accuracy and robustness for hierarchical control of continually learned autonomous motor motifs | 1 INTRODUCTION AND RELATION TO OTHER WORKS. Animals have the remarkable ability to efficiently learn and compose elaborate continuous behaviors, which often relies on the flexible chaining of 'motifs' - reproducible bouts of behavior - in response to hierarchical commands (Zimnik & Churchland, 2021; Geddes et al., 2018; Merel et al., 2019b). The mechanisms behind this are however not fully understood, and engineering controllers for dynamical systems that could perform such complex and structured continuous behaviors in the real world has been a long-standing challenge of robotics (Brooks, 1986; Prescott et al., 1999; Merel et al., 2019b). Such tasks involve two different computational operations: first, at coarse temporal intervals, the flexible selection of discrete and abstract action commands by a 'high-level controller'; and, second, the transmission of each abstract command to a 'lower-level controller' that continuously produces a corresponding command - a 'motif' - for the motor effector. Many lines of work have used artificial neural networks (ANNs) to solve the first - discrete - computational operation (notably using tools from deep reinforcement learning, e.g. Merel et al., 2019a; Frans et al., 2018; Dennis et al., 2020) and have leveraged the ability of ANNs to perform well when using a rich training set that includes many motif sequences (Frans et al., 2018; Dennis et al., 2020; OpenAI et al., 2021; Xu et al., 2020). However, to the best of our knowledge, in the context of the second computational operation, there is a gap in the literature about how to design ANN mechanisms that enable zero-shot transfer to performing new continuous motif sequences from a library of independently learned motifs.
Here we will focus on this latter task, which notably highlights a remarkable human skill: we can, for instance, learn to pronounce a new word and then immediately include it in arbitrary sentences. While this biological relevance has led the neuroscience community to start investigating related questions, this literature has focused on a hand-designed 'bottom-up' approach. This approach studies how networks can function in spite of strong constraints introduced e.g. to mimic brain activity patterns or connectivity, and/or to ensure analytical tractability (Kao et al., 2020; Logiaco et al., 2019; Sussillo et al., 2015; Zimnik & Churchland, 2021; Ijspeert et al., 2013; Kulvicius et al., 2012). Importantly, this approach cannot determine whether specific network features are more generally advantageous for solving a task - instead, with this bottom-up approach, these chosen features could reflect constraints that arose through random evolutionary idiosyncrasies, or could become irrelevant for networks that cannot be designed through analytical insights but that can still be successfully trained with modern machine learning algorithms. Overcoming these limitations has implications for engineering and can provide new neuroscientific insight. To address this knowledge gap, here, we use Recurrent Neural Networks (RNNs) whose computational power is not arbitrarily constrained. An RNN is a type of ANN which is a generic dynamical system, and which therefore naturally fits the desired characteristics of motor outputs. Consequently, ANNs are indeed regularly used for tasks requiring the production of continuous outputs, including in the context of robotics (Wyffels & Schrauwen, 2009; Sussillo & Abbott, 2009; Tani, 2003; Liu et al., 2019; Merel et al., 2019a; Maheswaranathan et al., 2019).
We will examine the ability of RNNs to (i) independently learn motor motifs in order to build a continuously expandable motif library; and (ii) flexibly chain motifs in arbitrary orders (Fig. 1a, and see Appendix A.1 for a formal definition). In order to better dissect the mechanisms by which RNNs can fail or succeed at this task, we focus on the purest form of motor control through dynamics: the production of trajectories without needing the external anchor of a time-dependent sensory input. We refer to this as autonomous control, which is especially relevant when sensory input is too unreliable (Yeo et al., 2016; Shenoy et al., 2013; Brembs, 2021), but also in other cases such as the above-mentioned speech production. We will show that while it is possible to engineer RNNs to independently, extendably and efficiently learn to produce single motifs in response to discrete input commands, these RNNs are limited in their generalization ability during improvisation of motif sequences. We will then use insights from the mammalian thalamocortical motor system - notably the presence of a motor preparation phase before each motif (Zimnik & Churchland, 2021; Nashef et al., 2021; Logiaco et al., 2019). We will show that weaving these insights into performance-optimized RNNs leads to both improved motif production accuracy (through a positive synergy with single-motif gradient-descent training) and excellent robustness during generalization to motif sequencing. 2 TASK AND ARCHITECTURE DESIGN. Here, we study the ability of RNNs to fulfill the first requirement of our task: the ability to acquire an extendable library of autonomous motifs (Fig. 1a, left column). However, before describing these analyses, we want to clarify how we chose the motifs that the RNNs have to learn.
First, we chose motifs of long durations – on the order of a thousand timesteps – so that they strongly leverage the above-mentioned autonomous capabilities of RNNs. Second, because we are interested in assessing the relative expressive power of the RNNs we study, we have designed two different types of motor motifs so that they constitute two 'difficulty' levels for the RNNs. Following recent analytical approaches for studying RNN dynamics (Schuessler et al., 2020b;a; Logiaco et al., 2019), we characterize the difficulty of motifs as the number of certain basis functions - complex exponentials that act similarly to different frequencies of a Fourier transform - needed to approximate a motif well through linear combination. Therefore, we define a set of oscillatory motifs that are relatively easy to produce, and a set of more difficult 'step' motifs (Fig. 1d, f-g; see Appendix A.2 for the full list of motifs used in this paper). We train motifs using gradient descent – specifically, ADAM (Kingma & Ba, 2015). Our objective function is the mean square error between desired and actual output. We will now study how RNNs can meet the requirement to learn a new motif without 'catastrophic interference' with the memory of previously learned motifs - which relates to the literature on continual learning (Kirkpatrick et al., 2017; Parisi et al., 2019; Yoon et al., 2020; Farajtabar et al., 2019). This line of work emphasizes avoiding interference between gradient updates used to train a network on many sub-tasks, while promoting the re-use of neural resources across sub-tasks (so that the network makes efficient use of its parameters). We will use an 'architectural' approach to continual learning that consists in segregating tuned parameters across the different motifs, because it will later facilitate the weaving of biological insights into our networks.

Figure 1: Task and candidate networks. a) Task. Left: learning an extendable library of motifs without interference. Right: without additional training, stringing the motifs into chains with arbitrary orders. b) Additive and multiplicative architectures, which may succeed in the task in a) because they segregate parameters into motif-specific sets (schematized in colors) while benefiting from fixed shared recurrent and readout weights (schematized in black). c) Minimum root mean square error over training with an overcomplete training set, depending on the gain hyperparameters g_ad, g_mu, and g_0^ptb. Dots are individual networks, the line is the average. In red, we show the mean minimum root mean square error of the control architecture (averaged over five individually trained networks for each g_0^cn). d) Example easy oscillatory motif and hard step motif; fitting the latter requires a larger number of basis functions (complex exponentials) of varied oscillation frequencies. e) 'Classical' strategy to promote zero-shot transfer in order to produce a chain of motifs: train ANNs to produce the motifs starting from random initial network states x_init that emulate the variability of network states at the end of motifs x_end^µ (where µ indexes the motif). f) Increased inaccuracy during zero-shot transfer to chains of motifs compared to when starting from random x_init drawn from the same distribution as during training (additive architecture). g) As in f but for oscillatory motifs. Note that the target trajectory (black) is buried below the output of the network (colored lines), because the latter is almost perfectly accurate with random x_init.

With this approach, interference is fully prevented during sequential learning; but the proposed architectures may not be efficient, which is what we investigate below. To do so, we will compare these architectures to a 'standard' RNN with no segregation of tuned parameters. This RNN ('control architecture') (i) is fully tuned, (ii) receives a static input b_µ to instruct the motif µ (Maheswaranathan et al., 2019; Sussillo & Abbott, 2009; Tani, 2003), and (iii) is trained in a plausible noise-robust regime imposed by a random initialization of its state (see A.6.3). As expected given that this RNN has no protection against catastrophic interference (Kirkpatrick et al., 2017), we indeed find that when learning motifs sequentially, this RNN immediately forgets previously learned motifs when learning a new one (not shown) – even though it is possible to learn motifs by simultaneously including all of them in each training batch (Fig. 1c, and see below). This 'control architecture' obeys the following dynamics: τ ẋ = −x + g_0^cn J tanh(x) + b_µ.
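These dynamics can be simulated with a forward-Euler step (the paper discretizes with dt = 0.1τ). The sketch below is illustrative, not the paper's tuned setup: N = 50 units as in the control network, but an arbitrary gain and a random, untrained input vector; J is scaled by 1/√N, a common convention so that the gain sets the scale of the recurrent feedback:

```python
import numpy as np

def simulate_control_rnn(J, b, g0, x0, n_steps, tau=1.0, dt_frac=0.1):
    """Euler-integrate tau * dx/dt = -x + g0 * J @ tanh(x) + b."""
    dt = dt_frac * tau
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + (dt / tau) * (-x + g0 * J @ np.tanh(x) + b)
        traj.append(x.copy())
    return np.stack(traj)

N = 50                                          # number of recurrent units
rng = np.random.default_rng(0)
J = rng.standard_normal((N, N)) / np.sqrt(N)    # random recurrent weights
b_mu = 0.1 * rng.standard_normal(N)             # static motif-specific input (placeholder)
x0 = rng.standard_normal(N)                     # random initial state
w = rng.standard_normal(N) / np.sqrt(N)         # fixed readout weights

traj = simulate_control_rnn(J, b_mu, g0=1.5, x0=x0, n_steps=200)
y = np.tanh(traj) @ w                           # readout y = w^T tanh(x)
print(traj.shape, y.shape)
```

In the paper, b_µ (and, for the full model, the weights) would be learned by gradient descent to make y track a target motif; here it is only a random placeholder to show the integration loop.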
The output y = wᵀ tanh(x) = wᵀ r is produced through the vector w, which is initialized from a centered Gaussian distribution with std 1/√N^cn – where here N^cn = 50 is the number of recurrent units of this control network – so that its outputs have an appropriate maximal positive or negative magnitude scaling as √N^cn. Also, the recurrent interaction weights J are initialized with iid elements taken from a standard Gaussian, as previous work has shown that choosing a gain g_0^cn > 1 leads to rich dynamical regimes appropriate for complex computations (Sompolinsky et al., 1988; Sussillo & Abbott, 2009; Sussillo et al., 2015; Schuessler et al., 2020b). We performed hyperparameter tuning on g_0. The dynamics are discretized using Euler's integration method with dt = 0.1τ. We now consider modifications of this control RNN that address this issue of catastrophic forgetting by segregating the parameters involved in learning different motifs while sharing common computational resources across motifs (Parisi et al., 2019; Merel et al., 2019a;b). Here, we will share (i) fixed readout weights (with the above-mentioned centered Gaussian distribution with std 1/√N, as this is sufficient to ensure the successful production of our motifs); and (ii) the recurrent weights, as they can set rich 'baseline dynamics' that can be modulated by motif-specific weights. We globally adjust the shared recurrent weights by tuning their above-mentioned gain hyperparameter g (a similar alternative could be to pre-tune these recurrent weights to an original set of motifs and then freeze them, but - as we will see - our chosen approach leads to good results). First, we consider an 'additive' architecture (Fig. 1b, top).
Here, each motif µ is produced in response to learning the input vector b_µ, leading to the following dynamics for the network activities: τ ẋ = −x + g_ad J tanh(x) + b_µ, where the input acts as a motif-specific controller of the dynamics. Note that this occurs because the gradient of the loss with respect to the input weights propagates through the recurrent connections - whereas if different output weights were learned for different motifs (Jaeger, 2007), the dynamics would not be affected by learning. Second, we consider a 'multiplicative' architecture (Fig. 1b, bottom), which is inspired by both previous machine learning literature (Sutskever et al., 2011; Schuessler et al., 2020b) and the anatomy of the brain's motor system (Guo et al., 2017; Logiaco et al., 2019). Here, each motif µ is produced in response to both a learned input vector b_µ and a learned rank-one perturbation of the connectivity, u_µ v_µᵀ. The latter is equivalent to a loop through an instantaneous 'unit' receiving input from the recurrent network through the weights v_µ and feeding back through the weights u_µ. Here, this motif-specific loop participates in modulating the dynamics (Logiaco et al., 2019) in a way that yields more computational flexibility compared to networks with random feedback weights (Susman et al., 2021). Interestingly, learning the full recurrent weights in randomly initialized RNNs can yield a low-rank weight update (Schuessler et al., 2020b). Therefore, by imposing that the motif-specific learning is restricted to a low-rank weight perturbation, we expect to get close to full-weight learning while enabling segregation of the learned weights per motif. The dynamics are: τ ẋ = −x + (g_mu J + u_µ v_µᵀ) tanh(x) + b_µ, where g_mu and J are defined as for the additive network, and u_µ and v_µ are each learned and initialized iid from a centered Gaussian with std g_0^ptb/√N (i.e., expected norm g_0^ptb).
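A minimal sketch contrasting one Euler update of the two architectures (shapes and gains are illustrative; the motif-specific parameters b_µ, u_µ, v_µ are learned by gradient descent in the paper but are random placeholders here):

```python
import numpy as np

def additive_step(x, J, g_ad, b_mu, tau=1.0, dt=0.1):
    """tau * dx/dt = -x + g_ad * J @ tanh(x) + b_mu: the motif enters only via the input."""
    return x + (dt / tau) * (-x + g_ad * J @ np.tanh(x) + b_mu)

def multiplicative_step(x, J, g_mu, u_mu, v_mu, b_mu, tau=1.0, dt=0.1):
    """tau * dx/dt = -x + (g_mu * J + u_mu v_mu^T) @ tanh(x) + b_mu: the motif
    also modulates the dynamics via a rank-one loop through (u_mu, v_mu)."""
    r = np.tanh(x)
    # Rank-one term applied without ever forming the N x N matrix.
    drive = g_mu * (J @ r) + u_mu * (v_mu @ r)
    return x + (dt / tau) * (-x + drive + b_mu)

N = 50
rng = np.random.default_rng(1)
J = rng.standard_normal((N, N)) / np.sqrt(N)  # shared, fixed recurrent weights
b_mu = 0.1 * rng.standard_normal(N)           # placeholder learned input
u_mu = rng.standard_normal(N) / np.sqrt(N)    # entries with std 1/sqrt(N): expected norm ~ 1
v_mu = rng.standard_normal(N) / np.sqrt(N)
x = rng.standard_normal(N)

x_add = additive_step(x, J, g_ad=1.5, b_mu=b_mu)
x_mul = multiplicative_step(x, J, g_mu=1.2, u_mu=u_mu, v_mu=v_mu, b_mu=b_mu)
print(x_add.shape, x_mul.shape)
```

Storing only (b_µ) or (b_µ, u_µ, v_µ) per motif is what lets new motifs be added without touching — and hence without interfering with — previously learned ones.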
To test the baseline relative accuracies of the additive, multiplicative and control networks, we trained them on all the possible chains of motifs of length two (excluding repetitions of the same motif) for the ten step motifs (Fig. 1c, left). Consistent with the continual learning aspect of our task – which values limiting the number of parameters that need to be tuned and stored per motif – we equalized the number of tunable parameters across architectures (Collins et al., 2017). Interestingly, after optimizing over hyperparameters (i.e., g_ad, g_mu, g_0^ptb, and g_0^cn), we found that all networks had similar accuracy (Fig. 1c, reminiscent of Collins et al., 2017). This suggests that our strategy of segregating tunable parameters in the additive and multiplicative networks does not lead to a drastic decrease in per-tuned-parameter expressivity, while by construction preventing interference when learning motifs sequentially (whereas the control network, as expected, suffers from forgetting of previously learned motifs when learning new ones; not shown). On the other hand, we note that the multiplicative architecture, which is closer to models constrained to mimic brain dynamics and architecture (Logiaco et al., 2019), appears to have similar accuracy to the additive network while requiring fewer neurons. This echoes recent results suggesting that more biologically-plausible object-recognition ANNs tend to have architectures that require fewer neurons (Nayebi et al., 2021). We now turn to investigate the robustness of these different architectures when improvising motif sequences after being trained on single motifs – a training strategy which, by construction, enables learning new motifs without interference for the additive and multiplicative architectures. | The authors seek to use a small RNN to generate extended, time-varying outputs given a cue indicating which output to generate.
They focus on the problem of generating a sequence of outputs, cued in series, without the end state of one interfering with the subsequently cued trajectory. They show that small vanilla RNNs do not readily perform this task: RNNs at a random state can generate a sequence when cued, but they fail to do so when their state is at the end point of another sequence. The authors then show that by biasing the activity of the network towards the origin in the absence of an input (which they suggest is inspired by neuroscience), RNNs more readily learn sequences and more consistently generate accurate sequences in series. My enthusiasm for the work is relatively low for a variety of reasons. Primarily, the authors fail to test obvious alternative hypotheses of ways that the problem domain they design could be solved by a learned system, without the need to add the inductive bias they add. Moreover, and more fundamentally, the relevance of such a small toy system (300 units, trained to generate just 10 trajectories) for machine learning is unclear. | SP:c55a51a4154e8f02f8b43cfed0a4216ba197c3a0 |
Brain insights improve RNNs' accuracy and robustness for hierarchical control of continually learned autonomous motor motifs | 1 INTRODUCTION AND RELATION TO OTHER WORKS . Animals have the remarkable ability to efficiently learn and compose elaborate continuous behaviors , which often relies on the flexible chaining of ‘ motifs ’ - reproducible bouts of behavior - in response to hierarchical commands ( Zimnik & Churchland , 2021 ; Geddes et al. , 2018 ; Merel et al. , 2019b ) . The mechanisms behind this are however not fully understood , and engineering controllers for dynamical systems that could perform such complex and structured continuous behaviors in the real world has been a long-standing challenge of robotics ( Brooks , 1986 ; Prescott et al. , 1999 ; Merel et al. , 2019b ) . Such tasks involve two different computational operations : first , at coarse temporal intervals , the flexible selection of discrete and abstract action commands by a ‘ high-level controller ’ ; and , second , the transmission of each abstract command to a ‘ lower-level controller ’ that continuously produces a corresponding command - a ‘ motif ’ - for the motor effector . Many lines of work have used artificial neural networks ( ANNs ) to solve the first - discrete - computational operation ( notably using tools from deep reinforcement learning , e.g . ( Merel et al. , 2019a ; Frans et al. , 2018 ; Dennis et al. , 2020 ) ) and have leveraged the ability of ANNs to perform well when using a rich training set that includes many motif sequences ( Frans et al. , 2018 ; Dennis et al. , 2020 ; OpenAI et al. , 2021 ; Xu et al. , 2020 ) . However , to the best of our knowledge , in the context of the second computational operation , there is a gap in the literature about how to design ANN mechanisms that enable zero-shot transfer to performing new continuous motif sequences from a library of independently learned motifs . 
Here we will focus on this latter task , which notably highlights a remarkable human skill , as we can - for instance - learn to pronounce a new word and then immediately include it in arbitrary sentences . While this biological relevance has led the neuroscience community to start investigating related questions , this literature has focused on a hand-designed ‘ bottom-up ’ approach . This approach studies how networks can function in spite of strong constraints introduced e.g . to mimic the brain activity patterns or connectivity , and/or to ensure analytical tractability ( Kao et al. , 2020 ; Logiaco et al. , 2019 ; Sussillo et al. , 2015 ; Zimnik & Churchland , 2021 ; Ijspeert et al. , 2013 ; Kulvicius et al. , 2012 ) . Importantly , this approach can not determine whether specific network features are more generally advantageous for solving a task - instead , with this bottom-up approach , these chosen features could reflect constraints that arose through random evolutionary idiosyncracies , or could become irrelevant for networks that can not be designed through analytical insights but that can still be successfully trained with modern machine learning algorithms . Overcoming these limitations has implications for engineering and can provide new neuroscientific insight . To address this knowledge gap , here , we use Recurrent Neural Networks ( RNNs ) whose computational power is not arbitrarily constrained . An RNN is a type of ANN which is a generic dynamical system , and which therefore naturally fits the desired characteristics of motor outputs . Consequently , ANNs are indeed regularly used for tasks requiring the production of continuous outputs , including in the context of robotics ( Wyffels & Schrauwen , 2009 ; Sussillo & Abbott , 2009 ; Tani , 2003 ; Liu et al. , 2019 ; Merel et al. , 2019a ; Maheswaranathan et al. , 2019 ) . 
We will examine the ability of RNNs to ( i ) independently learn motor motifs in order to to build a continuously expandable motif library ; and ( ii ) flexibly chain motifs in arbitrary orders ( Fig . 1a and see Appendix A.1 for a formal definition ) . In order to better dissect the mechanisms by which RNNs can fail or succeed at this task , we focus on the purest form of motor control through dynamics : the production of trajectories without needing the external anchor of a time-dependent sensory input . We refer to this as autonomous control , which is especially relevant when sensory input is too unreliable ( Yeo et al. , 2016 ; Shenoy et al. , 2013 ; Brembs , 2021 ) , but also in other cases such as the above-mentioned speech production . We will show that while it is possible to engineer RNNs to independently , extendably and efficiently learn to produce single motifs in response to discrete input commands , these RNNs are limited in their generalization ability during improvisation of motif sequences . We will then use insights from the mammalian thalamocortical motor system - notably the presence of a motor preparation phase before each motif ( Zimnik & Churchland , 2021 ; Nashef et al. , 2021 ; Logiaco et al. , 2019 ) . We will show that weaving in these insights into performance-optimized RNNs leads to both improved motif production accuracy ( through a positive synergy with single motif gradient-descent training ) and excellent robustness during generalization to motif sequencing . 2 TASK AND ARCHITECTURE DESIGN . Here , we study the ability of RNNs to fulfill the first requirement of our task : the ability to acquire an extendable library of autonomous motifs ( Fig . 1a , left column ) . However , before describing these analyses , we want to clarify how we chose the motifs that the RNNs have to learn . 
First , we chose motifs of long durations – on the order of a thousand timesteps – so that they strongly leverage the above-mentioned autonomous capabilities of RNNs . Second , because we are interested in assessing the relative expressive power of the RNNs we study , we have designed two different types of motor motifs so that they constitute two ‘ difficulty ’ levels for the RNNs . Following recent analytical approaches for studying RNN dynamics ( Schuessler et al. , 2020b ; a ; Logiaco et al. , 2019 ) , we characterize the difficulty of motifs as the number of certain basis functions - complex exponentials that act similarly as different frequencies of a Fourier transform - needed to approximate a motif well through linear combination . Therefore , we define a set of oscillatory motifs that are relatively easy to produce , and a set of more difficult ‘ step ’ motifs ( Fig . 1d , f-g , see Appendix A.2 for the full list of motifs used in this paper ) . We train motifs using gradient descent – specifically , ADAM ( Kingma & Ba , 2015 ) ) . Our objective function is the mean square error between desired and actual output . We will now study how RNNs can meet the requirement to learn a new motif without ‘ catastrophic interference ’ with the memory of previously learned motifs - which relates to the literature on continual learning ( Kirkpatrick et al. , 2017 ; Parisi et al. , 2019 ; Yoon et al. , 2020 ; Farajtabar et al. , 2019 ) . This line of work emphasizes avoiding interference between gradient updates used to train a network on many sub-tasks , while promoting the re-use of neural resources across sub-tasks ( so that the network makes efficient use of its parameters ) . We will use an ‘ architectural ’ approach to e xinit xend xinit xend xinit≈ ? 
xend xend f g xinit xend From random From other motifs From random From other motifs xinit 0 20 40 60 80 100 3.0 2.0 1.0 0.0 -1.0 -2.0 -3.0 Time ( neuronal timescales ) Time ( neuronal timescales ) 0 20 40 60 80 100 1.0 0.0 -1.0 target motif 2 3 4 5 10 # of basis functions to fit Oscillating motif : easy Step motif : hardd Training single motifs to test generalization a 100 timescales Learn library of motifs String in arbitrary orders Learn new motif without interference ... ... ... Introduce it in a string with other motifs ... b c gad / g0 cn gmu / g0 cn g0 ptb go ptb=1.5 M in im um ro ot m ea n sq ua re e rr or A dd it iv e ar ch it ec tu re M ul ti pl ic at iv e ar ch it ec tu re M in im um ro ot m ea n sq ua re e rr or mean rms , add . architecture mean rms , control mean rms , mult . architecture mean rms , control gmu=1.2 Training all O ( Nmotif 2 ) transitions ... ... ... ... 1 2 2 1 1 10 10 1 10 2 2 10 Figure 1 : Task and candidate networks . a ) Task . Left : learning an extendable library of motifs without interference . Right : without additional training , stringing the motifs into chains with arbitrary orders . b ) Additive and multiplicative architectures , who may succeed in the task in a because they segregate parameters into motif-specific sets ( schematized in colors ) while benefiting from fixed shared recurrent and readout weights ( schematized in black ) . c ) Minimum root mean square error over training with overcomplete training set , depending on the gain hyperparameters gad , gmu , and gptbo . Dots are individual networks , the line is the average . In red , we show the mean minimum root mean square error in the control architecture ( averaged over five individually trained networks for each gcn0 ) . d ) Examples easy oscillatory motif and hard step motif ; fitting the latter requires a larger number of basis functions ( complex exponentials ) of varied oscillation frequencies . 
e ) ‘ Classical ’ strategy to promote zero-shot transfer in order to produce a chain of motifs : train ANNs to produce the motifs starting from random initial network states xinit that emulate the variability of network states at the end of motifs xµend ( where µ indexes the motif ) . f ) Increased inaccuracy during zeroshot transfer to chains of motifs compared to when starting from random xinit drawn from the same distribution as during training ( additive architecture ) . g ) As in f but for oscillatory motifs . Note that the target trajectory ( black ) is buried below the output of the network ( colored lines ) , because the latter is almost perfectly accurate with random xinit . continual learning that consists in segregating tuned parameters across the different motifs , because it will later facilitate the weaving of biological insights in our networks . With this approach , interference is fully prevented during sequential learning ; but the proposed architectures may not be efficient , which is what we investigate below . To do so , we will compare these architectures to a ‘ standard ’ RNN with no segregation of tuned parameters . This RNN ( ‘ control architecture ’ ) ( i ) is fully-tuned , ( ii ) receives a static input bµ to instruct the motif µ ( Maheswaranathan et al. , 2019 ; Sussillo & Abbott , 2009 ; Tani , 2003 ) , and ( iii ) is trained in a plausible noise-robust regime imposed by a random initialization of its state ( see A.6.3 ) . As expected given that this RNN has no protection against catastrophic interference ( Kirkpatrick et al. , 2017 ) , we indeed find that when learning motifs sequentially , this RNN immediately forgets previously learned motifs when learning a new one ( not shown ) – even though it is possible to learn motifs by simultaneously including all of them in each training batch ( Fig . 1c , and see below ) . This ‘ control architecture ’ obeys the following dynamics : τ ẋ = −x + gcn0 J tanh ( x ) + bµ . 
The output y = wᵀ tanh ( x ) = wᵀr is produced through the vector w that is initialized from a centered Gaussian distribution with std 1/√Ncn – where Ncn = 50 is the number of recurrent units of this control network – so that its outputs have appropriate maximal positive or negative magnitude scaling as √Ncn . Also , the recurrent interaction weights J are initialized with iid elements taken from a standard Gaussian , as previous work has shown that choosing a gain gcn0 > 1 leads to rich dynamical regimes appropriate for complex computations ( Sompolinsky et al. , 1988 ; Sussillo & Abbott , 2009 ; Sussillo et al. , 2015 ; Schuessler et al. , 2020b ) . We performed hyperparameter tuning on gcn0 . The dynamics are discretized using Euler ’ s integration method with dt = 0.1τ . We now consider modifications of this control RNN that address this issue of catastrophic forgetting by segregating the parameters involved in learning different motifs while sharing common computational resources across motifs ( Parisi et al. , 2019 ; Merel et al. , 2019a ; b ) . Here , we will share ( i ) fixed readout weights ( with the above-mentioned centered Gaussian distribution with std 1/√N , as this is sufficient to ensure the successful production of our motifs ) ; and ( ii ) the recurrent weights , as they can set rich ‘ baseline dynamics ’ that can be modulated by some motif-specific weights . We globally adjust the shared recurrent weights through tuning their above-mentioned gain hyperparameter g ( a similar alternative could be to pre-tune these recurrent weights on an original set of motifs and to then freeze them , but – as we will see – our chosen approach leads to good results ) . First , we consider an ‘ additive ’ architecture ( Fig . 1b , top ) .
Here , each motif µ is produced in response to a learned input vector bµ , leading to the following dynamics for the network activities : τ ẋ = −x + gad J tanh ( x ) + bµ , where the input acts as a motif-specific controller of the dynamics . Note that this occurs because the gradient of the loss with respect to the input weights propagates through the recurrent connections – whereas if different output weights were learned for different motifs ( Jaeger , 2007 ) , the dynamics would not be affected by learning . Second , we consider a ‘ multiplicative ’ architecture ( Fig . 1b , bottom ) , which is inspired by both previous machine learning literature ( Sutskever et al. , 2011 ; Schuessler et al. , 2020b ) and the anatomy of the brain ’ s motor system ( Guo et al. , 2017 ; Logiaco et al. , 2019 ) . Here , each motif µ is produced in response to both a learned input vector bµ and a learned rank-one perturbation of the connectivity uµvµᵀ . The latter is equivalent to a loop through an instantaneous ‘ unit ’ receiving input from the recurrent network through the weights vµ and feeding back through the weights uµ . Here , this motif-specific loop participates in modulating the dynamics ( Logiaco et al. , 2019 ) , in a way that yields more computational flexibility compared to networks with random feedback weights ( Susman et al. , 2021 ) . Interestingly , learning the full recurrent weights in randomly initialized RNNs can yield a low-rank weight update ( Schuessler et al. , 2020b ) . Therefore , by imposing that the motif-specific learning is restricted to a low-rank weight perturbation , we expect to get close to full-weight learning while enabling segregation of the learned weights per motif . The dynamics are : τ ẋ = −x + ( gmu J + uµvµᵀ ) tanh ( x ) + bµ , where gmu and J are defined as for the additive network , and uµ and vµ are each learned and initialized iid from a centered Gaussian with std gptbo/√N ( i.e. , expected norm gptbo ) .
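The equivalence between the rank-one perturbation uµvµᵀ and a loop through a single instantaneous unit can be checked numerically. The sketch below uses arbitrary random values (the 1/√N scaling of J is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 50, 1.2
J = rng.standard_normal((N, N)) / np.sqrt(N)          # shared recurrent weights
u, v = rng.standard_normal(N), rng.standard_normal(N) # motif-specific rank-one pair
r = np.tanh(rng.standard_normal(N))                   # firing rates tanh(x)

full = (g * J + np.outer(u, v)) @ r   # drive with the rank-one perturbation
loop = g * J @ r + u * (v @ r)        # same drive via one scalar 'loop unit' v^T r
```

The two drives match: the rank-one term only ever feeds the scalar vµᵀr back through uµ, which is what makes the loop interpretation exact.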
To test the baseline relative accuracies of the additive , multiplicative , and control networks , we trained them on all the possible chains of motifs of length two ( excluding repetitions of the same motif ) for the ten step motifs ( Fig . 1c left ) . Consistent with the continual learning aspect of our task – which values limiting the number of parameters that need to be tuned and stored per motif – we equalized the number of tunable parameters across architectures ( Collins et al. , 2017 ) . Interestingly , after optimizing over hyperparameters ( i.e. , gad , gmu , gptbo , and gcn0 ) , we found that all networks had similar accuracy ( Fig . 1c , reminiscent of ( Collins et al. , 2017 ) ) . This suggests that our strategy of segregating tunable parameters in the additive and multiplicative networks does not lead to a drastic decrease in per-tuned-parameter expressivity , while by construction preventing interference when learning motifs sequentially ( whereas the control network suffers from forgetting of previously learned motifs when learning new ones , as expected ; not shown ) . On the other hand , we note that the multiplicative architecture , which is closer to models constrained to mimic brain dynamics and architecture ( Logiaco et al. , 2019 ) , appears to have similar accuracy to the additive network while requiring fewer neurons . This echoes recent results suggesting that more biologically-plausible object-recognition ANNs tend to have architectures that require fewer neurons ( Nayebi et al. , 2021 ) . We now turn to investigate the robustness of these different architectures when improvising motif sequences after being trained on single motifs – a training strategy which , by construction , enables learning new motifs without interference for the additive and multiplicative architectures .
| This paper considers the problem of RNNs learning arbitrary sequences of “motifs” (continuous time functions, discretized) after having learned the individual motifs separately. Inspired by some previous work, it proposes an architectural approach to deal with interference by introducing a motif-dependent low-rank perturbation to the recurrent weights of the RNN, following Schuessler et al. (2020). To this end, the paper introduces a new evaluation task (motif generation), which is used to compare that architecture and sensible baselines. It then moves on to propose a new model inspired by thalamocortical interactions in the brain: an RNN equipped with a preparatory module that imposes a beneficial bias when following a specific training protocol. | SP:c55a51a4154e8f02f8b43cfed0a4216ba197c3a0
Multi-batch Reinforcement Learning via Sample Transfer and Imitation Learning | 1 INTRODUCTION . Reinforcement learning aims to learn an optimal control policy through interactions with the environment ( Sutton & Barto , 2018 ) . Deep reinforcement learning ( Deep RL or DRL ) combines neural networks with reinforcement learning and further enables RL agents to deal with more complex environments . DRL has achieved impressive successes in different areas including the game of Go ( Silver et al. , 2017 ) , Atari games ( Mnih et al. , 2015 ) , and continuous control tasks ( Lillicrap et al. , 2015 ) . However , deploying RL algorithms for real-world problems can be very challenging . Compared with supervised learning algorithms ( e.g. , classification or regression ) , most reinforcement learning algorithms need to interact with the environment many times to learn a reliable control policy . This process can be very costly or even dangerous for some real-world applications , e.g. , safety-critical applications . The current success of deep RL algorithms heavily depends on a large number of interactions with the environment . Thus the practical application of reinforcement learning algorithms in the real world is critically limited by their poor data efficiency and their inability to learn in an offline fashion . Batch reinforcement learning ( also known as offline reinforcement learning ) algorithms have been developed to solve this issue . Batch RL aims to learn a control policy from a previously collected dataset without further interactions with the environment . Batch reinforcement learning has attracted a considerable amount of attention due to its potential in dealing with real-world problems . There have been many efforts in developing batch RL methods : early approaches include the fitted Q iteration method ( Ernst et al.
, 2005 ) , which uses a tree-based model to estimate the state-action value function ( Q function ) in an iterative way , and the neural fitted Q method ( Riedmiller , 2005 ) , which leverages a multilayer perceptron ( MLP ) to approximate the Q function . Since these earlier efforts , a number of batch reinforcement learning algorithms have been developed that further improve the learning performance . These approaches can be generally categorized as Q function based methods ( Fujimoto et al. , 2019 ; Kumar et al. , 2020 ) and imitation learning based methods ( Wang et al. , 2018 ; Peng et al. , 2019 ) . Examples of Q function based methods include ( Fujimoto et al. , 2019 ) , in which the authors constrain updates of Q values to deal with distribution drift , and ( Kumar et al. , 2020 ) , where an additional penalty term was added to constrain the update of the Q function . Best Action Imitation Learning ( BAIL ) ( Chen et al. , 2019 ) , on the other hand , leverages the so-called upper envelope to directly select a sub-batch of the dataset and then executes imitation learning on the selected data to learn the control policy . Even with these advances , most existing batch RL algorithms assume that we have a large number of data points in the batch . In the real world , this may be an unrealistic assumption . For example , when learning energy management control policies , we may only have a very limited amount of collected data for newly built houses and buildings . It is hard for most current batch RL algorithms to learn a reliable policy with a limited amount of data points . In this work , we use transfer learning ( Torrey & Shavlik , 2010 ) to address this issue . Transfer learning aims to use the knowledge from source domains ( domains for which we have a large amount of data ) to improve the learning performance in the target domain ( the domain we are interested in , but for which we only have a limited amount of data ) .
Depending on the manner in which knowledge is reused from the source domains , there are three main categories of transfer learning : sample transfer , model transfer , and representation transfer . In this work , we use sample transfer , i.e. , we transfer some related data points from the source tasks to improve the learning performance on the target control task . The proposed algorithm is referred to as BAIL+ , as it is a direct extension of the BAIL algorithm proposed by ( Chen et al. , 2019 ) . Multitask learning ( MTL ) ( Caruana , 1997 ) aims to learn a set of tasks jointly instead of learning them separately to achieve a better overall learning performance . In general , multitask learning can help to improve the data efficiency via reusing the shared representations and training on data from multiple sources . Multitask learning has shown its effectiveness in different applications such as image classification ( Li et al. , 2015 ; 2018 ) , natural language processing ( Collobert & Weston , 2008 ; Liu et al. , 2019 ) , and speech recognition ( Siohan & Rybach , 2015 ) . It has also been shown to be useful in the deep reinforcement learning field in recent years ( Li et al. , 2019 ) . The main objective is to learn one ( or more ) policy that can perform well on multiple tasks . In ( Rusu et al. , 2015 ) , the authors proposed to learn a multitask policy from a set of DQN experts that are pre-trained on a set of tasks separately . Later , ( Hessel et al. , 2019 ) investigated how to balance the learning over multiple tasks with one deep neural network based policy network . The benefit of representation sharing is also investigated in ( D ’ Eramo et al. , 2019 ) for learning deep RL based control agents . Most existing multitask RL works are based on online RL . There are some preliminary investigations on batch RL : in ( Li et al. , 2019 ) , the authors proposed to distill pretrained BCQ models into one model .
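Policy distillation of the kind referenced here can be sketched as fitting one student policy to imitate several per-task teachers on pooled data. Everything below (linear policies, synthetic states, least-squares fitting) is an illustrative assumption, not the cited papers' method:

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, act_dim = 8, 2

def distill(batches):
    """Pool (state, teacher action) pairs from every task's batch and fit
    one linear student policy to all of them by least squares."""
    X = np.vstack([s for s, _ in batches])
    Y = np.vstack([a for _, a in batches])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

batches = []
for _ in range(3):
    S = rng.standard_normal((200, state_dim))
    W_teacher = rng.standard_normal((state_dim, act_dim))  # stand-in per-task policy
    batches.append((S, S @ W_teacher))

W_student = distill(batches)
```

The student minimizes imitation error pooled over all tasks; with conflicting teachers it trades off between them, which is exactly the compression-versus-coverage tension multi-task distillation has to manage.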
However , despite a few attempts , how to further improve the task-level generalization for batch RL is still an open question . In this work , different from previous works , we propose to first improve the learning performance on single tasks and then utilize policy distillation to combine the learned policies into one single policy . Batch RL algorithms are typically designed to deal with a single-task scenario ( a single batch setting ) . In the real world , it is more common to have batches collected from a set of tasks that have similar Markov Decision Process ( MDP ) settings . For example , we may have collected datasets from a set of houses/buildings in the same area . Thus , it would be very helpful if one general policy could be learned from different batches that performs well on these different tasks , even including unseen tasks , without further adaptation . In ( Li et al. , 2019 ) , the authors implemented preliminary investigations on multitask batch reinforcement learning . In our work , to improve the task-level generalization of the policy learned with batch RL , we further extend BAIL+ to multi-batch settings via policy distillation . The resulting algorithm is referred to as MBAIL . Specifically , we aim to learn a general policy without the need to infer the task identity , which can make batch RL more applicable in the real world . The remainder of this paper is organized as follows . The preliminaries of this work , including batch reinforcement learning , multi-batch reinforcement learning , and the BAIL algorithm , are presented in Section II . The details of BAIL+ and MBAIL are presented in Section III . The effectiveness of the proposed methods is showcased in Section IV . Finally , conclusions and future work are presented in Section V . 2 PRELIMINARIES .
This section reviews the main technical background and notation used in this paper , including reinforcement learning , batch reinforcement learning , BAIL , and multi-batch reinforcement learning . Reinforcement Learning Reinforcement learning ( Sutton & Barto , 2018 ) is a learning paradigm in which a learning agent aims to learn an optimal control policy by interacting with the environment . Reinforcement learning has been successfully applied in many areas including transportation ( Haydari & Yilmaz , 2020 ) , smart grids ( Yang et al. , 2020 ) , and recommendation systems ( Zou et al. , 2019 ) . Formally , an RL problem is typically formulated as a Markov Decision Process ( MDP ) , i.e. , a tuple 〈S , A , p , r , µ , γ〉 , where S is the state space , A is the action space , p : S ⊗ A → S is the state transition function , r : S ⊗ A → R is the reward function , µ is the initial state distribution , and γ is the discount factor . The solution to an RL problem ( a control policy ) is a function π : S → A . To obtain this solution , the agent needs to maximize the expected cumulative reward , that is , the so-called value function . Given a state s ∈ S and a policy π , the state value function ( a.k.a. value function ) under policy π is defined by V π ( s ) = Eπ ( R1 + γR2 + · · · + γn−1Rn ) , where Ri denotes the reward obtained at time step i . Batch Reinforcement Learning In batch reinforcement learning ( batch RL ) , the goal is to learn a high-performance control policy using an offline dataset without further interactions with the environment . The dataset consists of N data points B = { ( st , at , rt , s′t ) | t = 1 , . . . , N } . In general , we have no prior requirements on the policy used to collect the batch B : B can be obtained while training an RL policy in an episodic fashion or by running some other control policy ( e.g. , rule-based methods ) in the same way .
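The discounted return underlying V π can be computed with a standard backward pass over an episode's rewards. This is a generic sketch, not tied to any particular codebase:

```python
def discounted_returns(rewards, gamma=0.99):
    """G_t = R_{t+1} + gamma * G_{t+1}, computed backwards over one episode."""
    G, out = 0.0, [0.0] * len(rewards)
    for t in range(len(rewards) - 1, -1, -1):
        G = rewards[t] + gamma * G
        out[t] = G
    return out

# discounted_returns([1, 1, 1], gamma=0.5) → [1.75, 1.5, 1.0]
```

These per-state returns Gi, computed within a batch, are exactly what BAIL compares against its upper envelope in the next subsection.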
The distribution of states and actions in the dataset can be far from those induced by the current policy under consideration . This complicates computations related to evaluating and optimizing behaviors , and this has been the primary consideration of several batch RL solution methods . To counter this extrapolation problem , ( Fujimoto et al. , 2019 ) proposed an algorithm that learns policies with a soft constraint to lie near the batch , which alleviates the extrapolation problem . BAIL : Best Action Imitation Learning Best Action Imitation Learning ( BAIL ) ( Chen et al. , 2019 ) is a simple and computationally efficient imitation learning based batch RL algorithm . The core concept of BAIL is very simple : finding actions that can achieve high return for each state s and then learning a control policy based on these selected state-action pairs . To be more specific , for a particular state-action pair ( s , a ) , let G ( s , a ) denote the return starting in state s with action a , under the policy π . Denote the optimal value function by V ∗ ( s ) . Then if the action a∗ satisfies G ( s , a∗ ) = V ∗ ( s ) , a∗ is an optimal action for state s . The problem now becomes how to obtain V ∗ in a batch setting . Since there is no further interaction with the environment , it is impossible to find V ∗ exactly . Therefore we seek to eliminate as many useless state-action pairs in the batch as possible , to avoid the algorithm inferring bad actions . To do this , we estimate a supremum of the optimal value function V ∗ , which is referred to as the upper envelope . Given a neural network Vφ : S → R parameterized by φ = ( w , b ) , a regularization weight λ , and a dataset D of size m , where Di = ( si , Gi ) and Gi is the accumulated return of the state si computed within the given batch , the upper envelope function is estimated by minimizing the following loss function : minφ ∑_{i=1}^{m} [ Vφ ( si ) − Gi ]² + λ||w||² s.t .
Vφ ( si ) > Gi , where i = 1 , 2 , · · · , m . ( 1 )

Once the upper envelope function Vφ is estimated , the best state-action pairs can be selected from the batch data B based on the estimated Vφ . One way of selecting such pairs is , for a fixed β > 0 , to choose all ( si , ai ) pairs from the batch data set B such that :

Gi > βVφ ( si ) ( 2 )

Typically , one can set β such that p % of the data points are selected , where p is a hyper-parameter . In this work , we follow the same setting as ( Chen et al. , 2019 ) , in which β is set to ensure that approximately 25 % of all the data points are selected for each batch .

Algorithm 1 BAIL+ : Best Action Imitation Learning with Multi-source Sample Transfer
Input : A target task batch Bt and N source task batches B1 , B2 , · · · , BN , and the pre-defined sample selection ratio threshold α̃
1 : Learn the upper envelope function Vt and the reward function r̂t for batch Bt .
2 : for j = 1 , · · · , N do
3 : for d = 1 , · · · , M do
4 : Denote the current state-action pair by ( s_d^j , a_d^j ) .
5 : Following equation 5 , estimate the return of sample ( s_d^j , a_d^j ) , denoted by Ĝ_d^j .
6 : Compute the sample selection ratio α ( s_d^j , a_d^j ) via equation 4 .
7 : if α ( s_d^j , a_d^j ) > α̃ then
8 : Append ( s_d^j , a_d^j ) to dataset Bt
9 : end if
10 : end for
11 : Learn the final policy πt on Bt via imitation learning
12 : end for
Output : the final policy of the target task πt

Multi-Batch Reinforcement Learning To make batch RL more suitable for real-world applications , it is desirable that the learned control policy performs well in multiple situations . In this work , we aim to learn one RL agent from a set of batches sampled from a set of tasks { T1 , · · · , TN } . Then the multi-task ( multi-batch ) batch reinforcement learning problem can be formulated as : arg maxθ J ( θ ) = E_{Ti∼p(T)} [ J_{Ti} ( πθ ) ] ( 3 ) where J_{Ti} ( πθ ) is referred to as the performance of control policy πθ on task i .
Here p ( T ) defines the task distribution , and for each task i we have a corresponding dataset consisting of K tuples Bi = { ( s_t^i , a_t^i , r_t^i , s′_t^i ) | t = 1 , . . . , K } . | This paper studies the problem of multi-task offline RL and proposes 2 methods, BAIL+ and MBAIL, each an algorithmic variant based on the BAIL algorithm. The overall goal is to improve performance on multiple tasks. This setting concerns a number of tasks coming from an MDP distribution where the transition function is the same and the reward function is different. The authors propose a 2-stage method: first use the BAIL+ algorithm to boost the performance of a particular task with data from other tasks. Then, distillation is used to distill policies from these tasks into one policy. | SP:c06abfb514f76fecbab62bd5730df2012b3e896a
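The constrained upper-envelope fit in Eq. (1) above can be approximated with a simple penalty method. In this sketch the linear-in-features Vφ, the penalty weight, and the toy data are all our assumptions; BAIL's actual implementation uses a neural network and a different training scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
s = rng.uniform(-1, 1, size=n)
G = (1 - s ** 2) + 0.2 * rng.standard_normal(n)   # noisy per-state returns G_i

Phi = np.column_stack([np.ones(n), s, s ** 2])    # features for V_phi
w = np.zeros(3)
lam, penalty, lr = 1e-3, 10.0, 0.01
for _ in range(3000):
    resid = Phi @ w - G                           # V_phi(s_i) - G_i
    grad = 2 * Phi.T @ resid / n + 2 * lam * w    # squared-error + ridge term
    grad += penalty * 2 * Phi.T @ (resid * (resid < 0)) / n  # punish V < G
    w -= lr * grad

V = Phi @ w
# V now sits mostly above the returns G, approximating the upper envelope.
```

Weighting violations of Vφ(si) ≥ Gi much more heavily than slack pushes the fit toward the top of the return cloud rather than through its middle, which is the intended behavior of the envelope.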
Multi-batch Reinforcement Learning via Sample Transfer and Imitation Learning | 1 INTRODUCTION . Reinforcement learning aims to learn an optimal control policy through interactions with the environment ( Sutton & Barto , 2018 ) . Deep reinforcement learning ( Deep RL or DRL ) combines neural networks with reinforcement learning and further enables RL agents to deal with more complex environments . DRL has achieved impressive successes in different areas including the game of Go ( Silver et al. , 2017 ) , Atari games ( Mnih et al. , 2015 ) , and continuous control tasks ( Lillicrap et al. , 2015 ) . However , deploying RL algorithms for real-world problems can be very challenging . Compared with supervised learning algorithms ( e.g. , classification or regression ) , most reinforcement learning algorithms need to interact with the environment many times to learn a reliable control policy . This process can be very costly or even dangerous for some real-world applications , e.g. , safety-critical applications . The current success of deep RL algorithms heavily depends on a large number of interactions with the environment . Thus the practical application of reinforcement learning algorithms in the real world is critically limited by their poor data efficiency and their inability to learn in an offline fashion . Batch reinforcement learning ( also known as offline reinforcement learning ) algorithms have been developed to solve this issue . Batch RL aims to learn a control policy from a previously collected dataset without further interactions with the environment . Batch reinforcement learning has attracted a considerable amount of attention due to its potential in dealing with real-world problems . There have been many efforts in developing batch RL methods : early approaches include the fitted Q iteration method ( Ernst et al.
, 2005 ) , which uses a tree-based model to estimate the state-action value function ( Q function ) in an iterative way , and the neural fitted Q method ( Riedmiller , 2005 ) , which leverages a multilayer perceptron ( MLP ) to approximate the Q function . Since these earlier efforts , a number of batch reinforcement learning algorithms have been developed that further improve the learning performance . These approaches can be generally categorized as Q function based methods ( Fujimoto et al. , 2019 ; Kumar et al. , 2020 ) and imitation learning based methods ( Wang et al. , 2018 ; Peng et al. , 2019 ) . Examples of Q function based methods include ( Fujimoto et al. , 2019 ) , in which the authors constrain updates of Q values to deal with distribution drift , and ( Kumar et al. , 2020 ) , where an additional penalty term was added to constrain the update of the Q function . Best Action Imitation Learning ( BAIL ) ( Chen et al. , 2019 ) , on the other hand , leverages the so-called upper envelope to directly select a sub-batch of the dataset and then executes imitation learning on the selected data to learn the control policy . Even with these advances , most existing batch RL algorithms assume that we have a large number of data points in the batch . In the real world , this may be an unrealistic assumption . For example , when learning energy management control policies , we may only have a very limited amount of collected data for newly built houses and buildings . It is hard for most current batch RL algorithms to learn a reliable policy with a limited amount of data points . In this work , we use transfer learning ( Torrey & Shavlik , 2010 ) to address this issue . Transfer learning aims to use the knowledge from source domains ( domains for which we have a large amount of data ) to improve the learning performance in the target domain ( the domain we are interested in , but for which we only have a limited amount of data ) .
Depending on the manner in which knowledge is reused from the source domains , there are three main categories of transfer learning : sample transfer , model transfer , and representation transfer . In this work , we use sample transfer , i.e. , we transfer some related data points from the source tasks to improve the learning performance on the target control task . The proposed algorithm is referred to as BAIL+ , as it is a direct extension of the BAIL algorithm proposed by ( Chen et al. , 2019 ) . Multitask learning ( MTL ) ( Caruana , 1997 ) aims to learn a set of tasks jointly instead of learning them separately to achieve a better overall learning performance . In general , multitask learning can help to improve the data efficiency via reusing the shared representations and training on data from multiple sources . Multitask learning has shown its effectiveness in different applications such as image classification ( Li et al. , 2015 ; 2018 ) , natural language processing ( Collobert & Weston , 2008 ; Liu et al. , 2019 ) , and speech recognition ( Siohan & Rybach , 2015 ) . It has also been shown to be useful in the deep reinforcement learning field in recent years ( Li et al. , 2019 ) . The main objective is to learn one ( or more ) policy that can perform well on multiple tasks . In ( Rusu et al. , 2015 ) , the authors proposed to learn a multitask policy from a set of DQN experts that are pre-trained on a set of tasks separately . Later , ( Hessel et al. , 2019 ) investigated how to balance the learning over multiple tasks with one deep neural network based policy network . The benefit of representation sharing is also investigated in ( D ’ Eramo et al. , 2019 ) for learning deep RL based control agents . Most existing multitask RL works are based on online RL . There are some preliminary investigations on batch RL : in ( Li et al. , 2019 ) , the authors proposed to distill pretrained BCQ models into one model .
However , despite a few attempts , how to further improve the task-level generalization for batch RL is still an open question . In this work , different from previous works , we propose to first improve the learning performance on single tasks and then utilize policy distillation to combine the learned policies into one single policy . Batch RL algorithms are typically designed to deal with a single-task scenario ( a single batch setting ) . In the real world , it is more common to have batches collected from a set of tasks that have similar Markov Decision Process ( MDP ) settings . For example , we may have collected datasets from a set of houses/buildings in the same area . Thus , it would be very helpful if one general policy could be learned from different batches that performs well on these different tasks , even including unseen tasks , without further adaptation . In ( Li et al. , 2019 ) , the authors implemented preliminary investigations on multitask batch reinforcement learning . In our work , to improve the task-level generalization of the policy learned with batch RL , we further extend BAIL+ to multi-batch settings via policy distillation . The resulting algorithm is referred to as MBAIL . Specifically , we aim to learn a general policy without the need to infer the task identity , which can make batch RL more applicable in the real world . The remainder of this paper is organized as follows . The preliminaries of this work , including batch reinforcement learning , multi-batch reinforcement learning , and the BAIL algorithm , are presented in Section II . The details of BAIL+ and MBAIL are presented in Section III . The effectiveness of the proposed methods is showcased in Section IV . Finally , conclusions and future work are presented in Section V . 2 PRELIMINARIES .
This section reviews the main technical background and notation used in this paper , including reinforcement learning , batch reinforcement learning , BAIL , and multi-batch reinforcement learning . Reinforcement Learning Reinforcement learning ( Sutton & Barto , 2018 ) is a learning paradigm in which a learning agent aims to learn an optimal control policy by interacting with the environment . Reinforcement learning has been successfully applied in many areas including transportation ( Haydari & Yilmaz , 2020 ) , smart grids ( Yang et al. , 2020 ) , and recommendation systems ( Zou et al. , 2019 ) . Formally , an RL problem is typically formulated as a Markov Decision Process ( MDP ) , i.e. , a tuple 〈S , A , p , r , µ , γ〉 , where S is the state space , A is the action space , p : S ⊗ A → S is the state transition function , r : S ⊗ A → R is the reward function , µ is the initial state distribution , and γ is the discount factor . The solution to an RL problem ( a control policy ) is a function π : S → A . To obtain this solution , the agent needs to maximize the expected cumulative reward , that is , the so-called value function . Given a state s ∈ S and a policy π , the state value function ( a.k.a. value function ) under policy π is defined by V π ( s ) = Eπ ( R1 + γR2 + · · · + γn−1Rn ) , where Ri denotes the reward obtained at time step i . Batch Reinforcement Learning In batch reinforcement learning ( batch RL ) , the goal is to learn a high-performance control policy using an offline dataset without further interactions with the environment . The dataset consists of N data points B = { ( st , at , rt , s′t ) | t = 1 , . . . , N } . In general , we have no prior requirements on the policy used to collect the batch B : B can be obtained while training an RL policy in an episodic fashion or by running some other control policy ( e.g. , rule-based methods ) in the same way .
The distribution of states and actions in the dataset can be far from those induced by the current policy under consideration. This complicates computations related to evaluating and optimizing behaviors, and it has been the primary consideration of several batch RL solution methods. To counter this extrapolation problem, (Fujimoto et al., 2019) proposed an algorithm that learns policies softly constrained to lie near the batch, which alleviates the extrapolation problem. BAIL: Best Action Imitation Learning Best Action Imitation Learning (BAIL) (Chen et al., 2019) is a simple and computationally efficient imitation-learning-based batch RL algorithm. The core concept of BAIL is very simple: find actions that achieve a high return for each state s and then learn a control policy from these selected state-action pairs. To be more specific, for a particular state-action pair (s, a), let G(s, a) denote the return starting in state s with action a under the policy π. Denote the optimal value function by V*(s). Then if the action a* satisfies G(s, a*) = V*(s), a* is an optimal action for state s. The problem now becomes how to obtain V* in a batch setting. Since there is no further interaction with the environment, it is impossible to find V* exactly. Therefore, we seek to eliminate as many useless state-action pairs in the batch as possible, to avoid the algorithm inferring bad actions. To do this, we estimate a supremum of the optimal value function V*, which is referred to as the upper envelope. Given φ = (w, b), a neural-network-parameterized V_φ : S → R, a regularization weight λ, and a dataset D of size m, where D_i = (s_i, G_i) and G_i is the accumulated return of the state s_i computed within the given batch, the upper envelope function V* : S → R is estimated by minimizing the following loss function:

min_φ Σ_{i=1}^{m} [V_φ(s_i) − G_i]^2 + λ||w||^2   s.t.   V_φ(s_i) ≥ G_i for i = 1, 2, ···, m   (1)

Once the upper envelope function V_φ is estimated, the best state-action pairs can be selected from the batch data B based on the estimated V_φ. One way of selecting such pairs is, for a fixed β > 0, to choose all (s_i, a_i) pairs from the batch dataset B such that:

G_i > β V_φ(s_i)   (2)

Typically, one can set β such that p% of the data points are selected, where p is a hyper-parameter. In this work, we follow the same setting as (Chen et al., 2019), in which β is set to ensure that approximately 25% of all the data points are selected for each batch.

Algorithm 1 BAIL+: Best Action Imitation Learning with Multi-source Sample Transfer
Input: a target-task batch B_t, N source-task batches B_1, B_2, ···, B_N, and the pre-defined sample-selection-ratio threshold α̃
1: Learn the upper envelope function V_t and the reward function r̂_t for batch B_t.
2: for j = 1, ···, N do
3:   for d = 1, ···, M do
4:     Denote the current state-action pair by (s^j_d, a^j_d).
5:     Following Equation 5, estimate the return of sample (s^j_d, a^j_d), denoted Ĝ^j_d.
6:     Compute the sample selection ratio α(s^j_d, a^j_d) via Equation 4.
7:     if α(s^j_d, a^j_d) > α̃ then
8:       Append (s^j_d, a^j_d) to dataset B_t
9:     end if
10:   end for
11:   Learn the final policy π_t on B_t via imitation learning
12: end for
Output: the final policy of the target task π_t

Multi-Batch Reinforcement Learning To make batch RL more suitable for real-world applications, it is desirable that the learned control policy perform well in multiple situations. In this work, we aim to learn one RL agent from a set of batches sampled from a set of tasks {T_1, ···, T_N}. The multi-task (multi-batch) batch reinforcement learning problem can then be formulated as:

arg max_θ J(θ) = E_{T_i ∼ p(T)} [J_{T_i}(π_θ)]   (3)

where J_{T_i}(π_θ) is the performance of control policy π_θ on task i.
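The constrained objective in Equation 1 is commonly handled with a penalty term, and the selection rule of Equation 2 is a simple filter. The sketch below is a hedged illustration written for this review, not the authors' implementation: `V_vals` stands in for the envelope values V_φ(s_i), and the penalty factor approximates the hard constraint V_φ(s_i) ≥ G_i.

```python
# Illustrative sketch (not the authors' code): a penalty form of the
# upper-envelope loss in Equation 1 and the best-pair selection rule of
# Equation 2. V_vals holds V_phi(s_i); w are the regularized weights.

def envelope_loss(V_vals, G, w, lam=1e-3, penalty=10.0):
    """Penalized Eq. (1): violating V_phi(s_i) >= G_i costs `penalty` times more."""
    total = 0.0
    for v, g in zip(V_vals, G):
        err = v - g
        total += (penalty if err < 0 else 1.0) * err * err
    return total + lam * sum(wi * wi for wi in w)

def select_best_pairs(pairs, G, V_vals, beta=1.0):
    """Eq. (2): keep (s_i, a_i) whose return satisfies G_i > beta * V_phi(s_i)."""
    return [p for p, g, v in zip(pairs, G, V_vals) if g > beta * v]
```

In practice, beta would be tuned so that roughly 25% of the pairs pass the filter, matching the selection rate used in the paper.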
Here p(T) defines the task distribution, and for each task i we have a corresponding dataset consisting of K tuples B = {(s^i_t, a^i_t, r^i_t, s'^i_t) | t = 1, ···, K}. | (Note that the rest of the review will use Offline RL and Batch RL interchangeably) Building on top of BAIL (Chen et al. 2019), this paper provides two algorithms for the multi-task offline RL setting. The BAIL+ algorithm assumes that we know the identity of the target task and that all tasks share the same transition function. It uses sample transfer from non-target task data to boost its performance on the target task. The MBAIL algorithm builds on top of BAIL+ and MBML (Li et al 2019), where it distills the BAIL+ policies for each individual task into one master policy that performs decently on all tasks. The authors evaluated the proposed methods on the Ant-Dir, Ant-Goal, and HalfCheetahVel environments from Mujoco. The authors compared MBAIL, BAIL+, BAIL, and contextual BCQ. MBAIL consistently performed the best out of the first three, while contextual BCQ performed better on the Ant-Goal env. | SP:c06abfb514f76fecbab62bd5730df2012b3e896a |
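The review above notes that MBAIL distills per-task BAIL+ policies into one master policy. The following hedged sketch illustrates policy distillation in the simplest possible setting: linear policies and a squared-error imitation loss stand in for the neural networks and the actual distillation objective, and every name here is illustrative, not from the paper.

```python
# Hedged sketch of policy distillation, the mechanism the review attributes to
# MBAIL: fit one "student" policy to imitate per-task "teacher" policies on
# their own batches. Linear policies and squared error are stand-ins.

def distill(teachers, batches, lr=0.05, steps=2000):
    """teachers[i](s) -> action; batches[i] is a list of states for task i."""
    w, b = 0.0, 0.0  # student policy: a = w * s + b
    # Label every batch state with its own teacher's action.
    data = [(s, t(s)) for t, B in zip(teachers, batches) for s in B]
    for _ in range(steps):
        gw = gb = 0.0
        for s, a in data:
            err = (w * s + b) - a
            gw += 2.0 * err * s
            gb += 2.0 * err
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b
```

When the teachers agree (as in the test below), the student recovers their shared policy exactly; when they disagree, the student converges to the least-squares compromise over the pooled batches.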
SUMNAS: Supernet with Unbiased Meta-Features for Neural Architecture Search | 1 INTRODUCTION . Recent Neural Architecture Search (NAS) algorithms often train an over-parameterized network called a supernet to rapidly obtain superior sub-models through parameter sharing. In this process, the shared parameters are generally trained to reach a state where better architecture discovery is possible by comparing the sub-models. Supernet training is usually conducted by sampling one or more sub-models and updating them with their gradients. Since parameters are shared, learning that is not biased towards a particular sub-model is crucial in training a supernet. However, a parameter update due to sampling a specific sub-model often causes a bias towards the sampled sub-model; the supernet forgets the knowledge learned from previous sampling because of the biased update, and other sub-models that share the same parameters can be degraded and underrated. Benyahia et al. (2019) referred to this phenomenon as multi-model forgetting, and many current architecture search strategies have been designed without considering it. To mitigate this problem, it is important to make the supernet parameters learn unbiased features that are globally suitable for the sub-models. Therefore, we propose a meta-learning-based approach that enables the supernet to learn unbiased meta-features. We adopt model-agnostic meta-learning (MAML) (Finn et al., 2017; Nichol et al., 2018) during supernet training. MAML is designed to learn meta-features suitable for multiple tasks, and the trained meta-parameters are then quickly adapted to an unseen task through few-shot learning while reusing the learned meta-features. We apply this concept of MAML to supernet training by treating learning for the multiple sub-models in a supernet as the multiple tasks in MAML.
We call the proposed supernet training algorithm Supernet with Unbiased Meta-Features for Neural Architecture Search (SUMNAS), given that it introduces the meta-learning principle to the multi-model forgetting problem. SUMNAS consists of two stages: supernet training and heuristic search with sub-model evaluation. In the supernet training stage, the parameters learn the meta-features for multiple sub-models, which can be considered as MAML's training of meta-features for multiple tasks. The meta-parameters obtained from supernet training can then be used directly, without additional training, for comparison between sub-models during the evaluation phase. To the best of our knowledge, this work is the first to utilize the meta-learning capability—learning unbiased meta-features—for accurately ranking sub-models in a supernet. There have been approaches (Shaw et al., 2019; Lian et al., 2020; Wang et al., 2020; Elsken et al., 2020) that adopt meta-learning principles to train a supernet using various tasks and search for the best architecture for an unseen task with a few data instances, which is the few-shot-learning variant of NAS. They train the model parameters such that the models are robust to unseen datasets. In contrast, we take a fundamentally different view of a task. A task usually refers to a dataset, but we show in Section 3 that each sub-model within a search space can be regarded as a separate “task” that the supernet has to adapt to. We therefore apply meta-learning principles to make the model parameters robust to several different sub-models. We evaluate SUMNAS with qualitative and quantitative experiments on the CIFAR10 and ImageNet datasets. We show that the architecture rankings SUMNAS predicts have a stronger correlation with the true rankings than those of prior NAS algorithms that use a supernet.
In addition, we observe better architecture search performance when an existing search methodology is applied to SUMNAS, and we show that the SUMNAS parameters properly learn meta-features by investigating the performances of the sub-models. 2 PRELIMINARIES . One-shot NAS: In the context of a supernet that shares parameters with its sub-models, the majority of NAS approaches use the supernet as a proxy to indirectly predict the performance of sub-models. With this performance oracle, differentiable techniques (Liu et al., 2018; Cai et al., 2019), reinforcement learning (Pham et al., 2018), and heuristic methods (Guo et al., 2019; Chu et al., 2019) are used to determine the best architecture. Although weight sharing has significantly improved search speed, it is not easy to obtain accurate performance indicators when using a supernet as a proxy for sub-model performance. One of the most difficult challenges in obtaining an accurate performance oracle is the overriding of knowledge learned by previously sampled sub-models. Researchers have recently adopted the practice of repeatedly updating parameters through one or more sampled sub-models (Guo et al., 2019; Chu et al., 2019; Li et al., 2020). However, Benyahia et al. (2019) have shown that catastrophic forgetting occurs in the sequential learning of sampled sub-models. Catastrophic forgetting is a phenomenon in which a neural network forgets previously learned knowledge when learning new information. In sampling-based supernet training, repeated sub-model sampling and training keep introducing new information into the shared parameters. The knowledge learned with previously sampled sub-models is forgotten because subsequently sampled sub-models are trained without considering the former training.
Knowledge overriding causes the predicted performance of the architectures to fluctuate depending on the frequency or the sequence of the sampled architectures, which leads to inaccurately predicted rankings. The forgetting problem in the stochastic training of shared supernet parameters is termed multi-model forgetting. To alleviate this problem, Benyahia et al. (2019) and Zhang et al. (2020) add a regularizer to prevent weights from deviating too far from the posterior distribution learned from previously trained sub-models. Model-agnostic meta-learning: Model-agnostic meta-learning (MAML) is a process of learning an initialization for few-shot learning of an unseen task. Let the i-th task be T_i and θ*_i be the model parameters adapted to T_i. MAML provides an objective function to obtain optimal meta-parameters by minimizing the loss function for each task T_i:

θ_meta = arg min_θ E_{T_i ∼ p(T)} [L_{T_i}(f_{θ*_i})]   (1)

where f is the network function parameterized by θ*_i, and L is the loss function for each task T_i. The L term in Equation 1 allows the θ*_i obtained by few-shot learning from the initialization θ to learn features suitable for each task T_i, and θ_meta minimizes the expected loss over the tasks. As a consequence, the learned meta-features are generalized across tasks, allowing for fast learning of new tasks. Finn et al. (2017) solve the objective function by repeatedly sampling tasks and updating parameters with gradient descent. They also suggest a first-order approximation to avoid the computation of second-order derivatives. Nichol et al. (2018) propose a variation of gradient descent for meta-learning, which is called Reptile. First-order MAML carries out task optimization using only the last gradient calculated in the inner loop, ignoring the second-order derivative. However, Reptile uses the average of the gradients calculated over multiple inner-loop steps to update the parameters.
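The Reptile update described above can be sketched on a toy problem. The following is illustrative code written for this review, not from the paper: the "task" family (minimize (theta − target)^2 for a per-task target) is invented for the example.

```python
import random

# Toy sketch of Reptile (Nichol et al., 2018): adapt the meta-parameter to a
# sampled task with a few inner gradient steps, then move the meta-parameter
# toward the adapted solution. The quadratic task family is illustrative.

def adapt(theta, target, lr=0.1, steps=5):
    """Inner loop: a few gradient steps on L_i(theta) = (theta - target)^2."""
    for _ in range(steps):
        theta -= lr * 2.0 * (theta - target)
    return theta

def reptile(theta, task_targets, meta_lr=0.5, iters=100, seed=0):
    """Outer loop: move meta-parameters toward each task's adapted solution."""
    rng = random.Random(seed)
    for _ in range(iters):
        target = rng.choice(task_targets)
        theta_star = adapt(theta, target)
        theta += meta_lr * (theta_star - theta)  # Reptile meta-update
    return theta
```

With tasks pulling toward +1 and −1, the meta-parameter settles between the two optima rather than committing to either, mirroring how unbiased meta-features avoid favoring any one task.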
These averaged values reflect the generalization of the gradients over the data. 3 SUPERNET WITH UNBIASED META-FEATURES FOR NEURAL ARCHITECTURE SEARCH . To tackle the multi-model forgetting problem, we suggest a new supernet training strategy based on meta-learning, Supernet with Unbiased Meta-Features for Neural Architecture Search (SUMNAS). As mentioned, learning unbiased features that are suitable for the sub-models in the supernet is essential to alleviate knowledge overriding. We take the idea of MAML, which learns meta-features suitable for multiple tasks, and apply it to supernet training so that SUMNAS learns such unbiased meta-features of sub-models. In this section, we describe our approach and explain how it can mitigate the multi-model forgetting problem. We then describe our supernet training algorithm. 3.1 META-FEATURE LEARNING . The approach we introduce to improve robustness against the multi-model forgetting problem is learning unbiased meta-features. SUMNAS has two stages: supernet training and heuristic search with sub-model evaluation, which are similar to existing sampling-based NAS algorithms (Chu et al., 2019; Guo et al., 2019; Li et al., 2020), as presented in Figure 1. First, SUMNAS trains the parameters to learn meta-features (Figure 1 [a]) that are not biased towards a specific sub-model and are suitable for all sub-models, so training them does not overwrite previously trained knowledge. Afterward, the parameters are used to evaluate a specific sub-model during the search (Figure 1 [b]). Vanilla sampling-based supernet training optimizes strictly shared supernet parameters on sampled sub-models, and it can be expressed as follows:

θ_s = arg min_θ E_{f ∼ p(f)} [L_T(f_θ)]   (2)

where f is a sub-model sampled from a distribution of sub-models p(f), and T is a given dataset. The parameter θ is strictly shared among sub-models, and it has to learn the model-specific features of all of the sub-models.
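The objective in Equation 2 corresponds to a simple loop: sample a sub-model, then take a gradient step on the strictly shared parameters. The sketch below is hedged, with `sample_submodel` and `loss_grad` as stand-in callables written for this review, not the paper's code.

```python
# Hedged sketch of Equation 2's training loop: sample a sub-model and update
# the strictly shared parameters with its gradient.

def train_vanilla(theta, sample_submodel, loss_grad, lr=0.01, iters=1000):
    for _ in range(iters):
        f = sample_submodel()      # f ~ p(f)
        g = loss_grad(f, theta)    # gradient of L_T(f_theta) w.r.t. theta
        theta = [p - lr * gi for p, gi in zip(theta, g)]
    return theta
```

Because theta is strictly shared, each sampled sub-model's update can override what earlier sub-models learned, which is exactly the multi-model forgetting discussed above.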
The training for one or more sampled sub-models does not consider other sub-models that have been previously sampled, and simply overrides the knowledge acquired from previous training to optimize the currently sampled sub-models. In our approach, similar to previous works, parameters are shared during supernet training. However, the parameters are trained to learn unbiased meta-features. Here, the unbiased meta-features make fair comparison possible by providing each sub-model with an appropriate level of optimized parameters that are not overfitted to a specific sub-model. Our meta-learning approach is formulated as follows:

θ_s = arg min_θ E_{f ∼ p(f)} [L_T(f_{θ*})]   s.t.   θ* = A_f(L, T, θ)   (3)

where A_f is an adaptation function optimizing the parameters for the sub-model f with a few inputs. With this optimization, the parameters naturally learn meta-features. This idea is based on MAML, which targets few-shot learning of an unseen task using adaptable parameters. MAML trains parameters to learn meta-features suitable for multiple tasks and uses the meta-parameters to initialize few-shot learning. Training meta-parameters for a weight-shared supernet is similar to training with MAML. We map a task used in meta-learning for few-shot learning to a sub-model in NAS using a supernet, as seen by comparing Equation 1 with Equation 3. In other words, the difference is that MAML minimizes the expectation of the loss across tasks, whereas SUMNAS minimizes the expectation of the loss across sub-models. Interestingly, we notice that learning the meta-features in a supernet can be viewed as MAML for each operator in the supernet. We can convert Equation 3 into the following equation:

θ_meta = arg min_θ E_{o ∼ p(o)} [ E_{f ∼ p(f | f_i = o)} [ L_I(f^{i+1:}_{θ*} ∘ o) ] ]   s.t.   I = p(f^{0:i−1}_{θ*}(x), y | {x, y} ∼ T),   θ* = A_f(L, T, θ)   (4)

where f^{i:j} is a composition of sampled operators from the i-th layer to the j-th layer.
A sampled sub-model f^{0:i−1} feeds an intermediate representation f^{0:i−1}(x) to the operator o. There is a distribution over the intermediate representations fed to the operator, determined by the sub-model sampled from p(f). The operator also has a target distribution of outputs to learn, determined by the sampled sub-model. In MAML, the examples fed to a meta-learning model also follow a distribution, namely a task T_i sampled from p(T). Thus, the inputs and targets of both a multi-task meta-learning model and an operator in a supernet follow sampled distributions. We also aim to train an operator that works well on a set of intermediate representations provided by a sampled sub-model, just as MAML learns to work well on a set of examples given by a sampled task. That is, a model and a task in MAML can be mapped to an operator and a distribution of intermediate representations fed to the operator in supernet training. In other words, we can regard the distribution of intermediate representations as the task for the operator, although the task for the operator is parameterized and optimizable, unlike the tasks in MAML. This implies that we can use any MAML algorithm without alteration to solve Equation 4. Moreover, it supports the assumption that the success of meta-learning algorithms for few-shot learning will transfer to NAS. However, supernet training and MAML have a fundamental difference. MAML aims to train parameters for an unseen task, but supernet training aims to obtain parameters for pre-defined sub-models, which are known at training time and participate in the training process. Therefore, after our supernet training, the supernet has knowledge of the sub-models, so additional training, such as fine-tuning for each sub-model, is not mandatory.
In our empirical results , the additional fitting during evaluation may improve the performance of each sub-model , but the ranking performance stays more or less the same . | The paper proposed an improved training strategy for oneshot NAS supernetworks. The key idea is to view the training of each subnetwork as a "task", and then to apply an MAML/Reptile-style meta-learning scheme to ensure efficient cross-task adaptivity. Experiments on NAS-Bench-201 and ImageNet show improved calibration between the supernet’s predictions and the architectures' true rankings. | SP:d45b7399209937f6c1af318573d6106beff29dcf |
SUMNAS: Supernet with Unbiased Meta-Features for Neural Architecture Search | 1 INTRODUCTION . Recent Neural Architecture Search (NAS) algorithms often train an over-parameterized network called a supernet to rapidly obtain superior sub-models through parameter sharing. In this process, the shared parameters are generally trained to reach a state where better architecture discovery is possible by comparing the sub-models. Supernet training is usually conducted by sampling one or more sub-models and updating them with their gradients. Since parameters are shared, learning that is not biased towards a particular sub-model is crucial in training a supernet. However, a parameter update due to sampling a specific sub-model often causes a bias towards the sampled sub-model; the supernet forgets the knowledge learned from previous sampling because of the biased update, and other sub-models that share the same parameters can be degraded and underrated. Benyahia et al. (2019) referred to this phenomenon as multi-model forgetting, and many current architecture search strategies have been designed without considering it. To mitigate this problem, it is important to make the supernet parameters learn unbiased features that are globally suitable for the sub-models. Therefore, we propose a meta-learning-based approach that enables the supernet to learn unbiased meta-features. We adopt model-agnostic meta-learning (MAML) (Finn et al., 2017; Nichol et al., 2018) during supernet training. MAML is designed to learn meta-features suitable for multiple tasks, and the trained meta-parameters are then quickly adapted to an unseen task through few-shot learning while reusing the learned meta-features. We apply this concept of MAML to supernet training by treating learning for the multiple sub-models in a supernet as the multiple tasks in MAML.
We call the proposed supernet training algorithm Supernet with Unbiased Meta-Features for Neural Architecture Search (SUMNAS), given that it introduces the meta-learning principle to the multi-model forgetting problem. SUMNAS consists of two stages: supernet training and heuristic search with sub-model evaluation. In the supernet training stage, the parameters learn the meta-features for multiple sub-models, which can be considered as MAML's training of meta-features for multiple tasks. The meta-parameters obtained from supernet training can then be used directly, without additional training, for comparison between sub-models during the evaluation phase. To the best of our knowledge, this work is the first to utilize the meta-learning capability—learning unbiased meta-features—for accurately ranking sub-models in a supernet. There have been approaches (Shaw et al., 2019; Lian et al., 2020; Wang et al., 2020; Elsken et al., 2020) that adopt meta-learning principles to train a supernet using various tasks and search for the best architecture for an unseen task with a few data instances, which is the few-shot-learning variant of NAS. They train the model parameters such that the models are robust to unseen datasets. In contrast, we take a fundamentally different view of a task. A task usually refers to a dataset, but we show in Section 3 that each sub-model within a search space can be regarded as a separate “task” that the supernet has to adapt to. We therefore apply meta-learning principles to make the model parameters robust to several different sub-models. We evaluate SUMNAS with qualitative and quantitative experiments on the CIFAR10 and ImageNet datasets. We show that the architecture rankings SUMNAS predicts have a stronger correlation with the true rankings than those of prior NAS algorithms that use a supernet.
In addition, we observe better architecture search performance when an existing search methodology is applied to SUMNAS, and we show that the SUMNAS parameters properly learn meta-features by investigating the performances of the sub-models. 2 PRELIMINARIES . One-shot NAS: In the context of a supernet that shares parameters with its sub-models, the majority of NAS approaches use the supernet as a proxy to indirectly predict the performance of sub-models. With this performance oracle, differentiable techniques (Liu et al., 2018; Cai et al., 2019), reinforcement learning (Pham et al., 2018), and heuristic methods (Guo et al., 2019; Chu et al., 2019) are used to determine the best architecture. Although weight sharing has significantly improved search speed, it is not easy to obtain accurate performance indicators when using a supernet as a proxy for sub-model performance. One of the most difficult challenges in obtaining an accurate performance oracle is the overriding of knowledge learned by previously sampled sub-models. Researchers have recently adopted the practice of repeatedly updating parameters through one or more sampled sub-models (Guo et al., 2019; Chu et al., 2019; Li et al., 2020). However, Benyahia et al. (2019) have shown that catastrophic forgetting occurs in the sequential learning of sampled sub-models. Catastrophic forgetting is a phenomenon in which a neural network forgets previously learned knowledge when learning new information. In sampling-based supernet training, repeated sub-model sampling and training keep introducing new information into the shared parameters. The knowledge learned with previously sampled sub-models is forgotten because subsequently sampled sub-models are trained without considering the former training.
Knowledge overriding causes the predicted performance of the architectures to fluctuate depending on the frequency or the sequence of the sampled architectures, which leads to inaccurately predicted rankings. The forgetting problem in the stochastic training of shared supernet parameters is termed multi-model forgetting. To alleviate this problem, Benyahia et al. (2019) and Zhang et al. (2020) add a regularizer to prevent weights from deviating too far from the posterior distribution learned from previously trained sub-models. Model-agnostic meta-learning: Model-agnostic meta-learning (MAML) is a process of learning an initialization for few-shot learning of an unseen task. Let the i-th task be T_i and θ*_i be the model parameters adapted to T_i. MAML provides an objective function to obtain optimal meta-parameters by minimizing the loss function for each task T_i:

θ_meta = arg min_θ E_{T_i ∼ p(T)} [L_{T_i}(f_{θ*_i})]   (1)

where f is the network function parameterized by θ*_i, and L is the loss function for each task T_i. The L term in Equation 1 allows the θ*_i obtained by few-shot learning from the initialization θ to learn features suitable for each task T_i, and θ_meta minimizes the expected loss over the tasks. As a consequence, the learned meta-features are generalized across tasks, allowing for fast learning of new tasks. Finn et al. (2017) solve the objective function by repeatedly sampling tasks and updating parameters with gradient descent. They also suggest a first-order approximation to avoid the computation of second-order derivatives. Nichol et al. (2018) propose a variation of gradient descent for meta-learning, which is called Reptile. First-order MAML carries out task optimization using only the last gradient calculated in the inner loop, ignoring the second-order derivative. However, Reptile uses the average of the gradients calculated over multiple inner-loop steps to update the parameters.
These averaged values reflect the generalization of the gradients over the data. 3 SUPERNET WITH UNBIASED META-FEATURES FOR NEURAL ARCHITECTURE SEARCH . To tackle the multi-model forgetting problem, we suggest a new supernet training strategy based on meta-learning, Supernet with Unbiased Meta-Features for Neural Architecture Search (SUMNAS). As mentioned, learning unbiased features that are suitable for the sub-models in the supernet is essential to alleviate knowledge overriding. We take the idea of MAML, which learns meta-features suitable for multiple tasks, and apply it to supernet training so that SUMNAS learns such unbiased meta-features of sub-models. In this section, we describe our approach and explain how it can mitigate the multi-model forgetting problem. We then describe our supernet training algorithm. 3.1 META-FEATURE LEARNING . The approach we introduce to improve robustness against the multi-model forgetting problem is learning unbiased meta-features. SUMNAS has two stages: supernet training and heuristic search with sub-model evaluation, which are similar to existing sampling-based NAS algorithms (Chu et al., 2019; Guo et al., 2019; Li et al., 2020), as presented in Figure 1. First, SUMNAS trains the parameters to learn meta-features (Figure 1 [a]) that are not biased towards a specific sub-model and are suitable for all sub-models, so training them does not overwrite previously trained knowledge. Afterward, the parameters are used to evaluate a specific sub-model during the search (Figure 1 [b]). Vanilla sampling-based supernet training optimizes strictly shared supernet parameters on sampled sub-models, and it can be expressed as follows:

θ_s = arg min_θ E_{f ∼ p(f)} [L_T(f_θ)]   (2)

where f is a sub-model sampled from a distribution of sub-models p(f), and T is a given dataset. The parameter θ is strictly shared among sub-models, and it has to learn the model-specific features of all of the sub-models.
The training for one or more sampled sub-models does not consider other sub-models that have been previously sampled, and simply overrides the knowledge acquired from previous training to optimize the currently sampled sub-models. In our approach, similar to previous works, parameters are shared during supernet training. However, the parameters are trained to learn unbiased meta-features. Here, the unbiased meta-features make fair comparison possible by providing each sub-model with an appropriate level of optimized parameters that are not overfitted to a specific sub-model. Our meta-learning approach is formulated as follows:

θ_s = arg min_θ E_{f ∼ p(f)} [L_T(f_{θ*})]   s.t.   θ* = A_f(L, T, θ)   (3)

where A_f is an adaptation function optimizing the parameters for the sub-model f with a few inputs. With this optimization, the parameters naturally learn meta-features. This idea is based on MAML, which targets few-shot learning of an unseen task using adaptable parameters. MAML trains parameters to learn meta-features suitable for multiple tasks and uses the meta-parameters to initialize few-shot learning. Training meta-parameters for a weight-shared supernet is similar to training with MAML. We map a task used in meta-learning for few-shot learning to a sub-model in NAS using a supernet, as seen by comparing Equation 1 with Equation 3. In other words, the difference is that MAML minimizes the expectation of the loss across tasks, whereas SUMNAS minimizes the expectation of the loss across sub-models. Interestingly, we notice that learning the meta-features in a supernet can be viewed as MAML for each operator in the supernet. We can convert Equation 3 into the following equation:

θ_meta = arg min_θ E_{o ∼ p(o)} [ E_{f ∼ p(f | f_i = o)} [ L_I(f^{i+1:}_{θ*} ∘ o) ] ]   s.t.   I = p(f^{0:i−1}_{θ*}(x), y | {x, y} ∼ T),   θ* = A_f(L, T, θ)   (4)

where f^{i:j} is a composition of sampled operators from the i-th layer to the j-th layer.
A sampled sub-model f^{0:i−1} feeds an intermediate representation f^{0:i−1}(x) to the operator o. There is a distribution over the intermediate representations fed to the operator, determined by the sub-model sampled from p(f). The operator also has a target distribution of outputs to learn, determined by the sampled sub-model. In MAML, the examples fed to a meta-learning model also follow a distribution, namely a task T_i sampled from p(T). Thus, the inputs and targets of both a multi-task meta-learning model and an operator in a supernet follow sampled distributions. We also aim to train an operator that works well on a set of intermediate representations provided by a sampled sub-model, just as MAML learns to work well on a set of examples given by a sampled task. That is, a model and a task in MAML can be mapped to an operator and a distribution of intermediate representations fed to the operator in supernet training. In other words, we can regard the distribution of intermediate representations as the task for the operator, although the task for the operator is parameterized and optimizable, unlike the tasks in MAML. This implies that we can use any MAML algorithm without alteration to solve Equation 4. Moreover, it supports the assumption that the success of meta-learning algorithms for few-shot learning will transfer to NAS. However, supernet training and MAML have a fundamental difference. MAML aims to train parameters for an unseen task, but supernet training aims to obtain parameters for pre-defined sub-models, which are known at training time and participate in the training process. Therefore, after our supernet training, the supernet has knowledge of the sub-models, so additional training, such as fine-tuning for each sub-model, is not mandatory.
In our empirical results , the additional fitting during evaluation may improve the performance of each sub-model , but the ranking performance stays more or less the same . | This work targets better ranking performance after supernet training in NAS. The authors leverage meta-learning to make the shared supernet weights adaptive to randomly sampled subnetworks. Experiments are conducted on NAS-Bench-201 and MobileNet space. | SP:d45b7399209937f6c1af318573d6106beff29dcf |
Proof Artifact Co-Training for Theorem Proving with Language Models | 1 INTRODUCTION . Deep learning-driven automated theorem proving in large libraries of formalized mathematics ( henceforth “ neural theorem proving ” ) has been the focus of increased attention in recent years . Labeled data for imitation learning of theorem proving is scarce—formalization is notoriously labor-intensive , with an estimated cost of 2.5 man-years per megabyte of formalized mathematics ( Wiedijk , 2000 ) , and complex projects require years of labor from human specialists . Within a fixed corpus of ( possibly unproven ) theorem statements , it is possible to augment a seed dataset of human proofs with new successful trajectories using reinforcement learning or expert iteration . However , for some large models this can be quite computationally intensive , and without a way to expand the curriculum of theorems , the agent will inevitably saturate and suffer from data starvation . Data scarcity is a particularly thorny obstruction for applying large language models ( LLMs ) to neural theorem proving . LLMs have achieved spectacular success in data-rich regimes such as plain text ( Brown et al. , 2020 ) , images ( Dosovitskiy et al. , 2020 ) , and joint text-image modeling ( Radford et al . ) , and the performance of decoder-only Transformers has been empirically shown to obey scaling power laws in model and data size ( Henighan et al. , 2020 ) . However , existing datasets of human proof steps for neural theorem proving are extremely small and exist at scales at which overfitting occurs extremely rapidly , disrupting the scaling of performance with respect to model size ( Kaplan et al. , 2020 ) . We make two contributions towards addressing the problem of data scarcity in the context of formal mathematics . 
First , we introduce PACT ( Proof Artifact Co-Training ) , a general methodology for extracting self-supervised auxiliary tasks for jointly training a language model alongside a tactic prediction objective for interactive theorem proving . Second , we present LEANSTEP , a collection of datasets and a machine learning environment for the Lean 3 theorem prover with support for PACT , supervised learning of tactic prediction , theorem proving evaluation , and reinforcement learning . We train large language models on these data and demonstrate that PACT significantly improves theorem proving success rate on a held-out suite of test theorems , from 32 % to 48 % . We then embark on a careful study of the effects of pre-training vs. co-training and show that PACT combined with WebMath pre-training ( Polu & Sutskever , 2020 ) achieves the best validation loss and theorem proving success rate . Finally , on an out-of-distribution collection of thousands of theorems ( some involving novel definitions ) added to Lean ’ s mathematical library after we extracted our train/test data , we achieve a theorem proving success rate of 37 % , suggesting strong generalization and usefulness at the frontier of formalized mathematics . 2 BACKGROUND AND RELATED WORK . LEAN Lean is an interactive theorem prover and functional programming language ( de Moura et al. , 2015 ) . It has an extremely active community and is host to some of the most sophisticated formalized mathematics in the world , including scheme theory ( Buzzard et al. , 2019 ) , forcing ( Han & van Doorn , 2020 ) , perfectoid spaces ( Buzzard et al. , 2020 ) , and condensed mathematics ( Scholze , 2020 ) . Lean ’ s foundational logic is a dependent type theory called the calculus of inductive constructions ( Pfenning & Paulin-Mohring , 1989 ) . This design means that terms , types and proofs are all represented with a single datatype called an expression . 
A proof term is a Lean expression whose type is a proposition, i.e. a theorem. This proof term serves as a checkable artifact for verifying the proposition. Lean uses a small, trusted kernel to verify proof terms. The primary repository of formalized mathematics in Lean is mathlib (mathlib, 2020). At the time of writing, 140 contributors have added almost 500,000 lines of code; mathlib contains over 46,000 formalized lemmas backed by over 21,000 definitions, covering topics such as algebraic geometry, computability, measure theory, and category theory. The range of topics and the monolithic, unified organization of mathlib make it an excellent foundation for a neural theorem proving dataset. MACHINE LEARNING IN INTERACTIVE THEOREM PROVING In a tactic-based interactive theorem prover (ITP) such as Lean, a proof is a list of tactics, i.e. small proof-term-generating programs. Tactics can be simple one-word commands, e.g. refl, or be composed of many nested parts, e.g.

simpa [le_antisymm_iff, norm_nonneg] using @norm_eq_zero α _ g

Here the brackets enclose a list of simplifier rules (which often are just lemmas from the library), and @norm_eq_zero α _ g is a proof term applying the lemma norm_eq_zero to the local variables α and g. Other ML and neural theorem provers for tactic-based ITPs take one of two approaches to tactic generation. TacticToe (Gauthier et al., 2018) for HOL4 and Tactician (Blaauwbroek et al., 2020) for Coq use k-NN to select similar tactics in the training set and apply modifications to the result, e.g. swapping the tactic variables with those found in the local context. HOList/DeepHOL (Bansal et al., 2019c;a; Paliwal et al., 2020) for HOL Light; TacticZero (Wu et al., 2021a) for HOL4; and CoqGym/ASTactic (Yang & Deng, 2019) and ProverBot9001 (Sanchez-Stern et al., 2020) for Coq hard-code the DSL for every tactic command.
The model chooses a tactic command, and then fills in the tactic arguments using specialized argument selectors (such as a lemma selector, a local hypothesis selector, and/or a variable selector). None of these selectors currently synthesize arbitrary terms. This prevents the tactic synthesis from constructing tactics with proof terms, such as @norm_eq_zero α _ g, or directly proving an existential, e.g. ∃ (x : R), x + 3 = 0, by supplying the witnessing term -3. Directly applying generative language modeling to tactic generation allows this setup to be considerably simplified. Our tactic generator is able to synthesize tactics of any form found in mathlib including, for example, the simpa example above as a one-line proof of a test theorem, even though the string @norm_eq_zero does not occur in our dataset. (See more examples in the appendix.) We leave as future work the possibility of re-integrating specialized components, e.g. lemma selection, found in other works (possibly as, say, a source of additional prompts for the language model). Language models have also been explored in the first-order ITP Mizar for conjecturing and proof synthesis (Urban & Jakubuv, 2020). While their work shows the promise of such approaches, it is not intended as a complete end-to-end theorem prover. For Metamath, which does not use tactics, language modeling approaches have been quite successful. Holophrasm (Whalen, 2016), MetaGen (Wang & Deng, 2020), and GPT-f (Polu & Sutskever, 2020) all use RNNs or Transformers to generate proof steps. Indeed, our paper builds on the work of Metamath GPT-f (Polu & Sutskever, 2020) (MM GPT-f). Whereas MM GPT-f trained primarily on the Metamath proof step objective (i.e.
guessing the next lemma to be applied to a goal , which is similar to our NEXTLEMMA task in Section 3.2 ) , we co-train on a diverse suite of self-supervised tasks extracted from Lean proof terms and demonstrate significant improvements in theorem proving performance when doing so . This is our main result . REASONING WITH TRANSFORMERS Besides theorem proving , a number of recent papers have shown that language models , especially Transformers , are capable of something like mathematical and logical reasoning in integration ( Lample & Charton , 2020 ) , differential equations ( Charton et al. , 2020 ) , Boolean satisfiability ( Finkbeiner et al. , 2020 ) , and inferring missing proof steps ( Li et al. , 2021 ) . A closely-related vein of work has shown that pre-training Transformers on data engineered to reflect inductive biases conducive to mathematical reasoning is beneficial for downstream mathematical reasoning tasks ( Rabe et al. , 2020 ; Wu et al. , 2021b ) . Our work both builds on and departs from these ideas in several ways . Unlike skip-tree training ( Rabe et al. , 2020 ) , which focuses solely on predicting masked subterms of theorem statements , PACT derives its self-supervised training data from far more complex proofs . Unlike LIME ( Wu et al. , 2021b ) , which uses purely synthetic data and is presented as a pre-training methodology , our self-supervised tasks are extracted from non-synthetic human proofs . Moreover , we show that not only are Transformers capable of performing well on auxiliary tasks gathered from low-level proof artifact data , but that we can directly leverage this low-level data by jointly training a language model to greatly improve its performance at high-level theorem proving . MACHINE LEARNING WITH PROOF ARTIFACTS The idea of mining low-level proof artifacts was previously explored by Kaliszyk and Urban in the context of automated lemma extraction ( Kaliszyk & Urban , 2015b ; Kaliszyk et al. , 2015 ) . 
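The co-training of tactic prediction with self-supervised auxiliary tasks can be pictured as a single mixed training stream for one language model. A minimal task-mixer sketch, with the caveat that the prompt keywords (GOAL, PROOFSTEP, NEXTLEMMA, PREDICTTYPE), the example pools, and the mixing weights are illustrative assumptions — the paper's exact serialization and weighting are not given in this excerpt:

```python
import random

random.seed(0)

# Hypothetical (prompt, completion) pools for a few co-training tasks.
tasks = {
    "proofstep": [("GOAL n : ℕ ⊢ n + 0 = n PROOFSTEP", "simp")],
    "next_lemma": [("GOAL n : ℕ ⊢ n + 0 = n NEXTLEMMA", "nat.add_zero")],
    "type_prediction": [("TERM nat.add_zero PREDICTTYPE", "∀ (n : ℕ), n + 0 = n")],
}

def cotraining_stream(tasks, weights, n_examples):
    """Yield a mixed stream: each example's task is drawn according to `weights`."""
    names = list(tasks)
    w = [weights[name] for name in names]
    for _ in range(n_examples):
        task = random.choices(names, weights=w)[0]
        prompt, completion = random.choice(tasks[task])
        yield task, prompt + " " + completion

# Weight the main objective more heavily than the auxiliary tasks (an assumption).
stream = list(cotraining_stream(
    tasks, {"proofstep": 2.0, "next_lemma": 1.0, "type_prediction": 1.0}, 8))
```

The point of co-training, as opposed to pre-training, is that the auxiliary examples stay in the stream throughout training rather than being seen only once before fine-tuning.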
It has also been previously observed that training on fully elaborated Coq terms ( Nie et al. , 2020 ) helps with a downstream theorem naming task . However , similar to previous work on skip-tree training , their dataset focuses solely on theorem statements , i.e . types , does not cover the far more complex proof terms , and does not evaluate the effect of such training on theorem proving evaluations . While there exist environments and datasets for other formal mathematics libraries ( Kaliszyk et al. , 2017 ; Li et al. , 2021 ; Huang et al. , 2018 ; Kaliszyk & Urban , 2015a ) , LEANSTEP is the first and only tactic proof dataset for the Lean theorem prover . This makes available a large set of formal mathematical data to researchers covering a diverse and deep spectrum of pure mathematics . Moreover , LEANSTEP is unique in that it contains both high-level human-written tactics as well as kernel-level proof terms , which enables the extraction of self-supervised tasks for PACT ( Section 3.2 ) . 3 THE LEANSTEP DATASETS AND MACHINE LEARNING ENVIRONMENT . 3.1 HUMAN TACTIC PROOF STEPS . Tactics in Lean are metaprograms ( Ebner et al. , 2017 ) , which can construct Lean expressions , such as proof terms . A tactic state which tracks the list of open goals and other metadata ( like the partial proof term constructed so far ) is threaded through each tactic invocation . Lean has special support for treating tactics as an extensible domain-specific language ( DSL ) ; this DSL is how Lean is typically used as an interactive theorem prover . The DSL amounts to a linear chain of comma-separated invocations . The Lean proof step task is to predict the next tactic given this goal state . We refer the reader to the appendix for examples . Our human tactic proof step dataset consists of source-target pairs of strings , one for each tactic invocation in the Lean core library and in mathlib . The source string is the pretty-printed tactic state . 
The target string is the tactic invocation as entered by a human author of the source code . This data is gathered by hooking into the Lean parser and Lean ’ s compilation process . We refer to the task of predicting the next human tactic proof step given a tactic state as the proofstep objective . | To deal with data scarcity in context of learning large Transformer language models for theorem proving, this work proposes a methodology (called PACT) for extracting auxiliary task data for joint training alongside the tactic prediction objective. The methodology has been applied to Lean proof assistant. PACT significantly improves the theorem proving success rate on held-out suite of test theorems. | SP:4a83aeec9e7a387683678c056e71a70d238511ed |
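Concretely, one datum of the human proof-step dataset can be pictured as a source-target string pair. The field names and the single-string rendering below are assumptions for illustration, not the dataset's exact format:

```python
# One illustrative human proof-step datum: the source is the pretty-printed
# tactic state, the target is the tactic the human author entered next.
datum = {
    "source": "n : ℕ\n⊢ n + 0 = n",   # pretty-printed tactic state
    "target": "simp",                  # next tactic invocation
}

def to_training_string(d):
    """Render a (state, tactic) pair as one LM training string (assumed layout)."""
    return "GOAL " + d["source"].replace("\n", " ") + " PROOFSTEP " + d["target"]

example = to_training_string(datum)
```

At inference time the model would be prompted with everything up to and including the PROOFSTEP marker and asked to complete the tactic.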
The target string is the tactic invocation as entered by a human author of the source code . This data is gathered by hooking into the Lean parser and Lean ’ s compilation process . We refer to the task of predicting the next human tactic proof step given a tactic state as the proofstep objective . | This paper addresses the general setup of using transformer models for interactive theorem proving (ITP) tasks. The ITP engine considered here is Lean. The contribution of the paper is a data augmentation method. This is achieved by mining low level artifacts from a given dataset of Lean proofs. This includes data extracted from additional type information inferred by Lean while processing the given proofs. Several prediction tasks are formed for this additional data, to allow pre-training and co-training in combination with the existing WebMath dataset. Results show that the additional datasets extracted in this way aid substantively when used for pretraining and cotraining. | SP:4a83aeec9e7a387683678c056e71a70d238511ed |
SketchODE: Learning neural sketch representation in continuous time | 1 INTRODUCTION Drawing-based communications such as sketches, writing, and diagrams come naturally to humans and have been used in some form since ancient times. Modeling such data is becoming an increasingly important and topical challenge area for machine learning systems aiming to interpret and simulate human creative expression. Such chirographic structures are challenging to interpret and generate due to their complex and unstructured nature. However, recent progress has been strong, thanks to advances in learning latent representations for sequences (Bowman et al., 2016; Graves, 2013; Srivastava et al., 2015). In particular, with the advent of variational sequence-to-sequence generative models (Srivastava et al., 2015; Bowman et al., 2016) and the collection of large-scale datasets (Gervais et al., 2020; Ha & Eck, 2018; Ge et al., 2021), we have seen a surge of advancements (Ha & Eck, 2018; Aksan et al., 2020) in this direction. Nevertheless, a key missing link is the fact that drawing is intrinsically continuous in time, as is the resulting drawn artefact, when considered as a sequence rather than a raster image. The dominant approach to modeling free-hand sketches, popularized by Ha & Eck (2018), has been to use a discrete-step recurrent network for capturing the latent process p(h[n] | h[n-1]) with discrete step n, and a complex observation model p(s[n] | h[n]) to explain local structure. The large body of approaches built upon this have a critical flaw: they ignore the very nature of the data, i.e. drawn structures are inherently continuous in time (refer to Fig. 1). While a few attempts have been made to use continuous-time constructs like Bézier curves (Das et al., 2020; 2021), their inconvenient mathematical form affects both representational power and training dynamics.
The reason for such an absence of continuous-time models is the lack of fundamental tools for handling continuous-time data. Lately, however, the introduction of Neural Ordinary Differential Equations by Chen et al. (2018), followed by numerous extensions (Dupont et al., 2019; Yildiz et al., 2019; Kidger et al., 2020), has opened the possibility of building powerful models that can natively represent continuous-time data such as handwriting and sketches. In this paper, we represent chirographic drawing data like handwriting and sketches as a continuous-time function s(t) and model the underlying latent process also as a continuous-time latent function h(t). Specifically, we propose a framework for capturing the (potentially higher-order) derivative of the latent process, $\frac{d^K h(t)}{dt^K}$, with parameterized neural networks. At inference, a solution trajectory ŝ(t) is computed from a given Initial Value Problem (IVP) that includes a learned ODE dynamics and an initial hidden state $h_0 := h(t = 0)$. An obvious advantage of the functional representation is its arbitrary spatial resolution, i.e. we can retrieve a structure at any spatial resolution by sampling the function at an appropriate rate. Moreover, with a functional representation, we can systematically augment the representation along time with additional properties (e.g. line thickness, color, etc.). So far, parameterized ODE dynamics models have largely been treated as a "replacement for ResNets" (Massaroli et al., 2020), where the intermediate states are of little importance. While the capabilities of ODE models are beginning to be tested on time-series data (Kidger et al., 2020; Morrill et al., 2021), they still remain largely unexplored. Following the introduction of Neural ODEs by Chen et al. (2018), methods have been proposed to regularize the solution trajectory to be as simple as possible (Kelly et al., 2020; Finlay et al., 2020).
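The "potentially higher-order" derivative modeling above rests on the standard reduction of a K-th order ODE to a first-order system by stacking the lower derivatives into one state vector. A minimal sketch with K = 2, where the dynamics is a fixed linear map (a harmonic oscillator) rather than a learned network:

```python
import numpy as np

# A K-th order latent ODE  d^K h / dt^K = G(t, h, h', ..., h^(K-1))  can be
# solved with any first-order ODE solver by stacking the lower derivatives
# into one state. Here K = 2 and G is fixed purely for illustration; in a
# learned model G would be a parameterized neural network.

def G(t, h, dh):
    return -h                                   # d^2 h / dt^2 = -h

def first_order_rhs(t, state):
    h, dh = state
    return np.array([dh, G(t, h, dh)])          # d/dt [h, h'] = [h', G(t, h, h')]

# Euler-integrate the stacked system from the IVP h(0) = 1, h'(0) = 0.
state, dt = np.array([1.0, 0.0]), 0.001
n_steps = int(np.pi / dt)
for step in range(n_steps):
    state = state + dt * first_order_rhs(step * dt, state)
# state[0] now approximates h(pi) = cos(pi) = -1
```

The same stacking trick is what lets a single first-order augmented state carry higher-order latent dynamics.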
However, time-series data such as chirography are entirely the opposite. The dynamics model, and consequently the solution trajectory, should be as flexible as possible in order to learn the high degree of local and global variation with time often exhibited in such data. We increase the flexibility of Neural ODE models by introducing a new class of parameterized dynamics functions. Our experiments show that this is crucial in order to model chirographic data with satisfactory fidelity. We also introduce a new data augmentation method particularly geared towards ODE models and continuous-time sequences. Finally, we propose a deterministic autoencoder with a global latent variable z that exhibits some generative-model properties by virtue of inherent continuity. Our final model, SketchODE, is similar to SketchRNN (Ha & Eck, 2018) in terms of high-level design but lifts it to the realm of continuous time. More generally, our model is the first continuous-time Seq2Seq architecture. We explore some of the noteworthy features that differentiate SketchODE from discrete-time Seq2Seq analogues. An inductive bias for continuity: Discrete-time sequence models (Ha & Eck, 2018) need to use their capacity to model both the global structure and the temporal continuity of the data. ODE models, on the other hand, have a strong intrinsic architectural bias towards temporal continuity. Sequences generated using a well-behaved ODE dynamics function are guaranteed to be continuous. Since they need not learn the continuity bias from scratch, ODE models are more data-efficient and are able to use the majority of their capacity for modelling higher-level structures. We even demonstrate that our SketchODE supports meaningful one-shot learning when fine-tuned.
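The paper's augmentation method is not detailed in this excerpt. As a generic illustration of how continuous-time sequences can be augmented (this is not the authors' technique), one can resample a trajectory under a random smooth, monotone time warp — valid precisely because the data is treated as a function of time:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_time_warp(ts, strength=0.2):
    """Monotone warp of [0, 1]: t -> t + strength * sin(pi * t) * u, u ~ U(-1, 1).

    With strength < 1/pi the derivative 1 + strength*pi*cos(pi*t)*u stays
    positive, so the warp is invertible and the endpoints are fixed.
    """
    u = rng.uniform(-1.0, 1.0)
    return ts + strength * np.sin(np.pi * ts) * u

ts = np.linspace(0.0, 1.0, 101)
# A toy 2-D "sketch": one loop of a circle, standing in for a drawn trajectory.
s = np.stack([np.cos(2 * np.pi * ts), np.sin(2 * np.pi * ts)], axis=1)

warped_ts = random_time_warp(ts)
# Re-evaluate the trajectory at the warped times via linear interpolation.
s_aug = np.stack([np.interp(warped_ts, ts, s[:, k]) for k in range(2)], axis=1)
```

The augmented sample traces the same curve with a different speed profile, which is exactly the kind of invariance a continuous-time model should tolerate.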
Deterministic Autoencoding with a Structured Latent Space: A surprising property of Seq2Seq ODE models is the ability to perform latent space interpolation and sampling without the necessity of imposing a prior distribution p(z) on the variational posterior qφ(z | s). This property is also a consequence of the latent-to-output mapping being continuous. 2 RELATED WORK. With the advent of recurrent neural networks, sequential time-series models have seen significant adoption in a variety of applications. Video (Srivastava et al., 2015) and natural language (Bowman et al., 2016) were two early applications to have seen significant success. A different but important time-series modality is chirographic data like handwriting (Graves, 2013) and sketches (Ha & Eck, 2018), which gained traction only recently. Subsequent developments also address data modalities that are not free-flowing, like fonts (Lopes et al., 2019) and icons (Carlier et al., 2020), which require slightly different models. Mirroring developments in natural language processing (Vaswani et al., 2017), Transformers are now replacing (Carlier et al., 2020) recurrent networks in such models. Following the seminal work of Ha & Eck (2018), the primary representation for chirographic data has been discrete sequences of waypoints or tokens from specialized Domain Specific Languages like SVG (Scalable Vector Graphics), with a minority continuing to make use of standard raster-graphics architectures (Ge et al., 2021). Overall, few studies have attempted to develop better representations using specialized functional forms, with Aksan et al. (2020) learning stroke embeddings and Das et al. (2020; 2021) directly modeling strokes as parametric Bézier curves. However, to the best of our knowledge, modelling chirographic data as generic functions has not been attempted so far. The Neural ODE (Chen et al.
, 2018 ) framework provides a credible tool for natively modelling continuous-time functions using dynamical systems . Since their inception , Neural ODEs have sparked a wide range of discussions and developments ( Massaroli et al. , 2020 ) , mostly in the theoretical domain . Neural ODEs have been extended to work with latent states ( Dupont et al. , 2019 ) , to define generative models ( Song et al. , 2021 ) , and to have external data control ( Kidger et al. , 2020 ) . While practical training of ODE models remains a challenge , a significant amount of work has been dedicated to developing engineering techniques ( Finlay et al. , 2020 ; Grathwohl et al. , 2019 ; Kelly et al. , 2020 ; Poli et al. , 2020 ) for faster training convergence and better regularization . Despite flourishing theoretical interest , due to high training complexity , applied works have so far been limited to a few domains including Reinforcement Learning ( Du et al. , 2020 ) , reasoning in visual question answering ( Kim & Lee , 2019 ) , and physics-inspired system identification ( Lutter et al. , 2019 ; Greydanus et al. , 2019 ; Finzi et al. , 2020 ) . 3 NEURAL ORDINARY DIFFERENTIAL EQUATIONS ( NEURAL ODE ) . Neural ODE , proposed by Chen et al . ( 2018 ) , provides a framework for modelling an inherently continuous-time process by means of its time derivative . Given a continuous-time process s ( t ) ∈ R^d over a defined time interval [ t0 , t1 ] , we can approximate its underlying true dynamics by learning the parameters of a neural network FΘ ( · ) acting as a proxy for the same : ṡ ( t ) = FΘ ( t , s ) ∈ R^d . ( 1 ) The time derivative of a vector-valued function , i.e . ṡ ( t ) , defines a vector field over the state space R^d which guides the evolution of the process over time . Given the initial state s0 : = s ( t = t0 ) and the learned set of parameters Θ∗ , the process can be recovered by solving an Initial Value Problem ( IVP ) using any Differential Equation solver ( e.g .
RK4 method ) : s ( t ) = s0 + ∫_{t0}^{t} FΘ∗ ( t , s ) dt , ∀t ∈ [ t0 , t1 ] . Given a set of time-varying functions , one can learn dynamics FΘ capturing high-level concepts . The learning algorithm provided by Chen et al . ( 2018 ) makes it practical to train such models with constant memory cost ( w.r.t . the time horizon ) in both the forward and backward pass . 4 FRAMEWORK OVERVIEW . We borrow the notation of Section 3 and denote chirographic structures as continuous functions of time s ( t ) defined within [ t0 , t1 ] . The exact form of s ( t ) must include the running 2D coordinates over time and may include other properties like pen-state ( whether ink is visible at time t ) etc . In principle , one could directly model ṡ ( t ) with a Neural ODE as in Eq . 1 ( see Fig . 2 ) . However , such a model would possess low representational power resulting in underfitting , and furthermore would be incapable of representing self-overlaps naturally exhibited by chirographic data . 4.1 NEURAL ODE AS DECODER . Augmented ODE . We first augment ( Dupont et al. , 2019 ) the original state s ( t ) with the underlying hidden process h ( t ) in the same time interval . Unlike discrete-time analogues like recurrent networks , which model discrete transitions of the hidden state followed by an observation model p ( s|h ) , we model the “ co-evolution ” of the hidden process along with data using a dynamical system defined over the joint space a ( t ) : = [ s ( t ) , h ( t ) ]^T . Augmented ODEs ( Dupont et al. , 2019 ) improve representational power by lifting the vanilla ODE of Eq . 1 to a higher dimension , allowing the data trajectory to self-overlap in time . In addition , an augmented latent state h ( t ) provides a principled way to control the trajectory by simply solving the dynamics with a different initial condition h0 . Higher-Order Dynamics .
To further increase representational power , we capture the second-order derivative ä ( t ) with a parameterized neural network FΘ ( · ) as in Yildiz et al . ( 2019 ) . This formulation , however , leads to the requirement of an extra initial condition ȧ0 : = ȧ ( t = t0 ) : [ a ( t ) , ȧ ( t ) ]^T = [ a0 , ȧ0 ]^T + ∫_{t0}^{t} [ ȧ ( t ) , FΘ ( · ) ]^T dt . ( 2 ) Traditionally , the quantities a ( t ) and ȧ ( t ) are termed the “ position ” and “ velocity ” , respectively , of the dynamical system represented by ä ( t ) = FΘ ( · ) . Inspired by SketchRNN ( Ha & Eck , 2018 ) , we simplify the model by dropping the position component a ( t ) as a dependency of the dynamics function , leading to a dynamics of ä ( t ) = FΘ ( ȧ ( t ) , t ) . Conditioning . Our model so far is deterministic given a particular choice of initial condition . In order to generate different samples , we need to condition the decoding on a latent vector . Thus , we introduce the following architectural changes : 1 . We define a global latent vector z for each sample and compute the hidden part of initial conditions by projecting it through Ldec , as H0 : = [ h0 , ḣ0 ]^T = Ldec ( z ) . Without loss of generality , we fix the visible part of the initial state , i.e . s0 and ṡ0 , to be constants . 2 . We further include z directly as a fixed parameter to the dynamics function . This is a specific form of data-controlled dynamics ( Massaroli et al. , 2020 ) . Please see Appendix B for implementation details . The final second-order dynamical system represented by our decoder is therefore ä ( t ) = FΘ ( ȧ ( t ) , t , z ) . ( 3 ) Computing the forward pass ( Eq . 2 ) requires knowledge of the exact value of z . The latent code can be computed by employing a parameterized encoder . Similar to discrete-time analogues , we propose to employ a non-causal model as an encoder . | The paper describes a way of representing online handwriting data, typically seen as a discrete sequence, in a continuous space, using neural ODEs.
The learned representation allows sampling from and interpolation in the latent space. The results in the paper are compared (mostly qualitatively) to autoencoder-learned representations based on representing data as discrete sequences, Bézier curves, and implicit surface representations from differentiable geometry. | SP:ccef448cb87c0cac9153d0e5b3b266a6b794c277
SketchODE: Learning neural sketch representation in continuous time | 1 INTRODUCTION Drawing-based communications such as sketches , writing , and diagrams come naturally to humans and have been used in some form since ancient times . Modeling such data is becoming an increasingly important and topical challenge area for machine learning systems aiming to interpret and simulate human creative expression . Such chirographic structures are challenging to interpret and generate due to their complex and unstructured nature . However , recent progress has been strong , thanks to the advances in learning latent representations for sequences ( Bowman et al. , 2016 ; Graves , 2013 ; Srivastava et al. , 2015 ) . In particular , with the advent of variational sequence-to-sequence generative models ( Srivastava et al. , 2015 ; Bowman et al. , 2016 ) and the collection of large-scale datasets ( Gervais et al. , 2020 ; Ha & Eck , 2018 ; Ge et al. , 2021 ) , we have seen a surge of advancements ( Ha & Eck , 2018 ; Aksan et al. , 2020 ) in this direction . Nevertheless , a key missing link is the fact that drawing is intrinsically continuous in time , as is the resulting drawn artefact , when considered as a sequence rather than a raster image . The dominant approach to model free-hand sketches , popularized by Ha & Eck ( 2018 ) , has been to use a discrete-step recurrent network for capturing the latent process p ( h [ n ] |h [ n− 1 ] ) with discrete step n , and a complex observation model p ( s [ n ] | h [ n ] ) to explain local structure . The large body of approaches built upon this has a critical flaw : they ignore the very nature of the data , i.e . drawn structures are inherently continuous in time ( refer to Fig . 1 ) . While a few attempts have been made to use continuous-time constructs like Bézier curves ( Das et al. , 2020 ; 2021 ) , their inconvenient mathematical form affects the representational power as well as training dynamics .
The reason for such an absence of continuous-time models is the lack of fundamental tools for handling continuous-time data . Lately , however , the introduction of Neural Ordinary Differential Equations by Chen et al . ( 2018 ) , followed by numerous extensions ( Dupont et al. , 2019 ; Yildiz et al. , 2019 ; Kidger et al. , 2020 ) , has opened the possibility of building powerful models that can natively represent continuous-time data such as handwriting , sketches , etc . In this paper , we represent chirographic drawing data like handwriting , sketches , etc . as a continuous-time function s ( t ) and model the underlying latent process as a continuous-time latent function h ( t ) . Specifically , we propose a framework for capturing the ( potentially higher-order ) derivative of the latent process , d^K h ( t ) / dt^K , with parameterized neural networks . At inference , a solution trajectory ŝ ( t ) is computed from a given Initial Value Problem ( IVP ) that includes learned ODE dynamics and an initial hidden state h0 : = h ( t = 0 ) . An obvious advantage of the functional representation is its arbitrary spatial resolution , i.e . we can retrieve a structure at any spatial resolution by sampling the function at an appropriate rate . Moreover , with a functional representation , we can systematically upgrade the representation along time with additional properties ( e.g . line thickness , color , etc . ) . So far , parameterized ODE dynamics models have largely been treated as a “ replacement for ResNets ” ( Massaroli et al. , 2020 ) where the intermediate states are of little importance . While the capabilities of ODE models are beginning to be tested on time-series data ( Kidger et al. , 2020 ; Morrill et al. , 2021 ) , they still remain largely unexplored . Following the introduction of Neural ODEs by Chen et al . ( 2018 ) , methods have been proposed in order to regularize the solution trajectory to be as simple as possible ( Kelly et al. , 2020 ; Finlay et al. , 2020 ) .
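The arbitrary-resolution property described above is easy to illustrate once a drawing is a function s ( t ) rather than a fixed list of waypoints. The circle below is a hypothetical stand-in for a learned trajectory, not anything from the paper:

```python
import math

def s(t):
    # Stand-in for a learned continuous trajectory s(t): a unit circle over [0, 1].
    return (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

def sample(fn, n):
    # Retrieve the structure at any spatial resolution by sampling at rate n.
    return [fn(i / (n - 1)) for i in range(n)]

coarse = sample(s, 8)    # low-resolution polyline
fine = sample(s, 512)    # the same drawing, arbitrarily finer
```

Both lists trace the same underlying curve; only the sampling rate differs, which is exactly what a discrete waypoint sequence cannot offer.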
However , time-series data such as chirography are entirely the opposite . The dynamics model , and consequently the solution trajectory , should be as flexible as possible in order to learn the high degree of local and global variations with time often exhibited in such data . We increase the flexibility of Neural ODE models by introducing a new class of parameterized dynamics functions . Our experiments show that this is crucial in order to model chirographic data with satisfactory fidelity . We also introduce a new data augmentation method particularly geared towards ODE models and continuous-time sequences . Finally , we propose a deterministic autoencoder with a global latent variable z that exhibits some generative model properties by virtue of inherent continuity . Our final model , SketchODE , is similar to SketchRNN ( Ha & Eck , 2018 ) in terms of high-level design but lifts it to the realm of continuous time . More generally , our model is the first continuous-time Seq2Seq architecture . We explore some of the noteworthy features that differentiate SketchODE from discrete-time Seq2Seq analogues . An inductive bias for continuity : Discrete-time sequence models ( Ha & Eck , 2018 ) need to use their capacity to model both global structure and temporal continuity in the data . ODE models , on the other hand , have a strong intrinsic architectural bias towards temporal continuity . Sequences generated using a well-behaved ODE dynamics function are guaranteed to be continuous . Since they need not learn the continuity bias from scratch , ODE models are more data-efficient and are able to use the majority of their capacity for modelling higher-level structures . We even demonstrate that our SketchODE supports meaningful 1-shot learning when fine-tuned .
Deterministic Autoencoding with Structured Latent Space : A surprising property of Seq2Seq ODE models is the ability to perform latent space interpolation and sampling without the necessity of imposing a prior distribution p ( z ) on the variational posterior qφ ( z|s ) . This property is also a consequence of the latent-to-output mapping being continuous . 2 RELATED WORK . With the advent of recurrent neural networks , sequential time-series models have seen significant adaptation in a variety of applications . Video ( Srivastava et al. , 2015 ) and natural language ( Bowman et al. , 2016 ) were two early applications to have seen significant success . A different but important time-series modality is chirographic data like handwriting ( Graves , 2013 ) and sketches ( Ha & Eck , 2018 ) , which gained traction only recently . Subsequent developments also address data modalities that are not free-flowing , like Fonts ( Lopes et al. , 2019 ) and Icons ( Carlier et al. , 2020 ) , which require slightly different models . Mirroring developments in natural language processing ( Vaswani et al. , 2017 ) , Transformers are now replacing ( Carlier et al. , 2020 ) recurrent networks in such models . Following the seminal work of Ha & Eck ( 2018 ) , the primary representation for chirographic data has been discrete sequences of waypoints or tokens from specialized Domain Specific Languages like SVG ( Scalable Vector Graphics ) , with a minority continuing to make use of standard raster graphic architectures ( Ge et al. , 2021 ) . Overall , few studies have attempted to develop better representations using specialized functional forms , with Aksan et al . ( 2020 ) learning stroke embeddings and Das et al . ( 2020 ; 2021 ) directly modeling strokes as parametric Bézier curves . However , to the best of our knowledge , modelling chirographic data as generic functions has not been attempted so far . The Neural ODE ( Chen et al.
, 2018 ) framework provides a credible tool for natively modelling continuous-time functions using dynamical systems . Since their inception , Neural ODEs have sparked a wide range of discussions and developments ( Massaroli et al. , 2020 ) , mostly in the theoretical domain . Neural ODEs have been extended to work with latent states ( Dupont et al. , 2019 ) , to define generative models ( Song et al. , 2021 ) , and to have external data control ( Kidger et al. , 2020 ) . While practical training of ODE models remains a challenge , a significant amount of work has been dedicated to developing engineering techniques ( Finlay et al. , 2020 ; Grathwohl et al. , 2019 ; Kelly et al. , 2020 ; Poli et al. , 2020 ) for faster training convergence and better regularization . Despite flourishing theoretical interest , due to high training complexity , applied works have so far been limited to a few domains including Reinforcement Learning ( Du et al. , 2020 ) , reasoning in visual question answering ( Kim & Lee , 2019 ) , and physics-inspired system identification ( Lutter et al. , 2019 ; Greydanus et al. , 2019 ; Finzi et al. , 2020 ) . 3 NEURAL ORDINARY DIFFERENTIAL EQUATIONS ( NEURAL ODE ) . Neural ODE , proposed by Chen et al . ( 2018 ) , provides a framework for modelling an inherently continuous-time process by means of its time derivative . Given a continuous-time process s ( t ) ∈ R^d over a defined time interval [ t0 , t1 ] , we can approximate its underlying true dynamics by learning the parameters of a neural network FΘ ( · ) acting as a proxy for the same : ṡ ( t ) = FΘ ( t , s ) ∈ R^d . ( 1 ) The time derivative of a vector-valued function , i.e . ṡ ( t ) , defines a vector field over the state space R^d which guides the evolution of the process over time . Given the initial state s0 : = s ( t = t0 ) and the learned set of parameters Θ∗ , the process can be recovered by solving an Initial Value Problem ( IVP ) using any Differential Equation solver ( e.g .
RK4 method ) : s ( t ) = s0 + ∫_{t0}^{t} FΘ∗ ( t , s ) dt , ∀t ∈ [ t0 , t1 ] . Given a set of time-varying functions , one can learn dynamics FΘ capturing high-level concepts . The learning algorithm provided by Chen et al . ( 2018 ) makes it practical to train such models with constant memory cost ( w.r.t . the time horizon ) in both the forward and backward pass . 4 FRAMEWORK OVERVIEW . We borrow the notation of Section 3 and denote chirographic structures as continuous functions of time s ( t ) defined within [ t0 , t1 ] . The exact form of s ( t ) must include the running 2D coordinates over time and may include other properties like pen-state ( whether ink is visible at time t ) etc . In principle , one could directly model ṡ ( t ) with a Neural ODE as in Eq . 1 ( see Fig . 2 ) . However , such a model would possess low representational power resulting in underfitting , and furthermore would be incapable of representing self-overlaps naturally exhibited by chirographic data . 4.1 NEURAL ODE AS DECODER . Augmented ODE . We first augment ( Dupont et al. , 2019 ) the original state s ( t ) with the underlying hidden process h ( t ) in the same time interval . Unlike discrete-time analogues like recurrent networks , which model discrete transitions of the hidden state followed by an observation model p ( s|h ) , we model the “ co-evolution ” of the hidden process along with data using a dynamical system defined over the joint space a ( t ) : = [ s ( t ) , h ( t ) ]^T . Augmented ODEs ( Dupont et al. , 2019 ) improve representational power by lifting the vanilla ODE of Eq . 1 to a higher dimension , allowing the data trajectory to self-overlap in time . In addition , an augmented latent state h ( t ) provides a principled way to control the trajectory by simply solving the dynamics with a different initial condition h0 . Higher-Order Dynamics .
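The first-order IVP solve of Eq. 1 above can be sketched with a simple fixed-step integrator. The two-layer dynamics net, step count, and dimensions below are illustrative assumptions, not the authors' implementation (which would use a proper solver such as RK4):

```python
import numpy as np

def f_theta(t, s, W1, W2):
    # Illustrative two-layer MLP dynamics F_Theta(t, s); tanh keeps the field smooth.
    x = np.concatenate(([t], s))
    return W2 @ np.tanh(W1 @ x)

def solve_ivp_euler(s0, t0, t1, n_steps, W1, W2):
    # Fixed-step Euler integration of s_dot = F_Theta(t, s) over [t0, t1].
    s, t = np.array(s0, dtype=float), t0
    dt = (t1 - t0) / n_steps
    traj = [s.copy()]
    for _ in range(n_steps):
        s = s + dt * f_theta(t, s, W1, W2)
        t += dt
        traj.append(s.copy())
    return np.stack(traj)

rng = np.random.default_rng(0)
d, h = 2, 8  # state and hidden dimensions (arbitrary choices)
W1 = rng.normal(scale=0.5, size=(h, d + 1))
W2 = rng.normal(scale=0.5, size=(d, h))
traj = solve_ivp_euler([0.0, 0.0], 0.0, 1.0, 100, W1, W2)
```

Because the (smooth) dynamics are integrated rather than stepped discretely, the resulting trajectory is continuous by construction, which is the inductive bias the paper emphasizes.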
To further increase representational power , we capture the second-order derivative ä ( t ) with a parameterized neural network FΘ ( · ) as in Yildiz et al . ( 2019 ) . This formulation , however , leads to the requirement of an extra initial condition ȧ0 : = ȧ ( t = t0 ) : [ a ( t ) , ȧ ( t ) ]^T = [ a0 , ȧ0 ]^T + ∫_{t0}^{t} [ ȧ ( t ) , FΘ ( · ) ]^T dt . ( 2 ) Traditionally , the quantities a ( t ) and ȧ ( t ) are termed the “ position ” and “ velocity ” , respectively , of the dynamical system represented by ä ( t ) = FΘ ( · ) . Inspired by SketchRNN ( Ha & Eck , 2018 ) , we simplify the model by dropping the position component a ( t ) as a dependency of the dynamics function , leading to a dynamics of ä ( t ) = FΘ ( ȧ ( t ) , t ) . Conditioning . Our model so far is deterministic given a particular choice of initial condition . In order to generate different samples , we need to condition the decoding on a latent vector . Thus , we introduce the following architectural changes : 1 . We define a global latent vector z for each sample and compute the hidden part of initial conditions by projecting it through Ldec , as H0 : = [ h0 , ḣ0 ]^T = Ldec ( z ) . Without loss of generality , we fix the visible part of the initial state , i.e . s0 and ṡ0 , to be constants . 2 . We further include z directly as a fixed parameter to the dynamics function . This is a specific form of data-controlled dynamics ( Massaroli et al. , 2020 ) . Please see Appendix B for implementation details . The final second-order dynamical system represented by our decoder is therefore ä ( t ) = FΘ ( ȧ ( t ) , t , z ) . ( 3 ) Computing the forward pass ( Eq . 2 ) requires knowledge of the exact value of z . The latent code can be computed by employing a parameterized encoder . Similar to discrete-time analogues , we propose to employ a non-causal model as an encoder . | This paper proposes a new model, called SketchODE, for learning representations of sketches using neural ODEs.
Specifically, the authors parameterize hand-drawn strokes as solutions of ODEs and build an autoencoder-like framework for learning the vector fields of these ODEs from data. The decoder is modelled with a second-order neural ODE acting on an augmented state (which allows for self-overlapping trajectories, which is necessary for handwritten data) that includes the stroke itself as well as an underlying hidden state. The decoder is made conditional by passing a latent vector to the model, which is mapped both to the initial state of the ODE as well as the dynamics function of the ODE. Varying this latent vector and solving the ODE then gives rise to different handwritten patterns. The encoder is parameterized by a neural CDE which takes as input some ground truth trajectory and returns a new state which is mapped to the latent vector. The model is then trained end-to-end using a reconstruction loss between the ground truth and reconstructed trajectories. In addition, the authors show that two additional tricks can further improve performance: using sine activation functions in the MLPs parameterizing the vector fields as well as using Perlin noise to create (continuous) augmentations of the data. The authors then discuss and demonstrate various compelling properties of their model. This includes the existence of a latent space (which allows for interpolation of handwritten data) as well as one-shot learning capabilities for new classes of data. The main contributions of the paper in my eyes are then:
- Introducing an interesting continuous neural model of handwritten data
- Demonstrating compelling properties of this continuous model over traditional discrete models
- Introducing the VectorMNIST dataset, which I believe could be useful for other researchers
- Interesting experiments on various handwritten datasets
| SP:ccef448cb87c0cac9153d0e5b3b266a6b794c277
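The second-order, latent-conditioned decoder dynamics described in the paper above, ä ( t ) = FΘ ( ȧ ( t ) , t , z ), can be integrated by lifting to a first-order system over the pair ( a , ȧ ). The `accel` function below is a hypothetical stand-in for the learned network, not the paper's model:

```python
import numpy as np

def accel(a_dot, t, z):
    # Hypothetical dynamics F_Theta(a_dot, t, z). Note that the position a is
    # *not* an input, mirroring the simplification of Eq. 3.
    return np.tanh(a_dot + z) - 0.1 * a_dot + 0.05 * np.sin(t)

def solve_second_order(a0, a_dot0, z, t1=1.0, n=200):
    # Lift Eq. 2 to a first-order system over the stacked state (a, a_dot)
    # and integrate with fixed-step Euler.
    a, v = np.array(a0, dtype=float), np.array(a_dot0, dtype=float)
    dt = t1 / n
    t = 0.0
    traj = [a.copy()]
    for _ in range(n):
        a = a + dt * v                # position integrates velocity
        v = v + dt * accel(v, t, z)   # velocity integrates F_Theta
        t += dt
        traj.append(a.copy())
    return np.stack(traj)

z = np.array([0.3, -0.2])
traj_a = solve_second_order([0.0, 0.0], [1.0, 0.5], z)
traj_b = solve_second_order([0.0, 0.0], [1.0, 0.5], -z)  # new z, new trajectory
```

Varying only z while keeping the initial state fixed changes the generated trajectory, which is the conditioning mechanism the review describes.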
Autonomous Learning of Object-Centric Abstractions for High-Level Planning | 1 INTRODUCTION . Model-based methods are a promising approach to improving sample efficiency in reinforcement learning ( RL ) . However , they require the agent to either learn a highly detailed model—which is infeasible for sufficiently complex problems ( Ho et al. , 2019 ) —or to build a compact , high-level model that abstracts away unimportant details while retaining only the information required to plan . This raises the question of how best to build such an abstract model . While recent advances have shown how to learn models of complex environments , they lack theoretical guarantees and require millions of sample interactions ( Schrittwieser et al. , 2020 ; Hafner et al. , 2021 ) . Fortunately , recent work has shown how to learn an abstraction of a task that is provably suitable for planning with a given set of skills ( Konidaris et al. , 2018 ) . However , these representations are highly task-specific and must be relearned for any new task , or even any small change to an existing task . This makes them fatally impractical , especially for agents that must solve multiple complex tasks . We extend these methods by incorporating additional structure—namely , that the world consists of objects , and that similar objects are common amongst tasks . For example , when we play video games , we solve the game quickly by leveraging our existing knowledge of objects and their affordances ( such as doors and ladders which occur across multiple levels ) ( Dubey et al. , 2018 ) . Similarly , robot manipulation tasks often use the same robot and a similar set of physical objects in different configurations . This can substantially improve learning efficiency , because an object-centric model can be reused wherever that same object appears ( within the same task , or across different tasks ) and can also be generalised across objects that behave similarly—object types . 
We assume that the agent is able to individuate the objects in its environment , and propose a framework for building portable object-centric abstractions given only the data collected by executing high-level skills . These abstractions specify both the abstract object attributes that support high-level planning , and an object-relative lifted transition model that can be instantiated in a new task . This reduces the number of samples required to learn a new task by allowing the agent to avoid relearning the dynamics of previously seen object types . We make the following contributions : under the assumption that the agent can individuate objects in its environment , we develop a framework for building portable , object-centric abstractions , and for estimating object types , given only the data collected by executing high-level skills . We also show how to integrate problem-specific information to instantiate these representations in a new task . This reduces the samples required to learn a new task by allowing the agent to avoid relearning the dynamics of previously-seen objects . We demonstrate our approach on a Blocks World domain and a 2D crafting domain , and then apply it to a series of Minecraft tasks where an agent autonomously learns an abstract representation of a high-dimensional task from raw pixel input . In particular , we use the probabilistic planning domain definition language ( PPDDL ) ( Younes & Littman , 2004 ) to represent our learned abstraction , which allows for the use of existing task-level planners . Our results show that an agent can leverage these portable abstractions to learn a representation of new Minecraft tasks using a diminishing number of samples , allowing it to quickly construct plans composed of hundreds of low-level actions ( see footnote 1 ) . 2 BACKGROUND .
We assume that tasks are modelled as semi-Markov decision processes M = 〈S , O , T , R〉 where ( i ) S is the state space ; ( ii ) O ( s ) is the set of temporally-extended actions known as options available at state s ; ( iii ) T describes the transition dynamics , specifying the probability of arriving in state s′ after option o is executed from s ; and ( iv ) R specifies the reward for reaching state s′ after executing option o in state s. An option o is defined by the tuple 〈Io , πo , βo〉 , where Io is the initiation set specifying the states where the option can be executed , πo is the option policy which specifies the action to execute , and βo the probability of the option terminating in each state ( Sutton et al. , 1999 ) . We adopt the object-centric formulation from Ugur & Piater ( 2015 ) : in a task with n objects , the state is represented by the set { fa , f1 , f2 , . . . , fn } , where fa is a vector of the agent ’ s features and fi is a vector of features particular to object i . Note that the feature vector describing each object can itself be arbitrarily complex , such as an image or voxel grid—in one of our domains , we use pixels . Our state space representation assumes that individual objects have already been factored into their constituent low-level attributes . Practically , this means that the agent is aware that the world consists of objects , but is unaware of what the objects are , or whether multiple instantiations of the same object are present . It is also easy to see that different tasks will have differing numbers of objects with potentially arbitrary ordering ; any learned abstract representation should be agnostic to this . 2.1 STATE ABSTRACTIONS FOR PLANNING . We intend to learn an abstract representation suitable for planning . Prior work has shown that a sound and complete abstract representation must necessarily be able to estimate the set of initiating and terminating states for each option ( Konidaris et al. , 2018 ) .
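The option tuple 〈Io , πo , βo〉 and the object-centric state above can be written down directly; the toy "walk to door" option below is an illustrative assumption, not from the paper:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Object-centric state: {"agent": f_a, "door": f_1, ...}, one feature vector per object.
State = Dict[str, List[float]]

@dataclass
class Option:
    # <I_o, pi_o, beta_o>: initiation set, policy, termination probability.
    initiation: Callable[[State], bool]     # I_o: can the option start here?
    policy: Callable[[State], int]          # pi_o: low-level action to execute
    termination: Callable[[State], float]   # beta_o: P(terminate | state)

# Toy option over a 1-D agent feature: walk right until reaching the door at x = 5.
walk_to_door = Option(
    initiation=lambda s: s["agent"][0] < 5.0,
    policy=lambda s: 1,  # action 1 = step right
    termination=lambda s: 1.0 if s["agent"][0] >= 5.0 else 0.0,
)
```

Because the state is keyed by object rather than flattened into one vector, the same option definition can be instantiated in tasks with different numbers and orderings of objects.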
In classical planning , this corresponds to the precondition and effect of each high-level action operator ( McDermott et al. , 1998 ) . The precondition is defined as Pre ( o ) = Pr ( s ∈ Io ) , which is a probabilistic classifier that expresses the probability that option o can be executed at state s. Similarly , the effect or image represents the distribution of states an agent may find itself in after executing an option from states drawn from some starting distribution ( Konidaris et al. , 2018 ) . Since the precondition is a probabilistic classifier and the effect is a density estimator , they can be learned directly from option execution data . We can use preconditions and effects to evaluate the probability of a sequence of options—a plan— executing successfully . Given an initial state distribution , the precondition is used to evaluate the probability that the first option can execute , and the effects are used to determine the resulting state distribution . We can apply the same logic to the subsequent options to compute the probability of the entire plan executing successfully . It follows that these representations are sufficient for evaluating the probability of successfully executing any plan ( Konidaris et al. , 2018 ) . Partitioned Options For large or continuous state spaces , estimating Pr ( s′ | s , o ) is difficult because the worst case requires learning a distribution conditioned on every state . However , if we assume that terminating states are independent of starting states , we can make the simplification Pr ( s′ | s , o ) = Pr ( s′ | o ) . These subgoal options ( Precup , 2000 ) are not overly restrictive , since they refer to options that drive an agent to some set of states with high reliability . Nonetheless , many options are not subgoal . It is often possible , however , to partition an option ’ s initiation set into a finite number of subsets , so that it is approximately subgoal when executed from any of the individual subsets . 
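The plan-evaluation procedure above (check each option's precondition, then propagate through its effect distribution) can be sketched as a Monte-Carlo estimate; the one-dimensional start distribution and options below are hypothetical:

```python
import random

def plan_success_prob(init_sampler, plan, n=2000, seed=0):
    # Monte-Carlo estimate of P(plan executes): for each rollout, sample a start
    # state, check each option's precondition, then sample its effect
    # distribution to get the next state (subgoal assumption: the next state
    # does not depend on where the option started).
    rng = random.Random(seed)
    successes = 0
    for _ in range(n):
        s = init_sampler(rng)
        ok = True
        for precondition, effect_sampler in plan:
            if not precondition(s):
                ok = False
                break
            s = effect_sampler(rng)
        successes += ok
    return successes / n

# Toy example: start near 0; option A needs s < 1 and lands near 2;
# option B needs s > 1.5 and lands near 4.
init = lambda rng: rng.gauss(0.0, 0.3)
opt_a = (lambda s: s < 1.0, lambda rng: rng.gauss(2.0, 0.1))
opt_b = (lambda s: s > 1.5, lambda rng: rng.gauss(4.0, 0.1))
p_forward = plan_success_prob(init, [opt_a, opt_b])   # feasible ordering
p_reverse = plan_success_prob(init, [opt_b, opt_a])   # infeasible ordering
```

The feasible ordering succeeds almost always, while the reversed plan fails at its first precondition, illustrating why preconditions and effects suffice to rank plans.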
That is , we partition an option o ’ s start states into finite regions C such that Pr ( s′ | s , o , c ) ≈ Pr ( s′ | o , c ) , c ∈ C ( Konidaris et al. , 2018 ) . ( Footnote 1 : more results and videos can be found at https://sites.google.com/view/mine-pddl . ) Factors We adopt the frame assumption , which states that aspects of the world not explicitly affected by an agent ’ s action remain the same ( Pasula et al. , 2004 ) . Prior work leverages this to learn a factored or STRIPS-like ( Fikes & Nilsson , 1971 ) representation by computing the option ’ s mask : the state variables explicitly changed by the option ( Konidaris et al. , 2018 ) . In our formulation , the state space is already factorised into its constituent objects , so computing the mask amounts to determining which objects are affected by a given option . 3 LEARNING OBJECT-CENTRIC REPRESENTATIONS . Although prior work ( Konidaris et al. , 2018 ) allows an agent to autonomously learn an abstract representation supporting fast task-level planning , that representation lacks generalisability—since the symbols are distributions over states in the current task , they cannot be reused in new ones . This approach can be fatally expensive in complex domains , where learning an abstract model may be as hard as solving a task from scratch , and is therefore pointless if we only want to solve a single task . However , an agent able to reuse aspects of its learned representation can amortise the cost of learning over many interactions , accelerating learning in later tasks . The key question is what forms of representation support transfer in this way . We now introduce an object-centric generalisation of a learned symbolic representation that admits transfer in tasks when the state space representation consists of features centred on objects in the environment .
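Computing an option's mask by comparing per-object features across observed transitions can be sketched as below; the toy "open door" transitions and tolerance are illustrative assumptions:

```python
def option_mask(transitions, tol=1e-6):
    # An object is in the option's mask if its features change in any observed
    # (before, after) transition; by the frame assumption, everything else
    # stays the same.
    changed = set()
    for before, after in transitions:
        for obj, feats in before.items():
            if any(abs(a - b) > tol for a, b in zip(feats, after[obj])):
                changed.add(obj)
    return changed

# Toy "open door" executions: the door feature flips; agent and key are untouched.
transitions = [
    ({"agent": [0.0], "door": [0.0], "key": [1.0]},
     {"agent": [0.0], "door": [1.0], "key": [1.0]}),
    ({"agent": [0.2], "door": [0.0], "key": [1.0]},
     {"agent": [0.2], "door": [1.0], "key": [1.0]}),
]
mask = option_mask(transitions)
```

Because the state is already factored into objects, the mask is a set of object names rather than a set of raw state-variable indices.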
This is common in robotics , where each object is often isolated from the environment and represented as a point cloud or subsequently a voxelised occupancy grid . Our approach builds on a significant amount of machinery , involving clustering , feature selection , classification and density estimation . We summarise our proposed approach in Figure 1 and provide a high-level description in the remainder of this section , but provide pseudocode and specific practical details in the appendix . 3.1 GENERATING A PROPOSITIONAL MODEL ( STEPS 1–2 ) ( AS IN KONIDARIS ET AL. , 2018 ) . The agent begins by collecting transition data using an exploration policy . The first step is to partition the options into approximately subgoal options . For each option o and empirical sets of initial and terminating states Ĩo and β̃o , the agent partitions Ĩo into a number of disjoint subsets , such that for each subset K ⊆ Ĩo , we have Pr ( s′ | si , o ) = Pr ( s′ | sj , o ) ∀si , sj ∈ K , s′ ∈ β̃o . In other words , the effect distribution of the option is identical , independent of the state in K from which it was executed . In practice , this can be approximated by first clustering state transition samples based on terminating states , and then assigning each cluster to a partition . Finally , pairs of partitions whose initiating states overlap are merged to handle probabilistic effects ( Konidaris et al. , 2018 ) . The agent next learns a precondition classifier for each approximately partitioned option . For each partition , the initiating states are used as positive examples , and all other states are treated as negative ones . A feature selection procedure next determines which objects are relevant to the precondition , and a classifier is fit using only those objects . A density estimator is then used to estimate the effect distribution for each partitioned option . The agent learns distributions over only the objects affected by the option , learning one estimator per object . 
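The partitioning step described above (cluster transitions by their terminating states, then group the corresponding start states) can be sketched with a small k-means; the one-dimensional toy option and cluster count are illustrative assumptions:

```python
import numpy as np

def partition_option(starts, ends, n_clusters=2, iters=20, seed=0):
    # Approximate subgoal partitioning: cluster transitions by *terminating*
    # state (k-means here; any clusterer works), then group the corresponding
    # start states into one partition per cluster.
    rng = np.random.default_rng(seed)
    ends = np.asarray(ends, dtype=float)
    centers = ends[rng.choice(len(ends), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(ends[:, None] - centers[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = ends[labels == k].mean()
    return {k: [s for s, l in zip(starts, labels) if l == k]
            for k in range(n_clusters)}

# Toy option with two distinct outcomes: from "left" starts it terminates
# near 0, from "right" starts near 10.
starts = [0.1, 0.2, 0.3, 5.1, 5.2, 5.3]
ends = [0.0, 0.1, -0.1, 10.0, 10.1, 9.9]
parts = partition_option(starts, ends)
```

Each resulting partition is approximately subgoal: within a partition, the terminating-state distribution no longer depends on the particular start state.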
Together these effect distributions form our propositional PPDDL vocabulary V . To construct a PPDDL representation for each partitioned option , we must specify both the precondition and effects using the state distributions ( propositions ) in V . The effects are directly specified using these distributions , and so pose no problem . However , the estimated precondition is a classifier rather than a state distribution . The agent must therefore iterate through all possible effect distributions to compute whether the skill can be executed there . To do so , we denote P as some set of propositions in V , and G ( s ; P ) as the probability that a low-level state s is drawn from the conjunction of propositions in P . Then , for an option with learned classifier Îo , we can represent the precondition with every P ∈ ℘ ( V ) such that ∫ S Îo ( s ) G ( s ; P ) ds > 0 , where ℘ ( V ) denotes the powerset of V . In other words , the agent considers every combination of effect state distributions and draws samples from their conjunction . If these samples are classified as positive by Îo , then the conjunction of P is used to represent the precondition . The preconditions and effects are now specified using distributions over state variables , where each distribution is a proposition—this is our PPDDL representation , which is sound and complete for planning . | This paper introduces a method for learning symbolic, object-centric abstractions from object-factored environment observations for long-term planning tasks. It extends the symbolic representation learning framework by Konidaris et al. (2018) by factoring the state into objects, learning object type abstractions, and “lifting” the model to operate on these object-centric, typed abstractions. The method is successfully demonstrated on three environments of complexity ranging from simple per-object discrete feature vectors to a Minecraft long-horizon planning task from (object-factored) pixel observations. 
| SP:8177c43096a6a35c7f64ccf714ae038b1ea4db7a |
Autonomous Learning of Object-Centric Abstractions for High-Level Planning | 1 INTRODUCTION . Model-based methods are a promising approach to improving sample efficiency in reinforcement learning ( RL ) . However , they require the agent to either learn a highly detailed model—which is infeasible for sufficiently complex problems ( Ho et al. , 2019 ) —or to build a compact , high-level model that abstracts away unimportant details while retaining only the information required to plan . This raises the question of how best to build such an abstract model . While recent advances have shown how to learn models of complex environments , they lack theoretical guarantees and require millions of sample interactions ( Schrittwieser et al. , 2020 ; Hafner et al. , 2021 ) . Fortunately , recent work has shown how to learn an abstraction of a task that is provably suitable for planning with a given set of skills ( Konidaris et al. , 2018 ) . However , these representations are highly task-specific and must be relearned for any new task , or even any small change to an existing task . This makes them fatally impractical , especially for agents that must solve multiple complex tasks . We extend these methods by incorporating additional structure—namely , that the world consists of objects , and that similar objects are common amongst tasks . For example , when we play video games , we solve the game quickly by leveraging our existing knowledge of objects and their affordances ( such as doors and ladders which occur across multiple levels ) ( Dubey et al. , 2018 ) . Similarly , robot manipulation tasks often use the same robot and a similar set of physical objects in different configurations . This can substantially improve learning efficiency , because an object-centric model can be reused wherever that same object appears ( within the same task , or across different tasks ) and can also be generalised across objects that behave similarly—object types . 
We assume that the agent is able to individuate the objects in its environment , and propose a framework for building portable object-centric abstractions given only the data collected by executing high-level skills . These abstractions specify both the abstract object attributes that support high-level planning , and an object-relative lifted transition model that can be instantiated in a new task . This reduces the number of samples required to learn a new task by allowing the agent to avoid relearning the dynamics of previously seen object types . We make the following contributions : under the assumption that the agent can individuate objects in its environment , we develop a framework for building portable , object-centric abstractions , and for estimating object types , given only the data collected by executing high-level skills . We also show how to integrate problem-specific information to instantiate these representations in a new task . This reduces the samples required to learn a new task by allowing the agent to avoid relearning the dynamics of previously-seen objects . We demonstrate our approach on a Blocks World domain and a 2D crafting domain , and then apply it to a series of Minecraft tasks where an agent autonomously learns an abstract representation of a high-dimensional task from raw pixel input . In particular , we use the probabilistic planning domain definition language ( PPDDL ) ( Younes & Littman , 2004 ) to represent our learned abstraction , which allows for the use of existing task-level planners . Our results show that an agent can leverage these portable abstractions to learn a representation of new Minecraft tasks using a diminishing number of samples , allowing it to quickly construct plans composed of hundreds of low-level actions . 2 BACKGROUND . 
We assume that tasks are modelled as semi-Markov decision processes M = 〈S , O , T , R〉 where ( i ) S is the state space ; ( ii ) O ( s ) is the set of temporally-extended actions known as options available at state s ; ( iii ) T describes the transition dynamics , specifying the probability of arriving in state s′ after option o is executed from s ; and ( iv ) R specifies the reward for reaching state s′ after executing option o in state s. An option o is defined by the tuple 〈Io , πo , βo〉 , where Io is the initiation set specifying the states where the option can be executed , πo is the option policy which specifies the action to execute , and βo is the probability of the option terminating in each state ( Sutton et al. , 1999 ) . We adopt the object-centric formulation from Ugur & Piater ( 2015 ) : in a task with n objects , the state is represented by the set { fa , f1 , f2 , . . . , fn } , where fa is a vector of the agent ’ s features and fi is a vector of features particular to object i . Note that the feature vector describing each object can itself be arbitrarily complex , such as an image or voxel grid—in one of our domains , we use pixels . Our state space representation assumes that individual objects have already been factored into their constituent low-level attributes . Practically , this means that the agent is aware that the world consists of objects , but is unaware of what the objects are , or whether multiple instantiations of the same object are present . It is also easy to see that different tasks will have differing numbers of objects with potentially arbitrary ordering ; any learned abstract representation should be agnostic to this . 2.1 STATE ABSTRACTIONS FOR PLANNING . We intend to learn an abstract representation suitable for planning . Prior work has shown that a sound and complete abstract representation must necessarily be able to estimate the set of initiating and terminating states for each option ( Konidaris et al. , 2018 ) . 
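To make the preceding definitions concrete, the object-factored state { fa , f1 , . . . , fn } and the option tuple 〈Io , πo , βo〉 can be sketched as simple data structures. This is an illustrative sketch only; all names and the toy option are ours, not the paper's.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ObjectCentricState:
    agent_features: list        # f_a : the agent's own feature vector
    object_features: list       # [f_1, ..., f_n]; count and order may differ per task

@dataclass
class Option:
    initiation: Callable        # I_o : predicate over states where the option can run
    policy: Callable            # pi_o : maps states to low-level actions
    termination: Callable       # beta_o : probability of terminating in a state

# a toy option that can run anywhere, always picks action 0, and always terminates
noop = Option(initiation=lambda s: True,
              policy=lambda s: 0,
              termination=lambda s: 1.0)
s = ObjectCentricState(agent_features=[0.0], object_features=[[1.0], [2.0]])
print(noop.initiation(s), len(s.object_features))  # → True 2
```

Note that each `f_i` here is a flat list, but, as the text says, it could equally be an image or a voxel grid.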
In classical planning , this corresponds to the precondition and effect of each high-level action operator ( McDermott et al. , 1998 ) . The precondition is defined as Pre ( o ) = Pr ( s ∈ Io ) , which is a probabilistic classifier that expresses the probability that option o can be executed at state s. Similarly , the effect or image represents the distribution of states an agent may find itself in after executing an option from states drawn from some starting distribution ( Konidaris et al. , 2018 ) . Since the precondition is a probabilistic classifier and the effect is a density estimator , they can be learned directly from option execution data . We can use preconditions and effects to evaluate the probability of a sequence of options—a plan— executing successfully . Given an initial state distribution , the precondition is used to evaluate the probability that the first option can execute , and the effects are used to determine the resulting state distribution . We can apply the same logic to the subsequent options to compute the probability of the entire plan executing successfully . It follows that these representations are sufficient for evaluating the probability of successfully executing any plan ( Konidaris et al. , 2018 ) . Partitioned Options For large or continuous state spaces , estimating Pr ( s′ | s , o ) is difficult because the worst case requires learning a distribution conditioned on every state . However , if we assume that terminating states are independent of starting states , we can make the simplification Pr ( s′ | s , o ) = Pr ( s′ | o ) . These subgoal options ( Precup , 2000 ) are not overly restrictive , since they refer to options that drive an agent to some set of states with high reliability . Nonetheless , many options are not subgoal . It is often possible , however , to partition an option ’ s initiation set into a finite number of subsets , so that it is approximately subgoal when executed from any of the individual subsets . 
That is , we partition an option o ’ s start states into finite regions C such that Pr ( s′ | s , o , c ) ≈ Pr ( s′ | o , c ) , c ∈ C ( Konidaris et al. , 2018 ) . ( More results and videos can be found at https://sites.google.com/view/mine-pddl ) Factors We adopt the frame assumption , which states that aspects of the world not explicitly affected by an agent ’ s action remain the same ( Pasula et al. , 2004 ) . Prior work leverages this to learn a factored or STRIPS-like ( Fikes & Nilsson , 1971 ) representation by computing the option ’ s mask : the state variables explicitly changed by the option ( Konidaris et al. , 2018 ) . In our formulation , the state space is already factorised into its constituent objects , so computing the mask amounts to determining which objects are affected by a given option . 3 LEARNING OBJECT-CENTRIC REPRESENTATIONS . Although prior work ( Konidaris et al. , 2018 ) allows an agent to autonomously learn an abstract representation supporting fast task-level planning , that representation lacks generalisability—since the symbols are distributions over states in the current task , they cannot be reused in new ones . This approach can be fatally expensive in complex domains , where learning an abstract model may be as hard as solving a task from scratch , and is therefore pointless if we only want to solve a single task . However , an agent able to reuse aspects of its learned representation can amortise the cost of learning over many interactions , accelerating learning in later tasks . The key question is what forms of representation support transfer in this way . We now introduce an object-centric generalisation of a learned symbolic representation that admits transfer in tasks when the state space representation consists of features centred on objects in the environment . 
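The plan-evaluation scheme from Section 2.1 — chain each option's precondition probability with the state distribution produced by the previous option's effect — can be sketched as a Monte-Carlo estimate. Everything below is a toy stand-in (a 1-D chain world), not the paper's implementation.

```python
def plan_success_prob(init_samples, plan):
    """plan: list of (precondition, effect_sampler) pairs; subgoal options assumed."""
    prob, samples = 1.0, list(init_samples)
    for precond, effect in plan:
        ok = sum(precond(s) for s in samples) / len(samples)  # P(option executable here)
        prob *= ok
        if prob == 0.0:
            return 0.0
        samples = [effect() for _ in samples]  # subgoal assumption: s' independent of s
    return prob

# toy 1-D chain: option i requires state >= i and drives the state to i + 1
plan = [(lambda s, i=i: s >= i, lambda i=i: i + 1) for i in range(3)]
print(plan_success_prob([0, 0, 0], plan))  # → 1.0
```

The subgoal assumption shows up in the line that resamples `samples` from the effect distribution alone, discarding the start states — exactly the simplification Pr(s′ | s, o) = Pr(s′ | o) discussed above.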
This is common in robotics , where each object is often isolated from the environment and represented as a point cloud or subsequently a voxelised occupancy grid . Our approach builds on a significant amount of machinery , involving clustering , feature selection , classification and density estimation . We summarise our proposed approach in Figure 1 and provide a high-level description in the remainder of this section , but provide pseudocode and specific practical details in the appendix . 3.1 GENERATING A PROPOSITIONAL MODEL ( STEPS 1–2 ) ( AS IN KONIDARIS ET AL. , 2018 ) . The agent begins by collecting transition data using an exploration policy . The first step is to partition the options into approximately subgoal options . For each option o and empirical sets of initial and terminating states Ĩo and β̃o , the agent partitions Ĩo into a number of disjoint subsets , such that for each subset K ⊆ Ĩo , we have Pr ( s′ | si , o ) = Pr ( s′ | sj , o ) ∀si , sj ∈ K , s′ ∈ β̃o . In other words , the effect distribution of the option is identical , independent of the state in K from which it was executed . In practice , this can be approximated by first clustering state transition samples based on terminating states , and then assigning each cluster to a partition . Finally , pairs of partitions whose initiating states overlap are merged to handle probabilistic effects ( Konidaris et al. , 2018 ) . The agent next learns a precondition classifier for each approximately partitioned option . For each partition , the initiating states are used as positive examples , and all other states are treated as negative ones . A feature selection procedure next determines which objects are relevant to the precondition , and a classifier is fit using only those objects . A density estimator is then used to estimate the effect distribution for each partitioned option . The agent learns distributions over only the objects affected by the option , learning one estimator per object . 
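The partitioning step above — cluster transition samples on terminating states, then merge partitions whose initiation states overlap — can be approximated as follows. This is a deliberately crude sketch: grouping terminal states by rounding stands in for a real clustering algorithm (e.g. DBSCAN), and `overlap` is a hypothetical predicate on initiation-state sets.

```python
from collections import defaultdict

def partition_option(transitions, overlap):
    """transitions: list of (s_start, s_end); overlap: predicate on two start-state sets."""
    clusters = defaultdict(list)
    for s, s_end in transitions:
        clusters[round(s_end)].append(s)        # cluster on the *terminating* state
    parts = [set(starts) for starts in clusters.values()]
    merged = []                                  # merge overlapping initiation sets
    for p in parts:
        for q in merged:
            if overlap(p, q):
                q |= p                           # handles probabilistic effects
                break
        else:
            merged.append(p)
    return merged

ts = [(0, 10.1), (1, 9.9), (5, 20.0)]            # two effect clusters: ~10 and ~20
print(len(partition_option(ts, lambda a, b: bool(a & b))))  # → 2
```

Starting states 0 and 1 both terminate near 10, so they land in one partition; starting state 5 terminates near 20 and forms its own. If the same start state appeared in both clusters, the merge step would fold them into a single probabilistic-effect partition.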
Together these effect distributions form our propositional PPDDL vocabulary V . To construct a PPDDL representation for each partitioned option , we must specify both the precondition and effects using the state distributions ( propositions ) in V . The effects are directly specified using these distributions , and so pose no problem . However , the estimated precondition is a classifier rather than a state distribution . The agent must therefore iterate through all possible effect distributions to compute whether the skill can be executed there . To do so , we denote P as some set of propositions in V , and G ( s ; P ) as the probability that a low-level state s is drawn from the conjunction of propositions in P . Then , for an option with learned classifier Îo , we can represent the precondition with every P ∈ ℘ ( V ) such that ∫S Îo ( s ) G ( s ; P ) ds > 0 , where ℘ ( V ) denotes the powerset of V . In other words , the agent considers every combination of effect state distributions and draws samples from their conjunction . If these samples are classified as positive by Îo , then the conjunction of P is used to represent the precondition . The preconditions and effects are now specified using distributions over state variables , where each distribution is a proposition—this is our PPDDL representation , which is sound and complete for planning . | This paper proposes a method for learning an object-centric symbolic representation of an environment that allows for planning. It extends the framework introduced in Konidaris et al. (2018), which learns a symbolic representation of an environment that can be expressed using a PDDL for planning. Importantly, the method presented in Konidaris et al. (2018) does not make assumptions about structure in the environment and how different states may relate to one another. As a consequence, the learned representations are highly task-specific and transfer/generalization to new tasks and across states is not possible. 
In this paper it is assumed that the world consists of objects and that similar objects are common across tasks. This enables object-centric representations (and the associated model) to be reused across tasks, and organizing objects according to 'object types' that behave similarly when acted upon. This is expected to improve learning efficiency and generalization/transfer to new tasks. The method presented here incorporates techniques for merging objects into object types, and integrating problem-specific information to ground these representations into specific tasks. The working of these techniques is validated experimentally, and it is shown how greater sample efficiency can be obtained when generalizing to new tasks. | SP:8177c43096a6a35c7f64ccf714ae038b1ea4db7a |
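The precondition-grounding step described above — keep every P ∈ ℘(V) whose conjunction yields samples the classifier Îo accepts, i.e. a Monte-Carlo test of ∫S Îo(s) G(s; P) ds > 0 — can be sketched as follows. The vocabulary, samplers, and classifier are toy stand-ins, and the independent per-proposition sampling is a crude approximation of sampling from a true conjunction.

```python
from itertools import chain, combinations
import random
random.seed(1)

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(1, len(items) + 1))

def ground_precondition(vocab, classifier, n=50):
    """vocab: {proposition: sampler}; keep P if any conjoined sample is accepted."""
    keep = []
    for P in powerset(list(vocab)):
        # crude conjunction: sample each proposition's marginal independently
        samples = [{p: vocab[p]() for p in P} for _ in range(n)]
        if any(classifier(s) for s in samples):  # Monte-Carlo test of the integral > 0
            keep.append(set(P))
    return keep

# toy vocabulary of two effect distributions and a classifier that needs the key
vocab = {'at_door': lambda: random.gauss(0, 1), 'has_key': lambda: 1.0}
classifier = lambda s: 'has_key' in s
res = ground_precondition(vocab, classifier)
print([sorted(p) for p in res])  # → [['has_key'], ['at_door', 'has_key']]
```

As the text notes, enumerating ℘(V) is exponential in |V|; this sketch is only viable for small vocabularies.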
ED2: An Environment Dynamics Decomposition Framework for World Model Construction | 1 INTRODUCTION . Reinforcement Learning ( RL ) is a general learning framework for solving sequential decision-making problems and has made significant progress in many fields ( Mnih et al. , 2015 ; Silver et al. , 2016 ; Vinyals et al. , 2019 ; Schrittwieser et al. , 2019 ) . In general , RL methods can be divided into two categories according to whether a world model is constructed for policy derivation : model-free RL ( MFRL ) and model-based RL ( MBRL ) . MFRL methods train the policy by directly interacting with the environment , which results in good asymptotic performance but low sample efficiency . By contrast , MBRL methods improve the sample efficiency by modeling the environment , but often achieve limited asymptotic performance and suffer from model error ( Lai et al. , 2020 ; Kaiser et al. , 2020 ) . Existing MBRL algorithms can be divided into four categories according to the paradigm they follow : the first category focuses on generating imaginary data with the world model and training the policy on these data via MFRL algorithms ( Kidambi et al. , 2020 ; Yu et al. , 2020 ) ; the second category leverages the differentiability of the world model and generates differentiable trajectories for policy optimization ( Deisenroth & Rasmussen , 2011 ; Levine & Koltun , 2013 ; Zhu et al. , 2020 ) ; the third category aims to obtain an accurate value function by generating imaginations for temporal difference ( TD ) target calculation ( Buckman et al. , 2018 ; Feinberg et al. , 2018 ) ; the last category focuses on reducing the computational cost of policy derivation by combining optimal control algorithms ( e.g. , model predictive control ) with the learned world models ( Chua et al. , 2018 ; Okada & Taniguchi , 2019 ; Argenson & Dulac-Arnold , 2020 ) . 
Regardless of paradigm , the performance of all existing MBRL algorithms depends on the accuracy of the world model : the more accurate the world model is , the more reliable the generated data are , and ultimately the better the policy performance that can be achieved . Therefore , improving world model accuracy is critical in MBRL . To this end , various techniques have been proposed . For example , rather than directly predicting the next state , some works construct a world model for state-change prediction ( Luo et al. , 2019 ; Kurutach et al. , 2018 ) . ( Our code is open source and available at https://github.com/ED2-source-code/ED2 . ) Model ensembles are also widely used in model construction for uncertainty estimation , which provides more reliable predictions ( Janner et al. , 2019 ; Pan et al. , 2020 ) . To reduce the model error in long trajectory generation , optimizing multi-step prediction errors is also an effective technique ( Hafner et al. , 2019 ) . However , these techniques improve the environment modeling in a black-box way , which ignores the inner decomposed structure of the environment dynamics . For example , Figure 1 ( a ) shows the Cheetah task from the DeepMind Control ( DMC ) Suite , where the dynamics can be decomposed in various ways . Figure 1 ( b ) shows various decompositions of the dynamics : according to the role of the sub-dynamics , we can decompose it into { thigh , shin , foot } ; alternatively , according to the position of the sub-dynamics , we can decompose it into { back , front } . Figure 1 ( d ) shows that whether we decompose the Cheetah task according to role or position , modeling each decomposed sub-dynamics separately can significantly reduce the model error of an existing MBRL algorithm ( e.g. , Dreamer ( Hafner et al. , 2020 ) ) . 
Inspired by the above example , we propose environment dynamics decomposition ( ED2 ) , a novel world model construction framework that models the dynamics in a decomposing fashion . ED2 contains two main components : sub-dynamics discovery ( SD2 ) and dynamics decomposition prediction ( D2P ) . SD2 is proposed to decompose the dynamics into multiple sub-dynamics ; it can be flexibly designed , and we provide three alternative approaches : complete decomposition , human prior , and a clustering-based method . D2P is proposed to construct the world model from the sub-dynamics discovered by SD2 , modeling each sub-dynamics separately in an end-to-end training manner . ED2 is orthogonal to existing MBRL algorithms and can be used as a backbone that easily combines with any MBRL algorithm . Experiments show that ED2 improves the model accuracy and boosts performance significantly when combined with existing MBRL algorithms . 2 BACKGROUND . 2.1 REINFORCEMENT LEARNING . Given an environment , we can define a finite-horizon partially observable Markov decision process ( POMDP ) as ( S , A , R , P , γ , O , Ω , T ) , where S ⊆ Rn is the state space , A ⊆ Rm is the action space , R : S × A → R denotes the reward function , P : S × A → S denotes the environment dynamics , and γ is the discount factor . The agent receives an observation o ∈ Ω , which contains partial information about the state s ∈ S . O is the observation function , which maps states to probability distributions over observations . The decision process length is denoted as T . Let η denote the expected return of a policy π over the initial state distribution ρ0 . The goal of an RL agent is to find the optimal policy π∗ which maximizes the expected return : π∗ = arg max_π η [ π ] = arg max_π E_π [ ∑_{t=0}^{T} γ^t R ( s_t , a_t ) ] , where s0 ∼ ρ0 , ot ∼ O ( ·|st ) , at ∼ π ( ·|ot ) , st+1 ∼ P ( ·|st , at ) . If the environment is fully observable , i.e. 
, Ω = S and O is an identity function , the POMDP is equivalent to the MDP ( S , A , R , P , γ , T ) . 2.2 REPRESENTATIVE WORLD MODELS IN MBRL . The world model is a key component of MBRL that directly impacts policy training . World models are usually formulated with latent dynamics ( Janner et al. , 2019 ; Hafner et al. , 2020 ) , and the general form of the latent dynamics model can be summarized as follows : Latent transition kernel : h_t = f ( s_{≤t−1} , a_{≤t−1} ) ; Stochastic state function : p ( s_t | h_t ) ; Reward function : p ( r_t | h_t ) . The latent transition kernel ( shorthand : kernel ) predicts the latent state h_t from the inputs s_{≤t−1} and a_{≤t−1} . Based on the latent state h_t , the stochastic state function and the reward function decode the state s_t and the reward r_t . For a partially observable environment , two additional functions are required : Observation function : p ( o_t | s_t ) ; Representation function : p ( s_t | h_t , o_t ) . In general , world models mainly differ in the implementation of the kernel , which can be roughly divided into two categories : non-recurrent kernels and recurrent kernels . The formal definitions of both are as follows : h_t = f ( s_{t−1} , a_{t−1} ) for the non-recurrent kernel , and h_t = f ( h_{t−1} , s_{t−1} , a_{t−1} ) for the recurrent kernel . Non-recurrent kernels are relatively basic kernels for modeling , often implemented as fully-connected networks ; a non-recurrent kernel takes the current state s_{t−1} and action a_{t−1} as input and outputs the latent state prediction h_t . Compared to the non-recurrent kernel , the recurrent kernel is implemented as an RNN and takes the additional input h_{t−1} , which performs better under the POMDP setting . For both kernels , s_t and r_t can be generated from the latent prediction h_t . 3 ENVIRONMENT DYNAMICS DECOMPOSITION . 3.1 MOTIVATION . An accurate world model is critical for policy derivation in MBRL . To decrease the model error , existing works propose various techniques , as introduced in Section 1 . 
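The two kernel types just defined can be contrasted schematically, with toy linear maps standing in for the fully-connected and recurrent networks the paper describes (all shapes and weights are illustrative only):

```python
import numpy as np

def nonrecurrent_kernel(W, s, a):
    # h_t = f(s_{t-1}, a_{t-1}): depends only on the current state and action
    return np.tanh(W @ np.concatenate([s, a]))

def recurrent_kernel(W, h, s, a):
    # h_t = f(h_{t-1}, s_{t-1}, a_{t-1}): the extra h input carries history,
    # which is what helps under partial observability
    return np.tanh(W @ np.concatenate([h, s, a]))

rng = np.random.default_rng(0)
s, a = rng.normal(size=3), rng.normal(size=2)     # toy state and action
W_nr, W_r = rng.normal(size=(4, 5)), rng.normal(size=(4, 9))
h1 = nonrecurrent_kernel(W_nr, s, a)
h2 = recurrent_kernel(W_r, h1, s, a)
print(h1.shape, h2.shape)  # → (4,) (4,)
```

In a real world model these single linear maps would be a fully-connected network and an RNN cell (e.g. a GRU), and the latent h_t would feed the stochastic state and reward heads p(s_t | h_t) and p(r_t | h_t).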
However , these techniques improve the environment modeling in a black-box manner , which ignores the inner properties of the environment dynamics , resulting in inaccurate world model construction and poor policy performance . To address this problem , we propose two important environment properties to consider when modeling an environment : 1 ) Decomposability : the environment dynamics can be decomposed into multiple sub-dynamics in various ways , and the decomposed sub-dynamics can be combined to reconstruct the entire dynamics . 2 ) Traceability : the environment dynamics can be traced to the action ’ s impact on the environment , and each sub-dynamics can be traced to the impact caused by a part of the action . For example , in the Cheetah task , Figure 1 ( b ) demonstrates the decomposability : we can decompose the dynamics into { thigh , shin , foot } sub-dynamics or { back , front } sub-dynamics , depending on the decomposition perspective , and the combination of the decomposed sub-dynamics constitutes the entire dynamics . Figure 1 ( c ) explains the traceability : each sub-dynamics can be traced to a corresponding subset of action dimensions ; for instance , the thigh dynamics can be regarded as the impact caused by the front-thigh and back-thigh action dimensions . The above two properties are closely related to environment modeling : decomposability reveals the existence of sub-dynamics , which allows us to model the dynamics separately , while traceability investigates the causes of the dynamics and guides us to decompose the dynamics at its root ( i.e. , the action ) . To take these properties into account , we propose the environment dynamics decomposition ( ED2 ) framework ( shown in Figure 2 ) , which contains two key components : sub-dynamics discovery ( SD2 ) and dynamics decomposition prediction ( D2P ) . 
More specifically , by considering the traceability , we propose to discover the latent sub-dynamics by analyzing the action ( SD2 , the blue part in Figure 2 ) ; by considering the decomposability , we propose to construct the world model in a decomposing manner ( D2P , the green part in Figure 2 ) . Our framework can be used as a backbone in MBRL , and the combination can lead to performance improvements over existing MBRL algorithms . 3.2 DYNAMICS DECOMPOSITION PREDICTION . Given an environment with an m-dimensional action space A ⊂ Rm , the indices of the action dimensions constitute a set Λ = { 1 , 2 , · · · , m } , and any disjoint partition G = { G1 , . . . , Gk } over Λ corresponds to a particular way of decomposing the action space . For each action dimension i in Λ , we define its action space as A^i , which satisfies A = A^1 × · · · × A^m . The action space decomposition under partition G is defined as A^G = { A^{G_1} , · · · , A^{G_k} } , where the sub-action space A^{G_j} = ∏_{x ∈ G_j} A^x . Based on the above definitions , we define the dynamics decomposition for P under partition G as follows : Definition 1 Given a partition G , the decomposition for P : S × A → S can be defined as : P ( s , a ) = f_c ( (1/k) ∑_{i=1}^{k} P_i ( s , a^{G_i} ) ) , ∀ ( s , a ) ∈ S × A , ( 1 ) with a set of sub-dynamics functions { P_1 , ... , P_k } such that P_i : S × A^{G_i} → H , and a decoding function f_c : H → S . Note that H is a latent space and a^{G_i} ∈ A^{G_i} is a sub-action ( projection ) of action a . Intuitively , the choice of partition G is significant to the rationality of the dynamics decomposition , and it should be reasonably derived from the environment . In this section , we mainly focus on dynamics modeling ; we will introduce how to derive the partition G using SD2 in Section 3.3 . To implement D2P , we use a model M^i_{φ_i} parameterized by φ_i ( i.e. , neural network parameters ) to approximate each sub-dynamics P_i . 
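Definition 1 can be illustrated numerically with k = 2: a partition over a 3-dimensional action splits it into sub-actions, each toy sub-dynamics maps ( s , a^{G_i} ) into a latent vector, the latents are averaged, and a decoder f_c (here the identity) maps back to the state space. All shapes and functions below are illustrative stand-ins for the learned networks.

```python
import numpy as np

def d2p_step(s, a, partition, sub_models, decoder):
    # P(s, a) = f_c( (1/k) Σ_i P_i(s, a^{G_i}) )   -- Equation (1)
    latents = [m(s, a[list(g)]) for g, m in zip(partition, sub_models)]
    return decoder(np.mean(latents, axis=0))

partition = [(0, 1), (2,)]                  # G = { G1 = {0, 1}, G2 = {2} }
sub_models = [lambda s, ag: np.concatenate([s, ag]).sum() * np.ones(2)
              for _ in partition]           # toy P_i mapping into a 2-D latent space H
s_next = d2p_step(np.array([1.0]), np.array([0.5, 0.5, 2.0]),
                  partition, sub_models, lambda h: h)
print(s_next)  # → [2.5 2.5]
```

Each sub-model sees only its slice of the action (`a[list(g)]`), which is exactly the traceability assumption: every sub-dynamics is driven by a subset of the action dimensions.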
As illustrated in Figure 2 , given a partition G , an action a is divided into multiple sub-actions { a^{G_1} , · · · , a^{G_k} } ; each model M^i_{φ_i} takes the state s and the sub-action a^{G_i} as input and outputs a latent prediction h_i ∈ H . The separate latent predictions { h_1 , · · · , h_k } are aggregated and then decoded to generate the state s′ and reward r. For each kernel described in Section 2.2 , we provide the formal description when combined with D2P : h_t = (1/k) ∑_{i=1}^{k} f ( s_{t−1} , a^{G_i}_{t−1} ) for the non-recurrent kernel , and h_t = (1/k) ∑_{i=1}^{k} f ( h_{t−1} , s_{t−1} , a^{G_i}_{t−1} ) for the recurrent kernel . We propose a set of kernels , where each kernel models a specific sub-dynamics with the current state s , the corresponding sub-action a^{G_i} , and the hidden state h_{t−1} ( ignored for the non-recurrent kernel ) as input . The outputs of all kernels are averaged to obtain the final output h_t . The predictions of reward r_t and state s_t are generated from the output h_t . Specifically , we provide an example of combining with the kernel of the Recurrent State-Space Model ( RSSM ) ( Hafner et al. , 2019 ) , a representative recurrent kernel-based world model , in Figure 3 . The original single kernel , implemented as a GRU , is replaced by multiple kernels with different action inputs . | This paper presents a model-based RL algorithm that provides decomposed dynamics models when the state and action spaces are defined as a multi-dimensional Cartesian space. We can divide the method into two parts: * The one for partitioning the action coordinates, and * The NN architecture, one for each partition and then combining the outcomes ($h^i_t$) of partitions into the final one ($h_t$). This paper also shows a clustering based on the correlation, human-provided partition, and full/random partitioning algorithms. The experiment shows the results on Mujoco-based environments that have orthogonal coordinates per position and velocity, so the action coordinate partitioning can be readily applied. 
| SP:e3e6dc2285271426a37bc08e8989e03fe1891ea9 |
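The kernel combination described above — run one recurrent kernel per sub-action and average the latent outputs, h_t = (1/k) ∑_i f ( h_{t−1} , s_{t−1} , a^{G_i}_{t−1} ) — can be sketched with toy linear recurrences standing in for the paper's GRU cells (all values are illustrative):

```python
import numpy as np

def d2p_recurrent_step(h, s, sub_actions, kernels):
    # h_t = (1/k) Σ_i f(h_{t-1}, s_{t-1}, a^{G_i}_{t-1})
    outs = [f(h, s, ag) for f, ag in zip(kernels, sub_actions)]
    return np.mean(outs, axis=0)

# two toy kernels sharing the same linear recurrence in place of GRU cells
kernels = [lambda h, s, ag: 0.5 * h + s + ag for _ in range(2)]
h = np.zeros(1)
h = d2p_recurrent_step(h, np.ones(1), [np.array([1.0]), np.array([3.0])], kernels)
print(h)  # → [3.]
```

The first kernel produces 0.5·0 + 1 + 1 = 2 and the second 0.5·0 + 1 + 3 = 4, so the averaged latent is 3; in the D2P variant of RSSM, this averaged latent would then feed the state and reward decoders.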
ED2: An Environment Dynamics Decomposition Framework for World Model Construction | 1 INTRODUCTION . Reinforcement Learning ( RL ) is a general learning framework for solving sequential decision-making problems and has made significant progress in many fields ( Mnih et al. , 2015 ; Silver et al. , 2016 ; Vinyals et al. , 2019 ; Schrittwieser et al. , 2019 ) . In general , RL methods can be divided into two categories according to whether a world model is constructed for policy derivation : model-free RL ( MFRL ) and model-based RL ( MBRL ) . MFRL methods train the policy by directly interacting with the environment , which results in good asymptotic performance but low sample efficiency . By contrast , MBRL methods improve the sample efficiency by modeling the environment , but often achieve limited asymptotic performance and suffer from model error ( Lai et al. , 2020 ; Kaiser et al. , 2020 ) . Existing MBRL algorithms can be divided into four categories according to the paradigm they follow : the first category focuses on generating imaginary data with the world model and training the policy on these data via MFRL algorithms ( Kidambi et al. , 2020 ; Yu et al. , 2020 ) ; the second category leverages the differentiability of the world model and generates differentiable trajectories for policy optimization ( Deisenroth & Rasmussen , 2011 ; Levine & Koltun , 2013 ; Zhu et al. , 2020 ) ; the third category aims to obtain an accurate value function by generating imaginations for temporal difference ( TD ) target calculation ( Buckman et al. , 2018 ; Feinberg et al. , 2018 ) ; the last category focuses on reducing the computational cost of policy derivation by combining optimal control algorithms ( e.g. , model predictive control ) with the learned world models ( Chua et al. , 2018 ; Okada & Taniguchi , 2019 ; Argenson & Dulac-Arnold , 2020 ) . 
Regardless of paradigm , the performance of all existing MBRL algorithms depends on the accuracy of the world model : the more accurate the world model is , the more reliable the generated data are , and ultimately the better the policy performance that can be achieved . Therefore , improving world model accuracy is critical in MBRL . To this end , various techniques have been proposed . For example , rather than directly predicting the next state , some works construct a world model for state-change prediction ( Luo et al. , 2019 ; Kurutach et al. , 2018 ) . ( Our code is open source and available at https://github.com/ED2-source-code/ED2 . ) Model ensembles are also widely used in model construction for uncertainty estimation , which provides more reliable predictions ( Janner et al. , 2019 ; Pan et al. , 2020 ) . To reduce the model error in long trajectory generation , optimizing multi-step prediction errors is also an effective technique ( Hafner et al. , 2019 ) . However , these techniques improve the environment modeling in a black-box way , which ignores the inner decomposed structure of the environment dynamics . For example , Figure 1 ( a ) shows the Cheetah task from the DeepMind Control ( DMC ) Suite , where the dynamics can be decomposed in various ways . Figure 1 ( b ) shows various decompositions of the dynamics : according to the role of the sub-dynamics , we can decompose it into { thigh , shin , foot } ; alternatively , according to the position of the sub-dynamics , we can decompose it into { back , front } . Figure 1 ( d ) shows that whether we decompose the Cheetah task according to role or position , modeling each decomposed sub-dynamics separately can significantly reduce the model error of an existing MBRL algorithm ( e.g. , Dreamer ( Hafner et al. , 2020 ) ) . 
Inspired by the above example , we propose environment dynamics decomposition ( ED2 ) , a novel world model construction framework that models the dynamics in a decomposing fashion . ED2 contains two main components : sub-dynamics discovery ( SD2 ) and dynamics decomposition prediction ( D2P ) . SD2 is proposed to decompose the dynamics into multiple sub-dynamics , which can be flexibly designed , and we also provide three alternative approaches : complete decomposition , human prior , and the clustering-based method . D2P is proposed to construct the world model from the decomposed dynamics in SD2 , which models each sub-dynamics separately in an end-to-end training manner . ED2 is orthogonal to existing MBRL algorithms and can be used as a backbone to easily combine with any MBRL algorithm . Experiments show that ED2 improves the model accuracy and boosts the performance significantly when combined with existing MBRL algorithms . 2 BACKGROUND . 2.1 REINFORCEMENT LEARNING . Given an environment , we can define a finite-horizon partially observable Markov decision process ( POMDP ) as ( S , A , R , P , γ , O , Ω , T ) , where S ⊆ R^n is the state space , A ⊆ R^m is the action space , R : S × A → R denotes the reward function , P : S × A → S denotes the environment dynamics , and γ is the discount factor . The agent receives an observation o ∈ Ω , which contains partial information about the state s ∈ S. O is the observation function , which maps states to probability distributions over observations . The decision process length is denoted as T . Let η denote the expected return of a policy π over the initial state distribution ρ_0 . The goal of an RL agent is to find the optimal policy π^∗ which maximizes the expected return : π^∗ = arg max_π η [ π ] = arg max_π E_π [ ∑_{t=0}^T γ^t R ( s_t , a_t ) ] , where s_0 ∼ ρ_0 , o_t ∼ O ( ·|s_t ) , a_t ∼ π ( ·|o_t ) , s_{t+1} ∼ P ( ·|s_t , a_t ) . If the environment is fully observable , i.e. 
, Ω = S and O is an identity function , the POMDP is equivalent to the MDP ( S , A , R , P , γ , T ) . 2.2 REPRESENTATIVE WORLD MODELS IN MBRL . The world model is a key component of MBRL that directly impacts policy training . World models are usually formulated with latent dynamics ( Janner et al. , 2019 ; Hafner et al. , 2020 ) , and the general form of the latent dynamics model can be summarized as follows : Latent transition kernel : h_t = f ( s_{≤t−1} , a_{≤t−1} ) ; Stochastic state function : p ( s_t | h_t ) ; Reward function : p ( r_t | h_t ) . The latent transition kernel ( shorthand : kernel ) predicts the latent state h_t with input s_{≤t−1} and a_{≤t−1} . Based on the latent state h_t , the stochastic state function and reward function decode the state s_t and reward r_t . For the partially observable environment , two additional functions are required : Observation function : p ( o_t | s_t ) ; Representation function : p ( s_t | h_t , o_t ) . In general , world models mainly differ in the implementation of the kernel , which can be roughly divided into two categories : with a non-recurrent kernel and with a recurrent kernel . The formal definitions of both kernels are as follows : h_t = f ( s_{t−1} , a_{t−1} ) with a non-recurrent kernel , and h_t = f ( h_{t−1} , s_{t−1} , a_{t−1} ) with a recurrent kernel . Non-recurrent kernels are relatively basic kernels for modeling , and are often implemented as fully-connected networks . A non-recurrent kernel takes the current state s_{t−1} and action a_{t−1} as input and outputs the latent state prediction h_t . Compared to the non-recurrent kernel , the recurrent kernel is implemented as an RNN and takes the additional input h_{t−1} , which performs better under the POMDP setting . For both kernels , s_t and r_t can be generated from the latent prediction h_t . 3 ENVIRONMENT DYNAMICS DECOMPOSITION . 3.1 MOTIVATION . An accurate world model is critical for policy derivation in MBRL . To decrease the model error , existing works propose various techniques as introduced in Section 1 . 
However , these techniques improve the environment modeling in a black-box manner , which ignores the inner properties of environment dynamics , resulting in inaccurate world model construction and poor policy performance . To address this problem , we propose two important environment properties when modeling an environment : 1 ) Decomposability : The environment dynamics can be decomposed into multiple subdynamics in various ways and the decomposed sub-dynamics can be combined to reconstruct the entire dynamics . 2 ) Traceability : The environment dynamics can be traced to the action ’ s impact on the environment , and each sub-dynamics can be traced to the impact caused by a part of the action . For example in the Cheetah task , Figure 1 ( b ) demonstrates the decomposability : we can decompose the dynamics into { thigh , shin , foot } sub-dynamics or { back , front } sub-dynamics , which depends on the different decomposition perspectives and the combination of decomposed sub-dynamics can constitute the entire dynamics . Figure 1 ( c ) explains the traceability : each sub-dynamics can be traced to the corresponding subset of action dimensions : for the thigh dynamics , it can be regarded as the impact caused by the front-thigh and back-thigh action dimensions . The above two properties are closely related to environment modeling : the decomposability reveals the existence of sub-dynamics , which allows us to model the dynamics separately , while the traceability investigates the causes of the dynamics and guides us to decompose the dynamics at its root ( i.e . the action ) . To take the above properties into account , we propose the environment dynamics decomposition ( ED2 ) framework ( as shown in Figure 2 ) , which contains two key components : sub-dynamics discovery ( SD2 ) and dynamics decomposition prediction ( D2P ) . 
More specifically , by considering the traceability , we propose to discover the latent sub-dynamics by analyzing the action ( SD2 , the blue part in Figure 2 ) ; by considering the decomposability , we propose to construct the world model in a decomposing manner ( D2P , the green part in Figure 2 ) . Our framework can be used as a backbone in MBRL and the combination can lead to performance improvements over existing MBRL algorithms . 3.2 DYNAMICS DECOMPOSITION PREDICTION . Given an environment with an m-dimensional action space A ⊂ R^m , the indices of the action dimensions constitute a set Λ = { 1 , 2 , · · · , m } , and any disjoint partition G = { G_1 , . . . , G_k } over Λ corresponds to a particular way of decomposing the action space . For each action dimension i in Λ , we define its action space as A^i , which satisfies A = A^1 × · · · × A^m . The action space decomposition under partition G is defined as A_G = { A^{G_1} , · · · , A^{G_k} } , where the sub-action space A^{G_j} = ∏_{x∈G_j} A^x . Based on the above definitions , we define the dynamics decomposition for P under partition G as follows : Definition 1 . Given a partition G , the decomposition for P : S × A → S can be defined as : P ( s , a ) = f_c ( ( 1/k ) ∑_{i=1}^k P_i ( s , a^{G_i} ) ) , ∀ ( s , a ) ∈ S × A , ( 1 ) with a set of sub-dynamics functions { P_1 , ... , P_k } such that P_i : S × A^{G_i} → H , and a decoding function f_c : H → S. Note that H is a latent space and a^{G_i} ∈ A^{G_i} is a sub-action ( projection ) of action a . Intuitively , the choice of partition G is significant to the rationality of the dynamics decomposition , and should be reasonably derived from the environment . In this section , we mainly focus on dynamics modeling ; we will introduce how to derive the partition G using SD2 in Section 3.3 . To implement D2P , we use a model M^i_{φ_i} parameterized by φ_i ( i.e. , neural network parameters ) to approximate each sub-dynamics P_i . 
As illustrated in Figure 2 , given a partition G , an action a is divided into multiple sub-actions { a^{G_1} , · · · , a^{G_k} } ; each model M^i_{φ_i} takes the state s and the sub-action a^{G_i} as input and outputs a latent prediction h_i ∈ H . The separate latent predictions { h_1 , · · · , h_k } are aggregated and then decoded for the generation of state s′ and reward r. For each kernel described in Section 2.2 , we provide the formal description when combined with D2P : h_t = ( 1/k ) ∑_{i=1}^k f ( s_{t−1} , a^{G_i}_{t−1} ) for the non-recurrent kernel , and h_t = ( 1/k ) ∑_{i=1}^k f ( h_{t−1} , s_{t−1} , a^{G_i}_{t−1} ) for the recurrent kernel . We propose a set of kernels , where each kernel models a specific sub-dynamics with the input of the current state s , the corresponding sub-action a^{G_i} and the hidden state h_{t−1} ( ignored for the non-recurrent kernel ) . The outputs of all kernels are averaged to get the final output h_t . The predictions of reward r_t and state s_t are generated from the output h_t . Specifically , we provide an example of combining with the kernel of the Recurrent State-Space Model ( RSSM ) ( Hafner et al. , 2019 ) in Figure 3 , which is a representative recurrent kernel-based world model . The original single kernel , implemented as a GRU , is replaced by multiple kernels with different action inputs . | The authors propose an environment dynamics decomposition (ED2) framework to decompose the environmental dynamics into multiple sub-dynamics. Empirical results show that ED2 improves the performance of several model-based reinforcement learning (MBRL) algorithms. The paper is well-written and easy to read, and the motivation is clear. | SP:e3e6dc2285271426a37bc08e8989e03fe1891ea9 
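The D2P prediction rule quoted above (each sub-dynamics model sees only its sub-action, and the k latent predictions are averaged before decoding) can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the partition, the dimensions, and the single linear layers standing in for the networks M^i are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_linear(in_dim, out_dim):
    # a single linear layer standing in for a sub-dynamics network M^i
    return rng.normal(0, 0.1, size=(in_dim, out_dim))

state_dim, latent_dim = 4, 8
partition = [[0, 1], [2, 3, 4], [5]]          # a partition G over 6 action dims
kernels = [init_linear(state_dim + len(g), latent_dim) for g in partition]
decoder = rng.normal(0, 0.1, size=(latent_dim, state_dim))

def d2p_step(s, a):
    # each kernel sees the state plus only its sub-action a^{G_i};
    # the k latent predictions are averaged, then decoded to s'
    latents = [np.concatenate([s, a[g]]) @ W for g, W in zip(partition, kernels)]
    h = np.mean(latents, axis=0)
    return h @ decoder

s, a = rng.normal(size=state_dim), rng.normal(size=6)
print(d2p_step(s, a).shape)   # (4,)
```

The same structure carries over to the recurrent kernel by also concatenating the previous latent h_{t-1} into each kernel's input.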
On Learning with Fairness Trade-Offs | 1 INTRODUCTION . Reducing bias in an algorithm requires a number of steps ; first , specify the fairness definitions ( and thus fairness metrics ) that apply , second , encode them with a penalty so as to measure the discrepancy between outcomes and the perfect fairness scenario , third , choose the trade-off between the original statistical goal and fairness constraints , and , fourth , pick a method to debias ( at least partially ) the model . There are a number of obstacles to this programme . Indeed , debiasing may come at a cost ( Rodolfa et al . ( 2020 ) ) , but there are also theoretical reasons behind this claim . Jointly tackling multiple fairness definitions is usually difficult if not impossible without a decrease in model performance due to a series of impossibility theorems ( Kleinberg et al . ( 2017 ) ; Pleiss et al . ( 2017 ) ; Chouldechova ( 2017 ) ; Kim et al . ( 2020 ) ) . Note that these results apply not only to fairness-performance but also to fairness-fairness trade-offs . Research has recently been undertaken to tackle specifically learnability and generalisation in fairness-related problems . In particular , Oneto et al . ( 2020 ) ; Oneto et al . ( 2020 ) derive provable probabilistic upper bounds in a fairness setting where one estimates an overall statistical loss and monitors disparities across categories . In Chen et al . ( 2018 ) , the authors have applied bias-variance decomposition techniques to disentangle the sources of unfairness in a classifier . While this is not directly related to the present work , the principle of risk decomposability turns out to be fruitful in both setups . Finally , in Agrawal et al . ( 2020 ) , a limiting distribution is established in the simple case where there is a partial requirement involving a fairness metric with two categories . Our contributions . 
The novelty of our work consists of the application of known techniques such as PAC inequalities and the central limit theorem to the problem of learning under fairness constraints : • First , we develop a PAC framework for fairness trade-offs , generalising results from Donini et al . ( 2018 ) ; Oneto et al . ( 2020 ) and tackling explicitly partial debiasing ( see Kim et al . ( 2020 ) ; Agrawal et al . ( 2020 ) ) . • Second , we show that certain properties of the probabilistic upper bounds lead to the need for sample-efficient bias mitigation techniques . In particular , we show that a new quantity , ZS , which can be understood as measure of sample concentration , plays a crucial role . We also build on Agrawal et al . ( 2020 ) and develop an asymptotic framework with a known limiting distribution ; this is useful as PAC bounds may not always be very sharp . • Third , in both frameworks , we put forward the decomposability of the learning risk , expressed as an upper bound in the PAC realm and a limiting variance in the asymptotic one . • Last , we illustrate our results on real-life data and show that generalisation is indeed difficult . 2 FAIRNESS METRICS AND LOSS FUNCTIONS . 2.1 SET-UP AND DEFINITIONS . In this article , we define s = 1 , · · · , C to be a categorical ( protected ) attribute , x ∈ X ⊂ Rd to be a set of non-protected features excluding s. y ∈ { − 1 , +1 } is a binary outcome variable and ŷ ∈ { − 1 , +1 } is an estimator for y . Finally , z = ( x , y , s ) . Note that ŷ is derived from a learner h ∈ H , whereH is a given functional space . Furthermore , we define S to be the in-sample ( training ) empirical distribution , and D to refer to the true distribution . Throughout this paper , we consider the case of an overall objective as a statistical performance objective , L0 , plus a fairness loss function φ . The trade-off is tuned by a hyper-parameter λ ≥ 0 . As usual , the aim is to minimise the overall objective , i.e. 
, minimise the statistical loss and the lack of fairness . This set-up is typical for partial debiasing and can be found in Kim et al . ( 2020 ) . Definition 1 . The overall objective or fairness trade-off is defined as L_D ( h ) = L^0_D ( h ) + λ φ ( L^+_{D,1} ( h ) , L^−_{D,1} ( h ) , · · · , L^+_{D,C} ( h ) , L^−_{D,C} ( h ) ) = L^0_D ( h ) + λ φ ( L^±_D ( h ) ) , where we have used the standard notations ( see Shalev-Shwartz & Ben-David ( 2014 ) ) : L^0_D ( h ) = E [ ℓ^0 ( h , z ) ] , L^+_{D,a} ( h ) = E [ ℓ^+ ( h , z ) | s = a , y = 1 ] , L^−_{D,a} ( h ) = E [ ℓ^− ( h , z ) | s = a , y = −1 ] , and the vector notation L^±_D ( h ) = [ L^+_{D,1} ( h ) , L^−_{D,1} ( h ) , · · · , L^+_{D,C} ( h ) , L^−_{D,C} ( h ) ]^T , for some functions ℓ^0 , ℓ^+ and ℓ^− . Remark 1 . In the whole paper , we assume that there exists a uniform bound B > 0 on all functions ℓ , i.e. , | ℓ ( h , z ) | ≤ B for all h and z . ℓ refers to any such function in what follows . Let us detail two particular cases . If λ = 0 , then the overall objective boils down to the usual risk minimisation problem . If ℓ^0 = 0 or λ → +∞ , then the trade-off becomes a fairness constraint . We are now in a position to define the empirical counterpart by considering the empirical distribution rather than the true distribution . With the additional notations N^±_a = { i ∈ { 1 , · · · , n } ; s_i = a , y_i = ±1 } , | N^±_a | = n^±_a , the corresponding sample versions can be defined as L^0_S ( h ) = ( 1/n ) ∑_{i=1}^n ℓ^0 ( h , z_i ) , L^+_{S,a} ( h ) = ( 1/n^+_a ) ∑_{i∈N^+_a} ℓ^+ ( h , z_i ) and L^−_{S,a} ( h ) = ( 1/n^−_a ) ∑_{i∈N^−_a} ℓ^− ( h , z_i ) , where n_a = n^+_a + n^−_a , n = ∑_{a=1}^C n_a , leading to L_S ( h ) = L^0_S ( h ) + λ φ ( L^±_S ( h ) ) ( 1 ) . For the sake of simplicity , we will generally refer to φ ( L^±_T ( h ) ) as φ_T ( h ) for T = D , S . 2.2 FAIRNESS DEFINITIONS AND METRICS . One can find multiple technical definitions of fairness in the literature ; they have been reviewed in various papers ( Narayanan , 2018 ; Verma & Rubin , 2018 ; Berk et al. , 2018 ; Kim et al. 
, 2020 ; Agrawal et al. , 2020 ) . We offer an overview of the most frequent metrics in Table 1 . We note that all fairness metrics considered here can be expressed as an equality requirement on probabilities , hence on the expectation of indicator variables . These fall within our framework . However , this framework can also handle other types of functions . 2.3 FAIRNESS LOSS FUNCTIONS . In addition to picking one or multiple fairness definitions , we need to specify φ to measure the discrepancy from perfect fairness , the ideal case . A first approach , similar to the Calders-Verwer gap ( Calders & Verwer , 2010 ) , consists of looking at the discrepancy between two categories a and a′ : Δ^{L,±}_{T,a,a′} ( h ) = L^±_{T,a} ( h ) − L^±_{T,a′} ( h ) , for T ∈ { D , S } . It is worth 0 under perfect fairness , but is asymmetric and thus requires defining an advantaged ( or benchmark ) category a . To avoid the issue of asymmetry , one can follow Oneto et al . ( 2020 ) , and define φ as the sum of all absolute discrepancies across categories : φ ( L^±_T ( h ) ) = ∑_{a≠a′} { | L^+_{T,a} − L^+_{T,a′} | + | L^−_{T,a} − L^−_{T,a′} | } , ( 2 ) for T = D , S . Notice that , thanks to the reverse triangle inequality , the function φ is Lipschitz continuous . One could naturally weigh the various contributions to this fairness function and use another norm than L1 . Other functions involving ratios or relative differences are also possible . Finally , in Kim et al . ( 2020 ) , the authors consider a convex combination of multiple loss functions φ^{(1)} , · · · , φ^{(M)} : φ ( L^±_T ( h ) ) = ∑_{j=1}^M λ_j φ^{(j)} ( L^±_T ( h ) ) . Note that if the φ^{(j)} ’ s are Lipschitz-continuous ( with respect to the loss vector L^±_T ( h ) ) , with respective Lipschitz constants K^{(j)} , then the overall loss φ is itself Lipschitz-continuous with constant K_φ = ∑_{j=1}^M λ_j K^{(j)} . 3 LEARNING FAIRNESS TRADE-OFFS . In this section , we consider the learning problem from different angles . 
We start by considering the statistical loss L^0 and derive a bound based on the Rademacher complexity observed in each category . To do so , we borrow from the statistical learning toolbox and briefly review Rademacher complexities . We then move on to learning the fairness part per se , namely studying the generalisation properties of φ_S ( h ) . Finally , we bring the two together and derive probabilistic upper bounds on fairness trade-off generalisation . 3.1 LEARNING THE STATISTICAL LOSS . Let us first focus on the statistical component of the loss , i.e. , L^0_T for T ∈ { S , D } . Much of this paper is based on the theory of bounds derived from Rademacher complexities , introduced in Bartlett & Mendelson ( 2003 ) and surveyed in Boucheron , Stéphane et al . ( 2005 ) . Recent textbooks such as Shalev-Shwartz & Ben-David ( 2014 ) ; Mohri et al . ( 2012 ) provide excellent introductions to the topic . We start by recalling the definition of Rademacher complexities , as indicated in Boucheron , Stéphane et al . ( 2005 ) : Definition 2 . The Rademacher complexity ( or average ) of a function ℓ is given by R ( ℓ ◦ H ◦ S ) = E [ sup_{h∈H} ( 1/n ) | ∑_{i=1}^n σ_i ℓ ( h , z_i ) | ] . ( 3 ) Note that definitions can vary slightly ( depending for instance on the presence or absence of the absolute value in the definition ) , but all downstream results are qualitatively similar , up to some multiplicative constants . To make this more concrete , we can consider a simple ( but usual ) case ( as described in Shalev-Shwartz & Ben-David ( 2014 ) ) . We suppose that almost surely ‖x‖_2 ≤ R . In addition , if we let H = { w ; ‖w‖_2 ≤ R′ } and assume that the loss function ℓ is of the type ℓ ( w , ( x , y ) ) = ρ ( w^T x , y ) , where |ρ| is bounded by B and is L_ρ-Lipschitz , then , almost surely , R ( ℓ ◦ H ◦ S ) ≤ L_ρ R R′ / √n . Rademacher complexities are key to establishing learnability bounds and lead to fundamental results in statistical learning theory . 
In particular , we will make use of a standard result ( see Boucheron , Stéphane et al . ( 2005 ) ) . Proposition 1 . With probability at least 1 − δ , it holds that sup_{h∈H} | L_D ( h ) − L_S ( h ) | ≤ 2 R ( ℓ ◦ H ◦ S ) + B √( 2 log ( 2/δ ) / n ) . ( 4 ) It will also be useful to introduce conditional Rademacher complexities that will be used throughout this paper . In particular , since we have a partition of the sample in terms of categories S = ⋃_{a=1}^C N_a , we can consider the Rademacher complexity of that particular sample : R ( ℓ ◦ H ◦ N_a ) = E [ sup_{h∈H} ( 1/n_a ) | ∑_{i∈N_a} σ_i ℓ ( h , z_i ) | ] . ( 5 ) Lemma 1 . The sample Rademacher complexity can be bounded from above by the weighted sum of conditional Rademacher complexities : R ( ℓ ◦ H ◦ S ) ≤ ∑_{a=1}^C ( n_a / n ) R ( ℓ ◦ H ◦ N_a ) , ( 6 ) where R ( ℓ ◦ H ◦ N_a ) = E [ sup_{h∈H} ( 1/n_a ) | ∑_{i∈N_a} σ_i ℓ ( h , z_i ) | ] . Proof . This comes directly from the sub-additivity of the absolute value and the supremum . One can further interpret the non-negative gap ∑_{a=1}^C ( n_a / n ) R ( ℓ ◦ H ◦ N_a ) − R ( ℓ ◦ H ◦ S ) as a diversification benefit amongst categories . Finally , this leads us to a proposition leveraging our results so far : Proposition 2 . With probability at least 1 − δ , sup_{h∈H} | L^0_D ( h ) − L^0_S ( h ) | ≤ 2 ∑_{a=1}^C ( n_a / n ) R ( ℓ^0 ◦ H ◦ N_a ) + B √( 2 log ( 2/δ ) / n ) . ( 7 ) | This paper establishes the guarantee for the generalization of fairness-aware learning in binary classification under PAC-learning and a more practical asymptotic framework. Through the derived theorem, authors conclude that low sample size and class balance lead to the poor generalization of fairness-aware learning, and the need for a sample-efficient method is also argued. The experimental results using real data are aligned with the theorem, emphasizing the validity of theorem. | SP:3e69b216b8568e1ae0f25c73d45219ac3bef8a43 
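Definition 2 and Lemma 1 above are directly computable for a toy finite hypothesis class, where the supremum over H is an exact maximum. The sketch below estimates the empirical Rademacher complexity by Monte Carlo over the signs σ, and checks Lemma 1 on a two-category split. The data, the threshold classifiers, and the ±1 losses are all invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy sample with C = 2 categories, and a small finite hypothesis class
# (threshold classifiers), so the sup over H in Definition 2 is an exact max
z = rng.normal(size=30)
groups = rng.integers(0, 2, size=30)
thresholds = np.linspace(-2, 2, 21)
losses = np.where(z[None, :] > thresholds[:, None], 1.0, -1.0)  # ℓ(h, z_i)

def rademacher(losses, n_trials=4000):
    # Monte Carlo estimate of E[ sup_h (1/n) | Σ_i σ_i ℓ(h, z_i) | ]
    n = losses.shape[1]
    sigmas = rng.choice([-1.0, 1.0], size=(n_trials, n))
    return np.abs(losses @ sigmas.T / n).max(axis=0).mean()

r_full = rademacher(losses)
# Lemma 1: R(ℓ ◦ H ◦ S) ≤ Σ_a (n_a / n) R(ℓ ◦ H ◦ N_a)
r_split = sum((groups == a).mean() * rademacher(losses[:, groups == a])
              for a in (0, 1))
print(round(r_full, 3), round(r_split, 3))
```

The weighted per-category sum comes out larger than the full-sample complexity, as Lemma 1 requires, with the gap acting as the "diversification benefit" mentioned above.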
On Learning with Fairness Trade-Offs | 1 INTRODUCTION . Reducing bias in an algorithm requires a number of steps ; first , specify the fairness definitions ( and thus fairness metrics ) that apply , second , encode them with a penalty so as to measure the discrepancy between outcomes and the perfect fairness scenario , third , choose the trade-off between the original statistical goal and fairness constraints , and , fourth , pick a method to debias ( at least partially ) the model . There are a number of obstacles to this programme . Indeed , debiasing may come at a cost ( Rodolfa et al . ( 2020 ) ) , but there are also theoretical reasons behind this claim . Jointly tackling multiple fairness definitions is usually difficult if not impossible without a decrease in model performance due to a series of impossibility theorems ( Kleinberg et al . ( 2017 ) ; Pleiss et al . ( 2017 ) ; Chouldechova ( 2017 ) ; Kim et al . ( 2020 ) ) . Note that these results apply not only to fairnessperformance but also to fairness-fairness trade-offs . Research has recently been undertaken to tackle specifically learnability and generalisation in fairness-related problems . In particular , Oneto et al . ( 2020 ) ; Oneto et al . ( 2020 ) derive provable probabilistic upper bounds in a fairness setting where one estimates an overall statistical loss and monitors disparities across categories . In Chen et al . ( 2018 ) , the authors have applied bias-variance decomposition techniques to disentangle the sources of unfairness in a classifier . While this is not directly related to this present work , the principle of risk decomposability turns out to be fruitful in both setups . Finally , in Agrawal et al . ( 2020 ) , a limiting distribution is established in the simple case where there is a partial requirement involving a fairness metric with two categories . Our contributions . 
The novelty of our work consists of the application of known techniques such as PAC inequalities and the central limit theorem to the problem of learning under fairness constraints : • First , we develop a PAC framework for fairness trade-offs , generalising results from Donini et al . ( 2018 ) ; Oneto et al . ( 2020 ) and tackling explicitly partial debiasing ( see Kim et al . ( 2020 ) ; Agrawal et al . ( 2020 ) ) . • Second , we show that certain properties of the probabilistic upper bounds lead to the need for sample-efficient bias mitigation techniques . In particular , we show that a new quantity , ZS , which can be understood as measure of sample concentration , plays a crucial role . We also build on Agrawal et al . ( 2020 ) and develop an asymptotic framework with a known limiting distribution ; this is useful as PAC bounds may not always be very sharp . • Third , in both frameworks , we put forward the decomposability of the learning risk , expressed as an upper bound in the PAC realm and a limiting variance in the asymptotic one . • Last , we illustrate our results on real-life data and show that generalisation is indeed difficult . 2 FAIRNESS METRICS AND LOSS FUNCTIONS . 2.1 SET-UP AND DEFINITIONS . In this article , we define s = 1 , · · · , C to be a categorical ( protected ) attribute , x ∈ X ⊂ Rd to be a set of non-protected features excluding s. y ∈ { − 1 , +1 } is a binary outcome variable and ŷ ∈ { − 1 , +1 } is an estimator for y . Finally , z = ( x , y , s ) . Note that ŷ is derived from a learner h ∈ H , whereH is a given functional space . Furthermore , we define S to be the in-sample ( training ) empirical distribution , and D to refer to the true distribution . Throughout this paper , we consider the case of an overall objective as a statistical performance objective , L0 , plus a fairness loss function φ . The trade-off is tuned by a hyper-parameter λ ≥ 0 . As usual , the aim is to minimise the overall objective , i.e. 
, minimise the statistical loss and the lack of fairness . This set-up is typical for partial debiasing and can be found in Kim et al . ( 2020 ) . Definition 1 . The overall objective or fairness trade-off is defined as LD ( h ) = L0D ( h ) + λφ ( L+D,1 ( h ) , L − D,1 ( h ) , · · · , L + D , C ( h ) , L − D , C ( h ) ) = L0D ( h ) + λφ ( L ± D ( h ) ) , where we have used the standard notations ( see Shalev-Shwartz & Ben-David ( 2014 ) ) : L0D ( h ) = E [ ` 0 ( h , z ) ] L+D , a ( h ) = E [ ` + ( h , z ) |s = a , y = 1 ] L−D , a ( h ) = E [ ` − ( h , z ) |s = a , y = −1 ] and the vector notation L±D ( h ) = [ L+D,1 ( h ) , L − D,1 ( h ) , · · · , L + D , C ( h ) , L − D , C ( h ) ] T , for some functions ` 0 , ` + and ` − . Remark 1 . In the whole paper , we assume that there exists a uniform bound B > 0 on all functions ` , i.e. , | ` ( h , z ) | ≤ B for all h and z . ` refers to any such function in what follows . Let us detail two particular cases . If λ = 0 , then the overall objective boils down to the usual risk minimisation problem . If ` 0 = 0 or λ→ +∞ , then the trade-off becomes a fairness constraint . We are now in a position to define the empirical counterpart by considering the empirical distribution rather than the true distribution . With the additional notationsN±a = { i ∈ { 1 , · · · , n } ; si = a , yi = ±1 } , |N±a | = n±a , the corresponding sample versions can be defined as L0S ( h ) = 1n ∑n i=1 ` 0 ( h , zi ) , L+S , a ( h ) = 1 n+a ∑ i∈N+a ` + ( h , zi ) and L−S , a ( h ) = 1 n−a ∑ i∈N−a ` − ( h , zi ) , where na = n+a + n − a , n = ∑C a=1 na , leading to LS ( h ) = L0S ( h ) + λφ ( L±S ( h ) ) ( 1 ) For the sake of simplicity , we will generally refer to φ ( L±T ( h ) ) as φT ( h ) for T = D , S . 2.2 FAIRNESS DEFINITIONS AND METRICS . One can find multiple technical definitions of fairness in the literature ; they have been reviewed in various papers ( Narayanan , 2018 ; Verma & Rubin , 2018 ; Berk et al. , 2018 ; Kim et al. 
, 2020 ; Agrawal et al. , 2020 ) . We offer an overview of the most frequent metrics in Table 1 . We note that all fairness metrics considered here can be expressed as an equality requirement on probabilities , hence on the expectation of indicator variables . These fall within our framework . However , this framework can also handle other types of functions . 2.3 FAIRNESS LOSS FUNCTIONS . In addition to picking one or multiple fairness definitions , we need to specify φ to measure the discrepancy from perfect fairness , the ideal case . A first approach , similar to the Calders-Verwer gap ( Calders & Verwer , 2010 ) , consists of looking at the discrepancy between two categories a and a′ : ∆L , ±T , a , a′ ( h ) = L ± T , a ( h ) − L ± T , a′ ( h ) , for T ∈ { D , S } . It is worth 0 under perfect fairness , but is asymmetric and thus requires defining an advantaged ( or benchmark ) category a . To avoid the issue of asymmetry , one can follow Oneto et al . ( 2020 ) , and define φ as the sum of all absolute discrepancies across categories : φ ( L±T ( h ) ) = ∑ a6=a′ { ∣∣∣L+T , a − L+T , a′ ∣∣∣+ ∣∣∣L−T , a − L−T , a′∣∣∣ } , ( 2 ) for T = D , S . Notice that , thanks to the reverse triangle inequality , the function φ is Lipschitz continuous . One could naturally weigh the various contributions to this fairness function and use another norm than L1 . Other functions involving ratios or relative differences , are also possible . Finally , in Kim et al . ( 2020 ) , the authors consider a convex combination of multiple loss functions φ ( 1 ) , · · · , φ ( M ) : φ ( L±T ( h ) ) = M∑ j=1 λjφ ( j ) ( L±T ( h ) ) . Note that if the φ ( j ) ’ s are Lipschitz-continuous ( with respect to the loss vector L±T ( h ) ) , with respective Lipschitz constant K ( j ) , then the overall loss φ is itself Lipschitz-continuous with constant Kφ = ∑M j=1 λjK ( j ) . 3 LEARNING FAIRNESS TRADE-OFFS . In this section , we consider the learning problem from different angles . 
We start by considering the statistical loss L0 and derive a bound based on the Rademacher complexity observed in each category . To do so , we borrow from the statistical learning toolbox and briefly review Rademacher complexities . We then move on to learning the fairness part per se , namely study the generalisation properties of φS ( h ) . Finally , we bring the two together and derive probabilistic upper bounds on fairness trade-off generalisation . 3.1 LEARNING THE STATISTICAL LOSS . Let us first focus on the statistical component of the loss , i.e. , L0T for T ∈ { S , D } . Much of this paper is based on theory of bounds derived from Rademacher complexities , introduced in Bartlett & Mendelson ( 2003 ) and surveyed in Boucheron , Stéphane et al . ( 2005 ) . Recent textbooks such as Shalev-Shwartz & Ben-David ( 2014 ) ; Mohri et al . ( 2012 ) provide excellent introductions to the topic . We start by recalling the definition of Rademacher complexities , as indicated in Boucheron , Stéphane et al . ( 2005 ) : Definition 2 . The Rademacher complexity ( or average ) of a function ` is given by R ( ` ◦ H ◦ S ) = E [ sup h∈H 1 n ∣∣∣∣∣ n∑ i=1 σi ` ( h , zi ) ∣∣∣∣∣ ] . ( 3 ) Note that definitions can slightly vary ( depending for instance on the presence or absence of the absolute value in the definition ) , but all downstream results are qualitatively similar , up to some multiplicative constants . To make this more concrete , we can consider a simple ( but usual ) case ( as describe in ShalevShwartz & Ben-David ( 2014 ) ) . We suppose that almost surely ‖x‖2 ≤ R. In addition , if we let H = { w ; ‖w‖2 ≤ R′ } and assume that the loss function ` is of the type ` ( w , ( x , y ) ) = ρ ( wTx , y ) , where |ρ| is bounded by B and is Lρ-Lipschitz , then , almost surely , R ( ` ◦ H ◦ S ) ≤ L ρRR′√ n . Rademacher complexities are key to establishing learnability bounds and lead to fundamental results in statistical learning theory . 
In particular , we will make use of a standard result ( see Boucheron , Stéphane et al . ( 2005 ) ) . Proposition 1 . With probability at least 1− δ , it holds sup h∈H |LD ( h ) − LS ( h ) | ≤ 2R ( ` ◦ H ◦ S ) +B √ 2 log 2δ n . ( 4 ) It will also be useful to introduce conditional Rademacher complexities that will be used throughout this paper . In particular , since we have a partition of the sample in terms of categories S = ⋃C a=1Na , we can consider the Rademacher complexity of that particular sample : R ( ` ◦ H ◦ Na ) = E [ sup h∈H 1 na ∣∣∣∣∣∑ i∈Na σi ` ( h , zi ) ∣∣∣∣∣ ] . ( 5 ) Lemma 1 . The sample Rademacher complexity can be bounded from above by the weighted sum of conditional Rademacher complexities : R ( ` ◦ H ◦ S ) ≤ C∑ a=1 na n R ( ` ◦ H ◦ Na ) , ( 6 ) where R ( ` ◦ H ◦ Na ) = E [ suph∈H 1 na ∣∣∑ i∈Na σi ` ( h , zi ) ∣∣ ] . Proof . This comes directly from the sub-additivity of the absolute value and the supremum . One can further interpret the non-negative gap ∑C a=1 na n R ( ` ◦ H ◦ Na ) − R ( ` ◦ H ◦ S ) as a diversification benefit amongst categories . Finally , this leads us to a proposition leveraging our results so far : Proposition 2 . With probability at least 1− δ , sup h∈H ∣∣L0D ( h ) − L0S ( h ) ∣∣ ≤ 2 C∑ a=1 na n R ( ` 0 ◦ H ◦ Na ) +B √ 2 log 2δ n . ( 7 ) | The paper provides an analysis of fairness via regularizing a loss function with a suitably selected fairness measure / metric. In particular, the paper analysis this under a PAC learning setting and a sample limit (asymptotic) setting. A majority of the paper focuses on the PAC setting, where Rademacher complexity bounds are made and analysed. Some experiments are included. | SP:3e69b216b8568e1ae0f25c73d45219ac3bef8a43 |
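The fairness loss of Eq. (2), the sum of absolute discrepancies between the per-group conditional losses L^±_{S,a}, is straightforward to compute from a sample. The sketch below does so on synthetic data; the group-dependent error rates are fabricated purely so that the penalty is visibly non-zero, and nothing here comes from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy sample: n points with protected attribute s in {1, 2, 3},
# binary labels y, and 0/1 losses from some fixed classifier
n = 300
s = rng.integers(1, 4, size=n)
y = rng.choice([-1, 1], size=n)
loss = rng.random(n) < 0.2 + 0.1 * s          # group-dependent error rate

def fairness_penalty(loss, s, y):
    groups = np.unique(s)
    Lp = [loss[(s == a) & (y == 1)].mean() for a in groups]    # L^+_{S,a}
    Lm = [loss[(s == a) & (y == -1)].mean() for a in groups]   # L^-_{S,a}
    # Eq. (2): sum of absolute discrepancies over ordered pairs a != a'
    phi = 0.0
    for i in range(len(groups)):
        for j in range(len(groups)):
            if i != j:
                phi += abs(Lp[i] - Lp[j]) + abs(Lm[i] - Lm[j])
    return phi

print(round(fairness_penalty(loss, s, y), 3))
```

The full empirical objective of Eq. (1) is then the average of `loss` plus λ times this penalty, for a chosen trade-off weight λ.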
On the Pitfalls of Analyzing Individual Neurons in Language Models | 1 INTRODUCTION . Many studies attempt to interpret language models by predicting different linguistic properties from word representations , an approach called probing classifiers ( Adi et al. , 2017 ; Conneau et al. , 2018 , inter alia ) . A growing body of work focuses on individual neurons within the representation , attempting to show in which neurons some information is encoded , and whether it is localized ( concentrated in a small set of neurons ) or dispersed . Such knowledge may allow us to control the model ’ s output ( Bau et al. , 2019 ) , to reduce the number of parameters in the model ( Voita et al. , 2019 ; Sajjad et al. , 2020 ) , and to gain a general scientific knowledge of the model . The common methodology is to train a probe to predict some linguistic attribute from a representation , and to use it , in different ways , to rank the neurons of the representation according to their importance for the attribute in question . The same probe is then used to predict the attribute , but using only the k-highest ranked neurons from the obtained ranking , and the probe ’ s accuracy in this scenario is considered as a measure of the ranking ’ s quality ( Dalvi et al. , 2019 ; Torroba Hennigen et al. , 2020 ; Durrani et al. , 2020 ) . We see this framework as exhibiting Pitfall I : Two distinct factors are conflated—the probe ’ s classification quality and the quality of the ranking it produces . A good classifier may provide good results even if its ranking is bad , and an optimal ranking may cause an average classifier to provide better results than a good classifier that is given a bad ranking . Another shortcoming of the current methodology , which we mark as Pitfall II , is the focus on encoded information , regardless of whether it is actually used by the model in its language modeling task . A few studies ( Elazar et al. , 2021 ; Feder et al. 
, 2020 ) have considered this question , and shown that encoded information is not necessarily being used for language modeling , but these do not look at individual neurons . We argue that in order to evaluate a ranking , one should also examine if , and how , the k-highest ranked neurons are used by the model for the attribute in question , meaning that modifying them would change the model ’ s prediction—but with respect to that attribute only . This would allow some control over the model ’ s output , and grant us parameter-level explanations of the model ’ s decisions . In this work , we analyze three neuron ranking methods . Since the ranking space is too large ( 768 ! in BERT ’ s case ) , these methods provide approximations to the problem and are non-optimal . Two of these methods—LINEAR ( Dalvi et al. , 2019 ) and GAUSSIAN ( Torroba Hennigen et al. , 2020 ) —rely on an external probe to obtain a ranking : the first makes use of the internal weights of a linear probe , while the second considers the performance of a decomposable generative probe . The third is a simple ranking method we propose , PROBELESS , which ranks neurons according to the difference in their values across labels , and thus can be derived directly from the data , with no probing involved . 1Our code is available at : < ANONYMIZED > We experiment with disentangling probe quality and ranking quality , by using a probe from one method with a ranking from another method , and comparing the different probe–ranking combinations . We expose the problematic nature of the current methodology ( Pitfall I ) , by showing that in some cases , a suitable probe which is given an intentionally bad ranking , or a random one , provides higher accuracy than another which is given its allegedly optimal ranking . 
We find that while the GAUSSIAN method generally provides higher accuracy , its probe ’ s selectivity ( Hewitt & Liang , 2019 ) is lower , implying that it performs the probing task by memorizing , which improves probing quality but not necessarily ranking quality . We further find that GAUSSIAN provides the best ranking for small sets of neurons , while LINEAR provides a better ranking for large sets . We then turn to analyzing which ranking selects neurons that are used by the model , by applying interventions on the representation : we modify subsets of neurons from each ranking and measure , using a novel metric we introduce , the effect on language modeling w.r.t. the property in question . We highlight the need to focus on used information ( Pitfall II ) : even though PROBELESS does not excel in the probing scenario , it selects neurons that are used by the model , more so than the two probing-based rankings . We find that there is an overlap between encoded information and used information , but they are not the same , and argue that more attention should be given to the latter . We primarily experiment with the M-BERT model ( Devlin et al. , 2019 ) on 9 languages and 13 morphological attributes , from the Universal Dependencies dataset ( Zeman et al. , 2020 ) . We also experiment with XLM-R ( Conneau et al. , 2020 ) , and find that most of our results are similar between the models , with a few differences which we discuss . Our experiments reveal the following insights : • We show the need to separate between probing quality and ranking quality , via cases where intentionally poor rankings provide better accuracy than good rankings , due to probing weaknesses . • We present a new ranking method that is free of any probes , and tends to prefer neurons that are being used by the model , more so than existing probing-based rankings . • We show that there is an overlap between encoded information and used information , but they are not the same .
2 NEURON RANKINGS AND DATA . We begin by introducing some notation . Denote the word representation space as $\mathcal{H} \subseteq \mathbb{R}^d$ and an auxiliary task as a function $F : \mathcal{H} \to Z$ , for some task labels $Z$ ( e.g. , part-of-speech labels ) . Given a word representation $h \in \mathcal{H}$ and some subset of neurons $S \subseteq \{ 1 , \ldots , d \}$ , we use $h_S$ to denote the subvector of $h$ in dimensions $S$ . For some auxiliary task $F$ and $k \in \mathbb{N}$ , we search for an optimal subset $S^*$ such that $| S^* | = k$ and $h_{S^*}$ contains more information regarding $F$ than any other subvector $h_{S'}$ , $| S' | = k$ . For the search task , we define a neuron-ranking as a permutation $\Pi^{(d)}$ on $\{ 1 , \ldots , d \}$ and consider the subset $\Pi^{(d)} [ k ] = \{ \Pi^{(d)}_1 , \ldots , \Pi^{(d)}_k \}$ . One may wish to find an optimal ranking $\Pi^{*(d)}$ such that $\forall k$ , $\Pi^{*(d)} [ k ]$ is the optimal subset with respect to $F$ . However , finding an optimal ranking , or even an optimal subset , is NP-hard ( Binshtok et al. , 2007 ) . Thus , we focus on several methods to produce rankings , which provide approximations to the problem , and compare them . 2.1 RANKINGS . The ranking methods we compare include two rankings obtained from prior probing-based neuron-ranking methods , and a novel ranking we propose , based on data statistics rather than probing . LINEAR The first method , henceforth LINEAR ( named linguistic correlation analysis in Dalvi et al . 2019 ) , trains a linear classifier on the representations to learn the task $F$ . Then , it uses the trained classifier ’ s weights to rank the neurons according to their importance for $F$ . Intuitively , neurons with a higher magnitude of absolute weights should be more important , or contain more relevant information , for solving the task . Dalvi et al . ( 2019 ) showed that their method identifies important neurons through probing and ablation studies , and found that while the information is distributed across neurons , the distribution is not uniform , meaning it is skewed towards the top-ranked neurons .
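As a sanity check on the LINEAR idea ( the paper uses a modified version of the algorithm , per its Appendix A.1 ; this sketch uses a plain softmax probe instead ) , here is a hypothetical numpy example : train a linear classifier on synthetic representations in which only two neurons carry label signal , then rank neurons by aggregate absolute weight . All data and constants are our own assumptions .

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, n_cls = 600, 16, 3

# Synthetic "representations": only dims 0 and 1 carry label information.
y = rng.integers(0, n_cls, size=n)
H = rng.normal(size=(n, d))
H[:, 0] += 2.0 * (y == 0)
H[:, 1] += 2.0 * (y == 1)

# Train a linear softmax probe by plain gradient descent on cross-entropy.
W = np.zeros((d, n_cls))
Y = np.eye(n_cls)[y]                       # one-hot targets
for _ in range(500):
    logits = H @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)      # softmax probabilities
    W -= 0.1 * H.T @ (P - Y) / n           # cross-entropy gradient step

# LINEAR-style ranking: neurons sorted by aggregate absolute probe weight.
scores = np.abs(W).sum(axis=1)
ranking = np.argsort(-scores)
print(ranking[:5])
assert {0, 1} <= set(ranking[:3])          # informative neurons rank near the top
```

The two signal-carrying neurons dominate the ranking , matching the intuition that high-magnitude probe weights flag informative dimensions .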
In this work , we use a slightly modified version of the suggested approach ( Appendix A.1 ) . GAUSSIAN The second method , henceforth GAUSSIAN ( Torroba Hennigen et al. , 2020 ) , trains a generative classifier on the task $F$ , based on the assumption that each dimension in $\{ 1 , \ldots , d \}$ is Gaussian-distributed . ( Footnote 2 : A small enhancement to the algorithm was presented in Durrani et al . ( 2020 ) . ) Then , it makes use of the decomposability of the multivariate Gaussian distribution to greedily select the most informative neuron , according to the classifier ’ s performance , at every iteration . This way we obtain a full neuron ranking after training only once , while applying this greedy method to LINEAR would require retraining the probe $d !$ times , which is clearly infeasible . Torroba Hennigen et al . ( 2020 ) found that most of the tasks can be solved using a low number of neurons , but also noted that their classifier is limited due to the Gaussian distribution assumption . PROBELESS The third neuron-ranking method we experiment with is based purely on the representations , with no probing involved , making it free of probing limitations ( Belinkov , 2021 ) that might affect ranking quality . For every attribute label $z \in Z$ , we calculate $q ( z )$ , the mean vector of all representations of words that possess the attribute and the value $z$ . Then , we calculate the element-wise difference between the mean vectors , $$r = \sum_{z , z' \in Z} | q ( z ) - q ( z' ) | , \quad r \in \mathbb{R}^d \quad ( 1 )$$ and obtain a ranking by arg-sorting $r$ , i.e. , the first neuron in the ranking corresponds to the highest value in $r$ . For binary-labeled attributes , this is simply the difference in means . In the general case , PROBELESS assigns high values to neurons that are most sensitive to a given attribute . 2.2 DATA AND MODELS . Throughout our work , we follow the experimental setting of Torroba Hennigen et al . ( 2020 ) : we map the UD treebanks ( Zeman et al. , 2020 ) to the UniMorph schema ( Kirov et al.
, 2018 ) using the mapping by McCarthy et al . ( 2018 ) . We select a subset of the languages used by Torroba Hennigen et al . ( 2020 ) : Arabic , Bulgarian , English , Finnish , French , Hindi , Russian , Spanish and Turkish , to keep linguistic diversity . The tasks we experiment with are predictions of morphological attributes from these languages . Full data details are provided in Torroba Hennigen et al . ( 2020 ) and further data preparation steps are detailed in Appendix A.2 . We process each sentence in pre-trained M-BERT and XLM-R ( unless stated otherwise , all results are with M-BERT ) , and take word representations from layers 2 , 7 and 12 of each model , to see if there are different patterns in the beginning , middle and end of the models . We end up with a total of 156 different configs ( language × attribute × layer ) to test for each model . For words that are split during tokenization , we define their final representation to be the average over their sub-token representations . Thus , each word has one representation for each layer , of dimension d = 768 . We do not mask any words throughout our work . | The paper revisits two methods, Linear and Gaussian, as described in the literature, to probe language models based on individual neurons. The paper suggests that these methods contain two limitations - 1. by ranking the neurons on a linguistic task, the methods conflate between ranking quality and the probe's classification quality; 2. the probing methods do not take into account whether the individual neurons are at all used by the model in the downstream linguistic tasks. Finally, the paper presents a new probing method, which does not rely on training a classifier, and shows that this simple method is able to discern among the neurons better than the preceding two methods in the literature. | SP:71274cc432cc95a2d80418365896096b875be5c1 |
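The PROBELESS ranking of Eq . ( 1 ) is simple enough to sketch directly : compute the mean representation per attribute value , sum element-wise absolute differences over label pairs , and arg-sort . The following numpy sketch uses synthetic representations of our own making ( the planted "neuron 4" signal is an illustrative assumption , not from the paper ) .

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 500, 32
labels = rng.integers(0, 3, size=n)           # attribute values z in Z
reps = rng.normal(size=(n, d))                # word representations
reps[:, 4] += 1.5 * labels                    # plant signal: neuron 4 tracks z

# Mean representation q(z) per attribute value.
q = np.stack([reps[labels == z].mean(axis=0) for z in range(3)])

# Eq. (1): r = sum over ordered label pairs of elementwise |q(z) - q(z')|.
r = np.zeros(d)
for z in range(3):
    for z2 in range(3):
        if z != z2:
            r += np.abs(q[z] - q[z2])

ranking = np.argsort(-r)                      # arg-sort, highest value first
print(ranking[:5])
assert ranking[0] == 4                        # the planted neuron ranks first
```

No probe is trained at any point ; the ranking falls out of simple data statistics , which is the method's stated appeal .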
On the Pitfalls of Analyzing Individual Neurons in Language Models | 1 INTRODUCTION . Many studies attempt to interpret language models by predicting different linguistic properties from word representations , an approach called probing classifiers ( Adi et al. , 2017 ; Conneau et al. , 2018 , inter alia ) . A growing body of work focuses on individual neurons within the representation , attempting to show in which neurons some information is encoded , and whether it is localized ( concentrated in a small set of neurons ) or dispersed . Such knowledge may allow us to control the model ’ s output ( Bau et al. , 2019 ) , to reduce the number of parameters in the model ( Voita et al. , 2019 ; Sajjad et al. , 2020 ) , and to gain a general scientific knowledge of the model . The common methodology is to train a probe to predict some linguistic attribute from a representation , and to use it , in different ways , to rank the neurons of the representation according to their importance for the attribute in question . The same probe is then used to predict the attribute , but using only the k-highest ranked neurons from the obtained ranking , and the probe ’ s accuracy in this scenario is considered as a measure of the ranking ’ s quality ( Dalvi et al. , 2019 ; Torroba Hennigen et al. , 2020 ; Durrani et al. , 2020 ) . We see this framework as exhibiting Pitfall I : Two distinct factors are conflated—the probe ’ s classification quality and the quality of the ranking it produces . A good classifier may provide good results even if its ranking is bad , and an optimal ranking may cause an average classifier to provide better results than a good classifier that is given a bad ranking . Another shortcoming of the current methodology , which we mark as Pitfall II , is the focus on encoded information , regardless of whether it is actually used by the model in its language modeling task . A few studies ( Elazar et al. , 2021 ; Feder et al. 
, 2020 ) have considered this question , and shown that encoded information is not necessarily being used for language modeling , but these do not look at individual neurons . We argue that in order to evaluate a ranking , one should also examine if , and how , the k-highest ranked neurons are used by the model for the attribute in question , meaning that modifying them would change the model ’ s prediction—but with respect to that attribute only . This would allow some control over the model ’ s output , and grant us parameter-level explanations of the model ’ s decisions . In this work , we analyze three neuron ranking methods . Since the ranking space is too large ( 768 ! in BERT ’ s case ) , these methods provide approximations to the problem and are non-optimal . Two of these methods—LINEAR ( Dalvi et al. , 2019 ) and GAUSSIAN ( Torroba Hennigen et al. , 2020 ) —rely on an external probe to obtain a ranking : the first makes use of the internal weights of a linear probe , while the second considers the performance of a decomposable generative probe . The third is a simple ranking method we propose , PROBELESS , which ranks neurons according to the difference in their values across labels , and thus can be derived directly from the data , with no probing involved . 1Our code is available at : < ANONYMIZED > We experiment with disentangling probe quality and ranking quality , by using a probe from one method with a ranking from another method , and comparing the different probe–ranking combinations . We expose the problematic nature of the current methodology ( Pitfall I ) , by showing that in some cases , a suitable probe which is given an intentionally bad ranking , or a random one , provides higher accuracy than another which is given its allegedly optimal ranking . 
We find that while the GAUSSIAN method generally provides higher accuracy , its probe ’ s selectivity ( Hewitt & Liang , 2019 ) is lower , implying that it performs the probing task by memorizing , which improves probing quality but not necessarily ranking quality . We further find that GAUSSIAN provides the best ranking for small sets of neurons , while LINEAR provides a better ranking for large sets . We then turn to analyzing which ranking selects neurons that are used by the model , by applying interventions on the representation : we modify subsets of neurons from each ranking and measure , using a novel metric we introduce , the effect on language modeling w.r.t. the property in question . We highlight the need to focus on used information ( Pitfall II ) : even though PROBELESS does not excel in the probing scenario , it selects neurons that are used by the model , more so than the two probing-based rankings . We find that there is an overlap between encoded information and used information , but they are not the same , and argue that more attention should be given to the latter . We primarily experiment with the M-BERT model ( Devlin et al. , 2019 ) on 9 languages and 13 morphological attributes , from the Universal Dependencies dataset ( Zeman et al. , 2020 ) . We also experiment with XLM-R ( Conneau et al. , 2020 ) , and find that most of our results are similar between the models , with a few differences which we discuss . Our experiments reveal the following insights : • We show the need to separate between probing quality and ranking quality , via cases where intentionally poor rankings provide better accuracy than good rankings , due to probing weaknesses . • We present a new ranking method that is free of any probes , and tends to prefer neurons that are being used by the model , more so than existing probing-based rankings . • We show that there is an overlap between encoded information and used information , but they are not the same .
2 NEURON RANKINGS AND DATA . We begin by introducing some notation . Denote the word representation space as $\mathcal{H} \subseteq \mathbb{R}^d$ and an auxiliary task as a function $F : \mathcal{H} \to Z$ , for some task labels $Z$ ( e.g. , part-of-speech labels ) . Given a word representation $h \in \mathcal{H}$ and some subset of neurons $S \subseteq \{ 1 , \ldots , d \}$ , we use $h_S$ to denote the subvector of $h$ in dimensions $S$ . For some auxiliary task $F$ and $k \in \mathbb{N}$ , we search for an optimal subset $S^*$ such that $| S^* | = k$ and $h_{S^*}$ contains more information regarding $F$ than any other subvector $h_{S'}$ , $| S' | = k$ . For the search task , we define a neuron-ranking as a permutation $\Pi^{(d)}$ on $\{ 1 , \ldots , d \}$ and consider the subset $\Pi^{(d)} [ k ] = \{ \Pi^{(d)}_1 , \ldots , \Pi^{(d)}_k \}$ . One may wish to find an optimal ranking $\Pi^{*(d)}$ such that $\forall k$ , $\Pi^{*(d)} [ k ]$ is the optimal subset with respect to $F$ . However , finding an optimal ranking , or even an optimal subset , is NP-hard ( Binshtok et al. , 2007 ) . Thus , we focus on several methods to produce rankings , which provide approximations to the problem , and compare them . 2.1 RANKINGS . The ranking methods we compare include two rankings obtained from prior probing-based neuron-ranking methods , and a novel ranking we propose , based on data statistics rather than probing . LINEAR The first method , henceforth LINEAR ( named linguistic correlation analysis in Dalvi et al . 2019 ) , trains a linear classifier on the representations to learn the task $F$ . Then , it uses the trained classifier ’ s weights to rank the neurons according to their importance for $F$ . Intuitively , neurons with a higher magnitude of absolute weights should be more important , or contain more relevant information , for solving the task . Dalvi et al . ( 2019 ) showed that their method identifies important neurons through probing and ablation studies , and found that while the information is distributed across neurons , the distribution is not uniform , meaning it is skewed towards the top-ranked neurons .
In this work , we use a slightly modified version of the suggested approach ( Appendix A.1 ) . GAUSSIAN The second method , henceforth GAUSSIAN ( Torroba Hennigen et al. , 2020 ) , trains a generative classifier on the task $F$ , based on the assumption that each dimension in $\{ 1 , \ldots , d \}$ is Gaussian-distributed . ( Footnote 2 : A small enhancement to the algorithm was presented in Durrani et al . ( 2020 ) . ) Then , it makes use of the decomposability of the multivariate Gaussian distribution to greedily select the most informative neuron , according to the classifier ’ s performance , at every iteration . This way we obtain a full neuron ranking after training only once , while applying this greedy method to LINEAR would require retraining the probe $d !$ times , which is clearly infeasible . Torroba Hennigen et al . ( 2020 ) found that most of the tasks can be solved using a low number of neurons , but also noted that their classifier is limited due to the Gaussian distribution assumption . PROBELESS The third neuron-ranking method we experiment with is based purely on the representations , with no probing involved , making it free of probing limitations ( Belinkov , 2021 ) that might affect ranking quality . For every attribute label $z \in Z$ , we calculate $q ( z )$ , the mean vector of all representations of words that possess the attribute and the value $z$ . Then , we calculate the element-wise difference between the mean vectors , $$r = \sum_{z , z' \in Z} | q ( z ) - q ( z' ) | , \quad r \in \mathbb{R}^d \quad ( 1 )$$ and obtain a ranking by arg-sorting $r$ , i.e. , the first neuron in the ranking corresponds to the highest value in $r$ . For binary-labeled attributes , this is simply the difference in means . In the general case , PROBELESS assigns high values to neurons that are most sensitive to a given attribute . 2.2 DATA AND MODELS . Throughout our work , we follow the experimental setting of Torroba Hennigen et al . ( 2020 ) : we map the UD treebanks ( Zeman et al. , 2020 ) to the UniMorph schema ( Kirov et al.
, 2018 ) using the mapping by McCarthy et al . ( 2018 ) . We select a subset of the languages used by Torroba Hennigen et al . ( 2020 ) : Arabic , Bulgarian , English , Finnish , French , Hindi , Russian , Spanish and Turkish , to keep linguistic diversity . The tasks we experiment with are predictions of morphological attributes from these languages . Full data details are provided in Torroba Hennigen et al . ( 2020 ) and further data preparation steps are detailed in Appendix A.2 . We process each sentence in pre-trained M-BERT and XLM-R ( unless stated otherwise , all results are with M-BERT ) , and take word representations from layers 2 , 7 and 12 of each model , to see if there are different patterns in the beginning , middle and end of the models . We end up with a total of 156 different configs ( language × attribute × layer ) to test for each model . For words that are split during tokenization , we define their final representation to be the average over their sub-token representations . Thus , each word has one representation for each layer , of dimension d = 768 . We do not mask any words throughout our work . | This paper responds to several recent works focused on identifying important individual neurons for particular classifying tasks. They consider 2 existing methods which rely on an external probe to rank the neurons in a network. They also introduce a method that does not rely on a probe, instead ranking neurons according to the difference between their values across labels. The primary focus in this paper is on two claimed flaws of the existing neuron ranking methods. The first is that the authors feel evaluation of the neuron rankings is unfair because a high quality probe can have higher performance on a worse ranking, simply because it is better able to take advantage of the data that is offered. 
The second is that not all rankings actually point to neurons that are specialized for a particular task or label, instead just indicating highly informative neurons. In analyzing these two flaws, the authors develop evaluation metrics for the rankings of these neurons. They consider a ranking better if the neurons encode information that is specific to the label while maintaining the particular lemma being encoded. They also consider the ranking to be better if removing the features indicated damages the performance of the language model. They test two ways of modifying the network to remove important neurons, first by ablating the neurons and then by learning a geometric translation that moves a word towards a different attribute label. They also consider the ranking better if a ranking from top to bottom outperforms a random ranking, which outperforms a reversed ranking. This seems like a fairly weak standard to hold a ranking to, but some rankings fail to adhere to it. | SP:71274cc432cc95a2d80418365896096b875be5c1 |
Rethinking Temperature in Graph Contrastive Learning | 1 INTRODUCTION . Self-supervised learning provides a good learning paradigm without high-cost label information for computer vision ( Chen et al . ( 2020 ) ; Chen & He ( 2021 ) ; Grill et al . ( 2020 ) ) , natural language processing ( Wu et al . ( 2019 ) ; Gao et al . ( 2021 ) ) , and speech recognition ( Ravanelli et al . ( 2020 ) ; Kharitonov et al . ( 2021 ) ) . Contrastive-based methods have a prominent place among numerous self-supervised learning methods ( Jaiswal et al . ( 2021 ) ; Le-Khac et al . ( 2020 ) ) . Recently , some researchers have explored graph contrastive frameworks ( consisting of data augmentation , network encoding , and contrastive learning ) for self-supervised learning on graphs ( Velickovic et al . ( 2019 ) ; Peng et al . ( 2020 ) ; Zhu et al . ( 2020 ; 2021c ) ; Thakoor et al . ( 2021 ) ) . On benchmark datasets , the state-of-the-art graph contrastive learning ( GCL ) algorithms have been verified to be competitive with or even superior to the supervised learning algorithms in downstream tasks , such as node classification and graph classification . Investigating the quality of contrastive representations is important for contrastive learning , but it is absent in the domain of graphs . Wang & Isola ( 2020 ) propose two novel loss functions , alignment loss $\mathcal{L}_{align}$ and uniformity loss $\mathcal{L}_{uniform}$ , for measuring the image representation quality . By optimizing the integration of $\mathcal{L}_{align}$ , $\mathcal{L}_{uniform}$ , and contrastive loss , the performance on the downstream task ( such as image classification on IMAGENET ( Tian et al . ( 2019 ) ) ) becomes better . However , $\mathcal{L}_{align}$ and $\mathcal{L}_{uniform}$ designed for independent and identically distributed ( IID ) data are not suitable for Non-IID data such as nodes in a graph . The InfoNCE ( here “ NCE ” denotes “ Noise-Contrastive Estimation ” ) contrastive loss ( Oord et al .
( 2018 ) ) , as a universal loss function of contrastive learning , generally samples negative examples uniformly from the whole training data . To optimize the InfoNCE loss , the strategy of hard negative sampling ( Zhuang et al . ( 2019 ) ; Robinson et al . ( 2021 ) ) collects highly similar but true negative examples in advance to construct contrastive pairs and is widely used in image processing and metric learning ( Duan et al . ( 2018 ) ; Robinson et al . ( 2021 ) ; Wang & Liu ( 2021 ) ) . This strategy enables the model to correct its mistakes quickly and improve the semantic closeness between representations ( Wang & Liu ( 2021 ) ) . Despite hard negative sampling ’ s practical importance in the image domain , Yang et al . ( 2020 ) have proved that it is sub-optimal in the graph domain . In this paper , we investigate how to generate high-quality node representations under the InfoNCE contrastive objective in graphs . We highlight the importance of embeddings ’ global uniformity and local separation for GCL . The temperature coefficient τ , as a key component of the InfoNCE loss , decides how the current learning state balances global uniformity and local separation . We illustrate this conclusion in Figure 1 and prove it by a gradient analysis in Section 3 . A dynamic setting of τ can generate different learning states for the same task and help the algorithm smoothly transition from one state to another . Therefore , inspired by the optimizer called Momentum ( Qian ( 1999 ) ) , we develop a Graph contrastive Learning algorithm with dynAmic Temperature Estimation ( GLATE ) . Compared with a fixed setting of τ , GLATE develops to its full potential on contrastive learning by further maximizing the self-supervised Information Bottleneck objective . In a nutshell , the main contributions of this work are as follows : • To evaluate the node representations in GCL , we propose two new metrics : global uniformity ( Eq . ( 9 ) ) and local separation ( Eq .
( 10 ) ) . We prove the importance of temperature τ to them by theoretical analysis ( Section 3.1 ) . • We develop a simple but very effective GCL algorithm GLATE ( Section 3.2 ) . In the training phase , GLATE dynamically adjusts τ ’ s value with its momentum to learn high-quality representations . With the help of the information bottleneck principle , we analyze the connections between GLATE and entropy information ( Section 3.3 ) . • The experimental results on node and graph classification tasks indicate that GLATE is superior to the state-of-the-art GCL algorithms ( Section 4 ) . For example , for the task of transductive node classification , GLATE outperforms baselines by 2.8 percent on average . 2 BACKGROUND AND RELATED WORK . Preliminaries of Graph Contrastive Learning . Let us review an overall graph contrastive learning process with contrastive pairs . We consider a graph $G = ( A , X )$ where $A \in \mathbb{R}^{N \times N}$ denotes the adjacency matrix , $X \in \mathbb{R}^{N \times M}$ denotes the attribute distribution matrix , $N$ is the number of nodes , and $M$ is the dimension of attributes . The raw graph $G$ is distorted via random data augmentation strategies ( such as edge removing and attribute masking ) to generate two new graphs $\tilde{G}_1 = ( A' , X' )$ as well as $\tilde{G}_2 = ( A'' , X'' )$ . Then , a message-passing-based graph neural network ( e.g. , the GCN proposed in Kipf & Welling ( 2016 ) ) is used as a shared encoder of $\tilde{G}_1$ and $\tilde{G}_2$ . It aims to learn node embeddings from $\tilde{G}_1$ and $\tilde{G}_2$ . The message passing form of a two-layer GCN is : $$Z' = \mathrm{softmax} \big( \hat{A}' \, \mathrm{ReLU} ( \hat{A}' X' W^{0} ) \, W^{1} \big) , \quad ( 1 )$$ where $Z'$ , $\hat{A}'$ , and $W^{0}$ ( $W^{1}$ ) are the node embedding matrix , the renormalized $A'$ , and the first-layer ( second-layer ) neural network parameters , respectively . Next , in each training epoch , the pre-defined contrastive objective encourages the GCN to minimize the distance between $Z'$ and $Z''$ ( another node embedding matrix learned from $\tilde{G}_2$ ) , and meanwhile to maximize the distance between different nodes ( Zhu et al .
( 2020 ) ; Zbontar et al . ( 2021b ) ; Bardes et al . ( 2021 ) ) . After n rounds of iteration , we use the trained GCN to infer each node ’ s embedding on G , which is used as the initial value of the node embedding in different downstream tasks , such as node classification . Graph Contrastive Objectives . To make positive examples closer and negative examples farther , different graph contrastive loss functions are defined , such as : 1 ) Jensen-Shannon Divergence ( JSD ) loss ( Velickovic et al . ( 2019 ) ; Hassani & Khasahmadi ( 2020 ) ) , 2 ) InfoNCE loss ( Zhu et al . ( 2020 ) ; You et al . ( 2020 ) ; Zhu et al . ( 2021c ) ; Pan & Kang ( 2021 ) ; Xu et al . ( 2021 ) ) , and 3 ) Triplet Margin ( TM ) loss ( Zhang et al . ( 2019 ) ) . According to a recent empirical study ( Zhu et al . ( 2021b ) ) , the models under the InfoNCE loss generally achieve better performance compared to those under the other graph contrastive objectives with the participation of negative examples . The important role of the InfoNCE loss in graph contrastive learning motivates our work to have deep insights into the series of GCL algorithms ( e.g . GRACE ( Zhu et al . ( 2020 ) ) ) derived from it . Understanding Contrastive Representation Learning . Another main area of concern is understanding contrastive representation learning and exploring what high-quality representations are . In the image domain , alignment , uniformity , and semantic closeness ( Wang & Isola ( 2020 ) ; Wang & Liu ( 2021 ) ) are thought to be three important metrics for measuring the quality of learned representations . Hard negative sampling ( Robinson et al . ( 2021 ) ; Wang & Liu ( 2021 ) ) has been proved to be a successful strategy to meet the needs of the above three metrics . However , recent studies ( Xia et al . ( 2021 ) ; Yang et al . ( 2020 ) ) have confirmed the existence of sampling bias when using hard negative sampling for a graph , which limits its application to the graph domain . 3 METHOD .
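The two-layer GCN encoder of Eq . ( 1 ) can be sketched numerically . The sketch below is ours : it builds a random symmetric graph , applies the renormalization trick $\hat{A} = D^{-1/2} ( A + I ) D^{-1/2}$ from Kipf & Welling ( 2016 ) , and runs the forward pass with random ( untrained ) weights ; all sizes are illustrative assumptions .

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, h, c = 6, 8, 16, 4   # nodes, input dim, hidden dim, output dim

# Random symmetric adjacency without self-loops.
A = (rng.uniform(size=(N, N)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T
X = rng.normal(size=(N, M))                     # node attribute matrix

# Renormalization trick: A_hat = D^{-1/2} (A + I) D^{-1/2}.
A_tilde = A + np.eye(N)
d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

W0 = rng.normal(scale=0.1, size=(M, h))
W1 = rng.normal(scale=0.1, size=(h, c))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Eq. (1): Z' = softmax(A_hat @ ReLU(A_hat @ X @ W0) @ W1)
Z = softmax(A_hat @ np.maximum(A_hat @ X @ W0, 0.0) @ W1)
assert Z.shape == (N, c) and np.allclose(Z.sum(axis=1), 1.0)
```

Each row of `Z` is a per-node distribution over the output dimensions ; in the contrastive setting the same encoder is applied to both augmented views .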
3.1 GRADIENT ANALYSIS. We start with a general contrastive loss function, i.e., the InfoNCE loss (Oord et al. (2018); Zhu et al. (2020)). In the graph domain, it can be formulated as
$$\mathcal{L}^{1,2}_i = -\log \frac{\exp\!\big(S(i',i'')/\tau\big)}{\exp\!\big(S(i',i'')/\tau\big) + \underbrace{\textstyle\sum_{k\neq i}\exp\!\big(S(i',k'')/\tau\big)}_{\text{inter-view term}} + \underbrace{\textstyle\sum_{k\neq i}\exp\!\big(S(i',k')/\tau\big)}_{\text{intra-view term}}}. \quad (2)$$
The vector similarity function is $S(i',k'') = f\big(\|h(Z'_{i,:})\|_2,\, \|h(Z''_{k,:})\|_2\big)$, where f(·) and h(·) are, respectively, cosine similarity and a non-linear projection transformation (a two-layer multilayer perceptron). As h(·) is not shared between the two channels, the whole network structure is technically asymmetric, which leads to asymmetry in the learned node representations. To eliminate the bias between the channels, G̃1 and G̃2 are used with their roles swapped, so the total loss function is $\mathcal{L} = \frac{1}{2N}\sum_{i=1}^{N}\big(\mathcal{L}^{1,2}_i + \mathcal{L}^{2,1}_i\big)$. To simplify the symbols in Eq. (2), we abbreviate $\exp\!\big(S(i',k'')/\tau\big)$ as $E_{i',k''}$ and derive the following gradient results:
$$\nabla_{S(i',i'')}\mathcal{L}^{1,2}_i = -\frac{\sum_{k\neq i}E_{i',k''} + \sum_{k\neq i}E_{i',k'}}{\tau\big(E_{i',i''} + \sum_{k\neq i}E_{i',k''} + \sum_{k\neq i}E_{i',k'}\big)}, \quad (3)$$
$$\nabla_{S(i',j')}\mathcal{L}^{1,2}_i = -\frac{E_{i',j'}}{\tau\big(E_{i',i''} + \sum_{k\neq i}E_{i',k''} + \sum_{k\neq i}E_{i',k'}\big)}, \quad (4)$$
$$\nabla_{S(i',j'')}\mathcal{L}^{1,2}_i = -\frac{E_{i',j''}}{\tau\big(E_{i',i''} + \sum_{k\neq i}E_{i',k''} + \sum_{k\neq i}E_{i',k'}\big)}. \quad (5)$$
These gradient results share the same denominator, which cancels when we take the ratio of any two of them. Lemma 1. Following Eq. (2), the ratio of the intra-view negative gradient to the positive gradient is
$$R(i',j';\, i',i'') = \frac{\nabla_{S(i',j')}\mathcal{L}^{1,2}_i}{\nabla_{S(i',i'')}\mathcal{L}^{1,2}_i} = \frac{E_{i',j'}}{\sum_{k\neq i}E_{i',k''} + \sum_{k\neq i}E_{i',k'}}. \quad (6)$$
The result in Eq.
(6) obeys a Boltzmann distribution, as shown in Figure 2(a, b) (we omit the specific form of R(i′, j″; i′, i″) because it has the same property as R(i′, j′; i′, i″)). It means that if τ is low (τ ∈ (0, 0.5)), the optimizer will penalize the hard negative examples more than the easy ones; otherwise, it will make nearly no difference between them. So a high τ is good at distinguishing similar and dissimilar embeddings, while a low τ is good at partitioning highly similar embeddings to ensure local separation. It is also worth noting the ratio between the negative gradients of the different views, which is defined as
$$R(i',j';\, i',j'') = \frac{\nabla_{S(i',j')}\mathcal{L}^{1,2}_i}{\nabla_{S(i',j'')}\mathcal{L}^{1,2}_i} = \frac{E_{i',j'}}{E_{i',j''}} = \exp\!\Big(\tfrac{1}{\tau}\big(S(i',j') - S(i',j'')\big)\Big). \quad (7)$$
In Eq. (7), τ still plays an important role (we omit the specific form of R(i′, j″; i′, j′) because it has the same property as R(i′, j′; i′, j″)). We have the following proposition: Proposition 1 (Different views' imbalanced update). The temperature τ is crucial for balancing the ratio between the negative gradients of the different views. If τ → ∞, then $\nabla_{S(i',j')}\mathcal{L}^{1,2}_i \to \nabla_{S(i',j'')}\mathcal{L}^{1,2}_i$, i.e., the inter-view and intra-view negative examples keep their gradient updates at the same speed. Otherwise, there is an imbalance between $\nabla_{S(i',j')}\mathcal{L}^{1,2}_i$ and $\nabla_{S(i',j'')}\mathcal{L}^{1,2}_i$, and the imbalance deepens as τ decreases. We refer the reader to Appendix A.1 for the detailed proof. We illustrate the phenomenon of the imbalanced gradient update in Figure 2(c) and (d). Based on the above analysis, we conclude that 1) a high τ helps the embeddings' global uniformity and a low τ helps the embeddings' local separation; 2) a high τ can alleviate the negative effect of the imbalanced update. Therefore, dynamic estimation of τ with a relatively high initial value is recommended.
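As a concrete sketch of Eq. (2) and Eq. (7) (illustrative only: the similarity values below are invented, and a real implementation would compute S from projected node embeddings rather than pass scalars), the per-anchor loss and the temperature-controlled ratio between the two views' negative gradients can be written in plain Python:

```python
import math

def info_nce_i(tau, s_pos, s_inter, s_intra):
    """InfoNCE loss for one anchor node i, Eq. (2).

    s_pos   : S(i', i''), similarity to the positive pair
    s_inter : list of S(i', k''), k != i  (inter-view negatives)
    s_intra : list of S(i', k'),  k != i  (intra-view negatives)
    """
    e = lambda s: math.exp(s / tau)
    denom = e(s_pos) + sum(map(e, s_inter)) + sum(map(e, s_intra))
    return -math.log(e(s_pos) / denom)

def view_grad_ratio(tau, s_intra_j, s_inter_j):
    """R(i', j'; i', j'') of Eq. (7): the shared denominator of
    Eqs. (4)-(5) cancels, leaving exp((S(i',j') - S(i',j'')) / tau)."""
    return math.exp((s_intra_j - s_inter_j) / tau)

# Toy cosine similarities in [-1, 1].
loss = info_nce_i(0.5, 0.9, [0.1, 0.3], [0.2, 0.4])

# Proposition 1: as tau grows, the two views' negative gradients
# equalize (ratio -> 1); as tau shrinks, the imbalance deepens.
ratio_hot = view_grad_ratio(100.0, 0.8, 0.5)   # exp(0.003), close to 1
ratio_cold = view_grad_ratio(0.1, 0.8, 0.5)    # exp(3), about 20
```

Increasing the positive similarity `s_pos` lowers the loss, and sweeping `tau` in `view_grad_ratio` reproduces the imbalance trend that Proposition 1 describes.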
| This paper investigates a crucial problem, i.e., how to generate good node representations under the InfoNCE contrastive loss in graphs. By studying the gradients of the loss function, it finds that dynamic temperature adjustment is beneficial for learning uniform node representations. The proposed method GLATE is simple but effective. The analysis of the connection between GLATE and the information bottleneck principle helps us understand why GLATE is so effective. The presentation of this paper is clear and well-organized. For example, Figure 1 makes it easy to understand the problem caused by a static temperature and the significance of this work. The experiments are laid out in detail. The results on the tasks of transductive and inductive learning show GLATE's advantage over the SOTA graph contrastive learning algorithms. | SP:b3951670f71d8af5e59144d68c7d50405a001227
Rethinking Temperature in Graph Contrastive Learning | 1 INTRODUCTION. Self-supervised learning provides a good learning paradigm that does not require high-cost label information for computer vision (Chen et al. (2020); Chen & He (2021); Grill et al. (2020)), natural language processing (Wu et al. (2019); Gao et al. (2021)), and speech recognition (Ravanelli et al. (2020); Kharitonov et al. (2021)). Contrastive-based methods have a prominent place among the numerous self-supervised learning methods (Jaiswal et al. (2021); Le-Khac et al. (2020)). Recently, some researchers have explored graph contrastive frameworks (consisting of data augmentation, network encoding, and contrastive learning) for self-supervised learning on graphs (Velickovic et al. (2019); Peng et al. (2020); Zhu et al. (2020; 2021c); Thakoor et al. (2021)). On benchmark datasets, the state-of-the-art graph contrastive learning (GCL) algorithms have been verified to be competitive with or even superior to supervised learning algorithms in downstream tasks, such as node classification and graph classification. Investigating the quality of contrastive representations is important for contrastive learning, but such investigation is absent in the graph domain. Wang & Isola (2020) propose two novel loss functions, an alignment loss L_align and a uniformity loss L_uniform, for measuring image representation quality. By optimizing the combination of L_align, L_uniform, and the contrastive loss, performance on the downstream task (such as image classification on IMAGENET (Tian et al. (2019))) improves. However, L_align and L_uniform, designed for independent and identically distributed (IID) data, are not suitable for non-IID data such as the nodes of a graph. The InfoNCE (here "NCE" denotes "Noise-Contrastive Estimation") contrastive loss (Oord et al.
(2018)), as a universal loss function for contrastive learning, generally samples negative examples uniformly from the whole training data. To optimize the InfoNCE loss, the strategy of hard negative sampling (Zhuang et al. (2019); Robinson et al. (2021)) collects highly similar but true negative examples in advance to construct contrastive pairs, and it is widely used in image processing and metric learning (Duan et al. (2018); Robinson et al. (2021); Wang & Liu (2021)). This strategy enables the model to correct its mistakes quickly and improves the semantic closeness between representations (Wang & Liu (2021)). Despite hard negative sampling's practical importance in the image domain, Yang et al. (2020) have proved that it is sub-optimal in the graph domain. In this paper, we investigate how to generate high-quality node representations under the InfoNCE contrastive objective in graphs. We highlight the importance of the embeddings' global uniformity and local separation for GCL. The temperature coefficient τ, a key component of the InfoNCE loss, decides how much the current learning state focuses on global uniformity versus local separation. We illustrate this conclusion in Figure 1 and prove it by a gradient analysis in Section 3. A dynamic setting of τ can generate different learning states for the same task and help the algorithm transition smoothly from one state to another. Therefore, inspired by the Momentum optimizer (Qian (1999)), we develop a Graph contrastive Learning algorithm with dynAmic Temperature Estimation (GLATE). Compared with a fixed setting of τ, GLATE develops its full potential in contrastive learning by further maximizing the self-supervised Information Bottleneck objective. In a nutshell, the main contributions of this work are as follows: • To evaluate node representations in GCL, we propose two new metrics: global uniformity (Eq. (9)) and local separation (Eq.
| The authors explore the role of the temperature in the loss function for graph contrastive learning. They argue that global uniformity and local separation are both necessary for learning quality, and that both can be controlled by the temperature. Thus, they develop a simple but effective algorithm, GLATE, to dynamically adjust the temperature value in the training phase. Experiments are also conducted to demonstrate the effectiveness of the method. | SP:b3951670f71d8af5e59144d68c7d50405a001227
Resolving label uncertainty with implicit generative models | 1 INTRODUCTION. We consider the problem of joint inference of latent label variables ℓ_i in a collection of data samples, indexed by i, consisting of observations (features) x_i and corresponding prior beliefs about their latent label variables p_i(ℓ). Two illustrative examples are shown in Fig. 1. In the first example, the x_i are 784-dimensional vectors representing 28×28 MNIST digits. We aim to infer the digit classes ℓ_i ∈ {0, 1, ..., 9} for all images in the given collection based on data in which we are given just one negative label per sample, i.e., the prior beliefs p_i(ℓ) (top row) are uniform over all classes except for one incorrect class. The procedure described in this paper produces inferred distributions over labels (bottom row) that are usually peaky and place the maximum at the correct digit 97% of the time (Fig. 3). In the second example, the observations x_i are image patches centered around each pixel coordinate i in a Surrealist painting, with patch size (11×11) equal to the receptive field of the 5-layer convolutional neural network used in our inference procedure. The prior beliefs p_i(ℓ) are distributions over 3 classes (sky, boat, water) depending on the coordinate i. The joint inference of all labels in this image yields a feasible segmentation despite the high similarity in colors and textures (see §E.4). In both examples, the inference technique needs to estimate statistical links between the observations x_i and the corresponding latents ℓ_i that are both highly confident (i.e., lead to low entropy in the posterior distributions) and explain the varying prior beliefs, which typically have low confidence (high entropy in the prior distributions).
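To make the negative-label setup above concrete, here is a small illustrative helper (hypothetical, not from the paper's code) that builds such a prior and confirms it stays high-entropy:

```python
import math

def negative_label_prior(num_classes, negative_class):
    """Prior belief p_i(l): uniform over all classes except one class
    known to be wrong, which gets zero mass."""
    p = [1.0 / (num_classes - 1)] * num_classes
    p[negative_class] = 0.0
    return p

def entropy(p):
    """Shannon entropy in nats, skipping zero-mass entries."""
    return -sum(v * math.log(v) for v in p if v > 0)

prior = negative_label_prior(10, negative_class=3)
# entropy(prior) = log 9, close to the maximum log 10: the prior is
# highly uncertain, while the inferred posteriors should be peaky.
```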
This problem of training on weak beliefs is, in some form, often encountered in machine learning, e.g., weak supervision and semi-supervised learning, domain transfer, and integration of modalities – settings where coarse, partial, or inexact sources of data can provide rich information about the state of a prediction instance, though not always a "ground truth" label for each instance. In many such settings, fusing the weak input data into a probability distribution over classes is a more natural alternative to transforming the weak input into hard labels [26]. However, because supervised models target the distribution over labels, training machine learning models with supervision from probabilistic prior beliefs results in uncertain predictions. Most approaches to resolve these uncertainties involve iterative generation of hard pseudo labels [56], or loss functions promoting low entropy of predictions [35; 55; 59; 54]. Typically, these approaches are application-specific [10; 57; 1; 22]. Further discussion of related work is provided in §B and §C. [Figure 1. Above: inference of latent MNIST digit classes with negative-label supervision using a small CNN trained on the RQ criterion (§2.1). Below: joint inference of latent pixel classes in an image. (a) Le séducteur, René Magritte; (b) boat prior, anonymous artist; (c) inferred segmentation. The prior beliefs p_i(ℓ) over three classes – sky (red), boat (green), water (blue) – are manually set (b). A small CNN using no data except (a, b) infers the posterior classes (c).] Our key modeling insight is to associate the output distribution of a discriminative model, a feedforward neural network q, with an implicit generative model (§2.1) of features and to consider the given prior belief as part of that model.
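This association can be previewed with a toy snippet (made-up numbers; the construction is formalized in §2.1): column-normalizing the network's output table q yields forward probabilities over instances per label, which combine with the prior into a per-instance posterior.

```python
def implicit_model(q, prior):
    """Given q[i][l] = q(l | x_i) and prior[i][l] = p_i(l), build the
    implicit forward model p(x_i | l) = q_i(l) / sum_j q_j(l) and the
    posterior r_i(l) proportional to p_i(l) * p(x_i | l)."""
    n, k = len(q), len(q[0])
    col = [sum(q[j][l] for j in range(n)) for l in range(k)]
    fwd = [[q[i][l] / col[l] for l in range(k)] for i in range(n)]
    post = []
    for i in range(n):
        un = [prior[i][l] * fwd[i][l] for l in range(k)]
        z = sum(un)
        post.append([u / z for u in un])
    return fwd, post

q = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]
prior = [[0.4, 0.3, 0.3], [0.5, 0.3, 0.2], [0.2, 0.2, 0.6]]
fwd, post = implicit_model(q, prior)
# Each column of fwd sums to 1 (a distribution over instances per
# label); each row of post sums to 1 (a label distribution per
# instance), so confident q entries sharpen the posterior.
```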
We show that, without estimating the full generative model, it is possible to learn a network that performs inference in it and to reap the benefits of generative modeling, including high certainty in the posterior under soft priors and rich opportunities to model structure in the prior beliefs. We validate the effectiveness of our approach with experiments (§4, §E) that highlight: prior beliefs as a natural way to fuse weak inputs, graceful degradation of performance with increasingly noisy or incomplete inputs, and comparison of our implicit generative model with explicitly generative modeling approaches. 2 BACKGROUND AND APPROACH. Supervised learning on prior beliefs. Supervised learning models, including many neural nets, are typically trained to minimize the cross-entropy $-\sum_i \sum_\ell p^d_i(\ell)\,\log q_i(\ell)$ between the data distribution over labels $p^d_i(\ell)$ and the distribution $q_i(\ell) = q(\ell \mid x_i; \theta)$ output by a predictor q using the data features $x_i$. This is equivalent to minimizing the KL divergence $\sum_i \mathrm{KL}(p^d_i \,\|\, q_i)$, which is not well suited to training with uncertain labels defined by prior beliefs. If we were to set $p^d_i(\ell)$ equal to a much softer prior over latent labels, $p_i(\ell)$, the minimum would be attained when the two distributions $p_i(\ell)$ and $q_i(\ell)$ are equal: when the prior belief is soft, the trained model q will also be highly uncertain. Turning soft labels into hard training targets (by $p^d_i(\ell) = \mathbb{1}[\ell = \arg\max_\ell p_i(\ell)]$, or by sampling) introduces the opposite bias. Now the cost would indeed be minimized if the predictions had zero entropy, but learning such a prediction function faces difficulty with overconfident labels, which are often wrong, as well as the possibility that certain labels often receive substantial weight in the prior but never the maximum. These issues are illustrated in Figure D.3. Generative modeling resolves the prior's uncertainty.
The approach to classification problems through generative modeling, instead of targeting the conditional probability of the latents given the data features, assumes that there is a forward (generative) distribution $p(x_i \mid \ell)$ and optimizes the log-likelihood of the observed features, $\sum_i \log p(x_i) = \sum_i \log \sum_\ell p(x_i \mid \ell)\, p_i(\ell)$, with respect to the parameters of the forward distribution. The posterior under the model, $p(\ell \mid x_i) \propto p(x_i \mid \ell)\, p_i(\ell)$, is then used to infer latent labels for individual data points [44]. As a result, the generative modeling approach does not suffer from uncertainty in the posterior distribution over latents given the input features, even when the priors $p_i(\ell)$ are soft.¹ Further discussion relating our method to existing generative modeling approaches is given in §C. However, expressive generative models are typically harder and more expensive to train than supervised neural networks, as they often require sampling (e.g., sampling of the posterior in VAEs [21] and sampling of the generative model in GANs [8]). Furthermore, the modeling often requires a doubling of parameters to express both the forward (generative) model and the reverse (posterior) model. And, in the case of GANs, the learning algorithms may not even cover all modes in the data, which would prevent joint inference for all data points. 2.1 OPTIMIZATION UNDER IMPLICIT GENERATIVE MODELS. Suppose that there is a generative model $p(x \mid \ell)$ of observed features conditioned on latent labels. Optimization of the log-likelihood of the observed features, $\sum_i \log p(x_i) = \sum_i \log \sum_\ell p(x_i \mid \ell)\, p_i(\ell)$, can be achieved by introducing a variational distribution $q_i(\ell)$ over the latent variable for each instance $x_i$ and then minimizing the variational free energy, which we review next. Recall that the free energy, also known as the negative evidence lower bound (ELBO), is defined as
$$F = -\sum_i \sum_\ell q_i(\ell)\,\log \frac{p(x_i \mid \ell)\, p_i(\ell)}{q_i(\ell)} = \sum_i \Big(\mathrm{KL}(q_i \,\|\, r_i) - \log p(x_i)\Big), \quad (1)$$
where $r_i(\ell) = \frac{p(x_i \mid \ell)\, p_i(\ell)}{\sum_\ell p(x_i \mid \ell)\, p_i(\ell)}$ is the posterior distribution under the generative model. As the KL divergence is always non-negative, the free energy is minimized when $q_i = r_i$, in which case the free energy equals the negative log-likelihood of the data. Free energy minimization is used in various approaches to latent variable modeling. The expectation-maximization (EM) algorithm [6] minimizes (1) by iteratively setting $q_i$ equal to the posteriors $r_i$ and updating the parameters of the forward distributions $p(x_i \mid \ell)$ while keeping the $q_i$ fixed, with guaranteed convergence to a local minimum. In applications of EM predating deep generative models, the $q_i$ distributions are not given by a predictor taking $x_i$ as input; they are auxiliary distributions used only as part of the learning procedure and replaced at each EM iteration. Their dependence on $x_i$ is implicit in the energy minimization. In variational auto-encoders (VAEs) [21], both the generative model $p(x \mid h)$ and the posterior $q(h \mid x)$ are represented as neural networks. Here p is a deep nonlinear generative model with hidden latents h (not necessarily corresponding to labels), and evaluation of the full posterior is intractable. Instead, q is learned as a neural network $q(h \mid x_i; \theta)$ jointly with p. VAE training involves sampling from $q_i(h) = q(h \mid x_i; \theta)$ during learning to improve the agreement between the forward (generative) model p (and its true posterior $r_i(h)$) and the reverse (posterior) neural network model q. Implicit generative models. To derive our training objectives, we will assume the existence of the generative model $p(x \mid \ell)$ without ever parametrizing it or sampling from it. Instead, we parametrize only the reverse (variational posterior) model $q_i(\ell) = q(\ell \mid x_i; \theta)$ in the form of a neural net taking $x_i$ as input.
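The two forms of the free energy in Eq. (1) can be checked numerically on made-up values (a sanity check, not the paper's implementation): the direct expectation form and the KL-minus-evidence form agree term for term, and the minimum is attained at the exact posterior.

```python
import math

def kl(a, b):
    """KL divergence between two discrete distributions, in nats."""
    return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

def free_energy_direct(q, prior, lik):
    """F = -sum_i sum_l q_i(l) log( p(x_i|l) p_i(l) / q_i(l) )."""
    F = 0.0
    for qi, pi, li in zip(q, prior, lik):
        F -= sum(ql * math.log(ll * pl / ql)
                 for ql, pl, ll in zip(qi, pi, li) if ql > 0)
    return F

def free_energy_kl(q, prior, lik):
    """Equivalent form: F = sum_i ( KL(q_i || r_i) - log p(x_i) )."""
    F = 0.0
    for qi, pi, li in zip(q, prior, lik):
        px = sum(ll * pl for ll, pl in zip(li, pi))    # evidence p(x_i)
        ri = [ll * pl / px for ll, pl in zip(li, pi)]  # posterior r_i
        F += kl(qi, ri) - math.log(px)
    return F

# Toy values: lik[i][l] plays the role of p(x_i | l).
prior = [[0.5, 0.3, 0.2], [0.2, 0.2, 0.6]]
lik = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]
q = [[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]]
# Both forms agree; setting q_i = r_i attains F = -sum_i log p(x_i).
```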
The trained model q is the direct output of our algorithms. Given the variational distributions $q_i$, we can minimize (1) with respect to the forward probabilities $p(x_i \mid \ell)$ for all $x_i$ and $\ell$, subject to $\sum_i p(x_i \mid \ell) = 1$ for all $\ell$. The optimum is achieved by
$$p(x_i \mid \ell) = \frac{q_i(\ell)}{\sum_j q_j(\ell)}. \quad (2)$$
We refer to this as the implicit generative model associated with q. Much as the posterior is implicit in the free energy minimization of the EM algorithm, here instead the forward model is just a matrix of numbers $p_{i,\ell}$, implicit to the free energy minimization – the opposite, in some sense, of what is done in EM optimization. The link between x and ℓ is left entirely to the neural network q to capture explicitly. The probability mass of the implicit generative model is not uniformly distributed; rather, the data points for which the variational posterior $q_i(\ell)$ is more certain are considered more likely under that latent ℓ, unless ℓ corresponds to a popular class to which many other data points are also assigned. ¹For intuition, consider a model where ℓ is the mixture index in a mixture of high-dimensional Gaussians. If the observations $x_i$ naturally cluster into several well-defined clusters, then the prior $p_i(\ell)$ may be flat, but the posterior distributions under the model assign data points to clusters with high certainty. The posterior under the implied generative model is
$$r_i(\ell) \propto p_i(\ell)\,\frac{q_i(\ell)}{\sum_j q_j(\ell)}. \quad (3)$$
Note that we can compute $r_i(\ell)$ by multiplying the re-normalized model outputs with the prior at each instance, so that for each instance i we have two outputs: $q_i$ and $r_i$. We propose two methods of optimizing the free energy with respect to the parameters θ by gradient steps. Each method iterates the following: (1) Calculate the posterior distributions $r_i$ in terms of $q_i$ as in (3). (2) Update the parameters of q with a gradient step: • Option QR: $\theta \leftarrow \theta - \eta\,\nabla_\theta \sum_i \mathrm{KL}(q_i \,\|\, r_i)$.
• Option RQ: $\theta \leftarrow \theta - \eta\,\nabla_\theta \sum_i \mathrm{KL}(r_i \,\|\, q_i)$. Gradients of the model parameters are propagated into the expression for $r_i$ through $q_i$ (see Fig. 2). Both losses have a stable point when $q_i = r_i$. A discussion of the relative benefits and limitations of the QR and RQ loss formulations is given in §A, along with practical considerations for implementing these methods. Option QR uses the KL divergence in the direction in which it appears in (1) and thus guarantees continual improvement in the free energy and convergence to a local minimum (except for the effects of stochasticity in minibatch sampling). Substituting $r_i$ from (3), the free energy (1) becomes
$$F = \sum_{i,\ell} q_i(\ell)\,\log\Big(\sum_j q_j(\ell)\Big) - \sum_{i,\ell} q_i(\ell)\,\log p_i(\ell). \quad (4)$$
This criterion does not encourage entropy of the individual $q_i$ distributions, but of their average. The second term alone would be minimized if the q network could put all the mass on $\arg\max_\ell p_i(\ell)$ for each data point, but the first term promotes diversity in the assignment of latents (labels) ℓ across the entire dataset. Thus a network q can optimize (4) if it makes different confident predictions for different data points. To illustrate this, consider the case when $p_i(\ell) = p(\ell)$, i.e., all data points have the same prior. Then (4) is minimized when $\frac{1}{N}\sum_i q_i(\ell) = p(\ell)$, and this can be achieved when the network learns a constant distribution $q(\ell \mid x_i; \theta) = p(\ell)$. But the free energy is also minimized if the network predicts only a single label for each data point with high certainty, varying its predictions so that the counts of label predictions match the prior. As demonstrated in Fig. 1 and in our experiments, avoiding degenerate solutions is not hard. We attribute this to two factors. First, the situations of interest typically involve uncertain but varying distributions $p_i(\ell)$, which break symmetries that could lead to ignoring the data features $x_i$.
Second, the neural networks used as posterior models q come with their own parametrization constraints (e.g., a multi-layer convolutional net with a certain number of filters in each layer) and with the inductive biases that arise from gradient-descent learning in nonlinear, multilayer networks. In fact, as discussed in §3 and §E.1, even unsupervised clustering is possible with suitably chosen symmetry-breaking priors, allowing this approach to be used for self-supervised training. See also §C for more on relationships with other approaches. | The paper presents a weakly supervised learning strategy, which exploits instance labels in the form of label prior distributions for training classifiers. The main idea of this work is to build an implicit generative model from a probabilistic label-prediction network, which is then trained with an ELBO loss. The paper applies this framework to several types of weakly supervised learning scenarios, including classification with negative labels or labels from ranking, semantic segmentation and text classification from coarse labels, and other tasks with structured label priors. The experimental evaluation validates the efficacy of the proposed learning strategy on the aforementioned tasks with comparisons to prior works. | SP:aa1054c1782ad4562ed402bb56f70f9287d197ce
Resolving label uncertainty with implicit generative models | 1 INTRODUCTION . We consider the problem of joint inference of latent label variables ℓ8 in a collection of data samples indexed by 8 consisting of observations ( features ) G8 and corresponding prior beliefs about their latent label variables ? 8 ( ℓ ) . Two illustrative examples are shown in Fig . 1 . In the first example , the G8 are 784-dimensional vectors representing 28×28 MNIST digits . We aim to infer the digit classes ℓ8 ∈ { 0 , 1 , ... , 9 } for all images in the given collection based on data in which we are given just one negative label per sample , i.e. , the prior beliefs ? 8 ( ℓ ) ( top row ) are uniform over all classes except for one incorrect class . The procedure described in this paper produces inferred distributions over labels ( bottom row ) that are usually peaky and place the maximum at the correct digit 97 % of the time ( Fig . 3 ) . In the second example , the observations G8 are image patches centered around each pixel coordinate 8 in a Surrealist painting , with patch size ( 11×11 ) equal to the receptive field of a 5-layer convolutional neural network used in our inference procedure . The prior beliefs ? 8 ( ℓ ) are distributions over 3 classes ( sky , boat , water ) depending on the coordinate 8 . The joint inference of all labels in this image yields a feasible segmentation despite the high similarity in colors and textures ( see §E.4 ) . In both examples , the inference technique needs to estimate statistical links between observations G8 and corresponding latents ℓ8 that would both be highly confident ( i.e. , lead to low entropy in the posterior distributions ) and explain the varying prior beliefs , which typically have low confidence ( high entropy in the prior distributions ) . 
This problem of training on weak beliefs, in some form, is often encountered in machine learning, e.g., weak supervision and semi-supervised learning, domain transfer, and integration of modalities – settings where coarse, partial, or inexact sources of data can provide rich information about the state of a prediction instance, though not always a "ground truth" label for each instance. In many such settings, fusing the weak input data into a probability distribution over classes is a more natural alternative to transforming the weak input into hard labels [26]. However, because supervised models target the distribution over labels, training machine learning models with supervision from probabilistic prior beliefs results in uncertain predictions. Most approaches to resolve these uncertainties involve iterative generation of hard pseudo labels [56], or loss functions promoting low entropy of predictions [35; 55; 59; 54]. Typically, these approaches are application-specific [10; 57; 1; 22]. Further discussion of related work is provided in §B and §C. (Figure 1. Above: Inference of latent MNIST digit classes with negative label supervision using a small CNN trained on the RQ criterion (§2.1). Below: Joint inference of latent pixel classes in an image (a) "Le séducteur", René Magritte. The prior beliefs p_i(ℓ) over three classes – sky (red), boat (green), water (blue) – are manually set (b) "Boat Prior", anonymous artist. A small CNN using no data except (a, b) infers the posterior classes (c) "Inferred segmentation".) Our key modeling insight is to associate the output distribution of a discriminative model, a feedforward neural network q, with an implicit generative model (§2.1) of features and to consider the given prior belief as part of that model.
We show that without estimating the full generative model, it is possible to learn the network that performs inference in it and reap the benefits of generative modeling, including high certainty in the posterior under soft priors and rich opportunities to model structure in the prior beliefs. We validate the effectiveness of our approach with experiments (§4, §E) that highlight: prior beliefs as a natural way to fuse weak inputs, graceful degradation of performance with increasingly noisy or incomplete inputs, and comparison of our implicit generative model with explicitly generative modeling approaches. 2 BACKGROUND AND APPROACH . Supervised learning on prior beliefs. Supervised learning models, including many neural nets, are typically trained to minimize the cross-entropy −Σ_i Σ_ℓ p_i^d(ℓ) log q_i(ℓ) between the data distribution over labels p_i^d(ℓ) and the distribution q_i(ℓ) = q(ℓ|x_i; θ) output by a predictor q using data features x_i. This is equivalent to minimizing the KL divergence Σ_i KL(p_i^d ‖ q_i), which is not well-suited to training with uncertain labels defined by prior beliefs. If we were to set p_i^d(ℓ) to equal a much softer prior over latent labels, p_i(ℓ), the minimum would be attained when the two distributions p_i(ℓ) and q_i(ℓ) are equal: when the prior belief is soft, the trained model q will also be highly uncertain. Turning soft labels into hard training targets (setting p_i^d(ℓ) = 1[ℓ = argmax_ℓ' p_i(ℓ')], or by sampling) introduces the opposite bias. Now, the cost would indeed be minimized if the predictions had zero entropy, but learning such a prediction function faces difficulty with overconfident labels which are often wrong, as well as the possibility that certain labels often receive substantial weight in the prior, but never the maximum. These issues are illustrated in Figure D.3. Generative modeling resolves the prior's uncertainty.
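The argument that cross-entropy training against a soft prior reproduces the prior's uncertainty follows from Gibbs' inequality: −Σ_ℓ p(ℓ) log q(ℓ) is minimized exactly at q = p. A minimal numeric check (the 4-class prior below is an assumed toy example, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# A soft prior over 4 classes and the cross-entropy -sum_l p(l) log q(l).
p = np.array([0.4, 0.3, 0.2, 0.1])

def cross_entropy(p, q):
    return -(p * np.log(q)).sum()

# Gibbs' inequality: the cross-entropy is minimized exactly at q = p, so a
# model trained on the soft prior reproduces its uncertainty rather than
# resolving it.
best = cross_entropy(p, p)
for _ in range(100):
    q = rng.random(4)
    q /= q.sum()
    assert cross_entropy(p, q) >= best
```

Any other distribution q pays a KL(p ‖ q) penalty on top of the entropy of p, so the trained predictor's optimum is the soft prior itself.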
The approach to classification problems through generative modeling, instead of targeting the conditional probability of latents given the data features, assumes that there is a forward (generative) distribution p(x_i|ℓ) and optimizes the log-likelihood of the observed features, Σ_i log p(x_i) = Σ_i log Σ_ℓ p(x_i|ℓ) p_i(ℓ), with respect to the parameters of the forward distribution. The posterior under the model, p(ℓ|x_i) ∝ p(x_i|ℓ) p_i(ℓ), is then used to infer latent labels for individual data points [44]. As a result, the generative modeling approach does not suffer from uncertainty in the posterior distribution over latents given the input features, even when the priors p_i(ℓ) are soft.¹ Further discussion relating our method with existing generative modeling approaches is given in §C. However, expressive generative models are typically harder and more expensive to train compared to supervised training of neural networks, as they often require sampling (e.g., sampling of the posterior in VAEs [21] and sampling of the generative model in GANs [8]). Furthermore, the modeling often requires doubling of parameters to express both the forward (generative) model and the reverse (posterior) model. And, in the case of GANs, the learning algorithms may not even cover all modes in the data, which would prevent joint inference for all data points. 2.1 OPTIMIZATION UNDER IMPLICIT GENERATIVE MODELS . Suppose that there is a generative model p(x|ℓ) of observed features conditioned on latent labels. Optimization of the log-likelihood of observed features, Σ_i log p(x_i) = Σ_i log Σ_ℓ p(x_i|ℓ) p_i(ℓ), can be achieved by introducing a variational distribution q_i(ℓ) over the latent variable for each instance x_i, then minimizing the variational free energy, which we review next. Recall that the free energy, also known as the negative evidence lower bound (ELBO), is defined as: F = −Σ_i Σ_ℓ q_i(ℓ) log [p(x_i|ℓ) p_i(ℓ) / q_i(ℓ)] = Σ_i (KL(q_i ‖ r_i) − log p(x_i)), (1) where r_i(ℓ) = p(x_i|ℓ) p_i(ℓ) / Σ_ℓ' p(x_i|ℓ') p_i(ℓ') is the posterior distribution under the generative model. As the KL divergence is always non-negative, the free energy is minimized when q_i = r_i, in which case the free energy equals the negative log-likelihood of the data. Free energy minimization is used in various approaches to latent variable modeling. The expectation-maximization (EM) [6] algorithm minimizes (1) by iteratively setting q_i to equal the posteriors r_i and updating the parameters of the forward distributions p(x_i|ℓ) while keeping q_i fixed, with guaranteed convergence to a local minimum. In applications of EM predating deep generative models, the q_i distributions are not given by a predictor taking x_i as input; they are auxiliary distributions used only as part of the learning procedure and replaced at each EM iteration. Their dependence on x_i is implicit in energy minimization. In variational auto-encoders (VAEs) [21], both the generative model p(x|h) and the posterior q(h|x) are represented as neural networks. Here p is a deep nonlinear generative model with hidden latents h (not necessarily corresponding to labels), and evaluation of the full posterior is intractable. Instead, q is learned as a neural network q(h|x_i; θ) jointly with p. VAE training involves sampling from q_i(h) = q(h|x_i; θ) in learning to improve the agreement between the forward (generative) model p (and its true posterior r_i(h)) and the reverse (posterior) neural network model q. Implicit generative models. To derive our training objectives, we will assume the existence of the generative model p(x|ℓ) without ever parametrizing it or sampling from it. Instead, we parametrize only the reverse (variational posterior) model q_i(ℓ) = q(ℓ|x_i; θ) in the form of a neural net taking x_i as input.
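The two forms of the free energy in (1) can be checked numerically. The sketch below builds a toy forward model p(x_i|ℓ), priors p_i(ℓ), and variational posteriors q_i(ℓ) with assumed small sizes, and verifies that the ELBO form and the KL-plus-evidence form agree:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 5, 3   # instances and label classes (assumed toy sizes)

def rand_dist(shape, axis):
    m = rng.random(shape)
    return m / m.sum(axis=axis, keepdims=True)

p_x_given_l = rand_dist((N, L), axis=0)   # toy forward model p(x_i | l)
prior = rand_dist((N, L), axis=1)         # prior beliefs p_i(l)
q = rand_dist((N, L), axis=1)             # variational posteriors q_i(l)

# Free energy, eq. (1), ELBO form:
#   F = -sum_i sum_l q_i(l) log [ p(x_i|l) p_i(l) / q_i(l) ]
F = -(q * np.log(p_x_given_l * prior / q)).sum()

# Equivalent form: F = sum_i ( KL(q_i || r_i) - log p(x_i) )
evidence = (p_x_given_l * prior).sum(axis=1)     # p(x_i)
r = p_x_given_l * prior / evidence[:, None]      # model posterior r_i(l)
kl = (q * np.log(q / r)).sum(axis=1)             # KL(q_i || r_i)
F2 = (kl - np.log(evidence)).sum()

assert np.allclose(F, F2)                        # the two forms agree
```

Since each KL term is non-negative, F also upper-bounds the negative log-likelihood −Σ_i log p(x_i), with equality at q_i = r_i.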
The trained model q is the direct output of our algorithms. Given the variational distributions q_i, we can minimize (1) with respect to the forward probabilities p(x_i|ℓ) for all x_i and ℓ, subject to Σ_i p(x_i|ℓ) = 1 for all ℓ. The optimum is achieved by: p(x_i|ℓ) = q_i(ℓ) / Σ_j q_j(ℓ). (2) We refer to this as the implicit generative model associated to q. Much as the posterior is implicit in the free energy minimization of the EM algorithm, here instead the forward model is just a matrix of numbers p_{i,ℓ}, implicit to free energy minimization – the opposite, in some sense, of what is done in EM optimization. The link x–ℓ is left entirely to the neural network q to capture explicitly. The probability mass of the implicit generative model is not uniformly distributed; rather, the data points for which the variational posterior q_i(ℓ) is more certain are considered more likely under that latent ℓ, unless it corresponds to a popular class ℓ to which many other data points are also assigned. (¹For intuition, consider a model where ℓ is the mixture index in a mixture of high-dimensional Gaussians. If the observations x_i naturally cluster into several well-defined clusters, then the prior p_i(ℓ) may be flat, but the posterior distributions under the model assign data points to clusters with high certainty.) The posterior under the implied generative model is r_i(ℓ) ∝ p_i(ℓ) q_i(ℓ) / Σ_j q_j(ℓ). (3) Note that we can compute r_i(ℓ) by multiplying the re-normalized model outputs with the prior at each instance, so that for each instance i we have two outputs: q_i and r_i. We propose two methods of optimizing the free energy with respect to the parameters θ by gradient steps. Each method iterates the following: (1) Calculate the posterior distributions r_i in terms of q_i as in (3). (2) Update the parameters of q with a gradient step: • Option QR: θ ← θ − η ∇_θ Σ_i KL(q_i ‖ r_i).
• Option RQ: θ ← θ − η ∇_θ Σ_i KL(r_i ‖ q_i). Gradients of the model parameters are propagated through the expression of r_i in terms of q_i (see Fig. 2). Both losses have a stable point when q_i = r_i. A discussion of the relative benefits and limitations of the QR and RQ loss formulations is given in §A, along with practical considerations for implementing these methods. Option QR uses the KL divergence in the direction it appears in (1) and thus guarantees continual improvements in free energy and convergence to a local minimum (except for the effects of stochasticity in minibatch sampling). Substituting r_i from (3), the free energy (1) becomes: F = Σ_{i,ℓ} q_i(ℓ) log(Σ_j q_j(ℓ)) − Σ_{i,ℓ} q_i(ℓ) log p_i(ℓ). (4) This criterion does not encourage entropy of individual q_i distributions, but of their average. The second term alone would be minimized if the q network could put all the mass on argmax_ℓ p_i(ℓ) for each data point, but the first term promotes diversity in assignment of latents (labels) ℓ across the entire dataset. Thus a network q can optimize (4) if it makes different confident predictions for different data points. To illustrate this, consider the case when p_i(ℓ) = p(ℓ), i.e., all data points have the same prior. Then (4) is minimized when (1/N) Σ_i q_i(ℓ) = p(ℓ), and this can be achieved when the network learns a constant distribution q(ℓ|x_i; θ) = p(ℓ). But the free energy is also minimized if the network predicts only a single label for each data point with high certainty, while varying its predictions so that the counts of label predictions match the prior. As demonstrated in Fig. 1 and in our experiments, avoiding degenerate solutions is not hard. We attribute this to two factors. First, the situations of interest typically involve uncertain, but varying, distributions p_i(ℓ) which break symmetries that could lead to ignoring the data features x_i.
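One QR iteration can be sketched without automatic differentiation. The snippet below uses a toy linear-softmax stand-in for the network q (the sizes, data, and the finite-difference gradient in place of back-propagation are all assumptions for illustration), forms r_i from eq. (3), and takes one gradient step on Σ_i KL(q_i ‖ r_i), checking that the QR loss decreases:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, L = 12, 4, 3   # instances, feature dim, label classes (assumed toy sizes)
X = rng.standard_normal((N, D))
prior = rng.random((N, L))
prior /= prior.sum(axis=1, keepdims=True)        # prior beliefs p_i(l)

def q_of(W):
    # q_i(l) = softmax(x_i W): a linear stand-in for the posterior network q
    s = X @ W
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def qr_loss(W):
    # Option QR objective: sum_i KL(q_i || r_i), with r_i built from eq. (3);
    # gradients flow through r_i via its dependence on q_i.
    q = q_of(W)
    r = prior * q / q.sum(axis=0)                # unnormalized r_i(l)
    r /= r.sum(axis=1, keepdims=True)
    return (q * np.log(q / r)).sum()

W = rng.standard_normal((D, L))
grad = np.zeros_like(W)
eps = 1e-5
for idx in np.ndindex(*W.shape):                 # finite-difference gradient
    Wp, Wm = W.copy(), W.copy()
    Wp[idx] += eps
    Wm[idx] -= eps
    grad[idx] = (qr_loss(Wp) - qr_loss(Wm)) / (2 * eps)

eta = 1e-3                                       # learning rate
assert qr_loss(W - eta * grad) < qr_loss(W)      # one QR step lowers the loss
```

The RQ variant only swaps the arguments of the KL term; both updates leave q_i = r_i as a stable point.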
Second, the neural networks used as posterior models q come with their own constraints in parametrization (e.g., a multi-layer convolutional net with a certain number of filters in each layer), and the inductive biases that are a consequence of gradient descent learning in nonlinear, multilayer networks. In fact, as discussed in §3 and §E.1, even unsupervised clustering is possible with suitably chosen priors that break symmetry, allowing this approach to be used for self-supervised training. See also §C for more on relationships with other approaches. | In this paper, the authors present implicit generative models within a free energy criterion, combining the training of neural networks for label prediction with the modeling of a label prior. They also discuss multiple sources of label priors to handle label uncertainty for coarse and imprecise data input. The extensive experiments conducted on five different tasks and the experimental results seem to validate the efficacy of the proposed method. | SP:aa1054c1782ad4562ed402bb56f70f9287d197ce |
Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics | Efficient model selection for identifying a suitable pre-trained neural network for a downstream task is a fundamental yet challenging problem in deep learning. Current practice incurs expensive computational costs in model training for performance prediction. In this paper, we propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training. Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections. Therefore, a converged neural network is associated with an equilibrium state of a networked system composed of those edges. To this end, we construct a network mapping φ, converting a neural network G_A to a directed line graph G_B that is defined on the edges of G_A. Next, we derive a neural capacitance metric β_eff as a predictive measure universally capturing the generalization capability of G_A on the downstream task using only a handful of early training results. We carried out extensive experiments using 17 popular pre-trained ImageNet models and five benchmark datasets, including CIFAR10, CIFAR100, SVHN, Fashion MNIST, and Birds, to evaluate the fine-tuning performance of our framework. Our neural capacitance metric is shown to be a powerful indicator for model selection based only on early training results and is more efficient than state-of-the-art methods. 1 INTRODUCTION . Leveraging a pre-trained neural network (i.e., a source model) and fine-tuning it to solve a target task is a common and effective practice in deep learning, such as transfer learning. Transfer learning has been widely used to solve complex tasks in the text and vision domains. In vision, models trained on ImageNet are leveraged to solve diverse tasks such as image classification and object detection.
In text, language models that are trained on large amounts of public data comprising books, Wikipedia, etc., are employed to solve tasks such as classification and language generation. Although such a technique can achieve good performance on a target task, a fundamental yet challenging problem is how to select a suitable pre-trained model from a pool of candidates in an efficient manner. The naive solution of training each candidate fully on the target data can find the best pre-trained model but is infeasible due to the considerable consumption of time and computational resources. This challenge motivates the need for an efficient predictive measure to capture the performance of a pre-trained model on the target task based only on early training results (e.g., predicting final model performance based on the statistics obtained from the first few training epochs). In order to implement efficient neural network (NN) model selection, this paper proposes a novel framework to forecast the predictive ability of a model from its cumulative information in the early phase of NN training, as practised in learning curve prediction (Domhan et al., 2015; Chandrashekaran & Lane, 2017; Baker et al., 2017; Wistuba & Pedapati, 2020). Most prior work on learning curve prediction aims to capture the trajectory of learning curves with a regression function of models' validation accuracy. Some of the previous algorithms developed in this field require training data from additional learning curves to train the predictors (Chandrashekaran & Lane, 2017; Baker et al., 2017; Wistuba & Pedapati, 2020). In contrast, our model does not require any such data; it relies solely on the NN architecture. Ranking models according to their final accuracy after fine-tuning is much more challenging, as the learning curves are very similar to each other.
The entire NN training process involves iterative updates of the weights of synaptic connections, according to a particular optimization algorithm, e.g., gradient descent or stochastic gradient descent (SGD) (Bottou, 2012; LeCun et al., 2015). In essence, many factors impact how weights are updated, including the training data, the neural architecture, the loss function, and the optimization algorithm. Moreover, weights evolving during NN training can in many respects be viewed as a discrete dynamical system. The perspective of viewing NN training as a dynamical system has been studied by the community (Mei et al., 2018; Chang et al., 2018; Banburski et al., 2019; Dogra, 2020; Tano et al., 2020; Dogra & Redman, 2020; Feng & Tu, 2021), and many have attempted theoretical explanations of convergence rates and generalization error bounds. In this paper, we provide the first attempt at exploring its power in neural model selection. One limitation of current approaches is that they concentrate on the macroscopic and collective behavior of the system but lack a dedicated examination of the individual interactions between the trainable weights, or synaptic connections, which are crucial to understanding the dependency of these weights and how they co-evolve during training. To fill this gap, we study the system from a microscopic perspective and build edge dynamics of synaptic connections from SGD in terms of differential equations, from which we also build an associated network. The edge dynamics induced from SGD is nonlinear and highly coupled. It is very challenging to solve, considering the millions of weights in many convolutional neural networks (CNNs), e.g., 16M weights in MobileNet (Howard et al., 2017) and 528M in VGG16 (Simonyan & Zisserman, 2014). Gao et al. (2016) proposed a universal topological metric for the associated network to decouple the system.
The metric will be used for model selection in our approach, and it is shown to be powerful in the search for the best predictive model. We illustrate our proposed framework in Fig. 1. The main contributions of our framework can be summarized as follows: • View NN training as a dynamical system over synaptic connections, and for the first time investigate the interactions of synaptic connections from a microscopic perspective. • Propose the neural capacitance metric β_eff for neural network model selection. • Empirical results of 17 pre-trained models on five benchmark datasets show that our β_eff based approach outperforms current learning curve prediction approaches. • For rank prediction according to the performance of pre-trained models, our approach improves by 9.1/38.3/12.4/65.3/40.1% on CIFAR10/CIFAR100/SVHN/Fashion MNIST/Birds over the best baseline with observations from learning curves of length only 5 epochs. 2 RELATED WORK . Learning Curve Prediction. Chandrashekaran & Lane (2017) treated the current learning curve (LC) as an affine transformation of previous LCs. They built an ensemble of transformations employing previous LCs and the first few epochs of the current LC to predict the final accuracy of the current LC. Baker et al. (2017) proposed an SVM-based LC predictor using features extracted from previous LCs, including architecture information such as the number of layers and parameters, and training settings such as the learning rate and learning rate decay. A separate SVM is used to predict the accuracy of an LC at a particular epoch. Domhan et al. (2015) trained an ensemble of parametric functions that observe the first few epochs of an LC and extrapolate it. Klein et al. (2017a) devised a Bayesian NN to model the functions that Domhan et al. formulated, to capture the structure of the LCs more effectively. Wistuba & Pedapati (2020) developed a transfer learning based predictor that was trained on LCs generated from other datasets.
It is a NN-based predictor that leverages architecture and dataset embeddings to capture the similarities between the architectures of various models and also the other datasets that it was trained on. Dynamical System View of NNs. There have been many efforts to study the dynamics of NN training. Some prior work on SGD dynamics for NNs generally assumes a particular input distribution or label-generation process; these works obtained global convergence for shallow NNs (Tian, 2017; Banburski et al., 2019). System identification itself is a complicated task (Haykin, 2010; Lillicrap et al., 2020). In studying the generalisation phenomenon of deep NNs, Goldt et al. (2019) formulated SGD as a set of differential equations, but this is limited to over-parameterised two-layer NNs under the teacher-student framework, where the teacher network determines how the labels are generated. Also, some interesting phenomena (Frankle et al., 2020) are observed during the early phase of NN training: trainable sparse sub-networks emerge (Frankle et al., 2019), gradient descent moves into a small subspace (Gur-Ari et al., 2018), and there exists a critical effective connection between layers (Achille et al., 2019). Bhardwaj et al. (2021) established a connection between architectures (with concatenation-type skip connections) and performance, and proposed a new topological metric to identify NNs with similar accuracy. Many of these studies are built on dynamical systems and network science, which is a promising direction for studying the mechanisms of deep learning. 3 PRELIMINARIES . Dynamical System of a Network. Many real complex systems, e.g., plant-pollinator interactions (Waser & Ollerton, 2006) and the spread of COVID-19 (Thurner et al., 2020), can be described with networks (Mitchell, 2006; Barabási & Pósfai, 2016). Let G = (V, E) be a network with node set V and edge set E.
Assuming n = |V|, the interactions between nodes can be formulated as a set of differential equations ẋ_i = f(x_i) + Σ_{j∈V} P_ij g(x_i, x_j), ∀i ∈ V, (1) where x_i is the state of node i. In real systems, it could be the abundance of a plant in an ecological network, the infection rate of a person in an epidemic network, or the expression level of a gene in a regulatory network. The term P is the adjacency matrix of G, where the entry P_ij indicates the interaction strength between nodes i and j. The functions f(·) and g(·, ·) capture the internal and external impacts on node i, respectively. Usually, they are nonlinear. Let x = (x_1, x_2, ..., x_n). For a small network, given an initial state, one can run a forward simulation to an equilibrium state x*, such that ẋ*_i = f(x*_i) + Σ_{j∈V} P_ij g(x*_i, x*_j) = 0. However, when the size of the system goes up to millions or even billions, solving the coupled differential equations poses a big challenge. The problem can be efficiently addressed by employing a mean-field technique (Gao et al., 2016), where a linear operator L_P(·) is introduced to decouple the system. Specifically, the operator depends on the adjacency matrix P and is defined as L_P(z) = 1^T P z / (1^T P 1), (2) where z ∈ R^n. Let δ_in = P1 be the nodes' in-degrees and δ_out = 1^T P be the nodes' out-degrees. For a weighted G, the degrees are weighted as well. Applying L_P(·) to δ_in gives β_eff = L_P(δ_in) = 1^T P δ_in / (1^T δ_in) = δ_out^T δ_in / (1^T δ_in), (3) which proves to be a powerful metric to measure the resilience of networks, and has been applied to make reliable inferences from incomplete networks (Jiang et al., 2020b;a). We use it to measure the predictive ability of a NN (see Section 4.3), whose training is in essence a dynamical system. For an overview of the related technique, readers are referred to Appendix H. NN Training is a Dynamical System.
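The metric in eq. (3) is a one-line computation from the adjacency matrix. A minimal sketch (the helper name and the all-ones test matrix are illustrative assumptions):

```python
import numpy as np

def beta_eff(P):
    """beta_eff of eq. (3) for a (possibly weighted, directed) adjacency matrix P."""
    one = np.ones(P.shape[0])
    delta_in = P @ one           # in-degrees, delta_in = P 1
    delta_out = one @ P          # out-degrees, delta_out = 1^T P
    return (delta_out @ delta_in) / delta_in.sum()

# For an all-ones adjacency matrix on n nodes, every in- and out-degree is n,
# so beta_eff = (n * n * n) / (n * n) = n.
P = np.ones((4, 4))
assert np.isclose(beta_eff(P), 4.0)
```

For an identity adjacency matrix (every node a self-loop of weight 1), all degrees are 1 and β_eff = 1, illustrating that the metric reflects degree correlations rather than network size alone.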
Conventionally, training a NN is a nonlinear optimization problem. Because of the hierarchical structure of NNs, the training procedure is implemented by two alternating procedures: forward-propagation (FP) and back-propagation (BP), as described in Fig. 1(a). During FP, data goes through the input layer and hidden layers up to the output layer, which produces the predictions for the input data. The differences between the outputs and the labels of the input data are used to define an objective function C, a.k.a. the training error function. BP proceeds to minimize C, in the reverse direction of FP, by propagating the error from the output layer down to the input layer. The trainable weights of synaptic connections are updated accordingly. Let G_A be a NN, w be the flattened weight vector of G_A, and z be the set of activation values. As a whole, the training of G_A can be described with two coupled dynamics: A on G_A and B on G_B, where the nodes in G_A are neurons and the nodes in G_B are the synaptic connections. The coupling relation arises from the strong inter-dependency between z and w: the states z (activation values or activation gradients) of G_A are the parameters of B, and the states w of G_B are the trainable parameters of G_A. If we put the whole training process in the context of networked systems, A denotes a node dynamics because the states of nodes evolve during FP, and B expresses an edge dynamics because of the updates of edge weights during BP (Mei et al., 2018; Poggio et al., 2020a;b). Mathematically, we formulate the node and edge dynamics based on the gradients of C: (A) dz/dt ≈ h_A(z, t; w) = −∇_z C(z(t)), (4) (B) dw/dt ≈ h_B(w, t; z) = −∇_w C(w(t)), (5) where t denotes the training step. Let a_i^(ℓ) be the pre-activation of node i on layer ℓ, and σ_ℓ(·) be the activation function of layer ℓ. Usually, the output activation function is a softmax.
The hierarchical structure of G_A exerts some constraints over z for neighboring layers, i.e., z_i^(ℓ) = σ_ℓ(a_i^(ℓ)), 1 ≤ i ≤ n_ℓ, ∀1 ≤ ℓ < L, and z_k^(L) = exp{a_k^(L)} / Σ_j exp{a_j^(L)}, 1 ≤ k ≤ n_L, where n_ℓ is the total number of neurons on layer ℓ, and G_A has L+1 layers. It also presents a dependency between z and w. For example, when G_A is an MLP without bias, a_i^(ℓ) = w_i^(ℓ)T z^(ℓ−1), which builds an interconnection from G_A to G_B. Obviously, given w, the activation z satisfying all these constraints is also a fixed point of A. Meanwhile, an equilibrium state of B provides a set of optimal weights for G_A. | Accurately predicting the performance of a model at an early stage is important for efficient model selection without incurring too much computation. The paper proposes a neural capacitance metric as a predictive measure to capture the performance of a model on the downstream task using only a handful of early training results. The metric is derived from a line graph mapped from a neural network by modeling the dynamical system of the network. | SP:d18240d9eca55834f96c0223924e425a38685368 |
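The layer-wise constraints on z described in Section 3 can be made concrete with a tiny bias-free MLP forward pass; the widths (4 → 3 → 2) and the tanh hidden activation below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy widths: input 4, hidden 3, output 2; sigma_1 = tanh on the
# hidden layer and a softmax on the output layer, bias-free throughout.
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((2, 3))

z0 = rng.standard_normal(4)         # input activations z^(0)
a1 = W1 @ z0                        # pre-activations a^(1)_i = w^(1)T_i z^(0)
z1 = np.tanh(a1)                    # constraint z^(1)_i = sigma_1(a^(1)_i)
a2 = W2 @ z1                        # output pre-activations a^(L)
z2 = np.exp(a2) / np.exp(a2).sum()  # z^(L)_k = exp(a^(L)_k) / sum_j exp(a^(L)_j)

assert np.isclose(z2.sum(), 1.0)    # the softmax output is a distribution
```

For fixed weights w (here W1, W2), the activations computed this way satisfy all the layer constraints by construction, i.e., they form a fixed point of the node dynamics A.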
Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics | Efficient model selection for identifying a suitable pre-trained neural network for a downstream task is a fundamental yet challenging problem in deep learning. Current practice incurs expensive computational costs in model training for performance prediction. In this paper, we propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training. Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections. Therefore, a converged neural network is associated with an equilibrium state of a networked system composed of those edges. To this end, we construct a network mapping φ, converting a neural network G_A to a directed line graph G_B that is defined on the edges of G_A. Next, we derive a neural capacitance metric β_eff as a predictive measure universally capturing the generalization capability of G_A on the downstream task using only a handful of early training results. We carried out extensive experiments using 17 popular pre-trained ImageNet models and five benchmark datasets, including CIFAR10, CIFAR100, SVHN, Fashion MNIST, and Birds, to evaluate the fine-tuning performance of our framework. Our neural capacitance metric is shown to be a powerful indicator for model selection based only on early training results and is more efficient than state-of-the-art methods. 1 INTRODUCTION . Leveraging a pre-trained neural network (i.e., a source model) and fine-tuning it to solve a target task is a common and effective practice in deep learning, such as transfer learning. Transfer learning has been widely used to solve complex tasks in the text and vision domains. In vision, models trained on ImageNet are leveraged to solve diverse tasks such as image classification and object detection.
In text, language models that are trained on large amounts of public data comprising books, Wikipedia, etc., are employed to solve tasks such as classification and language generation. Although such a technique can achieve good performance on a target task, a fundamental yet challenging problem is how to select a suitable pre-trained model from a pool of candidates in an efficient manner. The naive solution of training each candidate fully on the target data can find the best pre-trained model but is infeasible due to the considerable consumption of time and computational resources. This challenge motivates the need for an efficient predictive measure to capture the performance of a pre-trained model on the target task based only on early training results (e.g., predicting final model performance based on the statistics obtained from the first few training epochs). In order to implement efficient neural network (NN) model selection, this paper proposes a novel framework to forecast the predictive ability of a model from its cumulative information in the early phase of NN training, as practised in learning curve prediction (Domhan et al., 2015; Chandrashekaran & Lane, 2017; Baker et al., 2017; Wistuba & Pedapati, 2020). Most prior work on learning curve prediction aims to capture the trajectory of learning curves with a regression function of models' validation accuracy. Some of the previous algorithms developed in this field require training data from additional learning curves to train the predictors (Chandrashekaran & Lane, 2017; Baker et al., 2017; Wistuba & Pedapati, 2020). In contrast, our model does not require any such data; it relies solely on the NN architecture. Ranking models according to their final accuracy after fine-tuning is much more challenging, as the learning curves are very similar to each other.
The entire NN training process involves iterative updates of the weights of synaptic connections, according to a particular optimization algorithm, e.g., gradient descent or stochastic gradient descent (SGD) (Bottou, 2012; LeCun et al., 2015). In essence, many factors impact how weights are updated, including the training data, the neural architecture, the loss function, and the optimization algorithm. Moreover, weights evolving during NN training can in many respects be viewed as a discrete dynamical system. The perspective of viewing NN training as a dynamical system has been studied by the community (Mei et al., 2018; Chang et al., 2018; Banburski et al., 2019; Dogra, 2020; Tano et al., 2020; Dogra & Redman, 2020; Feng & Tu, 2021), and many have attempted theoretical explanations of convergence rates and generalization error bounds. In this paper, we provide the first attempt at exploring its power in neural model selection. One limitation of current approaches is that they concentrate on the macroscopic and collective behavior of the system but lack a dedicated examination of the individual interactions between the trainable weights, or synaptic connections, which are crucial to understanding the dependency of these weights and how they co-evolve during training. To fill this gap, we study the system from a microscopic perspective and build edge dynamics of synaptic connections from SGD in terms of differential equations, from which we also build an associated network. The edge dynamics induced from SGD is nonlinear and highly coupled. It is very challenging to solve, considering the millions of weights in many convolutional neural networks (CNNs), e.g., 16M weights in MobileNet (Howard et al., 2017) and 528M in VGG16 (Simonyan & Zisserman, 2014). Gao et al. (2016) proposed a universal topological metric for the associated network to decouple the system.
The metric will be used for model selection in our approach , and it is shown to be powerful in the search for the best predictive model . We illustrate our proposed framework in Fig. 1 . The main contributions of our framework can be summarized as follows :
• View NN training as a dynamical system over synaptic connections , and investigate , for the first time , the interactions of synaptic connections from a microscopic perspective .
• Propose the neural capacitance metric βeff for neural network model selection .
• Empirical results of 17 pre-trained models on five benchmark datasets show that our βeff-based approach outperforms current learning curve prediction approaches .
• For rank prediction according to the performance of pre-trained models , our approach improves by 9.1/38.3/12.4/65.3/40.1 % on CIFAR10/CIFAR100/SVHN/Fashion MNIST/Birds over the best baseline with observations from learning curves of length only 5 epochs .
2 RELATED WORK . Learning Curve Prediction . Chandrashekaran & Lane ( 2017 ) treated the current learning curve ( LC ) as an affine transformation of previous LCs . They built an ensemble of transformations employing previous LCs and the first few epochs of the current LC to predict the final accuracy of the current LC . Baker et al . ( 2017 ) proposed an SVM-based LC predictor using features extracted from previous LCs , including architecture information such as the number of layers and parameters , and training settings such as the learning rate and learning rate decay . A separate SVM is used to predict the accuracy of an LC at a particular epoch . Domhan et al . ( 2015 ) trained an ensemble of parametric functions that observe the first few epochs of an LC and extrapolate it . Klein et al . ( 2017a ) devised a Bayesian NN to model the functions that Domhan et al . formulated to capture the structure of the LCs more effectively . Wistuba & Pedapati ( 2020 ) developed a transfer learning based predictor that was trained on LCs generated from other datasets .
It is an NN-based predictor that leverages architecture and dataset embeddings to capture the similarities between the architectures of various models and also the other datasets that it was trained on . Dynamical System View of NNs . There are many efforts to study the dynamics of NN training . Some prior works on SGD dynamics for NNs generally make assumptions about the input distribution or how the labels are generated , and obtain global convergence results for shallow NNs ( Tian , 2017 ; Banburski et al. , 2019 ) . System identification itself is a complicated task ( Haykin , 2010 ; Lillicrap et al. , 2020 ) . In studying the generalization phenomenon of deep NNs , Goldt et al . ( 2019 ) formulated SGD with a set of differential equations , but this is limited to over-parameterized two-layer NNs under the teacher-student framework , where the teacher network determines how the labels are generated . Also , some interesting phenomena ( Frankle et al. , 2020 ) are observed during the early phase of NN training , such as the emergence of trainable sparse sub-networks ( Frankle et al. , 2019 ) , gradient descent moving into a small subspace ( Gur-Ari et al. , 2018 ) , and the existence of critical effective connections between layers ( Achille et al. , 2019 ) . Bhardwaj et al . ( 2021 ) built a nice connection between architectures ( with concatenation-type skip connections ) and performance , and proposed a new topological metric to identify NNs with similar accuracy . Many of these studies are built on dynamical systems and network science , which is a promising direction for studying deep learning mechanisms . 3 PRELIMINARIES . Dynamical System of a Network . Many real complex systems , e.g. , plant-pollinator interactions ( Waser & Ollerton , 2006 ) and the spread of COVID-19 ( Thurner et al. , 2020 ) , can be described with networks ( Mitchell , 2006 ; Barabási & Pósfai , 2016 ) . Let G = ( V , E ) be a network with node set V and edge set E.
Assuming $n = |V|$ , the interactions between nodes can be formulated as a set of differential equations
$$\dot{x}_i = f(x_i) + \sum_{j \in V} P_{ij}\, g(x_i, x_j), \quad \forall i \in V, \qquad (1)$$
where $x_i$ is the state of node $i$ . In real systems , it could be the abundance of a plant in an ecological network , the infection rate of a person in an epidemic network , or the expression level of a gene in a regulatory network . The term $P$ is the adjacency matrix of G , where the entry $P_{ij}$ indicates the interaction strength between nodes $i$ and $j$ . The functions $f(\cdot)$ and $g(\cdot, \cdot)$ capture the internal and external impacts on node $i$ , respectively . Usually , they are nonlinear . Let $x = (x_1, x_2, \ldots, x_n)$ . For a small network , given an initial state , one can run a forward simulation for an equilibrium state $x^*$ , such that $\dot{x}^*_i = f(x^*_i) + \sum_{j \in V} P_{ij}\, g(x^*_i, x^*_j) = 0$ . However , when the size of the system goes up to millions or even billions , it poses a big challenge to solve the coupled differential equations . The problem can be efficiently addressed by employing a mean-field technique ( Gao et al. , 2016 ) , where a linear operator $L_P(\cdot)$ is introduced to decouple the system . Specifically , the operator depends on the adjacency matrix $P$ and is defined as
$$L_P(z) = \frac{\mathbf{1}^T P z}{\mathbf{1}^T P \mathbf{1}}, \qquad (2)$$
where $z \in \mathbb{R}^n$ . Let $\delta_{in} = P\mathbf{1}$ be the nodes ’ in-degrees and $\delta_{out}^T = \mathbf{1}^T P$ be the nodes ’ out-degrees . For a weighted G , the degrees are weighted as well . Applying $L_P(\cdot)$ to $\delta_{in}$ gives
$$\beta_{\mathrm{eff}} = L_P(\delta_{in}) = \frac{\mathbf{1}^T P \delta_{in}}{\mathbf{1}^T \delta_{in}} = \frac{\delta_{out}^T \delta_{in}}{\mathbf{1}^T \delta_{in}}, \qquad (3)$$
which proves to be a powerful metric to measure the resilience of networks , and has been applied to make reliable inferences from incomplete networks ( Jiang et al. , 2020b ; a ) . We use it to measure the predictive ability of an NN ( see Section 4.3 ) , whose training in essence is a dynamical system . For an overview of the related technique , the readers are referred to Appendix H. NN Training is a Dynamical System .
Conventionally , training an NN is a nonlinear optimization problem . Because of the hierarchical structure of NNs , the training procedure is implemented by two alternating procedures : forward-propagation ( FP ) and back-propagation ( BP ) , as described in Fig. 1 ( a ) . During FP , data goes through the input layer , hidden layers , up to the output layer , which produces the predictions for the input data . The differences between the outputs and the labels of the input data are used to define an objective function C , a.k.a . the training error function . BP proceeds to minimize C , in the reverse direction of FP , by propagating the error from the output layer down to the input layer . The trainable weights of synaptic connections are updated accordingly . Let $G_A$ be an NN , $w$ be the flattened weight vector for $G_A$ , and $z$ be the set of activation values . As a whole , the training of $G_A$ can be described with two coupled dynamics : A on $G_A$ , and B on $G_B$ , where nodes in $G_A$ are neurons , and nodes in $G_B$ are the synaptic connections . The coupling relation arises from the strong inter-dependency between $z$ and $w$ : the states $z$ ( activation values or activation gradients ) of $G_A$ are the parameters of B , and the states $w$ of $G_B$ are the trainable parameters of $G_A$ . If we put the whole training process in the context of networked systems , A denotes a node dynamics because the states of nodes evolve during FP , and B expresses an edge dynamics because of the updates of edge weights during BP ( Mei et al. , 2018 ; Poggio et al. , 2020a ; b ) . Mathematically , we formulate the node and edge dynamics based on the gradients of C :
$$(A)\quad dz/dt \approx h_A(z, t; w) = -\nabla_z C(z(t)), \qquad (4)$$
$$(B)\quad dw/dt \approx h_B(w, t; z) = -\nabla_w C(w(t)), \qquad (5)$$
where $t$ denotes the training step . Let $a^{(\ell)}_i$ be the pre-activation of node $i$ on layer $\ell$ , and $\sigma_\ell(\cdot)$ be the activation function of layer $\ell$ . Usually , the output activation function is a softmax .
The hierarchical structure of $G_A$ exerts some constraints over $z$ for neighboring layers , i.e. , $z^{(\ell)}_i = \sigma_\ell(a^{(\ell)}_i)$ , $1 \le i \le n_\ell$ , $\forall\, 1 \le \ell < L$ , and $z^{(L)}_k = \exp\{a^{(L)}_k\} / \sum_j \exp\{a^{(L)}_j\}$ , $1 \le k \le n_L$ , where $n_\ell$ is the total number of neurons on layer $\ell$ , and $G_A$ has $L+1$ layers . It also presents a dependency between $z$ and $w$ . For example , when $G_A$ is an MLP without bias , $a^{(\ell)}_i = w^{(\ell)T}_i z^{(\ell-1)}$ , which builds an interconnection from $G_A$ to $G_B$ . It is obvious that , given $w$ , the activation $z$ satisfying all these constraints is also a fixed point of A . Meanwhile , an equilibrium state of B provides a set of optimal weights for $G_A$ . | This paper proposes a framework to select neural networks for downstream tasks. To identify the model with better generalization, the authors propose a new metric (Neural Capacitance, NCP) to predict precise learning curves, and provide theoretical explanations for NCP. The authors then verify the advantages of the proposed method when applied to different datasets (CIFAR10, CIFAR100, SVHN, Fashion MNIST, Birds) with 17 CNN models. | SP:d18240d9eca55834f96c0223924e425a38685368 |
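The βeff metric of Eq. (3) above is cheap to compute from an adjacency matrix. Below is a minimal NumPy sketch (my own illustration, not the paper's code; names are made up):

```python
import numpy as np

def beta_eff(P):
    """Resilience metric beta_eff = (delta_out^T delta_in) / (1^T delta_in), Eq. (3)."""
    one = np.ones(P.shape[0])
    delta_in = P @ one      # (weighted) in-degrees,  delta_in   = P 1
    delta_out = one @ P     # (weighted) out-degrees, delta_out^T = 1^T P
    return (delta_out @ delta_in) / delta_in.sum()

# For an all-ones coupling matrix of size n, every (weighted) degree equals n,
# so beta_eff = n.
print(beta_eff(np.ones((3, 3))))  # → 3.0
```

For a weighted graph the same formula applies directly, since the degrees are simply weighted sums of the entries of `P`.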
Know Your Action Set: Learning Action Relations for Reinforcement Learning | Intelligent agents can solve tasks in a variety of ways depending on the action set at their disposal . For instance , while using a toolkit for repair , the choice of tool ( the action ) closely depends on what other tools are available . Yet , such dependence on other available actions is ignored in conventional reinforcement learning ( RL ) since it assumes a fixed action set . In this work , we posit that learning the interdependence between actions is crucial for RL agents acting under a varying action set . To this end , we propose a novel policy architecture that consists of an input graph composed of available actions and a graph attention network to learn the action interdependence . We demonstrate that our architecture makes action decisions by correctly attending to the relevant actions in both value-based and policy-based RL . Consequently , it consistently outperforms non-relational architectures on applications where the action space can vary , such as recommender systems and physical reasoning with tools and skills . 1 INTRODUCTION . Imagine you want to hang a picture on the wall . You may start by placing a nail on the wall and then use a hammer to nail it in . However , if you do not have access to a hammer , you would not use the nail . Instead , you would try alternative approaches such as using a hook and adhesive-strips , or a screw and a drill ( Figure 1 ) . In general , we solve tasks by choosing actions that interact with each other to achieve a desired outcome in the environment . Therefore , the best action decision depends not only on the environment , but also on what other actions are available to use . In this work , we address the setting of varying action space in sequential decision-making . 
Typically reinforcement learning ( RL ) assumes a fixed action space , but recent work has explored variations in action space when adding or removing actions ( Boutilier et al. , 2018 ; Chandak et al. , 2020a ; b ) or for generalization to unseen actions ( Jain et al. , 2020 ) . These assume that the given actions can be treated independently in decision-making . But this assumption often does not hold . As the picture hanging example illustrates , the optimality of choosing the nail is dependent on the availability of a hammer . Therefore , our goal is to address this problem of learning the interdependence of actions . Addressing this problem is important for many varying action space applications . These include recommender systems where articles to recommend vary every day and physical reasoning where decision-making must be robust to any given set of tools , objects , or skills . In this paper , we benchmark three such scenarios where learning action interdependence is crucial for solving the task optimally . These are ( i ) shortcut-actions in grid navigation , whose presence shortens the path to the goal , ( ii ) co-dependent actions in tool reasoning , where tools need other tools to be useful , and ( iii ) list-actions or slate-actions ( Sunehag et al. , 2015 ) in simulated and real-data recommender systems , where the user satisfaction is a consequence of the collective effect of the recommended list . There are three key challenges in learning action interdependence for RL with varying action space . First , since the overall actions are not known in advance , the policy framework must be flexible to work with action representations . Second , the action space is now an additional variable component just like state observations in RL . Therefore , the policy framework must incorporate a variably sized set of action representations as part of the input . Finally , an agent ’ s decision for each action ’ s utility ( e.g . 
Q-value or probability ) should explicitly model its relationship with other available actions . Action Relations → Action Choice . To address these challenges , we propose a novel policy architecture : AGILE , Action Graph for Interdependence Learning . AGILE builds on the utility network proposed in Jain et al . ( 2020 ) to incorporate action representations . Its key component is a graph attention network ( GAT ) ( Veličković et al. , 2017 ) over a fully connected graph of actions . This serves two objectives : summarizing the action set input and computing the action utility with relational information of other actions . Our primary contribution is introducing the problem of learning action interdependence for RL with a variable action space . We demonstrate that our proposed policy architecture , AGILE , learns meaningful action relations . This enables it to improve optimality in varying action space tasks such as simulated and real-world recommender systems , and reasoning with skills and tools . 2 RELATED WORK . 2.1 STOCHASTIC ACTION SETS . In recent work , Boutilier et al . ( 2018 ) provide the theoretical foundations of MDPs with Stochastic Action Sets ( SAS-MDPs ) , where actions are sampled from a known base set of actions . They propose a solution with Q-learning , and Chandak et al . ( 2020a ) extend it to policy gradients . An instance of SAS-MDPs is when certain actions become invalid ; e.g. , in games , they are masked out from the output action probability distribution ( Huang & Ontañón , 2020 ; Ye et al. , 2020 ; Kanervisto et al. , 2020 ) . However , the assumption of knowing the finite base action set limits the practical applicability of SAS-MDPs . For example , recommender agents often receive unseen items to recommend , and a robotic agent does not know beforehand what tools it will encounter in the future . We work with action representations to alleviate this limitation .
Furthermore , in SAS-MDPs the action set can vary at any timestep of an episode . Thus , the learned policy will only be optimal on average over all the possible action sets ( Qin et al. , 2020 ) . In our setting , by contrast , the action set only changes at the beginning of a task instance and stays constant over the episode . This is a more practical setup and raises the challenging problem of solving a task optimally with any given action space . 2.2 ACTION REPRESENTATIONS . In discrete action RL , action representations have enabled learning in large action spaces ( Dulac-Arnold et al. , 2015 ; Chandak et al. , 2019 ) , transfer learning to a different action space ( Chen et al. , 2019b ) , and efficient exploration by exploiting the shared structure among actions ( He et al. , 2015 ; Tennenholtz & Mannor , 2019 ; Kim et al. , 2019 ) . Recently , Chandak et al . ( 2020b ) use them to accelerate adaptation when new actions are added to an existing action set . In contrast , our setting requires learning in a constantly varying action space where actions can be added , removed , or completely replaced in an episode . Closely related to our work , Jain et al . ( 2020 ) assume a similar setting of a varying action space while training to generalize to unseen actions . Following their motivation , we use action representations to avoid making assumptions on the base action set . However , their policy treats each action independently , which we demonstrate leads to suboptimal performance . 2.3 LIST-WISE ACTION SPACE . In listwise RL ( or slate RL ) , the action space is combinatorial in list size . It commonly has applications in recommendation systems ( Sunehag et al. , 2015 ; Zhao et al. , 2017 ; 2018 ; Ie et al. , 2019b ; Gong et al. , 2019 ; Liu et al. , 2021 ; Jiang et al. , 2018 ; Song et al. , 2020 ) . In recent work , Chen et al .
( 2019a ) proposed the Cascaded DQNs ( CDQN ) framework , which learns a Q-network for every index in the list and trains them all with a shared reward . For our experiments on listwise RL , we utilize CDQN as the algorithm and show how it can be used with AGILE as the policy framework . 2.4 RELATIONAL REINFORCEMENT LEARNING . Graph neural networks ( Battaglia et al. , 2018 ) have been explored in RL tasks with a rich relational structure , such as morphological control ( Wang et al. , 2018 ; Sanchez-Gonzalez et al. , 2018 ; Pathak et al. , 2019 ) , multi-agent RL ( Tacchetti et al. , 2019 ) , physical construction ( Hamrick et al. , 2018 ) , and structured perception in games like StarCraft ( Zambaldi et al. , 2018 ) . In this paper , we propose that the set of actions possesses a relational structure that enables the actions to interact and solve tasks in the environment . Therefore , we leverage graph attention networks ( Veličković et al. , 2017 ) to learn these action relations and show that they can represent meaningful action interactions . 3 PROBLEM FORMULATION . A hallmark of intelligence is the ability to be robust in an ever-changing environment . To this end , we consider the setting of RL with a varying action space , where an agent receives a different action set in every task instance . Our key problem is to learn the interdependence of actions so the agent can act optimally with any given action set . Figure 1 illustrates that for a robot with the task of hanging a picture on the wall , starting with a nail is optimal only if it can access a hammer subsequently . 3.1 REINFORCEMENT LEARNING WITH VARYING ACTION SPACE . We consider episodic Markov Decision Processes ( MDPs ) with discrete action spaces , supplemented with action representations . The MDP is defined by a tuple $\{ S , \mathcal{A} , T , R , \gamma \}$ of states , actions , transition probability , reward function , and discount factor , respectively . The base set of actions $\mathcal{A}$ can be countably infinite .
To support infinite base actions , we use D-dimensional action representations $c_a \in \mathbb{R}^D$ to denote an action $a \in \mathcal{A}$ . These can be image or text features of a recommendable item , behavior characteristics of a tool , or simply one-hot vectors for a known and finite action set . In each instance of the MDP , a subset of actions $A \subset \mathcal{A}$ is given to the agent , with associated representations $C$ . Thereafter , at each time step $t$ in the episode , the agent receives a state observation $s_t \in S$ from the environment and acts with $a_t \in A$ . This results in a state transition to $s_{t+1}$ and a reward $r_t$ . The objective of the agent is to learn a policy $\pi(a \mid s, A)$ that maximizes the expected discounted reward over evaluation episodes with potentially unseen actions , $\mathbb{E}_{A \subset \mathcal{A}}\left[ \sum_t \gamma^{t-1} r_t \right]$ . 3.2 CHALLENGES OF VARYING ACTION SPACE .
1 . Using Action Representations : The policy framework should be flexible enough to take a set of action representations $C$ as input , and output the corresponding Q-values or probability distribution for RL .
2 . Action Set as part of State : When the action set varies , the original state space $S$ is no longer Markovian . For example , the state of a robot hanging the picture is under-specified without knowing whether its toolbox contains a hammer . The MDP can be preserved by reformulating the state space as $S' = \{\, s \circ C_A : s \in S ,\ A \subset \mathcal{A} \,\}$ to include the representations $C_A$ associated with the available actions $A$ ( Boutilier et al. , 2018 ) . Thus , the policy framework must support the input of $s \circ C_A$ , where $A$ is a variably sized action set .
3 . Interdependence of Actions : The optimal choice of an action $a_t \in A$ depends on the action choices that would be available in the future steps of the episode , $a_{t'} \in A$ . Recalling Figure 1 , a nail should only be picked initially from the toolbox if a hammer is accessible later .
Thus , we must explicitly model the relationships between the characteristics of the current action , $c_{a_t}$ , and those of the possible future actions , $c_{a_i}\ \forall a_i \in A$ . | This paper tackles an RL problem setting in which the actions available to an agent vary from episode to episode, and the optimal action in some states depends on the other actions that are available. The authors’ approach to this setting is to use a graph neural network to process all available actions, both to summarise the available set and to produce a relationally-informed representation for each action. These are then fed, with the state, into a utility function to give an action value or logit. Experiments in a number of benchmarks show the value of including information about the available actions. | SP:80376e14141e0c667c4e1c1568ca9d545a1c5fbd |
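The relational idea described above — attending over the set of available actions so that each action's utility depends on which other actions are present — can be illustrated with a single head of scaled dot-product attention. This is my own simplified NumPy sketch under stated assumptions (random untrained weights; the function name `action_utilities` and the feature concatenation are illustrative), not the authors' AGILE implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def action_utilities(state, C, Wq, Wk, Wv, w_out):
    """One utility (Q-value or logit) per available action.

    state: (S,) observation; C: (n, D) representations of the n given actions.
    """
    q, k, v = C @ Wq, C @ Wk, C @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))  # fully connected action graph
    rel = attn @ v                                 # relationally informed features
    feats = np.concatenate([np.tile(state, (len(C), 1)), C, rel], axis=1)
    return feats @ w_out                           # utility of each action

S, D, H, n = 2, 3, 4, 5
state = rng.normal(size=S)
C = rng.normal(size=(n, D))
Wq, Wk, Wv = (rng.normal(size=(D, H)) for _ in range(3))
w_out = rng.normal(size=S + D + H)

full = action_utilities(state, C, Wq, Wk, Wv, w_out)
# Removing one action changes the attention pattern, and hence the utilities
# of the remaining actions -- the interdependence the text argues for.
subset = action_utilities(state, C[:-1], Wq, Wk, Wv, w_out)
```

Note that a non-relational utility network (one that scores each `c_a` independently) would assign the same utility to an action regardless of the rest of the set, which is exactly the limitation the paper attributes to prior work.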
Know Your Action Set: Learning Action Relations for Reinforcement Learning | Intelligent agents can solve tasks in a variety of ways depending on the action set at their disposal . For instance , while using a toolkit for repair , the choice of tool ( the action ) closely depends on what other tools are available . Yet , such dependence on other available actions is ignored in conventional reinforcement learning ( RL ) since it assumes a fixed action set . In this work , we posit that learning the interdependence between actions is crucial for RL agents acting under a varying action set . To this end , we propose a novel policy architecture that consists of an input graph composed of available actions and a graph attention network to learn the action interdependence . We demonstrate that our architecture makes action decisions by correctly attending to the relevant actions in both value-based and policy-based RL . Consequently , it consistently outperforms non-relational architectures on applications where the action space can vary , such as recommender systems and physical reasoning with tools and skills . 1 INTRODUCTION . Imagine you want to hang a picture on the wall . You may start by placing a nail on the wall and then use a hammer to nail it in . However , if you do not have access to a hammer , you would not use the nail . Instead , you would try alternative approaches such as using a hook and adhesive-strips , or a screw and a drill ( Figure 1 ) . In general , we solve tasks by choosing actions that interact with each other to achieve a desired outcome in the environment . Therefore , the best action decision depends not only on the environment , but also on what other actions are available to use . In this work , we address the setting of varying action space in sequential decision-making . 
Typically reinforcement learning ( RL ) assumes a fixed action space , but recent work has explored variations in action space when adding or removing actions ( Boutilier et al. , 2018 ; Chandak et al. , 2020a ; b ) or for generalization to unseen actions ( Jain et al. , 2020 ) . These assume that the given actions can be treated independently in decision-making . But this assumption often does not hold . As the picture hanging example illustrates , the optimality of choosing the nail is dependent on the availability of a hammer . Therefore , our goal is to address this problem of learning the interdependence of actions . Addressing this problem is important for many varying action space applications . These include recommender systems where articles to recommend vary every day and physical reasoning where decision-making must be robust to any given set of tools , objects , or skills . In this paper , we benchmark three such scenarios where learning action interdependence is crucial for solving the task optimally . These are ( i ) shortcut-actions in grid navigation , whose presence shortens the path to the goal , ( ii ) co-dependent actions in tool reasoning , where tools need other tools to be useful , and ( iii ) list-actions or slate-actions ( Sunehag et al. , 2015 ) in simulated and real-data recommender systems , where the user satisfaction is a consequence of the collective effect of the recommended list . There are three key challenges in learning action interdependence for RL with varying action space . First , since the overall actions are not known in advance , the policy framework must be flexible to work with action representations . Second , the action space is now an additional variable component just like state observations in RL . Therefore , the policy framework must incorporate a variably sized set of action representations as part of the input . Finally , an agent ’ s decision for each action ’ s utility ( e.g . 
Q-value or probability ) should explicitly model its relationship with other available actions . Action Relations → Action Choice . To address these challenges , we propose a novel policy architecture : AGILE , Action Graph for Interdependence Learning . AGILE builds on the utility network proposed in Jain et al . ( 2020 ) to incorporate action representations . Its key component is a graph attention network ( GAT ) ( Veličković et al. , 2017 ) over a fully connected graph of actions . This serves two objectives : summarizing the action set input and computing the action utility with relational information of other actions . Our primary contribution is introducing the problem of learning action interdependence for RL with a variable action space . We demonstrate that our proposed policy architecture , AGILE , learns meaningful action relations . This enables it to improve optimality in varying action space tasks such as simulated and real-world recommender systems , and reasoning with skills and tools . 2 RELATED WORK . 2.1 STOCHASTIC ACTION SETS . In recent work , Boutilier et al . ( 2018 ) provide the theoretical foundations of MDPs with Stochastic Action Sets ( SAS-MDPs ) , where actions are sampled from a known base set of actions . They propose a solution with Q-learning , and Chandak et al . ( 2020a ) extend it to policy gradients . An instance of SAS-MDPs is when certain actions become invalid ; e.g. , in games , they are masked out from the output action probability distribution ( Huang & Ontañón , 2020 ; Ye et al. , 2020 ; Kanervisto et al. , 2020 ) . However , the assumption of knowing the finite base action set limits the practical applicability of SAS-MDPs . For example , recommender agents often receive unseen items to recommend , and a robotic agent does not know beforehand what tools it will encounter in the future . We work with action representations to alleviate this limitation .
Furthermore , in SAS-MDPs the action set can vary at any timestep of an episode . Thus , the learned policy will only be optimal on average over all the possible action sets ( Qin et al. , 2020 ) . In our setting , by contrast , the action set only changes at the beginning of a task instance and stays constant over the episode . This is a more practical setup and raises the challenging problem of solving a task optimally with any given action space . 2.2 ACTION REPRESENTATIONS . In discrete action RL , action representations have enabled learning in large action spaces ( Dulac-Arnold et al. , 2015 ; Chandak et al. , 2019 ) , transfer learning to a different action space ( Chen et al. , 2019b ) , and efficient exploration by exploiting the shared structure among actions ( He et al. , 2015 ; Tennenholtz & Mannor , 2019 ; Kim et al. , 2019 ) . Recently , Chandak et al . ( 2020b ) use them to accelerate adaptation when new actions are added to an existing action set . In contrast , our setting requires learning in a constantly varying action space where actions can be added , removed , or completely replaced in an episode . Closely related to our work , Jain et al . ( 2020 ) assume a similar setting of a varying action space while training to generalize to unseen actions . Following their motivation , we use action representations to avoid making assumptions on the base action set . However , their policy treats each action independently , which we demonstrate leads to suboptimal performance . 2.3 LIST-WISE ACTION SPACE . In listwise RL ( or slate RL ) , the action space is combinatorial in list size . It commonly has applications in recommendation systems ( Sunehag et al. , 2015 ; Zhao et al. , 2017 ; 2018 ; Ie et al. , 2019b ; Gong et al. , 2019 ; Liu et al. , 2021 ; Jiang et al. , 2018 ; Song et al. , 2020 ) . In recent work , Chen et al .
( 2019a ) proposed the Cascaded DQNs ( CDQN ) framework , which learns a Q-network for every index in the list and trains them all with a shared reward . For our experiments on listwise RL , we utilize CDQN as the algorithm and show how it can be used with AGILE as the policy framework . 2.4 RELATIONAL REINFORCEMENT LEARNING . Graph neural networks ( Battaglia et al. , 2018 ) have been explored in RL tasks with a rich relational structure , such as morphological control ( Wang et al. , 2018 ; Sanchez-Gonzalez et al. , 2018 ; Pathak et al. , 2019 ) , multi-agent RL ( Tacchetti et al. , 2019 ) , physical construction ( Hamrick et al. , 2018 ) , and structured perception in games like StarCraft ( Zambaldi et al. , 2018 ) . In this paper , we propose that the set of actions possesses a relational structure that enables the actions to interact and solve tasks in the environment . Therefore , we leverage graph attention networks ( Veličković et al. , 2017 ) to learn these action relations and show that they can represent meaningful action interactions . 3 PROBLEM FORMULATION . A hallmark of intelligence is the ability to be robust in an ever-changing environment . To this end , we consider the setting of RL with a varying action space , where an agent receives a different action set in every task instance . Our key problem is to learn the interdependence of actions so the agent can act optimally with any given action set . Figure 1 illustrates that for a robot with the task of hanging a picture on the wall , starting with a nail is optimal only if it can access a hammer subsequently . 3.1 REINFORCEMENT LEARNING WITH VARYING ACTION SPACE . We consider episodic Markov Decision Processes ( MDPs ) with discrete action spaces , supplemented with action representations . The MDP is defined by a tuple $\{ S , \mathcal{A} , T , R , \gamma \}$ of states , actions , transition probability , reward function , and discount factor , respectively . The base set of actions $\mathcal{A}$ can be countably infinite .
To support infinite base actions , we use D-dimensional action representations $c_a \in \mathbb{R}^D$ to denote an action $a \in \mathcal{A}$ . These can be image or text features of a recommendable item , behavior characteristics of a tool , or simply one-hot vectors for a known and finite action set . In each instance of the MDP , a subset of actions $A \subset \mathcal{A}$ is given to the agent , with associated representations $C$ . Thereafter , at each time step $t$ in the episode , the agent receives a state observation $s_t \in S$ from the environment and acts with $a_t \in A$ . This results in a state transition to $s_{t+1}$ and a reward $r_t$ . The objective of the agent is to learn a policy $\pi(a \mid s, A)$ that maximizes the expected discounted reward over evaluation episodes with potentially unseen actions , $\mathbb{E}_{A \subset \mathcal{A}}\left[ \sum_t \gamma^{t-1} r_t \right]$ . 3.2 CHALLENGES OF VARYING ACTION SPACE .
1 . Using Action Representations : The policy framework should be flexible enough to take a set of action representations $C$ as input , and output the corresponding Q-values or probability distribution for RL .
2 . Action Set as part of State : When the action set varies , the original state space $S$ is no longer Markovian . For example , the state of a robot hanging the picture is under-specified without knowing whether its toolbox contains a hammer . The MDP can be preserved by reformulating the state space as $S' = \{\, s \circ C_A : s \in S ,\ A \subset \mathcal{A} \,\}$ to include the representations $C_A$ associated with the available actions $A$ ( Boutilier et al. , 2018 ) . Thus , the policy framework must support the input of $s \circ C_A$ , where $A$ is a variably sized action set .
3 . Interdependence of Actions : The optimal choice of an action $a_t \in A$ depends on the action choices that would be available in the future steps of the episode , $a_{t'} \in A$ . Recalling Figure 1 , a nail should only be picked initially from the toolbox if a hammer is accessible later .
Thus , we must explicitly model the relationships between the characteristics of the current action c_{a_t} and those of the possible future actions c_{a_i} for all a_i ∈ A . | This paper introduces a novel policy architecture (AGILE) for RL agents that learn action interdependence from a varying action space. A graph attention network is used to calculate the action utility and to summarize the action set input. The authors argue that this architecture allows the RL agents to learn action relations that lead to optimal behaviour in a changing action space environment. This architecture is then evaluated in three benchmark domains. Further evaluations are done in the recommender systems context, with both simulated systems and using real-world data. Results seem to indicate that the proposed method outperforms non-relational RL methods in most cases. | SP:80376e14141e0c667c4e1c1568ca9d545a1c5fbd |
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer | 1 INTRODUCTION . Self-attention-based models , especially vision transformers ( ViTs ; Figure 1a ; Dosovitskiy et al. , 2021 ) , are an alternative to convolutional neural networks ( CNNs ) to learn visual representations . Briefly , ViT divides an image into a sequence of non-overlapping patches and then learns interpatch representations using multi-headed self-attention in transformers ( Vaswani et al. , 2017 ) . The general trend is to increase the number of parameters in ViT networks to improve the performance ( e.g. , Touvron et al. , 2021a ; Graham et al. , 2021 ; Wu et al. , 2021 ) . However , these performance improvements come at the cost of model size ( network parameters ) and latency . Many real-world applications ( e.g. , augmented reality and autonomous wheelchairs ) require visual recognition tasks ( e.g. , object detection and semantic segmentation ) to run on resource-constrained mobile devices in a timely fashion . To be effective , ViT models for such tasks should be light-weight and fast . Even if the model size of ViT models is reduced to match the resource constraints of mobile devices , their performance is significantly worse than light-weight CNNs . For instance , for a parameter budget of about 5-6 million , DeIT ( Touvron et al. , 2021a ) is 3 % less accurate than MobileNetv3 ( Howard et al. , 2019 ) . Therefore , the need to design light-weight ViT models is imperative . Light-weight CNNs have powered many mobile vision tasks . However , ViT-based networks are still far from being used on such devices . Unlike light-weight CNNs that are easy to optimize and integrate with task-specific networks , ViTs are heavy-weight ( e.g. , ViT-B/16 vs. MobileNetv3 : 86 vs. 7.5 million parameters ) , harder to optimize ( Xiao et al. , 2021 ) , need extensive data augmentation and L2 regularization to prevent over-fitting ( Touvron et al. , 2021a ; Wang et al. 
, 2021 ) , and require expensive decoders for down-stream tasks , especially for dense prediction tasks . For instance , a ViT-based segmentation network ( Ranftl et al. , 2021 ) learns about 345 million parameters and achieves similar performance to the CNN-based network , DeepLabv3 ( Chen et al. , 2017 ) , with 59 million parameters . The need for more parameters in ViT-based models is likely because they lack image-specific inductive bias , which is inherent in CNNs ( Xiao et al. , 2021 ) . To build robust and high-performing ViT models , hybrid approaches that combine convolutions and transformers are gaining interest ( Xiao et al. , 2021 ; d ’ Ascoli et al. , 2021 ; Chen et al. , 2021b ) . [ Figure 1 : Visual transformers vs. MobileViT . ( a ) Standard visual transformer ( ViT ) : flattened image patches , a linear projection , positional encoding , L transformer layers , and a linear classifier . ( b ) MobileViT : a stack of Conv-3×3 , MV2 , and MobileViT blocks . Here , Conv-n×n in the MobileViT block represents a standard n × n convolution and MV2 refers to the MobileNetv2 block ; blocks that perform down-sampling are marked with ↓ 2 . ] However , these hybrid models are still heavy-weight and are sensitive to data augmentation . For example , removing CutMix ( Zhong et al. , 2020 ) and DeIT-style ( Touvron et al. , 2021a ) data augmentation causes a significant drop in the ImageNet accuracy of Heo et al . ( 2021 ) ( from 78.1 % to 72.4 % ) . It remains an open question how to combine the strengths of CNNs and transformers to build ViT models for mobile vision tasks . 
Mobile vision tasks require light-weight , low latency , and accurate models that satisfy the device ’ s resource constraints , and are general-purpose so that they can be applied to different tasks ( e.g. , segmentation and detection ) . Note that floating-point operations ( FLOPs ) are not sufficient for low latency on mobile devices because FLOPs ignore several important inference-related factors such as memory access , degree of parallelism , and platform characteristics ( Ma et al. , 2018 ) . For example , the ViT-based method of Heo et al . ( 2021 ) , PiT , has 3× fewer FLOPs than DeIT ( Touvron et al. , 2021a ) but has a similar inference speed on a mobile device ( DeIT vs . PiT on iPhone-12 : 10.99 ms vs. 10.56 ms ) . Therefore , instead of optimizing for FLOPs ( MobileViT FLOPs can be further reduced using existing methods , e.g. , DynamicViT ( Rao et al. , 2021 ) ) , this paper focuses on designing a light-weight ( §3 ) , general-purpose ( §4.1 & §4.2 ) , and low latency ( §4.3 ) network for mobile vision tasks . We achieve this goal with MobileViT , which combines the benefits of CNNs ( e.g. , spatial inductive biases and less sensitivity to data augmentation ) and ViTs ( e.g. , input-adaptive weighting and global processing ) . Specifically , we introduce the MobileViT block that encodes both local and global information in a tensor effectively ( Figure 1b ) . Unlike ViT and its variants ( with and without convolutions ) , MobileViT presents a different perspective to learn global representations . Standard convolution involves three operations : unfolding , local processing , and folding . The MobileViT block replaces local processing in convolutions with global processing using transformers . This allows the MobileViT block to have CNN- and ViT-like properties , which helps it learn better representations with fewer parameters and simple training recipes ( e.g. , basic augmentation ) . 
To the best of our knowledge , this is the first work that shows that light-weight ViTs can achieve light-weight CNN-level performance with simple training recipes across different mobile vision tasks . For a parameter budget of about 5-6 million , MobileViT achieves a top-1 accuracy of 78.4 % on the ImageNet-1k dataset ( Russakovsky et al. , 2015 ) , which is 3.2 % more accurate than MobileNetv3 and has a simple training recipe ( MobileViT vs. MobileNetv3 : 300 vs. 600 epochs ; 1024 vs. 4096 batch size ) . We also observe significant gains in performance when MobileViT is used as a feature backbone in highly optimized mobile vision task-specific architectures . Replacing MNASNet ( Tan et al. , 2019 ) with MobileViT as a feature backbone in SSDLite ( Sandler et al. , 2018 ) resulted in a better ( +1.8 % mAP ) and smaller ( 1.8× ) detection network ( Figure 2 ) . 2 RELATED WORK . Light-weight CNNs . The basic building layer in CNNs is a standard convolutional layer . Because this layer is computationally expensive , several factorization-based methods have been proposed to make it light-weight and mobile-friendly ( e.g. , Jin et al. , 2014 ; Chollet , 2017 ; Mehta et al. , 2020 ) . Of these , separable convolutions of Chollet ( 2017 ) have gained interest , and are widely used across state-of-the-art light-weight CNNs for mobile vision tasks , including MobileNets ( Howard et al. , 2017 ; Sandler et al. , 2018 ; Howard et al. , 2019 ) , ShuffleNetv2 ( Ma et al. , 2018 ) , ESPNetv2 ( Mehta et al. , 2019 ) , MixNet ( Tan & Le , 2019b ) , and MNASNet ( Tan et al. , 2019 ) . These light-weight CNNs are versatile and easy to train . For example , these networks can easily replace the heavyweight backbones ( e.g. , ResNet ( He et al. , 2016 ) ) in existing task-specific models ( e.g. , DeepLabv3 ) to reduce the network size and improve latency . Despite these benefits , one major drawback of these methods is that they are spatially local . 
This work views transformers as convolutions , allowing us to leverage the merits of both convolutions ( e.g. , versatile and simple training ) and transformers ( e.g. , global processing ) to build light-weight ( §3 ) and general-purpose ( §4.1 and §4.2 ) ViTs . Vision transformers . Dosovitskiy et al . ( 2021 ) applied the transformers of Vaswani et al . ( 2017 ) to large-scale image recognition and showed that with extremely large-scale datasets ( e.g. , JFT-300M ) , ViTs can achieve CNN-level accuracy without image-specific inductive bias . With extensive data augmentation , heavy L2 regularization , and distillation , ViTs can be trained on the ImageNet dataset to achieve CNN-level performance ( Touvron et al. , 2021a ; b ; Zhou et al. , 2021 ) . However , unlike CNNs , ViTs show substandard optimizability and are difficult to train . Subsequent works ( e.g. , Graham et al. , 2021 ; Dai et al. , 2021 ; Liu et al. , 2021 ; Wang et al. , 2021 ; Yuan et al. , 2021b ; Chen et al. , 2021b ) show that this substandard optimizability is due to the lack of spatial inductive biases in ViTs . Incorporating such biases using convolutions in ViTs improves their stability and performance . Different designs have been explored to reap the benefits of convolutions and transformers . For instance , ViT-C of Xiao et al . ( 2021 ) adds an early convolutional stem to ViT . CvT ( Wu et al. , 2021 ) modifies the multi-head attention in transformers and uses depth-wise separable convolutions instead of linear projections . BoTNet ( Srinivas et al. , 2021 ) replaces the standard 3×3 convolution in the bottleneck unit of ResNet with multi-head attention . ConViT ( d ’ Ascoli et al. , 2021 ) incorporates soft convolutional inductive biases using a gated positional self-attention . PiT ( Heo et al. , 2021 ) extends ViT with a depth-wise convolution-based pooling layer . 
Though these models can achieve performance competitive with CNNs with extensive augmentation , the majority of these models are heavy-weight . For instance , PiT and CvT learn 6.1× and 1.7× more parameters than EfficientNet ( Tan & Le , 2019a ) , respectively , and achieve similar performance ( top-1 accuracy of about 81.6 % ) on the ImageNet-1k dataset . Also , when these models are scaled down to build light-weight ViT models , their performance is significantly worse than light-weight CNNs . For a parameter budget of about 6 million , the ImageNet-1k accuracy of PiT is 2.2 % less than that of MobileNetv3 . Discussion . Combining convolutions and transformers results in robust and high-performing ViTs as compared to vanilla ViTs . However , an open question here is : how to combine the strengths of convolutions and transformers to build light-weight networks for mobile vision tasks ? This paper focuses on designing light-weight ViT models that outperform state-of-the-art models with simple training recipes . Towards this end , we introduce MobileViT , which combines the strengths of CNNs and ViTs to build a light-weight , general-purpose , and mobile-friendly network . MobileViT brings several novel observations . ( i ) Better performance : For a given parameter budget , MobileViT models achieve better performance as compared to existing light-weight CNNs across different mobile vision tasks ( §4.1 and §4.2 ) . ( ii ) Generalization capability : Generalization capability refers to the gap between training and evaluation metrics . For two models with similar training metrics , the model with better evaluation metrics is more generalizable because it can predict better on an unseen dataset . Unlike previous ViT variants ( with and without convolutions ) which show poor generalization capability even with extensive data augmentation as compared to CNNs ( Dai et al. , 2021 ) , MobileViT shows better generalization capability ( Figure 3 ) . 
( iii ) Robust : A good model should be robust to hyper-parameters ( e.g. , data augmentation and L2 regularization ) because tuning these hyper-parameters is time- and resource-consuming . Unlike most ViT-based models , MobileViT models train with basic augmentation and are less sensitive to L2 regularization ( §C ) . | This work proposes a hybrid backbone which combines existing MV2 block and new MobileViT block for efficient classification, detection and segmentation. The core idea with MobileViT block is to use both convolution and self-attention block to aggregate local and global feature, respectively. Experiments on IN-1K, COCO and PASCAL VOC have shown quite promising results. | SP:9b2ac905ecf71d7207dab06d2367d7259ac4968b |
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer | 1 INTRODUCTION . Self-attention-based models , especially vision transformers ( ViTs ; Figure 1a ; Dosovitskiy et al. , 2021 ) , are an alternative to convolutional neural networks ( CNNs ) to learn visual representations . Briefly , ViT divides an image into a sequence of non-overlapping patches and then learns interpatch representations using multi-headed self-attention in transformers ( Vaswani et al. , 2017 ) . The general trend is to increase the number of parameters in ViT networks to improve the performance ( e.g. , Touvron et al. , 2021a ; Graham et al. , 2021 ; Wu et al. , 2021 ) . However , these performance improvements come at the cost of model size ( network parameters ) and latency . Many real-world applications ( e.g. , augmented reality and autonomous wheelchairs ) require visual recognition tasks ( e.g. , object detection and semantic segmentation ) to run on resource-constrained mobile devices in a timely fashion . To be effective , ViT models for such tasks should be light-weight and fast . Even if the model size of ViT models is reduced to match the resource constraints of mobile devices , their performance is significantly worse than light-weight CNNs . For instance , for a parameter budget of about 5-6 million , DeIT ( Touvron et al. , 2021a ) is 3 % less accurate than MobileNetv3 ( Howard et al. , 2019 ) . Therefore , the need to design light-weight ViT models is imperative . Light-weight CNNs have powered many mobile vision tasks . However , ViT-based networks are still far from being used on such devices . Unlike light-weight CNNs that are easy to optimize and integrate with task-specific networks , ViTs are heavy-weight ( e.g. , ViT-B/16 vs. MobileNetv3 : 86 vs. 7.5 million parameters ) , harder to optimize ( Xiao et al. , 2021 ) , need extensive data augmentation and L2 regularization to prevent over-fitting ( Touvron et al. , 2021a ; Wang et al. 
, 2021 ) , and require expensive decoders for down-stream tasks , especially for dense prediction tasks . For instance , a ViT-based segmentation network ( Ranftl et al. , 2021 ) learns about 345 million parameters and achieves similar performance to the CNN-based network , DeepLabv3 ( Chen et al. , 2017 ) , with 59 million parameters . The need for more parameters in ViT-based models is likely because they lack image-specific inductive bias , which is inherent in CNNs ( Xiao et al. , 2021 ) . To build robust and high-performing ViT models , hybrid approaches that combine convolutions and transformers are gaining interest ( Xiao et al. , 2021 ; d ’ Ascoli et al. , 2021 ; Chen et al. , 2021b ) . [ Figure 1 : Visual transformers vs. MobileViT . ( a ) Standard visual transformer ( ViT ) : flattened image patches , a linear projection , positional encoding , L transformer layers , and a linear classifier . ( b ) MobileViT : a stack of Conv-3×3 , MV2 , and MobileViT blocks . Here , Conv-n×n in the MobileViT block represents a standard n × n convolution and MV2 refers to the MobileNetv2 block ; blocks that perform down-sampling are marked with ↓ 2 . ] However , these hybrid models are still heavy-weight and are sensitive to data augmentation . For example , removing CutMix ( Zhong et al. , 2020 ) and DeIT-style ( Touvron et al. , 2021a ) data augmentation causes a significant drop in the ImageNet accuracy of Heo et al . ( 2021 ) ( from 78.1 % to 72.4 % ) . It remains an open question how to combine the strengths of CNNs and transformers to build ViT models for mobile vision tasks . 
Mobile vision tasks require light-weight , low latency , and accurate models that satisfy the device ’ s resource constraints , and are general-purpose so that they can be applied to different tasks ( e.g. , segmentation and detection ) . Note that floating-point operations ( FLOPs ) are not sufficient for low latency on mobile devices because FLOPs ignore several important inference-related factors such as memory access , degree of parallelism , and platform characteristics ( Ma et al. , 2018 ) . For example , the ViT-based method of Heo et al . ( 2021 ) , PiT , has 3× fewer FLOPs than DeIT ( Touvron et al. , 2021a ) but has a similar inference speed on a mobile device ( DeIT vs . PiT on iPhone-12 : 10.99 ms vs. 10.56 ms ) . Therefore , instead of optimizing for FLOPs ( MobileViT FLOPs can be further reduced using existing methods , e.g. , DynamicViT ( Rao et al. , 2021 ) ) , this paper focuses on designing a light-weight ( §3 ) , general-purpose ( §4.1 & §4.2 ) , and low latency ( §4.3 ) network for mobile vision tasks . We achieve this goal with MobileViT , which combines the benefits of CNNs ( e.g. , spatial inductive biases and less sensitivity to data augmentation ) and ViTs ( e.g. , input-adaptive weighting and global processing ) . Specifically , we introduce the MobileViT block that encodes both local and global information in a tensor effectively ( Figure 1b ) . Unlike ViT and its variants ( with and without convolutions ) , MobileViT presents a different perspective to learn global representations . Standard convolution involves three operations : unfolding , local processing , and folding . The MobileViT block replaces local processing in convolutions with global processing using transformers . This allows the MobileViT block to have CNN- and ViT-like properties , which helps it learn better representations with fewer parameters and simple training recipes ( e.g. , basic augmentation ) . 
To the best of our knowledge , this is the first work that shows that light-weight ViTs can achieve light-weight CNN-level performance with simple training recipes across different mobile vision tasks . For a parameter budget of about 5-6 million , MobileViT achieves a top-1 accuracy of 78.4 % on the ImageNet-1k dataset ( Russakovsky et al. , 2015 ) , which is 3.2 % more accurate than MobileNetv3 and has a simple training recipe ( MobileViT vs. MobileNetv3 : 300 vs. 600 epochs ; 1024 vs. 4096 batch size ) . We also observe significant gains in performance when MobileViT is used as a feature backbone in highly optimized mobile vision task-specific architectures . Replacing MNASNet ( Tan et al. , 2019 ) with MobileViT as a feature backbone in SSDLite ( Sandler et al. , 2018 ) resulted in a better ( +1.8 % mAP ) and smaller ( 1.8× ) detection network ( Figure 2 ) . 2 RELATED WORK . Light-weight CNNs . The basic building layer in CNNs is a standard convolutional layer . Because this layer is computationally expensive , several factorization-based methods have been proposed to make it light-weight and mobile-friendly ( e.g. , Jin et al. , 2014 ; Chollet , 2017 ; Mehta et al. , 2020 ) . Of these , separable convolutions of Chollet ( 2017 ) have gained interest , and are widely used across state-of-the-art light-weight CNNs for mobile vision tasks , including MobileNets ( Howard et al. , 2017 ; Sandler et al. , 2018 ; Howard et al. , 2019 ) , ShuffleNetv2 ( Ma et al. , 2018 ) , ESPNetv2 ( Mehta et al. , 2019 ) , MixNet ( Tan & Le , 2019b ) , and MNASNet ( Tan et al. , 2019 ) . These light-weight CNNs are versatile and easy to train . For example , these networks can easily replace the heavyweight backbones ( e.g. , ResNet ( He et al. , 2016 ) ) in existing task-specific models ( e.g. , DeepLabv3 ) to reduce the network size and improve latency . Despite these benefits , one major drawback of these methods is that they are spatially local . 
This work views transformers as convolutions , allowing us to leverage the merits of both convolutions ( e.g. , versatile and simple training ) and transformers ( e.g. , global processing ) to build light-weight ( §3 ) and general-purpose ( §4.1 and §4.2 ) ViTs . Vision transformers . Dosovitskiy et al . ( 2021 ) applied the transformers of Vaswani et al . ( 2017 ) to large-scale image recognition and showed that with extremely large-scale datasets ( e.g. , JFT-300M ) , ViTs can achieve CNN-level accuracy without image-specific inductive bias . With extensive data augmentation , heavy L2 regularization , and distillation , ViTs can be trained on the ImageNet dataset to achieve CNN-level performance ( Touvron et al. , 2021a ; b ; Zhou et al. , 2021 ) . However , unlike CNNs , ViTs show substandard optimizability and are difficult to train . Subsequent works ( e.g. , Graham et al. , 2021 ; Dai et al. , 2021 ; Liu et al. , 2021 ; Wang et al. , 2021 ; Yuan et al. , 2021b ; Chen et al. , 2021b ) show that this substandard optimizability is due to the lack of spatial inductive biases in ViTs . Incorporating such biases using convolutions in ViTs improves their stability and performance . Different designs have been explored to reap the benefits of convolutions and transformers . For instance , ViT-C of Xiao et al . ( 2021 ) adds an early convolutional stem to ViT . CvT ( Wu et al. , 2021 ) modifies the multi-head attention in transformers and uses depth-wise separable convolutions instead of linear projections . BoTNet ( Srinivas et al. , 2021 ) replaces the standard 3×3 convolution in the bottleneck unit of ResNet with multi-head attention . ConViT ( d ’ Ascoli et al. , 2021 ) incorporates soft convolutional inductive biases using a gated positional self-attention . PiT ( Heo et al. , 2021 ) extends ViT with a depth-wise convolution-based pooling layer . 
Though these models can achieve performance competitive with CNNs with extensive augmentation , the majority of these models are heavy-weight . For instance , PiT and CvT learn 6.1× and 1.7× more parameters than EfficientNet ( Tan & Le , 2019a ) , respectively , and achieve similar performance ( top-1 accuracy of about 81.6 % ) on the ImageNet-1k dataset . Also , when these models are scaled down to build light-weight ViT models , their performance is significantly worse than light-weight CNNs . For a parameter budget of about 6 million , the ImageNet-1k accuracy of PiT is 2.2 % less than that of MobileNetv3 . Discussion . Combining convolutions and transformers results in robust and high-performing ViTs as compared to vanilla ViTs . However , an open question here is : how to combine the strengths of convolutions and transformers to build light-weight networks for mobile vision tasks ? This paper focuses on designing light-weight ViT models that outperform state-of-the-art models with simple training recipes . Towards this end , we introduce MobileViT , which combines the strengths of CNNs and ViTs to build a light-weight , general-purpose , and mobile-friendly network . MobileViT brings several novel observations . ( i ) Better performance : For a given parameter budget , MobileViT models achieve better performance as compared to existing light-weight CNNs across different mobile vision tasks ( §4.1 and §4.2 ) . ( ii ) Generalization capability : Generalization capability refers to the gap between training and evaluation metrics . For two models with similar training metrics , the model with better evaluation metrics is more generalizable because it can predict better on an unseen dataset . Unlike previous ViT variants ( with and without convolutions ) which show poor generalization capability even with extensive data augmentation as compared to CNNs ( Dai et al. , 2021 ) , MobileViT shows better generalization capability ( Figure 3 ) . 
( iii ) Robust : A good model should be robust to hyper-parameters ( e.g. , data augmentation and L2 regularization ) because tuning these hyper-parameters is time- and resource-consuming . Unlike most ViT-based models , MobileViT models train with basic augmentation and are less sensitive to L2 regularization ( §C ) . | This paper introduces a lightweight and general-purpose vision transformer, termed MobileViT, for mobile devices. It attempts to build a model which can combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks. MobileViT achieves top-1 accuracy of 78.4% on ImageNet-1k dataset (classification task), 27.7 mAP on MS-COCO dataset (object detection task), and 79.1 mIOU on VOC 2012 dataset (semantic segmentation task). | SP:9b2ac905ecf71d7207dab06d2367d7259ac4968b |
Learned Index with Dynamic $\epsilon$ | 1 INTRODUCTION . Data indexing ( Graefe & Kuno , 2011 ; Wang et al. , 2018 ; Luo & Carey , 2020 ; Zhou et al. , 2020 ) , which stores keys and corresponding payloads with designed structures , supports efficient query operations over data and benefits various data retrieval applications . Recently , Machine Learning ( ML ) models have been incorporated into the design of index structures , leading to substantial improvements in terms of both storage space and querying efficiency ( Kipf et al. , 2019 ; Ferragina & Vinciguerra , 2020a ; Mitzenmacher , 2018 ; Vaidya et al. , 2021 ) . The key insight behind this trending topic of “ learned index ” is that the data to be indexed contain useful distribution information and such information can be utilized by trainable ML models that map the keys to their stored positions . State-of-the-art learned index methods ( Galakatos et al. , 2019 ; Kipf et al. , 2020 ; Ferragina & Vinciguerra , 2020b ; Ferragina et al. , 2020 ) adopt piece-wise linear segments to approximate the data distribution and introduce an important pre-defined parameter ε . These methods ensure that the maximal prediction error of each learned segment is no more than ε and provide a worst-case guarantee of querying efficiency . By tuning ε , various space-time preferences from users can be met . For example , a relatively large ε can result in a small index size while having large prediction errors , and on the other hand , a relatively small ε provides small prediction errors while having more learned segments and thus a large index size . However , existing learned index methods implicitly assume that the whole dataset to be indexed contains the same characteristics for different localities and thus adopt the same ε for all the learned segments , leading to sub-optimal index performance . 
More importantly , the impact of ε on index performance is intrinsically linked to data characteristics , which are not fully explored and utilized by existing learned index methods . Motivated by these , in this paper , we theoretically analyze the impact of ε on index performance , and link the characteristics of data localities with the dynamic adjustments of ε . Based on the derived theoretical results , we propose an efficient and pluggable learned index framework that dynamically adjusts ε in a principled way . To be specific , under the setting of an illustrative learned index method MET ( Ferragina et al. , 2020 ) , we present a novel analysis of the prediction error bounds of each segment that links ε with the mean and variance of data localities . The segment-wise prediction error embeds the space-error trade-off as it is the product of the number of covered keys and the mean absolute error , which determine the index size and preciseness respectively . The derived mathematical relationships enable our framework to fully explore diverse data localities with an ε-learner module , which learns to predict the impact of ε on the index performance and adaptively chooses a suitable ε to achieve a better space-time trade-off . We apply the proposed framework to several state-of-the-art ( SOTA ) learned index methods , and conduct a series of experiments on three widely adopted real-world datasets . Compared with the original learned index methods with a fixed ε , our dynamic versions achieve significant index performance improvements with better space-time trade-offs . We also conduct various experiments to verify the necessity and effectiveness of the proposed framework , and provide both an ablation study and a case study to understand how the proposed framework works . 2 BACKGROUND . 2.1 ε-BOUNDED LEARNED INDEX . 
Given a dataset D = { ( x , y ) |x ∈ X , y ∈ Y } , X is the set of keys over a universe U such as reals or integers , and Y is the set of positions where the keys and corresponding payloads are stored . An index such as the B+-tree ( Abel , 1984 ) aims to build a compact structure to support efficient query operations over D. Typically , the keys are assumed to be sorted in ascending order to satisfy the key-position monotonicity , i.e. , for any two keys , xi > xj iff their positions yi > yj , such that the range query ( X ∩ [ xlow , xhigh ] ) can be handled . Recently , learned index methods ( Kraska et al. , 2018 ; Li et al. , 2019 ; Tang et al. , 2020 ; Dai et al. , 2020 ; Crotty , 2021 ) leverage ML models to mine useful distribution information from D , and incorporate such information to boost the index performance . To look up a given key x , the learned index first predicts position ŷ using the learned models , and subsequently finds the stored true position y based on ŷ with a binary search or exponential search . Thus the querying time consists of the inference time of the learned models and the search time in O ( log ( |ŷ − y| ) ) . By modeling the data distribution information , learned indexes achieve faster query speed than the traditional B+-tree index , while using several orders-of-magnitude smaller storage space ( Ding et al. , 2020 ; Galakatos et al. , 2019 ; Ferragina & Vinciguerra , 2020b ; Kipf et al. , 2020 ; Marcus et al. , 2020 ) . Many existing learned index methods adopt piece-wise linear segments to approximate the distribution of D due to their effectiveness and low computing cost , and introduce the parameter ε to provide a worst-case preciseness guarantee and a tunable knob to meet various space-time trade-off preferences . Here we briefly introduce the SOTA ε-bounded learned index methods that are most closely related to our work , and refer to the review chapter of ( Ferragina & Vinciguerra , 2020a ) for details of other methods . 
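The ε-bounded lookup described above ( predict a position with a learned segment , then search for the true position near the prediction ) can be sketched as follows . This is an illustrative sketch , not any paper's implementation ; the function name `lookup` , the toy key array , and the hand-fit segment are our own assumptions :

```python
from bisect import bisect_left

def lookup(key, xs, segment, eps):
    # Predict the position with the linear segment y = a*x + b, then
    # binary-search only the eps-bounded window around the prediction,
    # which costs O(log(eps)) instead of O(log(|xs|)).
    a, b = segment
    y_hat = int(round(a * key + b))
    lo = max(0, y_hat - eps)
    hi = min(len(xs), y_hat + eps + 1)
    pos = lo + bisect_left(xs[lo:hi], key)
    if pos < len(xs) and xs[pos] == key:
        return pos
    return None  # key not stored

xs = [2, 5, 7, 11, 13, 17, 19, 23]  # sorted keys; position = index
seg = (1 / 3, 0.0)                  # hand-fit segment whose error is <= 2 here
print(lookup(11, xs, seg, eps=2))   # -> 3
```

The worst-case query cost O ( log ( |ŷ − y| ) ) mentioned in the text is exactly the cost of the windowed binary search , since |ŷ − y| ≤ ε by construction .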
Specifically , Galakatos et al . ( 2019 ) greedily learn a set of piece-wise linear segments in a one-pass manner . Ferragina & Vinciguerra ( 2020b ) adopt another one-pass algorithm that achieves the optimal number of learned segments . Kipf et al . ( 2020 ) introduce a radix structure to organize the learned segments . Ferragina et al . ( 2020 ) adopt a fixed slope setting for all learned segments . Existing methods constrain all learned segments with the same ε , i.e. , the learning process ensures that the maximum prediction error is within a pre-defined ε , where ε ∈ Z > 1 . In this paper , we will discuss the impact of ε in more depth and investigate how to enhance existing learned index methods from a new perspective : dynamic adjustment of ε considering the diversity of different data localities . 2.2 MEAN EXIT TIME ( MET ) ALGORITHM . Here we describe an illustrative learned index algorithm MET ( Ferragina et al. , 2020 ) . Specifically , for any two consecutive keys of D , suppose their key interval ( xi − xi−1 ) is drawn according to a random process { Gi } i∈N , where Gi is a positive independent and identically distributed ( i.i.d . ) random variable whose mean is µ and variance is σ2 . The MET algorithm learns linear segments S = [ S1 , ... , Si , ... , SN ] in one pass , where Si : y = ai x + bi is the learnable linear segment and N is the total number of learned segments . The learning process is as follows : the current linear segment fixes the slope ai = 1/µ and goes through the first available data point , thus bi is also determined . Then the segment covers as many data points as possible until a data point , say ( x′ , y′ ) , achieves a prediction error larger than ε . The violation of ε triggers a new linear segment , the data point ( x′ , y′ ) becomes its first available data point , and the process repeats until no data point is available . 
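The one-pass MET learning loop described above can be sketched in a few lines of Python . This is a hedged re-implementation for exposition only ; the helper name `met_segments` and the synthetic data are our assumptions , not the authors' code :

```python
import numpy as np

def met_segments(xs, ys, eps, mu):
    # One pass over the sorted keys: each segment has the fixed slope 1/mu,
    # goes through its first available point, and covers keys until a point
    # with absolute prediction error larger than eps triggers a new segment.
    segments = []          # (slope, intercept, index of first covered point)
    i, n = 0, len(xs)
    while i < n:
        a = 1.0 / mu                 # fixed slope a_i = 1/mu
        b = ys[i] - a * xs[i]        # segment goes through the first point
        j = i + 1
        while j < n and abs(ys[j] - (a * xs[j] + b)) <= eps:
            j += 1                   # extend until eps is violated
        segments.append((a, b, i))
        i = j                        # the violating point starts the next segment
    return segments

# synthetic keys with i.i.d. integer gaps whose mean is mu ~= 2
xs = np.cumsum(np.random.default_rng(0).integers(1, 4, size=100))
ys = np.arange(len(xs))
segs = met_segments(xs, ys, eps=4, mu=2.0)
```

By construction every covered key is predicted within ±eps , so the number of segments N ( and hence the index size ) shrinks as eps grows , which is the space-error trade-off the analysis below quantifies .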
Although the MET algorithm adopts a simple learning mechanism , it makes the theoretical analysis convenient by modeling the learning process as a random walk process . Other -bounded learned index methods such as FITing-Tree ( Galakatos et al. , 2019 ) , PGM ( Ferragina & Vinciguerra , 2020b ) and Radix-Spline ( Kipf et al. , 2020 ) learn linear segments in a similar manner while having different mechanisms to determine the parameters of { Si } . Ferragina et al . ( 2020 ) reveal the relationship between and index size performance based on MET . In Section 3.3 , we present novel analysis about the impact of on not only the index size , but also the index preciseness and a comprehensive trade-off quantity , which facilitates the proposed dynamic adjustment . 3 LEARN TO VARY . 3.1 PROBLEM FORMULATION AND MOTIVATION . Before introducing the proposed framework , we first formulate the task of learning index from data with guarantee , and provide some discussions about why we need to vary . Given a dataset D to be indexed and an -bounded learned index algorithm A , we aim to learn segments S = [ S1 , ... , Si ... , SN ] as index structure such that the size of S and ∑N i=1 MAE ( Di|Si ) are both small , where Si is the i-th learned linear segment with the parameter i , MAE is the mean absolute prediction error , and Di ⊂ D is the data whose keys are covered by Si . For the remaining data D \ ⋃ j < iDj , the algorithm A repeatedly checks whether the prediction error of new data point violates the given i , and outputs the learned segment Si that covers Di . When all the is for i ∈ [ N ] take the same value , the problem becomes the one that existing learned index methods are dealing with . Now let ’ s examine the effect of parameter . To query a specific data point , say ( x , y ) , we first need to find the specific segment S′ that covers x , and then search its true position y based on the estimated one ŷ = S′ ( x ) . 
For the first step , we can find S′ from S in O ( log ( N ) ) ; for the second step , the search of y based on ŷ can be done in O ( log ( |ŷ − y| ) ) . In summary , we can find the true position of the queried data point in O ( log ( N ) + log ( |ŷ − y| ) ) 1 . From here , we can see that the parameter plays an important role to trade off two contradictory performance terms , i.e. , the size of the learned index N , and MAE of the whole data MAE ( D|S ) . If we adopt a small , the maximal prediction error constraint is more frequently violated , leading to a large N ; meanwhile , the preciseness of learned index is improved , leading to a small MAE ( D|S ) . On the other hand , with a large , we will get a more compact learned index ( i.e. , a small N ) with larger prediction errors ( i.e. , a large MAE ( D|S ) ) . Thus these two inversely changed terms jointly impact the querying efficiency . Actually , the effect of on index performance is intrinsically linked to the characteristic of the data to be indexed . For real-world datasets , an important observation is that the linearity degree varies in different data localities . Recall that we use piece-wise linear segments to fit the data , and determines the partition and the fitness of the segments . By varying , we can adapt to the local variations of D and adjust the partition such that each learned segment fits the data better . Formally , let ’ s consider the quantity SegErri that is defined as the total prediction error within a segment Si , i.e. , SegErri = ∑ ( x , y ) ∈Di |y − Si ( x ) | , which is also the product of the number of covered keys Len ( Di ) and the mean absolute error MAE ( Di|Si ) . Note that a large Len ( Di ) leads to a small N since |D| = ∑N i=1 Len ( Di ) . From this view , the quantity SegErri internally reflects the space-error trade-off . Later we will show how to leverage this quantity to dynamically adjust . 
| This paper considers learning-based methods for constructing index structures, a fundamental data structure. Recent work has given learning-based index structures which guarantee a maximum error of $\epsilon$ in the predicted index. At a high level, this is done by breaking up the data set into several segments, and then a linear function is fitted to each segment, while maintaining the $\epsilon$ error guarantee. The main idea behind this paper is to allow $\epsilon$ to vary across the different segments. This is motivated by the fact that answering a query requires first finding the correct segment via a binary search, and then performing another binary search within the segment using the fitted linear function. The runtime of the first step depends on the number of segments and the runtime of the second step depends on the prediction error. By allowing the prediction error of some segments to go above $\epsilon$, the number of segments can be decreased, allowing for better trade-offs in some cases. The main conceptual contribution is a method for tuning $\epsilon$ across different segments, which can be plugged directly into pre-existing learned index methods. The authors give a theoretical analysis based on a stochastic analysis of the MET algorithm from prior work which is then leveraged to design their method for tuning $\epsilon$ across different segments. An empirical study of their algorithm is performed on standard datasets for this area, comparing to several baselines from recent work in this area which use a fixed $\epsilon$. The results show that the proposed $\epsilon$ tuning method can improve several learned index methods from prior work. | SP:123d38f124b95d956b18e6906a2e26171fb94224 |
Learned Index with Dynamic $\epsilon$ | 1 INTRODUCTION . Data indexing ( Graefe & Kuno , 2011 ; Wang et al. , 2018 ; Luo & Carey , 2020 ; Zhou et al. , 2020 ) , which stores keys and corresponding payloads with designed structures , supports efficient query operations over data and benefits various data retrieval applications . Recently , Machine Learning ( ML ) models have been incorporated into the design of index structures , leading to substantial improvements in terms of both storage space and querying efficiency ( Kipf et al. , 2019 ; Ferragina & Vinciguerra , 2020a ; Mitzenmacher , 2018 ; Vaidya et al. , 2021 ) . The key insight behind this trending topic of “ learned index ” is that the data to be indexed contain useful distribution information and such information can be utilized by trainable ML models that map the keys to their stored positions . State-of-the-art learned index methods ( Galakatos et al. , 2019 ; Kipf et al. , 2020 ; Ferragina & Vinciguerra , 2020b ; Ferragina et al. , 2020 ) adopt piece-wise linear segments to approximate the data distribution and introduce an important pre-defined parameter ε . These methods ensure that the maximal prediction error of each learned segment is no more than ε and provide a worst-case guarantee of querying efficiency . By tuning ε , various space-time preferences from users can be met . For example , a relatively large ε can result in a small index size while having large prediction errors , and on the other hand , a relatively small ε provides small prediction errors while having more learned segments and thus a large index size . However , existing learned index methods implicitly assume that the whole dataset to be indexed contains the same characteristics for different localities and thus adopt the same ε for all the learned segments , leading to sub-optimal index performance .
More importantly , the impact of ε on index performance is intrinsically linked to data characteristics , which are not fully explored and utilized by existing learned index methods . Motivated by these observations , in this paper , we theoretically analyze the impact of ε on index performance , and link the characteristics of data localities with the dynamic adjustment of ε . Based on the derived theoretical results , we propose an efficient and pluggable learned index framework that dynamically adjusts ε in a principled way . To be specific , under the setting of an illustrative learned index method MET ( Ferragina et al. , 2020 ) , we present a novel analysis of the prediction error bounds of each segment that links ε with the mean and variance of data localities . The segment-wise prediction error embeds the space-error trade-off as it is the product of the number of covered keys and the mean absolute error , which determine the index size and preciseness respectively . The derived mathematical relationships enable our framework to fully explore diverse data localities with an ε-learner module , which learns to predict the impact of ε on the index performance and adaptively chooses a suitable ε to achieve a better space-time trade-off . We apply the proposed framework to several state-of-the-art ( SOTA ) learned index methods , and conduct a series of experiments on three widely adopted real-world datasets . Compared with the original learned index methods with fixed ε , our dynamic versions achieve significant index performance improvements with better space-time trade-offs . We also conduct various experiments to verify the necessity and effectiveness of the proposed framework , and provide both an ablation study and a case study to understand how the proposed framework works . 2 BACKGROUND . 2.1 ε-BOUNDED LEARNED INDEX .
Given a dataset D = { ( x , y ) |x ∈ X , y ∈ Y } , X is the set of keys over a universe U such as reals or integers , and Y is the set of positions where the keys and corresponding payloads are stored . An index such as the B+-tree ( Abel , 1984 ) aims to build a compact structure to support efficient query operations over D. Typically , the keys are assumed to be sorted in ascending order to satisfy the key-position monotonicity , i.e. , for any two keys , xi > xj iff their positions yi > yj , such that the range query ( X ∩ [ xlow , xhigh ] ) can be handled . Recently , learned index methods ( Kraska et al. , 2018 ; Li et al. , 2019 ; Tang et al. , 2020 ; Dai et al. , 2020 ; Crotty , 2021 ) leverage ML models to mine useful distribution information from D , and incorporate such information to boost the index performance . To look up a given key x , the learned index first predicts the position ŷ using the learned models , and subsequently finds the stored true position y based on ŷ with a binary search or exponential search . Thus the querying time consists of the inference time of the learned models and the search time in O ( log ( |ŷ − y| ) ) . By modeling the data distribution information , learned indexes achieve faster query speed than the traditional B+-tree index , meanwhile using several orders-of-magnitude smaller storage space ( Ding et al. , 2020 ; Galakatos et al. , 2019 ; Ferragina & Vinciguerra , 2020b ; Kipf et al. , 2020 ; Marcus et al. , 2020 ) . Many existing learned index methods adopt piece-wise linear segments to approximate the distribution of D due to their effectiveness and low computing cost , and introduce the parameter ε to provide a worst-case preciseness guarantee and a tunable knob to meet various space-time trade-off preferences . Here we briefly introduce the SOTA ε-bounded learned index methods that are most closely related to our work , and refer to the review chapter of ( Ferragina & Vinciguerra , 2020a ) for details of other methods .
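As a concrete illustration of the lookup procedure described above (predict ŷ with a learned model, then search for the true position near ŷ), here is a minimal Python sketch. The function name `lookup` and the toy linear model are illustrative assumptions, not code from any of the cited systems.

```python
import bisect

def lookup(keys, key, slope, intercept, eps):
    """Predict a position with a linear model, then binary-search the
    true position within the error bound around the prediction.
    Valid only if the model's maximum prediction error is at most eps."""
    n = len(keys)
    y_hat = int(round(slope * key + intercept))
    # The true position must lie in [y_hat - eps, y_hat + eps].
    lo = max(0, y_hat - eps)
    hi = min(n, y_hat + eps + 1)
    pos = bisect.bisect_left(keys, key, lo, hi)
    if pos < n and keys[pos] == key:
        return pos
    return -1  # key not present

keys = [3, 7, 11, 15, 19, 23, 27]    # sorted keys with spacing ~4
slope, intercept = 1 / 4, -3 / 4     # toy model: y ≈ (x - 3) / 4
assert lookup(keys, 15, slope, intercept, eps=2) == 3
```

The search cost is O(log(|ŷ − y|)) because the binary search is confined to the ε-sized window around the prediction rather than the whole key array.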
Specifically , Galakatos et al . ( 2019 ) greedily learn a set of piece-wise linear segments in a one-pass manner . Ferragina & Vinciguerra ( 2020b ) adopt another one-pass algorithm that achieves the optimal number of learned segments . Kipf et al . ( 2020 ) introduce a radix structure to organize the learned segments . Ferragina et al . ( 2020 ) adopt a fixed slope setting for all learned segments . Existing methods constrain all learned segments with the same ε , i.e. , the learning process ensures that the maximum prediction error is within a pre-defined ε , where ε ∈ Z and ε > 1 . In this paper , we will discuss the impact of ε in more depth and investigate how to enhance existing learned index methods from a new perspective : dynamic adjustment of ε considering the diversity of different data localities . 2.2 MEAN EXIT TIME ( MET ) ALGORITHM . Here we describe an illustrative learned index algorithm MET ( Ferragina et al. , 2020 ) . Specifically , for any two consecutive keys of D , suppose their key interval ( xi − xi−1 ) is drawn according to a random process { Gi } i∈N , where Gi is a positive independent and identically distributed ( i.i.d . ) random variable whose mean is µ and variance is σ2 . The MET algorithm learns linear segments S = [ S1 , ... , Si , ... , SN ] in one pass , where Si : y = aix + bi is the learnable linear segment and N is the total number of learned segments . The learning process is as follows : The current linear segment fixes the slope ai = 1/µ and goes through the first available data point , thus bi is also determined . Then the segment covers as many data points as possible until a data point , say ( x′ , y′ ) , achieves a prediction error larger than ε . The violation of ε triggers a new linear segment , the data point ( x′ , y′ ) becomes the first available data point , and the process repeats until no data point is available .
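The one-pass MET learning process above can be sketched in a few lines of Python. Positions are taken to be the indices 0..n−1 of the sorted keys; the function name `met_segments` and the (slope, intercept, first_index) representation are illustrative choices, not the paper's code.

```python
def met_segments(xs, mu, eps):
    """One-pass MET-style segmentation (sketch): every segment has the
    fixed slope 1/mu, passes through its first available point, and is
    closed as soon as a point's prediction error exceeds eps."""
    segments = []           # list of (slope, intercept, first_index)
    i = 0
    n = len(xs)
    while i < n:
        a = 1.0 / mu
        b = i - a * xs[i]   # segment goes through (xs[i], i)
        j = i + 1
        while j < n and abs(a * xs[j] + b - j) <= eps:
            j += 1
        segments.append((a, b, i))
        i = j               # the violating point starts the next segment
    return segments

# Two locally linear runs with a jump in between produce two segments.
segs = met_segments([0, 1, 2, 3, 10, 11, 12, 13], mu=1.0, eps=0.5)
assert len(segs) == 2
```

A smaller `eps` closes segments sooner (larger N, smaller errors), which is exactly the trade-off the text analyzes.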
Although the MET algorithm adopts a simple learning mechanism , it makes the theoretical analysis convenient by modeling the learning process as a random walk process . Other ε-bounded learned index methods such as FITing-Tree ( Galakatos et al. , 2019 ) , PGM ( Ferragina & Vinciguerra , 2020b ) and Radix-Spline ( Kipf et al. , 2020 ) learn linear segments in a similar manner while having different mechanisms to determine the parameters of { Si } . Ferragina et al . ( 2020 ) reveal the relationship between ε and index size based on MET . In Section 3.3 , we present a novel analysis of the impact of ε on not only the index size , but also the index preciseness and a comprehensive trade-off quantity , which facilitates the proposed dynamic adjustment . 3 LEARN TO VARY ε . 3.1 PROBLEM FORMULATION AND MOTIVATION . Before introducing the proposed framework , we first formulate the task of learning an index from data with an ε guarantee , and provide some discussion about why we need to vary ε . Given a dataset D to be indexed and an ε-bounded learned index algorithm A , we aim to learn segments S = [ S1 , ... , Si ... , SN ] as the index structure such that the size of S and ∑N i=1 MAE ( Di|Si ) are both small , where Si is the i-th learned linear segment with the parameter εi , MAE is the mean absolute prediction error , and Di ⊂ D is the data whose keys are covered by Si . For the remaining data D \ ⋃ j < iDj , the algorithm A repeatedly checks whether the prediction error of a new data point violates the given εi , and outputs the learned segment Si that covers Di . When all the εi for i ∈ [ N ] take the same value , the problem becomes the one that existing learned index methods are dealing with . Now let ’ s examine the effect of the parameter ε . To query a specific data point , say ( x , y ) , we first need to find the specific segment S′ that covers x , and then search its true position y based on the estimated one ŷ = S′ ( x ) .
For the first step , we can find S′ from S in O ( log ( N ) ) ; for the second step , the search of y based on ŷ can be done in O ( log ( |ŷ − y| ) ) . In summary , we can find the true position of the queried data point in O ( log ( N ) + log ( |ŷ − y| ) ) . From here , we can see that the parameter ε plays an important role to trade off two contradictory performance terms , i.e. , the size of the learned index N , and the MAE of the whole data MAE ( D|S ) . If we adopt a small ε , the maximal prediction error constraint is more frequently violated , leading to a large N ; meanwhile , the preciseness of the learned index is improved , leading to a small MAE ( D|S ) . On the other hand , with a large ε , we will get a more compact learned index ( i.e. , a small N ) with larger prediction errors ( i.e. , a large MAE ( D|S ) ) . Thus these two inversely varying terms jointly impact the querying efficiency . Actually , the effect of ε on index performance is intrinsically linked to the characteristics of the data to be indexed . For real-world datasets , an important observation is that the degree of linearity varies in different data localities . Recall that we use piece-wise linear segments to fit the data , and ε determines the partition and the fitness of the segments . By varying ε , we can adapt to the local variations of D and adjust the partition such that each learned segment fits the data better . Formally , let ’ s consider the quantity SegErri that is defined as the total prediction error within a segment Si , i.e. , SegErri = ∑ ( x , y ) ∈Di |y − Si ( x ) | , which is also the product of the number of covered keys Len ( Di ) and the mean absolute error MAE ( Di|Si ) . Note that a large Len ( Di ) leads to a small N since |D| = ∑N i=1 Len ( Di ) . From this view , the quantity SegErri internally reflects the space-error trade-off . Later we will show how to leverage this quantity to dynamically adjust ε . | This paper mainly studies the index structure problem.
Existing learned index methods use a fixed ε value for all the learned segments. In this paper, the authors contribute a deeper understanding of how ε impacts index performance, and enlighten the exploration of fine-grained trade-off adjustments by considering local data characteristics. Experiments with real-world datasets and several state-of-the-art methods demonstrate the efficiency, effectiveness, and usability of the proposed framework. | SP:123d38f124b95d956b18e6906a2e26171fb94224 |
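The quantity SegErr_i discussed in the paper text, the total absolute prediction error of one segment, factors into Len(D_i) · MAE(D_i|S_i); a small Python sketch (with an illustrative helper name `seg_err`) makes the identity concrete.

```python
def seg_err(points, slope, intercept):
    """Total absolute prediction error of one segment S(x) = slope*x + intercept.
    By definition this equals Len(D_i) * MAE(D_i | S_i)."""
    errors = [abs(y - (slope * x + intercept)) for x, y in points]
    total = sum(errors)
    mae = total / len(points)
    return total, mae

# Three covered points, fitted with slope 1 and intercept 0.
total, mae = seg_err([(0, 0.0), (1, 1.5), (2, 2.0)], 1.0, 0.0)
assert abs(total - 0.5) < 1e-9
assert abs(total - 3 * mae) < 1e-9   # SegErr = Len * MAE
```

Since Len(D_i) pushes the index size down and MAE pushes preciseness up, SegErr_i is a single number that embeds the space-error trade-off the framework adjusts ε against.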
Generalizing MLPs With Dropouts, Batch Normalization, and Skip Connections | 1 INTRODUCTION . 1.1 MLP AND ITS TRAINING OBJECTIVE . A multilayer perceptron ( MLP ) is one of the simplest and the oldest artificial neural network architectures . Its origin dates back to 1958 ( Rosenblatt , 1958 ) . Its basic building blocks are linear regression and nonlinear activation functions . One can stack them up to create an MLP with L layers ( See Equation 1 ) . ŷ = f ( L ) ( W ( L ) · · · f ( 2 ) ( W ( 2 ) f ( 1 ) ( W ( 1 ) x ) ) · · · ) ( 1 ) where x , ŷ , W ( l ) , and f ( l ) are an input vector , an output vector , the weight matrix , and the nonlinear activation function at the lth layer , respectively . Nonlinearity is necessary since without it , an MLP will collapse into one matrix . The choice of a nonlinear function is a hyperparameter . ReLU ( Nair & Hinton , 2010 ) is one of the most widely used activation functions . In a supervised setup , where we have a pair of a data sample x ( i ) and a label y ( i ) , we aim to reduce the loss ( distance ) between the model output ŷ ( i ) and the label y ( i ) . The loss , LW ( 1 ) , ... , W ( L ) ( ŷ ( i ) , y ( i ) ) , is parameterized by the weights of all L layers . If we have m pairs of data samples and labels , { ( x ( 1 ) , y ( 1 ) ) , . . . , ( x ( m ) , y ( m ) ) } , the objective is to minimize the expectation of the loss ( See Equation 2 ) . Ex , y∼p̂data LW ( 1 ) , ... , W ( L ) ( x , y ) = ( 1/m ) ∑m i=1 LW ( 1 ) , ... , W ( L ) ( x ( i ) , y ( i ) ) ( 2 ) Since m can be a large number , we typically batch the m data samples into multiple smaller batches and perform stochastic gradient descent to speed up the optimization process and to reduce memory consumption . Backpropagation ( Rumelhart et al. , 1986 ) is a useful tool to perform gradient descent . This allows us to easily differentiate the loss function with respect to the weights of all L layers , taking advantage of the chain rule .
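Equation 1 can be written out as a short NumPy sketch. The function name `mlp_forward` and the specific layer sizes are illustrative assumptions; the last activation is left as a parameter since it is task-dependent (identity here, softmax for classification).

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def mlp_forward(x, weights, activations):
    """Equation 1 as code: h_l = f(l)( W(l) h_{l-1} ) for l = 1..L."""
    h = x
    for W, f in zip(weights, activations):
        h = f(W @ h)
    return h

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
# f(1) = ReLU, f(2) = identity for this toy regression-style output.
y_hat = mlp_forward(rng.standard_normal(3), weights, [relu, lambda z: z])
assert y_hat.shape == (2,)
```

Without the nonlinearities, the loop would compute W(2) W(1) x, i.e. a single matrix applied to x, which is the collapse the text mentions.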
So far , we have talked about the basics of MLPs and how to optimize them in a supervised setup . In the past decade , there have been many works to improve deep neural network architectures and their performance . In the next section , we ’ ll go through some of the works that can be used to generalize MLPs . 2 RELATED WORK . 2.1 DROPOUT . Dropout ( Srivastava et al. , 2014 ) was introduced to prevent neural networks from overfitting . Every neuron in layer l − 1 has probability 1 − p of being present . That is , with probability p , a neuron is dropped out and the weights that connect it to the neurons in layer l will not play a role . p is a hyperparameter whose optimum value can depend on the data , neural network architecture , etc . Dropout makes the neural network stochastic . During training , this behavior is desired since it effectively trains many neural networks that share most of the weights . Obviously , the weights that are dropped out won ’ t be shared between the sampled neural networks . This stochasticity works as a regularizer which prevents overfitting . The stochasticity isn ’ t always desired at test time . In order to make the network deterministic , the weights are scaled down by multiplying by 1 − p , the probability of being present . That is , every weight that once was dropped out during training will be used at test time , but its magnitude is scaled down to account for the fact that it was sometimes not present during training . In Bayesian statistics , the trained neural network ( s ) can be seen as a posterior predictive distribution . The authors compare the performance between Monte-Carlo model averaging and weight scaling . They empirically show that the difference is minimal . In some situations , the stochasticity of a neural network is desired , especially if we want to estimate the uncertainty of model predictions .
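The train/test behavior described above can be sketched as follows, with p as the drop probability (so activations are rescaled by 1 − p at test time, matching the convention used in this section); the function name `dropout` and the use of NumPy are illustrative.

```python
import numpy as np

def dropout(h, p, rng, train=True):
    """Dropout sketch: p is the drop probability, so a unit is present
    with probability 1 - p. At test time, activations are scaled by
    1 - p instead of being randomly masked."""
    if train:
        mask = rng.random(h.shape) >= p   # keep each unit with prob 1 - p
        return h * mask
    return h * (1.0 - p)

rng = np.random.default_rng(1)
h = np.ones(10000)
out = dropout(h, p=0.3, rng=rng, train=True)
# Roughly 70% of units survive on average, so the scaled test-time
# output matches the training-time expectation.
assert abs(out.mean() - 0.7) < 0.05
```

This is the "weight scaling" variant from the text; the alternative is to average many stochastic forward passes, which the next section's Monte-Carlo dropout exploits.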
Unlike regular neural networks , where the trained weights are deterministic , Bayesian neural networks learn the distributions of weights , which can capture uncertainty ( Tishby et al. , 1989 ) . At test time , the posterior predictive distribution is used for inference ( See Equation 3 ) . p ( y | x , X , Y ) = ∫ p ( y | x , w ) p ( w | X , Y ) dw ( 3 ) where w = { W ( i ) } Li=1 . Equation 3 is intractable . When a regular neural network has dropouts , it ’ s possible to approximate the posterior predictive distribution by performing T stochastic forward passes and averaging the results . This is called “ Monte-Carlo dropout ” ( Gal & Ghahramani , 2016 ) . In practice , the difference between Monte-Carlo model averaging and weight scaling is minimal . However , we can use the variance or entropy of the results of the T stochastic forward passes to estimate the model uncertainty . 2.2 BATCH NORMALIZATION . Batch normalization was first introduced to address the internal covariate shift problem ( Ioffe & Szegedy , 2015 ) . Normalization is done simply by subtracting the batch mean from every element in the batch and dividing by the batch standard deviation . It ’ s empirically shown that batch normalization allows higher learning rates to be used , thus leading to faster convergence . 2.3 WHITENING INPUTS OF NEURAL NETWORKS . Whitening decorrelates and normalizes inputs . Instead of using computationally expensive existing techniques , such as ICA ( independent component analysis ) ( Hyvärinen & Oja , 2000 ) , it was shown that using a batch normalization layer followed by a dropout layer can also achieve whitening ( Chen et al. , 2019 ) . The authors called this the IC ( Independent-Component ) layer . They argued that the IC layer can reduce the mutual information and the correlation between neurons by factors of p^2 and p , respectively , where p is the dropout probability .
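Monte-Carlo dropout as described above can be sketched by keeping dropout active for T forward passes and using the spread of the outputs as an uncertainty estimate. The toy one-layer network below is purely illustrative; real uses would run a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, W, p):
    """A toy one-layer net with dropout kept active at test time."""
    mask = rng.random(x.shape) >= p   # drop each input with probability p
    return W @ (x * mask)

# Monte-Carlo dropout (sketch): T stochastic passes approximate the
# posterior predictive of Equation 3; their mean is the prediction and
# their variance estimates the model uncertainty.
W = np.array([[1.0, -2.0, 0.5]])
x = np.array([0.2, 0.4, 0.6])
T = 2000
samples = np.array([stochastic_forward(x, W, p=0.5) for _ in range(T)])
mean, var = samples.mean(), samples.var()
```

With p = 0.5 the sample mean approaches 0.5 · (W @ x) = −0.15, illustrating why weight scaling and Monte-Carlo averaging give similar point predictions, while the nonzero variance is the extra uncertainty signal.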
The authors suggested that an IC layer should be placed before a weight layer . They empirically showed that modifying ResNet ( He et al. , 2016 ) to include IC layers led to a more stable training process , faster convergence , and a better convergence limit . 2.4 SKIP CONNECTIONS . Skip connections were first introduced in convolutional neural networks ( CNNs ) ( He et al. , 2016 ) . Before then , having deeper CNNs with more layers didn ’ t necessarily lead to better performance ; rather , they suffered from the degradation problem . ResNets solved this problem by introducing skip connections . They managed to train deep CNNs , even with 152 layers , which wasn ’ t seen before , and showed great results on some computer vision benchmark datasets . 3 METHODOLOGY . 3.1 MOTIVATION . We want to take advantage of the previous works on whitening neural network inputs ( Chen et al. , 2019 ) and skip connections ( He et al. , 2016 ) , and apply them to MLPs . In each of their original papers , the ideas were tested on CNNs , not on MLPs . However , considering that both a fully connected layer and a convolutional layer are a linear function f that satisfies Equation 4 , and that their main job is feature extraction , we assume that MLPs can also benefit from them . f ( x + y ) = f ( x ) + f ( y ) and f ( ax ) = af ( x ) ( 4 ) The biggest difference between a convolutional layer and a fully connected layer is that the former works locally , while the latter works globally . Also , even the latest neural network architectures , such as the Transformer ( Vaswani et al. , 2017 ) , don ’ t use IC layers within their fully connected layers . They can potentially benefit from our work . 3.2 NEURAL NETWORK ARCHITECTURE . Our MLP closely follows ( Chen et al. , 2019 ) and ( He et al. , 2016 ) . Our MLP is composed of a series of two basic building blocks . One is called the residual block and the other is called the downsample block .
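Equation 4 can be checked numerically for both layer types: a fully connected layer is a matrix product and a convolutional layer is a convolution, and both satisfy additivity and homogeneity.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))          # a fully connected layer
x, y, a = rng.standard_normal(3), rng.standard_normal(3), 2.5

# Additivity and homogeneity (Equation 4) for f(x) = W @ x ...
assert np.allclose(W @ (x + y), W @ x + W @ y)
assert np.allclose(W @ (a * x), a * (W @ x))

# ... and likewise for a 1-D convolution with kernel k.
k = rng.standard_normal(2)
assert np.allclose(np.convolve(k, x + y), np.convolve(k, x) + np.convolve(k, y))
assert np.allclose(np.convolve(k, a * x), a * np.convolve(k, x))
```

This shared linearity is the formal basis for the assumption that techniques validated on convolutional layers should transfer to fully connected ones.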
The residual block starts with an IC layer , followed by a fully connected layer and a ReLU nonlinearity . This is repeated twice , as was done in the original ResNet paper ( He et al. , 2016 ) . The skip connection is added before the last ReLU layer ( See Equation 5 ) . y = ReLU ( W ( 2 ) Dropout ( BN ( ReLU ( W ( 1 ) Dropout ( BN ( x ) ) ) ) ) + x ) ( 5 ) where x is an input vector to the residual block , BN is a batch normalization layer , Dropout is a dropout layer , ReLU is a ReLU nonlinearity , W ( 1 ) is the first fully connected layer , and W ( 2 ) is the second fully connected layer . This block preserves the dimension of the input vector . That is , x , y ∈ RN and W ( 1 ) , W ( 2 ) ∈ RN×N . This is done intentionally so that the skip connections can be made easily . The downsample block also starts with an IC layer , followed by a fully connected layer and a ReLU nonlinearity ( See Equation 6 ) . y = ReLU ( W Dropout ( BN ( x ) ) ) ( 6 ) The dimension is halved after the weight matrix W . That is , x ∈ RN , W ∈ RN/2×N , and y ∈ RN/2 . This block reduces the number of features so that the network can condense information which will later be used for classification or regression . After a series of the two building blocks , one fully connected layer is appended at the end for the regression or classification task . We will only carry out classification tasks in our experiments for simplicity . Therefore , the softmax function is used for the last nonlinearity and the cross-entropy loss is used for the loss function in Equation 2 . Although our MLP consists of functions and layers that already exist , we haven ’ t yet found any works that look exactly like ours . The most similar works are ( Martinez et al. , 2017 ) and ( Siravenha et al. , 2019 ) . ( Martinez et al. , 2017 ) used an MLP whose order is FC → BN → ReLU → Dropout → FC → BN → ReLU → Dropout , where FC stands for fully-connected . A skip connection is added after the last Dropout .
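The residual and downsample blocks of Equations 5 and 6 can be sketched in NumPy at inference time, assuming scalar running statistics for batch normalization and dropout's test-time scaling by 1 − p; the helper names (`ic`, `residual_block`, `downsample_block`) are illustrative, not the authors' code.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def ic(x, mean, std, p):
    """IC layer at inference: batch norm with (assumed scalar) running
    statistics, then dropout's deterministic test-time scaling by 1 - p."""
    return ((x - mean) / std) * (1.0 - p)

def residual_block(x, W1, W2, mean, std, p):
    """Equation 5: dimensions are preserved so x can be added back."""
    h = relu(W1 @ ic(x, mean, std, p))
    return relu(W2 @ ic(h, mean, std, p) + x)

def downsample_block(x, W, mean, std, p):
    """Equation 6: W halves the feature dimension."""
    return relu(W @ ic(x, mean, std, p))

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)
W1, W2 = rng.standard_normal((N, N)), rng.standard_normal((N, N))
Wd = rng.standard_normal((N // 2, N))
y = residual_block(x, W1, W2, mean=0.0, std=1.0, p=0.1)
assert y.shape == (N,)                      # residual block keeps N features
assert downsample_block(y, Wd, 0.0, 1.0, 0.1).shape == (N // 2,)
```

The shape assertions mirror the text: W(1), W(2) ∈ R^{N×N} keep the dimension so the skip connection is a plain addition, while the downsample weight W ∈ R^{N/2×N} halves it.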
Since ReLU is added between BN and Dropout , their MLP architecture does not benefit from whitening . ( Siravenha et al. , 2019 ) used an MLP whose order is FC → BN → Dropout → FC → BN → Dropout → ReLU , where FC stands for fully-connected . A skip connection is added before the last ReLU . Their block does include BN → Dropout → FC , like our MLP . However , there is no nonlinearity after the FC . Without nonlinearity after a fully-connected layer , the network won ’ t benefit from potentially more complex decision boundaries . Since our neural network architecture includes dropouts , we can also take advantage of the Monte-Carlo dropout ( Gal & Ghahramani , 2016 ) to model the prediction uncertainty of the model on test examples . | The paper proposes a network architecture for MLP networks. The authors build upon work by Chen et al. (2019) to achieve whitening via batchnorm followed by dropout (the IC layer). Further, they use skip connections (He et al., 2016) in their architecture. The authors tested their approach in an ablation study for gender and age classification. With dropout included in their architecture, they computed entropy values, which were high for some baby or kid faces that were infrequent in the dataset. | SP:a5347c01c781befd65ffc3899fc313627891fd65 |
Generalizing MLPs With Dropouts, Batch Normalization, and Skip Connections | 1 INTRODUCTION . 1.1 MLP AND ITS TRAINING OBJECTIVE . A multilayer perceptron ( MLP ) is one of the simplest and the oldest artificial neural network architectures . Its origin dates back to 1958 ( Rosenblatt , 1958 ) . Its basic building blocks are linear regression and nonlinear activation functions . One can stack them up to create an MLP with L layers ( See Equation 1 ) . ŷ = f ( L ) ( W ( L ) , . . . , f ( 2 ) ( W ( 2 ) f ( 1 ) ( W ( 1 ) x ) ) , . . . ) ( 1 ) where x , ŷ , W ( l ) , and f ( l ) are an input vector , an output vector , the weight matrix , and the nonlinear activation function at the lth layer , respectively . Nonlinearity is necessary since without it , an MLP will collapse into one matrix . The choice of a nonlinear function is a hyperparameter . ReLU ( Nair & Hinton , 2010 ) is one of the most widely used activation functions . In a supervised setup , where we have a pair of a data sample x ( i ) and a label y ( i ) , we aim to reduce the loss ( distance ) between the model output ŷ ( i ) and the label y ( i ) . The loss , LW ( 1 ) , ... , W ( L ) ( ŷ ( i ) , y ( i ) ) , is parameterized by the weights of all L layers . If we have m pairs of data samples and labels , { ( x ( 1 ) , y ( 1 ) ) , . . . , ( x ( m ) , y ( m ) ) } , the objective is to minimize the expectation of the loss ( See Equation 2 ) . Ex , y∼p̂dataLW ( 1 ) , ... , W ( L ) ( x , y ) = 1 m m∑ i=1 LW ( 1 ) , ... , W ( L ) ( x ( i ) , y ( i ) ) ( 2 ) Since m can be a large number , we typically batch the m data samples into multiple smaller batches and perform stochastic gradient descent to speed up the optimization process and to reduce memory consumption . Backpropagation ( Rumelhart et al. , 1986 ) is a useful tool to perform gradient descent . This allows us to easily differentiate the loss function with respect to the weights of all L layers , taking advantage of the chain rule . 
So far , we talked about the basics of MLPs and how to optimize them in a supervised setup . In the past decade , there have been many works to improve deep neural network architectures and their performances . In the next section , we ’ ll go through some of the works that can be used to generalize MLPs . 2 RELATED WORK . 2.1 DROPOUT . Dropout ( Srivastava et al. , 2014 ) was introduced to prevent neural networks from overfitting . Every neuron in layer l − 1 has probability 1 − p of being present . That is , with the probability of p , a neuron is dropped out and the weights that connect to the neurons in layer l will not play a role . p is a hyperparameter whose optimum value can depend on the data , neural network architecture , etc . Dropout makes the neural network stochastic . During training , this behavior is desired since this effectively trains many neural networks that share most of the weights . Obviously , the weights that are dropped out won ’ t be shared between the sampled neural networks . This stochasticity works as a regularizer which prevents overfitting . The stochasticity isn ’ t always desired at test time . In order to make it deterministic , the weights are scaled down by multiplying by p. That is , every weight that once was dropped out during training will be used at test time , but its magnitude is scaled down to account for the fact that it used to be not present during training . In Bayesian statistics , the trained neural network ( s ) can be seen as a posterior predictive distribution . The authors compare the performance between the Monte-Carlo model averaging and the weight scaling . They empirically show that the difference is minimal . In some situations , the stochasticity of a neural network is desired , especially if we want to estimate the uncertainty of model predictions . 
Unlike regular neural networks, where the trained weights are deterministic, Bayesian neural networks learn distributions over weights, which can capture uncertainty (Tishby et al., 1989). At test time, the posterior predictive distribution is used for inference (see Equation 3): $$p(y \mid x, X, Y) = \int p(y \mid x, X, Y, w)\, p(w \mid X, Y)\, dw \quad (3)$$ where $w = \{W^{(i)}\}_{i=1}^{L}$. Equation 3 is intractable. When a regular neural network has dropouts, it is possible to approximate the posterior predictive distribution by performing $T$ stochastic forward passes and averaging the results. This is called "Monte-Carlo dropout" (Gal & Ghahramani, 2016). In practice, the difference between Monte-Carlo model averaging and weight scaling is minimal. However, we can use the variance or entropy of the results of the $T$ stochastic forward passes to estimate the model uncertainty. 2.2 BATCH NORMALIZATION . Batch normalization was first introduced to address the internal covariate shift problem (Ioffe & Szegedy, 2015). Normalization is done simply by subtracting the batch mean from every element in the batch and dividing by the batch standard deviation. It has been empirically shown that batch normalization allows higher learning rates to be used, leading to faster convergence. 2.3 WHITENING INPUTS OF NEURAL NETWORKS . Whitening decorrelates and normalizes inputs. Instead of using computationally expensive existing techniques, such as ICA (independent component analysis) (Hyvärinen & Oja, 2000), it was shown that a batch normalization layer followed by a dropout layer can also achieve whitening (Chen et al., 2019). The authors called this the IC (Independent-Component) layer. They argued that the IC layer can reduce the mutual information and the correlation between neurons by a factor of $p^2$ and $p$, respectively, where $p$ is the dropout probability.
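An IC layer is just these two operations composed. The minimal sketch below uses the plain normalization step, without batch normalization's learned scale and shift parameters, which are omitted here for brevity:

```python
import numpy as np

def ic_layer(x, p, rng, eps=1e-5):
    """IC layer (Chen et al., 2019): batch normalization followed by
    dropout. BN here is the bare normalize step (no learned gamma/beta)."""
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    return x_hat * (rng.random(x.shape) >= p)

rng = np.random.default_rng(0)
x = 5.0 * rng.standard_normal((512, 8)) + 3.0   # shifted, scaled batch
out = ic_layer(x, p=0.5, rng=rng)
# Each feature is zero-mean / unit-variance before roughly half of the
# entries are zeroed out by dropout.
```

The zeroing is what reduces the dependence between neurons, which is the authors' whitening argument.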
The authors suggested that an IC layer should be placed before a weight layer . They empirically showed that modifying ResNet ( He et al. , 2016 ) to include IC layers led to a more stable training process , faster convergence , and better convergence limit . 2.4 SKIP CONNECTIONS . Skip connections were first introduced in convolutional neural networks ( CNNs ) ( He et al. , 2016 ) . Before then , having deeper CNNs with more layers didn ’ t necessarily lead to better performance , but rather they suffered from the degradation problem . ResNets solved this problem by introducing skip connections . They managed to train deep CNNs , even with 152 layers , which wasn ’ t seen before , and showed great results on some computer vision benchmark datasets . 3 METHODOLOGY . 3.1 MOTIVATION . We want to take advantage of the previous works on whitening neural network inputs ( Chen et al. , 2019 ) and skip connections ( He et al. , 2016 ) , and apply them to MLPs . In each of their original papers , the ideas were tested on CNNs , not on MLPs . However , considering that both a fully connected layer and a convolutional layer are a linear function of f that satisfies Equation 4 , and that their main job is feature extraction , we assume that MLPs can also benefit from them . f ( x+ y ) = f ( x ) + f ( y ) and f ( ax ) = af ( x ) ( 4 ) The biggest difference between a convolutional layer and a fully connected layer is that the former works locally , but the latter works globally . Also , even the latest neural network architectures , such as the Transformer ( Vaswani et al. , 2017 ) , don ’ t use IC layers within their fully connected layers . They can potentially benefit from our work . 3.2 NEURAL NETWORK ARCHITECTURE . Our MLP closely follows ( Chen et al. , 2019 ) and ( He et al. , 2016 ) . Our MLP is composed of a series of two basic building blocks . One is called the residual block and the other is called the downsample block . 
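The linearity property in Equation 4 that both layer types share is easy to verify numerically for a fully connected layer (a convolution would satisfy the same two checks):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # one fully connected layer, no bias
f = lambda v: W @ v               # a linear map, so Equation 4 holds exactly

x, y, a = rng.standard_normal(3), rng.standard_normal(3), 2.5
assert np.allclose(f(x + y), f(x) + f(y))   # additivity
assert np.allclose(f(a * x), a * f(x))      # homogeneity

# Adding a nonlinearity breaks Equation 4 in general, which is exactly why
# an MLP needs activations: without them the stack collapses into one matrix.
g = lambda v: np.maximum(W @ v, 0.0)
```

Note that a fully connected layer with a bias term is affine rather than strictly linear; the check above uses the bias-free form.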
The residual block starts with an IC layer, followed by a fully connected layer and ReLU nonlinearity. This is repeated twice, as in the original ResNet paper (He et al., 2016). The skip connection is added before the last ReLU layer (see Equation 5): $$y = \text{ReLU}(W^{(2)}\,\text{Dropout}(\text{BN}(\text{ReLU}(W^{(1)}\,\text{Dropout}(\text{BN}(x))))) + x) \quad (5)$$ where $x$ is an input vector to the residual block, BN is a batch normalization layer, Dropout is a dropout layer, ReLU is a ReLU nonlinearity, and $W^{(1)}$ and $W^{(2)}$ are the first and second fully connected layers, respectively. This building block preserves the dimension of the input vector; that is, $x, y \in \mathbb{R}^{N}$ and $W^{(1)}, W^{(2)} \in \mathbb{R}^{N \times N}$. This is done intentionally so that the skip connections can be made easily. The downsample block also starts with an IC layer, followed by a fully connected layer and ReLU nonlinearity (see Equation 6): $$y = \text{ReLU}(W\,\text{Dropout}(\text{BN}(x))) \quad (6)$$ The dimension is halved after the weight matrix $W$; that is, $x \in \mathbb{R}^{N}$, $W \in \mathbb{R}^{\frac{N}{2} \times N}$, and $y \in \mathbb{R}^{\frac{N}{2}}$. This block reduces the number of features so that the network can condense information which will later be used for classification or regression. After a series of the two building blocks, one fully connected layer is appended at the end for the regression or classification task. We will only carry out classification tasks in our experiments for simplicity; therefore, the softmax function is used for the last nonlinearity, and the cross-entropy loss is used as the loss function in Equation 2. Although our MLP consists of functions and layers that already exist, we have not yet found any works that look exactly like ours. The most similar works are (Martinez et al., 2017) and (Siravenha et al., 2019). (Martinez et al., 2017) used an MLP whose order is FC → BN → ReLU → Dropout → FC → BN → ReLU → Dropout, where FC stands for fully connected. A skip connection is added after the last Dropout.
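Equations 5 and 6 can be sketched directly in NumPy. This is a minimal, batch-wise illustration of the two blocks, assuming toy dimensions; the IC layer here uses plain batch normalization (no learned scale/shift) followed by dropout with drop probability `p`:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def ic(h, p, rng, eps=1e-5):
    # IC layer: batch normalization (no learned scale/shift) + dropout.
    h = (h - h.mean(0)) / np.sqrt(h.var(0) + eps)
    return h * (rng.random(h.shape) >= p)

def residual_block(x, W1, W2, p, rng):
    """Equation 5: the skip connection is added just before the final ReLU."""
    h = relu(ic(x, p, rng) @ W1.T)
    return relu(ic(h, p, rng) @ W2.T + x)

def downsample_block(x, W, p, rng):
    """Equation 6: W halves the feature dimension (N -> N/2)."""
    return relu(ic(x, p, rng) @ W.T)

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal((16, N))                 # batch of 16, N features
W1, W2 = rng.standard_normal((N, N)), rng.standard_normal((N, N))
Wd = rng.standard_normal((N // 2, N))
y = residual_block(x, W1, W2, p=0.1, rng=rng)    # shape preserved: (16, 8)
z = downsample_block(y, Wd, p=0.1, rng=rng)      # halved: (16, 4)
```

The shape preservation in the residual block is what makes the elementwise `+ x` skip legal, which is exactly the design choice the text motivates.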
Since ReLU is added between BN and Dropout , their MLP architecture does not benefit from whitening . ( Siravenha et al. , 2019 ) used an MLP , whose order is FC → BN → Dropout → FC → BN → Dropout → ReLU , where FC stands for fully-connected . A skip connection is added before the last ReLU . Their block does include BN → Dropout→ FC , like our MLP . However , there is no nonlinearity after the FC . Without nonlinearity after a fully-connected layer , it won ’ t benefit from potentially more complex decision boundaries . Since our neural network architecture includes dropouts , we can also take advantage of the MonteCarlo dropout ( Gal & Ghahramani , 2016 ) to model the prediction uncertainty that a model makes on test examples . | This paper proposes a general architecture for MLPs. The main ingredients are skip connections and whitening. These methods have been proven to be effective in CNNs. The authors show that MLPs do benefit from them as well. The proposed skip connection (SO,SI - skip-out, skip-in) is added before the last ReLu (R) of each block. The whitening (BND) is implemented by a batch normalization (BN) layer followed by a dropout (D) layer before the fully-connected layer (FC). The proposed block is then: SO-BND-FC-R-BND-FC-SI-R-BND-FCS-R. While FC has the same number of input as output, FCS halves its number of output. The experimental section shows the performance of such MLPs, ablating skips and whitenings. Two datasets of faces are used (IMDB-Wiki and Adience) with 3 classification task of age and gender recognition. | SP:a5347c01c781befd65ffc3899fc313627891fd65 |
Interpretable Semantic Role Relation Table for Supporting Facts Recognition of Reading Comprehension | 1 INTRODUCTION . There has been an increasing interest in the explainability of Machine Reading Comprehension ( MRC ) in recent years . For enhancing the explainability in MRC , some researchers ( Qiu et al. , 2019 ; Tu et al. , 2020 ; Fang et al. , 2020 ) utilize Graph Networks . The relational inductive bias encoded in Graph Networks ( Battaglia et al. , 2018 ) provides viable support for reasoning and learning over structured representations . Some researchers ( Feng et al. , 2020 ) utilize explicit inference chains for multi-hop reasoning . Yang et al . ( 2018 ) provide sentence-level supporting facts required for reasoning , allowing QA systems to reason with strong supervision and explain the predictions . This paper focuses on establishing an interpretable model sentence-level supporting facts recognition . A system capable of delivering explanations is generally more interpretable , meeting some of the requirements for real-world applications , such as user trust , acceptance , and confidence ( Thayaparan et al. , 2020 ) . An example from HotpotQA ( Yang et al. , 2018 ) is illustrated in Figure1 . To correctly answer the question ( ” The director of the romantic comedy Big Stone Gap is based in what New York City ” ) the model is required to first identify ParagraphB as a relevant paragraph , whose title contains the keywords that appear in the question ( ” Big Stone Gap ” ) . S3 , the first sentence of ParagraphB , is then chosen by the model as a supporting fact that leads to the next-hop paragraph ParagraphA . Lastly , from ParagraphA , the span Greenwich Village , New York City is selected as the predicted answer . In this example , s1 and s3 contain the critical information needed to reason the answer . When we judge whether S3 is a supporting fact , our model needs to understand the semantic relation between the S3 , question and the paragraphs . 
People can identify the supporting factors in the paragraph and give a detailed explanation of the judgment result . We argue that the more specific the semantic interpretation , the more helpful the model imitates the human reasoning process . The input of the most recent model is Pre-trained embeddings , which have the advantage of capturing semantic similarity , but it is hard to explain in detail . We believe that the model uses interpretable features for reasoning , which contributes to enhanced interpretability . Recently , the attention mechanism has achieved remarkable performance on many natural language processing tasks . The model of attention mechanism learns the relevance between words through training ( Figure 2 ) . Inspired by the attention mechanism , for establishing rich and interpretable semantic features , we propose the semantic relational table ( Figure 3 ) . Semantic role labeling ( SRL ) is a shallow semantic parsing task aiming to discover who did what to whom , when and why ( He et al. , 2018 ; Li et al. , 2018 ) , providing explicit contextual semantics , which naturally matches the task target of text comprehension and It is easy to explain for people . Recently , Ribeiro et al . ( 2020 ) show that although measuring held-out accuracy has been the primary approach to evaluate generalization , it often overestimates the performance of NLP models . Moreover , the model does not seem to resolve basic Coreferences and grasp simple subject/object or active/passive distinctions ( SRL ) , all of which are critical to comprehension . Zhang et al . ( 2018 ) regard the semantic signals as SRL embeddings and employ a lookup table to map each label to vectors , similar to the implementation of word embedding . For each word , a joint embedding is obtained by the concatenation of word embedding and SRL embedding . 
Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps the model. The previous work indicates that SRL may help to understand contextual semantic relations. Formal semantics generally presents the semantic relation as a "predicate-argument" structure. For example, given the following sentence with target verb (predicate) sold, all the arguments are labeled as follows: [ARG0: Tom] [V: sold] [ARG1: a book] [ARG2: to Jerry] [ARGM-TMP: last week], where ARG0 represents the seller (agent), ARG1 represents the thing sold (theme), ARG2 represents the buyer (recipient), ARGM-TMP is an adjunct indicating the timing of the action, and V represents the predicate. A question sentence example: [ARG0: Who] [V: bought] [ARG1: flowers] [ARG2: for Jerry] [ARGM-LOC: in the park] [ARGM-TMP: last week]. In the reference text, the sentences highly related to the semantic roles of the question carry essential reasoning information. In this paper, we integrate the names of entities into the semantic role relation table to establish the semantic relation between sentences. Since many features are difficult to process efficiently, we simplified the semantic role relation table between two sentences. In particular, the contributions of this work are: (1) We believe that a model that uses interpretable features for reasoning contributes to enhanced interpretability. We therefore propose an interpretable form of semantic relations to enhance the interpretability of the model's input data. We integrate the entity's name into the semantic role relation table to establish the semantic relation between the question, the article, and the target sentence. (2) With few training data sets, the performance of the model based on the semantic role relation table remains stable. 2 METHOD . 2.1 A SIMPLIFIED EXAMPLE .
For each sentence , we extract predicate-argument tuples via SRL toolkits . We employ the BERTbased model ( Shi & Lin , 2019 ) in the AllenNLP toolkit to perform SRL . We arrange the different semantic roles labels in a fixed order . A simplified example of supporting sentence prediction task : • question : who eats bread at night ? • sent1 : Tom eats bread . • sent2 : Jerry eats an apple . • sent3 : Tom eats something at night . • sent4 : Jerry drinks milk . We need to recognize sent1 and sent3 as the supporting facts . First step : we use semantic role label tools to parse sentences , set the position that the semantic role label contained in the sentence to 1 , and set the position that the semantic role label is not contained in the sentence to 0 . Second step : We match the entities between the two sentences , the position of the semantic role tag corresponding to the same named entities set to 1 , and the matching result is regarded as the semantic relation between the sentences . The two steps can build the following semantic role relation tables : The ” q ” stands for ” question ” , the ” a ” stands for ” article ” , the ” t ” stands for ” table ” . The question feature table ( Figure 4 ) shows the distribution of the semantic structure information of the question sentence . ” Who ” does not appear in the figure due to it is regarded as a stop word . The semantic role label feature of the question sentence is an essential clue for judging whether the sentence is a piece of evidence . The sent1 is the target sentence for which the model makes evidence prediction . sent1 feature table ( Figure 5 ) , show the distribution of the semantic structure information of the sentence . The article feature table ( Figure 6 ) show the distribution of the global semantic structure information of the article . 
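The two-step table construction above can be sketched as follows. This is a simplified stand-in: the role inventory is a small illustrative subset, and the SRL/entity annotations are hand-written dictionaries rather than actual AllenNLP toolkit output. Mirroring the paper's note, "who" is treated as a stop word and therefore omitted from the question's roles:

```python
import numpy as np

ROLES = ["ARG0", "V", "ARG1", "ARGM-TMP"]   # fixed label order (illustrative subset)

def feature_table(srl):
    """Step 1: 1 where the sentence contains that semantic role, else 0.
    `srl` maps a role label to its filler string."""
    return np.array([1 if r in srl else 0 for r in ROLES])

def relation_table(srl_a, srl_b):
    """Step 2: 1 where both sentences fill the role with the same entity;
    the match result is taken as the semantic relation between sentences."""
    return np.array([
        1 if r in srl_a and r in srl_b and srl_a[r] == srl_b[r] else 0
        for r in ROLES
    ])

question = {"V": "eats", "ARG1": "bread", "ARGM-TMP": "at night"}
sent1 = {"ARG0": "Tom", "V": "eats", "ARG1": "bread"}
sent3 = {"ARG0": "Tom", "V": "eats", "ARG1": "something", "ARGM-TMP": "at night"}

print(feature_table(question))          # [0 1 1 1]
print(relation_table(question, sent1))  # [0 1 1 0]
print(relation_table(question, sent3))  # [0 1 0 1]
```

Both supporting facts (sent1 and sent3) share roles with the question, which is the signal the relation tables are designed to expose.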
The question feature sent1 table (Figure 7) and the sent1 feature question table (Figure 8) denote the semantic relationship between the question and the target sentence. The sent1 feature article table (Figure 9) and the article feature sent1 table (Figure 10) denote the semantic relationship between the article and the target sentence. The question feature article table (Figure 11) and the article feature question table (Figure 12) denote the semantic relationship between the article and the question. 2.2 MODEL . The model architecture is shown in Figure 13. The input data of the model are the semantic role relation tables (Figures 3∼11), and each table undergoes the same convolution operation. Let $k$ be the number of semantic role tag types, and let $i \in \{1, \dots, k\}$ index a specific semantic role tag in a sentence. Let $v$ (padded where necessary) be the maximum number of verbs in a sentence, and let $s$ (padded where necessary) be the maximum number of sentences in the article. A convolution operation involves a filter $w_1 \in \mathbb{R}^{v \times s}$, which is applied to a window of each semantic role relation table to produce a new feature: $$c_i = f(w_1 \cdot x_{i:i+1} + b) \quad (1)$$ For example, a feature $c_i$ is generated from a window of a table. Here $b \in \mathbb{R}$ is a bias term, and $f$ is a non-linear function such as the hyperbolic tangent. This filter is applied to each possible window of the semantic role relation table to produce a feature map: $$c^{(1)} = [c_1, c_2, \cdots, c_{25}] \quad (2)$$ Another convolution operation involves a filter $w_2 \in \mathbb{R}^{k \times v \times s}$, which is applied to the whole semantic role relation table to produce a global feature: $$c^{(2)} = f(w_2 \cdot x_{1:25} + b) \quad (3)$$ $$g = c^{(1)} \oplus c^{(2)} \quad (4)$$ where $\oplus$ is the concatenation operator that joins local features with global features. $$g = g_1 \oplus g_2 \oplus \cdots \oplus g_9 \quad (5)$$ The features extracted from each semantic role relation table are concatenated.
Then These features are passed to a fully connected softmax layer whose output is the probability distribution over labels . 3 EXPERIMENT . 3.1 RESULT ANALYSIS . We conduct experiments on the HotpotQA data set . HotpotQA encourages explainable QA models by providing supporting sentences for the answer , which usually come from several documents ( a document is called ” gold doc ” if it contains supporting facts ) . Each case includes ten paragraphs , two of which are ” gold doc. ” We only use ” gold doc ” as an article . Since the test set of HotpotQA is not publicly available , our evaluations are based on the dev set . We use exact match ( EM ) , F1 , Accuracy as three evaluation metrics . The number of sentences and the number of verbs in sentences affects the semantic role feature map size . Therefore , we conducted experiments on the data of the maximum number of sentences ( 5∼10 ) in an article and the maximum number of verbs ( 3∼4 ) in a sentence on the proposed model , and the experimental results are shown in Table 1 . The ( article sent num ) stands for the maximum number of sentences in an article . The ( verb num ) stands for the maximum number of verbs in a sentence The ( case num ) stands for the number of all the articles in the data sets that satisfy the constraint condition . The ( sent num ) stands for the number of all the sentences in the data sets that satisfy the constraint condition . From ( table 1 ) , as the number of sentences in articles increases , EM decreases , F1 decreases , and Accuracy does not change much . As the number of verbs in the sentence increases , EM , F1 , and Accuracy do not change much . There are some experiment result in table 2 . From ( table 2 ) , within a certain range , EM does not Table 2 : Some experimental results in the number of sentences not more than 10 , the number of verbs not more than 4 . The case num = 5219 , the sent num = 30976. 
article sent num (≤) | verb num (≤) | Accuracy | EM | F1
10 | 4 | 0.7559 | 0.2433 | 0.7132
10 | 4 | 0.7621 | 0.2603 | 0.6939
10 | 4 | 0.7628 | 0.2638 | 0.6778
increase with the increase of F1. Sometimes, the EM decreases as F1 increases. From (Table 3), although we did not use all the data in the dev set, we used most of it, so the experimental results can be compared to the Baseline. Our model's performance should be close to the Baseline model. However, our model structure is simpler than BERT and the Baseline. The Baseline model reimplemented the architecture described in Clark & Gardner (2018). The Baseline model subsumes the technical advances in question answering, including word embeddings, character-level models, bi-attention (Seo et al., 2017), and self-attention (Wang et al., 2017). Recently proposed models such as SAE (Tu et al., 2020) include a BERT module, so their performance is excellent. This shows that the semantic role relation table is a critical feature in machine reading comprehension. | The paper presents a model for the supporting sentence prediction task in machine reading comprehension. To enhance interpretability of the model, the paper proposes to use the semantic relations among a question, target sentence, and sentences in an article as features of the model. The semantic relations are obtained from semantic role labeling and named entities. The paper shows the prediction accuracy of the proposed model compared against some prior models, and conducts ablation experiments and case studies. | SP:46f8bfb644d14e8ff9a9053057e27f96c4c32a97
Interpretable Semantic Role Relation Table for Supporting Facts Recognition of Reading Comprehension | 1 INTRODUCTION . There has been an increasing interest in the explainability of Machine Reading Comprehension ( MRC ) in recent years . For enhancing the explainability in MRC , some researchers ( Qiu et al. , 2019 ; Tu et al. , 2020 ; Fang et al. , 2020 ) utilize Graph Networks . The relational inductive bias encoded in Graph Networks ( Battaglia et al. , 2018 ) provides viable support for reasoning and learning over structured representations . Some researchers ( Feng et al. , 2020 ) utilize explicit inference chains for multi-hop reasoning . Yang et al . ( 2018 ) provide sentence-level supporting facts required for reasoning , allowing QA systems to reason with strong supervision and explain the predictions . This paper focuses on establishing an interpretable model sentence-level supporting facts recognition . A system capable of delivering explanations is generally more interpretable , meeting some of the requirements for real-world applications , such as user trust , acceptance , and confidence ( Thayaparan et al. , 2020 ) . An example from HotpotQA ( Yang et al. , 2018 ) is illustrated in Figure1 . To correctly answer the question ( ” The director of the romantic comedy Big Stone Gap is based in what New York City ” ) the model is required to first identify ParagraphB as a relevant paragraph , whose title contains the keywords that appear in the question ( ” Big Stone Gap ” ) . S3 , the first sentence of ParagraphB , is then chosen by the model as a supporting fact that leads to the next-hop paragraph ParagraphA . Lastly , from ParagraphA , the span Greenwich Village , New York City is selected as the predicted answer . In this example , s1 and s3 contain the critical information needed to reason the answer . When we judge whether S3 is a supporting fact , our model needs to understand the semantic relation between the S3 , question and the paragraphs . 
People can identify the supporting factors in the paragraph and give a detailed explanation of the judgment result . We argue that the more specific the semantic interpretation , the more helpful the model imitates the human reasoning process . The input of the most recent model is Pre-trained embeddings , which have the advantage of capturing semantic similarity , but it is hard to explain in detail . We believe that the model uses interpretable features for reasoning , which contributes to enhanced interpretability . Recently , the attention mechanism has achieved remarkable performance on many natural language processing tasks . The model of attention mechanism learns the relevance between words through training ( Figure 2 ) . Inspired by the attention mechanism , for establishing rich and interpretable semantic features , we propose the semantic relational table ( Figure 3 ) . Semantic role labeling ( SRL ) is a shallow semantic parsing task aiming to discover who did what to whom , when and why ( He et al. , 2018 ; Li et al. , 2018 ) , providing explicit contextual semantics , which naturally matches the task target of text comprehension and It is easy to explain for people . Recently , Ribeiro et al . ( 2020 ) show that although measuring held-out accuracy has been the primary approach to evaluate generalization , it often overestimates the performance of NLP models . Moreover , the model does not seem to resolve basic Coreferences and grasp simple subject/object or active/passive distinctions ( SRL ) , all of which are critical to comprehension . Zhang et al . ( 2018 ) regard the semantic signals as SRL embeddings and employ a lookup table to map each label to vectors , similar to the implementation of word embedding . For each word , a joint embedding is obtained by the concatenation of word embedding and SRL embedding . 
Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps the model. The previous work indicates that SRL may help to understand contextual semantic relations. Formal semantics generally presents the semantic relation as a "predicate-argument" structure. For example, given the following sentence with target verb (predicate) sold, all the arguments are labeled as follows: [ARG0: Tom] [V: sold] [ARG1: a book] [ARG2: to Jerry] [ARGM-TMP: last week], where ARG0 represents the seller (agent), ARG1 represents the thing sold (theme), ARG2 represents the buyer (recipient), ARGM-TMP is an adjunct indicating the timing of the action, and V represents the predicate. A question sentence example: [ARG0: Who] [V: bought] [ARG1: flowers] [ARG2: for Jerry] [ARGM-LOC: in the park] [ARGM-TMP: last week]. In the reference text, the sentences highly related to the semantic roles of the question carry essential reasoning information. In this paper, we integrate the names of entities into the semantic role relation table to establish the semantic relation between sentences. Since many features are difficult to process efficiently, we simplified the semantic role relation table between two sentences. In particular, the contributions of this work are: (1) We believe that a model that uses interpretable features for reasoning contributes to enhanced interpretability. We therefore propose an interpretable form of semantic relations to enhance the interpretability of the model's input data. We integrate the entity's name into the semantic role relation table to establish the semantic relation between the question, the article, and the target sentence. (2) With few training data sets, the performance of the model based on the semantic role relation table remains stable. 2 METHOD . 2.1 A SIMPLIFIED EXAMPLE .
For each sentence , we extract predicate-argument tuples via SRL toolkits . We employ the BERTbased model ( Shi & Lin , 2019 ) in the AllenNLP toolkit to perform SRL . We arrange the different semantic roles labels in a fixed order . A simplified example of supporting sentence prediction task : • question : who eats bread at night ? • sent1 : Tom eats bread . • sent2 : Jerry eats an apple . • sent3 : Tom eats something at night . • sent4 : Jerry drinks milk . We need to recognize sent1 and sent3 as the supporting facts . First step : we use semantic role label tools to parse sentences , set the position that the semantic role label contained in the sentence to 1 , and set the position that the semantic role label is not contained in the sentence to 0 . Second step : We match the entities between the two sentences , the position of the semantic role tag corresponding to the same named entities set to 1 , and the matching result is regarded as the semantic relation between the sentences . The two steps can build the following semantic role relation tables : The ” q ” stands for ” question ” , the ” a ” stands for ” article ” , the ” t ” stands for ” table ” . The question feature table ( Figure 4 ) shows the distribution of the semantic structure information of the question sentence . ” Who ” does not appear in the figure due to it is regarded as a stop word . The semantic role label feature of the question sentence is an essential clue for judging whether the sentence is a piece of evidence . The sent1 is the target sentence for which the model makes evidence prediction . sent1 feature table ( Figure 5 ) , show the distribution of the semantic structure information of the sentence . The article feature table ( Figure 6 ) show the distribution of the global semantic structure information of the article . 
The question feature sent1 table (Figure 7) and the sent1 feature question table (Figure 8) denote the semantic relationship between the question and the target sentence. The sent1 feature article table (Figure 9) and the article feature sent1 table (Figure 10) denote the semantic relationship between the article and the target sentence. The question feature article table (Figure 11) and the article feature question table (Figure 12) denote the semantic relationship between the article and the question. 2.2 MODEL . The model architecture is shown in Figure 13. The input data of the model are the semantic role relation tables (Figures 3∼11), and each table undergoes the same convolution operation. Let $k$ be the number of semantic role tag types, and let $i \in \{1, \dots, k\}$ index a specific semantic role tag in a sentence. Let $v$ (padded where necessary) be the maximum number of verbs in a sentence, and let $s$ (padded where necessary) be the maximum number of sentences in the article. A convolution operation involves a filter $w_1 \in \mathbb{R}^{v \times s}$, which is applied to a window of each semantic role relation table to produce a new feature: $$c_i = f(w_1 \cdot x_{i:i+1} + b) \quad (1)$$ For example, a feature $c_i$ is generated from a window of a table. Here $b \in \mathbb{R}$ is a bias term, and $f$ is a non-linear function such as the hyperbolic tangent. This filter is applied to each possible window of the semantic role relation table to produce a feature map: $$c^{(1)} = [c_1, c_2, \cdots, c_{25}] \quad (2)$$ Another convolution operation involves a filter $w_2 \in \mathbb{R}^{k \times v \times s}$, which is applied to the whole semantic role relation table to produce a global feature: $$c^{(2)} = f(w_2 \cdot x_{1:25} + b) \quad (3)$$ $$g = c^{(1)} \oplus c^{(2)} \quad (4)$$ where $\oplus$ is the concatenation operator that joins local features with global features. $$g = g_1 \oplus g_2 \oplus \cdots \oplus g_9 \quad (5)$$ The features extracted from each semantic role relation table are concatenated.
These features are then passed to a fully connected softmax layer whose output is the probability distribution over labels . 3 EXPERIMENT . 3.1 RESULT ANALYSIS . We conduct experiments on the HotpotQA dataset . HotpotQA encourages explainable QA models by providing supporting sentences for the answer , which usually come from several documents ( a document is called a ” gold doc ” if it contains supporting facts ) . Each case includes ten paragraphs , two of which are ” gold docs . ” We only use the ” gold docs ” as the article . Since the test set of HotpotQA is not publicly available , our evaluations are based on the dev set . We use exact match ( EM ) , F1 , and Accuracy as the three evaluation metrics . The number of sentences and the number of verbs in sentences affect the semantic role feature map size . Therefore , we conducted experiments with the proposed model on data restricted by the maximum number of sentences ( 5∼10 ) in an article and the maximum number of verbs ( 3∼4 ) in a sentence ; the experimental results are shown in Table 1 . ( article sent num ) stands for the maximum number of sentences in an article . ( verb num ) stands for the maximum number of verbs in a sentence . ( case num ) stands for the number of articles in the data sets that satisfy the constraint . ( sent num ) stands for the number of sentences in the data sets that satisfy the constraint . From Table 1 , as the number of sentences in an article increases , EM and F1 decrease , while Accuracy does not change much . As the number of verbs in a sentence increases , EM , F1 , and Accuracy do not change much . Some experimental results are shown in Table 2 . Table 2 : Some experimental results with the number of sentences not more than 10 and the number of verbs not more than 4 ( case num = 5219 , sent num = 30976 ) . Rows ( article sent num ≤ 10 , verb num ≤ 4 ) : Accuracy 0.7559 , EM 0.2433 , F1 0.7132 ; Accuracy 0.7621 , EM 0.2603 , F1 0.6939 ; Accuracy 0.7628 , EM 0.2638 , F1 0.6778 . From Table 2 , within a certain range , EM does not increase with the increase of F1 ; sometimes , EM decreases as F1 increases . From Table 3 , although we did not use all the data in the dev set , we used most of it , so the experimental results can be compared to the Baseline . Our model 's performance should be close to the Baseline model . However , our model structure is simpler than BERT and the Baseline . The Baseline model reimplements the architecture described in Clark & Gardner ( 2018 ) . The Baseline model subsumes the technical advances in question answering , including word embeddings , character-level models , bi-attention ( Seo et al. , 2017 ) , and self-attention ( Wang et al. , 2017 ) . Recently proposed models such as SAE ( Tu et al. , 2020 ) include a BERT module , so their performance is excellent . This shows that the semantic role relation table is a critical feature in machine reading comprehension . | The authors propose a method for adding interpretable semantic features for a Machine Reading Comprehension model based on semantic relations across words and sentences. The goal is to use semantic role labeling annotation to produce features used on a neural model for a language downstream task. The main contributions are: semantic features based on linguistic annotation, and interpretable semantic features for training a neural model. The study shows that the semantic features used to train the Question Answering model perform comparably to related work. | SP:46f8bfb644d14e8ff9a9053057e27f96c4c32a97
ViDT: An Efficient and Effective Fully Transformer-based Object Detector | Transformers are transforming the landscape of computer vision , especially for recognition tasks . Detection transformers are the first fully end-to-end learning systems for object detection , while vision transformers are the first fully transformer-based architecture for image classification . In this paper , we integrate Vision and Detection Transformers ( ViDT ) to build an effective and efficient object detector . ViDT introduces a reconfigured attention module to extend the recent Swin Transformer to be a standalone object detector , followed by a computationally efficient transformer decoder that exploits multi-scale features and auxiliary techniques essential to boost the detection performance without much increase in computational load . Extensive evaluation results on the Microsoft COCO benchmark dataset demonstrate that ViDT obtains the best AP and latency trade-off among existing fully transformer-based object detectors , and achieves 49.2 AP owing to its high scalability for large models . We will release the code and trained models upon acceptance . 1 INTRODUCTION . Object detection is the task of predicting both bounding boxes and object classes for each object of interest in an image . Modern deep object detectors heavily rely on meticulously designed components , such as anchor generation and non-maximum suppression ( Papageorgiou & Poggio , 2000 ; Liu et al. , 2020 ) . As a result , the performance of these object detectors depends on specific postprocessing steps , which involve complex pipelines and make fully end-to-end training difficult . Motivated by the recent success of Transformers ( Vaswani et al. , 2017 ) in NLP , numerous studies introduce Transformers into computer vision tasks . Carion et al .
( 2020 ) proposed Detection Transformers ( DETR ) to eliminate the meticulously designed components by employing a simple transformer encoder and decoder architecture , which serves as a neck component to bridge a CNN body for feature extraction and a detector head for prediction . Thus , DETR enables end-to-end training of deep object detectors . By contrast , Dosovitskiy et al . ( 2021 ) showed that a fully-transformer backbone without any convolutional layers , Vision Transformer ( ViT ) , achieves state-of-the-art results on image classification benchmarks . Approaches like ViT have been shown to learn effective representation models without strong human inductive biases , e.g. , meticulously designed components in object detection ( DETR ) , or locality-aware designs such as convolutional layers and pooling mechanisms . However , there is a lack of effort to synergize DETR and ViT for a better object detection architecture . In this paper , we integrate both approaches to build a fully transformer-based , end-to-end object detector that achieves state-of-the-art performance without increasing computational load . A straightforward integration of DETR and ViT can be achieved by replacing the ResNet backbone ( body ) of DETR with ViT – Figure 2 ( a ) . This naive integration , DETR ( ViT ) 1 , has two limitations . First , the canonical ViT suffers from the quadratic increase in complexity w.r.t . image size , resulting in a lack of scalability . Furthermore , the attention operation at the transformer encoder and decoder ( i.e. , the “ neck ” component ) adds significant computational overhead to the detector . Therefore , the naive integration of DETR and ViT shows very high latency – the blue lines of Figure 1 . Recently , Fang et al .
( 2021 ) proposed an extension of ViT to object detection , named YOLOS , which appends the detection tokens [ DET ] to the patch tokens [ PATCH ] ( Figure 2 ( b ) ) , where [ DET ] tokens are learnable embeddings that specify different objects to detect . YOLOS is a neck-free architecture and removes the additional computational costs of the neck encoder . However , YOLOS shows limited performance because it cannot use additional optimization techniques on the neck architecture , e.g. , multi-scale features and auxiliary loss . In addition , YOLOS can only accommodate the canonical transformer due to its architectural limitation , resulting in a quadratic complexity w.r.t . the input size . In this paper , we propose a novel integration of Vision and Detection Transformers ( ViDT ) ( Figure 2 ( c ) ) . Our contributions are threefold . First , ViDT introduces a modified attention mechanism , named Reconfigured Attention Module ( RAM ) , that enables any ViT variant to handle the appended [ DET ] and [ PATCH ] tokens for object detection . Thus , we can modify the latest Swin Transformer ( Liu et al. , 2021 ) backbone with RAM to be an object detector and obtain high scalability using its local attention mechanism with linear complexity . Second , ViDT adopts a lightweight encoder-free neck architecture to reduce the computational overhead while still enabling the additional optimization techniques on the neck module . Note that the neck encoder is unnecessary because RAM directly extracts fine-grained representations for object detection , i.e. , [ DET ] tokens . As a result , ViDT obtains better performance than neck-free counterparts . Finally , we introduce a new concept of token matching for knowledge distillation , which brings additional performance gains from a large model to a small model without compromising detection efficiency . ViDT has two architectural advantages over existing approaches .
First , similar to YOLOS , ViDT takes [ DET ] tokens as the additional input , maintaining a fixed scale for object detection , but constructs hierarchical representations starting with small-sized image patches for [ PATCH ] tokens . Second , ViDT can use the hierarchical ( multi-scale ) features and additional techniques without a significant computation overhead . Therefore , as a fully transformer-based object detector , ViDT facilitates better integration of vision and detection transformers . Extensive experiments on the Microsoft COCO benchmark ( Lin et al. , 2014 ) show that ViDT is highly scalable even for large ViT models , such as Swin-base with 0.1 billion parameters , and achieves the best AP and latency trade-off . 2 PRELIMINARIES . Vision transformers process an image as a sequence of small-sized image patches , thereby allowing all the positions in the image to interact in attention operations ( i.e. , global attention ) . However , the canonical ViT ( Dosovitskiy et al. , 2021 ) is not compatible with a broad range of vision tasks due to its high computational complexity , which increases quadratically with respect to image size . The Swin Transformer ( Liu et al. , 2021 ) resolves the complexity issue by introducing the notion of shifted windows that support local attention and patch reduction operations , thereby improving compatibility for dense prediction tasks such as object detection . A few approaches use vision transformers as detector backbones but achieve limited success ( Heo et al. , 2021 ; Fang et al. , 2021 ) . 1We refer to each model based on the combinations of its body and neck . For example , DETR ( DeiT ) indicates that DeiT ( vision transformers ) is integrated with DETR ( detection transformers ) . Detection transformers eliminate the meticulously designed components ( e.g. , anchor generation and non-maximum suppression ) by combining convolutional network backbones and Transformer encoder-decoders .
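The quadratic-vs-linear contrast between global attention and Swin-style window-local attention can be made concrete by counting attention score pairs. The window size of 49 tokens (a 7×7 window) and the token counts below are illustrative assumptions for this sketch:

```python
def global_attention_pairs(n_tokens):
    # Global self-attention scores every token against every token,
    # so cost grows quadratically with the number of tokens.
    return n_tokens ** 2

def windowed_attention_pairs(n_tokens, window=49):
    # Window-local attention (as in Swin Transformer) scores pairs only
    # within each window, so cost grows linearly with the number of tokens.
    assert n_tokens % window == 0
    return (n_tokens // window) * window ** 2
```

Quadrupling the number of tokens (e.g., doubling image height and width) multiplies the global cost by 16 but the windowed cost only by 4, which is one reason the naive DETR (ViT) integration scales poorly with image size.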
While the canonical DETR ( Carion et al. , 2020 ) achieves high detection performance , it suffers from very slow convergence compared to previous detectors . For example , DETR requires 500 epochs while conventional Faster R-CNN ( Ren et al. , 2015 ) training needs only 37 epochs ( Wu et al. , 2019 ) . To mitigate the issue , Zhu et al . ( 2021 ) propose Deformable DETR , which introduces deformable attention for utilizing multi-scale features as well as expediting the slow training convergence of DETR . In this paper , we use Deformable DETR as our base detection transformer framework and integrate it with recent vision transformers . DETR ( ViT ) is a straightforward integration of DETR and ViT , which uses ViT as a feature extractor , followed by the transformer encoder-decoder in DETR . As illustrated in Figure 2 ( a ) , it is a body–neck–head structure ; the representations of the input [ PATCH ] tokens are extracted by the ViT backbone and then directly fed to the transformer-based encoding and decoding pipeline . To predict multiple objects , a fixed number of learnable [ DET ] tokens are provided as additional input to the decoder . Subsequently , the output embeddings of the decoder produce final predictions through the detection heads for classification and box regression . Since DETR ( ViT ) does not modify the backbone at all , it can be flexibly changed to any latest ViT model , e.g. , Swin Transformer . Additionally , its neck decoder facilitates the aggregation of multi-scale features and the use of additional techniques , which help detect objects of different sizes and speed up training ( Zhu et al. , 2021 ) . However , the attention operation at the neck encoder adds significant computational overhead to the detector . In contrast , ViDT resolves this issue by directly extracting fine-grained [ DET ] features from Swin Transformer with RAM , without maintaining the transformer encoder in the neck architecture . YOLOS ( Fang et al.
, 2021 ) is a canonical ViT architecture for object detection with minimal modifications . As illustrated in Figure 2 ( b ) , YOLOS achieves a neck-free structure by appending randomly initialized learnable [ DET ] tokens to the sequence of input [ PATCH ] tokens . Since all the embeddings for [ PATCH ] and [ DET ] tokens interact via global attention , the final [ DET ] tokens are generated by the fine-tuned ViT backbone and then directly produce predictions through the detection heads without requiring any neck layer . While the naive DETR ( ViT ) suffers from the computational overhead of the neck layer , YOLOS enjoys efficient computations by treating the [ DET ] tokens as additional input for ViT . YOLOS shows that 2D object detection can be accomplished in a pure sequence-to-sequence manner , but this solution entails two inherent limitations : 1 ) YOLOS inherits the drawback of the canonical ViT , namely the high computational complexity attributed to the global attention operation . As illustrated in Figure 1 , YOLOS shows very poor latency compared with other fully transformer-based detectors , especially when its model size becomes larger , i.e. , small → base . Thus , YOLOS is not scalable to large models . 2 ) YOLOS cannot benefit from additional techniques essential for better performance , e.g. , multi-scale features , due to the absence of the neck layer . Although YOLOS used the same DeiT backbone as Deformable DETR ( DeiT ) , its AP was lower than that of the straightforward integration . In contrast , the encoder-free neck architecture of ViDT enjoys the additional optimization techniques from Zhu et al . ( 2021 ) , resulting in faster convergence and better performance . Further , our RAM makes it possible to combine Swin Transformer with the sequence-to-sequence paradigm for detection . 3 VIDT : VISION AND DETECTION TRANSFORMERS .
ViDT first reconfigures the attention model of Swin Transformer to support standalone object detection while fully reusing the parameters of Swin Transformer . Next , it incorporates an encoder-free neck layer to exploit multi-scale features and two essential techniques : auxiliary decoding loss and iterative box refinement . We further introduce knowledge distillation with token matching to benefit from large ViDT models . 3.1 RECONFIGURED ATTENTION MODULE . Applying the patch reduction and local attention schemes of Swin Transformer to the sequence-to-sequence paradigm is challenging because ( 1 ) the number of [ DET ] tokens must be maintained at a fixed scale and ( 2 ) [ DET ] tokens lack locality . To address this challenge , we introduce a reconfigured attention module ( RAM ) that decomposes a single global attention associated with [ PATCH ] and [ DET ] tokens into three different attention operations , namely [ PATCH ] × [ PATCH ] , [ DET ] × [ DET ] , and [ DET ] × [ PATCH ] attention . ( This reconfiguration scheme can be easily applied to other ViT variants with simple modification . ) Based on the decomposition , the efficient schemes of Swin Transformer are applied only to [ PATCH ] × [ PATCH ] attention , which is the heaviest part in computational complexity , without breaking the two constraints on [ DET ] tokens . As illustrated in Figure 3 , these modifications fully reuse all the parameters of Swin Transformer by sharing projection layers for [ DET ] and [ PATCH ] tokens , and perform the three different attention operations : • [ PATCH ] × [ PATCH ] Attention : The initial [ PATCH ] tokens are progressively calibrated across the attention layers such that they aggregate the key contents in the global feature map ( i.e. , the spatial form of [ PATCH ] tokens ) according to the attention weights , which are computed by 〈query , key〉 pairs .
For [ PATCH ] × [ PATCH ] attention , Swin Transformer performs local attention on each window partition , but its shifted window partitioning in successive blocks bridges the windows of the preceding layer , providing connections among partitions to capture global information . Without modifying this concept , we use the same policy to generate hierarchical [ PATCH ] tokens . Thus , the number of [ PATCH ] tokens is reduced by a factor of 4 at each stage ; the resolution of feature maps decreases from H/4 × W/4 to H/32 × W/32 over a total of four stages , where H and W denote the height and width of the input image , respectively . • [ DET ] × [ DET ] Attention : Like YOLOS , we append one hundred learnable [ DET ] tokens as the additional input to the [ PATCH ] tokens . As the number of [ DET ] tokens specifies the number of objects to detect , their number must be maintained at a fixed scale over the transformer layers . In addition , [ DET ] tokens do not have any locality , unlike the [ PATCH ] tokens . Hence , for [ DET ] × [ DET ] attention , we perform global self-attention while maintaining their number ; this attention helps each [ DET ] token localize a different object by capturing the relationships between them . • [ DET ] × [ PATCH ] Attention : This is cross-attention between [ DET ] and [ PATCH ] tokens , which produces an object embedding per [ DET ] token . For each [ DET ] token , the key contents in the [ PATCH ] tokens are aggregated to represent the target object . Since the [ DET ] tokens specify different objects , this produces different object embeddings for the diverse objects in the image . Without the cross-attention , it is infeasible to realize a standalone object detector . As shown in Figure 3 , ViDT binds [ DET ] × [ DET ] and [ DET ] × [ PATCH ] attention to process them at once to increase efficiency .
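Schematically, the decomposition above replaces one global pass over the concatenated token sequence with two attention calls. The sketch below uses plain scaled dot-product attention and deliberately omits projection layers, window partitioning, and multi-head details, so it is a simplified assumption-laden illustration rather than ViDT's actual module:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(queries, keys, values):
    # Plain scaled dot-product attention over lists of vectors.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def ram(patch, det):
    # [PATCH] x [PATCH]: patch self-attention (window-local in the real
    # Swin blocks; done globally here only to keep the sketch short).
    new_patch = attend(patch, patch, patch)
    # [DET] x [DET] and [DET] x [PATCH] bound into one pass: each [DET]
    # query attends over the concatenation of [DET] and [PATCH] tokens.
    new_det = attend(det, det + patch, det + patch)
    return new_patch, new_det
```

Note that the number of [ DET ] tokens is preserved across the call, satisfying the fixed-scale constraint.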
We replace all the attention modules in Swin Transformer with the proposed RAM , which receives [ PATCH ] and [ DET ] tokens ( as shown in “ Body ” of Figure 2 ( c ) ) and then outputs their calibrated new tokens by performing the three different attention operations in parallel . Positional Encoding . ViDT adopts different positional encodings for different types of attention . For [ PATCH ] × [ PATCH ] attention , we use the relative position bias ( Hu et al. , 2019 ) originally used in Swin Transformer . In contrast , a learnable positional encoding is added to the [ DET ] tokens for [ DET ] × [ DET ] attention because there is no particular order between [ DET ] tokens . However , for [ DET ] × [ PATCH ] attention , it is crucial to inject spatial bias into the [ PATCH ] tokens due to the permutation-equivariance of transformers , which ignores the spatial information of the feature map . Thus , ViDT adds a sinusoidal-based spatial positional encoding to the feature map , which is reconstructed from the [ PATCH ] tokens for [ DET ] × [ PATCH ] attention , as can be seen on the left side of Figure 3 . We present a thorough analysis of various spatial positional encodings in Section 4.2.1 . Use of [ DET ] × [ PATCH ] Attention . Applying cross-attention between [ DET ] and [ PATCH ] tokens adds additional computational overhead to Swin Transformer , especially when it is activated at a bottom layer due to the large number of [ PATCH ] tokens . To minimize this computational overhead , ViDT only activates the cross-attention at the last stage ( the top level of the pyramid ) of Swin Transformer , which consists of two transformer layers that receive [ PATCH ] tokens of size H/32 × W/32 . Thus , only self-attention for [ DET ] and [ PATCH ] tokens is performed for the remaining stages except the last one .
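The sinusoidal spatial positional encoding mentioned above can be sketched generically for an h × w feature map. The channel layout (half the channels for the y position, half for the x position) and the 10000 temperature follow the common sinusoidal scheme; the exact normalization in ViDT's implementation may differ, so treat this as an assumed generic form:

```python
import math

def sinusoidal_2d_pos_encoding(h, w, d):
    # Returns an h x w grid of d-dimensional encodings (d divisible by 4):
    # channels [0, d/2) encode the y position, channels [d/2, d) the x
    # position, each as interleaved sine/cosine pairs over geometric
    # frequencies (assumed layout, not ViDT's exact one).
    assert d % 4 == 0
    pe = [[[0.0] * d for _ in range(w)] for _ in range(h)]
    quarter = d // 4
    for y in range(h):
        for x in range(w):
            for i in range(quarter):
                freq = 1.0 / (10000 ** (2 * i / (d // 2)))
                pe[y][x][2 * i] = math.sin(y * freq)
                pe[y][x][2 * i + 1] = math.cos(y * freq)
                pe[y][x][d // 2 + 2 * i] = math.sin(x * freq)
                pe[y][x][d // 2 + 2 * i + 1] = math.cos(x * freq)
    return pe
```

Such an encoding gives every [ PATCH ] position a distinct spatial signature, which is what the [ DET ] × [ PATCH ] cross-attention needs to localize boxes.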
In Section 4.2.2 we show that this design choice helps achieve the highest FPS , while achieving similar detection performance as when cross-attention is enabled at every stage . We provide more details on RAM including its complexity analysis and algorithmic design in Appendix A . | This paper proposes ViDT, a high-performance Detection Transformer with an impressive accuracy-speed trade-off. A lot of experiments as well as ablation studies are conducted to prove the effectiveness of the proposed detector. The design principle of ViDT can also generalize and inspire future detector design. Moreover, this paper also includes an in-depth analysis of several current Detection Transformer architectures. | SP:c5fa77eef56d8878f58c99c5c50902a3f739e2fb |
ViDT: An Efficient and Effective Fully Transformer-based Object Detector | Transformers are transforming the landscape of computer vision , especially for recognition tasks . Detection transformers are the first fully end-to-end learning systems for object detection , while vision transformers are the first fully transformer-based architecture for image classification . In this paper , we integrate Vision and Detection Transformers ( ViDT ) to build an effective and efficient object detector . ViDT introduces a reconfigured attention module to extend the recent Swin Transformer to be a standalone object detector , followed by a computationally efficient transformer decoder that exploits multi-scale features and auxiliary techniques essential to boost the detection performance without much increase in computational load . Extensive evaluation results on the Microsoft COCO benchmark dataset demonstrate that ViDT obtains the best AP and latency trade-off among existing fully transformer-based object detectors , and achieves 49.2AP owing to its high scalability for large models . We will release the code and trained models upon acceptance . 1 INTRODUCTION . Object detection is the task of predicting both bounding boxes and object classes for each object of interest in an image . Modern deep object detectors heavily rely on meticulously designed components , such as anchor generation and non-maximum suppression ( Papageorgiou & Poggio , 2000 ; Liu et al. , 2020 ) . As a result , the performance of these object detectors depend on specific postprocessing steps , which involve complex pipelines and make fully end-to-end training difficult . Motivated by the recent success of Transformers ( Vaswani et al. , 2017 ) in NLP , numerous studies introduce Transformers into computer vision tasks . Carion et al . 
( 2020 ) proposed Detection Transformers ( DETR ) to eliminate the meticulously designed components by employing a simple transformer encoder and decoder architecture , which serves as a neck component to bridge a CNN body for feature extraction and a detector head for prediction . Thus , DETR enables end-to-end training of deep object detectors . By contrast , Dosovitskiy et al . ( 2021 ) showed that a fully-transformer backbone without any convolutional layers , Vision Transformer ( ViT ) , achieves the state-of-theart results in image classification benchmarks . Approaches like ViT have been shown to learn effective representation models without strong human inductive biases , e.g. , meticulously designed components in object detection ( DETR ) , locality-aware designs such as convolutional layers and pooling mechanisms . However , there is a lack of effort to synergize DETR and ViT for a better object detection architecture . In this paper , we integrate both approaches to build a fully transformer-based , end-to-end object detector that achieves state-of-the-art performance without increasing computational load . A straightforward integration of DETR and ViT can be achieved by replacing the ResNet backbone ( body ) of DETR with ViT – Figure 2 ( a ) . This naive integration , DETR ( ViT ) 1 , has two limitations . First , the canonical ViT suffers from the quadratic increase in complexity w.r.t . image size , resulting in the lack of scalability . Furthermore , the attention operation at the transformer encoder and decoder ( i.e. , the “ neck ” component ) adds significant computational overhead to the detector . Therefore , the naive integration of DETR and ViT show very high latency – the blue lines of Figure 1 . Recently , Fang et al . 
( 2021 ) propose an extension of ViT to object detection , named YOLOS , by appending the detection tokens [ DET ] to the patch tokens [ PATCH ] ( Figure 2 ( b ) ) , where [ DET ] tokens are learnable embeddings to specify different objects to detect . YOLOS is a neck-free architecture and removes the additional computational costs from the neck encoder . However , YOLOS shows limited performance because it can not use additional optimization techniques on the neck architecture , e.g. , multi-scale features and auxiliary loss . In addition , YOLOS can only accommodate the canonical transformer due to its architectural limitation , resulting in a quadratic complexity w.r.t . the input size . In this paper , we propose a novel integration of Vision and Detection Transformers ( ViDT ) ( Figure 2 ( c ) ) . Our contributions are three-folds . First , ViDT introduces a modified attention mechanism , named Reconfigured Attention Module ( RAM ) , that facilitates any ViT variant to handle the appended [ DET ] and [ PATCH ] tokens for object detection . Thus , we can modify the latest Swin Transformer ( Liu et al. , 2021 ) backbone with RAM to be an object detector and obtain high scalability using its local attention mechanism with linear complexity . Second , ViDT adopts a lightweight encoder-free neck architecture to reduce the computational overhead while still enabling the additional optimization techniques on the neck module . Note that the neck encoder is unnecessary because RAM directly extracts fine-grained representation for object detection , i.e. , [ DET ] tokens . As a result , ViDT obtains better performance than neck-free counterparts . Finally , we introduce a new concept of token matching for knowledge distillation , which brings additional performance gains from a large model to a small model without compromising detection efficiency . ViDT has two architectural advantages over existing approaches . 
First , similar to YOLOS , ViDT takes [ DET ] tokens as the additional input , maintaining a fixed scale for object detection , but constructs hierarchical representations starting with small-sized image patches for [ PATCH ] tokens . Second , ViDT can use the hierarchical ( multi-scale ) features and additional techniques without a significant computation overhead . Therefore , as a fully transformer-based object detector , ViDT facilitates better integration of vision and detection transformers . Extensive experiments on Microsoft COCO benchmark ( Lin et al. , 2014 ) show that ViDT is highly scalable even for large ViT models , such as Swin-base with 0.1 billion parameters , and achieves the best AP and latency trade-off . 2 PRELIMINARIES . Vision transformers process an image as a sequence of small-sized image patches , thereby allowing all the positions in the image to interact in attention operations ( i.e. , global attention ) . However , the canonical ViT ( Dosovitskiy et al. , 2021 ) is not compatible with a broad range of vision tasks due to its high computational complexity , which increases quadratically with respect to image size . The Swin Transformer ( Liu et al. , 2021 ) resolves the complexity issue by introducing the notion of shifted windows that support local attention and patch reduction operations , thereby improving compatibility for dense prediction task such as object detection . A few approaches use vision transformers as detector backbones but achieve limited success ( Heo et al. , 2021 ; Fang et al. , 2021 ) . 1We refer to each model based on the combinations of its body and neck . For example , DETR ( DeiT ) indicates that DeiT ( vision transformers ) is integrated with DETR ( detection transformers ) . Detection transformers eliminate the meticulously designed components ( e.g. , anchor generation and non-maximum suppression ) by combining convolutional network backbones and Transformer encoder-decoders . 
While the canonical DETR ( Carion et al. , 2020 ) achieves high detection performance , it suffers from very slow convergence compared to previous detectors . For example , DETR requires 500 epochs while the conventional Faster R-CNN ( Ren et al. , 2015 ) training needs only 37 epochs ( Wu et al. , 2019 ) . To mitigate the issue , Zhu et al . ( 2021 ) propose Deformable DETR which introduces deformable attention for utilizing multi-scale features as well as expediting the slow training convergence of DETR . In this paper , we use the Deformable DETR as our base detection transformer framework and integrate it with the recent vision transformers . DETR ( ViT ) is a straightforward integration of DETR and ViT , which uses ViT as a feature extractor , followed by the transformer encoder-decoder in DETR . As illustrated in Figure 2 ( a ) , it is a body–neck–head structure ; the representation of input [ PATCH ] tokens are extracted by the ViT backbone and then directly fed to the transformer-based encoding and decoding pipeline . To predict multiple objects , a fixed number of learnable [ DET ] tokens are provided as additional input to the decoder . Subsequently , output embeddings by the decoder produce final predictions through the detection heads for classification and box regression . Since DETR ( ViT ) does not modify the backbone at all , it can be flexibly changed to any latest ViT model , e.g. , Swin Transformer . Additionally , its neck decoder facilitates the aggregation of multi-scale features and the use of additional techniques , which help detect objects of different sizes and speed up training ( Zhu et al. , 2021 ) . However , the attention operation at the neck encoder adds significant computational overhead to the detector . In contrast , ViDT resolves this issue by directly extracting fine-grained [ DET ] features from Swin Transformer with RAM without maintaining the transformer encoder in the neck architecture . YOLOS ( Fang et al. 
, 2021 ) is a canonical ViT architecture for object detection with minimal modifications . As illustrated in Figure 2 ( b ) , YOLOS achieves a neck-free structure by appending randomly initialized learnable [ DET ] tokens to the sequence of input [ PATCH ] tokens . Since all the embeddings for [ PATCH ] and [ DET ] tokens interact via global attention , the final [ DET ] tokens are generated by the fine-tuned ViT backbone and then directly generate predictions through the detection heads without requiring any neck layer . While the naive DETR ( ViT ) suffers from the computational overhead from the neck layer , YOLOS enjoys efficient computations by treating the [ DET ] tokens as additional input for ViT . YOLOS shows that 2D object detection can be accomplished in a pure sequence-to-sequence manner , but this solution entails two inherent limitations : 1 ) YOLOS inherits the drawback of the canonical ViT ; the high computational complexity attributed to the global attention operation . As illustrated in Figure 1 , YOLOS shows very poor latency compared with other fully transformer-based detectors , especially when its model size becomes larger , i.e. , small→ base . Thus , YOLOS is not scalable for the large model . 2 ) YOLOS can not benefit from using additional techniques essential for better performance , e.g. , multi-scale features , due to the absence of the neck layer . Although YOLOS used the same DeiT backbone with Deformable DETR ( DeiT ) , its AP was lower than the straightforward integration . In contrast , the encoder-free neck architecture of ViDT enjoys the additional optimization techniques from Zhu et al . ( 2021 ) , resulting in the faster convergence and the better performance . Further , our RAM enables to combine Swin Transformer and the sequence-to-sequence paradigm for detection . 3 VIDT : VISION AND DETECTION TRANSFORMERS . 
ViDT first reconfigures the attention model of Swin Transformer to support standalone object detection while fully reusing the parameters of Swin Transformer . Next , it incorporates an encoder-free neck layer to exploit multi-scale features and two essential techniques : auxiliary decoding loss and iterative box refinement . We further introduce knowledge distillation with token matching to benefit from large ViDT models . 3.1 RECONFIGURED ATTENTION MODULE . Applying patch reduction and local attention scheme of Swin Transformer to the sequence-tosequence paradigm is challenging because ( 1 ) the number of [ DET ] tokens must be maintained at a fixed-scale and ( 2 ) the lack of locality between [ DET ] tokens . To address this challenge , we introduce a reconfigured attention module ( RAM ) 2 that decomposes a single global attention associated with [ PATCH ] and [ DET ] tokens into the three different attention , namely [ PATCH ] × [ PATCH ] , [ DET ] × [ DET ] , 2This reconfiguration scheme can be easily applied to other ViT variants with simple modification . and [ DET ] × [ PATCH ] attention . Based on the decomposition , the efficient schemes of Swin Transformer are applied only to [ PATCH ] × [ PATCH ] attention , which is the heaviest part in computational complexity , without breaking the two constraints on [ DET ] tokens . As illustrated in Figure 3 , these modifications fully reuse all the parameters of Swin Transformer by sharing projection layers for [ DET ] and [ PATCH ] tokens , and perform the three different attention operations : • [ PATCH ] × [ PATCH ] Attention : The initial [ PATCH ] tokens are progressively calibrated across the attention layers such that they aggregate the key contents in the global feature map ( i.e. , spatial form of [ PATCH ] tokens ) according to the attention weights , which are computed by 〈query , key〉 pairs . 
For [ PATCH ] × [ PATCH ] attention , Swin Transformer performs local attention on each window partition , but its shifted window partitioning in successive blocks bridges the windows of the preceding layer , providing connections among partitions to capture global information . Without modifying this concept , we use the same policy to generate hierarchical [ PATCH ] tokens . Thus , the number of [ PATCH ] tokens is reduced by a factor of 4 at each stage ; the resolution of feature maps decreases from H/4 × W/4 to H/32 × W/32 over a total of four stages , where H and W denote the height and width of the input image , respectively . • [ DET ] × [ DET ] Attention : Like YOLOS , we append one hundred learnable [ DET ] tokens as the additional input to the [ PATCH ] tokens . As the number of [ DET ] tokens specifies the number of objects to detect , their number must be maintained at a fixed scale over the transformer layers . In addition , [ DET ] tokens do not have any locality , unlike the [ PATCH ] tokens . Hence , for [ DET ] × [ DET ] attention , we perform global self-attention while maintaining their number ; this attention helps each [ DET ] token localize a different object by capturing the relationships among them . • [ DET ] × [ PATCH ] Attention : This is cross-attention between [ DET ] and [ PATCH ] tokens , which produces an object embedding per [ DET ] token . For each [ DET ] token , the key contents in [ PATCH ] tokens are aggregated to represent the target object . Since the [ DET ] tokens specify different objects , it produces different object embeddings for diverse objects in the image . Without the cross-attention , it is infeasible to realize a standalone object detector . As shown in Figure 3 , ViDT binds [ DET ] × [ DET ] and [ DET ] × [ PATCH ] attention to process them at once to increase efficiency . 
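As a rough illustration of this binding ( a minimal NumPy sketch under stated assumptions , not the actual ViDT implementation ; the function and weight names are hypothetical , and details such as multi-head splitting , positional encodings , and windowed [ PATCH ] attention are omitted ) , the [ DET ] × [ DET ] and [ DET ] × [ PATCH ] attention can be computed in one pass by letting [ DET ] queries attend over the concatenation of [ DET ] and [ PATCH ] keys and values :

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def bound_det_attention(det, patch, Wq, Wk, Wv):
    """Bind [DET]x[DET] self-attention and [DET]x[PATCH] cross-attention:
    each [DET] query attends over the concatenated [DET] and [PATCH]
    tokens, sharing one set of projection weights (hypothetical sketch)."""
    q = det @ Wq                                   # (n_det, d)
    kv = np.concatenate([det, patch], axis=0)      # (n_det + n_patch, d_in)
    k, v = kv @ Wk, kv @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (n_det, n_det + n_patch)
    return softmax(scores) @ v                     # one pass for both attentions
```

Because the two attention types share queries and projections , a single matrix multiply over the concatenated sequence replaces two separate attention calls , which is the efficiency gain the binding is after .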
We replace all the attention modules in Swin Transformer with the proposed RAM , which receives [ PATCH ] and [ DET ] tokens ( as shown in “ Body ” of Figure 2 ( c ) ) and then outputs their calibrated new tokens by performing the three different attention operations in parallel . Positional Encoding . ViDT adopts different positional encodings for different types of attention . For [ PATCH ] × [ PATCH ] attention , we use the relative position bias ( Hu et al. , 2019 ) originally used in Swin Transformer . In contrast , a learnable positional encoding is added to the [ DET ] tokens for [ DET ] × [ DET ] attention because there is no particular order among [ DET ] tokens . However , for [ DET ] × [ PATCH ] attention , it is crucial to inject spatial bias into the [ PATCH ] tokens due to the permutation-equivariance of transformers , which ignores the spatial information of the feature map . Thus , ViDT adds a sinusoidal-based spatial positional encoding to the feature map , which is reconstructed from the [ PATCH ] tokens for [ DET ] × [ PATCH ] attention , as can be seen from the left side of Figure 3 . We present a thorough analysis of various spatial positional encodings in Section 4.2.1 . Use of [ DET ] × [ PATCH ] Attention . Applying cross-attention between [ DET ] and [ PATCH ] tokens adds additional computational overhead to Swin Transformer , especially when it is activated at the bottom layer due to the large number of [ PATCH ] tokens . To minimize such computational overhead , ViDT only activates the cross-attention at the last stage ( the top level of the pyramid ) of Swin Transformer , which consists of two transformer layers that receive [ PATCH ] tokens of size H/32 × W/32 . Thus , only self-attention for [ DET ] and [ PATCH ] tokens is performed for the remaining stages except the last one . 
In Section 4.2.2 we show that this design choice helps achieve the highest FPS , while achieving similar detection performance as when cross-attention is enabled at every stage . We provide more details on RAM including its complexity analysis and algorithmic design in Appendix A . | The paper builds upon the recent advances in transformer based image classification methods (ViT variants) and detection methods (DETR variants). It argues that naively replacing the conv feature backbone in DETR with a ViT based one, is problematic due to (i) the quadratic complexity of the self attention module in ViT, and (ii) then again the attention module in the transformer encoder decoder part (which they call "neck"). To remedy, it proposes to build on top of Swin Transformer backbone, by proposing a novel reconfigured attention module (RAM), and further removing the encoder-decoder in the neck, replacing it with only a lightweight decoder. | SP:c5fa77eef56d8878f58c99c5c50902a3f739e2fb |
Graphon based Clustering and Testing of Networks: Algorithms and Theory | 1 INTRODUCTION . Machine learning on graphs has evolved considerably over the past two decades . The traditional view towards network analysis is limited to modelling interactions among entities of interest , for instance social networks or world wide web , and learning algorithms based on graph theory have been commonly used to solve these problems ( Von Luxburg , 2007 ; Yan et al. , 2006 ) . However , recent applications in bioinformatics and other disciplines require a different perspective , where the networks are the quantities of interest . For instance , it is of practical interest to classify protein structures as enzyme or non-enzyme ( Dobson & Doig , 2003 ) or detect topological changes in brain networks caused by Alzheimer ’ s disease ( Stam et al. , 2007 ) . In this paper , learning from network-valued data refers to clustering where each network is treated as an entity , as opposed to the traditional network analysis problems that involve a single network of interactions ( Newman , 2003 ) . Machine learning on network-valued data has been an active area of research in recent years , although most works focus on the network classification problem . The generic approach is to convert the network-valued data into a standard representation . Graph neural networks are commonly used for network embedding , that is , finding Euclidean representations of each network that can be further used in standard machine learning models ( Narayanan et al. , 2017 ; Xu et al. , 2019 ) . In contrast , graph kernels capture similarities between pairs of networks that can be used in kernel based learning algorithms ( Shervashidze et al. , 2011 ; Kondor & Pan , 2016 ; Togninalli et al. , 2019 ) . In particular , the graph neural tangent kernel defines a graph kernel that corresponds to infinitely wide graph neural networks , and typically outperforms neural networks in classification tasks ( Du et al. 
, 2019 ) . A more classical equivalent for graph kernels is to define metrics that characterise the distances between pairs of graphs ( Bunke & Shearer , 1998 ) , but there has been limited research on designing efficient graph distances and developing algorithms for clustering network-valued data . The motivation for this paper stems from two shortcomings in the literature on network-valued data analysis : first , the efficacy of existing kernels or embeddings have not been studied beyond network classification , and second is the lack of theoretical analysis of these methods , particularly in the small sample setting . Generalisation error bounds for graph kernel based learning exist ( Du et al. , 2019 ) , but these bounds , based on learning theory , are meaningful only when many networks are available . However , in many applications , one needs to learn from a small population of large networks and , in such cases , an informative statistical analysis should consider the small sample , large graph regime . To address this issue , we take inspiration from the recent statistics literature on graph two-sample testing—given two ( populations of ) large graphs , the goal is to decide if they are from the same statistical model or not . Although most theoretical studies in graph two-sample testing focus on graphs with vertex correspondence ( Tang et al. , 2017a ; Ghoshdastidar & von Luxburg , 2018 ) , some works address the problem of testing graphs on different vertex sets either by defining distances between graphs ( Tang et al. , 2017b ; Agterberg et al. , 2020 ) or by representing networks in terms of pre-specified network statistics ( Ghoshdastidar et al. , 2017 ) . The use of network statistics for clustering network-valued data is studied in Mukherjee et al . ( 2017 ) . Another fundamental approach for dealing with graphs of different sizes is graph matching , where the objective is to determine the vertex correspondence . 
Graph matching is often solved by formulating it as an optimization problem ( Zaslavskiy et al. , 2008 ; Guo et al. , 2019 ) or defining graph edit distance between the graphs ( Riesen & Bunke , 2009 ; Gao et al. , 2010 ) . Although there is extensive research on graph matching , the efficacy of these methods in clustering network-valued data remains unexplored . Contribution and organisation . In this work , we follow the approach of defining meaningful graph distances based on statistical models , and use the proposed graph distance in the context of learning from networks without vertex correspondence . In particular , we propose graph distances based on graphons . Graphons are symmetric bivariate functions that represent the limiting structure for a sequence of graphs with increasing number of nodes ( Lovász & Szegedy , 2006 ) , but can also be viewed as a nonparametric statistical model for exchangeable random graphs ( Diaconis & Janson , 2007 ; Bickel & Chen , 2009 ) . The latter perspective is useful for the purpose of machine learning since it allows us to view the multiple graphs as random samples drawn from one or more graphon models . This perspective forms the basis of our contributions , which are listed below : 1 ) In Section 2 , we propose a distance between two networks that do not have vertex correspondence and could have different numbers of vertices . We view the networks as random samples from ( unknown ) graphons , and propose a graph distance that estimates the L2-distance between the graphons . The distance is inspired by the sorting-and-smoothing graphon estimator ( Chan & Airoldi , 2014 ) . 2 ) In Section 3 , we present two algorithms for clustering network-valued data based on the proposed graph distance : a distance-based spectral clustering algorithm , and a similarity based semi-definite programming ( SDP ) approach . 
We derive performance guarantees for both algorithms under the assumption that the networks are sampled from graphons satisfying certain smoothness conditions . 3 ) We empirically compare the performance of our algorithms with other clustering strategies based on graph kernels , graph matching , network statistics etc . and show that , on both simulated and real data , our graph distance-based spectral clustering algorithm outperforms others while the SDP approach also shows reasonable performance , and they also scale to large networks ( Section 3.3 ) . 4 ) Inspired by the success of the proposed graph distance in clustering , we use the distance for graph two-sample testing . In Section 4 , we show that the proposed two-sample test is statistically consistent for large graphs , and also demonstrate the efficacy of the test through numerical simulation . We provide further discussion in Section 5 and present the proofs of theoretical results in Appendix . 2 GRAPH DISTANCE BASED ON GRAPHONS . Clustering or testing of multiple networks requires a notion of distance between the networks . In this section , we present a transformation that converts graphs of different sizes into a fixed size representation , and subsequently , propose a graph distance inspired by the theory of graphons . We first provide some background on graphons and graphon estimation . Graphon has been studied in the literature from two perspectives : as limiting structure for infinite sequence of growing graphs ( Lovász & Szegedy , 2006 ) , or as exchangeable random graph model . In this paper , we follow the latter perspective . A random graph is said to be exchangeable if its distribution is invariant under permutation of nodes . Diaconis & Janson ( 2007 ) showed that any statistical model that generates exchangeable random graphs can be characterised by graphons , as introduced by Lovász & Szegedy ( 2006 ) . 
Formally , a graphon is a symmetric measurable continuous function w : [ 0 , 1 ]² → [ 0 , 1 ] , where w ( x , y ) can be interpreted as the link probability between two nodes of the graph that are assigned values x and y , respectively . This interpretation propounds the following two-stage sampling procedure for graphons . To sample a random graph G with n nodes from a graphon w , in the first stage , one samples n variables U1 , . . . , Un uniformly from [ 0 , 1 ] and constructs a latent mapping between the sampled points and the node labels . In the second stage , edges between any two nodes i , j are randomly added based on the link probability w ( Ui , Uj ) . Mathematically , if we abuse notation to denote the adjacency matrix by G ∈ { 0 , 1 }^( n×n ) , we have U1 , . . . , Un iid∼ Uniform [ 0 , 1 ] and Gij | Ui , Uj ∼ Bernoulli ( w ( Ui , Uj ) ) for all i < j . We consider problems involving multiple networks sampled independently from the same ( or different ) graphons . We make the following smoothness assumptions on the graphons . Assumption 1 ( Lipschitz continuous ) A graphon w is Lipschitz continuous with constant L if |w ( u , v ) − w ( u′ , v′ ) | ≤ L √( ( u − u′ )² + ( v − v′ )² ) for every u , v , u′ , v′ ∈ [ 0 , 1 ] . Assumption 2 ( Two-sided Lipschitz degree ) A graphon w has two-sided Lipschitz degree with constants λ1 , λ2 > 0 if its expected degree function g , defined by g ( u ) = ∫₀¹ w ( u , v ) dv , satisfies λ2 |u − u′| ≤ |g ( u ) − g ( u′ ) | ≤ λ1 |u − u′| for every u , u′ ∈ [ 0 , 1 ] . One of the challenges in graphon estimation is due to the issue of non-identifiability , that is , different graphon functions w can generate the same random graph model . In particular , two graphons w and w′ generate the same random graph model if they are weakly isomorphic—there exist two measure-preserving transformations φ , φ′ : [ 0 , 1 ] → [ 0 , 1 ] such that w ( φ ( u ) , φ ( v ) ) = w′ ( φ′ ( u ) , φ′ ( v ) ) . 
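The two-stage sampling procedure described above can be sketched in a few lines of NumPy ( a hedged illustration ; the helper name `sample_graphon_graph` and the example graphon are ours , not from the paper ) :

```python
import numpy as np

def sample_graphon_graph(w, n, seed=None):
    """Sample an undirected graph from graphon w: first draw latent
    positions U_i ~ Uniform[0,1], then add each edge i < j independently
    with probability w(U_i, U_j)."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(size=n)                        # stage 1: latent variables
    P = w(U[:, None], U[None, :])                  # link probabilities w(U_i, U_j)
    G = (rng.uniform(size=(n, n)) < P).astype(int) # stage 2: Bernoulli edges
    G = np.triu(G, k=1)                            # keep i < j, no self-loops
    return G + G.T                                 # symmetric adjacency matrix

# e.g. a simple product graphon w(x, y) = x * y
G = sample_graphon_graph(lambda x, y: x * y, n=300, seed=0)
```

Note that `w` must be vectorised ( it is called on broadcast arrays ) ; any Lipschitz graphon in the sense of Assumption 1 fits this interface .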
Moreover , the converse also holds , meaning that such transformations are known to be the only source of non-identifiability ( Diaconis & Janson , 2007 ) . This weak isomorphism induces equivalence classes on the space of graphons . Since our goal is only to cluster graphs belonging to random graph models , we simply make the following assumption on our graphons . Assumption 3 ( Equivalence classes ) Any reference to K graphons , w1 , . . . , wK , assumes that , for every i , j , either wi = wj or wi and wj belong to different equivalence classes . Furthermore , without loss of generality , we assume that every graphon wi is represented such that the corresponding degree function gi is non-decreasing . Remark on the necessity of Assumptions 1–3 . Assumption 1 is standard in the graphon estimation literature ( Klopp et al. , 2017 ) since it avoids graphons corresponding to inhomogeneous random graph models . It is known that two graphs from widely separated inhomogeneous models ( in L2-distance ) are statistically indistinguishable ( Ghoshdastidar et al. , 2020 ) , and hence , it is essential to ignore such models to derive meaningful guarantees . Assumption 2 ensures that , under a measure-preserving transformation , the graphon has a strictly increasing degree function , which is a canonical representation of an equivalence class of graphons ( Bickel & Chen , 2009 ) . Assumption 3 is needed since graphons can only be estimated up to measure-preserving transformation . As noted above , it is inconsequential for all practical purposes but simplifies the theoretical exposition . Graph transformation . In order to deal with multiple graphs and measure distances among pairs of graphs , we require a transformation that maps all graphs into a common metric space—the space of all n0 × n0 symmetric matrices for some integer n0 . While the graphon estimation literature provides several consistent estimators ( Klopp et al. , 2017 ; Zhang et al. 
, 2017 ) , only the histogram-based sorting-and-smoothing graphon estimator of Chan & Airoldi ( 2014 ) can be adapted to meet the above requirement . We use the following graph transformation , inspired by Chan & Airoldi ( 2014 ) . The adjacency matrix G of size n × n is first reordered based on a non-unique permutation σ , such that the empirical degree based on this permutation is monotonically increasing . The degree-sorted adjacency matrix is denoted by Gσ . It is then transformed to a ‘ histogram ’ A ∈ R^( n0×n0 ) as A_ij = ( 1/h² ) ∑_{i1=1}^{h} ∑_{j1=1}^{h} Gσ_{( i−1 ) h+i1 , ( j−1 ) h+j1} , where h = ⌊ n/n0 ⌋ and ⌊ · ⌋ is the floor function . ( 1 ) Proposed graph distance . Given two directed or undirected graphs G1 and G2 with n1 and n2 nodes , respectively , we apply the transformation ( 1 ) to both the graphs with n0 ≤ min { n1 , n2 } . We propose to use the graph distance d ( G1 , G2 ) = ( 1/n0 ) ‖A1 − A2‖F , ( 2 ) where A1 and A2 denote the transformed matrices and ‖ · ‖F denotes the matrix Frobenius norm . Proposition 1 shows that , if G1 and G2 are sampled from two graphons , then the graph distance ( 2 ) consistently estimates the L2-distance between the two graphons , which is defined as ‖w1 − w2‖²_L2 = ∫₀¹ ∫₀¹ ( w1 ( x , y ) − w2 ( x , y ) )² dx dy . Proposition 1 ( Graph distance is consistent ) Let w1 and w2 satisfy Assumptions 1–3 . Let G1 and G2 be random graphs with at least n nodes sampled from the graphons w1 and w2 , respectively . If n → ∞ and n0 is chosen such that n0² log n / n → 0 , then with high probability ( w.h.p . ) , | ‖w1 − w2‖_L2 − d ( G1 , G2 ) | = O ( 1/n0 ) . ( 3 ) Proof sketch . We define a novel technique for approximating the graphon . The proof in Appendix A.1 first establishes that the approximation error is bounded using Assumption 1 . Consequently , a relation between approximated graphons and transformed graphs is derived using lemmas from Chan & Airoldi ( 2014 ) . Proposition 1 is subsequently proved using the above two results . 
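A minimal NumPy sketch of the degree-sort-and-histogram transformation ( 1 ) and the distance ( 2 ) ( illustrative only ; the function names are hypothetical , ties in the degree sort are broken arbitrarily , and the trailing n − n0·h rows/columns are simply dropped ) :

```python
import numpy as np

def transform(G, n0):
    """Sort nodes by degree, then average h x h blocks of the sorted
    adjacency matrix to get an n0 x n0 'histogram' A, with h = floor(n/n0)."""
    n = G.shape[0]
    h = n // n0
    order = np.argsort(G.sum(axis=1))             # non-decreasing empirical degree
    Gs = G[np.ix_(order, order)][:n0 * h, :n0 * h]
    return Gs.reshape(n0, h, n0, h).mean(axis=(1, 3))   # block averages

def graph_distance(G1, G2, n0):
    """d(G1, G2) = ||A1 - A2||_F / n0 on the transformed matrices."""
    return np.linalg.norm(transform(G1, n0) - transform(G2, n0), "fro") / n0
```

Since both graphs are mapped to n0 × n0 matrices , the distance is well defined even when n1 ≠ n2 , which is the point of the construction .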
Remark on Proposition 1 for sparse graphs . The defined sampling procedure for graphons generates dense graphs , deviating from real-world sparse networks . To adapt it to sparse graphs , one may modify the sampling procedure to Gij | Ui , Uj ∼ Bernoulli ( ρ w ( Ui , Uj ) ) , where ρ depends only on n ( Olhede & Wolfe , 2014 ) . Under this process , the consistency result in Proposition 1 remains unchanged for ρ = Ω ( √( log n / n ) ) ( proof in Appendix A.2 ) . This bound can not be improved to the expected real-world measure where ρ = Ω ( log n / n ) , because of the degree-sorting step in ( 1 ) . Nevertheless , our analysis allows for relatively sparser graphs with a strong consistency result . Notation . For ease of exposition , Proposition 1 as well as the main results are stated asymptotically using the standard O ( · ) and Ω ( · ) notations , which subsume absolute and Lipschitz constants . We use “ with high probability ” ( w.h.p . ) to state that the probability of an event converges to 1 as n → ∞ . 
Graphon based Clustering and Testing of Networks: Algorithms and Theory | 1 INTRODUCTION . Machine learning on graphs has evolved considerably over the past two decades . The traditional view towards network analysis is limited to modelling interactions among entities of interest , for instance social networks or world wide web , and learning algorithms based on graph theory have been commonly used to solve these problems ( Von Luxburg , 2007 ; Yan et al. , 2006 ) . However , recent applications in bioinformatics and other disciplines require a different perspective , where the networks are the quantities of interest . For instance , it is of practical interest to classify protein structures as enzyme or non-enzyme ( Dobson & Doig , 2003 ) or detect topological changes in brain networks caused by Alzheimer ’ s disease ( Stam et al. , 2007 ) . In this paper , learning from network-valued data refers to clustering where each network is treated as an entity , as opposed to the traditional network analysis problems that involve a single network of interactions ( Newman , 2003 ) . Machine learning on network-valued data has been an active area of research in recent years , although most works focus on the network classification problem . The generic approach is to convert the network-valued data into a standard representation . Graph neural networks are commonly used for network embedding , that is , finding Euclidean representations of each network that can be further used in standard machine learning models ( Narayanan et al. , 2017 ; Xu et al. , 2019 ) . In contrast , graph kernels capture similarities between pairs of networks that can be used in kernel based learning algorithms ( Shervashidze et al. , 2011 ; Kondor & Pan , 2016 ; Togninalli et al. , 2019 ) . In particular , the graph neural tangent kernel defines a graph kernel that corresponds to infinitely wide graph neural networks , and typically outperforms neural networks in classification tasks ( Du et al. 
, 2019 ) . A more classical equivalent for graph kernels is to define metrics that characterise the distances between pairs of graphs ( Bunke & Shearer , 1998 ) , but there has been limited research on designing efficient graph distances and developing algorithms for clustering network-valued data . The motivation for this paper stems from two shortcomings in the literature on network-valued data analysis : first , the efficacy of existing kernels or embeddings have not been studied beyond network classification , and second is the lack of theoretical analysis of these methods , particularly in the small sample setting . Generalisation error bounds for graph kernel based learning exist ( Du et al. , 2019 ) , but these bounds , based on learning theory , are meaningful only when many networks are available . However , in many applications , one needs to learn from a small population of large networks and , in such cases , an informative statistical analysis should consider the small sample , large graph regime . To address this issue , we take inspiration from the recent statistics literature on graph two-sample testing—given two ( populations of ) large graphs , the goal is to decide if they are from the same statistical model or not . Although most theoretical studies in graph two-sample testing focus on graphs with vertex correspondence ( Tang et al. , 2017a ; Ghoshdastidar & von Luxburg , 2018 ) , some works address the problem of testing graphs on different vertex sets either by defining distances between graphs ( Tang et al. , 2017b ; Agterberg et al. , 2020 ) or by representing networks in terms of pre-specified network statistics ( Ghoshdastidar et al. , 2017 ) . The use of network statistics for clustering network-valued data is studied in Mukherjee et al . ( 2017 ) . Another fundamental approach for dealing with graphs of different sizes is graph matching , where the objective is to determine the vertex correspondence . 
Graph matching is often solved by formulating it as an optimization problem ( Zaslavskiy et al. , 2008 ; Guo et al. , 2019 ) or defining graph edit distance between the graphs ( Riesen & Bunke , 2009 ; Gao et al. , 2010 ) . Although there is extensive research on graph matching , the efficacy of these methods in clustering network-valued data remains unexplored . Contribution and organisation . In this work , we follow the approach of defining meaningful graph distances based on statistical models , and use the proposed graph distance in the context of learning from networks without vertex correspondence . In particular , we propose graph distances based on graphons . Graphons are symmetric bivariate functions that represent the limiting structure for a sequence of graphs with increasing number of nodes ( Lovász & Szegedy , 2006 ) , but can also be viewed as a nonparametric statistical model for exchangeable random graphs ( Diaconis & Janson , 2007 ; Bickel & Chen , 2009 ) . The latter perspective is useful for the purpose of machine learning since it allows us to view the multiple graphs as random samples drawn from one or more graphon models . This perspective forms the basis of our contributions , which are listed below : 1 ) In Section 2 , we propose a distance between two networks that do not have vertex correspondence and could have different numbers of vertices . We view the networks as random samples from ( unknown ) graphons , and propose a graph distance that estimates the L2-distance between the graphons . The distance is inspired by the sorting-and-smoothing graphon estimator ( Chan & Airoldi , 2014 ) . 2 ) In Section 3 , we present two algorithms for clustering network-valued data based on the proposed graph distance : a distance-based spectral clustering algorithm , and a similarity based semi-definite programming ( SDP ) approach . 
We derive performance guarantees for both algorithms under the assumption that the networks are sampled from graphons satisfying certain smoothness conditions . 3 ) We empirically compare the performance of our algorithms with other clustering strategies based on graph kernels , graph matching , network statistics etc . and show that , on both simulated and real data , our graph distance-based spectral clustering algorithm outperforms others while the SDP approach also shows reasonable performance , and they also scale to large networks ( Section 3.3 ) . 4 ) Inspired by the success of the proposed graph distance in clustering , we use the distance for graph two-sample testing . In Section 4 , we show that the proposed two-sample test is statistically consistent for large graphs , and also demonstrate the efficacy of the test through numerical simulation . We provide further discussion in Section 5 and present the proofs of theoretical results in Appendix . 2 GRAPH DISTANCE BASED ON GRAPHONS . Clustering or testing of multiple networks requires a notion of distance between the networks . In this section , we present a transformation that converts graphs of different sizes into a fixed size representation , and subsequently , propose a graph distance inspired by the theory of graphons . We first provide some background on graphons and graphon estimation . Graphon has been studied in the literature from two perspectives : as limiting structure for infinite sequence of growing graphs ( Lovász & Szegedy , 2006 ) , or as exchangeable random graph model . In this paper , we follow the latter perspective . A random graph is said to be exchangeable if its distribution is invariant under permutation of nodes . Diaconis & Janson ( 2007 ) showed that any statistical model that generates exchangeable random graphs can be characterised by graphons , as introduced by Lovász & Szegedy ( 2006 ) . 
Formally , a graphon is a symmetric measurable continuous function w : [ 0 , 1 ]² → [ 0 , 1 ] , where w ( x , y ) can be interpreted as the link probability between two nodes of the graph that are assigned values x and y , respectively . This interpretation propounds the following two-stage sampling procedure for graphons . To sample a random graph G with n nodes from a graphon w , in the first stage , one samples n variables U1 , . . . , Un uniformly from [ 0 , 1 ] and constructs a latent mapping between the sampled points and the node labels . In the second stage , edges between any two nodes i , j are randomly added based on the link probability w ( Ui , Uj ) . Mathematically , if we abuse notation to denote the adjacency matrix by G ∈ { 0 , 1 }^( n×n ) , we have U1 , . . . , Un iid∼ Uniform [ 0 , 1 ] and Gij | Ui , Uj ∼ Bernoulli ( w ( Ui , Uj ) ) for all i < j . We consider problems involving multiple networks sampled independently from the same ( or different ) graphons . We make the following smoothness assumptions on the graphons . Assumption 1 ( Lipschitz continuous ) A graphon w is Lipschitz continuous with constant L if |w ( u , v ) − w ( u′ , v′ ) | ≤ L √( ( u − u′ )² + ( v − v′ )² ) for every u , v , u′ , v′ ∈ [ 0 , 1 ] . Assumption 2 ( Two-sided Lipschitz degree ) A graphon w has two-sided Lipschitz degree with constants λ1 , λ2 > 0 if its expected degree function g , defined by g ( u ) = ∫₀¹ w ( u , v ) dv , satisfies λ2 |u − u′| ≤ |g ( u ) − g ( u′ ) | ≤ λ1 |u − u′| for every u , u′ ∈ [ 0 , 1 ] . One of the challenges in graphon estimation is due to the issue of non-identifiability , that is , different graphon functions w can generate the same random graph model . In particular , two graphons w and w′ generate the same random graph model if they are weakly isomorphic—there exist two measure-preserving transformations φ , φ′ : [ 0 , 1 ] → [ 0 , 1 ] such that w ( φ ( u ) , φ ( v ) ) = w′ ( φ′ ( u ) , φ′ ( v ) ) . 
Moreover , the converse also holds , meaning that such transformations are known to be the only source of non-identifiability ( Diaconis & Janson , 2007 ) . This weak isomorphism induces equivalence classes on the space of graphons . Since our goal is only to cluster graphs belonging to random graph models , we simply make the following assumption on our graphons . Assumption 3 ( Equivalence classes ) Any reference to K graphons , w1 , . . . , wK , assumes that , for every i , j , either wi = wj or wi and wj belong to different equivalence classes . Furthermore , without loss of generality , we assume that every graphon wi is represented such that the corresponding degree function gi is non-decreasing . Remark on the necessity of Assumptions 1–3 . Assumption 1 is standard in the graphon estimation literature ( Klopp et al. , 2017 ) since it avoids graphons corresponding to inhomogeneous random graph models . It is known that two graphs from widely separated inhomogeneous models ( in L2-distance ) are statistically indistinguishable ( Ghoshdastidar et al. , 2020 ) , and hence , it is essential to ignore such models to derive meaningful guarantees . Assumption 2 ensures that , under a measure-preserving transformation , the graphon has a strictly increasing degree function , which is a canonical representation of an equivalence class of graphons ( Bickel & Chen , 2009 ) . Assumption 3 is needed since graphons can only be estimated up to measure-preserving transformation . As noted above , it is inconsequential for all practical purposes but simplifies the theoretical exposition . Graph transformation . In order to deal with multiple graphs and measure distances among pairs of graphs , we require a transformation that maps all graphs into a common metric space—the space of all n0 × n0 symmetric matrices for some integer n0 . While the graphon estimation literature provides several consistent estimators ( Klopp et al. , 2017 ; Zhang et al. 
, 2017 ) , only the histogram-based sorting-and-smoothing graphon estimator of Chan & Airoldi ( 2014 ) can be adapted to meet the above requirement . We use the following graph transformation , inspired by Chan & Airoldi ( 2014 ) . The adjacency matrix G of size n × n is first reordered based on a non-unique permutation σ , such that the empirical degree based on this permutation is monotonically increasing . The degree-sorted adjacency matrix is denoted by Gσ . It is then transformed to a ‘ histogram ’ A ∈ R^( n0×n0 ) as A_ij = ( 1/h² ) ∑_{i1=1}^{h} ∑_{j1=1}^{h} Gσ_{( i−1 ) h+i1 , ( j−1 ) h+j1} , where h = ⌊ n/n0 ⌋ and ⌊ · ⌋ is the floor function . ( 1 ) Proposed graph distance . Given two directed or undirected graphs G1 and G2 with n1 and n2 nodes , respectively , we apply the transformation ( 1 ) to both the graphs with n0 ≤ min { n1 , n2 } . We propose to use the graph distance d ( G1 , G2 ) = ( 1/n0 ) ‖A1 − A2‖F , ( 2 ) where A1 and A2 denote the transformed matrices and ‖ · ‖F denotes the matrix Frobenius norm . Proposition 1 shows that , if G1 and G2 are sampled from two graphons , then the graph distance ( 2 ) consistently estimates the L2-distance between the two graphons , which is defined as ‖w1 − w2‖²_L2 = ∫₀¹ ∫₀¹ ( w1 ( x , y ) − w2 ( x , y ) )² dx dy . Proposition 1 ( Graph distance is consistent ) Let w1 and w2 satisfy Assumptions 1–3 . Let G1 and G2 be random graphs with at least n nodes sampled from the graphons w1 and w2 , respectively . If n → ∞ and n0 is chosen such that n0² log n / n → 0 , then with high probability ( w.h.p . ) , | ‖w1 − w2‖_L2 − d ( G1 , G2 ) | = O ( 1/n0 ) . ( 3 ) Proof sketch . We define a novel technique for approximating the graphon . The proof in Appendix A.1 first establishes that the approximation error is bounded using Assumption 1 . Consequently , a relation between approximated graphons and transformed graphs is derived using lemmas from Chan & Airoldi ( 2014 ) . Proposition 1 is subsequently proved using the above two results . 
Remark on Proposition 1 for sparse graphs. The defined sampling procedure for graphons generates dense graphs, deviating from real-world sparse networks. To adapt it to sparse graphs, one may modify the sampling procedure to $G_{ij} \mid U_i, U_j \sim \mathrm{Bernoulli}(\rho\, w(U_i, U_j))$, where $\rho$ depends only on $n$ (Olhede & Wolfe, 2014). Under this process, the consistency result in Proposition 1 remains unchanged for $\rho = \Omega(\sqrt{\log n / n})$ (proof in Appendix A.2). This bound cannot be improved to the expected real-world regime where $\rho = \Omega(\log n / n)$, because of the degree-sorting step in (1). Nevertheless, our analysis allows for relatively sparser graphs with a strong consistency result. Notation. For ease of exposition, Proposition 1 as well as the main results are stated asymptotically using the standard $O(\cdot)$ and $\Omega(\cdot)$ notations, which subsume absolute and Lipschitz constants. We use "with high probability" (w.h.p.) to state that the probability of an event converges to 1 as $n \to \infty$. | The paper proposes a graph distance based on graphons that can operate in the small-sample-size regime; this is possible by exploiting a large number of nodes. The paper is meant to address two shortcomings that the authors have found in the literature: - a lack of study of current graph processing methods beyond graph-level classification, and - a lack of theoretically sound methods. The paper also shows theoretical guarantees associated with two existing clustering methods and a two-sample statistical test when operating with the proposed distance. | SP:789952dc98861a18ec301641a66f64eabc8a6e13
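The (possibly sparse) sampling procedure $G_{ij} \sim \mathrm{Bernoulli}(\rho\, w(U_i, U_j))$ discussed above can be sketched as follows; the particular graphon in the example is an arbitrary illustrative choice with a non-decreasing degree function:

```python
import numpy as np

def sample_graph(w, n, rho=1.0, rng=None):
    """Sample an undirected simple graph: U_i ~ Uniform[0, 1] i.i.d., then
    G_ij ~ Bernoulli(rho * w(U_i, U_j)) independently for i < j.
    rho = 1 recovers the dense model; per the remark above, Proposition 1
    extends to rho = Omega(sqrt(log n / n))."""
    rng = np.random.default_rng() if rng is None else rng
    U = rng.uniform(size=n)
    P = rho * w(U[:, None], U[None, :])    # edge-probability matrix via broadcasting
    coins = rng.random((n, n)) < P         # independent Bernoulli draws
    G = np.triu(coins, k=1)                # keep i < j only (no self-loops)
    return (G | G.T).astype(int)           # symmetrize

# illustrative graphon with a non-decreasing degree function
w = lambda x, y: 0.5 * (x + y)
G = sample_graph(w, 200, rho=0.5, rng=np.random.default_rng(0))
```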
Large-Scale Representation Learning on Graphs via Bootstrapping | 1 INTRODUCTION . Graphs provide a powerful abstraction for complex datasets that arise in a variety of applications such as social networks , transportation networks , and biological sciences ( Hamilton et al. , 2017 ; DerrowPinion et al. , 2021 ; Zitnik & Leskovec , 2017 ; Chanussot et al. , 2021 ) . Despite recent advances in graph neural networks ( GNNs ) , when trained with supervised data alone , these networks can easily overfit and may fail to generalize ( Rong et al. , 2019 ) . Thus , finding ways to form simplified representations of graph-structured data without labels is an important yet unsolved challenge . Current state-of-the-art methods for unsupervised representation learning on graphs ( Veličković et al. , 2019 ; Peng et al. , 2020 ; Hassani & Khasahmadi , 2020 ; Zhu et al. , 2020b ; a ; You et al. , 2020 ) are contrastive . These methods work by pulling together representations of related objects and pushing apart representations of unrelated ones . For example , current best methods Zhu et al . ( 2020b ) and Zhu et al . ( 2020a ) learn node representations by creating two augmented versions of a graph , pulling together the representation of the same node in the two graphs , while pushing apart every other node pair . As such , they inherently rely on the ability to compare each object to a large number of negatives . In the absence of a principled way of choosing these negatives , this can require computation and memory quadratic in the number of nodes . In many cases , the generation of a large number of negatives poses a prohibitive cost , especially for large graphs . In this paper , we introduce a scalable approach for self-supervised representation learning on graphs called Bootstrapped Graph Latents ( BGRL ) . Inspired by recent advances in self-supervised learning in vision ( Grill et al. 
, 2020 ) , BGRL learns node representations by encoding two augmented versions of a graph using two distinct graph encoders : an online encoder , and a target encoder . The online encoder is trained through predicting the representation of the target encoder , while the target encoder is updated as an exponential moving average of the online network . Critically , BGRL does not require contrasting negative examples , and thus can scale easily to very large graphs . Our main contributions are : • We introduce Bootstrapped Graph Latents ( BGRL ) , a graph self-supervised learning method that effectively scales to extremely large graphs and outperforms existing methods , while using only simple graph augmentations and not requiring negative examples ( Section 2 ) . • We show that contrastive methods face a trade-off between peak performance and memory constraints , due to their reliance on negative examples ( Section 4.2 ) . Due to having time and space complexity scaling only linearly in the size of the input , BGRL avoids the performancememory trade-off inherent to contrastive methods altogether . BGRL provides performance competitive with the best contrastive methods , while using 2-10x less memory on standard benchmarks ( Section 3 ) . • We show that leveraging the scalability of BGRL allows making full use of the vast amounts of unlabeled data present in large graphs via semi-supervised learning . In particular , we find that efficient use of unlabeled data for representation learning prevents representations from overfitting to the classification task , and achieves significantly higher , state-of-the-art performance . This was critical to the success of our solution at KDD Cup 2021 in which our BGRL-based solution was awarded one of the winners , on the largest publicly available graph dataset , of size 360GB consisting of 240 million nodes and 1.7 billion edges ( Section 4.3 ) . 
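The two-encoder update described above (the online branch predicts the target representation, and the target parameters follow an exponential moving average of the online parameters) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; encoder and augmentation details are omitted, and the symmetrized version of the loss is left out for brevity:

```python
import numpy as np

def bgrl_loss(Z1, H2):
    """-(2/N) * sum_i cos(Z1_i, H2_i): negative scaled cosine similarity
    between the predicted and true target representations, per node."""
    num = (Z1 * H2).sum(axis=1)
    den = np.linalg.norm(Z1, axis=1) * np.linalg.norm(H2, axis=1)
    return -2.0 * np.mean(num / den)

def ema_update(phi, theta, tau):
    """phi <- tau * phi + (1 - tau) * theta, applied parameter-wise;
    only the online parameters theta receive gradient updates."""
    return [tau * p + (1.0 - tau) * t for p, t in zip(phi, theta)]
```

Note that only the online parameters are optimized against this loss; the target parameters move slowly via `ema_update`, which is what prevents a direct collapse to trivial solutions.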
2 BOOTSTRAPPED GRAPH LATENTS 2.1 BGRL COMPONENTS BGRL builds representations through the use of two graph encoders, an online encoder $E_\theta$ and a target encoder $E_\phi$, where $\theta$ and $\phi$ denote two distinct sets of parameters. We consider a graph $G = (X, A)$, with node features $X \in \mathbb{R}^{N \times F}$ and adjacency matrix $A \in \mathbb{R}^{N \times N}$. BGRL first produces two alternate views of $G$: $G_1 = (\tilde{X}_1, \tilde{A}_1)$ and $G_2 = (\tilde{X}_2, \tilde{A}_2)$, by applying stochastic graph augmentation functions $T_1$ and $T_2$ respectively. The online encoder produces an online representation from the first augmented graph, $\tilde{H}_1 := E_\theta(\tilde{X}_1, \tilde{A}_1)$; similarly, the target encoder produces a target representation of the second augmented graph, $\tilde{H}_2 := E_\phi(\tilde{X}_2, \tilde{A}_2)$. The online representation is fed into a node-level predictor $p_\theta$ that outputs a prediction of the target representation, $\tilde{Z}_1 := p_\theta(\tilde{H}_1)$. BGRL differs from prior bootstrapping approaches such as BYOL (Grill et al., 2020) in that it does not use a projector network. Unlike vision tasks, in which a projection step is used by BYOL for dimensionality reduction, common embedding sizes are quite small for graph tasks, and so this is not a concern in our case. In fact, we observe that this step can be eliminated altogether without loss in performance (Appendix B). The augmentation functions $T_1$ and $T_2$ used are simple, standard graph perturbations previously explored (You et al., 2020; Zhu et al., 2020b). We use a combination of random node feature masking and edge masking with fixed masking probabilities $p_f$ and $p_e$ respectively. More details and background on graph augmentations are provided in Appendix D. 2.2 BGRL UPDATE STEP Updating the online encoder $E_\theta$: The online parameters $\theta$ (and not $\phi$) are updated to make the predicted target representations $\tilde{Z}_1$ closer to the true target representations $\tilde{H}_2$ for each node, by following the gradient of the cosine similarity w.r.t. $\theta$, i.e.
, $\ell(\theta, \phi) = -\frac{2}{N} \sum_{i=0}^{N-1} \frac{\tilde{Z}_{(1,i)} \tilde{H}_{(2,i)}^{\top}}{\|\tilde{Z}_{(1,i)}\| \, \|\tilde{H}_{(2,i)}\|}$, (1) $\theta \leftarrow \mathrm{optimize}(\theta, \eta, \partial_\theta \ell(\theta, \phi))$, (2) where $\eta$ is the learning rate and the final updates are computed from the gradients of the objective with respect to $\theta$ only, using an optimization method such as SGD or Adam (Kingma & Ba, 2015). In practice, we symmetrize this loss, by also predicting the target representation of the first view using the online representation of the second. Updating the target encoder $E_\phi$: The target parameters $\phi$ are updated as an exponential moving average of the online parameters $\theta$, using a decay rate $\tau$, i.e., $\phi \leftarrow \tau\phi + (1 - \tau)\theta$. (3) Figure 1 visually summarizes BGRL's architecture. Note that although the objective $\ell(\theta, \phi)$ has undesirable or trivial solutions, BGRL does not actually optimize this loss. Only the online parameters $\theta$ are updated to reduce this loss, while the target parameters $\phi$ follow a different objective. This non-collapsing behavior even without relying on negatives has been studied further (Tian et al., 2021). We provide an empirical analysis of this behavior in Appendix A, showing that in practice BGRL does not collapse to trivial solutions and $\ell(\theta, \phi)$ does not converge to 0. Scalable non-contrastive objective: Here we note that a contrastive approach would instead encourage $\tilde{Z}_{(1,i)}$ and $\tilde{H}_{(2,j)}$ to be far apart for node pairs $(i, j)$ that are dissimilar. In the absence of a principled way of choosing such dissimilar pairs, the naïve approach of simply contrasting all pairs $\{(i, j) \mid i \neq j\}$ scales quadratically in the size of the input. As BGRL does not rely on this contrastive step, BGRL scales linearly in the size of the graph, and thus is scalable by design. 3 COMPUTATIONAL COMPLEXITY ANALYSIS .
We provide a brief description of the time and space complexities of the BGRL update step, and illustrate its advantages compared to previous strong contrastive methods such as GRACE (Zhu et al., 2020b), which perform a quadratic all-pairs contrastive computation at each update step. The same analysis applies to variations of the GRACE method such as GCA (Zhu et al., 2020a). Consider a graph with $N$ nodes and $M$ edges, and simple encoders $E$ that compute embeddings in time and space $O(N + M)$. This property is satisfied by most popular GNN architectures such as convolutional (Kipf & Welling, 2017), attentional (Veličković et al., 2018), or message-passing (Gilmer et al., 2017) networks. BGRL performs four encoder computations per update step (twice for the target and online encoders, and twice for each augmentation) plus a node-level prediction step; GRACE performs two encoder computations (once for each augmentation), plus a node-level projection step. Both methods backpropagate the learning signal twice (once for each augmentation), and we assume the backward pass to be approximately as costly as a forward pass. We ignore the cost of computing the augmentations in this analysis. Thus the total time and space complexity per update step for BGRL is $6 C_{\mathrm{encoder}}(M + N) + 4 C_{\mathrm{prediction}} N + C_{\mathrm{BGRL}} N$, compared to $4 C_{\mathrm{encoder}}(M + N) + 4 C_{\mathrm{projection}} N + C_{\mathrm{GRACE}} N^2$ for GRACE, where the $C_{(\cdot)}$ are constants depending on the architecture of the different components. Table 1 shows an empirical comparison of BGRL's and GRACE's computational requirements on a set of benchmark tasks. 4 EXPERIMENTAL ANALYSIS . We present an extensive empirical study of performance and scalability, showing that BGRL is effective across a wide range of settings, from frozen linear evaluation to semi-supervised learning, and both when performing full-graph training and when training on subsampled node neighborhoods.
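The per-step cost expressions above can be turned into a toy cost model. The constants are placeholders (set to 1 here), so only the asymptotic comparison between the linear and quadratic objective terms is meaningful:

```python
def step_cost(n_nodes, n_edges, method, c_enc=1.0, c_head=1.0, c_obj=1.0):
    """Toy per-update-step cost model from the complexity analysis.
    The c_* constants stand in for the architecture-dependent C terms."""
    if method == "bgrl":
        # 6 encoder passes over (M + N) + 4 predictor passes + linear objective
        return 6 * c_enc * (n_edges + n_nodes) + 4 * c_head * n_nodes + c_obj * n_nodes
    if method == "grace":
        # 4 encoder passes + 4 projector passes + quadratic all-pairs objective
        return 4 * c_enc * (n_edges + n_nodes) + 4 * c_head * n_nodes + c_obj * n_nodes ** 2
    raise ValueError(f"unknown method: {method}")
```

For a graph with, say, $10^5$ nodes and $10^6$ edges, the $N^2$ objective term dominates GRACE's cost, while every term in BGRL's cost remains linear in $M + N$.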
We give results across a range of dataset scales and encoder architectures, including convolutional, attentional, and message-passing neural networks. We analyze the performance of BGRL on a set of 7 standard transductive and inductive benchmark tasks, as well as in the very high-data regime by evaluating on the MAG240M dataset (Hu et al., 2021). We present results on medium-sized datasets where contrastive objectives can be computed on the entire graph (Section 4.1), on larger datasets where this objective must be approximated (Section 4.2), and finally on the much larger MAG240M dataset designed to test scalability limits (Section 4.3), showing that BGRL improves performance across all scales of datasets. In Appendix C, we show that BGRL achieves state-of-the-art performance even in the low-data regime on a set of 4 small-scale datasets. Dataset sizes are summarized in Table 2 and described further in Appendix E. Evaluation protocol: In most tasks, we follow the standard linear-evaluation protocol on graphs (Veličković et al., 2019). This involves first training each graph encoder in a fully unsupervised manner and computing embeddings for each node; a simple linear model is then trained on top of these frozen embeddings through a logistic regression loss with $\ell_2$ regularization, without flowing any gradients back to the graph encoder network. In the more challenging MAG240M task, we extend BGRL to the semi-supervised setting by combining our self-supervised representation learning loss with a supervised loss. We show that BGRL's bootstrapping objective obtains state-of-the-art performance in this hybrid setting, and even improves further with the added use of unlabeled data for representation learning, properties which have not been previously demonstrated by prior works on self-supervised representation learning on graphs.
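The linear-evaluation protocol (a logistic-regression probe with $\ell_2$ regularization on frozen embeddings) can be sketched for the binary case as follows; the optimizer and hyper-parameters here are illustrative, not those used in the benchmarks:

```python
import numpy as np

def fit_linear_probe(H, y, lam=1e-2, lr=0.5, steps=500):
    """Fit a logistic-regression classifier on frozen embeddings H;
    no gradients flow back into the encoder. Binary labels y in {0, 1}."""
    w = np.zeros(H.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-H @ w))            # predicted P(y = 1)
        grad = H.T @ (p - y) / len(y) + lam * w     # log-loss + l2 gradient
        w -= lr * grad
    return w

def probe_accuracy(H, y, w):
    return float(np.mean((H @ w > 0) == (y == 1)))
```

The point of the protocol is that probe quality reflects the embeddings alone: the encoder stays frozen, and only the linear weights `w` are fit.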
Implementation details including model architectures and hyperparameters are provided in Appendix F. Algorithm implementation and experiment code for most tasks have been provided in supplementary material, while the code for our solution on MAG240M has been open-sourced as part of the KDD Cup 2021. | The paper proposes a self-supervised graph representation learning algorithm named BGRL. BGRL employs a similar architecture and training mechanism to BYOL, with the necessary adjustments (e.g., the augmentation methods, dropping the projector module) to adapt to the characteristics of graph datasets. Given that BYOL-based methods do not use negative samples during training, BGRL's objective scales only as O(n), which essentially breaks the computation limit incurred by the quadratically complex objective function adopted by SimCLR-based models (e.g., GRACE), hence making the proposed model applicable to large graphs. The major contributions of the paper are summarized as follows: (1) The paper highlights and empirically verifies BYOL's advantages in not only boosting task performance but also enhancing model scalability; the second aspect is important but ignored by previous / concurrent BYOL-based methods. (2) The paper tests the effect of introducing label supervision to the BYOL-based scheme on learning node representations for large graphs, which is not studied in previous / concurrent BYOL-based methods. (3) The paper evaluates BGRL against other base models on various benchmarks, which not only provides multiple perspectives on BGRL's advantages, but also enriches the baselines of unsupervised graph representation learning, especially on large graphs. | SP:cd488af7b4ffc290fcd06ca9577156fa67ec3286
Large-Scale Representation Learning on Graphs via Bootstrapping | 1 INTRODUCTION . Graphs provide a powerful abstraction for complex datasets that arise in a variety of applications such as social networks , transportation networks , and biological sciences ( Hamilton et al. , 2017 ; DerrowPinion et al. , 2021 ; Zitnik & Leskovec , 2017 ; Chanussot et al. , 2021 ) . Despite recent advances in graph neural networks ( GNNs ) , when trained with supervised data alone , these networks can easily overfit and may fail to generalize ( Rong et al. , 2019 ) . Thus , finding ways to form simplified representations of graph-structured data without labels is an important yet unsolved challenge . Current state-of-the-art methods for unsupervised representation learning on graphs ( Veličković et al. , 2019 ; Peng et al. , 2020 ; Hassani & Khasahmadi , 2020 ; Zhu et al. , 2020b ; a ; You et al. , 2020 ) are contrastive . These methods work by pulling together representations of related objects and pushing apart representations of unrelated ones . For example , current best methods Zhu et al . ( 2020b ) and Zhu et al . ( 2020a ) learn node representations by creating two augmented versions of a graph , pulling together the representation of the same node in the two graphs , while pushing apart every other node pair . As such , they inherently rely on the ability to compare each object to a large number of negatives . In the absence of a principled way of choosing these negatives , this can require computation and memory quadratic in the number of nodes . In many cases , the generation of a large number of negatives poses a prohibitive cost , especially for large graphs . In this paper , we introduce a scalable approach for self-supervised representation learning on graphs called Bootstrapped Graph Latents ( BGRL ) . Inspired by recent advances in self-supervised learning in vision ( Grill et al. 
, 2020 ) , BGRL learns node representations by encoding two augmented versions of a graph using two distinct graph encoders : an online encoder , and a target encoder . The online encoder is trained through predicting the representation of the target encoder , while the target encoder is updated as an exponential moving average of the online network . Critically , BGRL does not require contrasting negative examples , and thus can scale easily to very large graphs . Our main contributions are : • We introduce Bootstrapped Graph Latents ( BGRL ) , a graph self-supervised learning method that effectively scales to extremely large graphs and outperforms existing methods , while using only simple graph augmentations and not requiring negative examples ( Section 2 ) . • We show that contrastive methods face a trade-off between peak performance and memory constraints , due to their reliance on negative examples ( Section 4.2 ) . Due to having time and space complexity scaling only linearly in the size of the input , BGRL avoids the performancememory trade-off inherent to contrastive methods altogether . BGRL provides performance competitive with the best contrastive methods , while using 2-10x less memory on standard benchmarks ( Section 3 ) . • We show that leveraging the scalability of BGRL allows making full use of the vast amounts of unlabeled data present in large graphs via semi-supervised learning . In particular , we find that efficient use of unlabeled data for representation learning prevents representations from overfitting to the classification task , and achieves significantly higher , state-of-the-art performance . This was critical to the success of our solution at KDD Cup 2021 in which our BGRL-based solution was awarded one of the winners , on the largest publicly available graph dataset , of size 360GB consisting of 240 million nodes and 1.7 billion edges ( Section 4.3 ) . 
2 BOOTSTRAPPED GRAPH LATENTS 2.1 BGRL COMPONENTS BGRL builds representations through the use of two graph encoders, an online encoder $E_\theta$ and a target encoder $E_\phi$, where $\theta$ and $\phi$ denote two distinct sets of parameters. We consider a graph $G = (X, A)$, with node features $X \in \mathbb{R}^{N \times F}$ and adjacency matrix $A \in \mathbb{R}^{N \times N}$. BGRL first produces two alternate views of $G$: $G_1 = (\tilde{X}_1, \tilde{A}_1)$ and $G_2 = (\tilde{X}_2, \tilde{A}_2)$, by applying stochastic graph augmentation functions $T_1$ and $T_2$ respectively. The online encoder produces an online representation from the first augmented graph, $\tilde{H}_1 := E_\theta(\tilde{X}_1, \tilde{A}_1)$; similarly, the target encoder produces a target representation of the second augmented graph, $\tilde{H}_2 := E_\phi(\tilde{X}_2, \tilde{A}_2)$. The online representation is fed into a node-level predictor $p_\theta$ that outputs a prediction of the target representation, $\tilde{Z}_1 := p_\theta(\tilde{H}_1)$. BGRL differs from prior bootstrapping approaches such as BYOL (Grill et al., 2020) in that it does not use a projector network. Unlike vision tasks, in which a projection step is used by BYOL for dimensionality reduction, common embedding sizes are quite small for graph tasks, and so this is not a concern in our case. In fact, we observe that this step can be eliminated altogether without loss in performance (Appendix B). The augmentation functions $T_1$ and $T_2$ used are simple, standard graph perturbations previously explored (You et al., 2020; Zhu et al., 2020b). We use a combination of random node feature masking and edge masking with fixed masking probabilities $p_f$ and $p_e$ respectively. More details and background on graph augmentations are provided in Appendix D. 2.2 BGRL UPDATE STEP Updating the online encoder $E_\theta$: The online parameters $\theta$ (and not $\phi$) are updated to make the predicted target representations $\tilde{Z}_1$ closer to the true target representations $\tilde{H}_2$ for each node, by following the gradient of the cosine similarity w.r.t. $\theta$, i.e.
, $\ell(\theta, \phi) = -\frac{2}{N} \sum_{i=0}^{N-1} \frac{\tilde{Z}_{(1,i)} \tilde{H}_{(2,i)}^{\top}}{\|\tilde{Z}_{(1,i)}\| \, \|\tilde{H}_{(2,i)}\|}$, (1) $\theta \leftarrow \mathrm{optimize}(\theta, \eta, \partial_\theta \ell(\theta, \phi))$, (2) where $\eta$ is the learning rate and the final updates are computed from the gradients of the objective with respect to $\theta$ only, using an optimization method such as SGD or Adam (Kingma & Ba, 2015). In practice, we symmetrize this loss, by also predicting the target representation of the first view using the online representation of the second. Updating the target encoder $E_\phi$: The target parameters $\phi$ are updated as an exponential moving average of the online parameters $\theta$, using a decay rate $\tau$, i.e., $\phi \leftarrow \tau\phi + (1 - \tau)\theta$. (3) Figure 1 visually summarizes BGRL's architecture. Note that although the objective $\ell(\theta, \phi)$ has undesirable or trivial solutions, BGRL does not actually optimize this loss. Only the online parameters $\theta$ are updated to reduce this loss, while the target parameters $\phi$ follow a different objective. This non-collapsing behavior even without relying on negatives has been studied further (Tian et al., 2021). We provide an empirical analysis of this behavior in Appendix A, showing that in practice BGRL does not collapse to trivial solutions and $\ell(\theta, \phi)$ does not converge to 0. Scalable non-contrastive objective: Here we note that a contrastive approach would instead encourage $\tilde{Z}_{(1,i)}$ and $\tilde{H}_{(2,j)}$ to be far apart for node pairs $(i, j)$ that are dissimilar. In the absence of a principled way of choosing such dissimilar pairs, the naïve approach of simply contrasting all pairs $\{(i, j) \mid i \neq j\}$ scales quadratically in the size of the input. As BGRL does not rely on this contrastive step, BGRL scales linearly in the size of the graph, and thus is scalable by design. 3 COMPUTATIONAL COMPLEXITY ANALYSIS .
We provide a brief description of the time and space complexities of the BGRL update step, and illustrate its advantages compared to previous strong contrastive methods such as GRACE (Zhu et al., 2020b), which perform a quadratic all-pairs contrastive computation at each update step. The same analysis applies to variations of the GRACE method such as GCA (Zhu et al., 2020a). Consider a graph with $N$ nodes and $M$ edges, and simple encoders $E$ that compute embeddings in time and space $O(N + M)$. This property is satisfied by most popular GNN architectures such as convolutional (Kipf & Welling, 2017), attentional (Veličković et al., 2018), or message-passing (Gilmer et al., 2017) networks. BGRL performs four encoder computations per update step (twice for the target and online encoders, and twice for each augmentation) plus a node-level prediction step; GRACE performs two encoder computations (once for each augmentation), plus a node-level projection step. Both methods backpropagate the learning signal twice (once for each augmentation), and we assume the backward pass to be approximately as costly as a forward pass. We ignore the cost of computing the augmentations in this analysis. Thus the total time and space complexity per update step for BGRL is $6 C_{\mathrm{encoder}}(M + N) + 4 C_{\mathrm{prediction}} N + C_{\mathrm{BGRL}} N$, compared to $4 C_{\mathrm{encoder}}(M + N) + 4 C_{\mathrm{projection}} N + C_{\mathrm{GRACE}} N^2$ for GRACE, where the $C_{(\cdot)}$ are constants depending on the architecture of the different components. Table 1 shows an empirical comparison of BGRL's and GRACE's computational requirements on a set of benchmark tasks. 4 EXPERIMENTAL ANALYSIS . We present an extensive empirical study of performance and scalability, showing that BGRL is effective across a wide range of settings, from frozen linear evaluation to semi-supervised learning, and both when performing full-graph training and when training on subsampled node neighborhoods.
We give results across a range of dataset scales and encoder architectures, including convolutional, attentional, and message-passing neural networks. We analyze the performance of BGRL on a set of 7 standard transductive and inductive benchmark tasks, as well as in the very high-data regime by evaluating on the MAG240M dataset (Hu et al., 2021). We present results on medium-sized datasets where contrastive objectives can be computed on the entire graph (Section 4.1), on larger datasets where this objective must be approximated (Section 4.2), and finally on the much larger MAG240M dataset designed to test scalability limits (Section 4.3), showing that BGRL improves performance across all scales of datasets. In Appendix C, we show that BGRL achieves state-of-the-art performance even in the low-data regime on a set of 4 small-scale datasets. Dataset sizes are summarized in Table 2 and described further in Appendix E. Evaluation protocol: In most tasks, we follow the standard linear-evaluation protocol on graphs (Veličković et al., 2019). This involves first training each graph encoder in a fully unsupervised manner and computing embeddings for each node; a simple linear model is then trained on top of these frozen embeddings through a logistic regression loss with $\ell_2$ regularization, without flowing any gradients back to the graph encoder network. In the more challenging MAG240M task, we extend BGRL to the semi-supervised setting by combining our self-supervised representation learning loss with a supervised loss. We show that BGRL's bootstrapping objective obtains state-of-the-art performance in this hybrid setting, and even improves further with the added use of unlabeled data for representation learning, properties which have not been previously demonstrated by prior works on self-supervised representation learning on graphs.
Implementation details including model architectures and hyperparameters are provided in Appendix F. Algorithm implementation and experiment code for most tasks have been provided in supplementary material, while the code for our solution on MAG240M has been open-sourced as part of the KDD Cup 2021. | This paper proposes BGRL, a method for graph representation learning based on 'bootstrapping' (in the same sense that BYOL is). Different from the prior art, which mainly relies on contrastive learning, BGRL naturally scales linearly with graph size. An extensive suite of experiments shows that BGRL scales better than previous works while yielding state-of-the-art (SOTA) performance in node classification. | SP:cd488af7b4ffc290fcd06ca9577156fa67ec3286
Learning Versatile Neural Architectures by Propagating Network Codes | 1 INTRODUCTION . Designing a single neural network architecture that adapts to multiple different tasks is challenging . This is because different tasks , such as image segmentation in Cityscapes ( Cordts et al. , 2016 ) and video recognition in HMDB51 ( Kuehne et al. , 2011 ) , have different data distributions and require different granularity of feature representations . For example , although the manually designed networks ResNet ( He et al. , 2016 ) and HRNet ( Wang et al. , 2020a ) work well on certain tasks such as image classification on ImageNet , they deteriorate in the other tasks . Intuitively , manually designing a single neural architecture that is applicable in all these tasks is difficult . Recently , neural architecture search ( NAS ) has achieved great success in searching network architectures automatically . However , existing NAS methods ( Wu et al. , 2019 ; Liang et al. , 2019 ; Liu et al. , 2019b ; Xie et al. , 2018 ; Cai et al. , 2020 ; Yu et al. , 2020 ; Shaw et al. , 2019b ; a ) typically search on a single task . Though works ( Ding et al. , 2021 ; Duan et al. , 2021 ; Zamir et al. , 2018 ) designed NAS algorithms or datasets that can be used for multiple tasks , they still search different architectures for different tasks , indicating that the costly searching procedure needs to be repeated many times . The problem of learning versatile neural architectures capable of adapting to multiple different tasks remains unsolved , i.e. , searching a multitask architecture or transferring architectures between different tasks . In principle , it faces the challenge of task and dataset inconsistency . Different tasks may require different granularity of feature representations , e.g. , the segmentation task requires more multi-scale features and low-level representations than classification . 
The key to solving task inconsistency is to design a unified architecture search space for multiple tasks. In contrast to most previous works that simply extend a search space designed for image classification to other tasks and build NAS benchmarks (Dong & Yang, 2020; Ying et al., 2019; Siems et al., 2020) on small datasets (e.g., CIFAR10, ImageNet-16) with unrealistic settings, we design a multi-resolution network space and build a multi-task practical NAS benchmark (NAS-Bench-MR; project page: https://network-propagation.github.io) on four challenging datasets including ImageNet-224 (Deng et al., 2009), Cityscapes (Cordts et al., 2016), KITTI (Geiger et al., 2012), and HMDB51 (Kuehne et al., 2011). Inspired by HRNet (Wang et al., 2020a), our network space is a multi-branch multi-resolution space that naturally contains various granularities of representations for different tasks, e.g., high-resolution features (Wang et al., 2020a) for segmentation and low-resolution ones for classification. NAS-Bench-MR closes the gap between existing benchmarks and NAS in multi-task and real-world scenarios. It serves as an important contribution of this work to facilitate future cross-task NAS research. To solve the challenge of dataset inconsistency, in this work, we propose a novel predictor-based NAS algorithm, termed Network Coding Propagation (NCP), for finding versatile and task-transferable architectures. NCP transforms data-oriented optimization into architecture-oriented optimization by learning to traverse the search space. It works as follows. We formulate all network hyperparameters into a coding space by representing each architecture hyper-parameter as a code, e.g., '3,2,64' denotes that there are 3 blocks and each block contains 2 residual blocks with a channel width of 64. We then learn neural predictors to build the mapping between the network coding and its evaluation metrics ( e.g.
, Acc, mIoU, FLOPs) for each task. By setting a high desired accuracy for each task and FLOPs as the target, we back-propagate gradients of the learned predictor to directly update the values of network codes to achieve the target. In this way, good architectures can be found within several forward-backward iterations in seconds, as shown in Fig. 1. NCP has several appealing benefits: (1) NCP addresses the data mismatch problem in multi-task learning by learning from network codings rather than original data. (2) NCP works in large spaces in seconds by back-propagating through the neural predictor and traversing the search space along the gradient direction. (3) NCP can use multiple neural predictors for architecture transfer across tasks; as shown in Fig. 1(a), it adapts an architecture to a new task with only a few iterations. (4) In NCP, the multi-task learning objective is transformed into gradient accumulation across multiple predictors, as shown in Fig. 1(b), making NCP naturally applicable to various, even conflicting, objectives, such as multi-task structure optimization, architecture transfer across tasks, and accuracy-efficiency trade-offs for specific computational budgets. Our main contributions are three-fold. (1) We propose Network Coding Propagation (NCP), which back-propagates the gradients of neural predictors to directly update architecture codes along desired gradient directions for various objectives. (2) We build NAS-Bench-MR on four challenging datasets under practical training settings for learning task-transferable architectures. We believe it will facilitate future NAS research, especially multi-task NAS and architecture transfer across tasks. (3) Extensive studies on inter-, intra-, and cross-task generalizability show the effectiveness of NCP in finding versatile and transferable architectures among different, even conflicting, objectives and tasks. 2 RELATED WORK .
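The code-propagation loop from Section 1 can be sketched as follows. The predictor here is a toy differentiable stand-in (a quadratic with a known optimum) rather than a learned neural predictor, so the example only illustrates how gradients with respect to the code traverse the space:

```python
import numpy as np

def ncp_search(grad_fn, code0, lr=0.2, steps=100):
    """Network Coding Propagation sketch: freeze the predictor and take
    gradient-ascent steps on a continuous relaxation of the network code
    to increase the predicted metric. grad_fn stands in for the gradient
    of a trained predictor w.r.t. its input code."""
    code = np.asarray(code0, dtype=float)
    for _ in range(steps):
        code = code + lr * grad_fn(code)   # propagate gradients into the code
    return code

# toy predictor: predicted accuracy peaks at the code [3, 2, 64]
best = np.array([3.0, 2.0, 64.0])
predict = lambda c: -np.sum((c - best) ** 2)
grad = lambda c: -2.0 * (c - best)
code = ncp_search(grad, [2.0, 4.0, 32.0])
```

In the multi-task case, gradients from several task predictors (and a FLOPs predictor) would be accumulated, mirroring Fig. 1(b); a single toy predictor suffices here to show the mechanics.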
Neural Architecture Search Spaces and Benchmarks. Most existing NAS spaces (Jin et al., 2019; Xu et al., 2019; Wu et al., 2019; Cai et al., 2019; Xie et al., 2018; Stamoulis et al., 2019; Mei et al., 2020; Guo et al., 2020; Dai et al., 2020) are designed for image classification, using either a single-branch structure with a group of candidate operators in each layer or a repeated cell structure, e.g., DARTS-based (Liu et al., 2019b) and MobileNet-based (Sandler et al., 2018) search spaces. Based on these spaces, several NAS benchmarks (Ying et al., 2019; Dong & Yang, 2020; Siems et al., 2020; Duan et al., 2021) have been proposed to pre-evaluate architectures. However, the above search spaces and benchmarks are built either on proxy settings or on small datasets, such as CIFAR-10 and ImageNet-16 (16 × 16), which makes them less suitable for tasks that rely on multi-scale information. For those tasks, search spaces have been explored in segmentation (Shaw et al., 2019a; Liu et al., 2019a; Nekrasov et al., 2019; Lin et al., 2020; Chen et al., 2018) and object detection (Chen et al., 2019; Ghiasi et al., 2019; Du et al., 2020; Wang et al., 2020b) by introducing feature aggregation heads (e.g., ASPP (Chen et al., 2017)) for multi-scale information. Nevertheless, the whole network is still organized in a chain-like, single-branch manner, resulting in sub-optimal performance. Another work relevant to ours is NAS-Bench-NLP (Klyuchnikov et al., 2020), which constructs a benchmark with 14k trained recurrent neural network architectures on two language modeling datasets. Compared to previous spaces, our multi-resolution search space, with searchable numbers of resolutions, blocks, and channels, is naturally suited to multiple vision tasks as it contains various granularities of feature representations.
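The coding space described earlier represents each architecture hyper-parameter as a number (e.g., '3,2,64' for blocks, residual units per block, and channel width). A minimal encode/decode sketch of such a scheme; the field names and ranges here are illustrative assumptions, not the paper's exact specification:

```python
# Toy sketch of a flat architecture "coding space": each hyper-parameter
# becomes one number in a code such as [3, 2, 64]. Field names and bounds
# are assumptions for illustration.
FIELDS = [
    ("num_blocks", 1, 4),       # blocks in a stage
    ("units_per_block", 1, 4),  # residual units inside each block
    ("channels", 16, 256),      # channel width of the stage
]

def encode(arch: dict) -> list:
    """Flatten a dict of hyper-parameters into a numeric code."""
    return [arch[name] for name, _, _ in FIELDS]

def decode(code: list) -> dict:
    """Clamp a (possibly gradient-updated, fractional) code back to a
    valid architecture by rounding and clipping to each field's range."""
    arch = {}
    for (name, lo, hi), value in zip(FIELDS, code):
        arch[name] = int(min(hi, max(lo, round(value))))
    return arch

code = encode({"num_blocks": 3, "units_per_block": 2, "channels": 64})
print(code)                      # [3, 2, 64]
print(decode([3.4, 1.8, 70.2]))  # rounds/clips to a valid architecture
```

The decode step matters because gradient updates to a code produce fractional, possibly out-of-range values that must be snapped back onto the discrete search space.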
Based on our search space, we build NAS-Bench-MR for various vision tasks, including classification, segmentation, 3D detection, and video recognition. Detailed comparisons of NAS benchmarks can be found in Tab. 1. Neural Architecture Search Methods. Generally, NAS trains numerous candidate architectures from a search space and evaluates their performance to find the optimal architecture, which is costly. To reduce training costs, weight-sharing NAS methods (Liu et al., 2019b; Jin et al., 2019; Xu et al., 2019; Cai et al., 2019; Guo et al., 2020; Li & Talwalkar, 2020) jointly train a large number of candidate networks within a super-network. Different search strategies are employed within this framework, such as reinforcement learning (Pham et al., 2018), importance factor learning (Liu et al., 2019b; Cai et al., 2019; Stamoulis et al., 2019; Xu et al., 2019), path sampling (You et al., 2020; Guo et al., 2020; Xie et al., 2018), and channel pruning (Mei et al., 2020; Yu & Huang, 2019). However, recent analysis (Sciuto et al., 2020; Wang et al., 2021) shows that the magnitude of the importance parameters in the weight-sharing NAS framework does not reflect the true ranking of the final architectures. Without weight sharing, hyper-parameter optimization methods (Tan & Le, 2019; Radosavovic et al., 2020; Baker et al., 2017; Wen et al., 2020; Lu et al., 2019; Luo et al., 2020; Chau et al., 2020; Yan et al., 2020) have shown their effectiveness by learning the relationship between network hyper-parameters and performance. For example, RegNet (Radosavovic et al., 2020) explains the widths and depths of good networks by a quantized linear function. Predictor-based methods (Wen et al., 2020; Luo et al., 2020; Chau et al., 2020; Yan et al., 2020; Luo et al., 2018) learn predictors, such as Gaussian processes (Dai et al.
, 2019) and graph convolutional networks (Wen et al., 2020), to predict the performance of all candidate models in the search space. A subset of models with high predicted accuracies is then trained for the final selection. Our Network Coding Propagation (NCP) belongs to the predictor-based family but differs from existing methods in the following ways: (1) NCP searches a large search space in seconds, without evaluating all candidate models, by back-propagating through the neural predictor and traversing the search space along the gradient direction. (2) Benefiting from gradient back-propagation in the network coding space, NCP is naturally applicable to various objectives across different tasks, such as multi-task structure optimization, architecture transfer, and accuracy-efficiency trade-offs. (3) Unlike Luo et al. (2018) and Baker et al. (2017), which jointly train an encoder, a performance predictor, and a decoder to minimize a combination of performance prediction loss and structure reconstruction loss, we learn and invert the neural predictor directly in our coding space, making the gradient updates and network edits explicit and transparent. 3 METHODOLOGY. NCP aims to customize network hyper-parameters for different optimization objectives, such as single- and multi-task learning and accuracy-efficiency trade-offs, in an architecture coding space. An overview of NCP is shown in Fig. 2. In this section, we first discuss learning efficient models on a single task with two strategies and then show that the approach easily extends to multi-task scenarios. Lastly, we describe our multi-resolution coding space and NAS-Bench-MR. | This paper tackles neural architecture search (NAS), addressing the problem of finding architectures suitable for a multitude of vision-related tasks, ranging from object detection to semantic segmentation.
To the best of my knowledge, this is the first attempt in NAS for computer vision models that explicitly designs a search space allowing for architectures that are broadly applicable to a variety of tasks that rely on different granularities of feature representation. This paper makes two important steps towards the automated search of such general, multi-purpose architectures: (1) to study this problem, the paper formalizes a multi-task NAS benchmark, covering recognition, detection, semantic segmentation, and activity recognition (video) datasets. This allows for assessing the generality of the sought architecture across vision tasks. (2) to tackle the architecture search in such a setting, this paper makes the following contributions. First, it augments the search space with multiple stages that extract feature maps at several resolutions, followed by a feature fusion step (the number of blocks and feature map resolutions are governed by the network hyperparameters). This seems to be one of the key features that help to find architectures suitable for tasks that differ as much as image classification and semantic segmentation. Next, for assessing performance in this large search space, the paper builds on predictor-based methods, where the idea is to train a network (the learned predictor) that predicts the performance of the sought network based on the (encoded) hyperparameters governing the network architecture. Similarly, the proposed network coding propagation (NCP) directly encodes the hyperparameters in the network coding space; those can thus be directly updated via back-propagation through the learned predictor. A predictor is learned for each task independently, and this way, gradients can be accumulated separately across different tasks before jointly updating the "architecture codes". | SP:36f276695fe54a7290e68a69eac687c0557bf454 |
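The core NCP mechanic summarized above (train a predictor from architecture codes to accuracy, then freeze it and back-propagate to the code itself) can be sketched in a self-contained toy. The linear predictor, synthetic benchmark, and hyper-parameters below are all assumptions for illustration; the paper learns neural predictors on its real benchmark:

```python
# Toy NCP loop: (1) fit a differentiable predictor mapping architecture
# codes to accuracy, then (2) freeze it and ascend its gradient with
# respect to the CODE rather than the weights.

def predict(w, b, code):
    """Linear accuracy predictor: acc_hat = w . code + b."""
    return sum(wi * ci for wi, ci in zip(w, code)) + b

def fit_predictor(samples, lr=1e-3, epochs=3000):
    """Fit (w, b) by plain stochastic gradient descent on squared error."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for code, acc in samples:
            err = predict(w, b, code) - acc
            w = [wi - lr * err * ci for wi, ci in zip(w, code)]
            b -= lr * err
    return w, b

def propagate_code(w, b, code, steps=20, lr=0.5):
    """Freeze the predictor and ascend its gradient w.r.t. the code.
    For a linear predictor, d(acc_hat)/d(code) is simply w."""
    for _ in range(steps):
        code = [ci + lr * wi for ci, wi in zip(code, w)]
    return code

# Synthetic "benchmark": accuracy grows with depth (d) and unit count (u).
samples = [([d, u], 0.5 + 0.05 * d + 0.01 * u)
           for d in range(1, 5) for u in range(1, 5)]
w, b = fit_predictor(samples)
better = propagate_code(w, b, [1.0, 1.0])
print(better[0] > 1.0)  # True: the code moved toward deeper architectures
```

In the real setting the updated, fractional code would then be rounded back onto the discrete search space before training the selected architecture.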
Learning Versatile Neural Architectures by Propagating Network Codes | 1 INTRODUCTION. Designing a single neural network architecture that adapts to multiple different tasks is challenging. This is because different tasks, such as image segmentation on Cityscapes (Cordts et al., 2016) and video recognition on HMDB51 (Kuehne et al., 2011), have different data distributions and require different granularities of feature representations. For example, although the manually designed networks ResNet (He et al., 2016) and HRNet (Wang et al., 2020a) work well on certain tasks, such as image classification on ImageNet, they deteriorate on other tasks. Intuitively, manually designing a single neural architecture applicable to all these tasks is difficult. Recently, neural architecture search (NAS) has achieved great success in searching network architectures automatically. However, existing NAS methods (Wu et al., 2019; Liang et al., 2019; Liu et al., 2019b; Xie et al., 2018; Cai et al., 2020; Yu et al., 2020; Shaw et al., 2019b;a) typically search on a single task. Although some works (Ding et al., 2021; Duan et al., 2021; Zamir et al., 2018) have designed NAS algorithms or datasets that can be used for multiple tasks, they still search for different architectures on different tasks, meaning that the costly search procedure must be repeated many times. The problem of learning versatile neural architectures capable of adapting to multiple different tasks remains unsolved, i.e., searching for a multi-task architecture or transferring architectures between tasks. In principle, it faces the challenges of task and dataset inconsistency. Different tasks may require different granularities of feature representations; e.g., the segmentation task requires more multi-scale features and low-level representations than classification.
| This paper organized a multi-task NAS benchmark including 4 widely used datasets and more than 20,000 models. It also proposed a search algorithm named Network Coding Propagation (NCP) to effectively find the optimal model for specific tasks.
Experimental results indicate that the proposed method can be applied to inter-task, cross-task, and intra-task problems, and the authors also showed its generalization capability on other benchmarks. | SP:36f276695fe54a7290e68a69eac687c0557bf454 |
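The multi-task behaviour described above (accumulating gradients from several per-task predictors before one joint update of the shared architecture code) might be sketched as follows; the per-task gradient vectors here are invented purely for illustration:

```python
# One predictor per task; gradients w.r.t. the shared architecture code
# are summed across predictors, then the code takes a single joint step.

def joint_update(code, task_gradients, lr=0.1):
    """One NCP-style step: accumulate per-task gradients, move the code."""
    accumulated = [sum(g[i] for g in task_gradients) for i in range(len(code))]
    return [c + lr * a for c, a in zip(code, accumulated)]

# For a linear predictor acc_hat = w . code, d(acc_hat)/d(code) = w.
# Pretend task A (classification) favours the first code dimension while
# task B (segmentation) favours the second; partially conflicting goals.
grad_task_a = [0.2, -0.1]
grad_task_b = [-0.05, 0.3]

code = [1.0, 1.0]
for _ in range(10):
    code = joint_update(code, [grad_task_a, grad_task_b])
print([round(c, 2) for c in code])  # [1.15, 1.2]: a compromise serving both tasks
```

Because the conflicting per-task gradients are summed before the step, dimensions that one task pushes down can still grow if another task pushes them up more strongly, which is the accuracy trade-off behaviour the paper describes.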
Joint Shapley values: a measure of joint feature importance | The Shapley value is one of the most widely used measures of feature importance, partly because it measures a feature's average effect on a model's prediction. We introduce joint Shapley values, which directly extend Shapley's axioms and intuitions: joint Shapley values measure a set of features' average contribution to a model's prediction. We prove the uniqueness of joint Shapley values for any order of explanation. Results for games show that joint Shapley values present different insights from existing interaction indices, which assess the effect of a feature within a set of features. Joint Shapley values provide intuitive results in ML attribution problems. With binary features, we present a presence-adjusted global value that is more consistent with local intuitions than the usual approach. 1 INTRODUCTION. Game theory's Shapley value partitions the value arising from joint efforts among individual agents (Shapley, 1953). Specifically, denote by N = {1, ..., n} a set of agents, and by G^N the set of games on N, where a game is a set function v from 2^N to R with v(∅) = 0. Then v(S) is the worth created by coalition S ⊆ N. When there is no risk of confusion, we omit braces for singletons (e.g., i rather than {i}) and denote a set's cardinality by the corresponding lower-case letter (e.g., s = |S|). For any agent i, Shapley's value is then
$$\psi_i(v) \equiv \sum_{S \subseteq N \setminus \{i\}} \frac{s!\,(n-s-1)!}{n!}\,\big[v(S \cup i) - v(S)\big]. \qquad (1)$$
This is the average worth that i adds to possible coalitions S, weighted as follows: if the set of agents S has already 'arrived', draw the next agent i to arrive uniformly over the remaining N \ S agents. Shapley's value is widely used in explainable AI's attribution problem, partitioning model predictions among individual features (q.v.
Štrumbelj & Kononenko (2014); Lundberg & Lee (2017)) after the model has been trained. Evaluating the prediction function at a specific feature value corresponds to an agent's presence; evaluating it at a reference (or baseline) feature value corresponds to the agent's absence. A feature's Shapley value is its average marginal contribution to the model's predictions. When features are correlated, individual measures of importance may mislead (Bhatt et al., 2020; Patel et al., 2021), just as measures of individual significance such as the t-test do. Thus, Shapley's value has been extended to sets of features (Grabisch & Roubens, 1999a; Marichal et al., 2007; Alshebli et al., 2019; Dhamdhere et al., 2020). As these extensions introduce axioms not present in Shapley (1953), they do not preserve the Shapley value's intuition. We extend Shapley's axioms to sets of features, randomizing over sets rather than individual features. The resulting joint Shapley value thus directly extends Shapley's value to a measure of the importance of sets of features: the average marginal contribution of a set of features to a model's predictions. Our approach's novelty is seen in our extension of the null axiom: in Shapley (1953), a null agent contributes nothing to any set of agents to which it may belong; here, a null set contributes nothing to any set to which it may belong. By contrast, interaction indices (Grabisch & Roubens, 1999a; Alshebli et al., 2019; Dhamdhere et al., 2020) recursively decompose sets into individual elements, retaining the original Shapley null axiom. Thus, these indices measure sets' contributions relative to their constituent elements, and so are complementary to the joint Shapley value. The generalised Shapley value (Marichal et al.
, 2007) is closer to our work, but differs in a number of respects: our probabilities are independent of the size of the set of features under consideration [1]; our efficiency axiom is fully joint, while theirs is based on singletons and pairs; our symmetry axiom provides uniqueness without reliance on recursion or partnership axioms. To illustrate, consider a null coalition T whose individual members contribute positively to the coalitions that they join. Interaction indices assign a negative value to T, as the individual members act discordantly together. However, from a joint feature importance viewpoint, T should be assigned value 0. The joint Shapley value matches this intuition. In the movie review application presented below, joint Shapley values reveal the contributions of collections of words, including grammatical features such as negation ({disappointed} versus {won't, disappointed}) and adjectives ({effort} versus {terrific, effort}). Like the Shapley-Taylor interaction index (Dhamdhere et al., 2020), our efficiency axiom depends on a positive integer k, the order of explanation, which limits the joint Shapley values to those for subsets of cardinality at most k. As there are 2^n − 1 non-empty subsets of N, the full set of joint Shapley values rapidly becomes unmanageable otherwise. In practice, k should be set to trade off insight (favouring higher k) against computational cost (favouring lower k). For each k, the extended axioms yield a unique joint Shapley value. Unlike in Shapley (1953), the joint anonymity and symmetry axioms are not interchangeable: each imposes distinct restrictions. Section 2 presents and extends the original Shapley axioms. Section 3 introduces joint Shapley values, deriving them as the unique solution to the extended axioms.
Section 4 illustrates joint Shapley values in the game-theoretic environment and applies them to the Boston housing (Harrison & Rubinfeld, 1978) and Rotten Tomatoes movie review (Pang & Lee, 2005) datasets, comparing them to interaction indices and presenting a sampling technique to facilitate calculation. Section 5 concludes. The Appendix collects proofs and other supplemental material. 2 EXTENDING SHAPLEY'S AXIOMS. For a game v ∈ G^N and a permutation σ on N, denote the permuted game by σv ∈ G^N, such that σv(σ(S)) = v(S) for all S ⊆ N, where σ(S) = {σ(i) : i ∈ S}. An index φ(v) of the game v ∈ G^N is any real-valued function on 2^N. The original axioms, uniquely satisfied by the (standard) Shapley value ψ, are: LI linearity: ψ is a linear function on G^N, i.e., ψ(v + w) = ψ(v) + ψ(w) and ψ(av) = aψ(v) for any v, w ∈ G^N and a ∈ R. NU null: an agent that adds no worth to any coalition has no value, i.e., if v(S ∪ {i}) = v(S) for all S ⊆ N \ {i}, then ψ_i(v) = 0. This axiom is sometimes called dummy. EF efficiency: the sum of the values of all agents equals the worth of the entire set, i.e., for all v ∈ G^N, $\sum_{i=1}^{n} \psi_i(v) = v(N)$. AN anonymity: for any σ on N and any v ∈ G^N, ψ_i(v) = ψ_{σ(i)}(σv) for all i ∈ N. SY symmetry: if two agents add equal worth to all coalitions that they can both join, then they receive equal value: if v(S ∪ {i}) = v(S ∪ {j}) for all S ⊆ N \ {i, j}, then ψ_i(v) = ψ_j(v). This is strictly weaker than anonymity. Now extend each of these axioms in a natural way to conditions on sets rather than singletons. Below, φ_S(v) denotes an index for coalition S on game v. [1] Measures of joint significance (e.g., F-tests) and model selection criteria (e.g., BIC or AIC) penalize larger feature sets to avoid overfitting. As joint Shapley values are calculated after model training, this problem does not arise.
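Equation (1) and the efficiency axiom above can be checked numerically by direct enumeration. A small sketch; the "glove game" used to exercise it is a textbook example, not from this paper:

```python
# Direct enumeration of equation (1): for each agent i, sum the weighted
# marginal contributions over all coalitions S not containing i.
from itertools import combinations
from math import factorial

def shapley(n, v):
    """v maps a frozenset coalition to its worth; returns the n values."""
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for s in range(len(others) + 1):
            weight = factorial(s) * factorial(n - s - 1) / factorial(n)
            for S in combinations(others, s):
                S = frozenset(S)
                total += weight * (v(S | {i}) - v(S))
        values.append(total)
    return values

# Glove game: players 0 and 1 hold left gloves, player 2 a right glove;
# a coalition is worth 1 iff it can form a matched pair.
def glove(S):
    return 1.0 if (0 in S or 1 in S) and 2 in S else 0.0

vals = shapley(3, glove)
print([round(x, 3) for x in vals])  # [0.167, 0.167, 0.667]
print(round(sum(vals), 3))          # 1.0; efficiency: values sum to v(N)
```

The scarce right-glove holder receives 2/3 of the worth, and the values sum to v(N) = 1, illustrating the EF axiom.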
JLI joint linearity: φ is a linear function on G^N, i.e., φ(v + w) = φ(v) + φ(w) and φ(av) = aφ(v) for any v, w ∈ G^N and a ∈ R. (This axiom is unmodified.) JNU joint null: a coalition that adds no worth to any coalition has no value, i.e., if v(S ∪ T) = v(S) for all S ⊆ N \ T, then φ_T(v) = 0. JEF joint efficiency: the sum of the values of all coalitions of cardinality at most k equals the worth of the entire set, i.e., for all v ∈ G^N, $\sum_{\emptyset \neq T \subseteq N : |T| \le k} \phi_T(v) = v(N)$. JAN joint anonymity: for any σ on N and any v ∈ G^N, φ_T(v) = φ_{σ(T)}(σv) for all T ⊆ N. JSY joint symmetry: if two coalitions perform equally when joining coalitions that they can both join, and add no worth to the other coalitions, then they receive equal value; i.e., if 1. v(S ∪ T) = v(S ∪ T′) for all S ⊆ N \ (T ∪ T′), 2. v(S ∪ T) = v(S) for all S ⊆ N \ T such that S ∩ T′ ≠ ∅, 3. v(S ∪ T′) = v(S) for all S ⊆ N \ T′ such that S ∩ T ≠ ∅, then φ_T(v) = φ_{T′}(v). Axiom JSY only equates the joint Shapley values of coalitions T and T′ if they contribute identically to coalitions that they may both join and contribute nothing to the other coalitions. Axioms JLI, JEF and JAN are also all used in Dhamdhere et al. (2020). Our joint null and joint symmetry notions appear to be new: they reflect our interest in a set of features' contribution to a model's predictions, so the set's cardinality should not play a role in determining its value. 3 JOINT SHAPLEY VALUES. Our main result is that there is a unique solution to axioms JLI, JNU, JEF, JAN and JSY: the joint Shapley value. The uniqueness is up to the kth order of explanation; we say nothing about |T| > k. Theorem 1. For each order of explanation k ∈ {1, ..., n}, there is a unique (up to the kth order of explanation) index φ^J which satisfies axioms JLI, JNU, JEF, JAN and JSY.
It has the form
$$\phi^J_T(v) = \sum_{S \subseteq N \setminus T} q_{|S|}\,\big[v(S \cup T) - v(S)\big]$$
for each ∅ ≠ T ⊆ N with |T| ≤ k, where (q_0, ..., q_{n−1}) uniquely solves the recursive system
$$q_0 = \frac{1}{\sum_{i=1}^{k} \binom{n}{i}}, \qquad q_r = \frac{\sum_{s=(r-k)\vee 0}^{r-1} \binom{r}{s}\, q_s}{\sum_{s=1}^{k \wedge (n-r)} \binom{n-r}{s}} \qquad (2)$$
for all r ∈ {1, ..., n − 1}. When k = 1, the joint Shapley values coincide with Shapley's values. [2] When k = n, we have
$$q_r = \sum_{j=0}^{r} \binom{r}{j} \frac{(-2)^{r-j}}{2^{n-j} - 1}, \qquad \forall\, r \in \{0, ..., n-1\}.$$
For each k, the constants q_r are non-negative and satisfy $\sum_{s=n-k}^{n-1} \binom{n}{s} q_s = 1$. Further, as the joint Shapley value is similar in form to equation (1)'s standard Shapley value, the value of coalition T depends on its marginal contribution to other coalitions; unlike interaction indices, it does not depend on the worth of its constituent agents. [2] The Shapley-Taylor interaction index, φ^{ST}, also has this property (Dhamdhere et al., 2020). As with the Shapley value, the joint Shapley value can be seen as the worth brought by 'arriving' agents but, rather than arriving one at a time, they can now also arrive in coalitions. We develop this interpretation in Appendix A. We show here the implication of each of the joint axioms introduced above and prove Theorem 1. It is already known that joint linearity restricts measures to be linear combinations of worths: Lemma 1 (Grabisch & Roubens (1999a), Proposition 1). If φ satisfies JLI, then for every ∅ ≠ T ⊆ N there exists a family of real constants {a^T_S}_{S⊆N} such that for every v ∈ G^N, $\phi_T(v) = \sum_{S \subseteq N} a^T_S\, v(S)$. Axiom JNU then constrains the values of the constants {a^T_S}: Lemma 2. Suppose φ satisfies JLI and JNU, and let {a^T_S} be the constants from Lemma 1. Then for every ∅ ≠ T ⊆ N and ∅ ≠ S ⊆ N \ T, a^T_S = −a^T_{S∪T}. Further, for every ∅ ≠ T ⊆ N, S ⊆ N \ T, and ∅ ≠ H ⊊ T, a^T_{S∪H} = 0. Combining these two lemmas yields: Proposition 1. Suppose φ satisfies JLI and JNU.
Then there exist constants {p^T(S)}, depending on T and S, such that for every ∅ ≠ T ⊆ N and v ∈ G^N,
$$\phi_T(v) = \sum_{S \subseteq N \setminus T} p^T(S)\,\big[v(S \cup T) - v(S)\big]. \qquad (3)$$
Now establish a condition on the {p^T(S)} values which must be satisfied under JEF: Proposition 2. For each order of explanation k, φ satisfies axioms JLI, JNU and JEF if and only if for every ∅ ≠ T ⊆ N with |T| ≤ k and v ∈ G^N, $\phi_T(v) = \sum_{S \subseteq N \setminus T} p^T(S)[v(S \cup T) - v(S)]$ with {p^T(S)} satisfying
$$\delta_N(S) = \sum_{\emptyset \neq T \subseteq S \,:\, |T| \le k} p^T(S \setminus T) \;-\; \sum_{\emptyset \neq T \subseteq N \setminus S \,:\, |T| \le k} p^T(S), \qquad (4)$$
for all ∅ ≠ S ⊆ N, where δ_N(S) equals 1 if S = N and 0 otherwise. Recall that the symmetry axiom SY is strictly weaker than the anonymity axiom AN (Malawski, 2020). This is not the case for their joint counterparts: each imposes a different constraint on the constants {p^T(S)}, as we shall see here. First, consider the effect of imposing JAN. Proposition 3. For each order of explanation k, φ satisfies axioms JLI, JNU, JEF and JAN if and only if for every ∅ ≠ T ⊆ N and v ∈ G^N, $\phi_T(v) = \sum_{S \subseteq N \setminus T} p^T(S)[v(S \cup T) - v(S)]$ with {p^T(S)} satisfying (4) and
$$p^T(S) = p^{T'}(S') \quad \forall\, \emptyset \neq T, T' \subseteq N,\; S \subseteq N \setminus T,\; S' \subseteq N \setminus T' \text{ s.t. } s = s',\; t = t'. \qquad (5)$$
The analogous result for JSY is: Proposition 4. For each order of explanation k, φ satisfies axioms JLI, JNU, JEF and JSY if and only if for every ∅ ≠ T ⊆ N and v ∈ G^N, $\phi_T(v) = \sum_{S \subseteq N \setminus T} p^T(S)[v(S \cup T) - v(S)]$ with {p^T(S)} satisfying (4) and
$$p^T(S) = p^{T'}(S) \quad \forall\, \emptyset \neq T, T' \subseteq N,\; S \subseteq N \setminus (T \cup T'). \qquad (6)$$
Combining Propositions 2–4 completes the proof of Theorem 1. The computational complexity of deriving (q_0, ..., q_{n−1}) is O(nk^2). Once these have been determined, the joint Shapley values can be calculated; the complexity of doing so is O(3^n ∧ (2^n n^k)).
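The recursive system (2) for the weights (q_0, ..., q_{n−1}) translates directly into code, and two of the stated properties can then be checked numerically: for k = 1 the weights reduce to the standard Shapley weights s!(n−s−1)!/n!, and for any k the weights satisfy the stated normalization. The sketch below is our own transcription of the recursion, so treat it as an assumption rather than the authors' implementation ("∨" is max, "∧" is min in the paper's notation):

```python
# Weights for the joint Shapley value, computed from the recursive
# system (2): q_0 = 1 / sum_{i=1..k} C(n, i), and
# q_r = [sum_{s=(r-k) v 0}^{r-1} C(r, s) q_s] / [sum_{s=1}^{k ^ (n-r)} C(n-r, s)].
from math import comb, factorial

def joint_weights(n, k):
    q = [0.0] * n
    q[0] = 1.0 / sum(comb(n, i) for i in range(1, k + 1))
    for r in range(1, n):
        num = sum(comb(r, s) * q[s] for s in range(max(r - k, 0), r))
        den = sum(comb(n - r, s) for s in range(1, min(k, n - r) + 1))
        q[r] = num / den
    return q

n = 5
# Check 1: k = 1 recovers the standard Shapley weights s!(n-s-1)!/n!.
q1 = joint_weights(n, k=1)
shapley_w = [factorial(s) * factorial(n - s - 1) / factorial(n) for s in range(n)]
print(all(abs(a - b) < 1e-12 for a, b in zip(q1, shapley_w)))  # True

# Check 2: the normalization sum_{s=n-k}^{n-1} C(n, s) q_s = 1 holds.
q2 = joint_weights(n, k=2)
print(abs(sum(comb(n, s) * q2[s] for s in range(n - 2, n)) - 1.0) < 1e-12)  # True
```

Both checks passing is consistent with Theorem 1's claims; the O(nk^2) cost of the recursion is visible in the two nested loops.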
| This work introduces "joint Shapley values", which directly extend Shapley’s axioms and intuitions: joint Shapley values measure a set of features’ average effect on a model’s prediction. This work naturally extends Shapley's axioms from a single feature to sets of features. In a nutshell: joint Shapley values measure the average marginal contribution of a set of features to a model’s predictions. This work presents rigorous mathematical results for the joint Shapley values approach. This approach is then evaluated on several datasets, including: (i) simulated data, (ii) Boston Housing data, (iii) Movie Review data. | SP:86972f8c4420d86b88dfcf2aaf13b88d53b98de5
Joint Shapley values: a measure of joint feature importance | The Shapley value is one of the most widely used measures of feature importance, partly as it measures a feature's average effect on a model's prediction. We introduce joint Shapley values, which directly extend Shapley's axioms and intuitions: joint Shapley values measure a set of features' average contribution to a model's prediction. We prove the uniqueness of joint Shapley values, for any order of explanation. Results for games show that joint Shapley values present different insights from existing interaction indices, which assess the effect of a feature within a set of features. The joint Shapley values provide intuitive results in ML attribution problems. With binary features, we present a presence-adjusted global value that is more consistent with local intuitions than the usual approach. 1 INTRODUCTION. Game theory's Shapley value partitions the value arising from joint efforts among individual agents (Shapley, 1953). Specifically, denote by $N = \{1, \dots, n\}$ a set of agents, and by $G^N$ the set of games on $N$, where a game is a set function $v$ from $2^N$ to $\mathbb{R}$ with $v(\emptyset) = 0$. Then $v(S)$ is the worth created by coalition $S \subseteq N$. When there is no risk of confusion, we omit braces to indicate singletons (e.g. $i$ rather than $\{i\}$) and denote a set's cardinality by the corresponding lower-case letter (e.g. $s = |S|$). For any agent $i$, Shapley's value is then

$$\psi_i(v) \equiv \sum_{S \subseteq N \setminus \{i\}} \frac{s!\,(n-s-1)!}{n!}\,[v(S \cup i) - v(S)]. \quad (1)$$

This is the average worth that $i$ adds to possible coalitions $S$, weighted as follows: if the set of agents $S$ has already 'arrived', draw the next agent, $i$, to arrive uniformly over the remaining $N \setminus S$ agents. Shapley's value is widely used in explainable AI's attribution problem, partitioning model predictions among individual features (q.v.
Štrumbelj & Kononenko (2014); Lundberg & Lee (2017)) after the model has been trained. Evaluating the prediction function at a specific feature value corresponds to an agent's presence; evaluating it at a reference (or baseline) feature value corresponds to the agent's absence. A feature's Shapley value is its average marginal contribution to the model's predictions. When features are correlated, individual measures of importance may mislead (Bhatt et al., 2020; Patel et al., 2021), as measures of individual significance such as the t-test do. Thus, Shapley's value has been extended to sets of features (Grabisch & Roubens, 1999a; Marichal et al., 2007; Alshebli et al., 2019; Dhamdhere et al., 2020). As these extensions introduce axioms not present in Shapley, they do not preserve the Shapley value's intuition. We extend Shapley's axioms to sets of features, randomizing over sets rather than individual features. The resulting joint Shapley value thus directly extends Shapley's value to a measure of the importance of sets of features: the average marginal contribution of a set of features to a model's predictions. Our approach's novelty is seen in our extension of the null axiom: in Shapley (1953), a null agent contributes nothing to any set of agents to which it may belong; here, a null set contributes nothing to any set to which it may belong. By contrast, interaction indices (Grabisch & Roubens, 1999a; Alshebli et al., 2019; Dhamdhere et al., 2020) recursively decompose sets into individual elements, retaining the original Shapley null axiom. Thus, these indices measure sets' contributions relative to their constituent elements, and so are complementary to the joint Shapley value. The generalised Shapley value (Marichal et al.
, 2007) is closer to our work, but differs in a number of respects: our probabilities are independent of the size of the set of features under consideration¹; our efficiency axiom is fully joint, while theirs are based on singletons and pairs; our symmetry axiom provides uniqueness without reliance on recursion or partnership axioms. To illustrate, consider a null coalition, $T$, whose individual members contribute positively to coalitions that they join. Interaction indices assign a negative value to $T$, as the individual members act discordantly together. However, from a joint feature importance viewpoint, $T$ should be assigned value 0. The joint Shapley value matches this intuition. In the movie review application presented below, the joint Shapley values reveal contributions of collections of words, including grammatical features such as negation ({disappointed} versus {won't, disappointed}), and adjectives ({effort} versus {terrific, effort}). Like the Shapley-Taylor interaction index (Dhamdhere et al., 2020), our efficiency axiom depends on a positive integer $k$, the order of explanation, which limits the number of joint Shapley values to those for subsets up to cardinality $k$. As there are $2^n - 1$ non-empty subsets of $N$, the full set of joint Shapley values rapidly becomes unmanageable otherwise. In practice, $k$ should be set to trade off insight (favouring higher $k$) against computational cost (favouring lower $k$). For each $k$, the extended axioms yield a unique joint Shapley value. Unlike in Shapley (1953), the joint anonymity and symmetry axioms are not interchangeable: each imposes distinct restrictions. Section 2 presents and extends the original Shapley axioms. Section 3 introduces joint Shapley values, deriving them as the unique solution to the extended axioms.
Section 4 illustrates joint Shapley values in the game-theoretical environment and applies them to the Boston housing (Harrison & Rubinfeld, 1978) and Rotten Tomatoes movie review (Pang & Lee, 2005) datasets, comparing them to interaction indices and presenting a sampling technique to facilitate calculation. Section 5 concludes. The Appendix collects proofs and other supplemental material. 2 EXTENDING SHAPLEY'S AXIOMS. For a game $v \in G^N$ and a permutation $\sigma$ on $N$, denote a permuted game by $\sigma v \in G^N$ such that $\sigma v(\sigma(S)) = v(S)$ for all $S \subseteq N$, where $\sigma(S) = \{\sigma(i) : i \in S\}$. An index $\varphi(v)$ of the game $v \in G^N$ is any real-valued function on $2^N$. The original axioms, uniquely satisfied by the (standard) Shapley value $\psi$, are: LI linearity: $\psi$ is a linear function on $G^N$, i.e. $\psi(v + w) = \psi(v) + \psi(w)$ and $\psi(av) = a\psi(v)$ for any $v, w \in G^N$ and $a \in \mathbb{R}$. NU null: An agent that adds no worth to any coalition has no value, i.e. if $v(S \cup \{i\}) = v(S)$ for all $S \subseteq N \setminus \{i\}$, then $\psi_i(v) = 0$. This axiom is sometimes called dummy. EF efficiency: The sum of the values of all agents is equal to the worth of the entire set, i.e. for all $v \in G^N$, $\sum_{i=1}^{n} \psi_i(v) = v(N)$. AN anonymity: For any $\sigma$ on $N$ and any $v \in G^N$, $\psi_i(v) = \psi_{\sigma(i)}(\sigma v)$, for all $i \in N$. [Footnote 1: Measures of joint significance (e.g. $F$-tests) and model selection (e.g. BIC or AIC) penalize larger feature sets to avoid overfitting. As joint Shapley values are calculated after model training, this problem does not arise.]
SY symmetry: If two agents add equal worth to all coalitions that they can both join then they receive equal value: if $v(S \cup \{i\}) = v(S \cup \{j\})$ for all $S \subseteq N \setminus \{i, j\}$ then $\psi_i(v) = \psi_j(v)$. This is strictly weaker than anonymity. Now extend each of these axioms in natural ways to conditions on sets rather than singletons. Below, $\varphi_S(v)$ denotes an index for coalition $S$ on game $v$. JLI joint linearity: $\varphi$ is a linear function on $G^N$, i.e. $\varphi(v + w) = \varphi(v) + \varphi(w)$ and $\varphi(av) = a\varphi(v)$ for any $v, w \in G^N$ and $a \in \mathbb{R}$. (This axiom has not been modified.) JNU joint null: A coalition that adds no worth to any coalition has no value, i.e. if $v(S \cup T) = v(S)$ for all $S \subseteq N \setminus T$, then $\varphi_T(v) = 0$. JEF joint efficiency: The sum of the values of all coalitions up to cardinality $k$ is equal to the worth of the entire set, i.e. for all $v \in G^N$, $\sum_{\emptyset \neq T \subseteq N : |T| \leq k} \varphi_T(v) = v(N)$. JAN joint anonymity: For any $\sigma$ on $N$ and any $v \in G^N$, $\varphi_T(v) = \varphi_{\sigma(T)}(\sigma v)$, for all $T \subseteq N$. JSY joint symmetry: If two coalitions perform equally when joining coalitions that they can both join, and for other coalitions they add no worth, then they receive an equal value, i.e. if 1. $v(S \cup T) = v(S \cup T')$ for all $S \subseteq N \setminus (T \cup T')$, 2. $v(S \cup T) = v(S)$ for all $S \subseteq N \setminus T$ such that $S \cap T' \neq \emptyset$, 3. $v(S \cup T') = v(S)$ for all $S \subseteq N \setminus T'$ such that $S \cap T \neq \emptyset$, then $\varphi_T(v) = \varphi_{T'}(v)$. Axiom JSY only equates the joint Shapley values for coalitions $T$ and $T'$ if they contribute identically to coalitions that they may both join, and contribute nothing to the other coalitions. Axioms JLI, JEF and JAN are all also used in Dhamdhere et al. (2020). Our joint null and joint symmetry notions appear to be new: they reflect our interest in a set of features' contribution to a model's predictions, so that the set's cardinality should not play a role in determining its value. 3 JOINT SHAPLEY VALUES. Our main result is that there is a unique solution to axioms JLI, JNU, JEF, JAN and JSY, the joint Shapley value. The uniqueness is up to the $k$th order of explanation; we say nothing about $|T| > k$. Theorem 1. For each order of explanation $k \in \{1, \dots, n\}$, there is a unique (up to the $k$th order of explanation) index $\varphi^J$ which satisfies axioms JLI, JNU, JEF, JAN and JSY.
| The paper proposes an extension of the Shapley values, namely the joint Shapley values, which measure a set of features' average effect on a model's prediction. The uniqueness of the joint Shapley values is proved. Moreover, training details and tuning parameters are provided in the accompanying code. | SP:86972f8c4420d86b88dfcf2aaf13b88d53b98de5
DMSANET: DUAL MULTI SCALE ATTENTION NETWORK | 1 INTRODUCTION. The local receptive field of the human eye has inspired the construction of convolutional neural networks, which have powered much of the recent advances in computer vision. The multi-scale architecture used in the famous InceptionNet (Szegedy et al., 2016) aggregates multi-scale information from convolutional kernels of different sizes. Attention networks have attracted a lot of attention recently, as they allow the network to focus only on the essential aspects while ignoring the ones which are not useful (Li et al., 2019), (Cao et al., 2019) and (Li et al., 2019). A lot of problems have been successfully tackled using attention mechanisms in computer vision, like image classification, image segmentation, object detection and image generation. Most attention mechanisms can be broadly classified into two types, channel attention and spatial attention, both of which strengthen the original features by aggregating the same feature from all the positions with different aggregation strategies, transformations, and strengthening functions (Zhang et al., 2021). Some works combined both these mechanisms together and achieved better results (Cao et al., 2019) and (Woo et al., 2018). The computational burden was reduced by (Wang et al., 2020) using efficient channel attention and 1 × 1 convolution. The most popular attention mechanism is the Squeeze-and-Excitation module (Hu et al., 2018b), which can significantly improve the performance at a considerably low cost. The "channel shuffle" operator is used by (Zhang and Yang, 2021) to enable information communication between the two branches. It uses a grouping strategy, which divides the input feature map into groups along the channel dimension. 2 RELATED WORK.
There are two main problems which hinder progress in this field: 1) spatial attention, channel attention, and networks combining the two use only local information while ignoring long-range channel dependency; 2) previous architectures fail to capture spatial information at different scales, which would make them more robust and able to handle more complex problems. These two challenges were tackled by (Duta et al., 2020) and (Li et al., 2019) respectively. The problem with these architectures is that the number of parameters increased considerably. Pyramid Split Attention (PSA) (Zhang et al., 2021) has the ability to process the input tensor at multiple scales. A multi-scale pyramid convolution structure is used to integrate information at different scales on each channel-wise feature map. The channel-wise attention weights of the multi-scale feature maps are extracted, thereby capturing long-range channel dependencies. The Non-Local block (Wang et al., 2018) is proposed to build a dense spatial feature map and capture the long-range dependency using non-local operations. (Li et al., 2019) used a dynamic selection attention mechanism that allows each neuron to adaptively adjust its receptive field size based on multiple scales of the input feature map. (Fu et al., 2019) proposed a network to integrate local features with their global dependencies by summing two attention modules from different branches. Multi-scale architectures have been used successfully for a lot of vision problems (Hu et al., 2018b) and (Sagar and Soundrapandiyan, 2020). (Hu et al., 2018a) used spatial extension using a depth-wise convolution to aggregate individual features. Our network borrows ideas from (Gao et al., 2018), which used a network to capture local cross-channel interactions.
The performance (in terms of accuracy) vs. computational complexity (in terms of number of parameters) of state-of-the-art attention modules is shown in Figure 1. Our main contributions can be summarized as follows: • A new attention module is proposed which aggregates feature information at various scales. Our network is scalable and can be easily plugged into various computer vision problems. • Our network captures more contextual information using both spatial and channel attention at various scales. • Our experiments demonstrate that our network outperforms the previous state of the art at a lower computational cost. 3 METHOD. 3.1 FEATURE GROUPING. The Shuffle Attention module divides the input feature map into groups and uses a Shuffle Unit to integrate channel attention and spatial attention into one block for each group. The sub-features are aggregated and a "channel shuffle" operator is used for communicating the information between the different sub-features. For a given feature map $X \in \mathbb{R}^{C \times H \times W}$, where $C$, $H$, $W$ indicate the channel number, spatial height, and width, respectively, the shuffle attention module divides $X$ into $G$ groups along the channel dimension, i.e., $X = [X_1, \dots, X_G]$, $X_k \in \mathbb{R}^{C/G \times H \times W}$. An attention module is used to weight the importance of each feature. The input $X_k$ is split into two branches along the channel dimension, $X_{k1}, X_{k2} \in \mathbb{R}^{C/2G \times H \times W}$. The first branch is used to produce a channel attention map by using the relationship of channels, while the second branch is used to generate a spatial attention map by using the spatial relationship of different features. 3.2 CHANNEL ATTENTION MODULE. The channel attention module is used to selectively weight the importance of each channel and thus produce the best output features. This also helps in reducing the number of parameters of the network.
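The feature grouping of Section 3.1 can be sketched in a few lines of numpy; this is our own illustration (not the authors' released code), and the function name `feature_grouping` and the tensor shapes are assumptions:

```python
import numpy as np

def feature_grouping(x, groups):
    """Split a (C, H, W) feature map into G groups along the channel
    dimension, then halve each group into a channel-attention branch
    X_k1 and a spatial-attention branch X_k2, each with C/2G channels."""
    c, h, w = x.shape
    assert c % (2 * groups) == 0
    xs = x.reshape(groups, c // groups, h, w)    # X_1, ..., X_G
    half = c // (2 * groups)
    xk1 = xs[:, :half]                           # channel-attention branch
    xk2 = xs[:, half:]                           # spatial-attention branch
    return xk1, xk2

x = np.random.rand(64, 8, 8)                     # C=64, H=W=8
xk1, xk2 = feature_grouping(x, groups=4)         # each: (4, 8, 8, 8)
```

Concatenating the two branches along the channel axis recovers the original grouped map, matching the statement that the concatenated output has as many channels as the input.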
Let $X \in \mathbb{R}^{C \times H \times W}$ denote the input feature map, where the quantities $H$, $W$, $C$ represent its height, width and number of input channels respectively. An SE block consists of two parts, squeeze and excitation, which are respectively designed for encoding the global information and adaptively recalibrating the channel-wise relationship. The Global Average Pooling (GAP) operation can be calculated as shown in Equation 1:

$$GAP_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_c(i, j) \quad (1)$$

The attention weight of the $c$th channel in the SE block can be written as denoted in Equation 2:

$$w_c = \sigma(W_1\,\mathrm{ReLU}(W_0(GAP_c))) \quad (2)$$

where $W_0 \in \mathbb{R}^{C \times \frac{C}{r}}$ and $W_1 \in \mathbb{R}^{\frac{C}{r} \times C}$ represent the fully-connected (FC) layers. The symbol $\sigma$ represents the excitation function, where the Sigmoid function is usually used. We calculate the channel attention map $X \in \mathbb{R}^{C \times C}$ from the original features $A \in \mathbb{R}^{C \times H \times W}$. We reshape $A$ to $\mathbb{R}^{C \times N}$, and then perform a matrix multiplication between $A$ and the transpose of $A$. We then apply a softmax layer to obtain the channel attention map $X \in \mathbb{R}^{C \times C}$ as shown in Equation 3:

$$x_{ji} = \frac{\exp(A_i \cdot A_j)}{\sum_{i=1}^{C} \exp(A_i \cdot A_j)} \quad (3)$$

where $x_{ji}$ measures the $i$th channel's impact on the $j$th channel. We perform a matrix multiplication between the transpose of $X$ and $A$ and reshape the result to $\mathbb{R}^{C \times H \times W}$. We also multiply the result by a scale parameter $\beta$ and perform an element-wise sum operation with $A$ to obtain the final output $E \in \mathbb{R}^{C \times H \times W}$ as shown in Equation 4:

$$E_{1j} = \beta \sum_{i=1}^{C} (x_{ji} A_i) + A_j \quad (4)$$

3.3 SPATIAL ATTENTION MODULE. We use Instance Normalization (IN) over $X_{k2}$ to obtain spatial-wise statistics. An $F_c(\cdot)$ operation is used to enhance the representation of $X_{k2}$. The final output of spatial attention is obtained from $X_{k2}$ using parameters $W_2$ and $b_2$ with shape $\mathbb{R}^{C/2G \times 1 \times 1}$. After that the two branches are concatenated to make the number of channels equal to the number of input channels.
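The channel attention of Equations 3–4 above can be sketched directly in numpy; this is an illustrative re-implementation, not the paper's code, and the names `softmax` and `channel_attention` as well as the scalar handling of $\beta$ are assumptions (the spatial attention of the next section is analogous, with the softmax taken over the $N = H \times W$ positions instead of the channels):

```python
import numpy as np

def softmax(z, axis):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(A, beta=1.0):
    """Equations (3)-(4): build the C x C channel attention map from
    A in R^{C x H x W}, then re-weight the channels with a residual."""
    C, H, W = A.shape
    A_flat = A.reshape(C, H * W)          # reshape A to R^{C x N}
    energy = A_flat @ A_flat.T            # entry (j, i) is A_j . A_i
    X = softmax(energy, axis=1)           # x_ji, rows sum to 1
    out = X @ A_flat                      # sum_i x_ji A_i for each j
    return (beta * out + A_flat).reshape(C, H, W)   # E_1j
```

With $\beta = 0$ the module reduces to the identity, which makes the residual formulation easy to sanity-check.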
A local feature, denoted by $A \in \mathbb{R}^{C \times H \times W}$, is fed into a convolution layer to generate two new feature maps $B$ and $C$ respectively, where $B, C \in \mathbb{R}^{C \times H \times W}$. We reshape them to $\mathbb{R}^{C \times N}$, where $N = H \times W$ is the number of pixels. Next a matrix multiplication is done between the transpose of $C$ and $B$, and a softmax layer is applied to calculate the spatial attention map $S \in \mathbb{R}^{N \times N}$. This operation is shown in Equation 5:

$$s_{ji} = \frac{\exp(B_i \cdot C_j)}{\sum_{i=1}^{N} \exp(B_i \cdot C_j)} \quad (5)$$

where $s_{ji}$ measures the $i$th position's impact on the $j$th position. Next we feed feature $A$ into a convolution layer to generate a new feature map $D \in \mathbb{R}^{C \times H \times W}$ and reshape it to $\mathbb{R}^{C \times N}$. We perform a matrix multiplication between $D$ and the transpose of $S$ and reshape the result to $\mathbb{R}^{C \times H \times W}$. We multiply it by a scale parameter $\alpha$ and perform an element-wise sum operation with the feature $A$ to obtain the final output $E \in \mathbb{R}^{C \times H \times W}$ as shown in Equation 6:

$$E_{2j} = \alpha \sum_{i=1}^{N} (s_{ji} D_i) + A_j \quad (6)$$

3.4 AGGREGATION. In the final part of the network, all the sub-features are aggregated. We use a "channel shuffle" operator to enable cross-group information flow along the channel dimension. The final output of our module is the same size as that of the input, making our attention module quite easy to integrate with other networks. The whole multi-scale pre-processed feature map can be obtained by concatenation as defined in Equation 7:

$$F = \mathrm{Concat}([E_{1j}, E_{2j}]) \quad (7)$$

where $F \in \mathbb{R}^{C \times H \times W}$ is the obtained multi-scale feature map. Our attention module is used across channels to adaptively select different spatial scales, guided by the feature descriptor. This operation is defined in Equation 8:

$$att_i = \mathrm{Softmax}(Z_i) = \frac{\exp(Z_i)}{\sum_{i=0}^{S-1} \exp(Z_i)} \quad (8)$$

Finally we multiply the re-calibrated weight of the multi-scale channel attention $att_i$ with the feature map of the corresponding scale $F_i$ as shown in Equation 9:

$$Y_i = F_i \odot att_i, \quad i = 1, 2, 3, \dots, S-1 \quad (9)$$

3.5 NETWORK ARCHITECTURE.
We propose the DMSA module with the goal of building a more efficient and scalable architecture. The first part of our network borrows ideas from (Li et al., 2019) and (Zhang and Yang, 2021). An input feature map $X$ is split into $N$ parts along the channel dimension. Each split part has $C_0 = C/S$ common channels, and the $i$th feature map is $X_i \in \mathbb{R}^{C_0 \times H \times W}$. The individual features are fused before being passed to two different branches. These two branches are comprised of the position attention module and the channel attention module as proposed in (Fu et al., 2019) for semantic segmentation. The second part of our network does the following: 1) builds a spatial attention matrix which models the spatial relationship between any two pixels of the features; 2) performs a matrix multiplication between the attention matrix and the original features; 3) performs an element-wise sum operation on the resulting matrix and the original features. The operators concat and sum are used to reshape the features. The features from the two parallel branches are aggregated to produce the final output. The complete network architecture is shown in Figure 2. We compare our network architecture with ResNet (Wang et al., 2017), SENet (Hu et al., 2018b) and EPSANet (Zhang et al., 2021) in Figure 3. We use our DMSA module in between the 3 × 3 convolution and the 1 × 1 convolution. Our network is able to extract features at various scales and aggregate those individual features before passing them through the attention module. The architectural details of our proposed attention network are shown in Table 1. | In this paper, the authors proposed a new attention module, named Dual Multi-Scale Attention. Three commonly used methods are combined: multi-scale, channel-attention, and position attention.
By introducing the new module (which is a simple combination without any novelty) to ResNet, it SIGNIFICANTLY improves the performance, e.g., 75.20 to 80.02 (and 76.83 to 81.54 for ResNet101) on ImageNet, 36.4 to 41.4 on MS COCO. | SP:ee608c38bc5cba62161a824d237c680050ba3484
DMSANET: DUAL MULTI SCALE ATTENTION NETWORK | 1 INTRODUCTION . The local receptive field of the human eye has led to the construction of convolutional neural networks which has powered much of the recent advances in computer vision . Multi scale architecture used in the famous InceptionNet ( Szegedy et al. , 2016 ) aggregates multi-scale information from different size convolutional kernels . Attention Networks has attracted a lot of attention recently as it allows the network to focus on only then essential aspects while ignoring the ones which are not useful ( Li et al. , 2019 ) , ( Cao et al. , 2019 ) and ( Li et al. , 2019 ) . A lot of problems have been successfully tackled using attention mechanism in computer vision like image classification , image segmentation , object detection and image generation . Most of the attention mechanisms can be broadly classified into two types channel attention and spatial attention , both of which strengthens the original features by aggregating the same feature from all the positions with different aggregation strategies , transformations , and strengthening functions ( Zhang et al. , 2021 ) . Some of the work combined both these mechanism together and achieved better results ( Cao et al. , 2019 ) and ( Woo et al. , 2018 ) . The computational burden was reduced by ( Wang et al. , 2020 ) using efficient channel attention and 1 × 1 convolution . The most popular attention mechanism is the Squeeze-and Excitation module ( Hu et al. , 2018b ) , which can significantly improve the performance with a considerably low cost . The “ channel shuffle ” operator is used ( Zhang and Yang , 2021 ) to enable information communication between the two branches . It uses a grouping strategy , which divides the input feature map into groups along the channel dimension . 2 RELATED WORK . 
There are two main problems which hinders the progress in this field : 1 ) Both spatial and channel attention as well as network using combination of two uses only local information while ignoring long range channel dependency , 2 ) The previous architectures fail to capture spatial information at different scales to be more robust and handle more complex problems . These two challenges were tackled by ( Duta et al. , 2020 ) and ( Li et al. , 2019 ) respectivly . The problem with these architectures is that the number of parameters increased considerably . Pyramid Split Attention ( PSA ) ( Zhang et al. , 2021 ) has the ability to process the input tensor at multiple scales . A multi-scale pyramid convolution structure is used to integrate information at different scales on each channel-wise feature map . The channel-wise attention weight of the multi-scale feature maps are extracted hence long range channel dependency is done . Non-Local block ( Wang et al. , 2018 ) is proposed to build a dense spatial feature map and capture the long-range dependency using non-local operations . ( Li et al. , 2019 ) used a dynamic selection attention mechanism that allows each neuron to adaptively adjust its receptive field size based on multiple scales of input feature map . ( Fu et al. , 2019 ) proposed a network to integrate local features with their global dependencies by summing these two attention modules from different branches . Multi scale architectures have been used sucessfully for a lot of vision problems ( ? ) , ( Hu et al. , 2018b ) and ( Sagar and Soundrapandiyan , 2020 ) . ( Fu et al. , 2019 ) adaptively integrated local features with their global dependencies by summing the two attention modules from different branches . ( Hu et al. , 2018a ) used spatial extension using a depth-wise convolution to aggregate individual features . Our network borrows ideas from ( Gao et al. , 2018 ) which used a network to capture local cross-channel interactions . 
The performance ( in terms of accuracy ) vs computational complexity ( in terms of number of parameters ) of the state of art attention modules is shown in Figure 1 : Our main contributions can be summarized as follows : • A new attention module is proposed which aggregates feature information at various scales . Our network is scalable and can be easily plugged into various computer vision problems . • Our network captures more contextual information using both spatial and channel attention at various scales . • Our experiments demonstrate that our network outperforms previous state of the art with lesser computational cost . 3 METHOD . 3.1 FEATURE GROUPING . Shuffle Attention module divides the input feature map into groups and uses Shuffle Unit to integrate the channel attention and spatial attention into one block for each group . The sub-features are aggregated and a “ channel shuffle ” operator is used for communicating the information between different sub-features . For a given feature map X ∈ RC×H×W , where C , H , W indicate the channel number , spatial height , and width , respectively , shuffle attention module divides X into G groups along the channel dimension , i.e. , X = [ X1 , XG ] , Xk ∈ RC/G×H×W . An attention module is used to weight the importance of each feature . The input of Xk is split into two networks along the channel dimension Xk1 , Xk2 ∈ RC/2G×H×W . The first branch is used to produce a channel attention map by using the relationship of channels , while the second branch is used to generate a spatial attention map by using the spatial relationship of different features . 3.2 CHANNEL ATTENTION MODULE . The channel attention module is used to selectively weight the importance of each channel and thus produces best output features . This helps in reducing the number of parameters of the network . 
Let X ∈ RC×H×W denotes the input feature map , where the quantityH , W , C represent its height , width and number of input channels respectively . A SE block consists of two parts : squeeze and excitation , which are respectively designed for encoding the global information and adaptively recalibrating the channel-wise relationship . The Global Average Pooling ( GAP ) operation can be calculated by the as shown in Equation 1 : GAPc = 1 H ×W H∑ i=1 W∑ j=1 xc ( i , j ) ( 1 ) The attention weight of the cth channel in the SE block can be written as denoted in Equation 2 : wc = σ ( W1ReLU ( W0 ( GAPc ) ) ) ( 2 ) where W0 ∈ RC×Cr and W1 ∈ RCr×C represent the fully-connected ( FC ) layers . The symbol σ represents the excitation function where Sigmoid function is usually used . We calculate the channel attention map X ∈ RC×C from the original features A ∈ RC×H×W . We reshape A to RC×N , and then perform a matrix multiplication between A and the transpose of A . We then apply a softmax layer to obtain the channel attention map X ∈ RC×C as shown in Equation 3 : xji = exp ( Ai ·Aj ) ∑C i=1 exp ( Ai ·Aj ) ( 3 ) where xji measures the ith channel ’ s impact on the jth channel . We perform a matrix multiplication between the transpose of X and A and reshape their result to RC×H×W . We also multiply the result by a scale parameter β and perform an element-wise sum operation with A to obtain the final output E ∈ RC×H×W as shown in Equation 4 : E1j = β C∑ i=1 ( xjiAi ) +Aj ( 4 ) 3.3 SPATIAL ATTENTION MODULE . We use Instance Normalization ( IN ) over Xk2 to obtain spatial-wise statistics . A Fc ( ) operation is used to enhance the representation of Xk2 . The final output of spatial attention is obtained by where W2 and b2 are parameters with shape RC/2G×1×1 . After that the two branches are concatenated to make the number of channels equal to the number of input . 
A local feature map A ∈ R^{C×H×W} is fed into a convolution layer to generate two new feature maps B and C , where B , C ∈ R^{C×H×W} . We reshape them to R^{C×N} , where N = H × W is the number of pixels . Next , a matrix multiplication is performed between the transpose of C and B , and a softmax layer is applied to calculate the spatial attention map S ∈ R^{N×N} . This operation is shown in Equation 5 : s_{ji} = exp ( B_i · C_j ) / Σ_{i=1}^{N} exp ( B_i · C_j ) ( 5 ) , where s_{ji} measures the i-th position ’ s impact on the j-th position . Next we feed feature A into a convolution layer to generate a new feature map D ∈ R^{C×H×W} and reshape it to R^{C×N} . We perform a matrix multiplication between D and the transpose of S and reshape the result to R^{C×H×W} . We multiply it by a scale parameter α and perform an element-wise sum with the feature A to obtain the final output E ∈ R^{C×H×W} as shown in Equation 6 : E_{2j} = α Σ_{i=1}^{N} ( s_{ji} D_i ) + A_j ( 6 ) . 3.4 AGGREGATION . In the final part of the network , all the sub-features are aggregated . We use a “ channel shuffle ” operator to enable cross-group information flow along the channel dimension . The final output of our module has the same size as the input , making our attention module easy to integrate with other networks . The whole multi-scale pre-processed feature map is obtained by concatenation as defined in Equation 7 : F = Concat ( [ E_{1j} , E_{2j} ] ) ( 7 ) , where F ∈ R^{C×H×W} is the obtained multi-scale feature map . Our attention module is used across channels to adaptively select different spatial scales , guided by the feature descriptor . This operation is defined in Equation 8 : att_i = Softmax ( Z_i ) = exp ( Z_i ) / Σ_{j=0}^{S−1} exp ( Z_j ) ( 8 ) . Finally we multiply the re-calibrated weight of the multi-scale channel attention att_i with the feature map of the corresponding scale F_i as shown in Equation 9 : Y_i = F_i ⊙ att_i , i = 0 , 1 , ... , S − 1 ( 9 ) . 3.5 NETWORK ARCHITECTURE .
We propose the DMSA module with the goal of building a more efficient and scalable architecture . The first part of our network borrows ideas from ( Li et al. , 2019 ) and ( Zhang and Yang , 2021 ) . An input feature map X is split into S parts along the channel dimension . Each part has C0 = C/S channels , and the i-th feature map is Xi ∈ R^{C0×H×W} . The individual features are fused before being passed to two different branches . These two branches comprise the position attention module and the channel attention module proposed in ( Fu et al. , 2019 ) for semantic segmentation . The second part of our network does the following : 1 ) builds a spatial attention matrix which models the spatial relationship between any two pixels of the features ; 2 ) performs a matrix multiplication between the attention matrix and the original features ; 3 ) performs an element-wise sum on the resulting matrix and the original features . The concat and sum operators are used to reshape the features . The features from the two parallel branches are aggregated to produce the final output . The complete network architecture is shown in Figure 2 . We compare our network architecture with ResNet ( Wang et al. , 2017 ) , SENet ( Hu et al. , 2018b ) and EPSANet ( Zhang et al. , 2021 ) in Figure 3 . We use our DMSA module between the 3× 3 convolution and the 1× 1 convolution . Our network is able to extract features at various scales and aggregate those individual features before passing them through the attention module . The architectural details of our proposed attention network are shown in Table 1 : | In this paper, the authors propose a new attention module that shows better performance and lower computation than most existing attention modules. Based on this module, the so-called Dual Multi Scale Attention Network is proposed. Several experiments are conducted to verify the performance on image classification, object detection and instance segmentation.
Results show that the proposed network has advantages over previous works. | SP:ee608c38bc5cba62161a824d237c680050ba3484 |
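A small numerical sketch of the position (spatial) attention of Equations 5 and 6 may help; for brevity, the 1×1 convolutions that would produce B, C and D are replaced by the identity, which is an assumption of this illustration only:

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def position_attention(a, alpha=0.1):
    """Position attention over a feature map a of shape (C, H, W).

    Eq. 5: s_{ji} = softmax_i(B_i . C_j) is an N x N affinity over pixels.
    Eq. 6: E_j = alpha * sum_i s_{ji} D_i + A_j adds attended features back.
    B, C and D are taken equal to the input here (identity "convolutions").
    """
    c, h, w = a.shape
    n = h * w
    b = cf = d = a.reshape(c, n)             # (C, N); columns are pixel features
    s = softmax(cf.T @ b, axis=1)            # Eq. 5 -> (N, N), rows sum to 1
    e = alpha * (d @ s.T) + a.reshape(c, n)  # Eq. 6
    return e.reshape(c, h, w), s

rng = np.random.default_rng(0)
a = rng.standard_normal((3, 2, 2))
e, s = position_attention(a)
print(e.shape)  # (3, 2, 2)
```

Because the residual term adds the input back, the output again matches the input shape, so the module composes cleanly with the channel branch before aggregation.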
Rethinking Goal-Conditioned Supervised Learning and Its Connection to Offline RL | 1 INTRODUCTION . Reinforcement learning ( RL ) enables automatic skill learning and has achieved great success in various tasks ( Mnih et al. , 2015 ; Lillicrap et al. , 2016 ; Haarnoja et al. , 2018 ; Vinyals et al. , 2019 ) . Recently , goal-conditioned RL has been gaining attention from the community , as it encourages agents to reach multiple goals and learn general policies ( Schaul et al. , 2015 ) . Previous advanced works on goal-conditioned RL comprise a variety of approaches based on hindsight experience replay ( Andrychowicz et al. , 2017 ; Fang et al. , 2019 ) , exploration ( Florensa et al. , 2018 ; Ren et al. , 2019 ; Pitis et al. , 2020 ) , and imitation learning ( Sun et al. , 2019 ; Ding et al. , 2019 ; Sun et al. , 2020 ; Ghosh et al. , 2021 ) . However , these goal-conditioned methods require intensive online interaction with the environment , which can be costly and dangerous for real-world applications . A direct solution to learning a goal-conditioned policy from offline data is through imitation learning . For instance , goal-conditioned supervised learning ( GCSL ) ( Ghosh et al. , 2021 ) iteratively relabels collected trajectories and imitates them directly . It is substantially simple and stable , as it does not require expert demonstrations or value estimation . Theoretically , GCSL is guaranteed to optimize a lower bound of the objective for the goal-reaching problem . Although hindsight relabeling ( Andrychowicz et al. , 2017 ) with future reached states can be optimal under certain conditions ( Eysenbach et al. , 2020 ) , it would generate non-optimal experiences in the more general offline goal-conditioned RL setting , as discussed in Appendix B.1 . As a result , GCSL suffers from the same issue as other behavior cloning methods by assigning uniform weights to all experiences , and results in suboptimal policies .
To this end , leveraging the simplicity and stability of GCSL , we generalize it to the offline goal-conditioned RL setting and propose an effective and theoretically grounded method named Weighted GCSL ( WGCSL ) . We first revisit the theoretical foundation of GCSL by additionally considering discounted rewards , which allows us to obtain a weighted supervised learning objective . To learn a better policy from the offline dataset and promote learning efficiency , we introduce a more general weighting scheme that considers the importance of different relabeling goals and the expected return estimated with a value function . Theoretically , the discounted weight for relabeling goals contributes to optimizing a tighter lower bound than GCSL . Based on the discounted weight , we show that additionally re-weighting with an exponential function over the advantage value guarantees monotonic policy improvement for goal-conditioned RL . Moreover , the introduced weights build a natural connection between WGCSL and offline RL , making WGCSL applicable in both online and offline settings . Another major challenge in goal-conditioned RL is the multi-modality problem ( Lynch et al. , 2020 ) , i.e. , there are generally many valid trajectories from a state to a goal , which can present multiple counteracting action labels and even impede learning . To tackle this challenge , we further introduce the best-advantage weight under the general weighting scheme , which contributes to the asymptotic performance when the data is multi-modal . Although WGCSL incurs the cost of learning the value function , we empirically show that this cost is worth its remarkable improvement . For the evaluation of offline goal-conditioned RL algorithms , we provide a public benchmark and offline datasets including a set of challenging multi-goal manipulation tasks with a robotic arm or an anthropomorphic hand .
Experiments conducted on the introduced benchmark show that WGCSL significantly outperforms other state-of-the-art baselines in the fully offline goal-conditioned settings , especially in the difficult anthropomorphic hand task and when learning from randomly collected datasets with sparse rewards . 2 PRELIMINARIES . Markov Decision Process and offline RL The RL problem can be described as a Markov Decision Process ( MDP ) , denoted by a tuple ( S , A , P , r , γ ) , where S and A are the state and action spaces ; P describes the transition probability as S × A × S → [ 0 , 1 ] ; r : S × A → R is the reward function and γ ∈ ( 0 , 1 ] is the discount factor ; π : S → A denotes the policy , and an optimal policy π* satisfies π* = arg max_π E_{s1∼ρ(s1) , at∼π(·|st) , st+1∼P(·|st,at)} [ Σ_{t=1}^{∞} γ^{t−1} r ( st , at ) ] , where ρ ( s1 ) is the distribution of initial states . For offline RL problems , the agent can only access a static dataset D , and is not allowed to interact with the environment during the training process . The offline data can be collected by some unknown policies . Goal-conditioned RL Goal-conditioned RL further considers a goal space G. The policy π : S × G → A and the reward function r : S × G × A → R are both conditioned on a goal g ∈ G. The agent learns to maximize the expected discounted cumulative return J ( π ) = E_{g∼p(g) , s1∼ρ(s1) , at∼π(·|st) , st+1∼P(·|st,at)} [ Σ_{t=1}^{∞} γ^{t−1} r ( st , at , g ) ] , where p ( g ) is the distribution over goals g. We study goal-conditioned RL with sparse rewards , where the reward function is typically defined as r ( st , at , g ) = 1 if ||φ ( st ) − g||²₂ < some threshold , and 0 otherwise , where φ : S → G is a mapping from states to goals . For the theoretical analysis in Section 3 , we consider MDPs with discrete state/goal spaces , such that the reward function can be written as r ( st , at , g ) = 1 [ φ ( st ) = g ] , which provides a positive signal only when st achieves g.
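The sparse reward above can be written as a one-liner; the particular state-to-goal mapping φ (taking the first two state coordinates) and the threshold value are illustrative assumptions:

```python
import numpy as np

def sparse_reward(state, goal, phi=lambda s: s[:2], threshold=0.05):
    """Sparse goal-conditioned reward: 1 if phi(s) is within the threshold
    of g (squared Euclidean distance), else 0."""
    return float(np.sum((phi(state) - goal) ** 2) < threshold)

s = np.array([0.1, 0.2, 0.9])              # state; last coord ignored by phi
print(sparse_reward(s, np.array([0.1, 0.2])))  # 1.0
print(sparse_reward(s, np.array([0.9, 0.9])))  # 0.0
```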
The value function V : S × G → R is defined as V^π ( s , g ) = E_{at∼π , st+1∼P(·|st,at)} [ Σ_{t=1}^{∞} γ^{t−1} r ( st , at , g ) | s1 = s ] . Goal-conditioned Supervised Learning Different from goal-conditioned RL , which maximizes the discounted cumulative return , GCSL considers the goal-reaching problem , i.e. , maximizing the last-step reward E_{g∼p(g) , τ∼π} [ r ( sT , aT , g ) ] for trajectories τ of horizon T . GCSL iterates between two processes , relabeling and imitating . Trajectories in the replay buffer are relabeled with hindsight methods ( Kaelbling , 1993 ; Andrychowicz et al. , 2017 ) to form a relabeled dataset D_relabel = { ( st , at , g′ ) } , where g′ = φ ( si ) for i ≥ t denotes the relabeled goal . ( Code and the offline dataset are available at https://github.com/YangRui2015/AWGCSL . ) The policy is optimized through supervised learning to mimic those relabeled transitions : θ = arg max_θ E_{( st , at , g′ )∼D_relabel} log πθ ( at | st , g′ ) . 3 WEIGHTED GOAL-CONDITIONED SUPERVISED LEARNING . In this section , we revisit GCSL and introduce the general weighting scheme which generalizes GCSL to offline goal-conditioned RL . 3.1 REVISITING GOAL-CONDITIONED SUPERVISED LEARNING . As an imitation learning method , GCSL sticks the agent ’ s policy to the relabeled data distribution , and therefore naturally alleviates the problem of out-of-distribution actions . Besides , it has the potential to reach any goal in the offline dataset through hindsight relabeling and the generalization ability of neural networks . Despite its advantages , GCSL has a major disadvantage for offline goal-conditioned RL : it only considers the last-step reward r ( sT , aT , g ) and generally results in suboptimal policies . A Motivating Example We provide training results of GCSL on the PointReach task in Figure 2 . The objective of the task is to move a point from the starting position to the desired goal as quickly as possible . The offline dataset is collected using a random policy .
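GCSL's relabeling step can be sketched as below; the identity state-to-goal mapping and the uniform sampling of a single future index per step are simplifying assumptions for illustration:

```python
import random

def relabel(states, actions, phi=lambda s: s):
    """Hindsight relabeling: for each step t, sample a future index i >= t
    and store (s_t, a_t, g') with g' = phi(s_i), forming D_relabel."""
    data = []
    for t in range(len(actions)):
        i = random.randrange(t, len(states))  # any future (or current) state
        data.append((states[t], actions[t], phi(states[i])))
    return data

random.seed(0)
d = relabel([0, 1, 2, 3], ["r", "r", "r"])  # a 4-state, 3-action trajectory
print(len(d))  # 3
```

Every relabeled goal is a state the behavior policy actually reached, which is why the imitating step never asks the policy to produce out-of-distribution actions.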
As shown in Figure 2 , GCSL learns a suboptimal policy which detours to reach goals . This is because GCSL only considers the last-step reward r ( sT , aT , g ) : as long as the trajectories reach the goal at the end of the episode , they are all weighted equally by GCSL . To improve the learned policy , a straightforward way is to evaluate the importance of samples or trajectories for policy learning using importance weights . As a comparison , Weighted GCSL ( WGCSL ) , which uses a novel weighting scheme , learns the optimal policy in the PointReach task . We derive the formulation of WGCSL in the following analysis . Connection between Goal-Conditioned RL and SL . GCSL has been proved to optimize a lower bound on the goal-reaching objective ( Ghosh et al. , 2021 ) , which is different from the goal-conditioned RL objective that optimizes the cumulative return . In this section , we introduce a surrogate function for goal-conditioned RL and then derive the connection between goal-conditioned RL and weighted goal-conditioned supervised learning ( WGCSL ) . For overall consistency , we provide the formulation of WGCSL as below : J_WGCSL ( π ) = E_{g∼p(g) , τ∼πb(·|g) , t∼[1,T] , i∼[t,T]} [ w_{t,i} log πθ ( at | st , φ ( si ) ) ] , ( 1 ) where p ( g ) is the distribution of desired goals and πb refers to the data-collecting policy . The weight w_{t,i} is subscripted by both the time step t and the index i of the relabeled goal . When w_{t,i} = 1 , J_WGCSL reduces to GCSL ( Ghosh et al. , 2021 ) , and we denote the corresponding objective as J_GCSL for convenience of comparison . Note that in Eq . 1 , φ ( si ) , i ≥ t implies that any future visited state after st can be relabeled as a goal . Theorem 1 .
Assume a finite-horizon discrete MDP , a stochastic discrete policy π which selects actions with non-zero probability , and a sparse reward function r ( st , at , g ) = 1 [ φ ( st ) = g ] , where φ is the state-to-goal mapping and 1 [ φ ( st ) = g ] is an indicator function . Given trajectories τ = ( s1 , a1 , · · · , sT , aT ) and discount factor γ ∈ ( 0 , 1 ] , let the weight be w_{t,i} = γ^{i−t} , t ∈ [ 1 , T ] , i ∈ [ t , T ] ; then the following bounds hold : J_surr ( π ) ≥ T · J_WGCSL ( π ) ≥ T · J_GCSL ( π ) , where J_surr ( π ) = ( 1/T ) E_{g∼p(g) , τ∼πb(·|g)} [ Σ_{t=1}^{T} log π ( at | st , g ) Σ_{i=t}^{T} γ^{i−1} · 1 [ φ ( si ) = g ] ] is a surrogate function of J ( π ) . We defer the proof to Appendix B.2 , where we also show that under mild conditions , ( 1 ) J_surr is a lower bound of log J , and ( 2 ) J_surr shares the same gradient direction as J at πb . Theorem 1 reveals the connection between the goal-conditioned RL objective and the WGCSL/GCSL objective . Meanwhile , it suggests that GCSL with the discount weight γ^{i−t} optimizes a tighter lower bound than the unweighted version . We name this weight the discounted relabeling weight ( DRW ) , as it intuitively assigns smaller weights to longer trajectories reaching the same relabeled goal . | This paper proposes a method for goal-conditioned RL that can be interpreted as a weighted version of prior work. These weights resemble a combination of discounting and advantage weighting. The paper provides theory arguing that these weights cause the proposed method to optimize a tighter lower bound than prior work. Experiments show that it outperforms the prior work, which does not include weights on each training example. | SP:b35f74a2fd21a865da1621b2c59ead20e912e3ad |
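A small Monte-Carlo sketch of the weighted objective in Eq. 1 with the discounted relabeling weight of Theorem 1; random log-probabilities stand in for log πθ(a_t | s_t, φ(s_i)), and the triangular indexing enforces i ≥ t:

```python
import numpy as np

def weighted_bc_loss(logp, weights):
    """Negative weighted log-likelihood over relabeled pairs (t, i), i >= t.
    With all weights equal to 1 this is the GCSL objective; with
    weights[t, i] = gamma**(i - t) it uses the discounted relabeling weight."""
    t_idx, i_idx = np.triu_indices(logp.shape[0])   # all pairs with i >= t
    return -(weights[t_idx, i_idx] * logp[t_idx, i_idx]).mean()

rng = np.random.default_rng(0)
T, gamma = 5, 0.9
logp = np.log(rng.uniform(0.1, 1.0, size=(T, T)))   # stand-in for log pi_theta
t, i = np.indices((T, T))
gcsl = weighted_bc_loss(logp, np.ones((T, T)))      # uniform weights (GCSL)
wgcsl = weighted_bc_loss(logp, gamma ** (i - t))    # DRW from Theorem 1
```

Since each DRW factor lies in (0, 1], the weighted loss is never larger than the uniform one term by term; the point of the weighting is which terms dominate the gradient, not the loss magnitude itself.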
Rethinking Goal-Conditioned Supervised Learning and Its Connection to Offline RL | 1 INTRODUCTION . Reinforcement learning ( RL ) enables automatic skill learning and has achieved great success in various tasks ( Mnih et al. , 2015 ; Lillicrap et al. , 2016 ; Haarnoja et al. , 2018 ; Vinyals et al. , 2019 ) . Recently , goal-conditioned RL has been gaining attention from the community , as it encourages agents to reach multiple goals and learn general policies ( Schaul et al. , 2015 ) . Previous advanced works on goal-conditioned RL comprise a variety of approaches based on hindsight experience replay ( Andrychowicz et al. , 2017 ; Fang et al. , 2019 ) , exploration ( Florensa et al. , 2018 ; Ren et al. , 2019 ; Pitis et al. , 2020 ) , and imitation learning ( Sun et al. , 2019 ; Ding et al. , 2019 ; Sun et al. , 2020 ; Ghosh et al. , 2021 ) . However , these goal-conditioned methods require intensive online interaction with the environment , which can be costly and dangerous for real-world applications . A direct solution to learning a goal-conditioned policy from offline data is through imitation learning . For instance , goal-conditioned supervised learning ( GCSL ) ( Ghosh et al. , 2021 ) iteratively relabels collected trajectories and imitates them directly . It is substantially simple and stable , as it does not require expert demonstrations or value estimation . Theoretically , GCSL is guaranteed to optimize a lower bound of the objective for the goal-reaching problem . Although hindsight relabeling ( Andrychowicz et al. , 2017 ) with future reached states can be optimal under certain conditions ( Eysenbach et al. , 2020 ) , it would generate non-optimal experiences in the more general offline goal-conditioned RL setting , as discussed in Appendix B.1 . As a result , GCSL suffers from the same issue as other behavior cloning methods by assigning uniform weights to all experiences , and results in suboptimal policies .
To this end , leveraging the simplicity and stability of GCSL , we generalize it to the offline goal-conditioned RL setting and propose an effective and theoretically grounded method named Weighted GCSL ( WGCSL ) . We first revisit the theoretical foundation of GCSL by additionally considering discounted rewards , which allows us to obtain a weighted supervised learning objective . To learn a better policy from the offline dataset and promote learning efficiency , we introduce a more general weighting scheme that considers the importance of different relabeling goals and the expected return estimated with a value function . Theoretically , the discounted weight for relabeling goals contributes to optimizing a tighter lower bound than GCSL . Based on the discounted weight , we show that additionally re-weighting with an exponential function over the advantage value guarantees monotonic policy improvement for goal-conditioned RL . Moreover , the introduced weights build a natural connection between WGCSL and offline RL , making WGCSL applicable in both online and offline settings . Another major challenge in goal-conditioned RL is the multi-modality problem ( Lynch et al. , 2020 ) , i.e. , there are generally many valid trajectories from a state to a goal , which can present multiple counteracting action labels and even impede learning . To tackle this challenge , we further introduce the best-advantage weight under the general weighting scheme , which contributes to the asymptotic performance when the data is multi-modal . Although WGCSL incurs the cost of learning the value function , we empirically show that this cost is worth its remarkable improvement . For the evaluation of offline goal-conditioned RL algorithms , we provide a public benchmark and offline datasets including a set of challenging multi-goal manipulation tasks with a robotic arm or an anthropomorphic hand .
Experiments conducted on the introduced benchmark show that WGCSL significantly outperforms other state-of-the-art baselines in the fully offline goal-conditioned settings , especially in the difficult anthropomorphic hand task and when learning from randomly collected datasets with sparse rewards . 2 PRELIMINARIES . Markov Decision Process and offline RL The RL problem can be described as a Markov Decision Process ( MDP ) , denoted by a tuple ( S , A , P , r , γ ) , where S and A are the state and action spaces ; P describes the transition probability as S × A × S → [ 0 , 1 ] ; r : S × A → R is the reward function and γ ∈ ( 0 , 1 ] is the discount factor ; π : S → A denotes the policy , and an optimal policy π* satisfies π* = arg max_π E_{s1∼ρ(s1) , at∼π(·|st) , st+1∼P(·|st,at)} [ Σ_{t=1}^{∞} γ^{t−1} r ( st , at ) ] , where ρ ( s1 ) is the distribution of initial states . For offline RL problems , the agent can only access a static dataset D , and is not allowed to interact with the environment during the training process . The offline data can be collected by some unknown policies . Goal-conditioned RL Goal-conditioned RL further considers a goal space G. The policy π : S × G → A and the reward function r : S × G × A → R are both conditioned on a goal g ∈ G. The agent learns to maximize the expected discounted cumulative return J ( π ) = E_{g∼p(g) , s1∼ρ(s1) , at∼π(·|st) , st+1∼P(·|st,at)} [ Σ_{t=1}^{∞} γ^{t−1} r ( st , at , g ) ] , where p ( g ) is the distribution over goals g. We study goal-conditioned RL with sparse rewards , where the reward function is typically defined as r ( st , at , g ) = 1 if ||φ ( st ) − g||²₂ < some threshold , and 0 otherwise , where φ : S → G is a mapping from states to goals . For the theoretical analysis in Section 3 , we consider MDPs with discrete state/goal spaces , such that the reward function can be written as r ( st , at , g ) = 1 [ φ ( st ) = g ] , which provides a positive signal only when st achieves g.
The value function V : S × G → R is defined as V^π ( s , g ) = E_{at∼π , st+1∼P(·|st,at)} [ Σ_{t=1}^{∞} γ^{t−1} r ( st , at , g ) | s1 = s ] . Goal-conditioned Supervised Learning Different from goal-conditioned RL , which maximizes the discounted cumulative return , GCSL considers the goal-reaching problem , i.e. , maximizing the last-step reward E_{g∼p(g) , τ∼π} [ r ( sT , aT , g ) ] for trajectories τ of horizon T . GCSL iterates between two processes , relabeling and imitating . Trajectories in the replay buffer are relabeled with hindsight methods ( Kaelbling , 1993 ; Andrychowicz et al. , 2017 ) to form a relabeled dataset D_relabel = { ( st , at , g′ ) } , where g′ = φ ( si ) for i ≥ t denotes the relabeled goal . ( Code and the offline dataset are available at https://github.com/YangRui2015/AWGCSL . ) The policy is optimized through supervised learning to mimic those relabeled transitions : θ = arg max_θ E_{( st , at , g′ )∼D_relabel} log πθ ( at | st , g′ ) . 3 WEIGHTED GOAL-CONDITIONED SUPERVISED LEARNING . In this section , we revisit GCSL and introduce the general weighting scheme which generalizes GCSL to offline goal-conditioned RL . 3.1 REVISITING GOAL-CONDITIONED SUPERVISED LEARNING . As an imitation learning method , GCSL sticks the agent ’ s policy to the relabeled data distribution , and therefore naturally alleviates the problem of out-of-distribution actions . Besides , it has the potential to reach any goal in the offline dataset through hindsight relabeling and the generalization ability of neural networks . Despite its advantages , GCSL has a major disadvantage for offline goal-conditioned RL : it only considers the last-step reward r ( sT , aT , g ) and generally results in suboptimal policies . A Motivating Example We provide training results of GCSL on the PointReach task in Figure 2 . The objective of the task is to move a point from the starting position to the desired goal as quickly as possible . The offline dataset is collected using a random policy .
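The value function above is the expectation of the discounted cumulative return; a short helper that computes this return for a single reward sequence:

```python
def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^(t-1) * r_t with a backward pass over the rewards."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# sparse reward achieved only on the last of four steps
print(discounted_return([0, 0, 0, 1], gamma=0.5))  # 0.125 = 0.5**3
```

The backward recursion avoids recomputing powers of gamma and is the standard way returns are accumulated when preparing Monte-Carlo targets for a value function.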
As shown in Figure 2 , GCSL learns a suboptimal policy which detours to reach goals . This is because GCSL only considers the last-step reward r ( sT , aT , g ) : as long as the trajectories reach the goal at the end of the episode , they are all weighted equally by GCSL . To improve the learned policy , a straightforward way is to evaluate the importance of samples or trajectories for policy learning using importance weights . As a comparison , Weighted GCSL ( WGCSL ) , which uses a novel weighting scheme , learns the optimal policy in the PointReach task . We derive the formulation of WGCSL in the following analysis . Connection between Goal-Conditioned RL and SL . GCSL has been proved to optimize a lower bound on the goal-reaching objective ( Ghosh et al. , 2021 ) , which is different from the goal-conditioned RL objective that optimizes the cumulative return . In this section , we introduce a surrogate function for goal-conditioned RL and then derive the connection between goal-conditioned RL and weighted goal-conditioned supervised learning ( WGCSL ) . For overall consistency , we provide the formulation of WGCSL as below : J_WGCSL ( π ) = E_{g∼p(g) , τ∼πb(·|g) , t∼[1,T] , i∼[t,T]} [ w_{t,i} log πθ ( at | st , φ ( si ) ) ] , ( 1 ) where p ( g ) is the distribution of desired goals and πb refers to the data-collecting policy . The weight w_{t,i} is subscripted by both the time step t and the index i of the relabeled goal . When w_{t,i} = 1 , J_WGCSL reduces to GCSL ( Ghosh et al. , 2021 ) , and we denote the corresponding objective as J_GCSL for convenience of comparison . Note that in Eq . 1 , φ ( si ) , i ≥ t implies that any future visited state after st can be relabeled as a goal . Theorem 1 .
Assume a finite-horizon discrete MDP , a stochastic discrete policy π which selects actions with non-zero probability , and a sparse reward function r ( st , at , g ) = 1 [ φ ( st ) = g ] , where φ is the state-to-goal mapping and 1 [ φ ( st ) = g ] is an indicator function . Given trajectories τ = ( s1 , a1 , · · · , sT , aT ) and discount factor γ ∈ ( 0 , 1 ] , let the weight be w_{t,i} = γ^{i−t} , t ∈ [ 1 , T ] , i ∈ [ t , T ] ; then the following bounds hold : J_surr ( π ) ≥ T · J_WGCSL ( π ) ≥ T · J_GCSL ( π ) , where J_surr ( π ) = ( 1/T ) E_{g∼p(g) , τ∼πb(·|g)} [ Σ_{t=1}^{T} log π ( at | st , g ) Σ_{i=t}^{T} γ^{i−1} · 1 [ φ ( si ) = g ] ] is a surrogate function of J ( π ) . We defer the proof to Appendix B.2 , where we also show that under mild conditions , ( 1 ) J_surr is a lower bound of log J , and ( 2 ) J_surr shares the same gradient direction as J at πb . Theorem 1 reveals the connection between the goal-conditioned RL objective and the WGCSL/GCSL objective . Meanwhile , it suggests that GCSL with the discount weight γ^{i−t} optimizes a tighter lower bound than the unweighted version . We name this weight the discounted relabeling weight ( DRW ) , as it intuitively assigns smaller weights to longer trajectories reaching the same relabeled goal . | This paper proposes an extension of GCSL where the goal-conditioned BC loss is weighted by a variable that correlates with the number of steps necessary to achieve the desired goal. The advantage of this approach is that sub-optimal trajectories to a particular goal are downweighted to the benefit of more direct trajectories. This effectively adds a policy improvement step to GCSL, with the underlying objective being to arrive at goal states as fast as possible. Results on goal-conditioned tasks show that the proposed approach (WGCSL) performs consistently better than GCSL and in some cases significantly better (HandReach-expert). | SP:b35f74a2fd21a865da1621b2c59ead20e912e3ad |
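The introduction also mentions re-weighting with an exponential of the estimated advantage on top of the DRW. The exact advantage-based factor is not given in this excerpt, so the `exp` term and the clipping constant below are assumptions of this sketch, not the paper's formula:

```python
import numpy as np

def wgcsl_weight(t, i, advantage, gamma=0.98, clip=10.0):
    """Illustrative combined weight w_{t,i}: the discounted relabeling weight
    gamma^(i-t) of Theorem 1 multiplied by an exponential of an estimated
    advantage; clipping keeps the weight bounded."""
    drw = gamma ** (i - t)
    return drw * min(np.exp(advantage), clip)

print(round(wgcsl_weight(2, 2, 0.0), 3))  # 1.0 (same-step goal, zero advantage)
print(round(wgcsl_weight(2, 5, 0.0), 3))  # 0.941 (discounted by gamma^3)
```

The first factor downweights long detours to a relabeled goal; the second factor, estimated from a learned value function, emphasizes actions that outperform the behavior policy.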
Interpreting Reinforcement Policies through Local Behaviors | 1 INTRODUCTION . Deep reinforcement learning has seen stupendous success over the last decade , with superhuman performance in games such as Go ( Silver et al. , 2016 ) and Chess ( Silver et al. , 2018 ) as well as on Atari benchmarks ( Mnih et al. , 2015 ) . With the increasingly superior capabilities of automated ( learning ) systems , there is a strong push to understand the reasoning behind their decision making . One motivation is for ( professional ) humans to improve their performance in these games ( Rensch , 2021 ) . An even deeper reason is for humans to be able to trust these systems if they are deployed in real-life scenarios ( Gunning , 2017 ) . Safety , for instance , is of paramount importance in applications such as self-driving cars or deployments on unmanned aerial vehicles ( UAVs ) ( Garcia & Fernandez , 2015 ) . The General Data Protection Regulation ( Yannella & Kagan , 2018 ) passed in Europe demands that explanations be provided for any automated decisions that affect humans . While various methods with different flavors have been proposed to explain classification models ( Ribeiro et al. , 2016 ; Lundberg & Lee , 2017 ; Lapuschkin et al. , 2016 ; Dhurandhar et al. , 2018 ) and evaluated in an application-grounded manner ( Doshi-Velez & Kim , 2017 ; Dhurandhar et al. , 2017 ) , the exploration of different perspectives to explain reinforcement learning ( RL ) policies has been limited , and user study evaluations comparing methods are rarely employed in this space . In this paper , we provide a novel perspective to produce human-understandable explanations , with a task-oriented user study that evaluates which explanations help users better predict the behavior of a policy . Our approach involves two steps : 1 ) learning meta-states , i.e.
, clusters of states , based on the dynamics of the policy being explained , and 2 ) within each meta-state , identifying states that act as intermediate goals , which we refer to as strategic states . Contrary to the global nature of recent explainability works in RL ( Topin & Veloso , 2019 ; Sreedharan et al. , 2020 ; Amir & Amir , 2018 ) , our focus is on local explanations ; given the current state , we explain the policy moving forward within a fixed distance from the current state . This key distinction allows us to consider richer state spaces ( i.e. , with more features ) because the locality restricts the size of the state space we consider , as will be demonstrated . It is also important to recognize the difference from bottlenecks ( Menache et al. , 2002 ; Simsek & Barto , 2004 ) , which are policy-independent and learned by approximating the state space with randomly sampled trajectories ; rather than helping explain a policy , bottlenecks are used to learn efficient policies , such as through hierarchical RL ( Botvinick et al. , 2008 ) or options frameworks ( Ramesh et al. , 2019 ) . Strategic states , however , are learned with respect to a policy and identified without assuming access to the underlying topology . An example of this is seen in Figure 1a . Each position is a state , and a meta-state is a cluster of possible positions ( states sharing a color/marker ) . Within each meta-state , we identify certain states as strategic states ( shown with larger markers ) : the intermediate states that , when moved towards , allow the agent to transition to another meta-state and get closer to the goal state , the final state the agent wants to reach . In Figure 1a , each room is ( roughly ) identified as a meta-state by our method , with the corresponding doors being the respective strategic states .
Topology refers to the graph connecting states to one another ; our method only has access to knowledge of which states are connected ( through the policy ) , whereas reinforcement learning algorithms might have access to properties of the topology , e.g. , the ability to access similar states using successor representations ( Machado et al. , 2018 ) . In Figure 1 , the topology is the graph connecting the different positions in each room or the doors connecting one room to another . A key conceptual difference between our approach and others is that other methods aggregate insight ( i.e. , reduce dimension ) as a function of actions ( Bastani et al. , 2018 ) or of formulas derived over factors of the state space ( Sreedharan et al. , 2020 ) to output a policy summary , whereas we aggregate based on the locality of states determined by the expert policy dynamics and further identify strategic states based on these dynamics . Still other summarization methods simply output simulated trajectories deemed important ( Amir & Amir , 2018 ; Huber et al. , 2021 ) , as judged by whether or not the action taken at some state matters . We use the term policy dynamics to refer to state transitions and high-probability paths . We use the term dynamics because this notion contrasts with methods that use actions to explain what to do in a state or to identify important states ; strategic states are selected according to the trajectories that lead to them , and these trajectories are implicitly determined by the policy . The example in Figure 1 also exposes the global view of our explanations when the state space is small , because local approximations of the state space are not needed . We show that this perspective leads to more understandable explanations ; aggregating based on actions , while precise , is too granular a view , where the popular idiom “ can ’ t see the forest for the trees ” comes to mind .
We conjecture that the improved understanding is due to our grouping of states being more intuitive , with strategic states indicating tractable intermediate goals that are easier to follow . An example of this is again seen in Figures 1b and 1c , where grouping based on actions for interpretability or for efficiency leads to less intuitive results ( note that Figure 1c replicates Figure 4b from ( Abel et al. , 2019 ) ) . A more detailed discussion of this scenario can be found in Section 5 , where yet other domains have large state spaces and require strategic states to explain local scenarios . As such , our main contributions are two-fold : 1 . We offer a novel framework for understanding RL policies , which , to the best of our knowledge , differs greatly from other methods in this space , which create explanations based on similarity of actions rather than policy dynamics . We demonstrate it on three domains of increasing difficulty . 2 . We conduct a task-oriented user study to evaluate the effectiveness of our method . Task-oriented evaluations are one of the most thorough ways of evaluating explanation methods ( Doshi-Velez & Kim , 2017 ; Lipton , 2016 ; Dhurandhar et al. , 2017 ) as they assess simulatability of a complex AI model by a human , yet to our knowledge , have rarely been used in the RL space . 2 NOTATION . We use the following notations . Let S define the full state space and s ∈ S be a state in the full state space . Denote the expert policy by πE ( · , · ) : ( A , S ) → R , where A is the action space . The notation πE ∈ R^{|A|×|S|} is a matrix where each column is a distribution of actions to take given a state ( i.e. , the policy is stochastic ) . Note that we assume a transition function fE ( · , · ) : ( S , S ) → R that defines the likelihood of going from one state to another state in one jump by following the expert policy . Let Sφ = { Φ1 , ...
, Φk } denote a meta-state space of cardinality k and φ ( · ) : S → Sφ denote a meta-state mapping such that φ ( s ) ∈ Sφ is the meta-state assigned to s ∈ S. Denote the m strategic states of meta-state Φ by GΦ = { gΦ1 , ... , gΦm } , where gΦi ∈ S ∀i ∈ { 1 , ... , m } . 3 METHOD . We now describe our algorithm , the Strategic State eXplanation ( SSX ) method , which involves computing shortest paths between states , identifying meta-states , and selecting their corresponding strategic states . However , we first define certain terms . Recall that all paths discussed below are based on transitions dictated by an expert policy because we want to explain the policy ; the well-known concept of bottlenecks is identified from paths generated as random walks through the state space and is meant to help learn policies rather than explain them . Maximum likelihood ( expert ) paths : One criterion used below is that two states in the same meta-state should not be far away from each other . The distance we consider is the most likely path from state s to state s′ under πE . Consider a fully connected , directed ( in both directions ) graph where the states are vertices and an edge from s to s′ has weight − log fE ( s , s′ ) . By this definition , the shortest path is also the maximum likelihood path from s to s′ . Denote by γ ( s , s′ ) the value of this maximum likelihood path and by Γ ∈ R^{|S|×|S|} a matrix containing the values of these paths for all pairs of states in the state space . Γ , along with a predecessor matrix P that can be used to derive the shortest paths , can be computed using Dijkstra ’ s shortest path algorithm in O ( |S|² log |S| ) because all edge weights are non-negative . Section 3.4 below discusses how our algorithm is applied with a large state space . Note that the computation of Γ means that SSX requires access to a policy simulator for πE , and in practice , might require simulation for estimation when Γ cannot be computed exactly .
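As an illustrative sketch of the maximum likelihood path computation described above, the following runs Dijkstra's algorithm over edge weights − log fE(s, s′), so the shortest path under these weights is the most likely path under the policy. The dictionary-based transition representation and the function name are assumptions for illustration, not the authors' implementation.

```python
import heapq
import math

def max_likelihood_paths(trans, source):
    """Single-source Dijkstra over edge weights -log f_E(s, s').

    trans: dict mapping state -> {next_state: probability}, assumed to
    come from a policy simulator for pi_E.
    Returns (gamma, pred): gamma[s] is the -log-likelihood of the most
    likely path from source to s, pred[s] its predecessor on that path
    (one row of the Gamma and P matrices described in the text).
    """
    gamma = {source: 0.0}
    pred = {source: None}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        cost, s = heapq.heappop(heap)
        if s in visited:
            continue
        visited.add(s)
        for s2, p in trans.get(s, {}).items():
            if p <= 0.0:
                continue  # zero-probability edges are effectively absent
            new_cost = cost - math.log(p)  # non-negative since p <= 1
            if new_cost < gamma.get(s2, float("inf")):
                gamma[s2] = new_cost
                pred[s2] = s
                heapq.heappush(heap, (new_cost, s2))
    return gamma, pred
```

Running this from every state as a source yields the full Γ and P matrices at the stated O(|S|² log |S|) cost.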
This is a common requirement among related explanation methods , e.g. , in order to simulate important trajectories ( Amir & Amir , 2018 ) or samples to train a decision tree ( Bastani et al. , 2018 ) , which are discussed below in Section 4 . Counts of Out-paths : Another criterion used below for assigning states to meta-states is that if state s lies on many of the paths between one meta-state Φi and all other meta-states , then s should be assigned the meta-state Φi , i.e. , φ ( s ) = Φi . We define , for a fixed state s and its assigned meta-state φ ( s ) , the number of shortest paths leaving φ ( s ) that s lies on . Denote T ( s , s′ ) as the set of states that lie on the maximum likelihood path between s and s′ , i.e. , the set of states that define γ ( s , s′ ) . Then 1 [ s ∈ T ( s′ , s′′ ) ] is the indicator of whether state s lies on the maximum likelihood path between s′ and s′′ , and we compute the count of the number of such paths for state s and meta-state φ ( s ) via C ( s , φ ( s ) ) = ∑_{ s′ ≠ s , φ ( s′ ) = φ ( s ) } ∑_{ s′′ : φ ( s′′ ) ≠ φ ( s ) } 1 [ s ∈ T ( s′ , s′′ ) ] . ( 1 ) C ( s , φ ( s ) ) can be computed for all s ∈ S in O ( |S|² ) by iteratively checking if predecessors of shortest paths from each node to every other node lie in the same meta-state as the first node on the path . Note that this predecessor matrix was already computed for the matrix Γ above . One may also consider the likelihood ( rather than the count ) of out-paths by replacing the indicator in Equation ( 1 ) with γ ( s′ , s′′ ) . | This work proposes to explain an RL policy by clustering states into meta-states and presenting strategic state(s) for each meta-state. This clustering is performed based on policy rollouts, balancing the likelihood of paths within a meta-state and the number of paths from states within the meta-state. The authors present example explanations for three domains. A user study is performed to compare VIPER-D to the proposed method (SSX). 
| SP:dde554a1db9bb15202e77adaf6edd9868865605b |
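Equation (1) above counts, for each state, the out-paths of its meta-state that pass through it; a minimal Python sketch, assuming the maximum likelihood paths T(s′, s″) have already been enumerated into a dictionary (the `paths` and `phi` structures are illustrative assumptions, not the authors' data layout):

```python
def out_path_counts(paths, phi):
    """Count, for each state s, how many maximum likelihood paths that
    start in s's meta-state and end in a different meta-state pass
    through s (Equation 1 of the text).

    paths: dict mapping (a, b) -> list of states on the maximum
    likelihood path from a to b, i.e., the set T(a, b).
    phi: dict mapping state -> meta-state label.
    """
    counts = {}
    for (a, b), states in paths.items():
        if phi[a] == phi[b]:
            continue  # only out-paths: endpoints in different meta-states
        for s in states:
            # s' = a must share s's meta-state and differ from s
            if s != a and phi[s] == phi[a]:
                counts[s] = counts.get(s, 0) + 1
    return counts
```

Replacing the `+ 1` increment with the path likelihood γ(a, b) would give the likelihood-weighted variant mentioned at the end of the section.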
Interpreting Reinforcement Policies through Local Behaviors | 1 INTRODUCTION . Deep reinforcement learning has seen stupendous success over the last decade with superhuman performance in games such as Go ( Silver et al. , 2016 ) and Chess ( Silver et al. , 2018 ) , as well as on Atari benchmarks ( Mnih et al. , 2015 ) . With the increasingly superior capabilities of automated ( learning ) systems , there is a strong push to understand the reasoning behind their decision making . One motivation is for ( professional ) humans to improve their performance in these games ( Rensch , 2021 ) . An even deeper reason is for humans to be able to trust these systems if they are deployed in real life scenarios ( Gunning , 2017 ) . Safety , for instance , is of paramount importance in applications such as self-driving cars or deployments on unmanned aerial vehicles ( UAVs ) ( Garcia & Fernandez , 2015 ) . The General Data Protection Regulation ( Yannella & Kagan , 2018 ) passed in Europe demands that explanations be provided for any automated decisions that affect humans . While various methods with different flavors have been provided to explain classification models ( Ribeiro et al. , 2016 ; Lundberg & Lee , 2017 ; Lapuschkin et al. , 2016 ; Dhurandhar et al. , 2018 ) and evaluated in an application-grounded manner ( Doshi-Velez & Kim , 2017 ; Dhurandhar et al. , 2017 ) , the exploration of different perspectives to explain reinforcement learning ( RL ) policies has been limited , and user study evaluations comparing methods are rarely employed in this space . In this paper , we provide a novel perspective to produce human understandable explanations , with a task-oriented user study that evaluates which explanations help users predict the behavior of a policy better . Our approach involves two steps : 1 ) learning meta-states , i.e. 
, clusters of states , based on the dynamics of the policy being explained , and 2 ) within each meta-state , identifying states that act as intermediate goals , which we refer to as strategic states . Contrary to the global nature of recent explainability works in RL ( Topin & Veloso , 2019 ; Sreedharan et al. , 2020 ; Amir & Amir , 2018 ) , our focus is on local explanations ; given the current state , we explain the policy moving forward within a fixed distance from the current state . This key distinction allows us to consider richer state spaces ( i.e. , with more features ) because the locality restricts the size of the state space we consider , as will be demonstrated . It is also important to recognize the difference from bottlenecks ( Menache et al. , 2002 ; Simsek & Barto , 2004 ) , which are policy-independent and learned by approximating the state space with randomly sampled trajectories ; rather than help explain a policy , bottlenecks are used to learn efficient policies such as through hierarchical RL ( Botvinick et al. , 2008 ) or options frameworks ( Ramesh et al. , 2019 ) . Strategic states , however , are learned with respect to a policy and identified without assuming access to the underlying topology . An example of this is seen in Figure 1a . Each position is a state and a meta-state is a cluster of possible positions ( states sharing a color/marker ) . Within each meta-state , we identify certain states as strategic states ( shown with larger markers ) : the intermediate states that , when moved towards , allow the agent to move to another meta-state and get closer to the goal state , which is the final state that the agent wants to reach . In Figure 1a , each room is ( roughly ) identified as a meta-state by our method with the corresponding doors being the respective strategic states . 
Topology refers to the graph connecting states to one another ; our method only has access to the knowledge of which states are connected ( through the policy ) , whereas reinforcement learning algorithms might have access to properties of the topology , e.g. , the ability to access similar states using successor representations ( Machado et al. , 2018 ) . In Figure 1 , the topology is a graph connecting the different positions in each room or the doors connecting one room to another . A key conceptual difference between our approach and others is that other methods aggregate insight ( i.e. , reduce dimension ) as a function of actions ( Bastani et al. , 2018 ) or formulas derived over factors of the state space ( Sreedharan et al. , 2020 ) to output a policy summary , whereas we aggregate based on locality of the states determined by the expert policy dynamics and further identify strategic states based on these dynamics . Still other summarization methods simply output simulated trajectories deemed important ( Amir & Amir , 2018 ; Huber et al. , 2021 ) , as judged by whether or not the action taken at some state matters . We use the term policy dynamics to refer to state transitions and high probability paths . We use the term dynamics because this notion contrasts other methods that use actions to explain what to do in a state or to identify important states ; strategic states are selected according to the trajectories that lead to them , and these trajectories are implicitly determined by the policy . The example in Figure 1 also exposes the global view of our explanations when the state space is small because local approximations of the state space are not needed . We show that this perspective leads to more understandable explanations ; aggregating based on actions , while precise , provides too granular a view , where the popular idiom “ can't see the forest for the trees ” comes to mind . 
We conjecture that the improved understanding is due to our grouping of states being more intuitive , with strategic states indicating tractable intermediate goals that are easier to follow . An example of this is again seen in Figures 1b and 1c , where grouping based on actions for interpretability or for efficiency leads to less intuitive results ( note that Figure 1c replicates Figure 4b from ( Abel et al. , 2019 ) ) . A more detailed discussion of this scenario can be found in Section 5 , where yet other domains have large state spaces and require strategic states to explain local scenarios . As such , our main contributions are two-fold : 1 . We offer a novel framework for understanding RL policies , which , to the best of our knowledge , differs greatly from other methods in this space , which create explanations based on similarity of actions rather than policy dynamics . We demonstrate it on three domains of increasing difficulty . 2 . We conduct a task-oriented user study to evaluate the effectiveness of our method . Task-oriented evaluations are one of the most thorough ways of evaluating explanation methods ( Doshi-Velez & Kim , 2017 ; Lipton , 2016 ; Dhurandhar et al. , 2017 ) as they assess simulatability of a complex AI model by a human , yet to our knowledge , have rarely been used in the RL space . 2 NOTATION . We use the following notations . Let S define the full state space and s ∈ S be a state in the full state space . Denote the expert policy by πE ( · , · ) : ( A , S ) → R , where A is the action space . The notation πE ∈ R^{|A|×|S|} is a matrix where each column is a distribution of actions to take given a state ( i.e. , the policy is stochastic ) . Note that we assume a transition function fE ( · , · ) : ( S , S ) → R that defines the likelihood of going from one state to another state in one jump by following the expert policy . Let Sφ = { Φ1 , ... 
, Φk } denote a meta-state space of cardinality k and φ ( · ) : S → Sφ denote a meta-state mapping such that φ ( s ) ∈ Sφ is the meta-state assigned to s ∈ S. Denote the m strategic states of meta-state Φ by GΦ = { gΦ1 , ... , gΦm } , where gΦi ∈ S ∀i ∈ { 1 , ... , m } . 3 METHOD . We now describe our algorithm , the Strategic State eXplanation ( SSX ) method , which involves computing shortest paths between states , identifying meta-states , and selecting their corresponding strategic states . However , we first define certain terms . Recall that all paths discussed below are based on transitions dictated by an expert policy because we want to explain the policy ; the well-known concept of bottlenecks is identified from paths generated as random walks through the state space and is meant to help learn policies rather than explain them . Maximum likelihood ( expert ) paths : One criterion used below is that two states in the same meta-state should not be far away from each other . The distance we consider is the most likely path from state s to state s′ under πE . Consider a fully connected , directed ( in both directions ) graph where the states are vertices and an edge from s to s′ has weight − log fE ( s , s′ ) . By this definition , the shortest path is also the maximum likelihood path from s to s′ . Denote by γ ( s , s′ ) the value of this maximum likelihood path and by Γ ∈ R^{|S|×|S|} a matrix containing the values of these paths for all pairs of states in the state space . Γ , along with a predecessor matrix P that can be used to derive the shortest paths , can be computed using Dijkstra ’ s shortest path algorithm in O ( |S|² log |S| ) because all edge weights are non-negative . Section 3.4 below discusses how our algorithm is applied with a large state space . Note that the computation of Γ means that SSX requires access to a policy simulator for πE , and in practice , might require simulation for estimation when Γ cannot be computed exactly . 
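The predecessor matrix P mentioned above can be used to recover the full state sequence T(s, s′) on each maximum likelihood path; a minimal sketch, assuming `pred` maps each state to its predecessor on the shortest path from a fixed source (the representation and name are illustrative assumptions):

```python
def extract_path(pred, source, target):
    """Recover the maximum likelihood path T(source, target) from a
    predecessor map produced by Dijkstra's algorithm (one row of P).
    Returns [] if target is unreachable from source."""
    path = []
    s = target
    while s is not None:
        path.append(s)
        if s == source:
            break
        s = pred.get(s)  # walk backwards along predecessors
    path.reverse()
    return path if path and path[0] == source else []
```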
This is a common requirement among related explanation methods , e.g. , in order to simulate important trajectories ( Amir & Amir , 2018 ) or samples to train a decision tree ( Bastani et al. , 2018 ) , which are discussed below in Section 4 . Counts of Out-paths : Another criterion used below for assigning states to meta-states is that if state s lies on many of the paths between one meta-state Φi and all other meta-states , then s should be assigned the meta-state Φi , i.e. , φ ( s ) = Φi . We define , for a fixed state s and its assigned meta-state φ ( s ) , the number of shortest paths leaving φ ( s ) that s lies on . Denote T ( s , s′ ) as the set of states that lie on the maximum likelihood path between s and s′ , i.e. , the set of states that define γ ( s , s′ ) . Then 1 [ s ∈ T ( s′ , s′′ ) ] is the indicator of whether state s lies on the maximum likelihood path between s′ and s′′ , and we compute the count of the number of such paths for state s and meta-state φ ( s ) via C ( s , φ ( s ) ) = ∑_{ s′ ≠ s , φ ( s′ ) = φ ( s ) } ∑_{ s′′ : φ ( s′′ ) ≠ φ ( s ) } 1 [ s ∈ T ( s′ , s′′ ) ] . ( 1 ) C ( s , φ ( s ) ) can be computed for all s ∈ S in O ( |S|² ) by iteratively checking if predecessors of shortest paths from each node to every other node lie in the same meta-state as the first node on the path . Note that this predecessor matrix was already computed for the matrix Γ above . One may also consider the likelihood ( rather than the count ) of out-paths by replacing the indicator in Equation ( 1 ) with γ ( s′ , s′′ ) . | The paper proposes an approach to interpret a black-box control policy of a reinforcement learning (RL) problem such that its interpretations can be understood by a human user. 
For a given expert policy (i.e., a fitted deep neural network), the proposed approach uses transition probabilities induced by the expert policy to define regions of the state space that are “similar.” These policy-dependent regions are referred to as meta-states and are computed using an algorithm similar to spectral clustering. The proposed method then computes so-called strategic states for each meta-state, where strategic states of a meta-state are those states that belong to the meta-state and bridge the meta-state to other ones. In other words, the expert policy should pass through the strategic states frequently when it goes from one meta-state to another. The paper performs numerical experiments using standard RL applications, i.e., four rooms, door-key, and mini-Pacman, to show the effectiveness of the proposed approach. | SP:dde554a1db9bb15202e77adaf6edd9868865605b |
Analytically Tractable Bayesian Deep Q-Learning | 1 INTRODUCTION . Reinforcement learning ( RL ) has gained increasing interest since the demonstration that it was able to reach human performance on video game benchmarks using deep Q-learning ( DQN ) ( Mnih et al. , 2015 ; Van Hasselt et al. , 2016 ) . Deep RL methods typically require an explicit definition of an exploration-exploitation function in order to compromise between using the current policy and exploring the potential of new actions . Such an issue can be mitigated by opting for a Bayesian approach where we estimate the uncertainty of the neural network ’ s parameters using Bayes ’ theorem , and select the optimal action given a state according to the parameter uncertainty using the Thompson sampling technique ( Strens , 2000 ) . Bayesian deep learning ( BDL ) methods based on variational inference ( Kingma et al. , 2015 ; Hernández-Lobato & Adams , 2015 ; Blundell et al. , 2015 ; Louizos & Welling , 2016 ; Osawa et al. , 2019 ; Wu et al. , 2019 ) , Monte-Carlo dropout ( Gal & Ghahramani , 2016 ) , or Hamiltonian Monte-Carlo sampling ( Neal , 1995 ) have been shown to perform well on regression and classification benchmarks , despite being generally computationally more demanding than their deterministic counterparts . Note that none of these approaches allows performing analytical inference for the weights and biases defining the neural network . Goulet et al . ( 2021 ) recently proposed the tractable approximate Gaussian inference ( TAGI ) method , which allows estimating the posterior of the neural network ’ s parameters using a closed-form analytical method . More specifically , TAGI leverages Gaussian conditional equations to analytically infer this posterior without the need for numerical approximations ( i.e. , sampling techniques or Monte Carlo integration ) or the optimization on which existing BDL methods rely . 
In addition , TAGI is able to maintain the same computational complexity as deterministic neural networks based on gradient backpropagation . For convolutional architectures applied on classification benchmarks , this approach was shown to exceed the performance of other Bayesian and deterministic approaches based on gradient backpropagation , and to do so while requiring a smaller number of training epochs ( Nguyen & Goulet , 2021 ) . In this paper , we propose a new perspective on probabilistic inference for Bayesian deep reinforcement learning , which , up to now , has been relying on gradient-based methods . More specifically , we present how we can adapt the temporal difference Q-learning framework ( Sutton , 1988 ; Watkins & Dayan , 1992 ) to make it compatible with TAGI . Section 2 first reviews the theory behind TAGI and the expected value formulation through Bellman ’ s equation . Then , we present how the action-value function can be learned using TAGI . Section 3 presents the related work associated with Bayesian reinforcement learning , and Section 4 compares the performance of the TAGI-DQN with the deterministic DQN on on- and off-policy RL approaches . 2 TAGI-DQN FORMULATION . This section presents how to adapt the DQN frameworks in order to make them compatible with analytical inference . First , Section 2.1 reviews the fundamental theory behind TAGI , and Section 2.2 reviews the concept of long-term expected value through Bellman ’ s equation ( Sutton & Barto , 2018 ) . Then , Section 2.3 presents how to make the Q-learning formulation ( Watkins & Dayan , 1992 ) compatible with TAGI . 2.1 TRACTABLE APPROXIMATE GAUSSIAN INFERENCE . 
Bayesian deep learning aims to estimate the posterior of the neural network ’ s parameters θ conditional on a given training dataset D = { ( xi , yi ) , ∀i ∈ 1 : D } using Bayes ’ theorem so that f ( θ|D ) = f ( θ ) f ( D|θ ) / f ( D ) , ( 1 ) where f ( θ ) is the prior Probability Density Function ( PDF ) of the parameters , f ( D|θ ) is the likelihood , and f ( D ) is the marginal likelihood or evidence . Until recently , the posterior f ( θ|D ) in Equation 1 has been considered to be intractable ( Goodfellow et al. , 2016 ; Goan & Fookes , 2020 ) . TAGI has addressed this intractability issue by leveraging the Gaussian conditional equations ; for a random vector of parameters θ and observations Y for which the joint PDF satisfies f ( θ , y ) = N ( [ θ ; y ] ; [ µθ ; µY ] , [ Σθ , ΣᵀYθ ; ΣYθ , ΣY ] ) , ( 2 ) the Gaussian conditional equation describing the PDF of θ conditional on y is defined following f ( θ|y ) = N ( θ ; µθ|y , Σθ|y ) , with µθ|y = µθ + ΣᵀYθ Σ⁻¹Y ( y − µY ) and Σθ|y = Σθ − ΣᵀYθ Σ⁻¹Y ΣYθ . ( 3 ) In its simple form , the intractable computational requirements for directly applying Equation 3 limit it to trivial neural networks . In order to overcome this challenge , TAGI leverages the inherent conditional independence structure of hidden layers , Z^( j−1 ) ⊥ Z^( j+1 ) | z^( j ) , under the assumption that the parameters θ are independent between layers . This conditional independence structure allows breaking down the computations for Equation 3 into a layer-wise two-step procedure : forward uncertainty propagation and backward update . The first forward uncertainty propagation step is intended to build the joint prior between the hidden states of a layer j , Z^( j ) , and the parameters θ^( j ) directly connecting into it . This operation is made by propagating the uncertainty from the parameters and the input layer through the neural network . 
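The conditioning step of Equation 3 is a standard Gaussian update; a minimal sketch for the univariate case (scalar parameter, scalar observation), where the function name and argument layout are illustrative assumptions:

```python
def gaussian_condition_1d(mu_t, mu_y, var_t, var_y, cov_yt, y):
    """Univariate instance of the Gaussian conditional equations (Eq. 3):
    posterior moments of theta given an observation y, for the joint
    Gaussian of Eq. 2 with mean (mu_t, mu_y), variances (var_t, var_y),
    and cross-covariance cov_yt = Cov(Y, theta)."""
    gain = cov_yt / var_y                 # Sigma_Ytheta^T Sigma_Y^-1
    mu_post = mu_t + gain * (y - mu_y)    # posterior mean
    var_post = var_t - gain * cov_yt      # posterior variance
    return mu_post, var_post
```

The multivariate version replaces the division by a matrix inverse of Σ_Y, exactly as written in Equation 3.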
In order to maintain the analytical tractability of the forward step , we must address two challenges . The first challenge is related to the product of the weights W and activation units A involved in the calculation of the hidden states Z , so that Z_i^( j+1 ) = ∑_{k=1}^{A} W_{i,k}^( j ) A_k^( j ) + B_i^( j ) , ( 4 ) where B is the bias parameter , ( j ) refers to the jth layer , i refers to a node in the current layer , k refers to a node from the previous layer , and A is the number of hidden units from the previous layer . The second challenge is to propagate uncertainty through the non-linearity when activating hidden states with an activation function φ ( · ) , A_i^( j+1 ) = φ ( Z_i^( j+1 ) ) . ( 5 ) In order to tackle these issues , TAGI makes the following assumptions : A1 . The priors for the parameters and variables in the input layer are Gaussian . A2 . The product of a weight and an activation unit in Equation 4 is approximated by a Gaussian random variable , WA ≈ N ( µWA , σ²WA ) , whose moments are computed analytically using the Gaussian multiplicative approximation ( GMA ) . A3 . The activation function φ ( · ) in Equation 5 is locally linearized using a first-order Taylor expansion evaluated at the expected value of the hidden unit being activated . Note that the linearization procedure in assumption A3 may seem to be a crude approximation , yet it has been shown to match or exceed the state-of-the-art performance on fully-connected neural networks ( FNN ) ( Goulet et al. , 2021 ) , as well as convolutional neural networks ( CNN ) and generative adversarial networks ( Nguyen & Goulet , 2021 ) . In order to maintain a linear computational complexity with respect to the number of weights for the forward steps , TAGI relies on the following assumptions : A4 . The covariances for all parameters in the network and for all the hidden units within the same layer are diagonal . A5 . 
The hidden states on a given layer are independent from the parameters connecting into the subsequent layer , i.e. , Z^( j ) ⊥ θ^( j ) . A6 . The hidden states at layer j + 1 depend only on the hidden states and parameters directly connecting into it . The empirical validity of those assumptions as well as the GMA formulations are provided by Goulet et al . ( 2021 ) . The second backward update step consists in performing layer-wise recursive Bayesian inference going from hidden layer to hidden layer , and from hidden layer to the parameters connecting into it ( see Figure 1 ) . Assumptions A1-A6 allow performing analytical inference while still maintaining a linear computational complexity with respect to the number of weight parameters in the network , where only the posterior of the hidden states for the output layer is computed using Equation 3 . For the remaining layers , those quantities are obtained using the Rauch-Tung-Striebel recursive procedure that was developed in the context of state-space models ( Rauch et al. , 1965 ) . For simplicity , we define the short-hand notation { θ+ , Z+ } ≡ { θ^( j+1 ) , Z^( j+1 ) } and { θ , Z } ≡ { θ^( j ) , Z^( j ) } , so that the posteriors for the parameters and hidden states are computed following f ( Z|y ) = N ( z ; µZ|y , ΣZ|y ) , with µZ|y = µZ + JZ ( µZ+|y − µZ+ ) , ΣZ|y = ΣZ + JZ ( ΣZ+|y − ΣZ+ ) JᵀZ , and JZ = ΣZZ+ Σ⁻¹Z+ , ( 6 ) f ( θ|y ) = N ( θ ; µθ|y , Σθ|y ) , with µθ|y = µθ + Jθ ( µZ+|y − µZ+ ) , Σθ|y = Σθ + Jθ ( ΣZ+|y − ΣZ+ ) Jᵀθ , and Jθ = ΣθZ+ Σ⁻¹Z+ . ( 7 ) Figure 1 presents a directed acyclic graph ( DAG ) describing the interconnectivity in such a neural network . TAGI allows inferring the diagonal posterior knowledge for weights and bias parameters , either using one observation at a time , or using mini-batches of data . 
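The smoothing-style backward update of Equation 6 can be sketched for a single (scalar) hidden unit, which matches TAGI's diagonal-covariance setting of assumption A4; the function name and argument layout are illustrative assumptions:

```python
def tagi_backward_1d(mu_z, var_z, cov_z_zp, mu_zp, var_zp,
                     mu_zp_post, var_zp_post):
    """Scalar instance of Equation 6: propagate the posterior of the next
    layer's hidden state Z+ (mu_zp_post, var_zp_post) back to the current
    layer's Z, given prior moments from the forward pass.

    cov_z_zp is Cov(Z, Z+), so J = cov_z_zp / var_zp plays the role of
    J_Z = Sigma_{ZZ+} Sigma_{Z+}^-1 in the text.
    """
    J = cov_z_zp / var_zp
    mu_post = mu_z + J * (mu_zp_post - mu_zp)
    var_post = var_z + J * J * (var_zp_post - var_zp)
    return mu_post, var_post
```

Equation 7 is the same update applied to the parameters θ, with Cov(θ, Z+) in place of Cov(Z, Z+).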
As we will show in the next sections , this online learning capacity is best suited for RL problems where we experience episodes sequentially and where we need to define a tradeoff between exploration and exploitation , as a function of our knowledge of the expected value associated with being in a state and taking an action . 2.2 EXPECTED VALUE AND BELLMAN ’ S EQUATION . We define r ( s , a , s′ ) as the reward for being in a state s ∈ R^S , taking an action a ∈ A = { a1 , a2 , · · · , aA } , and ending in a state s′ ∈ R^S . For simplicity , we use the short-form notation for the reward r ( s , a , s′ ) ≡ r ( s ) in order to define the value as the infinite sum of discounted rewards v ( st ) = ∑_{k=0}^{∞} γ^k r ( st+k ) . ( 8 ) As we do not know what the future states st+k for k > 0 will be , we need to consider them as random variables St+k , so that the value V ( st ) becomes a random variable as well , V ( st ) = r ( st ) + ∑_{k=1}^{∞} γ^k r ( St+k ) . ( 9 ) Rational decisions regarding which action to take among the set A are based on the maximization of the expected value as defined by the action-value function q ( st , at ) = µV ≡ E [ V ( st , at , π ) ] = r ( st ) + E [ ∑_{k=1}^{∞} γ^k r ( St+k ) ] , ( 10 ) where it is assumed that at each time t , the agent takes the action defined in the policy π . In the case of episode-based learning where the agent interacts with the environment , we assume we know the tuple of states st and st+1 , so that we can redefine the value as V ( st , at ) = r ( st ) + γ ( r ( st+1 ) + ∑_{k=1}^{∞} γ^k r ( St+1+k ) ) = r ( st ) + γ V ( st+1 , at+1 ) . ( 11 ) Assuming that the value V ∼ N ( v ; µV , σ²V ) in Equations 9 and 11 is described by a Gaussian random variable , we can reparameterize these equations as the sum of the expected value q ( s , a ) and a zero-mean Gaussian random variable E ∼ N ( ε ; 0 , 1 ) , so that V ( s , a ) = q ( s , a ) + σV E , ( 12 ) where the variance σ²V and E are assumed here to be independent of s and a . 
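The discounted sum of Equation 8 and its recursive form in Equation 11 can be sketched for a finite reward sequence (the truncation to a finite horizon is an assumption for illustration):

```python
def discounted_value(rewards, gamma):
    """Finite-horizon truncation of Equation 8:
    v(s_t) = sum_k gamma^k r(s_{t+k}) over the observed rewards."""
    v = 0.0
    for k, r in enumerate(rewards):
        v += (gamma ** k) * r
    return v
```

For any reward sequence this satisfies the recursion of Equation 11: the value equals the first reward plus γ times the value of the remaining sequence.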
Although in a more general framework this assumption could be relaxed , such a heteroscedastic variance term is outside the scope of this paper . Using this reparameterization , we can write Equation 11 as the discounted difference between the expected values of two subsequent states q ( st , at ) = r ( st ) + γ q ( st+1 , at+1 ) − σVt Et + γ σVt+1 Et+1 = r ( st ) + γ q ( st+1 , at+1 ) + σV E . ( 13 ) Note that in Equation 13 , σVt and γ σVt+1 can be combined into a single standard deviation parameter σV under the assumption that Ei ⊥ Ej , ∀i ≠ j . In the case where , at a time t , we want to update the Q-values encoded in the neural net only after observing n-step returns ( Mnih et al. , 2016 ) , we can reformulate the observation equation so that q ( st , at ) = ∑_{i=0}^{n−t−1} γ^i r ( st+i ) + γ^{n−t} q ( sn , an ) + σV Et , ∀t = { 1 , 2 , · · · , n − 1 } . ( 14 ) Note that in the application of Equation 14 , we employ the simplifying assumption that Et ⊥ Et+i , ∀i ≠ 0 , as Equation 13 already makes simplifying assumptions for the independence of σ²V and E . Note that in a more general framework , this assumption could be relaxed . An example of n-step returns is presented in the algorithm displayed in §1 from Appendix A . The following subsections will present , for the case of categorical actions , how to model the deterministic action-value function q ( s , a ) using a neural network . | This paper proposed a new Bayesian Deep RL algorithm that learns deep Q networks (DQN) without using gradient descent. The updates of the DQN's parameters follow from the tractable approximate Gaussian inference (TAGI) algorithm. It then studies the empirical performance of the proposed algorithm against a DQN learned by gradient-based optimization in several Atari benchmarks. | SP:879e4af8012f31b4bc12387f8573889b7d341bd6 |
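The n-step observation targets of Equation 14 can be sketched as follows; the function name and the convention that `rewards[t]` holds r(s_t) for t = 0, ..., n−1 are illustrative assumptions:

```python
def n_step_targets(rewards, q_n, gamma):
    """Regression targets of Equation 14 for t = 0..n-1: each Q-value
    target is the discounted rewards up to step n plus the bootstrapped
    tail gamma^(n-t) * q(s_n, a_n), where q_n is the expected value at
    the bootstrap state (the noise term sigma_V * E_t is handled by the
    inference, not by the target itself)."""
    n = len(rewards)
    targets = []
    for t in range(n):
        g = sum(gamma ** i * rewards[t + i] for i in range(n - t))
        targets.append(g + gamma ** (n - t) * q_n)
    return targets
```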
Analytically Tractable Bayesian Deep Q-Learning | 1 INTRODUCTION . Reinforcement learning ( RL ) has gained increasing interest since the demonstration that it was able to reach human performance on video game benchmarks using deep Q-learning ( DQN ) ( Mnih et al. , 2015 ; Van Hasselt et al. , 2016 ) . Deep RL methods typically require an explicit definition of an exploration-exploitation function in order to compromise between using the current policy and exploring the potential of new actions . Such an issue can be mitigated by opting for a Bayesian approach where we estimate the uncertainty of the neural network ’ s parameters using Bayes ’ theorem , and select the optimal action given a state according to the parameter uncertainty using the Thompson sampling technique ( Strens , 2000 ) . Bayesian deep learning ( BDL ) methods based on variational inference ( Kingma et al. , 2015 ; Hernández-Lobato & Adams , 2015 ; Blundell et al. , 2015 ; Louizos & Welling , 2016 ; Osawa et al. , 2019 ; Wu et al. , 2019 ) , Monte-Carlo dropout ( Gal & Ghahramani , 2016 ) , or Hamiltonian Monte-Carlo sampling ( Neal , 1995 ) have been shown to perform well on regression and classification benchmarks , despite being generally computationally more demanding than their deterministic counterparts . Note that none of these approaches allows performing analytical inference for the weights and biases defining the neural network . Goulet et al . ( 2021 ) recently proposed the tractable approximate Gaussian inference ( TAGI ) method , which allows estimating the posterior of the neural network ’ s parameters using a closed-form analytical method . More specifically , TAGI leverages Gaussian conditional equations to analytically infer this posterior without the need for numerical approximations ( i.e. , sampling techniques or Monte Carlo integration ) or the optimization on which existing BDL methods rely . 
In addition, TAGI is able to maintain the same computational complexity as deterministic neural networks based on gradient backpropagation. For convolutional architectures applied to classification benchmarks, this approach was shown to exceed the performance of other Bayesian and deterministic approaches based on gradient backpropagation, and to do so while requiring a smaller number of training epochs (Nguyen & Goulet, 2021). In this paper, we propose a new perspective on probabilistic inference for Bayesian deep reinforcement learning, which, up to now, has relied on gradient-based methods. More specifically, we present how we can adapt the temporal-difference Q-learning framework (Sutton, 1988; Watkins & Dayan, 1992) to make it compatible with TAGI. Section 2 first reviews the theory behind TAGI and the expected-value formulation through the Bellman equation. Then, we present how the action-value function can be learned using TAGI. Section 3 presents the related work associated with Bayesian reinforcement learning, and Section 4 compares the performance of TAGI-DQN with the deterministic DQN on on- and off-policy RL approaches. 2 TAGI-DQN FORMULATION . This section presents how to adapt the DQN framework in order to make it compatible with analytical inference. First, Section 2.1 reviews the fundamental theory behind TAGI, and Section 2.2 reviews the concept of long-term expected value through the Bellman equation (Sutton & Barto, 2018). Then, Section 2.3 presents how to make the Q-learning formulation (Watkins & Dayan, 1992) compatible with TAGI. 2.1 TRACTABLE APPROXIMATE GAUSSIAN INFERENCE .
Bayesian deep learning aims to estimate the posterior of the neural network's parameters $\theta$ conditional on a given training dataset $D = \{(x_i, y_i), \forall i \in 1{:}D\}$ using Bayes' theorem, so that $f(\theta|D) = \frac{f(\theta)f(D|\theta)}{f(D)}, \; (1)$ where $f(\theta)$ is the prior probability density function (PDF) of the parameters, $f(D|\theta)$ is the likelihood, and $f(D)$ is the marginal likelihood or evidence. Until recently, the posterior $f(\theta|D)$ in Equation 1 has been considered to be intractable (Goodfellow et al., 2016; Goan & Fookes, 2020). TAGI has addressed this intractability issue by leveraging the Gaussian conditional equations; for a random vector of parameters $\theta$ and observations $Y$ for which the joint PDF satisfies $f(\theta, y) = \mathcal{N}\left(\begin{bmatrix}\theta\\ y\end{bmatrix}; \begin{bmatrix}\mu_\theta\\ \mu_Y\end{bmatrix}, \begin{bmatrix}\Sigma_\theta & \Sigma_{Y\theta}^\intercal\\ \Sigma_{Y\theta} & \Sigma_Y\end{bmatrix}\right), \; (2)$ the Gaussian conditional equation describing the PDF of $\theta$ conditional on $y$ is defined as $f(\theta|y) = \mathcal{N}(\theta; \mu_{\theta|y}, \Sigma_{\theta|y}), \; \mu_{\theta|y} = \mu_\theta + \Sigma_{Y\theta}^\intercal\Sigma_Y^{-1}(y - \mu_Y), \; \Sigma_{\theta|y} = \Sigma_\theta - \Sigma_{Y\theta}^\intercal\Sigma_Y^{-1}\Sigma_{Y\theta}. \; (3)$ In its simple form, the intractable computational requirements of directly applying Equation 3 limit it to trivial neural networks. In order to overcome this challenge, TAGI leverages the inherent conditional independence structure of hidden layers, $Z^{(j-1)} \perp Z^{(j+1)} \,|\, z^{(j)}$, under the assumption that the parameters $\theta$ are independent between layers. This conditional independence structure allows breaking down the computations of Equation 3 into a layer-wise two-step procedure: forward uncertainty propagation and backward update. The first step, forward uncertainty propagation, is intended to build the joint prior between the hidden states of a layer $j$, $Z^{(j)}$, and the parameters $\theta^{(j)}$ directly connecting into it. This operation is made by propagating the uncertainty from the parameters and the input layer through the neural network.
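Equation 3 is the standard Gaussian conditioning step; a minimal NumPy sketch is given below, with a one-dimensional toy model (the dimensions and values are illustrative, and this is not TAGI's layer-wise implementation, which never forms the full joint covariance):

```python
import numpy as np

def gaussian_condition(mu_theta, Sigma_theta, mu_y, Sigma_y, Sigma_ytheta, y):
    """Posterior mean and covariance of theta given y, as in Equation 3.

    Sigma_ytheta holds cov(Y, theta), with shape (dim_y, dim_theta)."""
    gain = Sigma_ytheta.T @ np.linalg.inv(Sigma_y)   # Sigma_{Y theta}^T Sigma_Y^{-1}
    mu_post = mu_theta + gain @ (y - mu_y)
    Sigma_post = Sigma_theta - gain @ Sigma_ytheta
    return mu_post, Sigma_post

# Toy model: theta ~ N(0, 1) observed through unit observation noise, so
# Sigma_Y = 1 + 1 = 2 and cov(Y, theta) = 1; the observation is y = 2.
mu_post, Sigma_post = gaussian_condition(
    np.array([0.0]), np.array([[1.0]]),
    np.array([0.0]), np.array([[2.0]]),
    np.array([[1.0]]), np.array([2.0]))
```

In this toy case the posterior splits the difference between prior and observation (mean 1.0, variance 0.5), exactly the behaviour the layer-wise TAGI updates reproduce at scale.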
In order to maintain the analytical tractability of the forward step, we must address two challenges. The first challenge is related to the product of the weights $W$ and activation units $A$ involved in the calculation of the hidden states $Z$, so that $Z_i^{(j+1)} = \sum_{k=1}^{A} W_{i,k}^{(j)} A_k^{(j)} + B_i^{(j)}, \; (4)$ where $B$ is the bias parameter, $(j)$ refers to the $j$th layer, $i$ refers to a node in the current layer, $k$ refers to a node from the previous layer, and $A$ is the number of hidden units in the previous layer. The second challenge is to propagate uncertainty through the non-linearity when activating hidden states with an activation function $\phi(\cdot)$, $A_i^{(j+1)} = \phi(Z_i^{(j+1)}). \; (5)$ In order to tackle these issues, TAGI makes the following assumptions: A1. The priors for the parameters and the variables in the input layer are Gaussian. A2. The product of a weight and an activation unit in Equation 4 is approximated by a Gaussian random variable, $WA \approx \mathcal{N}(\mu_{WA}, \sigma_{WA}^2)$, whose moments are computed analytically using the Gaussian multiplicative approximation (GMA). A3. The activation function $\phi(\cdot)$ in Equation 5 is locally linearized using a first-order Taylor expansion evaluated at the expected value of the hidden unit being activated. Note that the linearization procedure in assumption A3 may seem to be a crude approximation, yet it has been shown to match or exceed state-of-the-art performance on fully-connected neural networks (FNN) (Goulet et al., 2021), as well as convolutional neural networks (CNN) and generative adversarial networks (Nguyen & Goulet, 2021). In order to maintain a linear computational complexity with respect to the number of weights in the forward step, TAGI relies on the following assumptions: A4. The covariances for all parameters in the network and for all hidden units within the same layer are diagonal. A5.
The hidden states of a given layer are independent of the parameters connecting into the subsequent layer, i.e., $Z^{(j)} \perp \theta^{(j)}$. A6. The hidden states at layer $j+1$ depend only on the hidden states and parameters directly connecting into it. The empirical validity of these assumptions, as well as the GMA formulations, are provided by Goulet et al. (2021). The second step, the backward update, consists in performing layer-wise recursive Bayesian inference going from hidden layer to hidden layer, and from hidden layer to the parameters connecting into it (see Figure 1). Assumptions A1-A6 allow performing analytical inference while still maintaining a linear computational complexity with respect to the number of weight parameters in the network, where only the posterior of the hidden states for the output layer is computed using Equation 3. For the remaining layers, those quantities are obtained using the Rauch-Tung-Striebel recursive procedure that was developed in the context of state-space models (Rauch et al., 1965). For simplicity, we define the short-hand notation $\{\theta^+, Z^+\} \equiv \{\theta^{(j+1)}, Z^{(j+1)}\}$ and $\{\theta, Z\} \equiv \{\theta^{(j)}, Z^{(j)}\}$, so that the posteriors for the parameters and hidden states are computed following $f(z|y) = \mathcal{N}(z; \mu_{Z|y}, \Sigma_{Z|y}), \; \mu_{Z|y} = \mu_Z + J_Z(\mu_{Z^+|y} - \mu_{Z^+}), \; \Sigma_{Z|y} = \Sigma_Z + J_Z(\Sigma_{Z^+|y} - \Sigma_{Z^+})J_Z^\intercal, \; J_Z = \Sigma_{ZZ^+}\Sigma_{Z^+}^{-1}, \; (6)$ and $f(\theta|y) = \mathcal{N}(\theta; \mu_{\theta|y}, \Sigma_{\theta|y}), \; \mu_{\theta|y} = \mu_\theta + J_\theta(\mu_{Z^+|y} - \mu_{Z^+}), \; \Sigma_{\theta|y} = \Sigma_\theta + J_\theta(\Sigma_{Z^+|y} - \Sigma_{Z^+})J_\theta^\intercal, \; J_\theta = \Sigma_{\theta Z^+}\Sigma_{Z^+}^{-1}. \; (7)$ Figure 1 presents a directed acyclic graph (DAG) describing the interconnectivity in such a neural network. TAGI allows inferring the diagonal posterior knowledge for the weight and bias parameters, either using one observation at a time, or using mini-batches of data.
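Assumption A2 above replaces each product $WA$ by a Gaussian matched to its first two moments. For independent Gaussian factors these moments are exact and can be sketched as below (a toy version: TAGI's full GMA also tracks the covariances that the diagonal assumptions allow, which this illustration ignores):

```python
def gma_product_moments(mu_w, var_w, mu_a, var_a):
    """Exact mean and variance of the product W*A for independent Gaussians
    W ~ N(mu_w, var_w) and A ~ N(mu_a, var_a)."""
    mean = mu_w * mu_a
    var = var_w * var_a + var_w * mu_a ** 2 + mu_w ** 2 * var_a
    return mean, var

# E[WA] = 2 * 3 = 6; Var(WA) = 0.25*1 + 0.25*9 + 4*1 = 6.5
mean, var = gma_product_moments(mu_w=2.0, var_w=0.25, mu_a=3.0, var_a=1.0)
```

Matching moments this way keeps every quantity in the forward pass Gaussian, which is what makes the layer-wise conditioning of Equations 6-7 analytically tractable.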
As we will show in the next sections, this online learning capacity is best suited for RL problems, where we experience episodes sequentially and where we need to define a tradeoff between exploration and exploitation as a function of our knowledge of the expected value associated with being in a state and taking an action. 2.2 EXPECTED VALUE AND BELLMAN'S EQUATION . We define $r(s, a, s')$ as the reward for being in a state $s \in \mathbb{R}^S$, taking an action $a \in A = \{a_1, a_2, \cdots, a_A\}$, and ending in a state $s' \in \mathbb{R}^S$. For simplicity, we use the short-form notation for the reward, $r(s, a, s') \equiv r(s)$, in order to define the value as the infinite sum of discounted rewards, $v(s) = \sum_{k=0}^{\infty}\gamma^k r(s_{t+k}). \; (8)$ As we do not know what the future states $s_{t+k}$ for $k > 0$ will be, we need to consider them as random variables $S_{t+k}$, so that the value $V(s_t)$ becomes a random variable as well, $V(s_t) = r(s_t) + \sum_{k=1}^{\infty}\gamma^k r(S_{t+k}). \; (9)$ Rational decisions regarding which action to take among the set $A$ are based on the maximization of the expected value, as defined by the action-value function $q(s_t, a_t) = \mu_V \equiv \mathbb{E}[V(s_t, a_t, \pi)] = r(s_t) + \mathbb{E}\left[\sum_{k=1}^{\infty}\gamma^k r(S_{t+k})\right], \; (10)$ where it is assumed that at each time $t$, the agent takes the action defined by the policy $\pi$. In the case of episode-based learning, where the agent interacts with the environment, we assume we know the tuple of states $s_t$ and $s_{t+1}$, so that we can redefine the value as $V(s_t, a_t) = r(s_t) + \gamma\left(r(s_{t+1}) + \sum_{k=1}^{\infty}\gamma^k r(S_{t+1+k})\right) = r(s_t) + \gamma V(s_{t+1}, a_{t+1}). \; (11)$ Assuming that the value $V \sim \mathcal{N}(v; \mu_V, \sigma_V^2)$ in Equations 9 and 11 is described by a Gaussian random variable, we can reparameterize these equations as the sum of the expected value $q(s, a)$ and a zero-mean Gaussian random variable $E \sim \mathcal{N}(\epsilon; 0, 1)$, so that $V(s, a) = q(s, a) + \sigma_V E, \; (12)$ where the variance $\sigma_V^2$ and $E$ are assumed here to be independent of $s$ and $a$.
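The discounted sum of Equation 8 and the recursive decomposition of Equation 11 can be sketched numerically as follows (a finite-horizon toy version with illustrative rewards; the infinite sum is truncated at the end of the episode):

```python
def discounted_value(rewards, gamma):
    """Finite-horizon version of Equation 8: sum_k gamma^k * r_{t+k}."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

rewards = [1.0, 0.0, 2.0]
v = discounted_value(rewards, gamma=0.9)
# Equation 11 consistency: v(s_t) = r(s_t) + gamma * v(s_{t+1})
tail = discounted_value(rewards[1:], gamma=0.9)
```

The second computation verifies the one-step Bellman consistency that the reparameterized observation equations of the next paragraph are built on.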
Although in a more general framework this assumption could be relaxed, such a heteroscedastic variance term is outside the scope of this paper. Using this reparameterization, we can write Equation 11 as the discounted difference between the expected values of two subsequent states, $q(s_t, a_t) = r(s_t) + \gamma q(s_{t+1}, a_{t+1}) - \sigma_{V_t}E_t + \gamma\sigma_{V_{t+1}}E_{t+1} = r(s_t) + \gamma q(s_{t+1}, a_{t+1}) + \sigma_V E. \; (13)$ Note that in Equation 13, $\sigma_{V_t}$ and $\gamma\sigma_{V_{t+1}}$ can be combined into a single standard-deviation parameter $\sigma_V$ under the assumption that $E_i \perp E_j, \forall i \neq j$. In the case where, at a time $t$, we want to update the Q-values encoded in the neural net only after observing n-step returns (Mnih et al., 2016), we can reformulate the observation equation so that $q(s_t, a_t) = \sum_{i=0}^{n-t-1}\gamma^i r(s_{t+i}) + \gamma^{n-t} q(s_n, a_n) + \sigma_V E_t, \; \forall t \in \{1, 2, \cdots, n-1\}. \; (14)$ Note that in the application of Equation 14, we employ the simplifying assumption that $E_t \perp E_{t+i}, \forall i \neq 0$, as Equation 13 already makes simplifying assumptions about the independence of $\sigma_V^2$ and $E$. Note that in a more general framework, this assumption could be relaxed. An example of n-step returns is presented in the algorithm displayed in §1 of Appendix A. The following subsections present, for the case of categorical actions, how to model the deterministic action-value function $q(s, a)$ using a neural network. | The paper describes how to adapt the Q-learning algorithm so that it is compatible with Bayesian deep learning, as opposed to the standard (semi-)gradient-based deep learning implementation. It explains how to propagate uncertainties about network parameters through the neural network computations and ultimately to the q-values, which enables action selection using Thompson sampling. Finally, it presents empirical results on CartPole, Lunar Lander, and five Atari games. | SP:879e4af8012f31b4bc12387f8573889b7d341bd6 |
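The n-step observation target of Equation 14 above can be sketched as follows; this is an illustrative helper, under the assumption that `rewards` holds $r(s_t), \ldots, r(s_{n-1})$ so that $n - t$ equals its length (the numeric values are invented for the example):

```python
def n_step_target(rewards, q_bootstrap, gamma):
    """Observation target of Equation 14, with rewards = [r(s_t), ..., r(s_{n-1})]
    and q_bootstrap = q(s_n, a_n), so that n - t = len(rewards)."""
    partial_return = sum(gamma ** i * r for i, r in enumerate(rewards))
    return partial_return + gamma ** len(rewards) * q_bootstrap

# Three rewards of 1 with gamma = 0.5, bootstrapped with q(s_n, a_n) = 10:
# 1 + 0.5 + 0.25 + 0.125 * 10 = 3.0
target = n_step_target([1.0, 1.0, 1.0], q_bootstrap=10.0, gamma=0.5)
```

In the TAGI setting this scalar plays the role of the noisy observation $y$ whose uncertainty $\sigma_V E_t$ is carried explicitly, rather than a regression target for a gradient step.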
Revisiting the Lottery Ticket Hypothesis: A Ramanujan Graph Perspective | 1 INTRODUCTION AND RELATED WORK . Neural networks (NN) and their recent advancements have made a significant contribution to solving various machine learning applications. The power of an over-parameterized NN lies in its capability to learn simple patterns and memorize the noise in the data (Neyshabur et al., 2018). However, the training of such networks requires enormous computational resources, and deployment onto low-resource environments such as mobile devices or embedded systems often becomes difficult. The recent trend in research to reduce the training time of deep neural networks has shifted towards pruning, following the introduction of a remarkable contribution named the Lottery Ticket Hypothesis (LTH), which hypothesizes the existence of a highly sparse subnetwork and weight initialization that reduce the training resources as well (Frankle & Carbin, 2019). It uses a simple iterative, magnitude-based pruning algorithm, and empirically shows that even after removing approximately 90% of the weights, the subnetwork can preserve the original generalization error. In subsequent studies, the focus has been on finding this lottery ticket for more competitive tasks by pruning with weight rewinding (Frankle et al., 2019a), fine-tuning the learning rates (Renda et al., 2020), and more efficient training (You et al., 2019; Brix et al., 2020; Girish et al., 2021). Neural network pruning involves sparsification of the network (LeCun et al., 1990; Blalock et al., 2020). It identifies the weight parameters whose removal incurs minimal effect on the generalization error. There exist different categories of pruning based on (i) how the pruning is performed, for instance based on the weight magnitude (Han et al., 2015; Zhu & Gupta, 2017), the gradient in the backpropagation, or the Hessian of the weights (Hassibi et al., 1993; Dong et al.
, 2017; Lee et al., 2018), etc.; (ii) whether the pruning is global or local; (iii) how often pruning should be applied, e.g., one-shot (Lee et al., 2018; Wang et al., 2020) or iterative (Tanaka et al., 2020). One of the primary goals in the literature has been to reduce the computational footprint at the time of prediction, i.e., post-training. In recent LTH studies, the victim weights are determined by their value at the initialization, the gradient of the error, and the network topology (Lee et al., 2019; Tanaka et al., 2020). To understand weight initialization, Malach et al. (2020) show that pruning makes a stronger hypothesis with a bounded weight distribution. The sparsity of the network is reduced from a polynomial to a logarithmic factor of the number of training variables (Orseau et al., 2020). Mocanu et al. (2018) suggest considering the topology of the network from a network-science point of view. Their pruning algorithm starts from a random Erdős-Rényi graph and returns a scale-free network with a high sparsity factor based on the number of neurons in each layer. The method is further evolved for convolution layers, considering both the magnitude and gradient of the weights (Evci et al., 2020a). Various analyses explaining the LTH have been attempted in the past. Evci et al. (2020b) explain empirically why the LTH works through gradient flow at different stages of the training. Despite previous attempts to explain why the Lottery Ticket Hypothesis works, the underlying phenomenon associated with the hypothesis still remains ill-understood. All of these studies related to the LTH identify that a sparse sub-network can be trained instead of the complete network and that the network needs to be connected from the input to the output layers. However, none of them try to explain the LTH and the properties of the pruned network through the lens of spectral graph theory.
Network connectivity can be described from the graph-expansion point of view, where any subset of vertices of size at most half the number of vertices in the graph is adjacent to at least a fixed fraction of the number of vertices in that set; for details, see (Lubotzky, 2010). Graphs satisfying this property are known as expander graphs. A Ramanujan graph is a special graph in a bounded-degree expander family for which the eigenvalue bound is maximal (Nilli, 1991). This leads to the maximum possible sparsity of a network while preserving connectivity. In this paper, we initiate a study to observe the characteristics of a pruned sub-network through the spectral properties of its adjacency matrix, which has not been reported previously. We represent a feed-forward neural network as a series of connected bipartite graphs. Both weighted and unweighted bi-adjacency matrices are considered. The Ramanujan graph properties of each of the bipartite layers are studied. We use the results of Hoory (2005) on the bound of the spectral gap for the weight matrix of a pruned network. It is empirically observed that the existence of winning tickets in a pruned network depends on the Ramanujan graph properties of the bipartite layers. As network sparsity increases with more aggressive pruning, we obtain regions where the test set accuracy does not decrease much and the bipartite layers satisfy the Ramanujan graph property. Subsequently, we obtain regions where the Ramanujan graph properties are lost for all the layers and the test accuracy decreases sharply. Also, we observe that the impact of noise in the data on test set accuracy is more adverse when some of the layers lose their Ramanujan graph properties. Experimental results are presented for the Lenet architecture on the MNIST dataset and the Conv4 architecture on the CIFAR10 dataset. Results for other popular feed-forward networks are presented in the Appendix.
We suggest that preservation of Ramanujan graph properties may benefit existing network pruning algorithms. We propose a modified pruning algorithm that uses the spectral bounds. The algorithm identifies network layers that may still be pruned further, while avoiding pruning in layers that have already lost their Ramanujan graph property. Neural network weight score functions suggested by various pruning algorithms are used to represent the bipartite layers as weighted graphs. Spectral bounds for these graphs are used to verify the Ramanujan property. For a number of popular pruning algorithms, we experimentally demonstrate a significant improvement in accuracy for sparse networks by using these connectivity criteria. Contributions: The contributions of the paper are the following: 1. We propose a methodology for analyzing winning lottery tickets with respect to their spectral properties. 2. We empirically observe that winning lottery tickets often satisfy the layerwise bipartite Ramanujan graph property, representing a sparse but resiliently connected global network. The property is checked using spectral bounds that generalize to irregular networks. We also notice better noise robustness when all the layers of the pruned sparse networks preserve the Ramanujan graph property. 3. Based on the above property, we propose a modified iterative network pruning algorithm that attempts to preserve the Ramanujan graph property for all the layers even at low network densities. It identifies layers that are still amenable to pruning while avoiding further pruning in layers that have lost their Ramanujan graph property. We report significant performance improvements for a number of popular pruning algorithms modified using this criterion. 2 THE LOTTERY TICKET HYPOTHESIS AND NETWORK PRUNING .
The lottery ticket hypothesis states that a randomly initialized, dense neural network contains a subnetwork which, when trained independently using the same initialization, achieves a test accuracy close to that of the original network after training for fewer than or at most the same number of iterations (Frankle & Carbin, 2019). These subnetworks, denoted as 'winning tickets', can be uncovered by network pruning algorithms. Weight pruning is one such simple strategy. Let the original dense network be represented as the function $N(x; \theta)$, where $x$ is the input and $\theta$ are the weights. The weights have an initialization of $\theta_0$. Weight pruning generates a mask $m \in \{0, 1\}^{|\theta|}$ such that the pruned network can be represented by $N(x; m \odot \theta)$ with initialization $m \odot \theta_0$. Pruning algorithms that are used to obtain the winning tickets can be either one-shot or iterative. In one-shot pruning, the original network is trained to convergence, then $p\%$ of the weights are pruned and the surviving weights are re-initialized to their values in $\theta_0$, followed by retraining/fine-tuning the subnetwork. Here, the network training and pruning are not performed simultaneously, and pruning occurs only after convergence is reached. Iterative pruning repeats one-shot pruning over several iterations. This often leads to a higher pruning percentage while retaining test set accuracy. However, iterative pruning is more time-consuming than one-shot pruning. After pruning, the surviving weights may alternately be initialized to their values in $\theta_i$, $i$ denoting a small iteration number, rather than their initial values in $\theta_0$. This process, as illustrated in Figure 1 (Frankle et al., 2019b), is denoted as rewinding and is effective for large networks. We adopt this in our study. Various scoring functions are used to prioritize the weights for pruning. They may be based on weight magnitudes, gradients, information flow (Blalock et al., 2020; Hoefler et al., 2021), or saliency Tanaka et al.
(2020). Magnitude pruning provides a simple method for obtaining the pruning mask by retaining the top $p\%$ of weights $w_i \in \theta$ that have the highest values of $|w_i|$. The role of the weights in the local computation in the network layers is not considered. A higher pruning efficiency may be achieved by algorithms that account for connectivity structures in the individual layers. In the next section we describe graph parameters of the network that determine such connectivity. 3 EXPANDERS AND RAMANUJAN GRAPHS . Expanders are highly connected, yet sparse, graphs. In this work, we shall be considering finite, connected, undirected, but not necessarily regular graphs. Recall that the degree of a vertex $v$ in a graph is the number of half-edges emanating from $v$. Definition 1 ($(n, d, \varepsilon)$-expander) Let $\varepsilon > 0$. An $(n, d, \varepsilon)$-expander is a graph $G = (V, E)$ on $|V| = n$ vertices, having maximal degree $d$, such that for every set $\emptyset \neq U \subseteq V$ satisfying $|U| \leq \frac{n}{2}$, $|\delta(U)| \geq \varepsilon|U|$ holds. Here, $\delta(U)$ denotes the vertex boundary of $U$. The quantity $\frac{|\delta(U)|}{|U|}$ measures the rate of expansion, and the infimum of $\frac{|\delta(U)|}{|U|}$ as $U$ varies among the non-empty subsets of $V$ with $|U| \leq \frac{|V|}{2}$ is called the (vertex) Cheeger constant $h(G)$ of the graph $G$. The higher the value of $h(G)$, the more expansion the graph exhibits, and vice versa. Expansion and the Cheeger constant quantify the connectivity properties of a graph, as a high value of $h(G)$ signifies that the graph is strongly connected. This ensures that information can flow freely without many bottlenecks. Definition 2 (Expander family) A sequence of finite, connected graphs $\{G_i = (V_i, E_i)\}_{i=1,2,\cdots}$ on $V_i$ vertices and $E_i$ edges is called an expander family if there exists a uniform $\epsilon > 0$ such that each graph in the sequence is a $(|V_i|, d_i, \epsilon)$-expander for some $d_i$'s.
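The magnitude-pruning mask described at the start of this section can be sketched as follows; the function name and the tie-breaking behaviour at the threshold are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def magnitude_prune_mask(theta, prune_frac):
    """Binary mask m keeping the largest-|w| entries of theta; ties exactly at
    the threshold are all pruned, which may remove slightly more weights."""
    flat = np.abs(theta).ravel()
    k = int(prune_frac * flat.size)        # number of weights to prune
    if k == 0:
        return np.ones_like(theta)
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(theta) > threshold).astype(theta.dtype)

theta = np.array([[0.1, -0.8], [0.05, 0.4]])
mask = magnitude_prune_mask(theta, prune_frac=0.5)
pruned = mask * theta                      # the subnetwork N(x; m ⊙ θ)
```

Each layer's surviving mask is exactly the bi-adjacency pattern whose spectral (Ramanujan) properties the following definitions characterize.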
The study of the expansion properties of graphs is closely related to the study of the spectrum (distribution of eigenvalues) of the adjacency operator defined on them. Given a finite $r$-regular graph of size $|V| = n$, the eigenvalues $t_i$ of the adjacency matrix are all real and they satisfy $-r \leq t_n \leq \ldots \leq t_2 \leq t_1 = r$. The graph is connected iff $t_2 < t_1$ and is bipartite iff the $t_i$'s are symmetric about $0$ (in particular, $t_n = -r$). The quantity $t_1 - t_2$ is known as the spectral gap, and it is related to the Cheeger constant via the discrete Cheeger-Buser inequality, discovered independently by Dodziuk (1984) and by Alon & Milman (1985). In our context, we consider a stronger notion of the spectral gap (but it is equivalent for bipartite graphs). Let $t := \max\{|t_i| : 1 < i \leq n, |t_i| < t_1\}$. Here, the quantity $t_1 - t$ will denote the spectral gap. The more connected a graph is, the larger the spectral gap, and ideally, a graph with strong expansion properties has a very large spectral gap. However, for a bounded-degree expander family, this spectral gap can not be arbitrarily large. This brings us to the notion of Ramanujan graphs. Definition 3 (Ramanujan graph) Let $G$ be an $r$-regular graph on $n$ vertices, with adjacency eigenvalues $\{t_i\}_{i=1,2,\ldots,n}$ satisfying $-r \leq t_n \leq \ldots \leq t_2 \leq t_1 = r$. Let $t(G) := \max\{|t_i| : 1 < i \leq n, |t_i| < t_1\}$. Then $G$ is a Ramanujan graph if $t(G) \leq 2\sqrt{r - 1} = 2\sqrt{t_1 - 1}$. The fact that in a bounded-degree expander family the eigenvalue bound of Ramanujan graphs is maximal can be deduced from the following result due to Alon, see Nilli (1991): $t(G) \geq 2\sqrt{r - 1} - \frac{2\sqrt{r - 1} - 1}{\lfloor m/2 \rfloor}$, where $m$ denotes the diameter of the graph. As $m \to +\infty$ and $r$ is bounded, we obtain the result (this also follows from the Alon-Boppana theorem). For our applications, we shall encounter not-necessarily-regular graphs, thus we need an irregular version of the notion of Ramanujan graphs.
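The regular-graph condition of Definition 3 can be checked numerically as sketched below (a toy verification on the complete graph $K_4$; the tolerance constants are arbitrary and this is not the paper's irregular-graph criterion, which follows in the next paragraph):

```python
import numpy as np

def is_ramanujan_regular(adj):
    """Check Definition 3 for an r-regular graph: t(G) <= 2*sqrt(r - 1), where
    t(G) = max{|t_i| : |t_i| < t_1} and t_1 = r is the largest eigenvalue."""
    eig = np.sort(np.abs(np.linalg.eigvalsh(adj)))[::-1]
    r = eig[0]
    inner = eig[eig < r - 1e-9]            # eigenvalues with |t_i| < t_1
    t = inner.max() if inner.size else 0.0
    return bool(t <= 2.0 * np.sqrt(r - 1.0) + 1e-9)

# K4 is 3-regular with spectrum {3, -1, -1, -1}: t(G) = 1 <= 2*sqrt(2)
K4 = np.ones((4, 4)) - np.eye(4)
```

Filtering on $|t_i| < t_1$ rather than simply dropping the top eigenvalue also handles bipartite graphs, where $t_n = -t_1$ must be excluded from $t(G)$.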
The two ways we shall exploit Definition 3 for irregular graphs will be to: 1. use the average degree $d_{avg}$ in place of the regularity; 2. for weighted graphs, use $t_1$, the largest eigenvalue of the adjacency matrix. Note that a motivation for considering the above bounds comes from the following generalisation of the definition of Ramanujan graphs to irregular graphs. For a finite, connected graph $G$ (not necessarily regular), consider its universal cover $\tilde{G}$; for details, see (Hoory et al., 2006, sec. 6). A Ramanujan graph is a graph satisfying $t(G) \leq \rho(\tilde{G})$, where $\rho(\tilde{G})$ denotes the spectral radius of $\tilde{G}$. See also Marcus-Spielman-Srivastava (Marcus et al., 2015, sec. 2.3). A result of Hoory, see Hoory (2005), implies that $2\sqrt{d_{avg} - 1} \leq \rho(\tilde{G})$. Thus, it makes sense to consider $t(G) \leq 2\sqrt{d_{avg} - 1} \leq \rho(\tilde{G})$. The above considerations also result in extremal families, Hoory (2005). Further, for irregular bipartite graphs with minimal degree at least two and an average degree $d_{avgL}$ on the left and $d_{avgR}$ on the right, we can further consider the sharper estimate $t(G) \leq \sqrt{d_{avgL} - 1} + \sqrt{d_{avgR} - 1} \leq \rho(\tilde{G})$; see Hoory (2005). Up to now, we have only discussed unweighted graphs. A weighted graph is a graph with a weight function $w : E \to \mathbb{R}_{\geq 0}$ attached to the edges. It can be looked upon as a generalisation of unweighted graphs, in the sense that in the unweighted case the function $w$ takes values in the set $\{0, 1\}$. In the case of weighted networks, we shall use the absolute values of the weight functions according to the architecture for the corresponding dataset. This also means that in the case of weighted graphs, we need to modify the definition of the edge set of the graph to incorporate multiple (as well as fractional) edges. The theory of the characterisation of weighted Ramanujan graphs is not well developed.
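The bipartite average-degree bound above can be sketched for one unweighted layer as follows; this is a simplified illustration that assumes a connected bipartite graph whose two extreme eigenvalues $\pm t_1$ are simple (so the third-largest $|t_i|$ is $t(G)$), and the function name is ours, not the paper's:

```python
import numpy as np

def bipartite_ramanujan_gap(biadjacency):
    """Return (t, bound) for one unweighted bipartite layer, where bound is
    Hoory's estimate sqrt(d_avgL - 1) + sqrt(d_avgR - 1)."""
    B = (biadjacency != 0).astype(float)
    d_left = B.sum(axis=1).mean()          # average left (row) degree
    d_right = B.sum(axis=0).mean()         # average right (column) degree
    n_l, n_r = B.shape
    A = np.zeros((n_l + n_r, n_l + n_r))   # full symmetric adjacency matrix
    A[:n_l, n_l:] = B
    A[n_l:, :n_l] = B.T
    eig = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    t = eig[2]                             # skip t_1 and t_n = -t_1 (bipartite)
    return t, np.sqrt(d_left - 1.0) + np.sqrt(d_right - 1.0)

# Complete bipartite K_{2,2}: spectrum {2, 0, 0, -2}, so t(G) = 0 <= 2
t, bound = bipartite_ramanujan_gap(np.ones((2, 2)))
```

A layer passes the check while `t <= bound`; once pruning pushes `t` past the bound, the bipartite layer has lost its Ramanujan property in the sense used here.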
However, a characterisation of weighted expanders (with positive weights) exists, due to the Cheeger inequality for such graphs, see (Chung, 1996, sec. 5), and we use the largest eigenvalue of the adjacency matrix in place of the regularity. In the case of regular graphs, this coincides with the notion of Ramanujan graphs, and even in the general case, by the Cheeger inequality, it ensures a large expansion, which in turn implies that there is no bottleneck to the free flow of information. This forms the theoretical basis of our work. | This paper revisits the Lottery Ticket Hypothesis (LTH), where aggressively pruning a neural network in a smart way allows it to retain its convergence properties at a lesser computational cost. The authors advance an explanation relating the LTH to expanders, graphs with the property that information circulates very well between the vertices, and then to Ramanujan graphs. They then propose an algorithm built on this premise that prunes the graph as long as the Ramanujan property is still satisfied. Finally, a wealth of experiments is shown, demonstrating the relation between the Ramanujan property and the performance of the pruned network. The authors also test the performance of their Ramanujan pruning algorithm on the same datasets. | SP:18de06d24b04d5bda2a3f7d6c3cd4f28076cfee5 |
Revisiting the Lottery Ticket Hypothesis: A Ramanujan Graph Perspective | 1 INTRODUCTION AND RELATED WORK . Neural Network ( NN ) and its recent advancements have made a significant contribution to solve various machine learning applications . The power of an over-parameterized NN lies in its capability to learn simple patterns and memorize the noise in the data ( Neyshabur et al. , 2018 ) . However , the training of such networks requires enormous computational resources , and often the deployment onto low-resource environments such as mobile devices , or embedded systems becomes difficult . Recent trend in research to reduce training time of deep neural networks has shifted towards pretraining following the introduction of a remarkable contribution , named the Lottery Ticket Hypothesis ( LTH ) , which hypothesize the existence of a highly sparse subnetwork and weight initialization to reduce the training resources as well ( Frankle & Carbin , 2019 ) . It uses a simple iterative , magnitudebased pruning algorithm , and empirically shows that even after removing approximately 90 % of the weights , the subnetwork can preserve the original generalization error . In the subsequent studies , the focus goes on finding this lottery ticket for more competitive tasks by pruning with weight rewinding ( Frankle et al. , 2019a ) , fine tuning the learning rates ( Renda et al. , 2020 ) , more efficient training ( You et al. , 2019 ; Brix et al. , 2020 ; Girish et al. , 2021 ) . Neural network pruning involves sparsification of the network ( LeCun et al. , 1990 ; Blalock et al. , 2020 ) . It identifies the weight parameters , removal of which incurs minimal effect on the generalization error . There exists different categories of pruning based on ( i ) how the pruning is performed , for instance based on the weight magnitude ( Han et al. , 2015 ; Zhu & Gupta , 2017 ) , gradient in the backpropagation , hessian of the weight ( Hassibi et al. , 1993 ; Dong et al. 
, 2017 ; Lee et al. , 2018 ) , etc ; ( ii ) whether the pruning is global or local ; ( iii ) how often pruning should be applied like one-shot Lee et al . ( 2018 ) ; Wang et al . ( 2020 ) , iterative Tanaka et al . ( 2020 ) . One of the primary goals in the literature has been to reduce the computational footprint at the time of prediction , i.e. , during post-training . In recent LTH studies , the victim weights are determined by their value at the ini- tialization , gradient of the error , and network topology Lee et al . ( 2019 ) ; Tanaka et al . ( 2020 ) . To understand weight initialization , Malach et al . ( 2020 ) show that pruning makes a stronger hypothesis with bounded weight distribution . The sparsity of the network is reduced from polynomial to a logarithmic factor of the number of training variables ( Orseau et al. , 2020 ) . Mocanu et al . ( 2018 ) suggest to consider the topology of the network from a network science point of view . The pruning algorithm starts from a random Erdős Rényi graph and returns a scale-free network of a high sparsity factor based on the number of neurons in each layer . The method is further evolved for convolution layers considering both the magnitude and gradient of the weights ( Evci et al. , 2020a ) . Various analysis for explaining the LTH have been attempted in the past . Researchers Evci et al . ( 2020b ) explain empirically why the LTH works through gradient flow at different stages of the training . Despite previous attempts to explain why the Lottery Ticket Hypothesis works , the underlying phenomenon associated with the hypothesis still remains ill-understood . All of these studies related to LTH identify that a sparse sub-network can be trained instead of a complete network and the network needs to be connected from input to output layers . However , none of them try to explain the LTH and the properties of the pruned network through the lens of spectral graph theory . 
The network connectivity can be described from the graph expansion point of view , where any subset of vertices of size less than or equal to half of the number of vertices in a graph , is adjacent to at least a fraction of the number of vertices in that set ; for details , see ( Lubotzky , 2010 ) . Graphs satisfying this property are known as expander graphs . The Ramanujan Graph is a special graph in a bounded degree expander family , where the eigenbound is maximal ( Nilli , 1991 ) . This leads to a maximum possible sparsity of a network while preserving the connectivity . In this paper , we initiate a study to observe the characteristics of a pruned sub-network from the spectral properties of its adjacency matrix , which , has not been reported previously . We represent a feed-forward neural network as a series of connected bipartite graphs . Both weighted and unweighted bi-adjacency matrices are considered . The Ramanujan graph properties of each of the bipartite layers are studied . We use the results of Hoory ( 2005 ) on the bound of spectral gap for the weight matrix of a pruned network . It is empirically observed that existence of winning tickets in a pruned network is dependent on the Ramanujan graph properties of the bipartite layers . As network sparsity increases with more aggressive pruning , we obtain regions where test set accuracy do not decrease much and the bipartite layers satisfy Ramanujan graph property . Subsequently we obtain regions where the Ramanujan graph properties are lost for all the layers and test accuracy decreases sharply . Also , we observe that the impact of noise in the data , on test set accuracy is more adverse when some of the layers lose their Ramanujan graph properties . Experimental results are presented for the Lenet architecture on the MNIST dataset and the Conv4 architecture on the CIFAR10 dataset . Results for other popular feed-forward network are presented in the Appendix . 
We suggest that preservation of Ramanujan graph properties may benefit existing network pruning algorithms . We propose a modified pruning algorithm that uses the spectral bounds . The algorithm identifies network layers that may still be pruned further , while avoiding pruning in layers that have already lost their Ramanujan graph property . Neural network weight score functions suggested by various pruning algorithms are used to represent the bipartite layers as weighted graphs . Spectral bounds for these graphs are used to verify the Ramanujan property . For a number of popular pruning algorithms , we experimentally demonstrate significant improvement in accuracy for sparse networks by using these connectivity criteria . Contributions : The contributions of the paper are the following : 1 . We propose a methodology for analyzing winning lottery tickets with respect to their spectral properties . 2 . We empirically observe that winning lottery tickets often satisfy the layerwise bipartite Ramanujan graph property , representing a sparse but resiliently connected global network . The property is checked using spectral bounds that generalize to irregular networks . We also notice better noise robustness when all the layers of the pruned sparse networks preserve the Ramanujan graph property . 3 . Based on the above property , we propose a modified iterative network pruning algorithm that attempts to preserve the Ramanujan graph property for all the layers even at low network densities . It identifies layers that are still amenable to pruning while avoiding further pruning in layers that have lost their Ramanujan graph property . We report significant performance improvement for a number of popular pruning algorithms modified using these criteria . 2 THE LOTTERY TICKET HYPOTHESIS AND NETWORK PRUNING . 
The lottery ticket hypothesis states that a randomly initialized , dense neural network contains a subnetwork which , when trained independently using the same initialization , achieves a test accuracy close to the original network after training for less than or at most the same number of iterations Frankle & Carbin ( 2019 ) . These subnetworks , denoted as ‘ winning tickets ’ , can be uncovered by network pruning algorithms . Weight pruning is one such simple strategy . Let the original dense network be represented as the function N ( x ; θ ) , where x is the input and θ are the weights . The weights have an initialization of θ0 . Weight pruning generates a mask m ∈ { 0 , 1 } |θ| such that the pruned network can be represented by N ( x ; m⊙ θ ) with initialization m⊙ θ0 . Pruning algorithms that are used to obtain the winning tickets can be either one-shot or iterative . In one-shot pruning the original network is trained to convergence , then p % of weights are pruned and the surviving weights are re-initialized to their values in θ0 , followed by retraining/fine-tuning the subnetwork . Here , the network training and pruning are not simultaneously performed and pruning occurs only after convergence is reached . Iterative pruning repeats one-shot pruning over several iterations . This often leads to a higher pruning percentage while retaining test set accuracy . However , iterative pruning is more time consuming than one-shot pruning . After pruning , the surviving weights may alternately be initialized to their values in θi , i denoting a small iteration number , rather than their initial values in θ0 . This process , as illustrated in Figure 1 ( Frankle et al. , 2019b ) , is denoted as rewinding and is effective for large networks . We adopt this in our study . Various scoring functions are used to prioritize the weights for pruning . They may be based on weight magnitudes , gradient , information flow ( Blalock et al. , 2020 ; Hoefler et al. , 2021 ) or saliency Tanaka et al . 
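The iterative magnitude-pruning loop with rewinding described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the `train` argument is a hypothetical placeholder for training the masked subnetwork to convergence, and the weights are flattened into a single vector for simplicity.

```python
import numpy as np

def magnitude_mask(theta, mask, prune_frac):
    """Zero out the lowest-|w| fraction of the currently surviving weights."""
    alive = np.flatnonzero(mask)
    k = int(prune_frac * alive.size)            # number of survivors to remove
    if k > 0:
        victims = alive[np.argsort(np.abs(theta[alive]))[:k]]
        mask = mask.copy()
        mask[victims] = 0
    return mask

def iterative_prune(theta0, train, prune_frac=0.2, rounds=3, rewind=None):
    """LTH-style iterative magnitude pruning with rewinding.

    train  : callable mapping (masked) weights to trained weights.
    rewind : weights theta_i from an early training iteration; defaults to theta0.
    """
    rewind = theta0 if rewind is None else rewind
    mask, theta = np.ones_like(theta0), theta0
    for _ in range(rounds):
        theta = train(mask * theta)             # train the surviving subnetwork
        mask = magnitude_mask(theta, mask, prune_frac)
        theta = mask * rewind                   # rewind survivors to early values
    return mask, theta

rng = np.random.default_rng(0)
theta0 = rng.normal(size=1000)
# identity "training" keeps the example deterministic
mask, theta = iterative_prune(theta0, train=lambda w: w, prune_frac=0.2, rounds=3)
```

With a 20 % per-round rate and three rounds, the surviving density is 0.8³ ≈ 0.512, and pruned positions stay exactly zero.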
( 2020 ) . Magnitude pruning provides a simple method for obtaining the pruning mask by retaining the top p % of weights wi ∈ θ that have the highest values of |wi| . The role of the weights in local computation in the network layers is not considered . A higher pruning efficiency may be achieved by algorithms that account for connectivity structures in the individual layers . In the next section we describe graph parameters of the network that determine such connectivity . 3 EXPANDERS AND RAMANUJAN GRAPHS . Expanders are highly connected , yet sparse , graphs . In this work , we shall be considering finite , connected , undirected , but not necessarily regular graphs . Recall that the degree of a vertex v in a graph is the number of half edges emanating from v. Definition 1 ( ( n , d , ε ) -expander ) Let ε > 0 . An ( n , d , ε ) -expander is a graph G = ( V , E ) on |V| = n vertices , having maximal degree d , such that for every set ∅ ≠ U ⊆ V satisfying |U| ≤ n/2 , |δ(U)| ≥ ε|U| holds . Here , δ ( U ) denotes the vertex boundary of U . The quantity |δ(U)|/|U| measures the rate of expansion , and the infimum of |δ(U)|/|U| as U varies among the non-empty subsets of V with |U| ≤ |V|/2 is called the ( vertex ) Cheeger constant h ( G ) of the graph G. The higher the value of h ( G ) , the stronger the expansion property the graph exhibits , and vice versa . Expansion and the Cheeger constant quantify the connectivity properties of a graph , as a high value of h ( G ) signifies that the graph is strongly connected . This ensures that information can flow freely without major bottlenecks . Definition 2 ( Expander family ) A sequence of finite , connected graphs { Gi = ( Vi , Ei ) } i=1,2 , ··· on |Vi| vertices with |Ei| edges is called an expander family if there exists a uniform ε > 0 such that each graph in the sequence is an ( |Vi| , di , ε ) -expander for some di ’ s . 
The study of expansion properties of graphs is closely related to the study of the spectrum ( distribution of eigenvalues ) of the adjacency operator defined on them . Given a finite r-regular graph of size |V| = n , the eigenvalues ti of the adjacency matrix are all real and they satisfy −r ≤ tn ≤ . . . ≤ t2 ≤ t1 = r. The graph is connected iff t2 < t1 and is bipartite iff the ti ’ s are symmetric about 0 ( in particular tn = −r ) . The quantity t1 − t2 is known as the spectral gap and it is related to the Cheeger constant via the discrete Cheeger-Buser inequality , discovered independently by Dodziuk ( 1984 ) and by Alon & Milman ( 1985 ) . In our context , we consider a stronger notion of the spectral gap ( but it is equivalent for bipartite graphs ) . Let t := max { |ti| : 1 < i ≤ n , |ti| < t1 } . Here , the quantity t1 − t will denote the spectral gap . The more connected a graph is , the larger the spectral gap ; ideally , a graph with strong expansion properties has a very large spectral gap . However , for a bounded degree expander family , this spectral gap cannot be arbitrarily large . This brings us to the notion of Ramanujan graphs . Definition 3 ( Ramanujan graph ) Let G be an r-regular graph on n vertices , with adjacency eigenvalues { ti } i=1,2 , ... n , satisfying −r ≤ tn ≤ . . . ≤ t2 ≤ t1 = r. Let t ( G ) := max { |ti| : 1 < i ≤ n , |ti| < t1 } . Then G is a Ramanujan graph if t ( G ) ≤ 2√ ( r − 1 ) = 2√ ( t1 − 1 ) . The fact that in a bounded degree expander family , the eigenvalue bound in Ramanujan graphs is maximal can be deduced from the following result due to Alon , see Nilli ( 1991 ) : t ( G ) ≥ 2√ ( r − 1 ) − ( 2√ ( r − 1 ) − 1 ) / ⌊m/2⌋ , where m denotes the diameter of the graph . As m → +∞ and r is bounded , we obtain the result ( this also follows from the Alon-Boppana theorem ) . For our applications , we shall encounter not necessarily regular graphs , thus we need an irregular version of the notion of Ramanujan graphs . 
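Definition 3 can be checked numerically from the adjacency spectrum. The sketch below (an illustration, not code from the paper) computes t(G) and compares it with the 2√(r − 1) bound; it is exercised on the complete graph K6, which is 5-regular with spectrum {5, −1, −1, −1, −1, −1} and hence Ramanujan.

```python
import numpy as np

def ramanujan_check(A, tol=1e-9):
    """For the adjacency matrix A of an r-regular graph, return
    (is_ramanujan, t, t1) with t(G) = max{|t_i| : |t_i| < t1}."""
    evals = np.linalg.eigvalsh(A)               # ascending order, all real
    t1 = evals[-1]                              # largest eigenvalue (= r here)
    inside = np.abs(evals)[np.abs(evals) < t1 - tol]
    t = inside.max() if inside.size else 0.0
    return bool(t <= 2.0 * np.sqrt(t1 - 1.0) + tol), float(t), float(t1)

# K6 is 5-regular with spectrum {5, -1, -1, -1, -1, -1}: t(G) = 1 <= 2*sqrt(4) = 4
A = np.ones((6, 6)) - np.eye(6)
ok, t, t1 = ramanujan_check(A)
```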
The two ways we shall exploit Definition 3 for irregular graphs will be to : 1 . Use the average degree davg in place of the regularity . 2 . For weighted graphs , use t1 , the largest eigenvalue of the adjacency matrix . Note that a motivation for considering the above bounds comes from the following generalisation of the definition of Ramanujan graphs to irregular graphs . For a finite , connected graph G ( not necessarily regular ) consider its universal cover G̃ ; for details see ( Hoory et al. , 2006 , sec . 6 ) . A Ramanujan graph is a graph satisfying t ( G ) ≤ ρ ( G̃ ) , where ρ ( G̃ ) denotes the spectral radius of G̃ . See also Marcus–Spielman–Srivastava ( Marcus et al. , 2015 , sec . 2.3 ) . A result of Hoory , see Hoory ( 2005 ) , implies that 2√ ( davg − 1 ) ≤ ρ ( G̃ ) . Thus , it makes sense to consider t ( G ) ≤ 2√ ( davg − 1 ) ≤ ρ ( G̃ ) . The above considerations also result in extremal families ; see Hoory ( 2005 ) . Further , for irregular bipartite graphs , with minimal degree at least two and an average degree davgL on the left and davgR on the right , we can further consider the sharper estimate t ( G ) ≤ √ ( davgL − 1 ) + √ ( davgR − 1 ) ≤ ρ ( G̃ ) ; see Hoory ( 2005 ) . Up to now , we have only discussed unweighted graphs . A weighted graph is a graph with a weight function w : E → R≥0 attached on the edges . It can be looked upon as a generalisation of unweighted graphs , in the sense that in the unweighted case , the function w takes values in the set { 0 , 1 } . In the case of weighted networks , we shall use the absolute values of the weight functions according to the architecture for the corresponding dataset . This also means that in the case of weighted graphs , we need to modify the definition of the edge set of the graph to incorporate multiple ( as well as fractional ) edges . The theory of characterisation of weighted Ramanujan graphs is not well developed . 
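For a pruned layer, the irregular bipartite estimate above can be evaluated directly on the layer's 0/1 bi-adjacency mask. The sketch below (our illustration, not the paper's code) forms the full adjacency matrix [[0, B], [Bᵀ, 0]], computes t(G), and compares it with √(davgL − 1) + √(davgR − 1); the example mask is K3,3 with a perfect matching removed, i.e. a 6-cycle.

```python
import numpy as np

def bipartite_ramanujan_check(B, tol=1e-9):
    """B: 0/1 bi-adjacency mask of one pruned layer (left nodes x right nodes).
    Compares t(G) against the irregular bipartite bound sqrt(dL-1) + sqrt(dR-1)."""
    nL, nR = B.shape
    A = np.zeros((nL + nR, nL + nR))            # full adjacency [[0, B], [B^T, 0]]
    A[:nL, nL:] = B
    A[nL:, :nL] = B.T
    evals = np.linalg.eigvalsh(A)
    t1 = evals[-1]
    inside = np.abs(evals)[np.abs(evals) < t1 - tol]
    t = inside.max() if inside.size else 0.0
    d_avg_L, d_avg_R = B.sum(axis=1).mean(), B.sum(axis=0).mean()
    bound = np.sqrt(d_avg_L - 1.0) + np.sqrt(d_avg_R - 1.0)
    return bool(t <= bound + tol), float(t), float(bound)

# K_{3,3} minus a perfect matching is a 6-cycle: 2-regular, t(G) = 1, bound = 2
B = np.ones((3, 3)) - np.eye(3)
ok, t, bound = bipartite_ramanujan_check(B)
```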
However , a characterisation of weighted expanders ( with positive weights ) exists , due to the Cheeger inequality for such graphs , see ( Chung , 1996 , sec . 5 ) , and we use the largest eigenvalue of the adjacency matrix in place of the regularity . In the case of regular graphs , it coincides with the notion of Ramanujan graphs , and even in the general case , by the Cheeger inequality , it ensures a large expansion , which in turn implies that there is no bottleneck to the free flow of information . This forms the theoretical basis of our work . | In this paper, the authors study the properties of the subnetworks in the lottery ticket hypothesis (LTH) from the perspective of spectral graph theory. They argue that the pruned network in LTH remains a Ramanujan graph. The performance of the subnetworks begins to degrade with the loss of the Ramanujan graph property. To this end, they propose an eigen-bound based network pruning algorithm to preserve the graph property. Extensive experiments are conducted to demonstrate the effectiveness of the proposed method. | SP:18de06d24b04d5bda2a3f7d6c3cd4f28076cfee5 |
Associated Learning: an Alternative to End-to-End Backpropagation that Works on CNN, RNN, and Transformer | 1 INTRODUCTION . Backpropagation ( BP ) is the keystone of modern deep learning . Although BP is the standard way to learn network parameters , it is far from ideal . Some of the most discussed issues of BP are optimization difficulties ( e.g. , vanishing gradients and exploding gradients ( Hochreiter et al. , 2001 ) ) and training performance ( e.g. , backward locking ( Jaderberg et al. , 2017 ) ) . It appears that custom network structures may be needed for different types of learning tasks . Among the various forms , convolutional neural networks ( CNNs ) , recurrent neural networks ( RNNs ) , and Transformer networks ( along with their extensions , e.g. , LSTM ( Hochreiter & Schmidhuber , 1997 ) and VGG ( Simonyan & Zisserman , 2015 ) ) are particularly useful . These networks have been successfully applied in fields as varied as computer vision , natural language processing , signal processing , and others ( Goodfellow et al. , 2016 ; Deng & Yu , 2014 ) . This paper studies a new learning approach—associated learning ( AL ) —an alternative to end-to-end backpropagation learning . AL decomposes BP ’ s global end-to-end training strategy into several small local optimization targets such that each layer has an isolated gradient flow . However , since most layers in AL do not interact with the final loss directly , we would expect AL-trained models to be less accurate than BP-trained ones . Surprisingly , the original AL paper compares AL and BP using the CNN network ( and its extensions , e.g. , VGG ) and shows impressive results based on image classification datasets ( MNIST , CIFAR-10 , and CIFAR-100 ) ( Kao & Chen , 2021 ) . We continue this line of study in two ways . First , we discover more interesting properties of AL . 
Second , we show how to apply AL on different network structures , including VGG ( Simonyan & Zisserman , 2015 ) , LSTM ( Hochreiter & Schmidhuber , 1997 ) , and Transformer ( Vaswani et al. , 2017 ) . We then compare the networks learned via AL and via BP on various tasks ( image classification , sentiment analysis , and topic classification ) based on public datasets ( CIFAR-10 , CIFAR-100 , IMDB movie reviews , AG ’ s News , Stanford Sentiment Treebank ( SST ) , and DBpedia ) . We find that AL consistently outperforms BP on most datasets . Additionally , AL requires fewer epochs than BP when using early stopping but still yields excellent accuracy . These results suggest that AL is a strong alternative to BP , as AL is effective for various tasks and various network structures . The rest of the paper is organized as follows . In Section 2 , we introduce associated learning and its properties . Section 3 presents the experiments , including a comparison of the model accuracies and convergence speed of AL and BP on VGG , LSTM , and Transformer , a generalization test of AL and BP , and an ablation study . Section 4 reviews previous work on backpropagation alternatives and the network structures that may look similar to AL . Finally , we conclude our contribution in Section 5 . 2 ASSOCIATED LEARNING . 2.1 AN OVERVIEW OF AL : AL FORM , TRAINING , AND INFERENCE . To apply AL , we must transform the network into a different structure , which we call the AL form below . Instead of learning a function to map a feature vector x to a target y , the AL form performs metric learning by searching for functions to transform x and y into latent representations such that the distance between these two latent representations is small . Algorithm 1 shows the procedure to convert a neural network into an AL form . We use Figure 1 ( a neural network with 3 hidden layers ) as an example to show the process . 
We can regard functions f1 , f2 , f3 as the encoders to convert the input feature vector into a latent representation and clf as a classifier to transform the latent representation into a target . When converting this network to its AL form ( referring to Figure 2 ) , we only keep the encoders fi-s and extend the model architecture by adding a bridge function bi and an autoencoder for each layer i . Let si−1 and ti−1 be the inputs of layer i ( assuming s0 := x and t0 := y ) ; functions fi and gi convert si−1 and ti−1 into latent representations si and ti . The function fi can be any type of forwarding block , such as a convolution layer , an LSTM layer , or a residual block in ResNet . Each bridge function bi projects si ( the latent representation of x at layer i ) to be close to ti ( the latent representation of y at layer i ) , i.e. , bi ( fi ( si−1 ) ) ≈ gi ( ti−1 ) . Meanwhile , gi not only extracts the latent representation of y , but also serves as the encoder for an autoencoder such that hi ( gi ( ti−1 ) ) ≈ ti−1 ( hi is the decoder ) . For all the AL-form networks in our experiments , each of the functions gi and hi is constructed by a linear transform followed by a non-linear activation function ( e.g. , tanh ) , and bi is a linear transformation of the input vector or input matrix . We discuss the number of neurons for the input , output , and hidden layers of the autoencoders in Section 6.9 . To summarize , we minimize the following objective for each layer i during training : LiAL = LiA + LiAE = ‖bi(fi(si−1)) − gi(ti−1)‖² + ‖hi(gi(ti−1)) − ti−1‖² , ( 3 ) where LiA denotes the associated loss of layer i , and LiAE indicates the autoencoder loss of layer i . Algorithm 2 gives the training algorithm for AL . Since each layer has its own objective function ( Equation 3 ) , we can update the parameters of different layers in parallel using a pipeline to increase the training throughput ( details in Section 2.2.3 ) . 
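The per-layer objectives of Equation 3 can be illustrated with a toy forward pass. The sketch below is our own minimal NumPy rendering (layer widths, tanh-linear blocks, and random weights are illustrative assumptions, not the paper's configuration); it computes the associated loss and the autoencoder loss for each layer, each of which would be minimized locally without any cross-layer gradient flow.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(d_in, d_out):
    """A toy tanh-linear block with fixed random weights (illustrative only)."""
    W = rng.normal(scale=0.1, size=(d_in, d_out))
    return lambda v: np.tanh(v @ W)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

L, dx, dy, dh = 3, 8, 1, 4                      # toy layer count and widths
f = [linear(dx, dx) for _ in range(L)]          # encoders for x
g = [linear(dy, dh)] + [linear(dh, dh) for _ in range(L - 1)]  # encoders for y
h = [linear(dh, dy)] + [linear(dh, dh) for _ in range(L - 1)]  # decoders: h[i] maps t_i back to t_{i-1}
b = [linear(dx, dh) for _ in range(L)]          # bridge functions

x, y = rng.normal(size=(1, dx)), rng.normal(size=(1, dy))
s, t = x, y
losses = []
for i in range(L):
    s, t_prev = f[i](s), t                      # s_i = f_i(s_{i-1}); remember t_{i-1}
    t = g[i](t)                                 # t_i = g_i(t_{i-1})
    L_A = mse(b[i](s), t)                       # associated loss of layer i
    L_AE = mse(h[i](t), t_prev)                 # autoencoder loss of layer i
    losses.append(L_A + L_AE)                   # each layer minimizes its own objective
```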
For inference , if an AL network has L layers , an input x goes through f1 , . . . , fL to generate x ’ s latent representation ( sL ) , which is transformed to y ’ s latent representation ( tL ) via bL . Next , tL is converted to y via hL , . . . , h1 . The autoencoders ’ encoding functions gi-s are not used during inference . Taking Figure 2 as an example , the inference path is ( f1 −→ f2 −→ f3 −→ b3 −→ h3 −→ h2 −→ h1 ) . Although it could be unintuitive to concatenate two unconnected transformation functions as an inference path ( i.e. , hi+1 followed by hi , i = 1 , 2 , 3 ) , it is valid because hi+1 ( ti+1 ) = t′i ≈ ti .

Algorithm 1 Converting a neural network to its AL form
Input : a neural network N = ( f1 , f2 , ... , fL ) , target y ∈ R1 , feature vector x ∈ Rm
Output : an AL network A
1 : s0 ← x ; t0 ← y
2 : for i = 1 to L do
3 :     si = fi ( si−1 )
4 :     insert a function gi , where ti = gi ( ti−1 )
5 :     insert a function bi , where s′i = bi ( si ) ▷ adding a bridge function
6 :     insert a function hi , where t′i−1 = hi ( ti ) ▷ adding a decoder for an autoencoder
7 :     LiAL = MSE ( s′i , ti ) + MSE ( ti−1 , t′i−1 ) ▷ associated loss and autoencoder loss
8 : return A = ( a1 , . . . , aL ) , where ai = { fi , bi , gi , hi } for i = 1 , . . . , L

Algorithm 2 Training an AL network
Input : an AL network A = ( a1 , . . . , aL ) with ai = { fi , bi , gi , hi } , training features and targets ( X , Y )
Output : a fine-tuned AL network A
1 : repeat
2 :     Sample x , y from X , Y .
3 :     s0 ← x ; t0 ← y
4 :     for i = 1 to L do
5 :         si ← fi ( si−1 )
6 :         ti ← gi ( ti−1 )
7 :         s′i ← bi ( si )
8 :         t′i−1 ← hi ( ti )
9 :         LiA ← MSE ( s′i , ti ) ▷ associated loss
10 :        LiAE ← MSE ( ti−1 , t′i−1 ) ▷ autoencoder loss
11 :        Update the parameters of fi , bi , gi according to ∇LiA
12 :        Update the parameters of gi , hi according to ∇LiAE
13 : until converged

2.1.1 APPLYING AL TO DIFFERENT NEURAL NETWORK STRUCTURES . 
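The inference path f1 , . . . , fL , bL , hL , . . . , h1 can be written as a small composition. The sketch below uses hypothetical scalar stand-ins for the layers; note that only the last bridge bL, and none of the gi, is used.

```python
def al_inference(x, f, b, h):
    """Full AL inference path: x -> f1..fL -> bL -> hL..h1 -> y_hat.
    The target encoders g_i are not needed at inference time."""
    s = x
    for fi in f:                 # encode x layer by layer
        s = fi(s)
    t = b[-1](s)                 # only the top bridge b_L is used
    for hi in reversed(h):       # decode from t_L down to a prediction of y
        t = hi(t)
    return t

# hypothetical scalar stand-ins for the layers of a 2-layer AL network
f = [lambda v: v + 1, lambda v: v * 2]     # f1, f2
b = [None, lambda v: v - 3]                # b1 unused at inference, b2 = b_L
h = [lambda v: v / 2, lambda v: v + 10]    # h1, h2 (applied as h2 then h1)
y_hat = al_inference(0.0, f, b, h)         # ((0 + 1) * 2 - 3 + 10) / 2 = 4.5
```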
This section introduces details on converting some of the most successful neural network structures into their corresponding AL forms . We discuss vanilla CNN and VGG ( used as the representative models for CNN-based models ) , LSTM ( used as the representative model for RNN-based models ) , and Transformer . We also discuss how to integrate word embeddings into an AL-form network . CNN The AL form of a CNN architecture ( here , vanilla CNN and VGG ) uses fi-s to convert an input image x into latent representations by convolutions , just like a regular CNN or a VGG does . The associated loss at layer i is defined as the distance between bi ( si ) ( transforming the flattened feature map at layer i into ti ’ s shape ) and ti ( the latent representation of y ) . RNN The AL form of an RNN architecture ( here , LSTM and Bi-LSTM ) uses the internal state to iteratively process each element in the input sequence x and generates the new internal state , just like a regular RNN does . After reading the entire input sequence x , we define si , the latent representation of x at layer i , as the final internal state . Consequently , the associated loss is defined as the distance between bi ( si ) ( transforming the final internal state at layer i into ti ’ s shape ) and ti ( the latent representation of y ) . Details are shown in Section 6.2 . Transformer The AL form of a Transformer encodes the input sequence x into a list of vectors , as a regular Transformer does . We define si ( the latent representation of x ) by computing mean-pooling over the encoded vectors . The associated loss is defined as the distance between bi ( si ) ( transforming the mean-pooling output into ti ’ s shape ) and ti ( the latent representation of y ) . Details are shown in Section 6.2 . Word Embedding Word embeddings are frequently used as the input of LSTM or Transformer for NLP tasks . We use mean-pooling to aggregate all tokens ’ word embeddings as si . 2.2 PROPERTIES OF AL . 
This section presents three properties of AL that do not exist in BP : forward shortcuts , dynamic layer accumulation , and pipelines . 2.2.1 FORWARD SHORTCUTS . Forward shortcuts enable faster inference : we can leverage “ shortcut paths ” that exit the network early . As shown in Algorithm 3 , given an integer ℓ ( 1 ≤ ℓ ≤ L ) , the bridge function bℓ can serve as a shortcut to transform sℓ to s′ℓ , which should be close to tℓ when the network is well trained . As a result , we can skip fj , bj , and hj for all j > ℓ to reduce the length of the inference function . In other words , we have multiple inference functions based on a single model . When inference time is critical , we can select a shorter inference path , e.g. , x −f1−→ s1 −b1−→ t1 −h1−→ y . On the other hand , if inference time is unimportant , we can dynamically adjust the model complexity by modifying the number of AL layers used at the inference phase to reduce overfitting or underfitting . | This paper is a continuation of an original associated learning paper by Kao&Chen 2021. It attempts to propose a new learning approach, associated learning, as an alternative to back-propagation. On top of the original paper, it discovers more interesting properties and extends AL to CNN, LSTM and transformers (though lacking sufficient details). | SP:590a081dcfa6a61b6228700092654cf8647ffecd |
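A shortcut path that exits at layer ℓ can be sketched as follows (our illustration of the idea in Algorithm 3, with hypothetical scalar stand-ins for the layers); setting ℓ = L recovers the full inference path.

```python
def al_shortcut_inference(x, f, b, h, ell):
    """Shortcut inference exiting at layer ell (1-based):
    x -> f1..f_ell -> b_ell -> h_ell..h1 -> y_hat; layers j > ell are skipped."""
    s = x
    for fi in f[:ell]:
        s = fi(s)
    t = b[ell - 1](s)            # bridge at the exit layer
    for hi in reversed(h[:ell]):
        t = hi(t)
    return t

# hypothetical scalar stand-ins for a 2-layer AL network
f = [lambda v: v + 1, lambda v: v * 2]
b = [lambda v: v * 10, lambda v: v - 3]
h = [lambda v: v / 2, lambda v: v + 10]
short = al_shortcut_inference(0.0, f, b, h, ell=1)   # f1 -> b1 -> h1 = 5.0
full = al_shortcut_inference(0.0, f, b, h, ell=2)    # the full path = 4.5
```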
Associated Learning: an Alternative to End-to-End Backpropagation that Works on CNN, RNN, and Transformer | 1 INTRODUCTION . Backpropagation ( BP ) is the keystone of modern deep learning . Although BP is the standard way to learn network parameters , it is far from ideal . Some of the most discussed issues of BP are optimization difficulties ( e.g. , vanishing gradients and exploding gradients ( Hochreiter et al. , 2001 ) ) and training performance ( e.g. , backward locking ( Jaderberg et al. , 2017 ) ) . It appears that custom network structures may be needed for different types of learning tasks . Among the various forms , convolutional neural networks ( CNNs ) , recurrent neural networks ( RNNs ) , and Transformer networks ( along with their extensions , e.g. , LSTM ( Hochreiter & Schmidhuber , 1997 ) and VGG ( Simonyan & Zisserman , 2015 ) ) are particularly useful . These networks have been successfully applied in fields as varied as computer vision , natural language processing , signal processing , and others ( Goodfellow et al. , 2016 ; Deng & Yu , 2014 ) . This paper studies a new learning approach—associated learning ( AL ) —an alternative to end-to-end backpropagation learning . AL decomposes BP ’ s global end-to-end training strategy into several small local optimization targets such that each layer has an isolated gradient flow . However , since most layers in AL do not interact with the final loss directly , we would expect AL-trained models to be less accurate than BP-trained ones . Surprisingly , the original AL paper compares AL and BP using the CNN network ( and its extensions , e.g. , VGG ) and shows impressive results based on image classification datasets ( MNIST , CIFAR-10 , and CIFAR-100 ) ( Kao & Chen , 2021 ) . We continue this line of study in two ways . First , we discover more interesting properties of AL . 
Second , we show how to apply AL on different network structures , including VGG ( Simonyan & Zisserman , 2015 ) , LSTM ( Hochreiter & Schmidhuber , 1997 ) , and Transformer ( Vaswani et al. , 2017 ) . We then compare the networks learned via AL and via BP on various tasks ( image classification , sentiment analysis , and topic classification ) based on public datasets ( CIFAR-10 , CIFAR-100 , IMDB movie reviews , AG ’ s News , Stanford Sentiment Treebank ( SST ) , and DBpedia ) . We find that AL consistently outperforms BP on most datasets . Additionally , AL requires fewer epochs than BP when using early stopping but still yields excellent accuracy . These results suggest that AL is a strong alternative to BP , as AL is effective for various tasks and various network structures . The rest of the paper is organized as follows . In Section 2 , we introduce associated learning and its properties . Section 3 presents the experiments , including a comparison of the model accuracies and convergence speed of AL and BP on VGG , LSTM , and Transformer , a generalization test of AL and BP , and an ablation study . Section 4 reviews previous work on backpropagation alternatives and the network structures that may look similar to AL . Finally , we conclude our contribution in Section 5 . 2 ASSOCIATED LEARNING . 2.1 AN OVERVIEW OF AL : AL FORM , TRAINING , AND INFERENCE . To apply AL , we must transform the network into a different structure , which we call the AL form below . Instead of learning a function to map a feature vector x to a target y , the AL form performs metric learning by searching for functions to transform x and y into latent representations such that the distance between these two latent representations is small . Algorithm 1 shows the procedure to convert a neural network into an AL form . We use Figure 1 ( a neural network with 3 hidden layers ) as an example to show the process . 
We can regard functions f1 , f2 , f3 as the encoders to convert the input feature vector into a latent representation and clf as a classifier to transform the latent representation into a target . When converting this network to its AL form ( referring to Figure 2 ) , we only keep the encoders fi-s and extend the model architecture by adding a bridge function bi and an autoencoder for each layer i . Let si−1 and ti−1 be the inputs of layer i ( assuming s0 := x and t0 := y ) ; functions fi and gi convert si−1 and ti−1 into latent representations si and ti . The function fi can be any type of forwarding block , such as a convolution layer , an LSTM layer , or a residual block in ResNet . Each bridge function bi projects si ( the latent representation of x at layer i ) to be close to ti ( the latent representation of y at layer i ) , i.e. , bi ( fi ( si−1 ) ) ≈ gi ( ti−1 ) . Meanwhile , gi not only extracts the latent representation of y , but also serves as the encoder for an autoencoder such that hi ( gi ( ti−1 ) ) ≈ ti−1 ( hi is the decoder ) . For all the AL-form networks in our experiments , each of the functions gi and hi is constructed by a linear transform followed by a non-linear activation function ( e.g. , tanh ) , and bi is a linear transformation of the input vector or input matrix . We discuss the number of neurons for the input , output , and hidden layers of the autoencoders in Section 6.9 . To summarize , we minimize the following objective for each layer i during training : LiAL = LiA + LiAE = ‖bi(fi(si−1)) − gi(ti−1)‖² + ‖hi(gi(ti−1)) − ti−1‖² , ( 3 ) where LiA denotes the associated loss of layer i , and LiAE indicates the autoencoder loss of layer i . Algorithm 2 gives the training algorithm for AL . Since each layer has its own objective function ( Equation 3 ) , we can update the parameters of different layers in parallel using a pipeline to increase the training throughput ( details in Section 2.2.3 ) . 
For inference , if an AL network has L layers , an input x goes through f1 , . . . , fL to generate x ’ s latent representation ( sL ) , which is transformed to y ’ s latent representation ( tL ) via bL . Next , tL is converted to y via hL , . . . , h1 . The autoencoders ’ encoding functions gi-s are not used during inference . Taking Figure 2 as an example , the inference path is ( f1 −→ f2 −→ f3 −→ b3 −→ h3 −→ h2 −→ h1 ) . Although it could be unintuitive to concatenate two unconnected transformation functions as an inference path ( i.e. , hi+1 followed by hi , i = 1 , 2 , 3 ) , it is valid because hi+1 ( ti+1 ) = t′i ≈ ti .

Algorithm 1 Converting a neural network to its AL form
Input : a neural network N = ( f1 , f2 , ... , fL ) , target y ∈ R1 , feature vector x ∈ Rm
Output : an AL network A
1 : s0 ← x ; t0 ← y
2 : for i = 1 to L do
3 :     si = fi ( si−1 )
4 :     insert a function gi , where ti = gi ( ti−1 )
5 :     insert a function bi , where s′i = bi ( si ) ▷ adding a bridge function
6 :     insert a function hi , where t′i−1 = hi ( ti ) ▷ adding a decoder for an autoencoder
7 :     LiAL = MSE ( s′i , ti ) + MSE ( ti−1 , t′i−1 ) ▷ associated loss and autoencoder loss
8 : return A = ( a1 , . . . , aL ) , where ai = { fi , bi , gi , hi } for i = 1 , . . . , L

Algorithm 2 Training an AL network
Input : an AL network A = ( a1 , . . . , aL ) with ai = { fi , bi , gi , hi } , training features and targets ( X , Y )
Output : a fine-tuned AL network A
1 : repeat
2 :     Sample x , y from X , Y .
3 :     s0 ← x ; t0 ← y
4 :     for i = 1 to L do
5 :         si ← fi ( si−1 )
6 :         ti ← gi ( ti−1 )
7 :         s′i ← bi ( si )
8 :         t′i−1 ← hi ( ti )
9 :         LiA ← MSE ( s′i , ti ) ▷ associated loss
10 :        LiAE ← MSE ( ti−1 , t′i−1 ) ▷ autoencoder loss
11 :        Update the parameters of fi , bi , gi according to ∇LiA
12 :        Update the parameters of gi , hi according to ∇LiAE
13 : until converged

2.1.1 APPLYING AL TO DIFFERENT NEURAL NETWORK STRUCTURES . 
This section introduces details on converting some of the most successful neural network structures into their corresponding AL forms . We discuss vanilla CNN and VGG ( used as the representative models for CNN-based models ) , LSTM ( used as the representative model for RNN-based models ) , and Transformer . We also discuss how to integrate word embeddings into an AL-form network . CNN The AL form of a CNN architecture ( here , vanilla CNN and VGG ) uses fi-s to convert an input image x into latent representations by convolutions , just like a regular CNN or a VGG does . The associated loss at layer i is defined as the distance between bi ( si ) ( transforming the flattened feature map at layer i into ti ’ s shape ) and ti ( the latent representation of y ) . RNN The AL form of an RNN architecture ( here , LSTM and Bi-LSTM ) uses the internal state to iteratively process each element in the input sequence x and generates the new internal state , just like a regular RNN does . After reading the entire input sequence x , we define si , the latent representation of x at layer i , as the final internal state . Consequently , the associated loss is defined as the distance between bi ( si ) ( transforming the final internal state at layer i into ti ’ s shape ) and ti ( the latent representation of y ) . Details are shown in Section 6.2 . Transformer The AL form of a Transformer encodes the input sequence x into a list of vectors , as a regular Transformer does . We define si ( the latent representation of x ) by computing mean-pooling over the encoded vectors . The associated loss is defined as the distance between bi ( si ) ( transforming the mean-pooling output into ti ’ s shape ) and ti ( the latent representation of y ) . Details are shown in Section 6.2 . Word Embedding Word embeddings are frequently used as the input of LSTM or Transformer for NLP tasks . We use mean-pooling to aggregate all tokens ’ word embeddings as si . 2.2 PROPERTIES OF AL . 
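The Transformer-style definition of si via mean-pooling can be sketched in a few lines; the widths and the linear bridge below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sequence_to_si(encoded):
    """encoded: (seq_len, d_model) token vectors from one Transformer layer;
    s_i is the mean-pooling over the sequence dimension."""
    return encoded.mean(axis=0)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 16))      # 5 token vectors of width 16 (toy widths)
s_i = sequence_to_si(tokens)           # shape (16,)
W_b = rng.normal(size=(16, 4))         # bridge b_i as a linear map into t_i's shape
s_i_bridged = s_i @ W_b                # shape (4,), compared to t_i in the associated loss
```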
This section presents three properties of AL that do not exist in BP : forward shortcuts , dynamic layer accumulation , and pipelines . 2.2.1 FORWARD SHORTCUTS . Forward shortcuts enable faster inference : we can leverage “ shortcut paths ” that exit the network early . As shown in Algorithm 3 , given an integer ℓ ( 1 ≤ ℓ ≤ L ) , the bridge function bℓ can serve as a shortcut to transform sℓ to s′ℓ , which should be close to tℓ when the network is well trained . As a result , we can skip fj , bj , and hj for all j > ℓ to reduce the length of the inference function . In other words , we have multiple inference functions based on a single model . When inference time is critical , we can select a shorter inference path , e.g. , x −f1−→ s1 −b1−→ t1 −h1−→ y . On the other hand , if inference time is unimportant , we can dynamically adjust the model complexity by modifying the number of AL layers used at the inference phase to reduce overfitting or underfitting . | This paper proposes associated learning (AL) for CNN, RNN, and transformer. Different from back-propagation (BP), AL decomposes BP’s global end-to-end training strategy into several small local optimization targets such that each sub-network has an isolated gradient flow. To achieve this, the paper proposes to map input $x$ and output $y$ into intermediate AL layers and performs metric learning (e.g., $t_1=b_1(s_1)$) and auto-encoder learning ($t_1=t_1'$), as shown in Figure 2. Moreover, each AL layer can be optimized locally. The idea is interesting. The experiments demonstrate the effectiveness on several datasets (IMDB Review, AG’s News corpus, DBpedia Ontology, the Stanford Sentiment Treebank, CIFAR10, and Fashion-MNIST). | SP:590a081dcfa6a61b6228700092654cf8647ffecd |
Diverse and Consistent Multi-view Networks for Semi-supervised Regression | 1 INTRODUCTION . Deep neural networks have achieved tremendous success across several domains , ranging from computer vision and natural language processing to audio analysis ( LeCun et al. , 2015 ) . However , training neural networks that perform well typically requires a large amount of labeled data . In many cases , this requirement for a large labeled dataset presents a challenge , because the annotation process can be labour-intensive and thus expensive , especially when specialized expertise is required . To address this challenge , semi-supervised learning methods ( Van Engelen & Hoos , 2020 ) that can achieve similarly high performance with less labeled data by using unlabeled data have been developed . We focus on semi-supervised learning in the regression setting . There are several approaches for semi-supervised regression , including graph-based methods ( Zhu & Ghahramani , 2002 ) , co-training ( Blum & Mitchell , 1998 ) and entropy minimization ( Jean et al. , 2018 ) . Consistency-based approaches that have been popular in the classification setting , such as Mean Teacher ( Tarvainen & Valpola , 2017 ) and Virtual Adversarial Training ( Miyato et al. , 2018 ) , which reinforce the output consistency of the network under input perturbations , have also been adapted to the regression setting ( Jean et al. , 2018 ) . However , enforcing consistency alone may not be sufficient for good performance , and may lead to model collapse ( Qiao et al. , 2018 ) or confirmation bias issues ( Ke et al. , 2019 ) . To address these issues , we draw inspiration from ensemble learning with neural networks for regression . A necessary and sufficient condition for an ensemble of learners to be more accurate than any of its individual members is that the base learners are accurate and diverse ( Dietterich , 2000 ) .
Therefore , the key component that can make or break an ensemble is the diversity ( or disagreement ) among its individual regressors . If this diversity is insufficient , the ensembling may not result in better performance . On the other hand , overemphasizing diversity can degrade the learnability of the ensemble members . So far , the most successful mechanism to leverage ensemble diversity in regression is Negative Correlation Learning ( Liu & Yao , 1999 ; Zhang et al. , 2019 ) . In this work , we propose Diverse and Consistent Multi-view Networks for Semi-supervised Regression ( DiCoM ) that elegantly unifies consistency and diversity in a multi-view learning framework . Based on probabilistic graphical assumptions , we derive a loss function that integrates both consistency and diversity components – diversity is encouraged on labeled data , while consistency is enforced on unlabeled data . Furthermore , we develop two variants of DiCoM : the first uses multiple networks to achieve better performance , while the second employs a single network with multiple branches to help with scalability . We compare DiCoM against state-of-the-art methods on eight tabular datasets and a crowd-counting dataset , where we show that DiCoM outperforms existing methods . We further perform ablation studies to analyze the importance of diversity and consistency , and the effect of varying the number of views in the model . While other works have leveraged related ideas of complementary and consensus in multi-view classification ( Xu et al. , 2013 ) ; or explored commonality and individuality in multi-modal curriculum learning ( Gong , 2017 ) , these methods were developed for classification or clustering tasks , and can not be easily modified to suit semi-supervised regression . In summary , the major contributions of this work are as follows : • We derive a novel objective function from a probabilistic graphical perspective . 
Our objective function unifies multi-view diversity and consistency and provides theoretical insights into the relationship between the diversity-consistency trade-off and the number of views . • We show the high flexibility of DiCoM , which can be adaptively scaled up to a larger number of views while maintaining competitive performance . • We demonstrate the performance of DiCoM on both tabular and visual types of input data , where it outperforms competing methods . Our ablation studies validate the importance of having both diversity and consistency . 2 RELATED WORK . Semi-supervised Regression : Semi-supervised learning is a data-efficient learning paradigm that offers the ability to learn from unlabeled data . In recent years , much work has focused on semi-supervised classification , and there have been far fewer studies on semi-supervised regression . For regression tasks , graph-based methods are among the first to be developed . One example is Label Propagation ( LP ) ( Zhu & Ghahramani , 2002 ) , which defines a graph of training data and propagates ground-truth labels through high-density regions of the graph . Kernel methods have also been proposed , such as Semi-supervised Deep Kernel Learning ( SSDKL ) ( Jean et al. , 2018 ) . This method minimizes the predictive variance in a posterior regularization framework to learn a more generalizable feature embedding on unlabeled data . Co-training regressors ( COREG ) ( Zhou & Li , 2005 ) employs k-nearest neighbor regressors , each of which generates pseudo-labels for the other during training ; this helps to maximize their agreement on unlabeled data . Apart from the aforementioned approaches , consistency-based methods are also gaining traction . Mean Teacher ( MT ) ( Tarvainen & Valpola , 2017 ) enforces posterior consistency between two neural networks , a student and a teacher , the latter being an exponential moving average of the former in the parameter space .
An orthogonal approach is to enforce consistency on adversarially augmented input , as implemented in Virtual Adversarial Training ( VAT ) ( Miyato et al. , 2018 ) . These methods were originally developed for classification , and were subsequently adapted to regression tasks ( Jean et al. , 2018 ) . However , both MT and VAT maintain only a single trainable network , which may lead to problems such as confirmation bias ( Ke et al. , 2019 ) and overly-sensitive hyperparameters . In this paper , we show that consistency-based methods can be further improved with ensemble diversity . Ensemble Diversity : Ensembles of neural networks have been extensively studied and widely used in many applications . Their effectiveness largely depends on the level of diversity ( or disagreement ) among members of the ensemble . It is well-understood that a good ensemble must manage the trade-off between the accuracy of the individual learners and the diversity among them ( Brown et al. , 2005 ; Tang et al. , 2006 ) . For regression tasks , a commonly-used ensemble technique is Negative Correlation Learning ( NCL ) ( Liu & Yao , 1999 ; Liu et al. , 2000 ) , which formulates a diversity-promoting loss using an ambiguity decomposition of the squared ensemble loss ( Krogh et al. , 1995 ) . In this formulation , a correlation penalty term ( also referred to as an ambiguity term ) measures how much each member ’ s prediction deviates from the ensemble output . When this penalty term is maximized , the errors of individual learners become negatively correlated . It was theoretically proven ( Brown et al. , 2005 ) that the strategy employed by NCL is equivalent to leveraging a bias-variance-covariance trade-off ( Ueda & Nakano , 1996 ) of the ensemble error . Recently , NCL has been extended to semi-supervised learning ( Chen et al. , 2018 ) , where the correlation penalty term is extended to the unlabeled data . However , this method was demonstrated only on tabular data .
Another variant of NCL is Deep Negative Correlation Learning ( DNCL ) ( Shi et al. , 2018 ; Zhang et al. , 2019 ) , which is designed for visual regression tasks in a purely supervised learning setting . Multi-view Learning : A dataset is considered as having multiple views when its data samples are represented by more than one feature set , each of which is sufficient for the learning task . Although each view is supposed to be sufficient for learning the task , a model trained on only one single view often faces the risk of overfitting , especially when labeled data is limited ( Xu et al. , 2013 ) . To address this problem , multi-view learning assigns a modeling function to each view and jointly optimizes these functions to improve overall generalization performance ( Zhao et al. , 2017 ) . By analyzing the development of various techniques , Xu et al . ( 2013 ) summarized two significant principles that underpin multi-view learning : consensus and complementary . The consensus principle states that a multi-view technique must aim at maximizing the agreement on different views . This is similar to how consistency-based semi-supervised learning methods work : for instance , MT enforces agreement with its past self . The complementary principle states that in order to make improvements , each view must contain some information that the other views do not carry . In other words , the views should be sufficiently diverse . This is related to diversity regularization in ensemble learning , where individual learners are encouraged to give diverse predictions . Thus , multi-view learning offers a unifying perspective of both consistency and diversity . 3 PROPOSED METHOD . We start by describing how multiple deep views can be generated from input data . Then , we propose our multi-view learning framework for regression , in which multiple deep views can be simultaneously optimized via backpropagation .
We then discuss the graphical models that govern the probabilistic dependencies among the ground-truth label and the deep views . Finally , we derive the DiCoM loss function using these graphical models and provide a few insights . View creation : Consider a regression task where the goal is to estimate a label $y \in \mathbb{R}$ from an input x . To create multiple views , our first approach is to use M neural networks $F_1 , F_2 , \dots , F_M$ , each parameterized by $\theta_1 , \theta_2 , \dots , \theta_M$ , respectively . By applying different data augmentations $\eta_1 , \eta_2 , \dots , \eta_M$ to the original x , we generate M different augmented inputs $x_m = \eta_m ( x ) \; \forall m = 1 , \dots , M$ . With each augmented input , the corresponding neural network produces a regression output $f_m ( x ) = F_m ( x_m , \theta_m ) \; \forall m = 1 , \dots , M$ . Due to the different augmentations and network parameters , each output $f_m$ can be treated as one deep view of the original input x . We call this multi-network setup DiCoM-N , where the N stands for ‘ network ’ ( see Fig . 1 ( a ) ) . The second approach is to utilize a single network with a shared backbone B and multiple branches $F_1 , F_2 , \dots , F_M$ . We use $\theta_B$ to denote the learnable parameters of the backbone and $\theta_1 , \theta_2 , \dots , \theta_M$ to denote the parameters of the branches . The hidden features generated by the backbone serve as input to the branches . While the backbone still applies a random augmentation to the input x , each branch $F_m$ applies its own random augmentation $\eta_m$ as well . Thus , the regression outputs $f_1 , f_2 , \dots , f_M$ from the branches can be considered as deep views of the original input . We name this setup DiCoM-B , where the B is short for ‘ branch ’ ( see Fig . 1 ( b ) ) . The multi-branch technique has been widely adopted in supervised classification ( Xie et al. , 2017 ) . In this work , it allows us to harness the power of multi-view learning with a relatively small number of trainable parameters .
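A minimal sketch of the two view-creation schemes , assuming toy linear networks and Gaussian-noise augmentations ( all names and shapes here are hypothetical ) :

```python
import numpy as np

rng = np.random.default_rng(2)
M, d = 4, 10

def augment(x, m):
    # Stand-in augmentation eta_m: additive noise with a per-view seed.
    r = np.random.default_rng(m)
    return x + 0.1 * r.normal(size=x.shape)

# DiCoM-N: M separate (toy, linear) networks, one per view.
thetas = [rng.normal(size=(d, 1)) for _ in range(M)]
def views_N(x):
    return [augment(x, m) @ thetas[m] for m in range(M)]

# DiCoM-B: one shared backbone plus M lightweight branches.
theta_B = rng.normal(size=(d, d))
branches = [rng.normal(size=(d, 1)) for _ in range(M)]
def views_B(x):
    h = np.tanh(augment(x, 0) @ theta_B)          # shared backbone features
    return [augment(h, m) @ branches[m] for m in range(M)]

x = rng.normal(size=(8, d))           # a batch of 8 samples
fN, fB = views_N(x), views_B(x)
print(len(fN), fN[0].shape)           # M views, each a column of predictions
```

DiCoM-B shares the backbone parameters across views , which is why it scales to more views with far fewer trainable parameters than DiCoM-N .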
Multi-view learning framework for regression : Regardless of how they are generated , the deep views are used together with the true label y to compute a semi-supervised loss function $\mathcal{L}_{\text{DiCoM}}$ . During the training phase , $\mathcal{L}_{\text{DiCoM}}$ is back-propagated simultaneously through the deep views to optimize the network parameters $\theta_1 , \theta_2 , \dots , \theta_M$ ( including $\theta_B$ in the case of DiCoM-B ) . During inference , all augmentations are removed so that the forward pass is applied on the raw input x . The final prediction is computed as the average of all deep views : $\mu ( x ) = \frac{1}{M} \sum_{m=1}^{M} f_m ( x )$ . The general DiCoM framework is illustrated in Fig . 1 . In the next step , we derive $\mathcal{L}_{\text{DiCoM}}$ based on a probabilistic graphical assumption . Probabilistic graphical models : Since the augmented inputs are generated from the same sample , the deep views should be close to each other . Motivated by previous work in kernel learning ( Yu et al. , 2011 ) and linear regression ( Nguyen et al. , 2019 ) , we consider $f_1 , f_2 , \dots , f_M$ as random variables and introduce a consensus function $f_c$ as a latent variable that connects to each of the deep views . This function enforces the mutual agreement among the views . We assume that the difference between the consensus function and each view follows a zero-mean Gaussian distribution $f_c - f_m \sim \mathcal{N} ( 0 , \sigma_m^2 ) \; \forall m = 1 , \dots , M$ . ( 1 ) This probabilistic relation is known as the consensus potential ( Yu et al. , 2011 ) . Considering the whole graph , this potential implies that all views are random Gaussian variables with a shared mean $f_c$ and variance $\sigma_m^2$ . As a result , the views stay consistent w.r.t . each other by taking values not too far away from the shared consensus . This graphical model , shown in Fig . 3 ( a ) , is assumed for each unlabeled sample . The joint density associated with the graph is given by $p ( f_c , f_1 , \dots , f_M ) = \frac{1}{Z} \prod_{m=1}^{M} \Psi ( f_c , f_m )$ ( 2 ) where Z is a normalizing constant and $\Psi ( f_c , f_m ) = \exp\left[ -\frac{ ( f_c - f_m )^2 }{ 2\sigma_m^2 } \right]$ is the potential function of the edge connecting $f_c$ and $f_m$ . From this model , we derive two important results . On a side note , our derivation generalizes to vector-valued labels y , but here we assume scalar labels for ease of exposition . The proofs of our results are provided in Appendix A . ( I ) Marginalization of the views : By integrating the latent consensus function $f_c$ out of the joint density , the marginal distribution of the views is $p ( f_1 , \dots , f_M ) \propto \exp\left[ \sum_{m=1}^{M} \sum_{k > m} -\lambda_{m,k} ( f_m - f_k )^2 \right]$ ( 3 ) where $\lambda_{m,k} = \left[ 2\sigma_m^2 \sigma_k^2 \left( \sum_{m} \frac{1}{\sigma_m^2} \right) \right]^{-1}$ . This result implies that the marginal likelihood can be factorized as a product of $\binom{M}{2}$ terms . Each term is an isotropic Gaussian distribution on the difference between a pair of views $( f_m , f_k )$ , with zero mean and variance $( 2\lambda_{m,k} )^{-1}$ . The equivalent graphical model is shown in Fig . 3 ( b ) . ( II ) Conditional of the consensus function : By applying Bayes ’ theorem , the conditional distribution of the consensus function $f_c$ given all the views is a Gaussian $f_c | f_1 , \dots , f_M \sim \mathcal{N} ( \tilde{\mu} , \sigma_\mu^2 )$ ( 4 ) where $\sigma_\mu^2 = \left( \sum_{m} \frac{1}{\sigma_m^2} \right)^{-1}$ and $\tilde{\mu} = \sigma_\mu^2 \sum_{m} \frac{f_m}{\sigma_m^2}$ . This result highlights that the conditional distribution of $f_c$ depends only on the weighted average $\tilde{\mu}$ , and the values of individual views are not required . Furthermore , $\tilde{\mu}$ can be treated as a view itself , with a variance that is smaller than any of the variances of the views . Derivation of the DiCoM loss function : For simplicity , we assume equal variance for the different deep views , i.e. , $\sigma_m^2 = \sigma_v^2 \; \forall m$ .
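Result ( II ) is easy to verify numerically ; the values below are arbitrary , chosen only to illustrate the precision-weighted averaging :

```python
import numpy as np

# Numerically illustrate result (II): the conditional mean of f_c is the
# precision-weighted average of the views, and its variance is smaller
# than any single view's variance.
f = np.array([1.0, 2.0, 4.0])          # views f_1..f_M
var = np.array([0.5, 1.0, 2.0])        # sigma_m^2 per view

var_mu = 1.0 / np.sum(1.0 / var)       # sigma_mu^2 = (sum_m 1/sigma_m^2)^-1
mu_tilde = var_mu * np.sum(f / var)    # precision-weighted average

print(var_mu, mu_tilde)
assert var_mu < var.min()              # tighter than every individual view

# With equal variances the weighted average reduces to the plain mean,
# which is exactly the simplification used later in equation (6).
var_eq = np.full(3, 1.5)
mu_eq = (1.0 / np.sum(1.0 / var_eq)) * np.sum(f / var_eq)
assert np.isclose(mu_eq, f.mean())
```
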
For an unlabeled sample $( x_n )$ , we directly apply the first result ( I ) to obtain the following negative log likelihood function $\mathcal{L}_{\text{unl}} = \sum_{m=1}^{M} \sum_{k > m} \frac{1}{2M\sigma_v^2} \left[ f_m ( x_n ) - f_k ( x_n ) \right]^2$ ( 5 ) For a labeled sample $( x_n , y_n )$ , since the ground-truth is given , we assume a graphical model that involves the final DiCoM prediction , i.e. , the averaged output $\mu$ . This graph is shown in Fig . 3 ( c ) . Since we assume a shared variance $\sigma_v^2$ , the weighted output now reduces to an equal-weight average , following from result ( II ) : $\tilde{\mu} ( x_n ) = \sum_{m=1}^{M} \frac{f_m ( x_n )}{M} = \mu ( x_n ) , \quad \sigma_\mu^2 = \frac{\sigma_v^2}{M}$ ( 6 ) Subsequently , we apply result ( I ) on this graph to get the negative log likelihood as follows $\mathcal{L}_{\text{lab}} = \frac{M}{2M\sigma_y^2 + 2\sigma_v^2} \left[ y_n - \mu ( x_n ) \right]^2$ ( 7 ) $= \frac{1}{2M\sigma_y^2 + 2\sigma_v^2} \sum_{m=1}^{M} \left\{ \left[ f_m ( x_n ) - y_n \right]^2 - \left[ f_m ( x_n ) - \mu ( x_n ) \right]^2 \right\}$ ( 8 ) $\approx \frac{1}{2\sigma_v^2} \sum_{m=1}^{M} \left\{ \left[ f_m ( x_n ) - y_n \right]^2 - \left[ f_m ( x_n ) - \mu ( x_n ) \right]^2 \right\}$ ( 9 ) where in equation ( 8 ) we have applied the ambiguity decomposition ( Krogh et al. , 1995 ) and in equation ( 9 ) we have assumed that the label is accurate , i.e. , $\sigma_y^2 \ll \sigma_v^2$ . Given a training batch of labeled samples $\{ ( x_n , y_n ) \}_{n=1}^{L}$ and unlabeled samples $\{ ( x_n ) \}_{n=1}^{U}$ , assuming that the samples are independently generated , we can add the negative log-likelihood functions across all training samples . This can be done by simply adding up the two equations ( 5 ) and ( 9 ) : $\mathcal{L}_{\text{DiCoM}} = \frac{1}{L} \sum_{n=1}^{L} \sum_{m=1}^{M} \left\{ \left[ f_m ( x_n ) - y_n \right]^2 - \kappa_{\text{div}} \left[ f_m ( x_n ) - \mu ( x_n ) \right]^2 \right\} + \frac{1}{U} \sum_{n=1}^{U} \sum_{m=1}^{M} \sum_{k > m} \kappa_{\text{csc}} \left[ f_m ( x_n ) - f_k ( x_n ) \right]^2$ ( 10 ) where we introduce two hyperparameters $\kappa_{\text{div}}$ and $\kappa_{\text{csc}}$ to absorb other constants and to enable a trade-off between the diversity and consistency components of the loss . The DiCoM loss encourages diversity on labeled data , while enforcing consistency on unlabeled data . These two seemingly opposing components can both be derived from the same underlying graphical assumptions . Furthermore , they should not be weighted equally .
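A possible NumPy rendering of the loss in equation ( 10 ) , assuming the view outputs are precomputed and stacked per sample ( the function name and array layout are our own , not from the paper ) :

```python
import numpy as np

def dicom_loss(f_lab, y, f_unl, k_div=0.5, k_csc=1.0):
    """Sketch of equation (10). f_lab: (L, M) view outputs on labeled
    samples, y: (L,) labels, f_unl: (U, M) view outputs on unlabeled
    samples. k_div, k_csc are the trade-off hyperparameters."""
    mu = f_lab.mean(axis=1, keepdims=True)            # ensemble average per sample
    sup = (f_lab - y[:, None]) ** 2                   # accuracy term
    div = (f_lab - mu) ** 2                           # ambiguity/diversity term
    lab = np.mean(np.sum(sup - k_div * div, axis=1))  # (1/L) sum_n sum_m {...}
    # Consistency: squared pairwise differences, each pair (m, k>m) counted once.
    diff = f_unl[:, :, None] - f_unl[:, None, :]      # (U, M, M)
    pair = np.triu(diff ** 2, k=1).sum(axis=(1, 2))   # strict upper triangle
    unl = k_csc * np.mean(pair)                       # (1/U) sum_n sum_{k>m} {...}
    return lab + unl

rng = np.random.default_rng(3)
f_lab = rng.normal(size=(5, 4)); y = rng.normal(size=5)
f_unl = rng.normal(size=(7, 4))
print(dicom_loss(f_lab, y, f_unl))
```

When all views agree and match the labels , every term vanishes ; in training , the diversity term is subtracted on labeled data while the consistency term is added on unlabeled data , exactly the opposing pressures the text describes .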
In fact , we have shown that the appropriate weighting depends on the number of views : when M increases , the diversity component grows in $O ( M )$ , while the consistency component grows in $O ( M^2 )$ . It is worth noting that our method is fundamentally different from other extensions of NCL such as Semi-supervised NCL ( Chen et al. , 2018 ) , which enforces diversity on both labeled and unlabeled data . Last but not least , since both diversity and consistency are incorporated in the DiCoM objective function , the method is highly adaptable to different implementations such as multi-network or multi-branch , as long as the views are provided . | This paper proposed a method for **semi-supervised multi-view regression** based on **data augmentation** and an **undirected graphical model**. The authors derived a "**consistency**" term for unlabeled data and a "**diversity**" term for labeled data based on their log-likelihoods and combined them linearly with two hyperparameters. The proposed method was evaluated on tabular and image data and analyzed via an ablation study. | SP:da67860e703f8b08a84c6ba79fe3a32b33a187ad |
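The growth rates quoted above can be sanity-checked by counting loss terms per sample : the labeled part of equation ( 10 ) contributes M diversity terms , while the unlabeled part contributes one consistency term per pair of views , i.e. $\binom{M}{2}$ :

```python
from math import comb

# Per-sample term counts in equation (10): M diversity terms (one per view)
# versus C(M, 2) consistency terms (one per pair of views), so the ratio
# between consistency and diversity grows linearly with M.
for M in (2, 4, 8, 16):
    print(M, M, comb(M, 2))   # views, diversity terms, consistency terms
```
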
Diverse and Consistent Multi-view Networks for Semi-supervised Regression | 1 INTRODUCTION . Deep neural networks have achieved tremendous success across several domains , ranging from computer vision and natural language processing to audio analysis ( LeCun et al. , 2015 ) . However , training neural networks that perform well typically requires a large amount of labeled data . In many cases , this requirement for a large labeled dataset presents a challenge , because the annotation process can be labour-intensive and thus expensive , especially when specialized expertise is required . To address this challenge , semi-supervised learning methods ( Van Engelen & Hoos , 2020 ) that can achieve similarly high performance with less labeled data by using unlabeled data have been developed . We focus on semi-supervised learning in the regression setting . There are several approaches for semi-supervised regression , including graph-based methods ( Zhu & Ghahramani , 2002 ) , co-training ( Blum & Mitchell , 1998 ) and entropy minimization ( Jean et al. , 2018 ) . Consistency-based approaches that have been popular in the classification setting , such as Mean Teacher ( Tarvainen & Valpola , 2017 ) and Virtual Adversarial Training ( Miyato et al. , 2018 ) , which reinforce the output consistency of the network under input perturbations , have also been adapted to the regression setting ( Jean et al. , 2018 ) . However , enforcing consistency alone may not be sufficient for good performance , and may lead to model collapse ( Qiao et al. , 2018 ) or confirmation bias issues ( Ke et al. , 2019 ) . To address these issues , we draw inspiration from ensemble learning with neural networks for regression . A necessary and sufficient condition for an ensemble of learners to be more accurate than any of its individual members is that the base learners are accurate and diverse ( Dietterich , 2000 ) .
Therefore , the key component that can make or break an ensemble is the diversity ( or disagreement ) among its individual regressors . If this diversity is insufficient , the ensembling may not result in better performance . On the other hand , overemphasizing diversity can degrade the learnability of the ensemble members . So far , the most successful mechanism to leverage ensemble diversity in regression is Negative Correlation Learning ( Liu & Yao , 1999 ; Zhang et al. , 2019 ) . In this work , we propose Diverse and Consistent Multi-view Networks for Semi-supervised Regression ( DiCoM ) that elegantly unifies consistency and diversity in a multi-view learning framework . Based on probabilistic graphical assumptions , we derive a loss function that integrates both consistency and diversity components – diversity is encouraged on labeled data , while consistency is enforced on unlabeled data . Furthermore , we develop two variants of DiCoM : the first uses multiple networks to achieve better performance , while the second employs a single network with multiple branches to help with scalability . We compare DiCoM against state-of-the-art methods on eight tabular datasets and a crowd-counting dataset , where we show that DiCoM outperforms existing methods . We further perform ablation studies to analyze the importance of diversity and consistency , and the effect of varying the number of views in the model . While other works have leveraged related ideas of complementary and consensus in multi-view classification ( Xu et al. , 2013 ) ; or explored commonality and individuality in multi-modal curriculum learning ( Gong , 2017 ) , these methods were developed for classification or clustering tasks , and can not be easily modified to suit semi-supervised regression . In summary , the major contributions of this work are as follows : • We derive a novel objective function from a probabilistic graphical perspective . 
Our objective function unifies multi-view diversity and consistency and provides theoretical insights into the relationship between the diversity-consistency trade-off and the number of views . • We show the high flexibility of DiCoM , which can be adaptively scaled up to a larger number of views while maintaining competitive performance . • We demonstrate the performance of DiCoM on both tabular and visual types of input data , where it outperforms competing methods . Our ablation studies validate the importance of having both diversity and consistency . 2 RELATED WORK . Semi-supervised Regression : Semi-supervised learning is a data-efficient learning paradigm that offers the ability to learn from unlabeled data . In recent years , much work has focused on semi-supervised classification , and there have been far fewer studies on semi-supervised regression . For regression tasks , graph-based methods are among the first to be developed . One example is Label Propagation ( LP ) ( Zhu & Ghahramani , 2002 ) , which defines a graph of training data and propagates ground-truth labels through high-density regions of the graph . Kernel methods have also been proposed , such as Semi-supervised Deep Kernel Learning ( SSDKL ) ( Jean et al. , 2018 ) . This method minimizes the predictive variance in a posterior regularization framework to learn a more generalizable feature embedding on unlabeled data . Co-training regressors ( COREG ) ( Zhou & Li , 2005 ) employs k-nearest neighbor regressors , each of which generates pseudo-labels for the other during training ; this helps to maximize their agreement on unlabeled data . Apart from the aforementioned approaches , consistency-based methods are also gaining traction . Mean Teacher ( MT ) ( Tarvainen & Valpola , 2017 ) enforces posterior consistency between two neural networks , a student and a teacher , the latter being an exponential moving average of the former in the parameter space .
An orthogonal approach is to enforce consistency on adversarially augmented input , as implemented in Virtual Adversarial Training ( VAT ) ( Miyato et al. , 2018 ) . These methods were originally developed for classification , and were subsequently adapted to regression tasks ( Jean et al. , 2018 ) . However , both MT and VAT maintain only a single trainable network , which may lead to problems such as confirmation bias ( Ke et al. , 2019 ) and overly-sensitive hyperparameters . In this paper , we show that consistency-based methods can be further improved with ensemble diversity . Ensemble Diversity : Ensembles of neural networks have been extensively studied and widely used in many applications . Their effectiveness largely depends on the level of diversity ( or disagreement ) among members of the ensemble . It is well-understood that a good ensemble must manage the trade-off between the accuracy of the individual learners and the diversity among them ( Brown et al. , 2005 ; Tang et al. , 2006 ) . For regression tasks , a commonly-used ensemble technique is Negative Correlation Learning ( NCL ) ( Liu & Yao , 1999 ; Liu et al. , 2000 ) , which formulates a diversity-promoting loss using an ambiguity decomposition of the squared ensemble loss ( Krogh et al. , 1995 ) . In this formulation , a correlation penalty term ( also referred to as an ambiguity term ) measures how much each member ’ s prediction deviates from the ensemble output . When this penalty term is maximized , the errors of individual learners become negatively correlated . It was theoretically proven ( Brown et al. , 2005 ) that the strategy employed by NCL is equivalent to leveraging a bias-variance-covariance trade-off ( Ueda & Nakano , 1996 ) of the ensemble error . Recently , NCL has been extended to semi-supervised learning ( Chen et al. , 2018 ) , where the correlation penalty term is extended to the unlabeled data . However , this method was demonstrated only on tabular data .
Another variant of NCL is Deep Negative Correlation Learning ( DNCL ) ( Shi et al. , 2018 ; Zhang et al. , 2019 ) , which is designed for visual regression tasks in a purely supervised learning setting . Multi-view Learning : A dataset is considered as having multiple views when its data samples are represented by more than one feature set , each of which is sufficient for the learning task . Although each view is supposed to be sufficient for learning the task , a model trained on only one single view often faces the risk of overfitting , especially when labeled data is limited ( Xu et al. , 2013 ) . To address this problem , multi-view learning assigns a modeling function to each view and jointly optimizes these functions to improve overall generalization performance ( Zhao et al. , 2017 ) . By analyzing the development of various techniques , Xu et al . ( 2013 ) summarized two significant principles that underpin multi-view learning : consensus and complementary . The consensus principle states that a multi-view technique must aim at maximizing the agreement on different views . This is similar to how consistency-based semi-supervised learning methods work : for instance , MT enforces agreement with its past self . The complementary principle states that in order to make improvements , each view must contain some information that the other views do not carry . In other words , the views should be sufficiently diverse . This is related to diversity regularization in ensemble learning , where individual learners are encouraged to give diverse predictions . Thus , multi-view learning offers a unifying perspective of both consistency and diversity . 3 PROPOSED METHOD . We start by describing how multiple deep views can be generated from input data . Then , we propose our multi-view learning framework for regression , in which multiple deep views can be simultaneously optimized via backpropagation .
We then discuss the graphical models that govern the probabilistic dependencies among the ground-truth label and the deep views . Finally , we derive the DiCoM loss function using these graphical models and provide a few insights . View creation : Consider a regression task where the goal is to estimate a label $y \in \mathbb{R}$ from an input x . To create multiple views , our first approach is to use M neural networks $F_1 , F_2 , \dots , F_M$ , each parameterized by $\theta_1 , \theta_2 , \dots , \theta_M$ , respectively . By applying different data augmentations $\eta_1 , \eta_2 , \dots , \eta_M$ to the original x , we generate M different augmented inputs $x_m = \eta_m ( x ) \; \forall m = 1 , \dots , M$ . With each augmented input , the corresponding neural network produces a regression output $f_m ( x ) = F_m ( x_m , \theta_m ) \; \forall m = 1 , \dots , M$ . Due to the different augmentations and network parameters , each output $f_m$ can be treated as one deep view of the original input x . We call this multi-network setup DiCoM-N , where the N stands for ‘ network ’ ( see Fig . 1 ( a ) ) . The second approach is to utilize a single network with a shared backbone B and multiple branches $F_1 , F_2 , \dots , F_M$ . We use $\theta_B$ to denote the learnable parameters of the backbone and $\theta_1 , \theta_2 , \dots , \theta_M$ to denote the parameters of the branches . The hidden features generated by the backbone serve as input to the branches . While the backbone still applies a random augmentation to the input x , each branch $F_m$ applies its own random augmentation $\eta_m$ as well . Thus , the regression outputs $f_1 , f_2 , \dots , f_M$ from the branches can be considered as deep views of the original input . We name this setup DiCoM-B , where the B is short for ‘ branch ’ ( see Fig . 1 ( b ) ) . The multi-branch technique has been widely adopted in supervised classification ( Xie et al. , 2017 ) . In this work , it allows us to harness the power of multi-view learning with a relatively small number of trainable parameters .
Multi-view learning framework for regression : Regardless of how they are generated , the deep views are used together with the true label y to compute a semi-supervised loss function $\mathcal{L}_{\text{DiCoM}}$ . During the training phase , $\mathcal{L}_{\text{DiCoM}}$ is back-propagated simultaneously through the deep views to optimize the network parameters $\theta_1 , \theta_2 , \dots , \theta_M$ ( including $\theta_B$ in the case of DiCoM-B ) . During inference , all augmentations are removed so that the forward pass is applied on the raw input x . The final prediction is computed as the average of all deep views : $\mu ( x ) = \frac{1}{M} \sum_{m=1}^{M} f_m ( x )$ . The general DiCoM framework is illustrated in Fig . 1 . In the next step , we derive $\mathcal{L}_{\text{DiCoM}}$ based on a probabilistic graphical assumption . Probabilistic graphical models : Since the augmented inputs are generated from the same sample , the deep views should be close to each other . Motivated by previous work in kernel learning ( Yu et al. , 2011 ) and linear regression ( Nguyen et al. , 2019 ) , we consider $f_1 , f_2 , \dots , f_M$ as random variables and introduce a consensus function $f_c$ as a latent variable that connects to each of the deep views . This function enforces the mutual agreement among the views . We assume that the difference between the consensus function and each view follows a zero-mean Gaussian distribution $f_c - f_m \sim \mathcal{N} ( 0 , \sigma_m^2 ) \; \forall m = 1 , \dots , M$ . ( 1 ) This probabilistic relation is known as the consensus potential ( Yu et al. , 2011 ) . Considering the whole graph , this potential implies that all views are random Gaussian variables with a shared mean $f_c$ and variance $\sigma_m^2$ . As a result , the views stay consistent w.r.t . each other by taking values not too far away from the shared consensus . This graphical model , shown in Fig . 3 ( a ) , is assumed for each unlabeled sample . The joint density associated with the graph is given by $p ( f_c , f_1 , \dots , f_M ) = \frac{1}{Z} \prod_{m=1}^{M} \Psi ( f_c , f_m )$ ( 2 ) where Z is a normalizing constant and $\Psi ( f_c , f_m ) = \exp\left[ -\frac{ ( f_c - f_m )^2 }{ 2\sigma_m^2 } \right]$ is the potential function of the edge connecting $f_c$ and $f_m$ . From this model , we derive two important results . On a side note , our derivation generalizes to vector-valued labels y , but here we assume scalar labels for ease of exposition . The proofs of our results are provided in Appendix A . ( I ) Marginalization of the views : By integrating the latent consensus function $f_c$ out of the joint density , the marginal distribution of the views is $p ( f_1 , \dots , f_M ) \propto \exp\left[ \sum_{m=1}^{M} \sum_{k > m} -\lambda_{m,k} ( f_m - f_k )^2 \right]$ ( 3 ) where $\lambda_{m,k} = \left[ 2\sigma_m^2 \sigma_k^2 \left( \sum_{m} \frac{1}{\sigma_m^2} \right) \right]^{-1}$ . This result implies that the marginal likelihood can be factorized as a product of $\binom{M}{2}$ terms . Each term is an isotropic Gaussian distribution on the difference between a pair of views $( f_m , f_k )$ , with zero mean and variance $( 2\lambda_{m,k} )^{-1}$ . The equivalent graphical model is shown in Fig . 3 ( b ) . ( II ) Conditional of the consensus function : By applying Bayes ’ theorem , the conditional distribution of the consensus function $f_c$ given all the views is a Gaussian $f_c | f_1 , \dots , f_M \sim \mathcal{N} ( \tilde{\mu} , \sigma_\mu^2 )$ ( 4 ) where $\sigma_\mu^2 = \left( \sum_{m} \frac{1}{\sigma_m^2} \right)^{-1}$ and $\tilde{\mu} = \sigma_\mu^2 \sum_{m} \frac{f_m}{\sigma_m^2}$ . This result highlights that the conditional distribution of $f_c$ depends only on the weighted average $\tilde{\mu}$ , and the values of individual views are not required . Furthermore , $\tilde{\mu}$ can be treated as a view itself , with a variance that is smaller than any of the variances of the views . Derivation of the DiCoM loss function : For simplicity , we assume equal variance for the different deep views , i.e. , $\sigma_m^2 = \sigma_v^2 \; \forall m$ .
For an unlabeled sample $x_n$ , we directly apply the first result ( I ) to obtain the following negative log likelihood function : $\mathcal{L}_{unl} = \sum_{m=1}^{M}\sum_{k>m} \frac{1}{2M\sigma_v^2}\left[f_m(x_n) - f_k(x_n)\right]^2$ ( 5 ) For a labeled sample $(x_n, y_n)$ , since the ground truth is given , we assume a graphical model that involves the final DiCoM prediction , i.e. , the averaged output $\mu$ . This graph is shown in Fig . 3 ( c ) . Since we assume a shared variance $\sigma_v^2$ , the weighted output now reduces to an equal-weight average ; following from result ( II ) , $\tilde{\mu}(x_n) = \sum_{m=1}^{M} \frac{f_m(x_n)}{M} = \mu(x_n)$ and $\sigma_\mu^2 = \frac{\sigma_v^2}{M}$ ( 6 ) Subsequently , we apply result ( I ) on this graph to get the negative log likelihood as follows : $\mathcal{L}_{lab} = \frac{M}{2M\sigma_y^2 + 2\sigma_v^2}\left[y_n - \mu(x_n)\right]^2$ ( 7 ) $= \frac{1}{2M\sigma_y^2 + 2\sigma_v^2}\sum_{m=1}^{M}\left\{\left[f_m(x_n) - y_n\right]^2 - \left[f_m(x_n) - \mu(x_n)\right]^2\right\}$ ( 8 ) $\approx \frac{1}{2\sigma_v^2}\sum_{m=1}^{M}\left\{\left[f_m(x_n) - y_n\right]^2 - \left[f_m(x_n) - \mu(x_n)\right]^2\right\}$ ( 9 ) where in equation ( 8 ) we have applied the ambiguity decomposition ( Krogh et al. , 1995 ) and in equation ( 9 ) we have assumed that the label is accurate , i.e. , $\sigma_y^2 \ll \sigma_v^2$ . Given a training batch of labeled samples $\{(x_n, y_n)\}_{n=1}^{L}$ and unlabeled samples $\{x_n\}_{n=1}^{U}$ , assuming that the samples are independently generated , we can add the log-likelihood functions across all training samples . This can be done by simply summing the two equations ( 5 ) and ( 9 ) : $\mathcal{L}_{DiCoM} = \frac{1}{L}\sum_{n=1}^{L}\sum_{m=1}^{M}\left\{\left[f_m(x_n) - y_n\right]^2 - \kappa_{div}\left[f_m(x_n) - \mu(x_n)\right]^2\right\} + \frac{1}{U}\sum_{n=1}^{U}\sum_{m=1}^{M}\sum_{k>m} \kappa_{csc}\left[f_m(x_n) - f_k(x_n)\right]^2$ ( 10 ) where we introduce two hyperparameters $\kappa_{div}$ and $\kappa_{csc}$ to absorb other constants and to enable a trade-off between the diversity and consistency components of the loss . The DiCoM loss encourages diversity on labeled data , while enforcing consistency on unlabeled data . These two seemingly opposing components can both be derived from the same underlying graphical assumptions . Furthermore , they should not be weighted equally .
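The objective in equation ( 10 ) is straightforward to express in code. A minimal sketch, assuming the view outputs are stacked as arrays of shape (batch, M); the default hyperparameter values in the signature are illustrative, not the paper's:

```python
import numpy as np

def dicom_loss(views_lab, y_lab, views_unl, k_div=0.5, k_csc=1.0):
    """Sketch of equation (10). views_* have shape (num_samples, M)."""
    # Labeled part: per-view squared error minus k_div times the diversity term.
    mu = views_lab.mean(axis=1, keepdims=True)  # ensemble prediction mu(x_n)
    lab = ((views_lab - y_lab[:, None]) ** 2
           - k_div * (views_lab - mu) ** 2).sum(axis=1).mean()
    # Unlabeled part: k_csc times the squared disagreement over all pairs k > m.
    M = views_unl.shape[1]
    unl = 0.0
    for m in range(M):
        for k in range(m + 1, M):
            unl += ((views_unl[:, m] - views_unl[:, k]) ** 2).mean()
    return lab + k_csc * unl
```

When every view agrees with the label on the labeled batch and all views coincide on the unlabeled batch, both terms vanish; disagreement among views on unlabeled data strictly increases the loss.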
In fact , we have shown that the weighting depends on the number of views : when $M$ increases , the diversity term grows in $O(M)$ , while the consistency term grows in $O(M^2)$ . It is worth noting that our method is fundamentally different from other extensions of NCL such as Semi-supervised NCL ( Jean et al. , 2018 ) , which enforces diversity on both labeled and unlabeled data . Last but not least , since both diversity and consistency are incorporated in the DiCoM objective function , the method is highly adaptable to different implementations such as multi-network or multi-branch , as long as the views are provided . | This paper concerns the semi-supervised problem. Different from SOTA deep semi-supervised methods, the proposed DiCoM employs a diversity measure on the labeled multi-view data, and combines diversity with consistency based on underlying probabilistic graphical assumptions. Experiments verify the effectiveness. | SP:da67860e703f8b08a84c6ba79fe3a32b33a187ad |
Learning to Abstain in the Presence of Uninformative Data | 1 INTRODUCTION . Despite the success of machine learning in computer vision ( Deng et al. , 2009 ; Krizhevsky et al. , 2009 ; He et al. , 2016a ; Huang et al. , 2017 ) and natural language processing ( Vaswani et al. , 2017 ; Devlin et al. , 2018 ) , the power of ML is yet to make a significant impact in other areas . One major challenge is the inherently high noise-to-signal ratio in certain domains . For example , in finance , while stock prices generally reflect information about the financial health of their companies , over the short term their fluctuations most closely resemble random walks , which are inherently unpredictable , and are usually modeled as such ( Tsay , 2005 ) . In biomedical research , the underlying phenomena are often highly complex and are affected by unobservable factors . The outcome may appear highly random given the available measurements ( gene expression , medical histories ) if the true causal factor is not recorded or is overwhelmed by others . We are interested in dealing with datasets that may contain a large fraction of noisy/uninformative/unlearnable data in both the training and testing stages . Direct application of standard supervised learning methods to such datasets is both challenging and unwarranted . At the training stage , the uninformative data can significantly bias the model or even completely overwhelm the true signal ( Nettleton et al. , 2010 ) . Therefore , naïve forecasts on majority-uninformative datasets are doomed to be unreliable . Compared to other learning methods , deep neural networks are even more affected by the presence of noise , due to their strong memorization power ( Zhang et al. , 2017 ) : they are likely to overfit the noise and make overly confident predictions where no real structure exists . In this paper , we propose a novel method for learning on datasets where a significant portion of the content is pure noise .
Instead of forcing the classifier to make predictions for every sample , we learn to decide whether a datapoint is informative or not . If successful , the method abstains from making decisions where no structure exists , and predominantly learns from the remaining predictable data . Our idea is inspired by the classic selective prediction problem ( Chow , 1957 ) , in which one learns to select a subset of data and only predict on that subset . However , the goal of selective prediction is very different from ours . A selective prediction method considers all data relevant . It pursues a balance between coverage ( i.e . proportion of the data selected ) and conditional accuracy on the selected data . In our problem , we assume that uninformative data is an integral part of the data generative process . No learning method , no matter how powerful , can be successful on such data . Our goal is to identify these uninformative samples as well as possible , and at the same time , to train a classifier by minimizing conditional risk on the remaining informative data . Our method learns both a predictor , f , that classifies samples , and a selector , g , that selects learnable data for the predictor and rejects/abstains from the uninformative data . We are using g to approximate the ground truth indicator function of structured/informative data , g∗ . We assume that g∗ exists as a part of the data generation process , but it is never revealed to us , even during training . Instead of direct supervision , we therefore must rely on the predictor ’ s mistakes to train the selector . To achieve this goal , we propose a novel selector loss enforcing that ( 1 ) the selected data best fits the predictor , and ( 2 ) the portion of the data where we abstain from forecasting , does not contain many correct predictions . This loss function is quite different from the loss in classic selective prediction , which penalizes all unselected data equally . 
A major contribution of this paper is the derivation of theoretical guarantees for the empirical minimizer of our loss . We analyze the proposed selector loss function and provide sample complexity for learning a nearly optimal selector . We show that optimizing such loss can recover nearly all the structured/informative data in a PAC fashion ( Valiant , 1984 ; Kearns et al. , 1994 ; Valiant , 2013 ) , i.e . one can approximate the ground truth selector function g∗ well with high probability , given sufficient samples . What may be surprising is that this guarantee holds even in a challenging setting where the uninformative data represents the majority of the training set . This theoretical guarantee lets us expand to a more challenging and realistic setting . When the sample size is limited and the initially learned predictor is not sufficiently close to the ground truth , we extend our method to an iterative algorithm , in which we progressively optimize both the predictor and the selector . The selector is improved by optimizing our novel selector loss . Meanwhile , the predictor is improved by optimizing the empirical risk , reweighted based on the selector ’ s output ; uninformative or nearly-uninformative samples identified by the selector will be down-weighed . Experiments on real-world datasets demonstrate superiority of our method to existing baselines . Note that in this paper , we assume that each sample is either informative or not . Extending our method to a more general setting with continuous transitions between informative and uninformative is non-trivial and is left as future work . 1.1 RELATED WORK . Learning with untrusted data aims to recover the ground truth model from a partially corrupted dataset . Different noise models for untrusted data have been studied , including random label noise ( Bylander , 1994 ; Natarajan et al. , 2013 ; Han et al. , 2018 ; Yu et al. , 2019 ; Zheng et al. , 2020 ; Zhang et al. 
, 2020 ) , bounded label noise ( Massart & Nédélec , 2006 ; Awasthi et al. , 2015 ; Diakonikolas et al. , 2019 ; 2020 ) and adversarial noise ( Kearns & Li , 1993 ; Kearns et al. , 1994 ; Kalai et al. , 2008 ; Klivans et al. , 2009 ; Awasthi et al. , 2017 ) . In the most pessimistic setting , if the majority of the data is corrupted by arbitrary adversarial noise , even mean estimation may be impossible . This is known as the List-Decodable problem ( Balcan et al. , 2008 ; Charikar et al. , 2017 ; Diakonikolas et al. , 2018 ) , where the best one can do when the proportion of trusted data $\alpha$ is less than 0.5 is to return $\frac{1}{\alpha}$ many hypotheses of the mean , knowing that one of them is promising . We , on the other hand , aim to produce a single accurate model even in a setting where the majority of the data is uninformative ( the noise , of course , must be of a certain type ; see the next section ) . While the above works assume the presence of noisy data only in the training stage , we study the case where noise is an integral part of the generative process and thus will appear during inference as well , where it must be detected and discarded once more . Selective learning is an active research area . It extends the classic selective prediction problem and studies how to select a subset of data for different learning tasks . We can summarize existing methods into 4 categories : Monte Carlo sampling based methods ( Gal & Ghahramani , 2016 ; Kendall & Gal , 2017 ; Pearce et al. , 2020 ) , margin based methods ( Fumera & Roli , 2002 ; Bartlett & Wegkamp , 2008 ; Grandvalet et al. , 2008 ; Wegkamp et al. , 2011 ; Zhang et al. , 2018 ) , confidence based methods ( Wiener & El-Yaniv , 2011 ; Geifman & El-Yaniv , 2017 ; Jiang et al. , 2018 ) and customized selective losses ( Cortes et al. , 2016 ; Geifman & El-Yaniv , 2019 ; Liu et al. , 2019 ) . Notably , several works propose customized losses and incorporate them into neural networks .
In ( Geifman & El-Yaniv , 2019 ) , the network maintains an extra output neuron to indicate rejection of datapoints . Liu et al . ( 2019 ) use the Gambler loss , where a cost term is associated with each output neuron and a doubling-rate-like loss function is used to balance rejections and predictions . Cortes et al . ( 2016 ) perform data selection with an extra model and introduce a selective loss that helps maximize the coverage ratio , thus trading off a small fraction of data for better precision . Existing works on selective prediction are all motivated from the coverage perspective , i.e. , one wants to make safe predictions to achieve higher precision while maintaining a reasonable recall ( El-Yaniv et al. , 2010 ) . Our paper , in contrast , is the first to investigate the case where some ( or even the majority ) of the data is uninformative , and thus must be discarded at prediction time . Unlike in selective prediction , there is a latent ground truth indicator function of whether a datapoint should be selected or not . Our method is guaranteed to identify those uninformative samples . 2 PROBLEM FORMULATION . In this section , we describe the inherently-noisy data generation process that we aim to study . The model has three important features : 1 ) the uninformative portion of the data has labels generated by coin flipping ; 2 ) the informative datapoints are labeled with a latent ground truth function ; and 3 ) the uninformative data has a distinguishable support from the informative data . Formally : Definition 1 ( Noisy Generative Process ) . We define the Noisy Generative Process by the following notation : $x \sim D_\alpha$ where $D_\alpha \equiv \begin{cases} x \sim D_U & \text{with prob. } 1-\alpha & \text{(Uninformative/Noisy Data)} \\ x \sim D_I & \text{with prob. } \alpha & \text{(Informative/Structured Data)} \end{cases}$ ( 1 ) Let $\Omega_D \subseteq \mathbb{R}^d$ be the support of $D_\alpha$ .
Suppose $\{\Omega_U, \Omega_I\}$ is a partition of $\Omega_D$ ; the ground truth labeling function $f^* : X \to \{+1, -1\}$ is in the hypothesis class $\mathcal{F}$ and data is sampled according to : $x \sim D_\alpha$ ; $y \equiv \begin{cases} \text{Bernoulli}(0.5) , & \text{if } x \in \Omega_U \\ f^*(x) , & \text{if } x \in \Omega_I \end{cases}$ ( 2 ) Note that $f^*$ is defined on the whole domain , although only the part within $\Omega_I$ is relevant . $\alpha$ represents the fraction of informative or structured data in the population . For the rest of the paper , we abuse notation and use $(x, y) \sim D_\alpha$ to refer to samples generated from this process . The next definition describes a separability condition between informative and uninformative data . It allows us to distinguish noise from structure and enables us to approximate the ground truth selector via empirical minimization . Definition 2 ( $\mathcal{H}$-Separable ) . Given a compact set $\Omega$ and its partition $\{\Omega_U, \Omega_I\}$ , the partition satisfies the $\mathcal{H}$-Separable condition if there exists $g^* \in \mathcal{H}$ , $g^* : X \to \{+1, -1\}$ , satisfying : $g^*(x) \equiv \begin{cases} -1 , & \text{if } x \in \Omega_U \\ 1 , & \text{if } x \in \Omega_I \end{cases}$ ( 3 ) One can view $g^*(\cdot)$ in Definition 2 as the target selector we wish to recover . Having introduced the data generation process and the separability condition , we now describe our main assumption for the remainder of the paper . Assumption 1 . Data $S_n = \{x_i, y_i\}_{i=1}^{N}$ is i.i.d . generated according to the Noisy Generative Process ( Definition 1 ) , with $f^* \in \mathcal{F}$ and support $\Omega_{D_\alpha} = \Omega_U \dot\cup \Omega_I$ , where $\Omega_U , \Omega_I$ are $\mathcal{H}$-Separable . Throughout this paper , we are interested in the following learning task : Problem 1 ( Abstain from Uninformative Data ) . Under Assumption 1 , with enough i.i.d . observations from $D_\alpha$ : 1 ) given the hypothesis class $\mathcal{F}$ , we aim to learn the underlying ground truth classifier $f^*(x)$ ; and 2 ) given the hypothesis class $\mathcal{H}$ , we aim to learn the selector $g^*(x)$ . Evaluation metrics . We define metrics to evaluate the quality of both the prediction and the selection . For the prediction , we borrow the selective risk definition from selective learning ( El-Yaniv et al.
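As a concrete toy instance of Definitions 1 and 2, the generative process can be simulated directly. The supports, the labeling function f*, and the value of α below are hypothetical choices of mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noisy_process(n, alpha=0.3):
    """Toy Noisy Generative Process: Omega_I = [0, 1) with f*(x) = sign(x - 0.5);
    Omega_U = [-1, 0) with y a fair coin flip; g*(x) = +1 iff x is informative."""
    informative = rng.random(n) < alpha                   # x ~ D_I with prob. alpha
    x = np.where(informative, rng.random(n), rng.random(n) - 1.0)
    f_star = np.where(x >= 0.5, 1, -1)                    # latent labeling function
    y = np.where(informative, f_star, rng.choice([-1, 1], size=n))
    g_star = np.where(informative, 1, -1)                 # target selector
    return x, y, g_star
```

Here the two supports are disjoint intervals, so the partition is trivially separable by the sign of x, matching the H-Separable condition.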
, 2010 ; Geifman & El-Yaniv , 2019 ; 2017 ) using our own notation . Definition 3 ( Selective Risk ) . Given a predictor $f \in \mathcal{F}$ and a selector $g \in \mathcal{H}$ , we define the selective risk as : $CR(f, g) = \mathbb{E}_{(x,y) \sim D_\alpha}\left[\mathbf{1}\{f(x) \neq y\} \mid g(x) \geq 0\right]$ and its empirical version : $CR_S(f, g) = \frac{\sum_{i=1}^{n} \mathbf{1}\{f(x_i) \neq y_i\}\,\mathbf{1}\{g(x_i) \geq 0\}}{\sum_{i=1}^{n} \mathbf{1}\{g(x_i) \geq 0\}}$ . Selective risk measures the average risk conditioned on the instances that are picked by the selector $g(x)$ . Note that when combined with the ground truth selector $g^*$ , the selective risk of the ground truth predictor $f^*$ goes to zero . Without a selector , however , $f^*$ has a classical classification risk ( which will be formally defined in Definition 5 later ) of more than $\frac{1}{2}(1-\alpha)$ . The metric used to evaluate the quality of a learned selector $g$ is its false positive/negative rate . When $g(x) \geq 0$ and $g^*(x) < 0$ , the selector $g$ accepts an uninformative datapoint and thus commits a false positive error . When $g(x) < 0$ and $g^*(x) \geq 0$ , $g$ rejects/abstains from an informative datapoint , resulting in a false negative error . Definition 4 ( Evaluation Metric for the Selector ) . Given the distribution $D_\alpha$ as defined in Definition 1 and $g^*$ as the target selector , we denote the false positive and false negative rates of a selector $g$ as : False Positive : $P\left[g(x) \geq 0 \mid g^*(x) < 0\right]$ ; False Negative : $P\left[g(x) < 0 \mid g^*(x) \geq 0\right]$ ( 4 ) | The paper is concerned with selective classification in a stylised 'realisably noisy' data model, wherein the support of the input distribution is partitioned into two chunks, the "informative" $\Omega_I$ and the "uninformative" $\Omega_U,$ such that - The labels are completely noisy ($\mathrm{Bern}(1/2)$) if the input lies in $\Omega_U$. - The labels are completely clean if the input lies in $\Omega_I$. - The learner has access to hypothesis classes $\mathcal{F}, \mathcal{H}$ such that - There exists an $f^* \in \mathcal{F}$ with $f^*(x) = y$ on $\Omega_I$.
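The empirical selective risk of Definition 3 and the selector error rates of Definition 4 are simple to compute from arrays of predictions and selector scores. A small sketch (the array encodings are my own):

```python
import numpy as np

def selective_risk(f_pred, y, g_score):
    """Empirical selective risk: error rate on the samples the selector accepts."""
    sel = g_score >= 0
    return np.mean(f_pred[sel] != y[sel]) if sel.any() else 0.0

def selector_errors(g_score, g_star):
    """False positive / false negative rates of a selector w.r.t. the target g*.
    Assumes both classes of g* are present in the sample."""
    fp = np.mean(g_score[g_star < 0] >= 0)   # accepted uninformative points
    fn = np.mean(g_score[g_star >= 0] < 0)   # rejected informative points
    return fp, fn
```

Note the conditioning: selective risk averages only over accepted points, while the selector errors condition on the ground-truth partition, mirroring the conditional probabilities in Definition 4.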
- There exists a $g^* \in \mathcal{H}$ with $g^*(x) = 2\mathbf{1}\{x \in \Omega_I\} - 1.$ The paper studies the problem of learning a pair $(f,g),$ with $g$ serving as a selector and $f$ as a predictor, such that $g \approx g^*$ and $f \approx y$ on $\{g \ge 0\}$ (and $\approx f^*$ on $\Omega_I$ if $g$ is learnt well), with the concrete goal of attaining small selective risk ($P(f^*(x) \neq y|g(x) \ge 0)$) as well as low false alarm and low missed detection rates for $g$ with respect to $g^*$. As is standard in selective classification, supervision of the value of $g^*$ is not given; instead only $(x,y)$ pairs are provided. This is approached by designing a new loss for learning a selector given a predictor $f$, denoted $W(g;f,\theta)$, that essentially uses $\mathbf{1}\{f(x) = y\}$ as proxy supervision for when $g < 0$ or $g > 0$ is preferred, weighted by a factor that accounts for the prevalence of uninformative versus informative data. It is first shown, using uniform convergence techniques, that if the indicator ERM problem can be solved, then, taking $f^*_{S_n}$ to be a minimiser of the standard empirical risk and $g^*_{S_n}$ to be a minimiser of an empirical version of $W(g;f^*_{S_n},\theta)$ for appropriately chosen $\theta$ (chosen in a way which depends on the prevalence of informative data $\alpha$), these goals can be attained in a PAC sense, with sample complexity scaling as $\tilde{O}\left(\frac{d_{\mathrm{vc}}(\mathcal{F}) + d_{\mathrm{vc}}(\mathcal{H})}{\epsilon^2 \alpha^2}\right)$. The paper then switches gears, and an alternating-minimisation-based heuristic method for practically learning $(f,g)$ is proposed. The idea is to first learn (soft functions) $f$ and then $g$ using relaxed versions of the above losses, then multiplicatively increase the weight of points selected by $g$ in their contribution to the loss for $f$, and repeat.
The remainder of the paper is devoted to empirical evaluation of this method in two situations: firstly, uninformative (randomised-label) MNIST data is given along with varying amounts of clean Fashion-MNIST data; and secondly, part of the classes from the SVHN dataset have their labels randomised to represent uninformative data, while the rest remain clean. While the scheme is observed to have performance similar to the baselines in how well the selector identifies the underlying $g^*$, the proposed method has remarkably better selective risk compared to baselines from the recent selective classification literature. | SP:4313c3bed7eaa75d90537a6d993bb6207f4adae2 |
Learning to Abstain in the Presence of Uninformative Data | 1 INTRODUCTION . Despite the success of machine learning in computer vision ( Deng et al. , 2009 ; Krizhevsky et al. , 2009 ; He et al. , 2016a ; Huang et al. , 2017 ) and natural language processing ( Vaswani et al. , 2017 ; Devlin et al. , 2018 ) , the power of ML is yet to make a significant impact in other areas . One major challenge is the inherently high noise-to-signal ratio in certain domains . For example , in finance , while stock prices generally reflect information about the financial health of their companies , over the short term their fluctuations most closely resemble random walks , which are inherently unpredictable , and are usually modeled as such ( Tsay , 2005 ) . In biomedical research , the underlying phenomena are often highly complex and are affected by unobservable factors . The outcome may appear highly random given the available measurements ( gene expression , medical histories ) if the true causal factor is not recorded or is overwhelmed by others . We are interested in dealing with datasets that may contain a large fraction of noisy/uninformative/unlearnable data in both the training and testing stages . Direct application of standard supervised learning methods to such datasets is both challenging and unwarranted . At the training stage , the uninformative data can significantly bias the model or even completely overwhelm the true signal ( Nettleton et al. , 2010 ) . Therefore , naïve forecasts on majority-uninformative datasets are doomed to be unreliable . Compared to other learning methods , deep neural networks are even more affected by the presence of noise , due to their strong memorization power ( Zhang et al. , 2017 ) : they are likely to overfit the noise and make overly confident predictions where no real structure exists . In this paper , we propose a novel method for learning on datasets where a significant portion of the content is pure noise .
Instead of forcing the classifier to make predictions for every sample , we learn to decide whether a datapoint is informative or not . If successful , the method abstains from making decisions where no structure exists , and predominantly learns from the remaining predictable data . Our idea is inspired by the classic selective prediction problem ( Chow , 1957 ) , in which one learns to select a subset of data and only predict on that subset . However , the goal of selective prediction is very different from ours . A selective prediction method considers all data relevant . It pursues a balance between coverage ( i.e . proportion of the data selected ) and conditional accuracy on the selected data . In our problem , we assume that uninformative data is an integral part of the data generative process . No learning method , no matter how powerful , can be successful on such data . Our goal is to identify these uninformative samples as well as possible , and at the same time , to train a classifier by minimizing conditional risk on the remaining informative data . Our method learns both a predictor , f , that classifies samples , and a selector , g , that selects learnable data for the predictor and rejects/abstains from the uninformative data . We are using g to approximate the ground truth indicator function of structured/informative data , g∗ . We assume that g∗ exists as a part of the data generation process , but it is never revealed to us , even during training . Instead of direct supervision , we therefore must rely on the predictor ’ s mistakes to train the selector . To achieve this goal , we propose a novel selector loss enforcing that ( 1 ) the selected data best fits the predictor , and ( 2 ) the portion of the data where we abstain from forecasting , does not contain many correct predictions . This loss function is quite different from the loss in classic selective prediction , which penalizes all unselected data equally . 
A major contribution of this paper is the derivation of theoretical guarantees for the empirical minimizer of our loss . We analyze the proposed selector loss function and provide sample complexity for learning a nearly optimal selector . We show that optimizing such loss can recover nearly all the structured/informative data in a PAC fashion ( Valiant , 1984 ; Kearns et al. , 1994 ; Valiant , 2013 ) , i.e . one can approximate the ground truth selector function g∗ well with high probability , given sufficient samples . What may be surprising is that this guarantee holds even in a challenging setting where the uninformative data represents the majority of the training set . This theoretical guarantee lets us expand to a more challenging and realistic setting . When the sample size is limited and the initially learned predictor is not sufficiently close to the ground truth , we extend our method to an iterative algorithm , in which we progressively optimize both the predictor and the selector . The selector is improved by optimizing our novel selector loss . Meanwhile , the predictor is improved by optimizing the empirical risk , reweighted based on the selector ’ s output ; uninformative or nearly-uninformative samples identified by the selector will be down-weighed . Experiments on real-world datasets demonstrate superiority of our method to existing baselines . Note that in this paper , we assume that each sample is either informative or not . Extending our method to a more general setting with continuous transitions between informative and uninformative is non-trivial and is left as future work . 1.1 RELATED WORK . Learning with untrusted data aims to recover the ground truth model from a partially corrupted dataset . Different noise models for untrusted data have been studied , including random label noise ( Bylander , 1994 ; Natarajan et al. , 2013 ; Han et al. , 2018 ; Yu et al. , 2019 ; Zheng et al. , 2020 ; Zhang et al. 
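The iterative scheme described above (alternate between improving the selector via the selector loss and improving the predictor via a reweighted empirical risk) can be sketched as follows. The `fit_predictor` / `fit_selector` interfaces and the `decay` factor are hypothetical stand-ins for the actual loss-minimization steps, not the paper's implementation:

```python
import numpy as np

def iterative_abstain(X, y, fit_predictor, fit_selector, rounds=5, decay=0.5):
    """Alternating sketch: fit_predictor(X, y, w) -> predictions under sample
    weights w; fit_selector(X, correct) -> scores with g(x) >= 0 meaning
    'informative'. Samples the selector rejects have their weight multiplied
    by `decay` before the predictor is refit."""
    w = np.ones(len(y))
    for _ in range(rounds):
        pred = fit_predictor(X, y, w)            # weighted empirical risk step
        correct = (pred == y).astype(float)      # predictor mistakes supervise g
        g = fit_selector(X, correct)             # selector loss step
        w = np.where(g >= 0, w, w * decay)       # down-weight rejected samples
    return pred, g
```

The key design point mirrors the text: the selector never sees ground-truth informativeness labels; it is trained only from where the predictor succeeds or fails.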
, 2020 ) , bounded label noise ( Massart & Nédélec , 2006 ; Awasthi et al. , 2015 ; Diakonikolas et al. , 2019 ; 2020 ) and adversarial noise ( Kearns & Li , 1993 ; Kearns et al. , 1994 ; Kalai et al. , 2008 ; Klivans et al. , 2009 ; Awasthi et al. , 2017 ) . In the most pessimistic setting , if the majority of the data is corrupted by arbitrary adversarial noise , even mean estimation may be impossible . This is known as the List-Decodable problem ( Balcan et al. , 2008 ; Charikar et al. , 2017 ; Diakonikolas et al. , 2018 ) , where the best one can do when the proportion of trusted data $\alpha$ is less than 0.5 is to return $\frac{1}{\alpha}$ many hypotheses of the mean , knowing that one of them is promising . We , on the other hand , aim to produce a single accurate model even in a setting where the majority of the data is uninformative ( the noise , of course , must be of a certain type ; see the next section ) . While the above works assume the presence of noisy data only in the training stage , we study the case where noise is an integral part of the generative process and thus will appear during inference as well , where it must be detected and discarded once more . Selective learning is an active research area . It extends the classic selective prediction problem and studies how to select a subset of data for different learning tasks . We can summarize existing methods into 4 categories : Monte Carlo sampling based methods ( Gal & Ghahramani , 2016 ; Kendall & Gal , 2017 ; Pearce et al. , 2020 ) , margin based methods ( Fumera & Roli , 2002 ; Bartlett & Wegkamp , 2008 ; Grandvalet et al. , 2008 ; Wegkamp et al. , 2011 ; Zhang et al. , 2018 ) , confidence based methods ( Wiener & El-Yaniv , 2011 ; Geifman & El-Yaniv , 2017 ; Jiang et al. , 2018 ) and customized selective losses ( Cortes et al. , 2016 ; Geifman & El-Yaniv , 2019 ; Liu et al. , 2019 ) . Notably , several works propose customized losses and incorporate them into neural networks .
In ( Geifman & El-Yaniv , 2019 ) , the network maintains an extra output neuron to indicate rejection of datapoints . Liu et al . ( 2019 ) use the Gambler loss , where a cost term is associated with each output neuron and a doubling-rate-like loss function is used to balance rejections and predictions . Cortes et al . ( 2016 ) perform data selection with an extra model and introduce a selective loss that helps maximize the coverage ratio , thus trading off a small fraction of data for better precision . Existing works on selective prediction are all motivated from the coverage perspective , i.e. , one wants to make safe predictions to achieve higher precision while maintaining a reasonable recall ( El-Yaniv et al. , 2010 ) . Our paper , in contrast , is the first to investigate the case where some ( or even the majority ) of the data is uninformative , and thus must be discarded at prediction time . Unlike in selective prediction , there is a latent ground truth indicator function of whether a datapoint should be selected or not . Our method is guaranteed to identify those uninformative samples . 2 PROBLEM FORMULATION . In this section , we describe the inherently-noisy data generation process that we aim to study . The model has three important features : 1 ) the uninformative portion of the data has labels generated by coin flipping ; 2 ) the informative datapoints are labeled with a latent ground truth function ; and 3 ) the uninformative data has a distinguishable support from the informative data . Formally : Definition 1 ( Noisy Generative Process ) . We define the Noisy Generative Process by the following notation : $x \sim D_\alpha$ where $D_\alpha \equiv \begin{cases} x \sim D_U & \text{with prob. } 1-\alpha & \text{(Uninformative/Noisy Data)} \\ x \sim D_I & \text{with prob. } \alpha & \text{(Informative/Structured Data)} \end{cases}$ ( 1 ) Let $\Omega_D \subseteq \mathbb{R}^d$ be the support of $D_\alpha$ .
Suppose $\{\Omega_U, \Omega_I\}$ is a partition of $\Omega_D$ ; the ground truth labeling function $f^* : X \to \{+1, -1\}$ is in the hypothesis class $\mathcal{F}$ and data is sampled according to : $x \sim D_\alpha$ ; $y \equiv \begin{cases} \text{Bernoulli}(0.5) , & \text{if } x \in \Omega_U \\ f^*(x) , & \text{if } x \in \Omega_I \end{cases}$ ( 2 ) Note that $f^*$ is defined on the whole domain , although only the part within $\Omega_I$ is relevant . $\alpha$ represents the fraction of informative or structured data in the population . For the rest of the paper , we abuse notation and use $(x, y) \sim D_\alpha$ to refer to samples generated from this process . The next definition describes a separability condition between informative and uninformative data . It allows us to distinguish noise from structure and enables us to approximate the ground truth selector via empirical minimization . Definition 2 ( $\mathcal{H}$-Separable ) . Given a compact set $\Omega$ and its partition $\{\Omega_U, \Omega_I\}$ , the partition satisfies the $\mathcal{H}$-Separable condition if there exists $g^* \in \mathcal{H}$ , $g^* : X \to \{+1, -1\}$ , satisfying : $g^*(x) \equiv \begin{cases} -1 , & \text{if } x \in \Omega_U \\ 1 , & \text{if } x \in \Omega_I \end{cases}$ ( 3 ) One can view $g^*(\cdot)$ in Definition 2 as the target selector we wish to recover . Having introduced the data generation process and the separability condition , we now describe our main assumption for the remainder of the paper . Assumption 1 . Data $S_n = \{x_i, y_i\}_{i=1}^{N}$ is i.i.d . generated according to the Noisy Generative Process ( Definition 1 ) , with $f^* \in \mathcal{F}$ and support $\Omega_{D_\alpha} = \Omega_U \dot\cup \Omega_I$ , where $\Omega_U , \Omega_I$ are $\mathcal{H}$-Separable . Throughout this paper , we are interested in the following learning task : Problem 1 ( Abstain from Uninformative Data ) . Under Assumption 1 , with enough i.i.d . observations from $D_\alpha$ : 1 ) given the hypothesis class $\mathcal{F}$ , we aim to learn the underlying ground truth classifier $f^*(x)$ ; and 2 ) given the hypothesis class $\mathcal{H}$ , we aim to learn the selector $g^*(x)$ . Evaluation metrics . We define metrics to evaluate the quality of both the prediction and the selection . For the prediction , we borrow the selective risk definition from selective learning ( El-Yaniv et al.
, 2010 ; Geifman & El-Yaniv , 2019 ; 2017 ) using our own notation . Definition 3 ( Selective Risk ) . Given a predictor $f \in \mathcal{F}$ and a selector $g \in \mathcal{H}$ , we define the selective risk as : $CR(f, g) = \mathbb{E}_{(x,y) \sim D_\alpha}\left[\mathbf{1}\{f(x) \neq y\} \mid g(x) \geq 0\right]$ and its empirical version : $CR_S(f, g) = \frac{\sum_{i=1}^{n} \mathbf{1}\{f(x_i) \neq y_i\}\,\mathbf{1}\{g(x_i) \geq 0\}}{\sum_{i=1}^{n} \mathbf{1}\{g(x_i) \geq 0\}}$ . Selective risk measures the average risk conditioned on the instances that are picked by the selector $g(x)$ . Note that when combined with the ground truth selector $g^*$ , the selective risk of the ground truth predictor $f^*$ goes to zero . Without a selector , however , $f^*$ has a classical classification risk ( which will be formally defined in Definition 5 later ) of more than $\frac{1}{2}(1-\alpha)$ . The metric used to evaluate the quality of a learned selector $g$ is its false positive/negative rate . When $g(x) \geq 0$ and $g^*(x) < 0$ , the selector $g$ accepts an uninformative datapoint and thus commits a false positive error . When $g(x) < 0$ and $g^*(x) \geq 0$ , $g$ rejects/abstains from an informative datapoint , resulting in a false negative error . Definition 4 ( Evaluation Metric for the Selector ) . Given the distribution $D_\alpha$ as defined in Definition 1 and $g^*$ as the target selector , we denote the false positive and false negative rates of a selector $g$ as : False Positive : $P\left[g(x) \geq 0 \mid g^*(x) < 0\right]$ ; False Negative : $P\left[g(x) < 0 \mid g^*(x) \geq 0\right]$ ( 4 ) | This paper considers supervised learning with abstention -- where the learning method can decide to make predictions in some region of the feature space, and declare the rest of the feature space unpredictable. The setting is related to but different from the classical selective prediction, and to prediction uncertainty quantification (i.e. imbuing predictions with calibrated confidence intervals). Classical selective prediction balances coverage vs.
accuracy -- whereas in the setting here the authors assume that there is a "true" split into an informative and an uninformative partition of the feature space, and the goal is to both recover the partition and learn a good model on the informative partition. The paper focuses on theoretical analysis, and also has experimental results suggesting that if the data is generated according to the authors' model, then a heuristic algorithm inspired by the analysis indeed outperforms other selective prediction baselines. | SP:4313c3bed7eaa75d90537a6d993bb6207f4adae2
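The selective risk (Definition 3) and the selector's false positive/negative rates (Definition 4) above reduce to simple conditional averages on a labeled sample. A minimal sketch, with an invented toy batch purely for illustration:

```python
import numpy as np

def empirical_selective_risk(pred, label, select):
    """Empirical selective risk: error rate among points the selector accepts (g(x) >= 0)."""
    accepted = select >= 0
    if accepted.sum() == 0:
        return 0.0
    return float((pred[accepted] != label[accepted]).mean())

def selector_error_rates(select, select_star):
    """False positive: P[g >= 0 | g* < 0]; false negative: P[g < 0 | g* >= 0]."""
    fp = float(((select >= 0) & (select_star < 0)).mean() / max((select_star < 0).mean(), 1e-12))
    fn = float(((select < 0) & (select_star >= 0)).mean() / max((select_star >= 0).mean(), 1e-12))
    return fp, fn

# Toy batch: 4 informative points (g* = +1) and 2 uninformative ones (g* = -1).
pred        = np.array([ 1, -1,  1,  1,  1, -1])
label       = np.array([ 1, -1,  1, -1,  1,  1])
select      = np.array([ 1,  1,  1, -1, -1,  1])   # learned selector scores g(x)
select_star = np.array([ 1,  1,  1,  1, -1, -1])   # ground-truth selector g*(x)

risk = empirical_selective_risk(pred, label, select)   # errors among accepted points only
fp, fn = selector_error_rates(select, select_star)
```

With the ground-truth selector plugged in for `select`, the risk of a perfect predictor on the informative region would be zero, matching the remark after Definition 3.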
Learn Together, Stop Apart: a Novel Approach to Ensemble Pruning | 1 INTRODUCTION. There are still many areas where classical machine learning algorithms prevail over deep neural networks despite the dramatic growth of their usage in artificial intelligence research. One such classical algorithm is Gradient Boosting (GB) (Friedman (2001)). It makes it possible to obtain high-quality models on tabular data without multimedia content (e.g., images, audio, video), including samples full of categorical features, noisy features and labels, and missing data (Zhang & Haghani, 2015; Li et al., 2007; Babajide Mustapha & Saeed, 2016). Another undoubted advantage of the boosting method is the low computational complexity of training and inference (Deng et al., 2018). For these reasons, Gradient Boosting is widely used in ranking (Chapelle & Chang, 2011), recommender systems (Cheng et al., 2014), meta-learning (LeDell & Poirier, 2020), and many other tasks (Touzani et al., 2018; Trofimov et al., 2012; Ling et al., 2017). In recent years, many hyperparameters and additional options influencing the performance of the learned model have been proposed for GB (Ke et al., 2017; Ibragimov & Gusev, 2019), but the learning rate (the weight of each model in the ensemble) and the size of the ensemble are the key ones. Large models can reveal complex dependencies in the data but require more time for training and inference (Friedman, 2002); smaller ones are less expressive but more time-efficient. The standard approach to selecting an optimal number of training steps is to monitor the quality of the model on a hold-out sample called the validation set, which is separate from the training data. The idea is to set a large enough model size and find the moment (the overfitting point) when the validation score stops growing and begins going down.
Then one can prune the ensemble to the retrieved number of iterations. The described method has a significant and surprisingly understudied weakness: it assumes the existence of a universal ensemble size that is equally effective for every instance in the sample. In other words, the hypothesis is that all samples require approximately the same number of learners to fit them well. However, in practice, the learning task can consist of different subtasks, which correspond to different regions in the input space of the dataset, where examples follow different distributions with diverse complexities and functional dependencies. In particular, the data space may contain regions of both simple and complex surfaces for training. For the former, the ensemble needs a relatively small number of boosting rounds to be trained well, while the latter require a much longer path until convergence. In this case, the generic boosting size selected by the least-regret principle is a compromise between simple and complex areas. This approach encourages models with a composition of overfitted and underfitted regions in the dataset. To handle this issue, we propose a new method to prune large GB models based on an adaptive choice of the optimal ensemble size. As in the standard version of GB (Friedman, 2001), we train one sequence of learners in an ensemble, but we apply a different number of learned models to different regions of the dataset. Namely, we build an additional model that divides the input space into regions where the distribution of data points has homogeneous complexity and representativeness. Then we optimize the ensemble size for each region individually. Our method incurs meager computational costs and can be easily incorporated into any existing learning pipeline.
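The compromise described above can be reproduced numerically. A minimal sketch (not the paper's actual procedure; the two error curves and the region split are synthetic assumptions): given per-iteration validation error curves for an "easy" and a "hard" region, a single global stopping point lands between the two per-region optima, overfitting one region while underfitting the other.

```python
import numpy as np

# Synthetic per-iteration validation error curves for points in two regions:
# an "easy" region that starts to overfit early and a "hard" region that
# keeps improving for many more boosting rounds (both curves are invented).
iters = np.arange(1, 201)
easy_err = 0.10 + 0.002 * iters + 1.0 / iters        # minimized early, then overfits
hard_err = 0.50 * np.exp(-iters / 80.0) + 0.05       # still improving at round 200

def best_size_per_region(err_curves):
    """Pick, for each region, the ensemble size minimizing its validation error."""
    return {region: int(np.argmin(curve)) + 1 for region, curve in err_curves.items()}

per_region = best_size_per_region({"easy": easy_err, "hard": hard_err})

# A single global size minimizes the pooled error and lands between the two,
# overfitting the easy region while underfitting the hard one.
global_best = int(np.argmin(easy_err + hard_err)) + 1
```

Picking the per-region minimizers instead of `global_best` is exactly the adaptive pruning idea: one shared sequence of learners, truncated at a different length per region.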
We apply the proposed approach to state-of-the-art open-source GB algorithms and demonstrate that it consistently outperforms standard pruning on popular publicly available benchmarks. We show that the described problem of a universal stopping moment strongly affects the quality of trained models. To the best of our knowledge, this is the first research devoted to adaptive, instance-wise early stopping in GB, and we hope this paper will encourage further research on the GB algorithm. The rest of the paper is organized as follows. Section 2 introduces notation and background on GB. Previous work on early stopping and ensemble pruning is discussed in Section 3. In Section 4, we reveal the details of the proposed approach and present theoretical reasoning and discussion. In Section 5, the effectiveness of the algorithm is empirically studied on several popular datasets. Section 6 draws conclusions and proposes ideas for future work. 2 BACKGROUND. In this section, we introduce the necessary notation and briefly discuss basic concepts concerning gradient boosting and cross-validation, for clarity and to keep the paper self-contained. 2.1 GRADIENT BOOSTING. Let S = {xi, yi}ni=1 be a sample from some fixed but unknown distribution P(x, y), where xi = (x_i^1, ..., x_i^m) ∈ X is an m-dimensional feature representation and yi ∈ Y is the target value of the i-th observation.
The classical formulation of the learning problem consists in constructing a function F : X → Y minimizing the expected target prediction error, which is calculated using a loss function L : Y × Y → R+: L(P, F) := E_(x,y)∼P [L(F(x), y)] → min_F. Since the distribution P is not given and the sample S is the only source of data, the task reduces to the empirical risk minimization problem: L̂(S, F) = Ê_(x,y)∼S [L(F(x), y)] = (1/n) ∑_{i=1}^n L(F(xi), yi) → min_F. The ability to achieve a smaller empirical risk is bounded by the complexity of the set F from which the desired function F ∈ F is selected. A common approach to increasing the expressiveness of the learned model is to build a composition (or an ensemble) of functions from F. Gradient Boosting (GB) constructs an ensemble FB of size B as a weighted sum of base functions {f1, f2, ..., fB} ⊂ F: FB(x) = ∑_{i=1}^B αi fi(x). (1) When the set of available base functions F is closed under scalar multiplication, the multipliers αi are usually fixed and equal: ∀i αi = α, where α is a hyperparameter of the GB algorithm called the learning rate. Having constructed the first t − 1 terms, the learning algorithm aims to select the next function ft sequentially as a solution of: L̂(S, Ft−1 + ft) = (1/n) ∑_{i=1}^n L(Ft−1(xi) + ft(xi), yi) → min_{ft}. The approximate solution of the latter problem in GB is usually constructed as follows. The algorithm calculates the first- and second-order derivatives of L̂ at the point Ft−1 with respect to the predicted values ŷ: g_i^t = ∂L(ŷi, yi)/∂ŷi |_{ŷi = Ft−1(xi)}, h_i^t = ∂²L(ŷi, yi)/∂ŷi² |_{ŷi = Ft−1(xi)}, and selects a least-squares approximation to Newton's step in function space: ft = argmin_{f ∈ F} ∑_{i=1}^n h_i^t (f(xi) − (−g_i^t / h_i^t))², (2) see (Chen & Guestrin, 2016) for details. 2.2 MODEL SELECTION VIA CROSS-VALIDATION.
Since quality estimation based on the train set used in the learning process is biased (Prokhorenkova et al., 2017), while inference is performed on unseen data, it is conventional to use a separate independent set, called the validation set, to control the generalization ability of the algorithm. The whole dataset S is split into two disjoint sets Strain and Svalid, where the first one is used for learning and the latter for quality estimation. The final result of this procedure is often highly dependent on the particular train-validation split and, therefore, quality estimation can be very noisy. To tackle this issue, one can use the cross-validation method (Stone, 1974): split the data S into k subsets of approximately equal size, or folds, (S1, S2, ..., Sk) s.t. S = ⊔_{i=1}^k Si, and perform k rounds of a training-evaluation cycle using S−i := S \ Si as the training set and Si as the validation data for each i ∈ {1, 2, ..., k}. The estimated quality is then calculated as the mean of the validation-set qualities over all iterations of the described procedure. Another source of bias in the quality estimator is a mismatch of the target distributions in the training and validation samples. Because the splits in the standard cross-validation procedure are generated randomly, the proportion of positive samples (in binary classification tasks) seen by the trained model may differ from that in the data on which quality control is performed. To avoid this effect, a stratified sampling scheme (preserving the proportions of the target) is usually applied. 3 RELATED WORK. 3.1 EARLY STOPPING. Early stopping is the task of controlling the learning process and interrupting it to avoid unnecessary boosting steps, which increase the complexity of the model and can lead to overfitting.
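The stratified k-fold scheme described in Section 2.2 can be sketched as follows; this is a minimal illustration (production code would rely on an existing library implementation), dealing each class's shuffled indices round-robin across folds so every fold preserves the target proportions:

```python
import numpy as np

def stratified_kfold_indices(y, k, seed=0):
    """Split indices into k folds, preserving the class proportions of y in every fold."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        rng.shuffle(idx)
        for j, i in enumerate(idx):
            folds[j % k].append(int(i))    # deal class members round-robin across folds
    return [np.array(sorted(f)) for f in folds]

y = np.array([0] * 80 + [1] * 20)          # 80/20 class imbalance
folds = stratified_kfold_indices(y, k=5)

# The folds form a disjoint cover of all 100 indices, and each fold keeps
# the 80/20 target proportion, avoiding the mismatch discussed above.
```

Training on S \ Si and evaluating on Si for each fold, then averaging the k validation scores, gives the cross-validation quality estimate from the text.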
Since the number of learning steps is directly connected to the complexity of the model, larger ensemble sizes lead to models of smaller bias but larger variance (the bias-variance tradeoff). One of the ideas proposed in the literature (Chang et al. (2010), Mayr et al. (2012)) is to penalize the complexity of the models, e.g., via AIC-based methods that approximate the ensemble's degrees of freedom. Some works use generalization bounds of the algorithm employing the VC-dimension (Freund & Schapire (1997)), Rademacher complexity (Cortes et al. (2019)), or the PAC setting (Yao et al. (2007), Wei et al. (2017)). These methods do not require separate validation control but, in most cases, are not applicable in real-world tasks since the obtained bounds are distribution-agnostic and the resulting approximations are therefore very rough. Standard approaches to early stopping in most of the well-known implementations of GB utilize the simple "waiting" idea: if the validation quality does not change for some "reasonable" number of iterations, then the training must be stopped (see, e.g., Click et al. (2016)). It is important to note that this kind of early stopping may be performed step-by-step alongside the boosting learning procedure, so that training is stopped as soon as the criterion is met. Unfortunately, this method does not address the double-descent problem (Belkin et al., 2019), and the choice of the required number of waiting rounds remains at the researcher's discretion, based on experience or heuristic assumptions. | This paper proposes to tune the number of models in a boosting ensemble in an instance-wise fashion. The idea is to first cluster the samples using a decision tree and then to tune the size of the ensemble independently for each cluster, instead of doing it globally for all instances.
An efficient two-level cross-validation procedure is designed to tune both the number of terms in each cluster and the number of clusters. Experiments are conducted on 6 large-scale problems that show that local pruning brings some improvement with respect to the more standard global pruning technique. | SP:5af5a69509d6176336791a840c17703dee176a1d |
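The Newton step of Eq. (2) in Section 2.1 admits a small numeric illustration. A sketch under stated assumptions: squared-error loss, so the derivatives have closed form (g = ŷ − y, h = 1), and a constant base learner, which is an illustrative choice rather than the tree learners used in practice:

```python
import numpy as np

# Squared-error loss L(yhat, y) = 0.5 * (yhat - y)^2:
# first derivative g = yhat - y, second derivative h = 1.
y = np.array([1.0, 2.0, 3.0, 4.0])
F_prev = np.array([1.5, 1.5, 1.5, 1.5])      # predictions of the first t-1 terms

g = F_prev - y                                # g_i^t at the current predictions
h = np.ones_like(y)                           # h_i^t (constant for squared error)

# Eq. (2): fit f_t by h-weighted least squares to the Newton targets -g/h.
# For a constant base learner f_t(x) = c, the minimizer is the weighted mean target.
c = np.sum(h * (-g / h)) / np.sum(h)          # here: the mean residual y - F_prev

F_next = F_prev + c                           # the updated ensemble prediction
```

With squared error the Newton targets reduce to ordinary residuals, which is why second-order boosting collapses to classical gradient boosting in this special case.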
Learn Together, Stop Apart: a Novel Approach to Ensemble Pruning | 1 INTRODUCTION. There are still many areas where classical machine learning algorithms prevail over deep neural networks despite the dramatic growth of their usage in artificial intelligence research. One such classical algorithm is Gradient Boosting (GB) (Friedman (2001)). It makes it possible to obtain high-quality models on tabular data without multimedia content (e.g., images, audio, video), including samples full of categorical features, noisy features and labels, and missing data (Zhang & Haghani, 2015; Li et al., 2007; Babajide Mustapha & Saeed, 2016). Another undoubted advantage of the boosting method is the low computational complexity of training and inference (Deng et al., 2018). For these reasons, Gradient Boosting is widely used in ranking (Chapelle & Chang, 2011), recommender systems (Cheng et al., 2014), meta-learning (LeDell & Poirier, 2020), and many other tasks (Touzani et al., 2018; Trofimov et al., 2012; Ling et al., 2017). In recent years, many hyperparameters and additional options influencing the performance of the learned model have been proposed for GB (Ke et al., 2017; Ibragimov & Gusev, 2019), but the learning rate (the weight of each model in the ensemble) and the size of the ensemble are the key ones. Large models can reveal complex dependencies in the data but require more time for training and inference (Friedman, 2002); smaller ones are less expressive but more time-efficient. The standard approach to selecting an optimal number of training steps is to monitor the quality of the model on a hold-out sample called the validation set, which is separate from the training data. The idea is to set a large enough model size and find the moment (the overfitting point) when the validation score stops growing and begins going down.
Then one can prune the ensemble to the retrieved number of iterations. The described method has a significant and surprisingly understudied weakness: it assumes the existence of a universal ensemble size that is equally effective for every instance in the sample. In other words, the hypothesis is that all samples require approximately the same number of learners to fit them well. However, in practice, the learning task can consist of different subtasks, which correspond to different regions in the input space of the dataset, where examples follow different distributions with diverse complexities and functional dependencies. In particular, the data space may contain regions of both simple and complex surfaces for training. For the former, the ensemble needs a relatively small number of boosting rounds to be trained well, while the latter require a much longer path until convergence. In this case, the generic boosting size selected by the least-regret principle is a compromise between simple and complex areas. This approach encourages models with a composition of overfitted and underfitted regions in the dataset. To handle this issue, we propose a new method to prune large GB models based on an adaptive choice of the optimal ensemble size. As in the standard version of GB (Friedman, 2001), we train one sequence of learners in an ensemble, but we apply a different number of learned models to different regions of the dataset. Namely, we build an additional model that divides the input space into regions where the distribution of data points has homogeneous complexity and representativeness. Then we optimize the ensemble size for each region individually. Our method incurs meager computational costs and can be easily incorporated into any existing learning pipeline.
We apply the proposed approach to state-of-the-art open-source GB algorithms and demonstrate that it consistently outperforms standard pruning on popular publicly available benchmarks. We show that the described problem of a universal stopping moment strongly affects the quality of trained models. To the best of our knowledge, this is the first research devoted to adaptive, instance-wise early stopping in GB, and we hope this paper will encourage further research on the GB algorithm. The rest of the paper is organized as follows. Section 2 introduces notation and background on GB. Previous work on early stopping and ensemble pruning is discussed in Section 3. In Section 4, we reveal the details of the proposed approach and present theoretical reasoning and discussion. In Section 5, the effectiveness of the algorithm is empirically studied on several popular datasets. Section 6 draws conclusions and proposes ideas for future work. 2 BACKGROUND. In this section, we introduce the necessary notation and briefly discuss basic concepts concerning gradient boosting and cross-validation, for clarity and to keep the paper self-contained. 2.1 GRADIENT BOOSTING. Let S = {xi, yi}ni=1 be a sample from some fixed but unknown distribution P(x, y), where xi = (x_i^1, ..., x_i^m) ∈ X is an m-dimensional feature representation and yi ∈ Y is the target value of the i-th observation.
The classical formulation of the learning problem consists in constructing a function F : X → Y minimizing the expected target prediction error, which is calculated using a loss function L : Y × Y → R+: L(P, F) := E_(x,y)∼P [L(F(x), y)] → min_F. Since the distribution P is not given and the sample S is the only source of data, the task reduces to the empirical risk minimization problem: L̂(S, F) = Ê_(x,y)∼S [L(F(x), y)] = (1/n) ∑_{i=1}^n L(F(xi), yi) → min_F. The ability to achieve a smaller empirical risk is bounded by the complexity of the set F from which the desired function F ∈ F is selected. A common approach to increasing the expressiveness of the learned model is to build a composition (or an ensemble) of functions from F. Gradient Boosting (GB) constructs an ensemble FB of size B as a weighted sum of base functions {f1, f2, ..., fB} ⊂ F: FB(x) = ∑_{i=1}^B αi fi(x). (1) When the set of available base functions F is closed under scalar multiplication, the multipliers αi are usually fixed and equal: ∀i αi = α, where α is a hyperparameter of the GB algorithm called the learning rate. Having constructed the first t − 1 terms, the learning algorithm aims to select the next function ft sequentially as a solution of: L̂(S, Ft−1 + ft) = (1/n) ∑_{i=1}^n L(Ft−1(xi) + ft(xi), yi) → min_{ft}. The approximate solution of the latter problem in GB is usually constructed as follows. The algorithm calculates the first- and second-order derivatives of L̂ at the point Ft−1 with respect to the predicted values ŷ: g_i^t = ∂L(ŷi, yi)/∂ŷi |_{ŷi = Ft−1(xi)}, h_i^t = ∂²L(ŷi, yi)/∂ŷi² |_{ŷi = Ft−1(xi)}, and selects a least-squares approximation to Newton's step in function space: ft = argmin_{f ∈ F} ∑_{i=1}^n h_i^t (f(xi) − (−g_i^t / h_i^t))², (2) see (Chen & Guestrin, 2016) for details. 2.2 MODEL SELECTION VIA CROSS-VALIDATION.
Since quality estimation based on the train set used in the learning process is biased (Prokhorenkova et al., 2017), while inference is performed on unseen data, it is conventional to use a separate independent set, called the validation set, to control the generalization ability of the algorithm. The whole dataset S is split into two disjoint sets Strain and Svalid, where the first one is used for learning and the latter for quality estimation. The final result of this procedure is often highly dependent on the particular train-validation split and, therefore, quality estimation can be very noisy. To tackle this issue, one can use the cross-validation method (Stone, 1974): split the data S into k subsets of approximately equal size, or folds, (S1, S2, ..., Sk) s.t. S = ⊔_{i=1}^k Si, and perform k rounds of a training-evaluation cycle using S−i := S \ Si as the training set and Si as the validation data for each i ∈ {1, 2, ..., k}. The estimated quality is then calculated as the mean of the validation-set qualities over all iterations of the described procedure. Another source of bias in the quality estimator is a mismatch of the target distributions in the training and validation samples. Because the splits in the standard cross-validation procedure are generated randomly, the proportion of positive samples (in binary classification tasks) seen by the trained model may differ from that in the data on which quality control is performed. To avoid this effect, a stratified sampling scheme (preserving the proportions of the target) is usually applied. 3 RELATED WORK. 3.1 EARLY STOPPING. Early stopping is the task of controlling the learning process and interrupting it to avoid unnecessary boosting steps, which increase the complexity of the model and can lead to overfitting.
Since the number of learning steps is directly connected to the complexity of the model, larger ensemble sizes lead to models of smaller bias but larger variance (the bias-variance tradeoff). One of the ideas proposed in the literature (Chang et al. (2010), Mayr et al. (2012)) is to penalize the complexity of the models, e.g., via AIC-based methods that approximate the ensemble's degrees of freedom. Some works use generalization bounds of the algorithm employing the VC-dimension (Freund & Schapire (1997)), Rademacher complexity (Cortes et al. (2019)), or the PAC setting (Yao et al. (2007), Wei et al. (2017)). These methods do not require separate validation control but, in most cases, are not applicable in real-world tasks since the obtained bounds are distribution-agnostic and the resulting approximations are therefore very rough. Standard approaches to early stopping in most of the well-known implementations of GB utilize the simple "waiting" idea: if the validation quality does not change for some "reasonable" number of iterations, then the training must be stopped (see, e.g., Click et al. (2016)). It is important to note that this kind of early stopping may be performed step-by-step alongside the boosting learning procedure, so that training is stopped as soon as the criterion is met. Unfortunately, this method does not address the double-descent problem (Belkin et al., 2019), and the choice of the required number of waiting rounds remains at the researcher's discretion, based on experience or heuristic assumptions. | The paper proposes a novel method to set the optimal ensemble size in gradient boosting. In particular, the authors propose an adaptive strategy that sets distinct ensemble sizes for different regions of the input space.
For that, they propose dividing the input space into coherent regions (whose instances are similar both in terms of features and labels) and estimating the optimal ensemble size within those regions. For clustering the data, they propose using a decision tree induced from the entire training set, and the leaves of the tree are the clusters that comprise the data partition. They show the results of both a biased and a less-biased estimator for finding the optimal ensemble size per region, and they compare their findings with the traditional strategy of simply pruning the ensembles based on a single optimal number of learners estimated from a cross-validation procedure. Results on 6 public datasets show that on at least 4 of them the method seems to provide better results. | SP:5af5a69509d6176336791a840c17703dee176a1d
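The "waiting" heuristic from Section 3.1 of the paper above (stop once the validation score has failed to improve for a fixed number of rounds) can be sketched as follows; the patience value and the score curve are arbitrary illustrations, not values from the paper:

```python
def early_stopping_round(val_scores, patience=10):
    """Return the 1-based round at which training stops: the first round at which the
    validation score has failed to improve for `patience` consecutive rounds."""
    best, best_round = float("-inf"), 0
    for t, score in enumerate(val_scores, start=1):
        if score > best:
            best, best_round = score, t
        elif t - best_round >= patience:
            return t                      # stop here; the best model is at best_round
    return len(val_scores)                # criterion never triggered: train to the end

# Synthetic curve: the score improves until round 30, then plateaus.
scores = [0.5 + 0.01 * min(t, 30) for t in range(1, 101)]
stop = early_stopping_round(scores, patience=10)
```

As the text notes, the choice of `patience` remains a heuristic left to the practitioner; too small a value stops before a late improvement, too large wastes boosting rounds.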
State-Action Joint Regularized Implicit Policy for Offline Reinforcement Learning | Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment. The lack of environmental interactions makes policy training vulnerable to state-action pairs far from the training dataset and prone to missing rewarding actions. To train more effective agents, we propose a framework that supports learning a flexible and well-regularized policy, which consists of a fully implicit policy and a regularization through the state-action visitation frequency induced by the current policy and that induced by the data-collecting behavior policy. We theoretically show the equivalence between policy matching and state-action-visitation matching, and thus the compatibility of much prior work with our framework. An effective instantiation of our framework through the GAN structure is provided, together with some techniques to explicitly smooth the state-action mapping for robust generalization beyond the static dataset. Extensive experiments and an ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs. 1 INTRODUCTION. Offline reinforcement learning (RL), also known as batch RL, aims at training agents from previously collected fixed datasets that are typically large and heterogeneous, with a special emphasis on no interactions with the environment during the learning process (Ernst et al., 2005; Lange et al., 2012; Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019; Agarwal et al., 2020; Siegel et al., 2020; Wang et al., 2020). This paradigm extends the applicability of RL to settings where environmental interactions are costly or even potentially dangerous, such as healthcare (Tseng et al., 2017; Gottesman et al., 2018; Nie et al., 2019), autonomous driving (Yurtsever et al., 2020), and recommendation systems (Swaminathan et al.
, 2017; Gilotte et al., 2018). While (online) off-policy RL algorithms, such as DDPG (Lillicrap et al., 2016), TD3 (Fujimoto et al., 2018), and SAC (Haarnoja et al., 2018a), could be directly adopted in an offline setting, their application can be unsuccessful (Fujimoto et al., 2019; Kumar et al., 2019), especially in high-dimensional continuous control tasks, where function approximation is inevitable and data samples are non-exhaustive. Such failures may be attributed to the shift between the state-action visitation frequency induced by the current policy and that induced by the data-collecting behavior policy, under which unseen state-action pairs are presented to the action-value estimator, resulting in possibly uncontrollable extrapolation errors (Fujimoto et al., 2019; Kumar et al., 2019). In this regard, one approach to offline RL is to control the difference between the observed and policy-induced state-action visitation frequencies, so that the current policy mostly generates state-action pairs that have reliable action-value estimates. Previous work in this line of research typically (1) regularizes the current policy to be close to the behavior policy during the training process, i.e., policy or state-conditional action distribution matching; and (2) uses a Gaussian policy class with a learnable mean and diagonal covariance matrix (Kumar et al., 2019; Wu et al., 2019). See Appendix A for a detailed review. However, at any given state s, the underlying action-value function over the action space may possess multiple local maxima. A deterministic or uni-modal stochastic policy may only capture one of the local optima and neglect many rewarding actions. An even worse situation occurs when such policies exhibit a strong mode-covering behavior, artificially inflating the density around the average of multiple rewarding actions, which may itself be inferior.
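The failure mode just described, a uni-modal policy averaging over multiple rewarding actions, can be reproduced numerically. A minimal sketch with synthetic bimodal action data at a single state (the two modes and all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# At a fixed state, the behavior policy takes one of two rewarding actions, a = -1 or a = +1.
actions = np.concatenate([rng.normal(-1.0, 0.05, 500), rng.normal(1.0, 0.05, 500)])

# Maximum-likelihood fit of a uni-modal Gaussian policy at this state.
mu, sigma = actions.mean(), actions.std()

# The fitted mean sits near 0, between the two modes, where the behavior policy
# almost never acts: probability density is inflated exactly at an inferior action.
frac_near_mean = np.mean(np.abs(actions - mu) < 0.25)
```

The fitted Gaussian's most likely action is one the behavior policy essentially never takes, which is the mode-covering pathology motivating the fully implicit policy.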
Previous work under the policy-matching theme mainly takes two approaches. One approach directly estimates the divergence between the state-conditional distributions over actions (Wu et al., 2019). However, on tasks with a continuous state space, with probability one no state can appear in the dataset more than once. In other words, for each observed state si, the offline dataset has only one corresponding action ai from the behavior policy. Thus, one is only able to use a single point to assess whether the current policy is close to the data-collecting behavior policy at any particular state, which may not well reflect the true divergence between the two conditional distributions. The other approach, e.g., Kumar et al. (2019), resorts to a two-step strategy: first, fit a generative model π(a | s) to clone the behavior policy; second, estimate the distance between the fitted behavior policy π(a | s) and the current policy, and minimize that distance as a way to regularize. While this approach, which can acquire multiple samples from the cloned behavior policy, is able to accurately estimate the distance between the current policy and the cloned behavior, its success relies heavily on how well the inferred behavior-cloning generative model mimics the true behavior policy. On tasks with a large or continuous state space, however, the same problem arises that each state-conditional action distribution is fitted on only one data point. Moreover, some prior work uses a conditional VAE (CVAE, Sohn et al. (2015)) as the generative model to clone the possibly multimodal behavior policy, which further suffers from the problem that the CVAE may exhibit a strong mode-covering behavior that allocates large probability density to low data-density regions.
Given these weaknesses, one may naturally question how well the samples from such a cloned policy resemble the truth, and further, the quality of the constraint imposed by a sample-based calculation of the distance between such a cloned behavior policy and the current policy. To address these concerns, we are motivated to develop a new framework that not only supports an expressive policy, which can be as flexible as needed, but also well regularizes this flexible policy towards the data-collecting behavior policy. Specifically, (1) instead of using the classical deterministic or uni-modal Gaussian policy, we train a fully implicit policy for its flexibility to capture multiple modes in the action-value function; (2) to avoid the aforementioned potential problems in policy-matching regularization, we directly control the distance between the state-action visitation frequency induced by the current policy and that induced by the behavior policy, as an auxiliary training target. Hence, our approach does not need to build a generative model to clone the behavior policy. On the theoretical side, we prove in Section 3 that matching the behavior and current policies is equivalent to matching their corresponding state-action visitation frequencies, which reveals the compatibility of much prior work with our framework. A similar notion of matching state-action visitations in offline RL is adopted by the DICE family (Nachum et al., 2019; Lee et al., 2021a), but these methods either use a Gaussian policy or a mixture-of-Gaussians policy with a per-dataset tuned number of mixture components. Besides, these algorithms have high computational complexity, which, together with inflexible policies and intensive hyperparameter tuning, limits their practical applicability. We instantiate our framework with a GAN structure that approximately minimizes the Jensen–Shannon divergence between the visitation frequencies.
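The GAN instantiation above targets the Jensen–Shannon divergence between the behavior-induced and policy-induced visitation frequencies. For intuition, the JSD between two discrete state-action visitation tables can be computed directly; a sketch with invented toy tables (the actual method estimates this adversarially, not in closed form):

```python
import numpy as np

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence (natural log) between two discrete distributions."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy visitation tables over 2 states x 3 actions, flattened to length-6 vectors.
d_b   = np.array([0.30, 0.10, 0.10, 0.25, 0.15, 0.10])   # behavior-policy visitation
d_phi = np.array([0.10, 0.30, 0.10, 0.10, 0.15, 0.25])   # current-policy visitation

gap = jsd(d_b, d_phi)      # positive: the two policies visit different pairs
matched = jsd(d_b, d_b)    # zero once the visitation frequencies coincide
```

Driving `gap` to zero is exactly the global optimum of the GAN game: at the optimal discriminator, the generator objective equals −log 4 + 2·JSD, minimized when the visitations match.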
Furthermore, we design techniques to explicitly encourage robust behavior of our policy at states not covered in the static dataset. We conduct an ablation study on several components of our algorithm and analyze their contributions. With all these considerations, our full algorithm achieves state-of-the-art performance on various tasks from the D4RL dataset (Fu et al., 2021), validating the effectiveness of our framework and implementation. 2 BACKGROUND AND MOTIVATION. We first present some background information and then introduce a toy example to illustrate the motivations of the proposed framework for offline RL. Offline RL. Following the classic RL setting (Sutton & Barto, 2018), the interaction between the agent and environment is modeled as a Markov decision process (MDP), specified by the tuple M = (S, A, P, r, γ), where S denotes the state space, A the action space, γ ∈ (0, 1] the discount factor, P(s′ | s, a) : S × S × A → [0, 1] the environmental dynamics, and r(s, a) : S × A → [Rmin, Rmax] the reward function. The goal of RL is to learn a policy πϕ(at | st), parametrized by ϕ, that maximizes the expected cumulative discounted reward Rt ≜ R(st, at) = E[∑_{k=0}^∞ γ^k r_{t+k+1}]. In offline RL (Fujimoto et al., 2019; Kumar et al., 2019; 2020; Levine et al., 2020), the agent only has access to a fixed dataset D ≜ {(s, a, r, s′)}, consisting of transition tuples from rollouts by some behavior policies πb(a | s). We denote the state-action visitation frequency induced by the behavior policy πb as db(s, a) and its state marginal, the state visitation frequency, as db(s). Similarly, dϕ(s, a) and dϕ(s) are the counterparts for the current policy πϕ. Here, db(s, a) = db(s) πb(a | s), and we assume D ∼ db(s, a).
The visitation frequencies in the dataset are denoted as dD ( s , a ) and dD ( s ) , which are discrete approximations to db ( s , a ) and db ( s ) , respectively . Actor-Critic Algorithm . Denote Qπ ( s , a ) = E [ ∑∞ t=0 γ tr ( st , at ) | s0 = s , a0 = a ] as the action-value function . In the actor-critic scheme ( Sutton & Barto , 2018 ) , the critic Qπ ( s , a ) is often approximated by a neural network Qθ ( s , a ) , parametrized by θ and trained by applying the Bellman operator ( Lillicrap et al. , 2016 ; Haarnoja et al. , 2018a ; Fujimoto et al. , 2019 ) as argminθ [ Qθ ( s , a ) − ( r ( s , a ) + γEs′∼P ( · | s , a ) , a′∼π ( · | s′ ) [ Qθ ( s′ , a′ ) ] ) ] 2 . The actor πϕ aims at maximizing the expected value of Qθ , with the training objective expressed as argmaxϕ { J ( πϕ ) = Es∼dϕ ( s ) , a∼πϕ ( a | s ) [ Qθ ( s , a ) ] } , ( 1 ) where dϕ ( s ) is the state visitation frequency under policy πϕ . In offline RL , sampling from dϕ ( s ) is infeasible as no interactions with the environment are allowed . A common and practically effective approximation ( Fu et al. , 2019 ; Levine et al. , 2020 ) to Equation 1 is J ( πϕ ) ≈ Es∼db ( s ) , a∼πϕ ( a | s ) [ Qθ ( s , a ) ] , ( 2 ) where sampling from db ( s ) can be implemented easily as sampling from the offline dataset D. Generative Adversarial Nets . GAN ( Goodfellow et al. , 2014 ) provides a framework to train deep generative models , with two neural networks jointly trained in an adversarial manner : a generator Gϕ , parametrized by ϕ , that fits the data distribution and a discriminator Dw , parametrized by w , that outputs the probability of a sample coming from the training data rather than Gϕ . Samples x ’ s from the generator ’ s distribution dϕ ( x ) are drawn via z ∼ pz ( z ) , x = Gϕ ( z ) , where pz ( z ) is some noise distribution . 
Both Gϕ and Dw are trained via a two-player min-max game as minϕ maxw { V ( Dw , Gϕ ) = Ey∼dD ( · ) [ logDw ( y ) ] + Ez∼pz ( z ) [ log ( 1−Dw ( Gϕ ( z ) ) ) ] } , ( 3 ) where dD ( · ) is the data distribution . Given the optimal discriminator D∗G at Gϕ , the training objective of Gϕ is determined by the Jensen–Shannon divergence ( JSD ) ( Lin , 1991 ) between dD and dϕ as V ( D∗G , Gϕ ) = − log 4 + 2 · JSD ( dD∥dϕ ) , with the global minimum achieved if and only if dϕ = dD . Therefore , one may view GAN as a distributional matching framework that approximately minimizes the Jensen–Shannon divergence between the generator distribution and data distribution . Motivations . To illustrate our motivations of training an expressive policy under an appropriate regularization , we conduct a toy experiment of behavior cloning , as shown in Figure 1 , where we use the x- and y-axis values to represent the state and action , respectively . Figure 1a illustrates the state-action joint distribution of the data-collecting behavior policy that we try to mimic . For Figures 1b-1e , we use the same test-time state marginal distribution , which consists of an equal mixture of the behavior policy ’ s state distribution and a uniform state distribution between−1.5 and 1.5 . If the inferred policy well approaches the behavior policy , we expect ( 1 ) clear concentration on the eight centers and ( 2 ) smooth interpolation between centers , which implies a good and smooth fit to the behavior policy . We start the toy experiment with fitting a CVAE model , a representative behavior-cloning method , to the dataset . As shown in Figure 1b , CVAE exhibits a mode-covering behavior that covers the data density modes at the expense of overestimating unwanted low data density regions . Hence , the regularization ability is questionable of using CVAE as a proxy for the behavior policy in some prior work . Replacing CVAE with the conditional GAN ( CGAN , Mirza & Osindero ( 2014 ) ) , i.e. 
, replacing the KL loss with JSD loss , but adopting the Gaussian policy popular in prior offline RL work partially alleviates the issues but drops necessary modes , as shown in Figure 1c . This shows the inflexibility of Gaussian policies . Replacing the Gaussian policy in CGAN with an implicit policy , while still training CGAN via sampled-based policy-matching , improves the capability of capturing multiple modes , as shown in Figure 1d . Nevertheless , it is still prone to mode collapse and interpolates less smoothly between the seen states . Finally , training the implicit-policy CGAN via direct state-action-visitation joint matching leads to the best performance . As shown in Figure 1e , it concentrates clearly on the eight centers and interpolates smoothly between the seen states . Thus , constraining the state-action visitations can be an effective way to regularize implicit policies in offline RL . These observations motivate us to train implicit policies in offline RL , with sample-based regularization on the state-action visitations . | This paper proposed a regularized policy learning algorithm for offline reinforcement learning. The implicit policy is trained by a GAN-like framework, and the regularization loss constrains the distance between learned policy and behavior policy. Experiments and ablation study on the D4RL dataset validate the proposed framework and algorithmic designs. | SP:558e16a6d21c668b92792c51293357eaecb4dc9d |
State-Action Joint Regularized Implicit Policy for Offline Reinforcement Learning | Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment. The lack of environmental interactions makes the policy training vulnerable to state-action pairs far from the training dataset and prone to missing rewarding actions. For training more effective agents, we propose a framework that supports learning a flexible and well-regularized policy, which consists of a fully implicit policy and a regularization through matching the state-action visitation frequency induced by the current policy to that induced by the data-collecting behavior policy. We theoretically show the equivalence between policy matching and state-action-visitation matching, and thus the compatibility of much prior work with our framework. An effective instantiation of our framework through the GAN structure is provided, together with some techniques to explicitly smooth the state-action mapping for robust generalization beyond the static dataset. Extensive experiments and an ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs. 1 INTRODUCTION . Offline reinforcement learning (RL), also known as batch RL, aims at training agents from previously collected fixed datasets that are typically large and heterogeneous, with a special emphasis on no interactions with the environment during the learning process (Ernst et al., 2005; Lange et al., 2012; Fujimoto et al., 2019; Kumar et al., 2019; Wu et al., 2019; Agarwal et al., 2020; Siegel et al., 2020; Wang et al., 2020). This paradigm extends the applicability of RL to settings where environmental interactions are costly or even potentially dangerous, such as healthcare (Tseng et al., 2017; Gottesman et al., 2018; Nie et al., 2019), autonomous driving (Yurtsever et al., 2020), and recommendation systems (Swaminathan et al.
, 2017 ; Gilotte et al. , 2018 ) . While ( online ) off-policy RL algorithms , such as DDPG ( Lillicrap et al. , 2016 ) , TD3 ( Fujimoto et al. , 2018 ) , and SAC ( Haarnoja et al. , 2018a ) could be directly adopted in an offline setting , their application can be unsuccessful ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ) , especially in high-dimensional continuous control tasks , where function approximations are inevitable and data samples are non-exhaustive . Such failures may be attributed to the shift between the state-action visitation frequency induced by the current policy and that by the data-collecting behavior policy , where unseen state-action pairs are presented to the action-value estimator , resulting in possibly uncontrollable extrapolation errors ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ) . In this regard , one approach to offline RL is to control the difference between the observed and policy-induced state-action visitation frequencies , so that the current policy mostly generates state-action pairs that have reliable action-value estimate . Previous work in this line of research typically ( 1 ) regularizes the current policy to be close to the behavior policy during the training process , i.e. , policy or state-conditional action distribution matching ; ( 2 ) uses a Gaussian policy class with a learnable mean and diagonal covariance matrix ( Kumar et al. , 2019 ; Wu et al. , 2019 ) . See Appendix A for a detailed review . However , at any given state s , the underlying action-value function over the action space may possess multiple local maxima . A deterministic or uni-modal stochastic policy may only capture one of the local optima and neglect lots of rewarding actions . An even worse situation occurs when such policies exhibit a strong mode-covering behavior , artificially inflating the density around the average of multiple rewarding actions that itself may be inferior . 
Previous work under the policy-matching theme mainly takes two approaches. One approach directly estimates the divergence between the state-conditional distributions over actions (Wu et al., 2019). However, on tasks with a continuous state space, with probability one no state appears in the dataset more than once. In other words, for each observed state si, the offline dataset contains only one corresponding action ai from the behavior policy. Thus, one is only able to use a single point to assess whether the current policy is close to the data-collecting behavior policy at any particular state, which may not well reflect the true divergence between the two conditional distributions. The other approach, e.g., Kumar et al. (2019), resorts to a two-step strategy: first, fit a generative model π(a | s) to clone the behavior policy; second, estimate the distance between the fitted behavior policy π(a | s) and the current policy, and minimize that distance as a regularizer. While this approach, which can draw multiple samples from the cloned behavior policy, is able to estimate the distance between the current policy and the cloned behavior accurately, its success relies heavily on how well the inferred behavior-cloning generative model mimics the true behavior policy. On tasks with a large or continuous state space, however, the same problem arises: each state-conditional action distribution is fitted on only one data point. Moreover, some prior work uses a conditional VAE (CVAE, Sohn et al. (2015)) as the generative model to clone the possibly multimodal behavior policy, which further suffers from the problem that CVAE may exhibit a strong mode-covering behavior that allocates large probability density to low data-density regions.
In view of these weaknesses, one may naturally question how well the samples from such a cloned policy resemble the truth, and further, the quality of the constraint imposed by a sample-based calculation of the distance between such a cloned behavior policy and the current policy. To address these concerns, we are motivated to develop a new framework that not only supports an expressive policy, which can be as flexible as needed, but also well regularizes this flexible policy towards the data-collecting behavior policy. Specifically, (1) instead of using the classical deterministic or uni-modal Gaussian policy, we train a fully implicit policy for its flexibility to capture multiple modes in the action-value function; (2) to avoid the aforementioned potential problems in the policy-matching regularization, we directly control the distance between the state-action visitation frequency induced by the current policy and that induced by the behavior policy, as an auxiliary training target. Hence, our approach does not need to build a generative model to clone the behavior policy. On the theoretical side, we prove in Section 3 that the approach of matching the behavior and current policies is equivalent to matching their corresponding state-action visitation frequencies, which reveals the compatibility of much prior work with our framework. A similar notion of matching the state-action visitations in offline RL is taken by the DICE family (Nachum et al., 2019; Lee et al., 2021a), but these methods either use a Gaussian policy or a mixture-of-Gaussians policy with a per-dataset tuned number of mixtures. Besides, these algorithms have high computational complexity, which, together with inflexible policies and intensive hyperparameter tuning, limits their practical applicability. We instantiate our framework with a GAN structure that approximately minimizes the Jensen–Shannon divergence between the visitation frequencies.
Furthermore, we design techniques to explicitly encourage robust behavior of our policy at states not covered in the static dataset. We conduct an ablation study on several components of our algorithm and analyze their contributions. With all these considerations, our full algorithm achieves state-of-the-art performance on various tasks from the D4RL dataset (Fu et al., 2021), validating the effectiveness of our framework and implementation. 2 BACKGROUND AND MOTIVATION . We first present some background information and then introduce a toy example to illustrate the motivations of the proposed framework for offline RL. Offline RL . Following the classic RL setting (Sutton & Barto, 2018), the interaction between the agent and environment is modeled as a Markov decision process (MDP), specified by the tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, r, \gamma)$, where $\mathcal{S}$ denotes the state space, $\mathcal{A}$ the action space, $\gamma \in (0, 1]$ the discount factor, $P(s' \mid s, a): \mathcal{S} \times \mathcal{S} \times \mathcal{A} \to [0, 1]$ the environmental dynamics, and $r(s, a): \mathcal{S} \times \mathcal{A} \to [R_{\min}, R_{\max}]$ the reward function. The goal of RL is to learn a policy $\pi_\phi(a_t \mid s_t)$, parametrized by $\phi$, that maximizes the expected cumulative discounted reward $R_t \triangleq R(s_t, a_t) = \mathbb{E}\left[\sum_{k=0}^{\infty} \gamma^k r_{t+k+1}\right]$. In offline RL (Fujimoto et al., 2019; Kumar et al., 2019; 2020; Levine et al., 2020), the agent only has access to a fixed dataset $\mathcal{D} \triangleq \{(s, a, r, s')\}$, consisting of transition tuples from rollouts by some behavior policies $\pi_b(a \mid s)$. We denote the state-action visitation frequency induced by the behavior policy $\pi_b$ as $d_b(s, a)$ and its state-marginal, the state visitation frequency, as $d_b(s)$. Similarly, $d_\phi(s, a)$ and $d_\phi(s)$ are the counterparts for the current policy $\pi_\phi$. Here, $d_b(s, a) = d_b(s)\,\pi_b(a \mid s)$ and we assume $\mathcal{D} \sim d_b(s, a)$.
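For discrete state and action spaces, the dataset visitation frequency $d_D(s, a)$ described in the text is simply the normalized count of $(s, a)$ pairs among the transition tuples in $\mathcal{D}$. The sketch below illustrates this; the toy dataset and the function name are illustrative, not from the paper.

```python
from collections import Counter

def empirical_visitation(dataset):
    """Estimate the dataset visitation frequency d_D(s, a) by counting
    (state, action) pairs over the transition tuples (s, a, r, s')."""
    counts = Counter((s, a) for (s, a, r, s_next) in dataset)
    total = sum(counts.values())
    return {sa: c / total for sa, c in counts.items()}

# Toy offline dataset of transition tuples from some behavior policy.
D = [(0, 1, 0.5, 1), (0, 1, 0.5, 1), (1, 0, 1.0, 0), (1, 1, 0.0, 0)]
d_D = empirical_visitation(D)
```

By construction the returned frequencies form a proper distribution over the observed $(s, a)$ pairs.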
The visitation frequencies in the dataset are denoted as $d_D(s, a)$ and $d_D(s)$, which are discrete approximations to $d_b(s, a)$ and $d_b(s)$, respectively. Actor-Critic Algorithm . Denote $Q^\pi(s, a) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a\right]$ as the action-value function. In the actor-critic scheme (Sutton & Barto, 2018), the critic $Q^\pi(s, a)$ is often approximated by a neural network $Q_\theta(s, a)$, parametrized by $\theta$ and trained by applying the Bellman operator (Lillicrap et al., 2016; Haarnoja et al., 2018a; Fujimoto et al., 2019) as $\arg\min_\theta \left[Q_\theta(s, a) - \left(r(s, a) + \gamma \, \mathbb{E}_{s' \sim P(\cdot \mid s, a),\, a' \sim \pi(\cdot \mid s')}[Q_\theta(s', a')]\right)\right]^2$. The actor $\pi_\phi$ aims at maximizing the expected value of $Q_\theta$, with the training objective expressed as $\arg\max_\phi \left\{ J(\pi_\phi) = \mathbb{E}_{s \sim d_\phi(s),\, a \sim \pi_\phi(a \mid s)}[Q_\theta(s, a)] \right\}$, (1) where $d_\phi(s)$ is the state visitation frequency under policy $\pi_\phi$. In offline RL, sampling from $d_\phi(s)$ is infeasible as no interactions with the environment are allowed. A common and practically effective approximation (Fu et al., 2019; Levine et al., 2020) to Equation 1 is $J(\pi_\phi) \approx \mathbb{E}_{s \sim d_b(s),\, a \sim \pi_\phi(a \mid s)}[Q_\theta(s, a)]$, (2) where sampling from $d_b(s)$ can be implemented easily as sampling from the offline dataset $\mathcal{D}$. Generative Adversarial Nets . GAN (Goodfellow et al., 2014) provides a framework to train deep generative models, with two neural networks jointly trained in an adversarial manner: a generator $G_\phi$, parametrized by $\phi$, that fits the data distribution and a discriminator $D_w$, parametrized by $w$, that outputs the probability of a sample coming from the training data rather than from $G_\phi$. Samples $x$ from the generator's distribution $d_\phi(x)$ are drawn via $z \sim p_z(z)$, $x = G_\phi(z)$, where $p_z(z)$ is some noise distribution.
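The sampling scheme just described (draw noise $z \sim p_z$, push it through the generator) is also how a fully implicit policy produces actions, $a = G_\phi(s, z)$. A minimal NumPy sketch follows; the two-layer network, its sizes, and the noise distribution are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, noise_dim, action_dim, hidden = 4, 3, 2, 16
W1 = rng.normal(size=(hidden, state_dim + noise_dim))  # toy generator weights
W2 = rng.normal(size=(action_dim, hidden))

def implicit_policy(s, n_samples=5):
    """Draw actions a = G_phi(s, z), z ~ N(0, I): an implicit,
    potentially multi-modal state-conditional action distribution."""
    z = rng.normal(size=(n_samples, noise_dim))
    inp = np.concatenate([np.tile(s, (n_samples, 1)), z], axis=1)
    h = np.tanh(inp @ W1.T)
    return np.tanh(h @ W2.T)  # actions bounded in (-1, 1)

s = rng.normal(size=state_dim)
actions = implicit_policy(s, n_samples=100)
```

Because the action is a deterministic function of $(s, z)$ with random $z$, repeated draws at the same state give a whole distribution over actions rather than a single point.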
Both $G_\phi$ and $D_w$ are trained via a two-player min-max game as $\min_\phi \max_w \left\{ V(D_w, G_\phi) = \mathbb{E}_{y \sim d_D(\cdot)}[\log D_w(y)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D_w(G_\phi(z)))] \right\}$, (3) where $d_D(\cdot)$ is the data distribution. Given the optimal discriminator $D^*_G$ at $G_\phi$, the training objective of $G_\phi$ is determined by the Jensen–Shannon divergence (JSD) (Lin, 1991) between $d_D$ and $d_\phi$ as $V(D^*_G, G_\phi) = -\log 4 + 2 \cdot \mathrm{JSD}(d_D \,\|\, d_\phi)$, with the global minimum achieved if and only if $d_\phi = d_D$. Therefore, one may view GAN as a distributional matching framework that approximately minimizes the Jensen–Shannon divergence between the generator distribution and the data distribution. Motivations . To illustrate our motivation for training an expressive policy under an appropriate regularization, we conduct a toy experiment of behavior cloning, as shown in Figure 1, where we use the x- and y-axis values to represent the state and action, respectively. Figure 1a illustrates the state-action joint distribution of the data-collecting behavior policy that we try to mimic. For Figures 1b-1e, we use the same test-time state marginal distribution, which consists of an equal mixture of the behavior policy's state distribution and a uniform state distribution between −1.5 and 1.5. If the inferred policy well approaches the behavior policy, we expect (1) clear concentration on the eight centers and (2) smooth interpolation between the centers, which implies a good and smooth fit to the behavior policy. We start the toy experiment by fitting a CVAE model, a representative behavior-cloning method, to the dataset. As shown in Figure 1b, CVAE exhibits a mode-covering behavior that covers the data-density modes at the expense of overestimating unwanted low data-density regions. Hence, the ability of CVAE to serve as a proxy for the behavior policy, as in some prior work, is questionable. Replacing CVAE with the conditional GAN (CGAN, Mirza & Osindero (2014)), i.e.
, replacing the KL loss with a JSD loss but adopting the Gaussian policy popular in prior offline RL work, partially alleviates the issues but drops necessary modes, as shown in Figure 1c. This shows the inflexibility of Gaussian policies. Replacing the Gaussian policy in CGAN with an implicit policy, while still training CGAN via sample-based policy matching, improves the capability of capturing multiple modes, as shown in Figure 1d. Nevertheless, it is still prone to mode collapse and interpolates less smoothly between the seen states. Finally, training the implicit-policy CGAN via direct state-action-visitation joint matching leads to the best performance. As shown in Figure 1e, it concentrates clearly on the eight centers and interpolates smoothly between the seen states. Thus, constraining the state-action visitations can be an effective way to regularize implicit policies in offline RL. These observations motivate us to train implicit policies in offline RL, with sample-based regularization on the state-action visitations. | One class of existing offline RL algorithms constrains the learned policy, for example by forcing it to stay close to the behavior policy or to the state-conditional action distribution, or by adopting a Gaussian policy. However, at any given state s, the underlying action-value function over the action space may have multiple local maxima. Only one action per state is available in the dataset to evaluate whether the current policy is close to the behavior policy, which may not reflect the true difference between the two conditional distributions well, and a deterministic or uni-modal stochastic policy may capture only one of the local optima and thus ignore many other high-value actions.
In particular, when models such as CVAE exhibit a strong mode-covering behavior, large probability density may be assigned to low data-density regions, exaggerating the apparent density of high-value actions. In response to the above problems, the method proposed in this article: - uses an implicit policy to capture the multi-modality of the action-value function - controls the <s,a> visitation frequency induced by the learned policy to be as consistent as possible with the <s,a> visitation frequency in the dataset, as an additional training target, without the need to explicitly construct a behavior policy - provides theory proving that matching the behavior policy and the current policy is equivalent to matching their corresponding <s,a> visitation frequencies | SP:558e16a6d21c668b92792c51293357eaecb4dc9d |
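The background above states that at the optimal discriminator the GAN objective equals $-\log 4 + 2 \cdot \mathrm{JSD}(d_D \| d_\phi)$. For discrete distributions this identity can be checked numerically. A sketch, with the two distributions chosen arbitrarily for illustration:

```python
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def gan_value_at_optimal_D(p_data, p_gen):
    """V(D*, G) with the optimal discriminator D*(x) = p_data / (p_data + p_gen)."""
    d_star = p_data / (p_data + p_gen)
    return np.sum(p_data * np.log(d_star)) + np.sum(p_gen * np.log(1.0 - d_star))

p = np.array([0.1, 0.4, 0.5])   # stand-in for the data distribution d_D
q = np.array([0.3, 0.3, 0.4])   # stand-in for the generator distribution d_phi
assert np.isclose(gan_value_at_optimal_D(p, q), -np.log(4) + 2 * jsd(p, q))
```

When the generator matches the data distribution exactly, the JSD term vanishes and the value collapses to $-\log 4$, the global minimum quoted in the text.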
Objective Evaluation of Deep Visual Interpretations on Time Series Data | 1 INTRODUCTION . Due to its high performance on complex multi-modal data, deep learning (DL) has become increasingly popular in many real-world applications that process time series data (Fawaz et al., 2019b). While we fundamentally rely on classification accuracy in many safety-relevant applications (Berkenkamp et al., 2017), DL models remain difficult to interpret. Typical applications are the monitoring of industrial processes (Löffler et al., 2021), the support of health care and sports (Dorschky et al., 2020), or safety in autonomous driving (Schmidt et al., 2021). The need for improved model understanding (Carvalho et al., 2019), along with regulatory guidelines (Goodman & Flaxman, 2017), has led to a myriad of new approaches to the visual interpretation problem (Zhang & Zhu, 2018). Post-hoc visual interpretation allows a domain expert to validate and understand how (almost) arbitrary deep learning models operate. Their central idea lies in highlighting features of the input that are "relevant" for the prediction of a learned model (Adebayo et al., 2018). Many of these techniques do not require a modification of the original model (Simonyan et al., 2014; Ribeiro et al., 2016) and are compatible with different architectures, which makes them useful as a general-purpose validation tool for neural networks across different tasks (Arrieta et al., 2020). However, while visual interpretation yields intuitive and correct explanations for images (Samek et al., 2021), the application of these methods to time series data is still an unsolved problem (Rojat et al., 2021). Time series are inherently more diverse (Rojat et al., 2021), because they may originate from a variety of different sensors and processes, and often do not allow an obvious patch- or texture-based localization of critical features for human observers.
This makes the application and the evaluation of visual interpretability methods difficult. A domain expert cannot easily judge if explanations are correct in (i) explaining the reason for the decision process in the DL model and in (ii) capturing the real features in the dataset that lead to a correct classification. Hence, it is important not to blindly apply different visualization methods. This requires quality metrics that evaluate visual interpretations and that enable an expert to select a suitable visualization for a given model and task at hand. However, both state-of-the-art visualization techniques and metrics that evaluate visual interpretations (e.g., Pixel Flipping (Samek et al., 2017), Sanity Check (Adebayo et al., 2018), and sensitivity checks (Rebuffi et al., 2020)) have so far only been examined on images (Rojat et al., 2021) or on NLP tasks (Arras et al., 2017). This lack of objective and subjective evaluation seriously limits their application and utility for time series. In this paper, we propose to evaluate visual interpretations for time series by combining six orthogonal metrics: "sanity" (Adebayo et al., 2018), "faithfulness" (Alvarez Melis & Jaakkola, 2018), "sensitivity" (Rebuffi et al., 2020), "robustness" (Yeh et al., 2019), "stability" (Fel & Vigouroux, 2020; Li et al., 2021), and a novel metric based on human preferences: "localization". These metrics rate and validate distinct qualities of saliency. Our metrics are both based on the functional perspective, i.e., on the model-specific operation, and on how well they represent annotators' semantics. In an extensive evaluation, we train four different architectures on two different types of tasks: the UTime model (Perslev et al., 2019) and a bidirectional Long Short-Term Memory (bi-LSTM) (Schuster & Paliwal, 1997) on segmentation tasks, and a Fully Convolutional Network (FCN) (Long et al.
, 2015) and a Temporal Convolutional Network (TCN) (Bai et al., 2018) on classification tasks. We use diverse datasets from the UCR repository (Dau et al., 2018) (GunPointAgeSpan, FordA, FordB, ElectricDevices, MelbournePedestrian, NATOPS) and, for segmentation, the more complex real-world tool tracking dataset (Löffler et al., 2021). The experiments show the necessity of all categories to create an objective rating for methods, models and tasks, and allow domain experts to understand, rate, and validate saliency for time series in safety-critical applications. The rest of the paper is structured as follows. Section 2 discusses background and related work. Section 3 introduces extended and novel metrics for both the classification and segmentation task. Section 4 discusses the experimental results and Section 5 proposes recommendations. 2 BACKGROUND AND RELATED WORK . Interpretation methods for DL models can be divided into ante-hoc methods, i.e., methods that are inherently part of the model, and post-hoc methods, i.e., methods that provide the interpretation after training (Rojat et al., 2021). We focus on post-hoc methods and divide them into gradient-based and perturbation-based methods (Li et al., 2021; Warnecke et al., 2020; Ismail et al., 2020). Gradient-based methods compute the relevance of all input features by passing gradients backwards through the neural network (Ancona et al., 2018). Gradient (Simonyan et al., 2014) computes a class $c$'s saliency map $M^c$ using the derivative of the class score $P^c$ of the model with respect to the input sample $x$, as $M^c(x) = \frac{\partial P^c}{\partial x}$. The gradient indicates the importance of points in the input sequence for predictions. The advantage of gradient-based methods lies in their computational efficiency, as they use only a small number of backward passes to compute $M^c(x)$.
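The saliency map $M^c(x) = \partial P^c / \partial x$ just defined can be sketched for a toy model. Since no autodiff framework is assumed here, central finite differences stand in for the backward pass that a real implementation would use; the linear-softmax "model" and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C = 16, 3                      # time steps, classes
W = rng.normal(size=(C, T))       # weights of a toy linear classifier

def class_score(x, c):
    """Softmax probability P^c for a univariate series x (stand-in model)."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    return (p / p.sum())[c]

def gradient_saliency(x, c, eps=1e-5):
    """M^c(x) = dP^c/dx, estimated by central finite differences."""
    g = np.zeros_like(x)
    for t in range(len(x)):
        e = np.zeros_like(x)
        e[t] = eps
        g[t] = (class_score(x + e, c) - class_score(x - e, c)) / (2 * eps)
    return g

x = rng.normal(size=T)
M = gradient_saliency(x, c=1)
```

For this softmax model the gradient is available in closed form, $P^c (W_c - \sum_k P^k W_k)$, which the finite-difference estimate should match closely.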
Perturbation-based methods perturb known input samples and measure the effects of specific perturbations on the predicted class via a forward pass through the network. For instance, Local Interpretable Model-Agnostic Explanations (LIME) (Ribeiro et al., 2016) fits a local surrogate model (e.g., a linear regression) as an explanation, and calculates relevance based on this surrogate. Perturbation-based methods are computationally expensive as they require multiple forward passes per sample. However, they do not need gradient information and work with black-box models. 2.1 METRICS FOR SALIENCY ON TIME SERIES . Most interpretation methods were originally designed for image or text data. Understanding and comparing visual interpretations is intuitive on image data, compared to more abstract time series. Furthermore, the diversity of interpretation methods complicates an objective choice for a given model and task (Rojat et al., 2021). For example, when Wang et al. (2017) and Fawaz et al. (2019b) apply Class Activation Maps (CAM) (Zhou et al., 2016) on well-known UCR datasets (Dau et al., 2018), they notice a qualitative difference of CAM interpretations between network architectures. Similarly, other work relies on domain experts who perform a (costly) qualitative evaluation (Strodthoff & Strodthoff, 2019; Fawaz et al., 2019a; Oviedo et al., 2019). Hence, there is an increasing need for objective evaluation metrics to make the interpretations measurable and comparable. Metrics . A large variety of metrics exists. First, the stability of saliency maps can be evaluated with respect to the sensitivity of the majority class (Yeh et al., 2019). This metric evaluates the change of the interpretation when the input samples are attacked by adversarial noise. It is a special form of perturbation that has so far only been applied to image data. Second, Rebuffi et al.
(2020) measure class sensitivity by comparing the saliency maps of a min- and a max-class. Third, Cho et al. (2020) perturb the unimportant input samples and preserve only the important inputs. If the predictions are stable compared to the unperturbed input, the interpretation is robust. Similarly, Ates et al. (2021) evaluate the local robustness of the interpretation (i.e., similar samples should lead to similar interpretations) by measuring the ratio of the change of the interpretation to the amount of input perturbation. Finally, faithfulness is a perturbation-based metric that aims to measure the relationship between saliency and predictive features (Alvarez Melis & Jaakkola, 2018). Schlegel et al. (2019) focus on continuous univariate time series. Sub-sequences with high relevance values are perturbed either by swapping their samples or by replacing them with a mean value. Similarly, Ismail et al. (2020) propose input perturbations to evaluate models on synthetic time series. As they perturb inputs point-wise instead of over sub-sequences, their metric does not consider the time dependency between consecutive points. Our faithfulness metric perturbs sub-sequences of high saliency (Schlegel et al., 2019). We adapt each type of metric into our set for evaluating visual interpretations of time series. Categories of metrics . Metrics can be divided into different categories, depending on the question they answer. Doshi-Velez & Kim (2017) propose a distinction between human-grounded and functional metrics. The former involve human perception and intuition with the goal of generating qualitative, intuitive visualizations, e.g., bounding boxes in image object detection for testing the localization of saliency maps (Jianming et al., 2016), or questionnaires to record testers' opinions on the quality of explanations (Li et al., 2021). Functional metrics provide statistical measures, e.g.
, to aggregate performance metrics automatically (Rojat et al., 2021), and make use of proxy tasks to generate a quantitative evaluation. Another categorization (Li et al., 2021) identifies multiple broad categories for image data that we may transfer to time series, i.e., the faithfulness of salient features, the class sensitivity of an explanation, and the stability of explanations given noisy inputs. This paper proposes a framework of metrics with orthogonal categories, specific to time series. We adapt and extend metrics to (multivariate) time series and propose an intra-class stability metric and a concept of relevance localization: we build on the pointing game (Jianming et al., 2016) and combine it with the precision and recall for time series framework (Tatbul et al., 2018). To our knowledge, we are the first to apply and evaluate the sanity check (Adebayo et al., 2018) on time series. 3 SCORING CATEGORIES . We propose a set of six distinct categories (sanity, faithfulness, sensitivity, robustness, stability, and localization) to assess visual interpretation methods and to determine their performance and trustworthiness in classification or segmentation tasks on time series. For each of them we propose a metric that enables a comparative evaluation of diverse types of visual interpretation methods. Why do we need six scoring categories ? It seems tempting to rely on a single metric or on a single aggregated score across multiple metrics to assess the quality of a visual interpretation. However, we show that the six presented categories are inherently orthogonal (see Fig. 2) and capture distinct qualities. Interpretations depend on model parameters (sanity check (Adebayo et al.
, 2018)), predictive features (faithfulness (Alvarez Melis & Jaakkola, 2018)), coherence of class predictions (intra-class stability (Fel & Vigouroux, 2020)), robustness against noise (max sensitivity with adversarial noise (Yeh et al., 2019)), specific sequences (or even points) in a time series, as in Tatbul et al. (2018) (novel localization metric), and the relevance map's specificity (inter-class sensitivity (Rebuffi et al., 2020)). It is important to assess if a given interpretation accurately captures these dependencies. We give a brief overview of our framework's functional (Fig. 1a-e) and human-grounded metrics (Fig. 1f) that we evaluate with time series data. (a) Sane saliency depends on network parameters and is structurally different after randomizing the network's weights $\rho_i$ for layers $[1, 2, 3]$ (Adebayo et al., 2018). (b) Faithful saliency correlates with predictive accuracy, and perturbing a percentage of the input sequence with high saliency decreases accuracy (Alvarez Melis & Jaakkola, 2018). (c) Saliency is sensitive when the predicted (max) class in one sample is sufficiently different from any other (min) class (Rebuffi et al., 2020). (d) Saliency is robust if small changes to the input cause only small changes to the saliency (Yeh et al., 2019). (e) Saliency is stable if it has a low variance and standard deviation over all samples of a class, with respect to a suitable distance metric (Fel & Vigouroux, 2020). (f) Saliency should be localized on the predicted class's segment ($t_0$ to $t_1$). We will define those categories on time series after introducing a unified notation. Notation. We follow the notation adapted from Fawaz et al. (2019b): a multivariate time series is defined by $X = [X_1, \ldots, X_H]$, where $H$ is the number of input channels, $X_i = (x_1^i, \ldots, x_T^i) \in \mathbb{R}^T$ is an ordered set of real values, and $T$ denotes the number of time steps.
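As a hedged illustration of the max-sensitivity idea in (d): the score is the largest change in saliency under small input perturbations. The Monte-Carlo sampling scheme and the toy saliency function below are illustrative assumptions, not the metric's exact definition.

```python
import numpy as np

def max_sensitivity(M, x, radius=0.1, n=50, rng=None):
    """Monte-Carlo estimate of max-sensitivity: the largest L2 change in the
    saliency M under random input perturbations with ||delta|| <= radius.
    M is any function mapping an input series to a relevance vector."""
    rng = rng or np.random.default_rng(0)
    base = M(x)
    worst = 0.0
    for _ in range(n):
        d = rng.normal(size=x.shape)
        d *= radius / np.linalg.norm(d)          # project onto the radius sphere
        worst = max(worst, float(np.linalg.norm(M(x + d) - base)))
    return worst

# toy saliency function: elementwise magnitude (1-Lipschitz, hence robust)
x = np.linspace(-1.0, 1.0, 20)
s = max_sensitivity(lambda v: np.abs(v), x)
assert s <= 0.1 + 1e-9   # | |x+d| - |x| | <= |d|, so sensitivity <= radius
```

A lower score means a more robust saliency method for the given input.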
For $H = 1$ we consider a univariate time series; otherwise we consider a multivariate time series. Time series often include complex temporal dependencies, i.e., distinct points are not independent. Time series classification defines a mapping $X \to y$ that minimizes the error on a dataset $D = \{(X_1, y_1), \ldots, (X_N, y_N)\}$, where $N$ is the number of data samples, $X \in D$ is a time series, $y_i \in \mathbb{R}^C$ denotes the one-hot vector of the class label the input belongs to, and $C$ is the number of classes. In time series segmentation, we search for a mapping $X \to Y$ from an input sample to a dense classification $Y = [y^1, \ldots, y^T] \in \mathbb{R}^{C \times T}$ (Perslev et al., 2019), i.e., a class label is predicted for each time step. Post-hoc visual interpretation methods compute a relevance map $M_m^c \in \mathbb{R}^{H \times T}$, $M_m^c(X) = [R_1, \ldots, R_H]$, where $R_i = (r_1^i, \ldots, r_T^i)$, representing the importance of each input feature with respect to the class $c$ and a model $m$, for each time step. We use $M$ as a function to produce the saliency map. For clarity, we will omit the dependency on $m$, i.e., $M_m^c \equiv M^c$, if it is not explicitly required. An evaluation metric for visual interpretation methods defines a score $S_{\text{metric}}(\cdot)$ that rates the quality of the relevance map $M$ at a sample $X$, given a model $m$ and optional parameters. We provide a unified view, so that for all scores a higher score corresponds to a better visualization according to the respective perspective. | This study deals with the issue of interpretability in the analysis of time series data using deep learning models. Six metrics for evaluating interpretation methods are introduced and nine existing interpretation methods are evaluated. One of these metrics is newly proposed by the authors. An experimental evaluation shows that none of the methods is superior in all the metrics. | SP:175791387c30347cc7f014645095f3a2c82a6c32
Objective Evaluation of Deep Visual Interpretations on Time Series Data | 1 INTRODUCTION. Due to its high performance on complex multi-modal data, deep learning (DL) has become increasingly popular in many real-world applications that process time series data (Fawaz et al., 2019b). While we fundamentally rely on their classification accuracy in many safety-relevant applications (Berkenkamp et al., 2017), these models remain difficult to interpret. Typical applications are the monitoring of industrial processes (Löffler et al., 2021), the support of health care and sports (Dorschky et al., 2020), or safety in autonomous driving (Schmidt et al., 2021). The need for improved model understanding (Carvalho et al., 2019), along with regulatory guidelines (Goodman & Flaxman, 2017), has led to a myriad of new approaches to the visual interpretation problem (Zhang & Zhu, 2018). Post-hoc visual interpretation allows a domain expert to validate and understand how (almost) arbitrary deep learning models operate. Their central idea lies in highlighting features of the input that are "relevant" for the prediction of a learned model (Adebayo et al., 2018). Many of these techniques do not require a modification of the original model (Simonyan et al., 2014; Ribeiro et al., 2016) and are compatible with different architectures, which makes them useful as a general-purpose validation tool for neural networks across different tasks (Arrieta et al., 2020). However, while visual interpretation yields intuitive and correct explanations for images (Samek et al., 2021), the application of these methods to time series data is still an unsolved problem (Rojat et al., 2021). Time series are inherently more diverse (Rojat et al., 2021), because they may originate from a variety of different sensors and processes, and often do not allow an obvious patch- or texture-based localization of critical features for human observers.
This makes the application and the evaluation of visual interpretability methods difficult. A domain expert cannot easily judge whether explanations are correct in (i) explaining the reason for the decision process in the DL model and in (ii) capturing the real features in the dataset that lead to a correct classification. Hence, it is important not to blindly apply different visualization methods. This requires quality metrics that evaluate visual interpretations and that enable an expert to select a suitable visualization for a given model and task at hand. However, both state-of-the-art visualization techniques and metrics that evaluate visual interpretations (e.g., Pixel Flipping (Samek et al., 2017), the Sanity Check (Adebayo et al., 2018), and sensitivity checks (Rebuffi et al., 2020)) have so far only been examined on images (Rojat et al., 2021) or on NLP tasks (Arras et al., 2017). This lack of objective and subjective evaluation seriously limits their application and utility for time series. In this paper, we propose to evaluate visual interpretations for time series by combining six orthogonal metrics: "sanity" (Adebayo et al., 2018), "faithfulness" (Alvarez Melis & Jaakkola, 2018), "sensitivity" (Rebuffi et al., 2020), "robustness" (Yeh et al., 2019), "stability" (Fel & Vigouroux, 2020; Li et al., 2021), and a novel metric based on human preferences: "localization". These metrics rate and validate distinct qualities of saliency. Our metrics are based both on the functional perspective, i.e., on the model-specific operation, and on how well they represent annotators' semantics. In an extensive evaluation, we train four different architectures on two different types of tasks: the U-Time model (Perslev et al., 2019) and a bidirectional Long Short-Term Memory (bi-LSTM) (Schuster & Paliwal, 1997) on segmentation tasks, and a Fully Convolutional Network (FCN) (Long et al.
, 2015) and a Temporal Convolutional Network (TCN) (Bai et al., 2018) on classification tasks. We use diverse datasets from the UCR repository (Dau et al., 2018) (GunPointAgeSpan, FordA, FordB, ElectricDevices, MelbournePedestrian, NATOPS) and, for segmentation, the more complex real-world tool tracking dataset (Löffler et al., 2021). The experiments show the necessity of all categories to create an objective rating for methods, models, and tasks, and allow domain experts to understand, rate, and validate saliency for time series in safety-critical applications. The rest of the paper is structured as follows. Section 2 discusses background and related work. Section 3 introduces extended and novel metrics for both the classification and the segmentation task. Section 4 discusses the experimental results and Section 5 proposes recommendations. 2 BACKGROUND AND RELATED WORK. Interpretation methods for DL models can be divided into ante-hoc methods, i.e., methods that are inherently part of the model, and post-hoc methods, i.e., methods that provide the interpretation after training (Rojat et al., 2021). We focus on post-hoc methods and divide them into gradient-based and perturbation-based methods (Li et al., 2021; Warnecke et al., 2020; Ismail et al., 2020). Gradient-based methods compute the relevance of all input features by passing gradients backwards through the neural network (Ancona et al., 2018). Gradient (Simonyan et al., 2014) computes a class $c$'s saliency map $M^c$ using the derivative of the class score $P^c$ of the model with respect to the input sample $x$, as $M^c(x) = \frac{\partial P^c}{\partial x}$. The gradient indicates the importance of points in the input sequence for predictions. The advantage of gradient-based methods lies in their computational efficiency, as they use only a small number of backward passes to compute $M^c(x)$.
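To make the Gradient method concrete, here is a minimal numpy sketch for a toy linear softmax classifier, where $\partial P^c / \partial x$ has a closed form. The model, shapes, and names are illustrative assumptions; a real network would obtain the same quantity via automatic differentiation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gradient_saliency(W, x, c):
    """Closed-form saliency M^c(x) = dP^c/dx for a linear softmax model P = softmax(Wx)."""
    p = softmax(W @ x)
    # dP_c/dz_k = p_c * (1{k==c} - p_k); chain rule through the logits z = Wx
    dP_dz = p[c] * ((np.arange(len(p)) == c).astype(float) - p)
    return dP_dz @ W  # shape (T,): one relevance value per input time step

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, univariate series with T = 8 steps
x = rng.normal(size=8)
M = gradient_saliency(W, x, c=0)
assert M.shape == x.shape
```

The returned vector plays the role of the relevance map for a single-channel series.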
Perturbation-based methods perturb known input samples and measure the effects of specific perturbations on the predicted class via a forward pass through the network. For instance, Local Interpretable Model-Agnostic Explanations (LIME) (Ribeiro et al., 2016) fits a local surrogate model (e.g., a linear regression) as an explanation, and calculates relevance based on this surrogate. Perturbation-based methods are computationally expensive, as they require multiple forward passes per sample. However, they do not need gradient information and work with black-box models. 2.1 METRICS FOR SALIENCY ON TIME SERIES. Most interpretation methods were originally designed for image or text data. Understanding and comparing visual interpretations is intuitive on image data, compared to more abstract time series. Furthermore, the diversity of interpretation methods complicates an objective choice for a given model and task (Rojat et al., 2021). For example, when Wang et al. (2017) and Fawaz et al. (2019b) apply Class Activation Maps (CAM) (Zhou et al., 2016) to well-known UCR datasets (Dau et al., 2018), they notice a qualitative difference of CAM interpretations between network architectures. Similarly, other work relies on domain experts who perform a (costly) qualitative evaluation (Strodthoff & Strodthoff, 2019; Fawaz et al., 2019a; Oviedo et al., 2019). Hence, there is an increasing need for objective evaluation metrics to make interpretations measurable and comparable. Metrics. There exists a large variety of different metrics. First, the stability of saliency maps can be evaluated with respect to the sensitivity of the majority class (Yeh et al., 2019). This metric evaluates the change of the interpretation when the input samples are attacked by adversarial noise. It is a special form of perturbation that has so far only been applied to image data. Second, Rebuffi et al.
(2020) measure class sensitivity by comparing the saliency maps of a min- and a max-class. Third, Cho et al. (2020) perturb the unimportant input samples and preserve only the important inputs; if the predictions are stable compared to the unperturbed input, the interpretation is robust. Similarly, Ates et al. (2021) evaluate the local robustness of an interpretation (i.e., similar samples should lead to similar interpretations) by measuring the ratio of the change of the interpretation to the amount of input perturbation. Finally, faithfulness is a perturbation-based metric that aims to measure the relationship between saliency and predictive features (Alvarez Melis & Jaakkola, 2018). Schlegel et al. (2019) focus on continuous univariate time series: sub-sequences with high relevance values are perturbed either by swapping their samples or by replacing them with a mean value. Similarly, Ismail et al. (2020) propose input perturbations to evaluate models on synthetic time series. As they perturb inputs point-wise instead of in sub-sequences, their metric does not consider the time dependency between consecutive points. Our faithfulness metric perturbs sub-sequences of high saliency (Schlegel et al., 2019). We adapt each type of metric into our set for evaluating visual interpretations of time series. Categories of metrics. Metrics can be divided into different categories, depending on the question they answer. Doshi-Velez & Kim (2017) propose a distinction between human-grounded and functional metrics. The former involve human perception and intuition with the goal of generating qualitative, intuitive visualizations, e.g., bounding boxes in image object detection for testing the localization of saliency maps (Jianming et al., 2016), or questionnaires to indicate testers' opinions on the quality of explanations (Li et al., 2021). Functional metrics provide statistical measures, e.g.
, to aggregate performance metrics automatically (Rojat et al., 2021), and make use of proxy tasks to generate a quantitative evaluation. Another categorization (Li et al., 2021) identifies multiple broad categories for image data that we may transfer to time series, i.e., the faithfulness of salient features, the class sensitivity of an explanation, and the stability of explanations given noisy inputs. This paper proposes a framework of metrics with orthogonal categories, specific to time series. We adapt and extend metrics to (multivariate) time series and propose an intra-class stability metric and a concept of relevance localization: we build on the pointing game (Jianming et al., 2016) and combine it with the precision and recall for time series framework (Tatbul et al., 2018). To our knowledge, we are the first to apply and evaluate the sanity check (Adebayo et al., 2018) on time series. 3 SCORING CATEGORIES. We propose a set of six distinct categories (sanity, faithfulness, sensitivity, robustness, stability, and localization) to assess visual interpretation methods and to determine their performance and trustworthiness in classification or segmentation tasks on time series. For each of them we propose a metric that enables a comparative evaluation of diverse types of visual interpretation methods. Why do we need six scoring categories? It seems tempting to rely on a single metric, or on a single aggregated score across multiple metrics, to assess the quality of a visual interpretation. However, we show that the six presented categories are inherently orthogonal (see Fig. 2) and capture distinct qualities. Interpretations depend on model parameters (sanity check (Adebayo et al.
, 2018)), predictive features (faithfulness (Alvarez Melis & Jaakkola, 2018)), coherence of class predictions (intra-class stability (Fel & Vigouroux, 2020)), robustness against noise (max sensitivity with adversarial noise (Yeh et al., 2019)), specific sequences (or even points) in a time series, as in Tatbul et al. (2018) (novel localization metric), and the relevance map's specificity (inter-class sensitivity (Rebuffi et al., 2020)). It is important to assess if a given interpretation accurately captures these dependencies. We give a brief overview of our framework's functional (Fig. 1a-e) and human-grounded metrics (Fig. 1f) that we evaluate with time series data. (a) Sane saliency depends on network parameters and is structurally different after randomizing the network's weights $\rho_i$ for layers $[1, 2, 3]$ (Adebayo et al., 2018). (b) Faithful saliency correlates with predictive accuracy, and perturbing a percentage of the input sequence with high saliency decreases accuracy (Alvarez Melis & Jaakkola, 2018). (c) Saliency is sensitive when the predicted (max) class in one sample is sufficiently different from any other (min) class (Rebuffi et al., 2020). (d) Saliency is robust if small changes to the input cause only small changes to the saliency (Yeh et al., 2019). (e) Saliency is stable if it has a low variance and standard deviation over all samples of a class, with respect to a suitable distance metric (Fel & Vigouroux, 2020). (f) Saliency should be localized on the predicted class's segment ($t_0$ to $t_1$). We will define those categories on time series after introducing a unified notation. Notation. We follow the notation adapted from Fawaz et al. (2019b): a multivariate time series is defined by $X = [X_1, \ldots, X_H]$, where $H$ is the number of input channels, $X_i = (x_1^i, \ldots, x_T^i) \in \mathbb{R}^T$ is an ordered set of real values, and $T$ denotes the number of time steps.
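The faithfulness perturbation in (b) can be sketched in numpy: replace the most salient sub-sequence by the series mean, in the spirit of Schlegel et al. (2019). The window length and the window-selection rule are simplifying assumptions, not the paper's exact metric.

```python
import numpy as np

def perturb_salient_subsequence(x, r, L=10):
    """Replace the length-L sub-sequence of x with the highest summed
    relevance r by the series mean (mean-value perturbation)."""
    scores = np.convolve(r, np.ones(L), mode="valid")   # relevance sum per window
    s = int(np.argmax(scores))                          # start of most salient window
    xp = x.copy()
    xp[s:s + L] = x.mean()
    return xp

rng = np.random.default_rng(0)
x = rng.normal(size=100)
r = np.zeros(100)
r[40:50] = 1.0                  # saliency concentrated on one sub-sequence
xp = perturb_salient_subsequence(x, r)
assert np.allclose(xp[40:50], x.mean())   # the salient window was flattened
```

Feeding `xp` back through the classifier and measuring the accuracy drop yields a faithfulness score: a larger drop indicates saliency that better tracks predictive features.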
For $H = 1$ we consider a univariate time series; otherwise we consider a multivariate time series. Time series often include complex temporal dependencies, i.e., distinct points are not independent. Time series classification defines a mapping $X \to y$ that minimizes the error on a dataset $D = \{(X_1, y_1), \ldots, (X_N, y_N)\}$, where $N$ is the number of data samples, $X \in D$ is a time series, $y_i \in \mathbb{R}^C$ denotes the one-hot vector of the class label the input belongs to, and $C$ is the number of classes. In time series segmentation, we search for a mapping $X \to Y$ from an input sample to a dense classification $Y = [y^1, \ldots, y^T] \in \mathbb{R}^{C \times T}$ (Perslev et al., 2019), i.e., a class label is predicted for each time step. Post-hoc visual interpretation methods compute a relevance map $M_m^c \in \mathbb{R}^{H \times T}$, $M_m^c(X) = [R_1, \ldots, R_H]$, where $R_i = (r_1^i, \ldots, r_T^i)$, representing the importance of each input feature with respect to the class $c$ and a model $m$, for each time step. We use $M$ as a function to produce the saliency map. For clarity, we will omit the dependency on $m$, i.e., $M_m^c \equiv M^c$, if it is not explicitly required. An evaluation metric for visual interpretation methods defines a score $S_{\text{metric}}(\cdot)$ that rates the quality of the relevance map $M$ at a sample $X$, given a model $m$ and optional parameters. We provide a unified view, so that for all scores a higher score corresponds to a better visualization according to the respective perspective. | This paper studies deep neural network (DNN) explainability methods in the context of time-series data. Several metrics exist for evaluating the validity of DNN explainability methods on computer vision tasks. However, it is not clear whether these metrics are reliable when applied to DNN explainability methods on time-series data. This paper conducts experiments to compare 6 of those metrics on time-series classification and segmentation tasks. | SP:175791387c30347cc7f014645095f3a2c82a6c32
Unifying Top-down and Bottom-up for Recurrent Visual Attention | 1 INTRODUCTION. The recurrent visual attention model (Mnih et al., 2014), abbreviated as RAM, leverages reinforcement learning (Sutton & Barto, 1998) and recurrent neural networks (Schuster et al., 1997; LeCun et al., 2015) to recognize objects of interest in a sequential manner. Specifically, RAM models object recognition as a sequential decision problem based on reinforcement learning. The agent can adaptively select a series of regions based on its state and make decisions to maximize the expected return. The advantages of RAM are as follows: first, its computation time scales linearly with the patch size and the glimpse length rather than with the whole image, through a sequence of glimpses and local interactions. Second, the model is very flexible and effective; for example, it can consider both local and global features by using more glimpses, more scales, and larger patch sizes to enhance its performance. Unfortunately, the idea of glimpsing only a local region limits its understanding of the holistic view and restricts its power to capture the semantic regions in the image. For example, its initial location and patch extraction are totally random, so it takes more steps to localize the target objects. Although we can increase the patch size to cover a larger region, this increases the computation time correspondingly. At the same time, we need to set a high variance (if we use a Gaussian policy) to explore unseen regions. Consider a situation in which you were in a desert looking for water, but had no GPS or map of the whole scene. What would you do to find water? You might need to search in each direction (east, south, north, and west), which is random searching and time-consuming. In this paper, we want to provide a global view that localizes where to search.
For example, the process of human perception is to scan the whole scene and then be attracted to specific regions of interest. This high-level view can help us ignore non-important parts, focus on where to look, and further guide us to different fixations for a better understanding of the internal representation of the scene. To solve ‘where to look’ using reinforcement learning, the desired algorithm needs to satisfy the following criteria: (1) it is stable and effective; (2) it is computationally efficient; (3) it is goal-oriented, making use of global information; (4) instead of searching randomly, it attends to regions of interest even in large images; (5) it has a better exploration strategy for ‘where to look’. For a large image, the weakness of RAM is thoroughly exposed, because it relies on local fixations to search for the next location, which increases the policy uncertainty as well as the time needed to learn the best interaction sequence. Inspired by hierarchical image pyramids (Adelson et al., 1984; Lin et al., 2017) and reinforcement learning (Watkins & Dayan, 1992; Kulkarni et al., 2016), we propose to unify the top-down and bottom-up mechanisms to overcome the weaknesses of the recurrent visual attention model. At the high level, we take a top-down mechanism to extract information at multiple scales and levels of abstraction, and learn where to attend to regions of interest via reinforcement learning. At the low level, we use a similar recurrent visual attention model to localize objects. In particular, we add two further constraints to the bottom-up recurrent neural networks for better exploration. Specifically, we add the entropy of the image patches along the trajectory to enhance the policy search. In addition, we constrain the attended regions to be more related to the target objects.
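The entropy bonus mentioned above can be read as the Shannon entropy of a patch's intensity histogram; the bin count and intensity normalization in this numpy sketch are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def patch_entropy(patch, bins=16):
    """Shannon entropy of a patch's intensity histogram (intensities in [0, 1]);
    a uniform patch carries no information, a varied patch carries more."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

flat = np.full((8, 8), 0.5)                       # constant patch
noisy = np.random.default_rng(0).random((8, 8))   # varied patch
assert patch_entropy(flat) == 0.0
assert patch_entropy(noisy) > patch_entropy(flat)
```

Added to the return, such a term rewards trajectories that visit information-rich patches rather than blank regions.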
By combining the sequential information, we can understand the big picture and make better decisions while interacting with the environment. We train our model in an end-to-end reinforcement learning framework and evaluate our method on visual classification tasks including MNIST, the CIFAR-10 dataset, and the Street View House Numbers (SVHN) dataset. The experimental results outperform a convolutional neural network (CNN) baseline and the bottom-up recurrent attention model (RAM). 2 RELATED WORK. Since the recurrent model with visual attention (RAM) was proposed, many works have been inspired to use either RNNs or reinforcement learning to improve performance on computer vision problems, such as object recognition, localization, and question answering. One direction is the top-down attention mechanism. For example, Caicedo & Lazebnik (2015) extend the recurrent attention model and design an agent whose actions deform a bounding box to determine the most specific location of target objects. Xu et al. (2015) introduce an attention-based model to generate words for a given image: it partitions the image into regions and models the importance of the attention locations as latent variables, which can be learned automatically to attend to the salient regions of the image for the corresponding words. Wang et al. (2018) leverage hierarchical reinforcement learning to capture multiple fine-grained actions in sub-goals and show promising results on video captioning. Another trend is to build the interpretation of the scene bottom-up. Butko & Movellan (2009) build a Bayesian model to decide which location to attend to while interacting with local patches in a sequential process. Their idea is based on reinforcement learning, using partially observable Markov decision processes (POMDPs) to integrate patches into a high-level understanding of the whole image.
The design of RAM (Mnih et al., 2014) takes a similar approach to Butko & Movellan (2008; 2009), where reinforcement learning is used to learn ‘where to attend’. Specifically, at each step the agent only senses locally with limited bandwidth and does not observe the full environment. Compared to Butko & Movellan (2009), RAM leverages RNNs with hidden states to summarize observations for a better understanding of the environment. Similarly, drl-RPN (Pirinen & Sminchisescu, 2018) proposes a sequential attention model for object detection, which generates object proposals using deep reinforcement learning and automatically determines when to stop the search process. Other related work combines local and global information to improve performance on computer vision problems. The Attention to Context Convolutional Neural Network (AC-CNN) exploits both global and local contextual information and incorporates them effectively into a region-based CNN to improve object detection performance (Li et al., 2017). Anderson et al. (2018) propose to combine bottom-up and top-down attention mechanisms for image captioning and visual question answering. They use recurrent neural networks (specifically, LSTMs) to predict an attention distribution over image regions bottom-up; however, no reinforcement learning technique is used to adaptively focus on objects and other salient regions of interest. RAM learns to attend to local regions while interacting with the environment in a sequential decision process whose purpose is to maximize the cumulative return. However, if the scope of the local glimpse is small, we lose the high-level context information in the image when making decisions sequentially. In contrast, if we increase the patch size to the scale of the original image, we lose the attention mechanism as well as the efficiency of this model.
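For reference, a simplified numpy sketch of the multi-scale glimpse sensor under discussion: concentric patches of doubling size around a location, each pooled back to the base resolution. The patch sizes, the block-average pooling, and the absence of edge padding are all simplifying assumptions relative to the original RAM sensor.

```python
import numpy as np

def glimpse(image, loc, size=8, n_scales=3):
    """Extract n_scales concentric patches around loc, each twice as wide as the
    previous one, and pool every patch back to (size, size) by block averaging.
    Edge handling is omitted: loc must be far enough from the image border."""
    y, x = loc
    out = []
    for s in range(n_scales):
        w = size * 2**s                                   # patch width at this scale
        patch = image[y - w // 2 : y + w // 2, x - w // 2 : x + w // 2]
        k = 2**s                                          # pooling factor
        out.append(patch.reshape(size, k, size, k).mean(axis=(1, 3)))
    return np.stack(out)                                  # (n_scales, size, size)

img = np.random.default_rng(0).random((64, 64))
g = glimpse(img, loc=(32, 32))
assert g.shape == (3, 8, 8)
```

The innermost scale keeps full detail while the outer scales supply coarse context, which is the locality-versus-context trade-off described in the paragraph above.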
Moreover, when we increase the scale with a fixed patch size, we lose distinguishing features after resizing back to the original patch size. It is also a challenge to balance the Gaussian policy variance and the patch size: since the initial location is generated randomly, we need to set a large variance in the Gaussian policy to increase exploration, but this results in high instability in the learning process. In this paper, we propose to unify the bottom-up and top-down mechanisms to address the issues mentioned above. In addition, we introduce entropy into the policy gradient in order to attend to regions with high information content. 3 BACKGROUND. 3.1 REINFORCEMENT LEARNING. The objective of reinforcement learning is to maximize a cumulative return through sequential interactions between an agent and its environment (Sutton & Barto, 1998). At every time step $t$, the agent selects an action $a_t$ in the state $s_t$ according to its policy, receives a scalar reward $r_t(s_t, a_t)$, and then transits to the next state $s_{t+1}$ with probability $p(s_{t+1} \mid s_t, a_t)$. We model the agent's behavior with $\pi_\theta(a \mid s)$, a parametric distribution given by a neural network. Suppose the trajectory length is finite while the agent interacts with the environment. The return under the policy $\pi$ for a trajectory $\tau = (s_t, a_t)_{t=0}^{T}$ is

$$J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}\Big[ \sum_{t=0}^{T} \gamma^t r(s_t, a_t) \Big] = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}[R_0^T] \quad (1)$$

where $\gamma$ is the return discount factor, which is necessary to decay future rewards and ensure bounded returns. $\pi_\theta(\tau)$ denotes the distribution of trajectories below:

$$\rho(\tau) = \pi(s_0, a_0, s_1, \ldots, s_T, a_T) = p(s_0) \prod_{t=0}^{T} \pi_\theta(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t) \quad (2)$$

The goal of reinforcement learning is to learn a policy $\pi$ that maximizes the expected return:

$$\theta^* = \arg\max_\theta J(\theta) = \arg\max_\theta \mathbb{E}_{\tau \sim \pi_\theta(\tau)}[R_0^T] \quad (3)$$

Policy gradient: taking the derivative with respect to $\theta$,

$$\nabla_\theta J(\theta) = \nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta(\tau)}[R_0^T] = \nabla_\theta \int \rho(\tau)\, R_0^T \, d\tau = \int \nabla_\theta \rho(\tau)\, R_0^T \, d\tau = \int \rho(\tau) \frac{\nabla_\theta \rho(\tau)}{\rho(\tau)}\, R_0^T \, d\tau = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}[\nabla_\theta \log \pi_\theta(\tau)\, R_0^T] = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}[\nabla_\theta \log \pi_\theta(\tau)\, (R_0^T - b)] \quad (4)$$

where $b$ is a baseline that reduces the variance of the estimator without changing its expectation. Q-learning: the action-value function describes the expected return of the agent in state $s$ taking action $a$ under the policy $\pi$. The advantage of the action-value function is that it makes actions explicit, so we can select actions even in a model-free environment. After taking an action $a_t$ in state $s_t$ and thereafter following policy $\pi$, the action-value function is

$$Q^\pi(s_t, a_t) = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}[R_t \mid s_t, a_t] = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}\Big[ \sum_{i=t}^{T} \gamma^{(i-t)} r(s_i, a_i) \,\Big|\, s_t, a_t \Big] \quad (5)$$

To obtain the optimal value function, we take the maximum over policies, denoted as $Q^*(s_t, a_t) = \max_\pi Q^\pi(s_t, a_t)$, and the corresponding optimal policy can easily be derived by $\pi^*(s) \in \arg\max_{a_t} Q^*(s_t, a_t)$. | This submission proposes to offer a better initialization by using image pyramids and Q-learning (top-down manner) for the original recurrent visual attention (RAM) model (bottom-up manner). Two new constraints are also proposed for better exploration for RAM. The proposed method has been tested on several image classification datasets, including MNIST, cluttered translated MNIST, and SVHN (sequential multi-digit recognition). They also test robustness to adversarial attack (PGD attack) on CIFAR-10. | SP:1eae7d672cd3f9a61f41af1cd46082370ee5b666
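The score-function estimator of Eq. (4) can be checked numerically with a one-step categorical policy standing in for full trajectories; the bandit setup, sample count, and reward values below are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_grad(theta, rewards, rng, n=5000, baseline=0.0):
    """Monte-Carlo estimate of grad J = E[grad log pi_theta(a) * (R - b)] for a
    one-step categorical policy pi_theta = softmax(theta)."""
    grad = np.zeros_like(theta)
    for _ in range(n):
        p = softmax(theta)
        a = rng.choice(len(theta), p=p)
        dlogp = -p
        dlogp[a] += 1.0                        # grad_theta log softmax(theta)[a]
        grad += dlogp * (rewards[a] - baseline)
    return grad / n

rng = np.random.default_rng(0)
theta = np.zeros(3)
rewards = np.array([0.0, 1.0, 0.0])
g = reinforce_grad(theta, rewards, rng)
assert np.argmax(g) == 1   # the gradient pushes probability toward the rewarded action
```

Subtracting a baseline `b` leaves the direction of the expected gradient unchanged while reducing its variance, which is the role of the $(R_0^T - b)$ term in Eq. (4).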
Unifying Top-down and Bottom-up for Recurrent Visual Attention | 1 INTRODUCTION. The recurrent visual attention model (Mnih et al., 2014), abbreviated as RAM, leverages reinforcement learning (Sutton & Barto, 1998) and recurrent neural networks (Schuster et al., 1997; LeCun et al., 2015) to recognize objects of interest in a sequential manner. Specifically, RAM models object recognition as a sequential decision problem based on reinforcement learning. The agent can adaptively select a series of regions based on its state and make decisions to maximize the expected return. The advantages of RAM are as follows: first, its computation time scales linearly with the patch size and the glimpse length rather than with the whole image, through a sequence of glimpses and local interactions. Second, the model is very flexible and effective; for example, it can consider both local and global features by using more glimpses, more scales, and larger patch sizes to enhance its performance. Unfortunately, the idea of glimpsing only a local region limits its understanding of the holistic view and restricts its power to capture the semantic regions in the image. For example, its initial location and patch extraction are totally random, so it takes more steps to localize the target objects. Although we can increase the patch size to cover a larger region, this increases the computation time correspondingly. At the same time, we need to set a high variance (if we use a Gaussian policy) to explore unseen regions. Consider a situation in which you were in a desert looking for water, but had no GPS or map of the whole scene. What would you do to find water? You might need to search in each direction (east, south, north, and west), which is random searching and time-consuming. In this paper, we want to provide a global view that localizes where to search.
For example, the process of human perception is to scan the whole scene and then be attracted to specific regions of interest. At a high level, this helps us ignore unimportant parts, focus on where to look, and guides us to different fixations for a better internal representation of the scene. To solve 'where to look' using reinforcement learning, the desired algorithm needs to satisfy the following criteria: (1) it is stable and effective; (2) it is computationally efficient; (3) it is goal-oriented with global information; (4) instead of searching randomly, it attends to regions of interest even in large images; (5) it has a better exploration strategy for 'where to look'. For large images, the weakness of RAM is thoroughly exposed, because it relies on local fixations to search for the next location, which increases policy uncertainty as well as the time needed to learn the best interaction sequence. Inspired by hierarchical image pyramids Adelson et al. (1984); Lin et al. (2017) and reinforcement learning Watkins & Dayan (1992); Kulkarni et al. (2016), we propose to unify the top-down and bottom-up mechanisms to overcome the weakness of the recurrent visual attention model. At the high level, we take a top-down mechanism to extract information at multiple scales and levels of abstraction and learn where to attend to regions of interest via reinforcement learning. At the low level, we use a recurrent visual attention model similar to RAM to localize objects. In particular, we add two further constraints to the bottom-up recurrent neural network for better exploration. Specifically, we add the entropy of the image patch in the trajectory to enhance the policy search. In addition, we constrain the attended region to be more related to the target objects.
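The "entropy of the image patch" used above as an exploration signal can be computed from the patch's intensity histogram. A minimal sketch; the bin count and the [0, 1] intensity range are illustrative assumptions.

```python
import numpy as np

def patch_entropy(patch, num_bins=32):
    """Shannon entropy (in nats) of the intensity histogram of a patch.
    High-entropy patches carry more information content, so rewarding them
    steers the policy toward informative regions."""
    hist, _ = np.histogram(patch, bins=num_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins; 0 * log 0 := 0
    return float(-(p * np.log(p)).sum())

flat = np.full((8, 8), 0.5)           # uniform patch: a single occupied bin
noisy = np.random.default_rng(0).random((8, 8))
print(patch_entropy(flat), patch_entropy(noisy))
```

A flat background patch scores zero while a textured patch scores high, so adding this term to the return biases fixations away from empty regions.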
By combining the sequential information, we can understand the big picture and make better decisions while interacting with the environment. We train our model in an end-to-end reinforcement learning framework and evaluate it on visual classification tasks including MNIST, the CIFAR-10 dataset, and the Street View House Numbers (SVHN) dataset. The experimental results show that our method outperforms the convolutional neural network (CNN) baseline and the bottom-up recurrent attention model (RAM). 2 RELATED WORK . Since the recurrent model of visual attention (RAM) was proposed, many works have been inspired to use either RNNs or reinforcement learning to improve performance on computer vision problems such as object recognition, localization, and question answering. One direction is the top-down attention mechanism. For example, Caicedo and Lazebnik extend the recurrent attention model and design an agent with actions that deform a bounding box to determine the most specific location of target objects Caicedo & Lazebnik (2015). Xu et al. (2015) introduce an attention-based model to generate words for a given image. Basically, it partitions the image into regions and then models the importance of the attention locations as latent variables, which can be automatically learned to attend to the salient regions of the image for the corresponding words. Wang et al. (2018) leverage hierarchical reinforcement learning to capture multiple fine-grained actions in sub-goals and show promising results on video captioning. Another trend is to build the interpretation of the scene bottom-up. Butko & Movellan (2009) build a Bayesian model to decide the location to attend to while interacting with local patches in a sequential process. The idea is based on reinforcement learning, in particular partially observable Markov decision processes (POMDPs), to integrate patches into a high-level understanding of the whole image.
The design of RAM Mnih et al. (2014) takes an approach similar to Butko & Movellan (2008; 2009), where reinforcement learning is used to learn 'where to attend'. Specifically, at each step the agent only senses locally with limited bandwidth and does not observe the full environment. Compared with Butko & Movellan (2009), RAM leverages RNNs with hidden states to summarize observations for a better understanding of the environment. Similarly, drl-RPN Pirinen & Sminchisescu (2018) proposes a sequential attention model for object detection, which generates object proposals using deep reinforcement learning and automatically determines when to stop the search process. Other related works combine local and global information to improve performance on computer vision problems. The Attention to Context Convolutional Neural Network (AC-CNN) exploits both global and local contextual information and incorporates them effectively into a region-based CNN to improve object detection performance Li et al. (2017). Anderson et al. (2018) propose to combine bottom-up and top-down attention mechanisms for image captioning and visual question answering. They use recurrent neural networks (specifically, LSTMs) to predict an attention distribution over image regions bottom-up. However, no reinforcement learning technique is used to adaptively focus on objects and other salient image regions of interest. RAM learns to attend to local regions while interacting with the environment in a sequential decision process with the purpose of maximizing the cumulative return. However, if the scope of the local glimpse is small, we lose high-level context information when making decisions in a sequential manner. In contrast, if we increase the patch size to a scale as large as the original image, we lose the attention mechanism as well as the efficiency of the model.
Moreover, when we increase the scale with a fixed patch size, we lose distinguishing features after resizing back to the original patch size. It is also a challenge to balance the Gaussian policy variance and the patch size. Since the initial location is randomly generated, we need to set a large variance in the Gaussian policy to increase exploration; however, this results in high instability in the learning process. In this paper, we propose to unify the bottom-up and top-down mechanisms to address the issues mentioned above. In addition, we introduce entropy into the policy gradient in order to attend to regions with high information content. 3 BACKGROUND . 3.1 REINFORCEMENT LEARNING . The objective of reinforcement learning is to maximize a cumulative return through sequential interactions between an agent and its environment Sutton & Barto (1998). At every time step t, the agent selects an action a_t in state s_t according to its policy, receives a scalar reward r_t(s_t, a_t), and then transitions to the next state s_{t+1} with probability p(s_{t+1} | s_t, a_t). We model the agent's behavior with π_θ(a|s), a parametric distribution given by a neural network. Suppose the trajectory has a finite length T while the agent interacts with the environment. The return under the policy π for a trajectory τ = (s_t, a_t)_{t=0}^{T} is J(θ) = E_{τ∼π_θ(τ)}[∑_{t=0}^{T} γ^t r(s_t, a_t)] = E_{τ∼π_θ(τ)}[R_0^T] (1) where γ is the return discount factor, which decays future rewards and ensures bounded returns. π_θ(τ) denotes the distribution of trajectories: ρ(τ) = π(s_0, a_0, s_1, ..., s_T, a_T) = p(s_0) ∏_{t=0}^{T} π_θ(a_t|s_t) p(s_{t+1}|s_t, a_t) (2) The goal of reinforcement learning is to learn a policy π that maximizes the expected return: θ = argmax_θ J(θ) = argmax_θ E_{τ∼π_θ(τ)}[R_0^T] (3) Policy gradient: Take the derivative w.r.t.
θ: ∇_θ J(θ) = ∇_θ E_{τ∼π_θ(τ)}[R_0^T] = ∇_θ ∫ ρ(τ) R_0^T dτ = ∫ ∇_θ ρ(τ) R_0^T dτ = ∫ ρ(τ) (∇_θ ρ(τ) / ρ(τ)) R_0^T dτ = E_{τ∼π_θ(τ)}[∇_θ log π_θ(τ) R_0^T] = E_{τ∼π_θ(τ)}[∇_θ log π_θ(τ) (R_0^T − b)] (4) Q-learning: The action-value function describes the expected return of the agent in state s with action a under the policy π. Its advantage is that it makes actions explicit, so we can select actions even in a model-free setting. After taking an action a_t in state s_t and thereafter following policy π, the action-value function is defined as: Q^π(s_t, a_t) = E_{τ∼π_θ(τ)}[R_t | s_t, a_t] = E_{τ∼π_θ(τ)}[∑_{i=t}^{T} γ^{(i−t)} r(s_i, a_i) | s_t, a_t] (5) The optimal action-value function is the maximum over policies, denoted Q*(s_t, a_t) = max_π Q^π(s_t, a_t), and the corresponding optimal policy can easily be derived as π*(s) ∈ argmax_{a_t} Q*(s_t, a_t). | This paper extends the recurrent attention model (RAM) with an additional top-down attention mechanism. Specifically, they exploit image pyramids and Q-learning to select regions of interest first in the top-down attention mechanism, and then follow RAM in using policy gradient to find the patch in the bottom-up attention. Meanwhile, they also propose two loss constraints to further boost the performance of the bottom-up recurrent neural networks. The proposed framework is end-to-end. Experiments on three datasets (MNIST, CIFAR-10, and SVHN) have demonstrated the effectiveness of the proposed model. | SP:1eae7d672cd3f9a61f41af1cd46082370ee5b666
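The final line of equation (4) is the score-function (REINFORCE) estimator with a baseline b. A minimal sketch on a two-armed bandit with a softmax policy; the learning rates, the running-average baseline, and the toy reward vector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                       # logits of a 2-action categorical policy
rewards = np.array([0.0, 1.0])            # one-step "trajectories": action 1 is better
lr, baseline = 0.1, 0.0

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(500):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    R = rewards[a]
    # grad of log pi(a) for a softmax policy: one_hot(a) - probs
    grad_log_pi = np.eye(2)[a] - probs
    theta += lr * grad_log_pi * (R - baseline)     # eq. (4) with baseline b
    baseline += 0.05 * (R - baseline)              # running-average baseline

print(softmax(theta)[1])  # probability of the better action grows toward 1
```

Subtracting the baseline b leaves the gradient unbiased (the subtracted term has zero expectation) but reduces its variance, which is exactly why the last equality in (4) holds.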
Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning | 1 INTRODUCTION . Offline or batch reinforcement learning (RL) deals with the training of RL agents from fixed datasets generated by possibly unknown behavior policies, without any interaction with the environment. This is important in problems like robotics, autonomous driving, and healthcare, where data collection can be expensive or dangerous. Offline RL has been challenging for model-free RL methods due to extrapolation error, where the Q networks predict unrealistic values when evaluated on out-of-distribution state-action pairs (Fujimoto et al., 2019). Recent methods overcome this issue by constraining the policy to stay close to the behavior policy that generated the offline data distribution (Fujimoto et al., 2019; Kumar et al., 2020; Kostrikov et al., 2021; Fujimoto & Gu, 2021), and can demonstrate even better performance than the behavior policy on several simulated and real-world tasks (Siegel et al., 2020; Singh et al., 2020; Nair et al., 2020). However, the performance of pre-trained policies is limited by the quality of the offline dataset, and it is often necessary or desirable to fine-tune them by interacting with the environment. Offline-to-online learning also reduces the risks of online interaction, since offline pre-training yields reasonable policies that can be tested before deployment. In practice, offline RL methods often fail during online fine-tuning by interacting with the environment. This offline-to-online RL setting is challenging due to: (i) the sudden distribution shift from offline data to online data, which can lead to severe bootstrapping errors that completely distort the pre-trained policy, causing a sudden performance drop from the very beginning of online fine-tuning; and (ii) the constraints enforced by offline RL methods that keep the policy close to the behavior policy.
While these constraints help in dealing with the sudden distribution shift, they significantly slow down online fine-tuning from newly collected samples. We propose to adaptively weight the offline RL constraints, such as the behavior cloning loss, during online fine-tuning. This can prevent sudden performance collapses due to the distribution shift while also enabling sample-efficient learning from newly collected samples. We propose to perform this adaptive weighting according to the agent's performance and the training stability. We start with TD3+BC, a simple offline RL algorithm recently proposed by Fujimoto & Gu (2021), which combines TD3 (Fujimoto et al., 2018) with a simple behavior cloning loss weighted by an α hyperparameter. We adaptively adjust this α hyperparameter using a control mechanism similar to a proportional-derivative (PD) controller. The α value is decided based on two components: the difference between the current return and the target return (proportional term), as well as the change of return between the current episode and the last episode (derivative term). We demonstrate that these simple modifications lead to stable online fine-tuning after offline pre-training on datasets of different quality. We also use a randomized ensemble of Q functions (Chen et al., 2021) to further improve sample efficiency. We attain state-of-the-art online fine-tuning performance on locomotion tasks from the popular D4RL benchmark. 2 RELATED WORK . Offline RL. Offline RL aims to learn a policy from pre-collected fixed datasets without interacting with the environment (Lange et al., 2012; Agarwal et al., 2020; Fujimoto et al., 2019; Kumar et al., 2019; Nachum et al., 2019; Siegel et al., 2020; Levine et al., 2020; Peng et al., 2019). Off-policy RL algorithms allow for the reuse of off-policy data (Konda & Tsitsiklis, 2000; Degris et al., 2012; Haarnoja et al., 2018; Silver et al., 2014; Lillicrap et al.
, 2015; Fujimoto et al., 2018; Mnih et al., 2015), but they typically fail when trained offline on a fixed dataset, even if it is collected by a policy trained using the same algorithm (Fujimoto et al., 2019; Kumar et al., 2019). In actor-critic methods, this is due to extrapolation error of the critic network on out-of-distribution state-action pairs Levine et al. (2020). Offline RL methods deal with this by constraining the policy to stay close to the behavior policy that collected the offline dataset. BRAC (Wu et al., 2019) achieves this by minimizing the Kullback-Leibler divergence between the behavior policy and the learned policy. BEAR (Kumar et al., 2019) minimizes the MMD distance between the two policies. TD3+BC (Fujimoto & Gu, 2021) proposes a simple yet efficient offline RL algorithm by adding an additional behavior cloning loss to the actor update. Another class of offline RL methods learns conservative Q functions, which prevents the policy network from exploiting out-of-distribution actions and forces it to stay close to the behavior policy. CQL (Kumar et al., 2020) changes the critic objective to also minimize the Q function on unseen actions. Fisher-BRC (Kostrikov et al., 2021) achieves conservative Q learning by constraining the gradient of the Q function on unseen data. Model-based offline RL methods (Yu et al., 2020; Kidambi et al., 2020) train policies on data generated by ensembles of dynamics models learned from offline data, while constraining the policy to stay within samples where the dynamics model is certain. In this paper, we focus on offline-to-online RL with the goal of stable and sample-efficient online fine-tuning of policies pre-trained on offline datasets of different quality. Offline pre-training in RL. Pre-training has been vastly investigated in the machine learning community, from computer vision (Sharif Razavian et al., 2014; Donahue et al.
, 2014; Yosinski et al., 2014) to natural language processing (Devlin et al., 2018; Turian et al., 2010). Offline pre-training in RL could enable the deployment of RL methods in domains where data collection can be expensive or dangerous. Silver et al. (2016); Gupta et al. (2019); Rajeswaran et al. (2017) pre-train the policy network with imitation learning to speed up RL. QT-Opt (Kalashnikov et al., 2018) studies vision-based object manipulation using a diverse and large dataset collected by seven robots over several months and fine-tunes the policy with 27K samples of online data. However, these methods pre-train using diverse, large, or expert datasets, and it is also important to investigate the possibility of pre-training from offline datasets of different quality. Yang & Nachum (2021); Ajay et al. (2020) use offline pre-training to accelerate downstream tasks. AWAC (Nair et al., 2020) and Balanced Replay Lee et al. (2021) are recent works that also focus on offline-to-online RL from datasets of different quality. AWAC updates the policy network such that it is constrained during offline training while not too conservative during fine-tuning. Balanced Replay trains an additional neural network to prioritize samples in order to effectively use new data as well as near-on-policy samples in the offline dataset. We compare with AWAC and Balanced Replay and attain state-of-the-art offline-to-online RL performance on the popular D4RL benchmark. Ensembles in RL. Ensemble methods are widely used for better performance in RL (Faußer & Schwenker, 2015; Osband et al., 2016; Chua et al., 2018; Janner et al., 2019). In model-based RL, PETS (Chua et al., 2018) and MBPO (Janner et al., 2019) use probabilistic ensembles to effectively model the dynamics of the environment. In model-free RL, ensembles of Q functions have been shown to improve performance (Anschel et al., 2017; Lan et al., 2020). REDQ (Chen et al.
, 2021) learns a randomized ensemble of Q functions to achieve sample efficiency similar to model-based methods without learning a dynamics model. We utilize REDQ in this work for improved sample efficiency during online fine-tuning. Specific to offline RL, REM (Agarwal et al., 2020) uses random convex combinations of multiple Q-value estimates to calculate the Q targets for effective offline RL on Atari games. MOPO (Yu et al., 2020) uses probabilistic ensembles from PETS to learn policies from offline data using uncertainty estimates based on model disagreement. MBOP (Argenson & Dulac-Arnold, 2020) uses ensembles of dynamics models, Q functions, and policy networks to get better performance on locomotion tasks. Balanced Replay (Lee et al., 2021) uses ensembles of pessimistic Q functions to mitigate instability caused by distribution shift in offline-to-online RL. While ensembling of Q functions has been studied by several prior works (Lan et al., 2020; Chen et al., 2021), we combine it with a behavior cloning loss for the purpose of robust and sample-efficient offline-to-online RL. Adaptive balancing of multiple objectives in RL. Ball et al. (2020) train policies using learned dynamics models with the objective of visiting states that most likely lead to subsequent improvement in the dynamics model, using active online learning. They adaptively weight the maximization of cumulative rewards and the minimization of model uncertainty using an online learning mechanism based on the exponential weights algorithm. In this paper, we focus on offline-to-online RL using model-free methods and propose to adaptively weight the maximization of cumulative rewards against a behavior cloning loss. Exploration of other online learning algorithms, such as the exponential weights algorithm, is a line of future work. 3 BACKGROUND . 3.1 REINFORCEMENT LEARNING . Reinforcement learning (RL) deals with sequential decision making to maximize cumulative rewards.
RL problems are often formalized as Markov decision processes (MDPs). An MDP consists of a set of states S, a set of actions A, transition dynamics s_{t+1} ∼ p(·|s_t, a_t) giving the probability of transitioning to state s_{t+1} by taking action a_t in state s_t at timestep t, a scalar reward function r_t = R(s_t, a_t), and a discount factor γ ∈ [0, 1]. A policy function π of an RL agent is a mapping from states to actions and defines the behavior of the agent. The value function V^π(s) of a policy π is defined as the expected cumulative reward from state s: V^π(s) = E[∑_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s], where the expectation is taken over state transitions s_{t+1} ∼ p(·|s_t, a_t) and the policy a_t ∼ π(s_t). Similarly, the state-action value function Q^π(s, a) is defined as the expected cumulative reward after taking action a in state s: Q^π(s, a) = E[∑_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s, a_0 = a]. The goal of RL is to learn an optimal policy π_θ with parameters θ that maximizes the expected cumulative reward: π_θ = argmax_θ E_{s∼S}[V^{π_θ}(s)] = argmax_θ E_{s∼S}[Q^{π_θ}(s, π_θ(s))]. We use the TD3 algorithm for reinforcement learning (Fujimoto et al., 2018). TD3 is an actor-critic method that alternately trains: (i) the critic network Q_φ to estimate the Q^{π_θ}(s, a) values of the policy network π_θ, and (ii) the policy network to produce actions that maximize the Q function: ∇_θ Q_φ(s, π_θ(s)). | This paper proposes a new offline RL with online fine-tuning method. The authors first pre-train the policy using the recent offline RL method TD3+BC on offline data and then collect on-policy data to further improve the pre-trained policy.
To prevent the policy from either degrading in performance or failing to improve at the fine-tuning stage, the authors propose an automatic scheme to adjust the $\alpha$ term that controls the BC loss in TD3+BC, ensuring that the policy does not continue to constrain itself to the behavior policy if there is room for improvement, and is also able to stick to the previous policy if the performance is already near-optimal. The authors conduct evaluations in D4RL MuJoCo environments and show that the approach is able to outperform prior methods in halfcheetah. | SP:c3b9fafd3676ec970b5d2a7dbb9d92b1ffda5959
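The adaptive behavior-cloning weight described above — a PD-style controller on episode returns that adjusts the α term in TD3+BC — might be sketched as follows. The gain values, clipping range, and the toy returns are illustrative assumptions, not the paper's actual hyperparameters.

```python
def update_alpha(alpha, current_return, last_return, target_return,
                 kp=1e-4, kd=1e-4, alpha_min=0.0, alpha_max=1.0):
    """PD-style update of the behavior-cloning weight alpha in TD3+BC.

    If the agent falls short of the target return (proportional term) or its
    return is dropping (derivative term), raise alpha to stay closer to the
    behavior policy; otherwise relax the constraint to allow improvement."""
    p_term = target_return - current_return          # performance gap
    d_term = last_return - current_return            # negative of recent progress
    alpha += kp * p_term + kd * d_term
    return max(alpha_min, min(alpha_max, alpha))     # keep alpha in a sane range

# Returns improving past the target -> alpha shrinks (constraint relaxes).
a = update_alpha(0.4, current_return=3000, last_return=2500, target_return=2000)
print(a)
```

When the agent outperforms the target and keeps improving, both terms are negative and α decays, letting the RL objective dominate; a sudden return drop flips the derivative term and tightens the constraint again.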
Adaptive Behavior Cloning Regularization for Stable Offline-to-Online Reinforcement Learning | 1 INTRODUCTION . Offline or batch reinforcement learning (RL) deals with the training of RL agents from fixed datasets generated by possibly unknown behavior policies, without any interaction with the environment. This is important in problems like robotics, autonomous driving, and healthcare, where data collection can be expensive or dangerous. Offline RL has been challenging for model-free RL methods due to extrapolation error, where the Q networks predict unrealistic values when evaluated on out-of-distribution state-action pairs (Fujimoto et al., 2019). Recent methods overcome this issue by constraining the policy to stay close to the behavior policy that generated the offline data distribution (Fujimoto et al., 2019; Kumar et al., 2020; Kostrikov et al., 2021; Fujimoto & Gu, 2021), and can demonstrate even better performance than the behavior policy on several simulated and real-world tasks (Siegel et al., 2020; Singh et al., 2020; Nair et al., 2020). However, the performance of pre-trained policies is limited by the quality of the offline dataset, and it is often necessary or desirable to fine-tune them by interacting with the environment. Offline-to-online learning also reduces the risks of online interaction, since offline pre-training yields reasonable policies that can be tested before deployment. In practice, offline RL methods often fail during online fine-tuning by interacting with the environment. This offline-to-online RL setting is challenging due to: (i) the sudden distribution shift from offline data to online data, which can lead to severe bootstrapping errors that completely distort the pre-trained policy, causing a sudden performance drop from the very beginning of online fine-tuning; and (ii) the constraints enforced by offline RL methods that keep the policy close to the behavior policy.
While these constraints help in dealing with the sudden distribution shift, they significantly slow down online fine-tuning from newly collected samples. We propose to adaptively weight the offline RL constraints, such as the behavior cloning loss, during online fine-tuning. This can prevent sudden performance collapses due to the distribution shift while also enabling sample-efficient learning from newly collected samples. We propose to perform this adaptive weighting according to the agent's performance and the training stability. We start with TD3+BC, a simple offline RL algorithm recently proposed by Fujimoto & Gu (2021), which combines TD3 (Fujimoto et al., 2018) with a simple behavior cloning loss weighted by an α hyperparameter. We adaptively adjust this α hyperparameter using a control mechanism similar to a proportional-derivative (PD) controller. The α value is decided based on two components: the difference between the current return and the target return (proportional term), as well as the change of return between the current episode and the last episode (derivative term). We demonstrate that these simple modifications lead to stable online fine-tuning after offline pre-training on datasets of different quality. We also use a randomized ensemble of Q functions (Chen et al., 2021) to further improve sample efficiency. We attain state-of-the-art online fine-tuning performance on locomotion tasks from the popular D4RL benchmark. 2 RELATED WORK . Offline RL. Offline RL aims to learn a policy from pre-collected fixed datasets without interacting with the environment (Lange et al., 2012; Agarwal et al., 2020; Fujimoto et al., 2019; Kumar et al., 2019; Nachum et al., 2019; Siegel et al., 2020; Levine et al., 2020; Peng et al., 2019). Off-policy RL algorithms allow for the reuse of off-policy data (Konda & Tsitsiklis, 2000; Degris et al., 2012; Haarnoja et al., 2018; Silver et al., 2014; Lillicrap et al.
, 2015; Fujimoto et al., 2018; Mnih et al., 2015), but they typically fail when trained offline on a fixed dataset, even if it is collected by a policy trained using the same algorithm (Fujimoto et al., 2019; Kumar et al., 2019). In actor-critic methods, this is due to extrapolation error of the critic network on out-of-distribution state-action pairs Levine et al. (2020). Offline RL methods deal with this by constraining the policy to stay close to the behavior policy that collected the offline dataset. BRAC (Wu et al., 2019) achieves this by minimizing the Kullback-Leibler divergence between the behavior policy and the learned policy. BEAR (Kumar et al., 2019) minimizes the MMD distance between the two policies. TD3+BC (Fujimoto & Gu, 2021) proposes a simple yet efficient offline RL algorithm by adding an additional behavior cloning loss to the actor update. Another class of offline RL methods learns conservative Q functions, which prevents the policy network from exploiting out-of-distribution actions and forces it to stay close to the behavior policy. CQL (Kumar et al., 2020) changes the critic objective to also minimize the Q function on unseen actions. Fisher-BRC (Kostrikov et al., 2021) achieves conservative Q learning by constraining the gradient of the Q function on unseen data. Model-based offline RL methods (Yu et al., 2020; Kidambi et al., 2020) train policies on data generated by ensembles of dynamics models learned from offline data, while constraining the policy to stay within samples where the dynamics model is certain. In this paper, we focus on offline-to-online RL with the goal of stable and sample-efficient online fine-tuning of policies pre-trained on offline datasets of different quality. Offline pre-training in RL. Pre-training has been vastly investigated in the machine learning community, from computer vision (Sharif Razavian et al., 2014; Donahue et al.
, 2014; Yosinski et al., 2014) to natural language processing (Devlin et al., 2018; Turian et al., 2010). Offline pre-training in RL could enable the deployment of RL methods in domains where data collection can be expensive or dangerous. Silver et al. (2016); Gupta et al. (2019); Rajeswaran et al. (2017) pre-train the policy network with imitation learning to speed up RL. QT-Opt (Kalashnikov et al., 2018) studies vision-based object manipulation using a diverse and large dataset collected by seven robots over several months and fine-tunes the policy with 27K samples of online data. However, these methods pre-train using diverse, large, or expert datasets, and it is also important to investigate the possibility of pre-training from offline datasets of different quality. Yang & Nachum (2021); Ajay et al. (2020) use offline pre-training to accelerate downstream tasks. AWAC (Nair et al., 2020) and Balanced Replay Lee et al. (2021) are recent works that also focus on offline-to-online RL from datasets of different quality. AWAC updates the policy network such that it is constrained during offline training while not too conservative during fine-tuning. Balanced Replay trains an additional neural network to prioritize samples in order to effectively use new data as well as near-on-policy samples in the offline dataset. We compare with AWAC and Balanced Replay and attain state-of-the-art offline-to-online RL performance on the popular D4RL benchmark. Ensembles in RL. Ensemble methods are widely used for better performance in RL (Faußer & Schwenker, 2015; Osband et al., 2016; Chua et al., 2018; Janner et al., 2019). In model-based RL, PETS (Chua et al., 2018) and MBPO (Janner et al., 2019) use probabilistic ensembles to effectively model the dynamics of the environment. In model-free RL, ensembles of Q functions have been shown to improve performance (Anschel et al., 2017; Lan et al., 2020). REDQ (Chen et al.
, 2021) learns a randomized ensemble of Q functions to achieve sample efficiency similar to model-based methods without learning a dynamics model. We utilize REDQ in this work for improved sample efficiency during online fine-tuning. Specific to offline RL, REM (Agarwal et al., 2020) uses random convex combinations of multiple Q-value estimates to calculate the Q targets for effective offline RL on Atari games. MOPO (Yu et al., 2020) uses probabilistic ensembles from PETS to learn policies from offline data using uncertainty estimates based on model disagreement. MBOP (Argenson & Dulac-Arnold, 2020) uses ensembles of dynamics models, Q functions, and policy networks to get better performance on locomotion tasks. Balanced Replay (Lee et al., 2021) uses ensembles of pessimistic Q functions to mitigate instability caused by distribution shift in offline-to-online RL. While ensembling of Q functions has been studied by several prior works (Lan et al., 2020; Chen et al., 2021), we combine it with a behavior cloning loss for the purpose of robust and sample-efficient offline-to-online RL. Adaptive balancing of multiple objectives in RL. Ball et al. (2020) train policies using learned dynamics models with the objective of visiting states that most likely lead to subsequent improvement in the dynamics model, using active online learning. They adaptively weight the maximization of cumulative rewards and the minimization of model uncertainty using an online learning mechanism based on the exponential weights algorithm. In this paper, we focus on offline-to-online RL using model-free methods and propose to adaptively weight the maximization of cumulative rewards against a behavior cloning loss. Exploration of other online learning algorithms, such as the exponential weights algorithm, is a line of future work. 3 BACKGROUND . 3.1 REINFORCEMENT LEARNING . Reinforcement learning (RL) deals with sequential decision making to maximize cumulative rewards.
RL problems are often formalized as Markov decision processes (MDPs). An MDP consists of a set of states S, a set of actions A, transition dynamics s_{t+1} ∼ p(·|s_t, a_t) giving the probability of transitioning to state s_{t+1} by taking action a_t in state s_t at timestep t, a scalar reward function r_t = R(s_t, a_t), and a discount factor γ ∈ [0, 1]. A policy function π of an RL agent is a mapping from states to actions and defines the behavior of the agent. The value function V^π(s) of a policy π is defined as the expected cumulative reward from state s: V^π(s) = E[∑_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s], where the expectation is taken over state transitions s_{t+1} ∼ p(·|s_t, a_t) and the policy a_t ∼ π(s_t). Similarly, the state-action value function Q^π(s, a) is defined as the expected cumulative reward after taking action a in state s: Q^π(s, a) = E[∑_{t=0}^{∞} γ^t R(s_t, a_t) | s_0 = s, a_0 = a]. The goal of RL is to learn an optimal policy π_θ with parameters θ that maximizes the expected cumulative reward: π_θ = argmax_θ E_{s∼S}[V^{π_θ}(s)] = argmax_θ E_{s∼S}[Q^{π_θ}(s, π_θ(s))]. We use the TD3 algorithm for reinforcement learning (Fujimoto et al., 2018). TD3 is an actor-critic method that alternately trains: (i) the critic network Q_φ to estimate the Q^{π_θ}(s, a) values of the policy network π_θ, and (ii) the policy network to produce actions that maximize the Q function: ∇_θ Q_φ(s, π_θ(s)). | This paper studies the fine-tuning problem from offline to online RL. While the naive approach of fine-tuning an offline policy suffers from a sudden distributional shift caused by online samples and from too much behavior constraint in offline algorithms, the proposed method leverages (1) adaptive coefficient tuning of the TD3+BC loss, (2) randomized ensembles of Q-functions (proposed by Chen et al. 2021), and (3) down-sampling of offline data.
The experiments seem to show better or competitive results compared to other approaches. | SP:c3b9fafd3676ec970b5d2a7dbb9d92b1ffda5959 |
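The value-function definition in the background section above can be sanity-checked numerically in the simplest setting (a toy check we added, not from the paper): for a constant reward r, the geometric series gives V = r/(1 − γ).

```python
def discounted_return(rewards, gamma):
    """Finite-horizon discounted return: sum_t gamma^t * r_t."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Constant reward r = 1, gamma = 0.9: the infinite-horizon value is
# r / (1 - gamma) = 10; a horizon of 500 steps approximates it closely.
v = discounted_return([1.0] * 500, 0.9)
```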
Two Instances of Interpretable Neural Network for Universal Approximations | 1 INTRODUCTION . Artificial neural networks ( NN ) have recently seen successful applications in many fields . Modern deep neural network ( DNN ) architecture , usually trained through the backpropagation mechanism , has been called a black-box because of its lack of interpretability . To tackle this issue , various studies have been performed to understand how a NN works ; see the following surveys Arrieta et al . ( 2020 ) ; Gilpin et al . ( 2018 ) ; Tjoa & Guan ( 2020 ) ; Wiegreffe & Marasović ( 2021 ) . This paper primarily proposes two interpretable models , namely triangularly-constructed NN ( TNN ) and Semi-Quantized Activation NN ( SQANN ) . Both possess the following three notable properties : ( 1 ) Resistance to catastrophic forgetting . ( 2 ) Mathematical proofs for arbitrarily high accuracy on training datasets ; experimentally demonstrable with python code and simple common datasets ( see supp . materials ) . ( 3 ) Detection of out-of-distribution samples through weak activations . Concept disambiguation . Several concepts have multiple possible definitions . We clarify the definitions used in this paper . 1 . Interpretability . We consider only fine-grained interpretation , i.e . we look at the meaning of each single neuron or its activation in our models . 2 . Universal approximation . Readers might be familiar with universal approximation of functions on certain conditions , e.g . compact sets . Our models can be more general , e.g . user can freely choose the interpolation function between 2 activations based on knowledge of the local manifold . The function can even be pathological . See appendix A.2.1 . 3 . Catastrophic forgetting : the tendency for knowledge of previously learned dataset to be abruptly lost as information relevant to a new dataset is incorporated . This definition is a slightly nuanced version of Kirkpatrick et al . ( 2017 ) . 
Hence , our models ’ resistance to catastrophic forgetting is the following . Given a training dataset D accurately modeled by an architecture M , a new dataset D′ ( especially new , out of distribution dataset ) can be incorporated into M without losing accuracy on D. See appendix A.2.2 ( analogy to biological system included ) . Related works and interpretability issues . Recent remarkable studies on universal approximators include the Deep Narrow Network by Kidger & Lyons ( 2020 ) , DeepONet universal approximation for operators by Lu et al . ( 2021 ) and the Broad Learning System by Chen et al . ( 2019 ) ; Hanin ( 2019 ) ; Park et al . ( 2021 ) ; Johnson ( 2019 ) . While insightful , they do not directly address the eXplainable Artificial Intelligence ( XAI ) issue , especially the blackbox property of the DNN . Similarly , a number of classical papers provide theoretical insights for NN as universal approximators , but interpretability , transparency and fairness issues are not their main focus . The universal approximation theorem by Cybenko ( 1989 ) asserts that a NN with a single hidden layer can approximate any function to arbitrarily small error under common conditions , proven by asserting the density of that set of NN in the function space using classic mathematical theorems . In particular , its theorem 1 uses an abstract proof by contradiction . From the proof , it is not easy to observe the internal mechanism of a NN in a straight-forward manner ; consequently modern works that depend on it ( e.g . Deep Narrow Network ) might inherit the blackbox property . Bottom-up constructions for function approximation using NN then emerged , though they also lack the interpretability ( see appendix A.3 for more related works ) . Also consider a demonstration in Nielsen ( 2015 ) that could help improve our understanding of universal approximation . Outline . This paper is arranged as the following . 
Section 2 shows the explicit TNN construction and related results , including a pencil-and-paper example for pedagogical purposes . Likewise , section 3 shows the SQANN construction , statements regarding SQANN , another pencil-and-paper example , then experimental results of its application , before we conclude the paper with limitations and future works . Python codes and clearer figures are fully available in supp . materials ( also see appendix ) . 2 TRIANGULARLY-CONSTRUCTED NN ( TNN ) . TNN is the prototype NN for our interpretable universal approximator . SQANN ( next section ) partially borrows the concept from TNN , which will be useful as an easy and manageable illustration to deliver the following ideas : ( 1 ) organized activations of neurons and ( 2 ) the retrieval of $\alpha$ values as the outputs . The model is $\mathrm{TNN}(x) = \alpha^T \sigma(Wx + b)$ where $x \in [0 , 1]$ , $\alpha , b \in \mathbb{R}^N$ and $W \in \mathbb{R}^N$ , and where we use the sigmoid function $\sigma$ for simplicity . We start with a simple scalar function $y = f(x) \in \mathbb{R}$ for $x \in [0 , 1]$ , thus TNN 's interpretability can be illustrated very clearly . Assumption : Linear Ordering . It is constructed on a linearly ordered dataset containing $N$ samples $\{(x^{(k)} , y^{(k)}) \in \mathbb{R}^n \times \mathbb{R}\}_{k=1}^{N}$ such that $x^{(N)} < x^{(N-1)} < \cdots < x^{(1)}$ and $y^{(k)} = f(x^{(k)})$ , with $f$ the true function that TNN will approximate . The interpretability comes from the linear ordering property where a higher value of $x$ ( $\approx 1$ ) will activate more neurons while lower values will activate fewer neurons , as shown in fig 1 ( A ) . Then the $\alpha$ values will be retrieved in a continuous way through a dot product , eventually used to compute the output for prediction . In time series , such as ECG ( Electrocardiogram ) , signals can be approximated point-wise ( although it is still preferable to have a noise model during preprocessing to prevent overfitting the noise ) .
Meaningful interpretation can be given , for example , by mapping PQRST segments from ECG to specific neurons within TNN , giving some neurons specific meaning and thus interpretability . For more remarks and the definition of formal linear ordering etc. , see appendix A.4 . Ordered activation . We would like $x^{(1)}$ to activate all neurons , while $x^{(N)}$ activates only 1 neuron . In other words , ideally $\sigma(Wx^{(1)} + b) = [1 , 1 , \dots , 1 , 1]^T$ , followed by $\sigma(Wx^{(2)} + b) = [1 , 1 , \dots , 1 , 0]^T$ and so on until $\sigma(Wx^{(N)} + b) = [1 , 0 , \dots , 0]^T$ ; again , refer to fig . 1 ( A ) . With this concept , we seek to achieve interpretability by successive activations of neurons depending on the “ intensity ” of the input , with $x^{(1)}$ being the most intense . In general , the above can be written as $$\sigma^{(k)} \equiv \sigma(Wx^{(k)} + b) = [\underbrace{1 , \dots , 1}_{N-(k-1)} , \underbrace{0 , \dots , 0}_{k-1}]^T \quad (1)$$ which is approximately achieved for $k = 1 , \dots , N$ at large $a$ ( and exactly if $a \to \infty$ ) with $$(Wx^{(k)} + b)_j = \begin{cases} \le -a , & j \ge N - k + 2 \\ \ge +a , & j \le N - k + 1 \end{cases} \quad (2)$$ For more remarks , see appendix A.4 subsection ordered activation . TNN construction : computing weights $W , b , \alpha$ . How then do we compute $W , b$ to achieve the ordered activation ? Consider first $(Wx^{(2)} + b)_N = -a$ and $(Wx^{(1)} + b)_N = a$ and solve them . This yields $W_N = 2a/\Delta^{(1)}$ and $b_N = a - W_N x^{(1)}$ where $\Delta^{(1)} = x^{(1)} - x^{(2)}$ . Iterating through $k$ , i.e . solving $(Wx^{(k+1)} + b)_{N-k+1} = -a$ and $(Wx^{(k)} + b)_{N-k+1} = a$ , we obtain $W_{N-k+1} = 2a/\Delta^{(k)}$ and $b_{N-k+1} = a - W_{N-k+1} x^{(k)}$ where $\Delta^{(k)} = x^{(k)} - x^{(k+1)}$ . We can rewrite the indices so that $W_k = 2a/\Delta^{(N-k+1)}$ and $b_k = a - W_k x^{(N-k+1)}$ whenever convenient . For $\Delta^{(N)}$ , we need a dummy $x^{(N+1)}$ value or we can directly choose its value , e.g . $\frac{1}{N}\sum_{k=1}^{N-1} \Delta^{(k)}$ . The effect is illustrated by the value near $x = 0$ in fig .
2 ( A1-3 ) and should not pose any problem ; the chosen dummy value will only affect the shape at the left end of the graph . We compute $\alpha$ using the property of equation ( 1 ) . From fig . 1 ( A ) , this means ideally $y^{(1)} = \sum_{i=1}^{N} \alpha_i \sigma(Wx^{(1)} + b)_i$ for $a \to \infty$ , and similarly $y^{(2)} = \sum_{i=1}^{N-1} \alpha_i \sigma(Wx^{(2)} + b)_i$ and so on until $y^{(N)} = \alpha_1 \sigma(Wx^{(N)} + b)_1$ . Putting them together as $y = [y^{(1)} , \dots , y^{(N)}]^T$ , we get $y = A\alpha$ where $A$ is an upper-left triangular matrix and the inverse $A^{-1}$ exists . Thus $\alpha = A^{-1}y$ , where $A^{-1}$ is a matrix such that $A^{-1}_{ij} = 1$ along the diagonal , $A^{-1}_{i , i+1} = -1$ and zeroes otherwise , which facilitates a convenient computation . The triangular construction is complete : $$\mathrm{TNN}(x) = \alpha^T \sigma(Wx + b) \quad (3)$$ While Nielsen ( 2015 ) provides only a visual demonstration , the following result gives a rigorous proof of universal approximation at work ( Python code also available ) . Theorem 1 TNN achieves arbitrarily high accuracy on the training dataset . Proof : see appendix A.4.1 . Also see example results in fig . 2 . With the following proposition , a test dataset that resembles the training dataset will yield small error . Otherwise , there are out-of-distribution ( ood ) samples $A \subseteq D'$ , which can be incorporated into the training dataset . Catastrophic forgetting is not a problem during re-training because when ood samples from the test dataset are included as training points , each training point $x \in D$ is still identified with a neuron . By theorem 1 , $x$ is still approximated accurately . See appendix A.2.2 for remarks on the advantage over existing methods . Proposition 1 Errors on monotonous interval . Given finite training and test datasets $D , D'$ , there exists $A \subseteq D'$ such that , using the TNN constructed with $D \cup A$ , for all samples in the test dataset $(x' , y') \in D'$ , the sample-wise error $e = |y' - \mathrm{TNN}(x')|$ has an upper bound $\max(|y' - y^{(k+1)}| , |y' - y^{(k)}|)$ for some $k$ . Proof : see appendix A.4.2 .
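The closed-form recipe above can be implemented directly. The sketch below is our own illustrative code (the choice of reusing the last gap as the dummy Δ(N) is ours); for the dataset {(1, 1), (0.5, 2), (0, 3)} with a = 5 it recovers exactly the pencil-and-paper solution TNN(x) = 3σ(20x + 5) − σ(20x − 5) − σ(20x − 15) given later in the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def build_tnn(xs, ys, a=5.0):
    """Closed-form TNN construction for N >= 2 points.
    xs must be sorted descending: x(1) > x(2) > ... > x(N)."""
    n = len(xs)
    deltas = [xs[k] - xs[k + 1] for k in range(n - 1)]
    deltas.append(deltas[-1])  # dummy Delta(N); here we reuse the last gap
    W, b = [0.0] * n, [0.0] * n
    for k in range(1, n + 1):
        j = n - k                        # 0-based index of neuron N-k+1
        W[j] = 2 * a / deltas[k - 1]     # W_{N-k+1} = 2a / Delta(k)
        b[j] = a - W[j] * xs[k - 1]      # b_{N-k+1} = a - W_{N-k+1} x(k)
    # alpha = A^{-1} y: alpha_1 = y(N), alpha_i = y(N-i+1) - y(N-i+2)
    alpha = [ys[-1]] + [ys[n - i - 1] - ys[n - i] for i in range(1, n)]
    return W, b, alpha

def tnn(x, W, b, alpha):
    return sum(al * sigmoid(w * x + bi) for al, w, bi in zip(alpha, W, b))
```

Here `build_tnn([1.0, 0.5, 0.0], [1.0, 2.0, 3.0])` yields `W = [20, 20, 20]`, `b = [5, -5, -15]`, `alpha = [3, -1, -1]`, matching the pencil-and-paper example; with finite a the fit at the training points is only approximate and tightens as a grows.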
There is also a mid-point property that can be exploited for generalizability to arbitrarily high accuracy , where data must be sampled such that any instance $x_{test}$ either ( 1 ) lies inside the training dataset or ( 2 ) is equal to some mid-point of two neighbouring training samples ; see the proposition below . Fig . 1 ( B ) shows how the component of $x_{mid , k}$ at $j = N - k + 1$ is half-activated , i.e . the activation value is 0.5 . Admittedly , this is an ideal condition for accurate generalizability . Proposition 2 Mid-point property . The mid-point $x_{mid , k} = \frac{1}{2}(x^{(k)} + x^{(k+1)})$ takes the value $\alpha^T \sigma(W x_{mid , k} + b) = \frac{1}{2}(y^{(k)} + y^{(k+1)})$ . Proof : see appendix A.4.3 . TNN pencil-and-paper example . Use TNN to fit the dataset $(x , y) \in \{(1 , 1) , (0.5 , 2) , (0 , 3)\}$ . Then $f(x) \approx \mathrm{TNN}(x) = 3\sigma(20x + 5) - \sigma(20x - 5) - \sigma(20x - 15)$ . See appendix A.4.4 . Remarks on the smoothness property , special case , scalability/complexity and generalizability to higher dimensions can be found in appendix A.4.5 . We proceed with SQANN , a universal approximator inspired by TNN that allows multi-dimensional input and multi-layer stacking of neurons based on the relative strength of neuron activations . | This paper proposes two new neural network (NN) construction schemes that aim at better interpretability, in the sense that (1) the NN should always memorize the training data; (2) the NN can roughly tell if a new test data point has any similarity to any data in the training sample. The authors also provide approximation error bounds on both training and test data under certain assumptions. Some numerical examples are shown to support their theory. | SP:2e6e3e928f4a398cb89208e3af72d60022951b0f |
Two Instances of Interpretable Neural Network for Universal Approximations | 1 INTRODUCTION . Artificial neural networks ( NN ) have recently seen successful applications in many fields . Modern deep neural network ( DNN ) architecture , usually trained through the backpropagation mechanism , has been called a black-box because of its lack of interpretability . To tackle this issue , various studies have been performed to understand how a NN works ; see the following surveys Arrieta et al . ( 2020 ) ; Gilpin et al . ( 2018 ) ; Tjoa & Guan ( 2020 ) ; Wiegreffe & Marasović ( 2021 ) . This paper primarily proposes two interpretable models , namely triangularly-constructed NN ( TNN ) and Semi-Quantized Activation NN ( SQANN ) . Both possess the following three notable properties : ( 1 ) Resistance to catastrophic forgetting . ( 2 ) Mathematical proofs for arbitrarily high accuracy on training datasets ; experimentally demonstrable with python code and simple common datasets ( see supp . materials ) . ( 3 ) Detection of out-of-distribution samples through weak activations . Concept disambiguation . Several concepts have multiple possible definitions . We clarify the definitions used in this paper . 1 . Interpretability . We consider only fine-grained interpretation , i.e . we look at the meaning of each single neuron or its activation in our models . 2 . Universal approximation . Readers might be familiar with universal approximation of functions on certain conditions , e.g . compact sets . Our models can be more general , e.g . user can freely choose the interpolation function between 2 activations based on knowledge of the local manifold . The function can even be pathological . See appendix A.2.1 . 3 . Catastrophic forgetting : the tendency for knowledge of previously learned dataset to be abruptly lost as information relevant to a new dataset is incorporated . This definition is a slightly nuanced version of Kirkpatrick et al . ( 2017 ) . 
Hence , our models ’ resistance to catastrophic forgetting is the following . Given a training dataset D accurately modeled by an architecture M , a new dataset D′ ( especially new , out of distribution dataset ) can be incorporated into M without losing accuracy on D. See appendix A.2.2 ( analogy to biological system included ) . Related works and interpretability issues . Recent remarkable studies on universal approximators include the Deep Narrow Network by Kidger & Lyons ( 2020 ) , DeepONet universal approximation for operators by Lu et al . ( 2021 ) and the Broad Learning System by Chen et al . ( 2019 ) ; Hanin ( 2019 ) ; Park et al . ( 2021 ) ; Johnson ( 2019 ) . While insightful , they do not directly address the eXplainable Artificial Intelligence ( XAI ) issue , especially the blackbox property of the DNN . Similarly , a number of classical papers provide theoretical insights for NN as universal approximators , but interpretability , transparency and fairness issues are not their main focus . The universal approximation theorem by Cybenko ( 1989 ) asserts that a NN with a single hidden layer can approximate any function to arbitrarily small error under common conditions , proven by asserting the density of that set of NN in the function space using classic mathematical theorems . In particular , its theorem 1 uses an abstract proof by contradiction . From the proof , it is not easy to observe the internal mechanism of a NN in a straight-forward manner ; consequently modern works that depend on it ( e.g . Deep Narrow Network ) might inherit the blackbox property . Bottom-up constructions for function approximation using NN then emerged , though they also lack the interpretability ( see appendix A.3 for more related works ) . Also consider a demonstration in Nielsen ( 2015 ) that could help improve our understanding of universal approximation . Outline . This paper is arranged as the following . 
Section 2 shows the explicit TNN construction and related results , including a pencil-and-paper example for pedagogical purposes . Likewise , section 3 shows the SQANN construction , statements regarding SQANN , another pencil-and-paper example , then experimental results of its application , before we conclude the paper with limitations and future works . Python codes and clearer figures are fully available in supp . materials ( also see appendix ) . 2 TRIANGULARLY-CONSTRUCTED NN ( TNN ) . TNN is the prototype NN for our interpretable universal approximator . SQANN ( next section ) partially borrows the concept from TNN , which will be useful as an easy and manageable illustration to deliver the following ideas : ( 1 ) organized activations of neurons and ( 2 ) the retrieval of $\alpha$ values as the outputs . The model is $\mathrm{TNN}(x) = \alpha^T \sigma(Wx + b)$ where $x \in [0 , 1]$ , $\alpha , b \in \mathbb{R}^N$ and $W \in \mathbb{R}^N$ , and where we use the sigmoid function $\sigma$ for simplicity . We start with a simple scalar function $y = f(x) \in \mathbb{R}$ for $x \in [0 , 1]$ , thus TNN 's interpretability can be illustrated very clearly . Assumption : Linear Ordering . It is constructed on a linearly ordered dataset containing $N$ samples $\{(x^{(k)} , y^{(k)}) \in \mathbb{R}^n \times \mathbb{R}\}_{k=1}^{N}$ such that $x^{(N)} < x^{(N-1)} < \cdots < x^{(1)}$ and $y^{(k)} = f(x^{(k)})$ , with $f$ the true function that TNN will approximate . The interpretability comes from the linear ordering property where a higher value of $x$ ( $\approx 1$ ) will activate more neurons while lower values will activate fewer neurons , as shown in fig 1 ( A ) . Then the $\alpha$ values will be retrieved in a continuous way through a dot product , eventually used to compute the output for prediction . In time series , such as ECG ( Electrocardiogram ) , signals can be approximated point-wise ( although it is still preferable to have a noise model during preprocessing to prevent overfitting the noise ) .
Meaningful interpretation can be given , for example , by mapping PQRST segments from ECG to specific neurons within TNN , giving some neurons specific meaning and thus interpretability . For more remarks and the definition of formal linear ordering etc. , see appendix A.4 . Ordered activation . We would like $x^{(1)}$ to activate all neurons , while $x^{(N)}$ activates only 1 neuron . In other words , ideally $\sigma(Wx^{(1)} + b) = [1 , 1 , \dots , 1 , 1]^T$ , followed by $\sigma(Wx^{(2)} + b) = [1 , 1 , \dots , 1 , 0]^T$ and so on until $\sigma(Wx^{(N)} + b) = [1 , 0 , \dots , 0]^T$ ; again , refer to fig . 1 ( A ) . With this concept , we seek to achieve interpretability by successive activations of neurons depending on the “ intensity ” of the input , with $x^{(1)}$ being the most intense . In general , the above can be written as $$\sigma^{(k)} \equiv \sigma(Wx^{(k)} + b) = [\underbrace{1 , \dots , 1}_{N-(k-1)} , \underbrace{0 , \dots , 0}_{k-1}]^T \quad (1)$$ which is approximately achieved for $k = 1 , \dots , N$ at large $a$ ( and exactly if $a \to \infty$ ) with $$(Wx^{(k)} + b)_j = \begin{cases} \le -a , & j \ge N - k + 2 \\ \ge +a , & j \le N - k + 1 \end{cases} \quad (2)$$ For more remarks , see appendix A.4 subsection ordered activation . TNN construction : computing weights $W , b , \alpha$ . How then do we compute $W , b$ to achieve the ordered activation ? Consider first $(Wx^{(2)} + b)_N = -a$ and $(Wx^{(1)} + b)_N = a$ and solve them . This yields $W_N = 2a/\Delta^{(1)}$ and $b_N = a - W_N x^{(1)}$ where $\Delta^{(1)} = x^{(1)} - x^{(2)}$ . Iterating through $k$ , i.e . solving $(Wx^{(k+1)} + b)_{N-k+1} = -a$ and $(Wx^{(k)} + b)_{N-k+1} = a$ , we obtain $W_{N-k+1} = 2a/\Delta^{(k)}$ and $b_{N-k+1} = a - W_{N-k+1} x^{(k)}$ where $\Delta^{(k)} = x^{(k)} - x^{(k+1)}$ . We can rewrite the indices so that $W_k = 2a/\Delta^{(N-k+1)}$ and $b_k = a - W_k x^{(N-k+1)}$ whenever convenient . For $\Delta^{(N)}$ , we need a dummy $x^{(N+1)}$ value or we can directly choose its value , e.g . $\frac{1}{N}\sum_{k=1}^{N-1} \Delta^{(k)}$ . The effect is illustrated by the value near $x = 0$ in fig .
2 ( A1-3 ) and should not pose any problem ; the chosen dummy value will only affect the shape at the left end of the graph . We compute $\alpha$ using the property of equation ( 1 ) . From fig . 1 ( A ) , this means ideally $y^{(1)} = \sum_{i=1}^{N} \alpha_i \sigma(Wx^{(1)} + b)_i$ for $a \to \infty$ , and similarly $y^{(2)} = \sum_{i=1}^{N-1} \alpha_i \sigma(Wx^{(2)} + b)_i$ and so on until $y^{(N)} = \alpha_1 \sigma(Wx^{(N)} + b)_1$ . Putting them together as $y = [y^{(1)} , \dots , y^{(N)}]^T$ , we get $y = A\alpha$ where $A$ is an upper-left triangular matrix and the inverse $A^{-1}$ exists . Thus $\alpha = A^{-1}y$ , where $A^{-1}$ is a matrix such that $A^{-1}_{ij} = 1$ along the diagonal , $A^{-1}_{i , i+1} = -1$ and zeroes otherwise , which facilitates a convenient computation . The triangular construction is complete : $$\mathrm{TNN}(x) = \alpha^T \sigma(Wx + b) \quad (3)$$ While Nielsen ( 2015 ) provides only a visual demonstration , the following result gives a rigorous proof of universal approximation at work ( Python code also available ) . Theorem 1 TNN achieves arbitrarily high accuracy on the training dataset . Proof : see appendix A.4.1 . Also see example results in fig . 2 . With the following proposition , a test dataset that resembles the training dataset will yield small error . Otherwise , there are out-of-distribution ( ood ) samples $A \subseteq D'$ , which can be incorporated into the training dataset . Catastrophic forgetting is not a problem during re-training because when ood samples from the test dataset are included as training points , each training point $x \in D$ is still identified with a neuron . By theorem 1 , $x$ is still approximated accurately . See appendix A.2.2 for remarks on the advantage over existing methods . Proposition 1 Errors on monotonous interval . Given finite training and test datasets $D , D'$ , there exists $A \subseteq D'$ such that , using the TNN constructed with $D \cup A$ , for all samples in the test dataset $(x' , y') \in D'$ , the sample-wise error $e = |y' - \mathrm{TNN}(x')|$ has an upper bound $\max(|y' - y^{(k+1)}| , |y' - y^{(k)}|)$ for some $k$ . Proof : see appendix A.4.2 .
There is also a mid-point property that can be exploited for generalizability to arbitrarily high accuracy , where data must be sampled such that any instance $x_{test}$ either ( 1 ) lies inside the training dataset or ( 2 ) is equal to some mid-point of two neighbouring training samples ; see the proposition below . Fig . 1 ( B ) shows how the component of $x_{mid , k}$ at $j = N - k + 1$ is half-activated , i.e . the activation value is 0.5 . Admittedly , this is an ideal condition for accurate generalizability . Proposition 2 Mid-point property . The mid-point $x_{mid , k} = \frac{1}{2}(x^{(k)} + x^{(k+1)})$ takes the value $\alpha^T \sigma(W x_{mid , k} + b) = \frac{1}{2}(y^{(k)} + y^{(k+1)})$ . Proof : see appendix A.4.3 . TNN pencil-and-paper example . Use TNN to fit the dataset $(x , y) \in \{(1 , 1) , (0.5 , 2) , (0 , 3)\}$ . Then $f(x) \approx \mathrm{TNN}(x) = 3\sigma(20x + 5) - \sigma(20x - 5) - \sigma(20x - 15)$ . See appendix A.4.4 . Remarks on the smoothness property , special case , scalability/complexity and generalizability to higher dimensions can be found in appendix A.4.5 . We proceed with SQANN , a universal approximator inspired by TNN that allows multi-dimensional input and multi-layer stacking of neurons based on the relative strength of neuron activations . | Summary: 1. This paper proposes two neural networks by construction, i.e., Triangularly-constructed Neural Network (TNN) and Semi-Quantized Activation Neural Network (SQANN). 2. These two neural networks are universal approximators, which is proven by construction. 3. These two neural networks are resistant to catastrophic forgetting. 4. For TNN, strongly activated neurons and half-activated neurons can be identified. 5. For SQANN, users can identify the samples that are likely out of distribution. Contributions claimed by authors: 1. TNN and SQANN are proposed. 2. The universal approximation is proven, using the construction of TNN and SQANN. 3. Resistance to catastrophic forgetting is proven. 4. SQANN can identify out-of-distribution samples.
| SP:2e6e3e928f4a398cb89208e3af72d60022951b0f |
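Proposition 2 above can be checked numerically on the pencil-and-paper TNN (our own toy verification; with the finite a = 5 used in the example, the mid-point identity holds only approximately):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# TNN from the pencil-and-paper example, fitting {(1, 1), (0.5, 2), (0, 3)}
def tnn(x):
    return 3 * sigmoid(20 * x + 5) - sigmoid(20 * x - 5) - sigmoid(20 * x - 15)

# Mid-point property: TNN at the mid-point of two neighbouring training
# inputs approximately returns the mid-point of their labels.
mid_a = tnn(0.75)  # midpoint of x=1 and x=0.5 -> (1 + 2) / 2 = 1.5
mid_b = tnn(0.25)  # midpoint of x=0.5 and x=0 -> (2 + 3) / 2 = 2.5
```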
ZenDet: Revisiting Efficient Object Detection Backbones from Zero-Shot Neural Architecture Search | In object detection models , the detection backbone consumes more than half of the overall inference cost . Recent research attempts to reduce this cost by optimizing the backbone architecture with the help of Neural Architecture Search ( NAS ) . However , existing NAS methods for object detection require hundreds to thousands of GPU hours of searching , making them impractical in fast-paced research and development . In this work , we propose a novel zero-shot NAS method to address this issue . The proposed method , named ZenDet , automatically designs efficient detection backbones without training network parameters , reducing the architecture design cost to nearly zero yet delivering state-of-the-art ( SOTA ) performance . Under the hood , ZenDet maximizes the differential entropy of detection backbones , leading to a better feature extractor for object detection under the same computational budgets . After merely one GPU day of fully automatic design , ZenDet delivers novel SOTA detection backbones on multiple detection benchmark datasets with little human intervention . Compared to the ResNet50 backbone , ZenDet is +2.0 % better in mAP when using the same amount of FLOPs/parameters and is 1.54 times faster on NVIDIA V100 at the same mAP . Code and pre-trained models will be released after publication . 1 INTRODUCTION . Seeking better and faster deep models for object detection is a never-ending task in computer vision . The performance of a deep object detection network heavily depends on the feature extraction backbone ( Li et al. , 2018 ; Chen et al. , 2019b ) . Currently , most state-of-the-art ( SOTA ) detection backbones ( He et al. , 2016 ; Xie et al. , 2017 ; Zhu et al. , 2019 ) are designed manually by human experts , which can take years to develop .
Since the detection backbone consumes more than half of the total inference cost in many detection frameworks , it is critical to optimize the backbone architecture for a better speed-accuracy trade-off on different hardware platforms , varying from server-side GPUs to mobile-side chipsets . To reduce manual design effort , Neural Architecture Search ( NAS ) has emerged to facilitate architecture design . Various NAS methods have demonstrated their efficacy in designing SOTA image classification models ( Zoph et al. , 2018 ; Liu et al. , 2018 ; Cai et al. , 2019 ; Tan & Le , 2019 ) . These success stories inspire recent researchers to use NAS to design detection backbones ( Chen et al. , 2019b ; Du et al. , 2020 ; Jiang et al. , 2020 ) in an end-to-end way . The existing NAS methods for detection backbone design are all training-based , meaning that they need to train network parameters to evaluate the performance of candidates on the target dataset , taking a long time and consuming huge hardware resources . Hardware consumption and long search times make these training-based methods inefficient in modern fast-paced research and development . To reduce the training cost , training-free methods have been proposed recently , also known as zero-shot NAS in the previous literature ( Tanaka et al. , 2020 ; Mellor et al. , 2021 ; Chen et al. , 2021b ; Lin et al. , 2021 ) . Zero-shot NAS predicts network performance without training network parameters and therefore is much faster than training-based NAS . As a relatively new technique , existing zero-shot NAS methods are mostly validated on image classification datasets . Applying zero-shot NAS to object detection backbone design is still an open challenge . In this work , we present the first effort of introducing the zero-shot NAS technique to design efficient object detection backbones .
We show that directly transferring existing zero-shot NAS methods from image classification to detection backbone design will encounter fundamental difficulties . While image classification network only needs to predict the class probability , object detection network needs to additionally predict the bounding boxes of multiple objects , making the direct architecture transfer sub-optimal . To this end , a novel zero-shot NAS method , termed ZenDet , is proposed for searching object detection backbones . The key idea behind ZenDet is inspired by the Principle of Maximum Entropy ( PME ) ( Reza , 1994 ; Kullback , 1997 ; Brillouin , 2013 ) . Informally speaking , when a network is formulated as an information processing system , its capacity is maximized when its differential entropy ( Shannon , 1948 ) achieves maximum under budget constraints , leading to a better feature extractor for object detection . Based on this observation , ZenDet maximizes the differential entropy of detection backbones by searching for the optimal configuration of network depth and width without training network parameters . The above strategy raises two technical challenges . The first challenge is how to estimate the differential entropy of a deep network . The exact computation of differential entropy requires knowing the precise probability distribution of deep features in high dimensional space which is difficult to estimate in practice . To address this issue , ZenDet estimates the Gaussian upper bound of the differential entropy which only requires computing the variance of the feature maps . The second challenge is how to efficiently capture objects of different sizes . In object detection benchmark datasets such as MS COCO Lin et al . ( 2014 ) , the distribution of object size is data-dependent and non-uniform . 
To bring this prior knowledge into backbone design , we introduce the Multi-Scale Entropy Prior ( MSEP ) in backbone entropy estimation to capture objects of different scales . We find that the MSEP improves the detection performance significantly . The overall computation of ZenDet only requires one forward inference of the detection backbone at initialization and therefore is nearly zero-cost compared to previous backbone NAS methods . The main contributions of this work are summarized as follows : • Based on entropy theory , the multi-scale entropy prior is presented to rank the expressivity of backbones without training on the target datasets , speeding up the search . • While using less than one GPU day and 2GB of GPU memory , ZenDet achieves competitive performance over other NAS methods on COCO while being at least 50x faster . • ZenDet is the first zero-shot NAS method designed for object detection , with SOTA performance on multiple benchmark datasets under multiple popular detection frameworks . 2 RELATED WORK . Backbone for Object Detection Recently , object detectors composed of a backbone , neck and head have become increasingly popular due to their effectiveness and high performance ( Lin et al. , 2017a ; b ; Tian et al. , 2019 ; Li et al. , 2020 ; 2021 ) . Prevailing detectors directly use a backbone designed for image classification to extract multi-scale features from an image , such as ResNet ( He et al. , 2016 ) , ResNeXt ( Xie et al. , 2017 ) and Deformable Convolutional Network ( DCN ) ( Zhu et al. , 2019 ) . Nevertheless , a backbone migrated from image classification may be suboptimal for object detection ( Ghiasi et al. , 2019 ) . To tackle this gap , many architectures are designed end-to-end for object detection , including Stacked Hourglass Newell et al . ( 2016 ) , FishNet Sun et al . ( 2018 ) , DetNet Li et al . ( 2018 ) , HRNet Wang et al . ( 2020a ) and so on .
Albeit with improved performance , these hand-crafted detection architectures heavily rely on human labor and tedious trial-and-error processes . Neural Architecture Search Neural Architecture Search ( NAS ) was initially developed to automatically design network architectures for image classification models ( Zoph et al. , 2018 ; Liu et al. , 2018 ; Real et al. , 2019 ; Cai et al. , 2019 ; Lin et al. , 2020 ; Tan & Le , 2019 ; Lin et al. , 2021 ) . Using NAS to design object detection models has not been well studied . Currently , existing detection NAS methods are all training-based methods . Some methods focus on searching detection backbones , such as DetNAS ( Chen et al. , 2019b ) , SpineNet ( Du et al. , 2020 ) and SP-NAS ( Jiang et al. , 2020 ) , while the others focus on searching the FPN neck , such as NAS-FPN ( Ghiasi et al. , 2019 ) , NASFCOS ( Wang et al. , 2020b ) and OPANet ( Liang et al. , 2021 ) . These methods require training and evaluation on the target datasets , which is computationally intensive . ZenDet distinguishes itself as the first zero-shot NAS method for the backbone design of object detection . 3 PRELIMINARY . In this section , we first formulate a deep network as a system with a continuous state space . Then we define the differential entropy of this system and show how to estimate this entropy via its Gaussian upper bound . Finally , we introduce the basic concept of the vanilla network search space for designing our detection backbones . Continuous State Space of Deep Networks A deep network $F(\cdot) : \mathbb{R}^d \to \mathbb{R}$ maps an input image $x \in \mathbb{R}^d$ to its label $y \in \mathbb{R}$ . The topology of a network can be abstracted as a graph $G = (V , E)$ where the vertex set $V$ consists of neurons and the edge set $E$ consists of spikes between neurons . For any $v \in V$ and $e \in E$ , $h(v) \in \mathbb{R}$ and $h(e) \in \mathbb{R}$ denote the values endowed with each vertex $v$ and each edge $e$ respectively .
The set S = {h(v), h(e) : ∀v ∈ V, e ∈ E} defines the continuous state space of the network F. According to the Principle of Maximum Entropy, we want to maximize the differential entropy of network F under some given computational budget. The entropy H(S) of the set S measures the total information contained in the system (network) F, including the information contained in the latent features S_v = {h(v) : v ∈ V} and in the network parameters S_e = {h(e) : e ∈ E}. For object detection backbone design, we only care about the entropy of the latent features, H(S_v), rather than the entropy of the network parameters, H(S_e). Informally speaking, H(S_v) measures the feature representation power of F while H(S_e) measures the model complexity of F. Therefore, in the remainder of this work, the differential entropy of F refers to the entropy H(S_v) by default. Entropy of Gaussian Distribution The differential entropy of the Gaussian distribution can be found in many textbooks such as (Norwich, 1993). Suppose x is sampled from the Gaussian distribution N(µ, σ²). Then the differential entropy of x is given by H*(x) = (1/2) log(2π) + 1/2 + H(x), with H(x) := log(σ). (1) From Eq. 1, the entropy of a Gaussian distribution depends only on its variance. In the following, we will use H(x) instead of H*(x), as constants do not matter in our discussion. Gaussian Entropy Upper Bound Since the probability distribution P(S_v) is a high-dimensional function, it is difficult to compute the precise value of its entropy directly. Instead, we propose to estimate the upper bound of the entropy, given by the following well-known theorem (Cover & Thomas, 2012): Theorem 1. For any continuous distribution P(x) with mean µ and variance σ², its differential entropy is maximized when P(x) is the Gaussian distribution N(µ, σ²).
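Equation 1 and Theorem 1 together suggest a very cheap entropy estimator: compute the standard deviation of a feature map and take its logarithm. The following is a minimal sketch (not the authors' code); as the text allows, the constant terms of Eq. 1 are dropped:

```python
import numpy as np

def gaussian_entropy_upper_bound(feature_map):
    """Gaussian upper bound of the differential entropy of a feature map,
    up to additive constants: H(x) = log(sigma), from Eq. 1 and Theorem 1."""
    sigma = np.std(feature_map)
    return np.log(sigma)

# A wider distribution has a higher bound; by Theorem 1, any distribution
# with the same variance has differential entropy at most this value.
rng = np.random.default_rng(0)
narrow = rng.normal(0.0, 1.0, size=10_000)
wide = rng.normal(0.0, 3.0, size=10_000)
assert gaussian_entropy_upper_bound(wide) > gaussian_entropy_upper_bound(narrow)
```

The appeal of this bound is that it needs nothing beyond a variance computation, which is why it can be evaluated in a single forward pass of an untrained network.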
Theorem 1 says that the differential entropy of a distribution is upper bounded by that of a Gaussian distribution with the same mean and variance. Combining this with Eq. (1), we can easily estimate the network entropy H(S_v) by simply computing the feature map variance and then using Eq. (1) to get the Gaussian entropy upper bound for the network. Vanilla Network Search Space Following previous works, we design our backbones in the vanilla convolutional network space (Li et al., 2018; Chen et al., 2019b; Du et al., 2020; Lin et al., 2021), because this space is widely adopted in detection backbones and is used as a prototype in the theoretical literature (Poole et al., 2016; Serra et al., 2018; Hanin & Rolnick, 2019). A vanilla network is a stack of multiple convolutional layers followed by ReLU activations. Auxiliary components such as residual links (He et al., 2016), Batch Normalization (BN) (Ioffe & Szegedy, 2015) and Squeeze-and-Excitation (SE) (Hu et al., 2018) are all removed during the search, and only during the search (more details in Appendix E). These removed auxiliary components are plugged back in after the search. Therefore, the final architecture for training still has these components. Consider a vanilla convolutional network with D layers of weights W_1, ..., W_D whose output feature maps are x_1, ..., x_D. The input image is x_0. Let φ(·) denote the ReLU activation function. Then the forward inference is given by x_l = φ(h_l), h_l = W_l ∗ x_{l−1}, for l = 1, ..., D. (2) For the sake of simplicity, we set the bias of each convolutional layer to zero. | In this paper, the authors propose a zero-shot NAS method to search backbones for the detection task. Specifically, this method uses the differential entropy of output features as a metric to measure the performance of an architecture on the detection task. With the differential entropy, this method does not need to train network parameters, which reduces the search cost dramatically.
Also, the backbones found by this method achieve state-of-the-art performance on detection tasks. | SP:e2de45a266957eb22a06444dba4e4d58be9292be |
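The scoring pipeline described above (one forward pass of a randomly initialized vanilla network as in Eq. 2, then the Gaussian entropy bound of the final features) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: dense layers stand in for convolutions, and He initialization is an assumption.

```python
import numpy as np

def zero_shot_entropy_score(widths, rng, n_samples=64):
    """Score a vanilla network (Eq. 2) by the Gaussian entropy upper bound
    of its final feature map at random initialization. `widths` lists the
    layer widths; phi is ReLU; no parameter is ever trained."""
    x = rng.normal(size=(n_samples, widths[0]))  # random input batch
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        W = rng.normal(0.0, np.sqrt(2.0 / d_in), size=(d_in, d_out))  # He init (assumed)
        x = np.maximum(0.0, x @ W)  # x_l = phi(W_l x_{l-1}), Eq. 2
    return np.log(x.std() + 1e-12)  # H(x) = log(sigma), Eq. 1

rng = np.random.default_rng(0)
score = zero_shot_entropy_score([32, 64, 64, 128], rng)
```

Because the whole score is one forward pass at initialization, evaluating a candidate costs a fraction of a second, which is the source of the "nearly zero-cost" claim in the text.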
ZenDet: Revisiting Efficient Object Detection Backbones from Zero-Shot Neural Architecture Search | In object detection models, the detection backbone consumes more than half of the overall inference cost. Recent research attempts to reduce this cost by optimizing the backbone architecture with the help of Neural Architecture Search (NAS). However, existing NAS methods for object detection require hundreds to thousands of GPU hours of searching, making them impractical in fast-paced research and development. In this work, we propose a novel zero-shot NAS method to address this issue. The proposed method, named ZenDet, automatically designs efficient detection backbones without training network parameters, reducing the architecture design cost to nearly zero while delivering state-of-the-art (SOTA) performance. Under the hood, ZenDet maximizes the differential entropy of detection backbones, leading to a better feature extractor for object detection under the same computational budget. After merely one GPU day of fully automatic design, ZenDet delivers SOTA detection backbones on multiple detection benchmark datasets with little human intervention. Compared to the ResNet-50 backbone, ZenDet is +2.0% better in mAP when using the same amount of FLOPs/parameters, and is 1.54 times faster on an NVIDIA V100 at the same mAP. Code and pre-trained models will be released after publication. 1 INTRODUCTION. Seeking better and faster deep models for object detection is a perennial task in computer vision. The performance of a deep object detection network heavily depends on the feature extraction backbone (Li et al., 2018; Chen et al., 2019b). Currently, most state-of-the-art (SOTA) detection backbones (He et al., 2016; Xie et al., 2017; Zhu et al., 2019) are designed manually by human experts, which can take years.
Since the detection backbone consumes more than half of the total inference cost in many detection frameworks, it is critical to optimize the backbone architecture for a better speed-accuracy trade-off on hardware platforms ranging from server-side GPUs to mobile-side chipsets. To reduce manual effort, Neural Architecture Search (NAS) has emerged to facilitate architecture design. Various NAS methods have demonstrated their efficacy in designing SOTA image classification models (Zoph et al., 2018; Liu et al., 2018; Cai et al., 2019; Tan & Le, 2019). These success stories have inspired researchers to use NAS to design detection backbones (Chen et al., 2019b; Du et al., 2020; Jiang et al., 2020) in an end-to-end way. Existing NAS methods for detection backbone design are all training-based, meaning that they need to train network parameters to evaluate the performance of candidates on the target dataset, which takes a long time and consumes substantial hardware resources. Hardware consumption and long search times make these training-based methods inefficient in modern fast-paced research and development. To reduce the training cost, training-free methods have been proposed recently, also known as zero-shot NAS in the literature (Tanaka et al., 2020; Mellor et al., 2021; Chen et al., 2021b; Lin et al., 2021). Zero-shot NAS predicts network performance without training network parameters and is therefore much faster than training-based NAS. As a relatively new technique, existing zero-shot NAS methods have mostly been validated on image classification datasets. Applying zero-shot NAS to object detection backbone design remains an open challenge. In this work, we present the first effort to introduce zero-shot NAS techniques to the design of efficient object detection backbones.
We show that directly transferring existing zero-shot NAS methods from image classification to detection backbone design encounters fundamental difficulties. While an image classification network only needs to predict class probabilities, an object detection network additionally needs to predict the bounding boxes of multiple objects, making the direct architecture transfer sub-optimal. To this end, a novel zero-shot NAS method, termed ZenDet, is proposed for searching object detection backbones. The key idea behind ZenDet is inspired by the Principle of Maximum Entropy (PME) (Reza, 1994; Kullback, 1997; Brillouin, 2013). Informally speaking, when a network is formulated as an information processing system, its capacity is maximized when its differential entropy (Shannon, 1948) reaches its maximum under budget constraints, leading to a better feature extractor for object detection. Based on this observation, ZenDet maximizes the differential entropy of detection backbones by searching for the optimal configuration of network depth and width without training network parameters. The above strategy raises two technical challenges. The first challenge is how to estimate the differential entropy of a deep network. The exact computation of the differential entropy requires knowing the precise probability distribution of deep features in a high-dimensional space, which is difficult to estimate in practice. To address this issue, ZenDet estimates the Gaussian upper bound of the differential entropy, which only requires computing the variance of the feature maps. The second challenge is how to efficiently capture objects of different sizes. In object detection benchmark datasets such as MS COCO (Lin et al., 2014), the distribution of object sizes is data-dependent and non-uniform.
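The excerpt does not give the exact MSEP formula, but its description (aggregating entropy estimates so that different object scales are covered) admits a simple sketch. The per-stage weighting and the summation below are assumptions for illustration only:

```python
import numpy as np

def multi_scale_entropy(stage_feature_maps, weights=None):
    """Hypothetical MSEP-style score: aggregate the Gaussian entropy
    upper bounds (log sigma, Eq. 1) of the feature maps at several
    backbone stages. The weights encode a prior over object scales;
    both the weighting and the plain sum are illustrative assumptions,
    since the excerpt does not state the exact MSEP formula."""
    entropies = [np.log(fm.std() + 1e-12) for fm in stage_feature_maps]
    if weights is None:
        weights = [1.0] * len(entropies)
    return sum(w * h for w, h in zip(weights, entropies))
```

The point of such a multi-scale score is that a backbone cannot inflate its rank by being expressive at a single resolution only; every stage that feeds the detection neck contributes.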
To bring this prior knowledge into backbone design, we introduce the Multi-Scale Entropy Prior (MSEP) into backbone entropy estimation to capture objects of different scales. We find that the MSEP improves detection performance significantly. The overall computation of ZenDet requires only one forward inference of the detection backbone at initialization, and is therefore nearly zero-cost compared to previous backbone NAS methods. The main contributions of this work are summarized as follows: • Based on entropy theory, the multi-scale entropy prior is presented to rank the expressivity of backbones without training on the target datasets, speeding up the search. • While using less than one GPU day and 2 GB of GPU memory, ZenDet achieves competitive performance against other NAS methods on COCO while being at least 50x faster. • ZenDet is the first zero-shot NAS method designed for object detection, with SOTA performance on multiple benchmark datasets under multiple popular detection frameworks. 2 RELATED WORK. Backbone for Object Detection Recently, object detectors composed of a backbone, neck and head have become increasingly popular due to their effectiveness and high performance (Lin et al., 2017a;b; Tian et al., 2019; Li et al., 2020; 2021). Prevailing detectors directly use backbones designed for image classification to extract multi-scale features from an image, such as ResNet (He et al., 2016), ResNeXt (Xie et al., 2017) and Deformable Convolutional Network (DCN) (Zhu et al., 2019). Nevertheless, a backbone migrated from image classification may be suboptimal for object detection (Ghiasi et al., 2019). To close this gap, many architectures have been designed end-to-end for object detection, including Stacked Hourglass (Newell et al., 2016), FishNet (Sun et al., 2018), DetNet (Li et al., 2018), HRNet (Wang et al., 2020a) and so on.
Albeit with improved performance, these hand-crafted detection architectures heavily rely on human labor and tedious trial-and-error processes. Neural Architecture Search Neural Architecture Search (NAS) was initially developed to automatically design network architectures for image classification models (Zoph et al., 2018; Liu et al., 2018; Real et al., 2019; Cai et al., 2019; Lin et al., 2020; Tan & Le, 2019; Lin et al., 2021). Using NAS to design object detection models has not been well studied. Currently, existing detection NAS methods are all training-based. Some methods focus on searching detection backbones, such as DetNAS (Chen et al., 2019b), SpineNet (Du et al., 2020) and SP-NAS (Jiang et al., 2020), while others focus on searching the FPN neck, such as NAS-FPN (Ghiasi et al., 2019), NAS-FCOS (Wang et al., 2020b) and OPANet (Liang et al., 2021). These methods require training and evaluation on the target datasets, which is computationally intensive. ZenDet distinguishes itself as the first zero-shot NAS method for the backbone design of object detection. 3 PRELIMINARY. In this section, we first formulate a deep network as a system with a continuous state space. Then we define the differential entropy of this system and show how to estimate this entropy via its Gaussian upper bound. Finally, we introduce the basic concept of the vanilla network search space used for designing our detection backbones. Continuous State Space of Deep Networks A deep network F(·): R^d → R maps an input image x ∈ R^d to its label y ∈ R. The topology of a network can be abstracted as a graph G = (V, E), where the vertex set V consists of neurons and the edge set E consists of the connections between neurons. For any v ∈ V and e ∈ E, h(v) ∈ R and h(e) ∈ R denote the values endowed with vertex v and edge e, respectively.
The set S = {h(v), h(e) : ∀v ∈ V, e ∈ E} defines the continuous state space of the network F. According to the Principle of Maximum Entropy, we want to maximize the differential entropy of network F under some given computational budget. The entropy H(S) of the set S measures the total information contained in the system (network) F, including the information contained in the latent features S_v = {h(v) : v ∈ V} and in the network parameters S_e = {h(e) : e ∈ E}. For object detection backbone design, we only care about the entropy of the latent features, H(S_v), rather than the entropy of the network parameters, H(S_e). Informally speaking, H(S_v) measures the feature representation power of F while H(S_e) measures the model complexity of F. Therefore, in the remainder of this work, the differential entropy of F refers to the entropy H(S_v) by default. Entropy of Gaussian Distribution The differential entropy of the Gaussian distribution can be found in many textbooks such as (Norwich, 1993). Suppose x is sampled from the Gaussian distribution N(µ, σ²). Then the differential entropy of x is given by H*(x) = (1/2) log(2π) + 1/2 + H(x), with H(x) := log(σ). (1) From Eq. 1, the entropy of a Gaussian distribution depends only on its variance. In the following, we will use H(x) instead of H*(x), as constants do not matter in our discussion. Gaussian Entropy Upper Bound Since the probability distribution P(S_v) is a high-dimensional function, it is difficult to compute the precise value of its entropy directly. Instead, we propose to estimate the upper bound of the entropy, given by the following well-known theorem (Cover & Thomas, 2012): Theorem 1. For any continuous distribution P(x) with mean µ and variance σ², its differential entropy is maximized when P(x) is the Gaussian distribution N(µ, σ²).
Theorem 1 says that the differential entropy of a distribution is upper bounded by that of a Gaussian distribution with the same mean and variance. Combining this with Eq. (1), we can easily estimate the network entropy H(S_v) by simply computing the feature map variance and then using Eq. (1) to get the Gaussian entropy upper bound for the network. Vanilla Network Search Space Following previous works, we design our backbones in the vanilla convolutional network space (Li et al., 2018; Chen et al., 2019b; Du et al., 2020; Lin et al., 2021), because this space is widely adopted in detection backbones and is used as a prototype in the theoretical literature (Poole et al., 2016; Serra et al., 2018; Hanin & Rolnick, 2019). A vanilla network is a stack of multiple convolutional layers followed by ReLU activations. Auxiliary components such as residual links (He et al., 2016), Batch Normalization (BN) (Ioffe & Szegedy, 2015) and Squeeze-and-Excitation (SE) (Hu et al., 2018) are all removed during the search, and only during the search (more details in Appendix E). These removed auxiliary components are plugged back in after the search. Therefore, the final architecture for training still has these components. Consider a vanilla convolutional network with D layers of weights W_1, ..., W_D whose output feature maps are x_1, ..., x_D. The input image is x_0. Let φ(·) denote the ReLU activation function. Then the forward inference is given by x_l = φ(h_l), h_l = W_l ∗ x_{l−1}, for l = 1, ..., D. (2) For the sake of simplicity, we set the bias of each convolutional layer to zero. | This paper proposes a zero-shot neural architecture search approach for backbone design in object detection. The idea is to compute an entropy-based multi-scale Zen-score and use the score as an objective for evolutionary architecture search.
The paper achieves better results compared with previous zero-shot NAS approaches on object detection, and greatly reduces the search time of conventional NAS approaches while maintaining similar accuracy. | SP:e2de45a266957eb22a06444dba4e4d58be9292be |
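The evolutionary search mentioned in the summary can be sketched as a toy loop that mutates depth and width and keeps the candidate with the highest zero-shot score under a budget. The mutation operators, the budget measure, and the population of one are all illustrative assumptions; the excerpt does not specify ZenDet's actual search algorithm.

```python
import random

def mutate(arch):
    """Randomly perturb the depth or one layer width of a [w1, ..., wD] spec."""
    arch = list(arch)
    if random.random() < 0.5 and len(arch) > 2:
        arch.pop(random.randrange(len(arch)))          # remove a layer
    elif random.random() < 0.5:
        arch.insert(random.randrange(len(arch)), random.choice([32, 64, 128]))
    else:
        i = random.randrange(len(arch))
        arch[i] = max(16, arch[i] + random.choice([-16, 16]))  # widen/narrow
    return arch

def evolutionary_search(score_fn, budget_fn, max_budget, steps=200):
    """Toy zero-shot search loop: keep the highest-scoring architecture
    that fits the budget. score_fn would be the (multi-scale) entropy
    proxy; no candidate is ever trained."""
    best = [64, 64, 64]
    best_score = score_fn(best)
    for _ in range(steps):
        cand = mutate(best)
        if budget_fn(cand) <= max_budget and score_fn(cand) > best_score:
            best, best_score = cand, score_fn(cand)
    return best
```

Since the score is training-free, the loop's cost is dominated by forward passes, which is how the search fits in under one GPU day.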
The Importance of the Current Input in Sequence Modeling | 1 INTRODUCTION. Deep learning models constitute the current state of the art in most artificial intelligence applications, from computer vision to robotics or medicine. When dealing with sequential data, Recurrent Neural Networks (RNNs), especially architectures with gating mechanisms such as the LSTM (Hochreiter & Schmidhuber, 1997), the GRU (Cho et al., 2014) and other variants, are usually the default choice. One of the most interesting applications of RNNs is in the field of Natural Language Processing, where most tasks, such as machine translation, document summarization or language modeling, involve the manipulation of sequences of textual data. Of these, language modeling has been extensively used to test innovations in recurrent architectures, mainly due to the ease of obtaining very large datasets that can be used to train neural networks with millions of parameters. Sequence modeling consists of predicting the next element in a sequence given its past history. In language modeling, the sequence is a text, and hence the task is to predict the next word or the next character. In this context, some of the best performing architectures include the Mogrifier LSTM (Melis et al., 2020) and different variations of the AWD-LSTM (Merity et al., 2018), usually combined with dynamic evaluation and a mixture of softmaxes (MoS) (Wang et al., 2019; Gong et al., 2018). These models obtain state-of-the-art performance on moderate-size datasets, such as the Penn Treebank (Mikolov et al., 2010) or the WikiText-2 (Merity et al., 2017) corpora, when no additional data are used during training. When larger datasets are considered, or when external data are used to pre-train the networks, attention-based architectures usually outperform other models (Radford et al., 2019; Brown et al., 2020).
In this work we use moderate-scale language modeling datasets to explore the effect of a mechanism recently proposed by Oliva & Lago-Fernández ( 2021 ) , when combined with different LSTM-based models in the language modeling context . The idea consists of modifying a recurrent architecture by introducing a direct connection between the input and the output of the recurrent module . This has been shown to improve both the model ’ s generalization results and its readability in simple tasks related to the recognition of regular languages . In a standard RNN , the output depends only on the network ’ s hidden state , ht , which in turn depends on both the input , xt , and the recent past , ht−1 . But there is no explicit dependence of the network ’ s output on its input . In some cases this could be a shortcoming , since the transformation of xt needed to compute the network ’ s internal state is not necessarily the most appropriate to compute the output . However , an explicit dependence of the output on xt can be forced by adding a dual connection that skips the recurrent layers . We claim that this strategy may be of general application in RNN models . To test our hypothesis we perform a thorough comparison of several state-of-the-art RNN architectures , with and without the dual connection , on the Penn Treebank and the Wikitext-2 datasets . Our results show that , under all experimental conditions , the dual architectures outperform their nondual counterparts . In addition , the Mogrifier-LSTM enhanced with a dual connection establishes a new state-of-the-art word-level perplexity for the Penn Treebank dataset when no additional data are used to train the models . The remainder of the article is organized as follows . First , in section 2 , we present the different models we have used and the two possible architectures , the standard recurrent architecture and the dual architecture . In section 3 , we describe the datasets and the experimental setup . 
In section 4, we present our results. Finally, in section 5, we draw some conclusions and discuss further lines of research. 2 MODELS. We start by presenting the standard recurrent architecture, which is common to all the models. In the absence of a dual connection, the basic architecture involves an embedding layer, a recurrent layer and a fully-connected layer with softmax activation: e_t = W^{ex} x_t (1), h_t = REC(e_t, S_{t−1}) (2), y_t = softmax(W^{yh} h_t + b^y), (3) where the W^{∗∗} and b^∗ are weight matrices and biases, respectively, and x_t is the input vector at time t. The REC module represents an arbitrary recurrent layer, with S_{t−1} being a set of vectors describing its internal state at the previous time step. In the most general case, this module will simply be an LSTM cell, but we consider other possibilities as well, as described below. The dual architecture introduces an additional layer, with ReLU activation, which is fed with both the output of the embedding layer and the output of the recurrent module: e_t = W^{ex} x_t (4), h_t = REC(e_t, S_{t−1}) (5), d_t = ReLU(W^{de} e_t + W^{dh} h_t + b^d) (6), y_t = softmax(W^{yd} d_t + b^y). (7) This way the network's input can reach the softmax layer following two different paths, through the recurrent layer and through the dual connection. In the following, we consider different forms for the recurrent module in equations 2 and 5. 2.1 THE LSTM MODULE. In the simplest approach, the recurrent module consists of an LSTM cell, where the internal state includes both the output and the memory, S_t = {h_t; c_t}, which are computed as follows: f_t = σ(W^{fe} e_t + W^{fh} h_{t−1} + b^f) (8), i_t = σ(W^{ie} e_t + W^{ih} h_{t−1} + b^i) (9), o_t = σ(W^{oe} e_t + W^{oh} h_{t−1} + b^o) (10), z_t = tanh(W^{ze} e_t + W^{zh} h_{t−1} + b^z) (11), c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t (12), h_t = o_t ⊙ tanh(c_t), (13) where, as before, the W^{∗∗} are weight matrices and the b^∗ are bias vectors.
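The dual architecture of equations 4-7, with the LSTM cell of equations 8-13 as the REC module, can be sketched in NumPy as follows. This is an illustrative reconstruction, not the authors' code: parameter names mirror the superscripts in the text (e.g. W^{de} becomes `p["Wde"]`), and the dictionary layout and shapes are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(e, h_prev, c_prev, p):
    """One LSTM step, Eqs. 8-13; p holds the W and b parameters."""
    f = sigmoid(p["Wfe"] @ e + p["Wfh"] @ h_prev + p["bf"])   # forget gate
    i = sigmoid(p["Wie"] @ e + p["Wih"] @ h_prev + p["bi"])   # input gate
    o = sigmoid(p["Woe"] @ e + p["Woh"] @ h_prev + p["bo"])   # output gate
    z = np.tanh(p["Wze"] @ e + p["Wzh"] @ h_prev + p["bz"])   # candidate
    c = f * c_prev + i * z            # Eq. 12 (element-wise products)
    h = o * np.tanh(c)                # Eq. 13
    return h, c

def dual_step(x, h_prev, c_prev, p):
    """One step of the dual architecture, Eqs. 4-7: the ReLU layer sees
    both the embedding e_t and the recurrent output h_t."""
    e = p["Wex"] @ x                                            # Eq. 4
    h, c = lstm_step(e, h_prev, c_prev, p)                      # Eq. 5
    d = np.maximum(0.0, p["Wde"] @ e + p["Wdh"] @ h + p["bd"])  # Eq. 6
    logits = p["Wyd"] @ d + p["by"]                             # Eq. 7
    y = np.exp(logits - logits.max())
    return y / y.sum(), h, c
```

The key design point is visible in `dual_step`: `e` appears twice, once inside the recurrent state update and once directly in the layer feeding the softmax, which is the explicit dependence on the current input that the paper argues for.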
The operator ⊙ denotes an element-wise product, and σ is the logistic sigmoid function. For convenience, we summarize the joint effect of equations 8-13 as: h_t = LSTM(e_t, {h_{t−1}; c_{t−1}}). (14) In the literature it is quite common to stack several LSTM layers. Here we consider a double-layer LSTM, where the output h_t of the recurrent module is obtained by the concatenated application of two LSTM layers: h′_t = LSTM1(e_t, {h′_{t−1}; c′_{t−1}}) (15), h_t = LSTM2(h′_t, {h_{t−1}; c_{t−1}}). (16) We refer to this double LSTM module as dLSTM: h_t = dLSTM(e_t, {h_{t−1}; c_{t−1}; h′_{t−1}; c′_{t−1}}) (17) = LSTM2(LSTM1(e_t, {h′_{t−1}; c′_{t−1}}), {h_{t−1}; c_{t−1}}). (18) 2.2 THE MOGRIFIER-LSTM MODULE. The Mogrifier-LSTM (Melis et al., 2020) is one of the state-of-the-art variations of the standard LSTM architecture, achieving some of the lowest perplexity scores in language modeling tasks. It basically consists of a standard LSTM block, but the input e_t and the hidden state h_{t−1} are transformed before entering equations 8-13. The mogrifier transformation involves several steps in which e_t and h_{t−1} modulate each other: e^i_t = 2σ(Q^i h^{i−1}_{t−1}) ⊙ e^{i−2}_t, for odd i ∈ {1, 2, ..., r} (19), h^i_{t−1} = 2σ(R^i e^{i−1}_t) ⊙ h^{i−2}_{t−1}, for even i ∈ {1, 2, ..., r}, (20) where the Q^i and R^i are weight matrices and we define e^{−1}_t = e_t and h^0_{t−1} = h_{t−1}. The linear transformations Q^i h^{i−1}_{t−1} and R^i e^{i−1}_t can also include the addition of a bias vector, which has been omitted for the sake of clarity. The constant r is a hyperparameter whose value defines the number of rounds of the transformation. We refer to this recurrent module, including the mogrifier transformation and the subsequent application of the LSTM layer, as: h_t = mLSTM(e_t, {h_{t−1}; c_{t−1}}) = LSTM(e^∗_t, {h^∗_{t−1}; c_{t−1}}), (21) where e^∗_t and h^∗_{t−1} are the highest-indexed e^i_t and h^i_{t−1} in equations 19 and 20. Note that the choice r = 0 recovers the standard LSTM model. Melis et al.
(2020) also used a double-layer LSTM enhanced with the mogrifier transformation. This strategy can be summarized as follows: h_t = mdLSTM(e_t, {h_{t−1}; c_{t−1}; h′_{t−1}; c′_{t−1}}) (22) = mLSTM2(mLSTM1(e_t, {h′_{t−1}; c′_{t−1}}), {h_{t−1}; c_{t−1}}). (23) 3 EXPERIMENTS. 3.1 DATASETS. We perform experiments on two datasets: the Penn Treebank corpus (Marcus et al., 1993), as preprocessed by Mikolov et al. (2010), and the WikiText-2 dataset (Merity et al., 2017). In both cases, the data are used without any additional preprocessing. The Penn Treebank (PTB) dataset has been widely used in the literature to experiment with language modeling. The standard data preprocessing is due to Mikolov et al. (2010), and includes transformation of all letters to lower case, elimination of punctuation symbols, and replacement of all numbers with a special token. The vocabulary is limited to the 10,000 most frequent words. The data are split into a training set which contains almost 930,000 tokens, and validation and test sets with around 80,000 words each. The WikiText-2 (WT2) dataset, introduced by Merity et al. (2017), is a more realistic benchmark for language modeling tasks. It consists of more than 2 million words extracted from Wikipedia articles. The training, validation and test sets contain around 2,125,000, 220,000, and 250,000 words, respectively. The vocabulary includes over 30,000 words, and the data retain capitalization, punctuation, and numbers. 3.2 EXPERIMENTAL SETUP. All the considered models follow one of the two architectures discussed in section 2, either the Embedding-Recurrent-Softmax (ERS) architecture (equations 1-3) or the dual architecture (equations 4-7). In either case, the recurrent module can be any of LSTM, dLSTM, or mdLSTM. Weight tying (Inan et al., 2017; Press & Wolf, 2017) is used to couple the weight matrices of the embedding and the output layers.
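The mogrifier rounds of equations 19-20 can be sketched as a short loop. For brevity this sketch reuses a single Q and R across rounds, whereas the text uses separate matrices Q^i and R^i per round; that simplification, and the omitted biases, are assumptions of the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mogrify(e, h_prev, Q, R, r):
    """Mogrifier transformation, Eqs. 19-20: e_t and h_{t-1} modulate
    each other for r rounds before the usual LSTM gates are applied.
    With r = 0 the inputs pass through unchanged (standard LSTM).
    NOTE: a single Q and R are reused here; the paper uses per-round
    matrices Q^i, R^i."""
    for i in range(1, r + 1):
        if i % 2 == 1:                          # odd rounds update e (Eq. 19)
            e = 2.0 * sigmoid(Q @ h_prev) * e
        else:                                   # even rounds update h (Eq. 20)
            h_prev = 2.0 * sigmoid(R @ e) * h_prev
    return e, h_prev
```

The factor 2σ(·) centers the gate around 1, so an untrained mogrifier starts close to the identity; with zero weight matrices the transformation leaves its inputs exactly unchanged.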
This reduces the number of parameters and prevents the model from learning a one-to-one correspondence between the input and the output ( Merity et al. , 2018 ) . We run two different sets of experiments . First , we analyze the effect of the dual connection by comparing the performances of the two architectures ( ERS vs Dual ) , using each of the recurrent modules , on both the PTB and the WT2 datasets . In this setting the hyperparameters are tuned for the ERS architecture , and then transferred to the dual case . Second , we search for the best hyperparameters for the dual architecture using the mdLSTM recurrence , and compare the perplexity score with current state-of-the-art values . All the experiments have been performed using the Keras library ( Chollet et al. , 2015 ) , and the implementation is available in a public Github repository1 . The networks are trained using the Nadam optimizer ( Dozat , 2016 ) , a variation of Adam ( Kingma & Ba , 2015 ) where Nesterov momentum is applied . The number of training epochs is different for each experimental condition . On one hand , when the objective is to perform a pairwise comparison between dual and non-dual architectures , we train the models for 100 epochs . On the other hand , when the goal is to compare the dual network with state of the art approaches , we let the models run for 300 epochs . We use batch sizes of 32 and 128 for the PTB and the WT2 problems , respectively , and set the sequence length to 25 in all cases . The remaining hyperparameters are searched in the ranges described in table 1 . Finally , all the models are run twice , both with and without dynamic evaluation ( Krause et al. , 2018 ) . Dynamic evaluation is a standard method commonly used to adapt the model parameters , learned during training , using also the validation data . This allows the networks to get adapted to the new evaluation conditions , which in general improves their performance . 
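The weight tying mentioned above can be sketched as an output layer that reuses the embedding matrix instead of learning a separate projection. The class below is an illustrative assumption (not the authors' code); note that tying requires the layer feeding the softmax to have the embedding dimension.

```python
import numpy as np

class TiedSoftmax:
    """Weight-tying sketch: the output projection reuses the embedding
    matrix, so no separate (vocab x dim) output matrix is learned.
    Shapes and initialization are illustrative."""
    def __init__(self, vocab_size, dim, rng):
        self.E = rng.normal(0.0, 0.1, size=(vocab_size, dim))  # shared embedding
        self.b = np.zeros(vocab_size)

    def embed(self, token_id):
        return self.E[token_id]          # input side: row lookup

    def logits(self, d):
        return self.E @ d + self.b       # output side: tied weights = E
```

Because the same matrix serves both roles, the model saves `vocab_size * dim` parameters, which is substantial when the vocabulary has tens of thousands of words.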
In order to keep the models as simple as possible, no additional modifications have been considered. ¹ The repository is not yet available, to preserve author anonymity. It will be released after the review process. | This paper proposes a simple improvement to recurrent network architectures for language modeling. The idea is to insert a single additional layer into a network with one or more recurrent layers, just before the output layer (Eq. 6). This is termed a "dual connection" and it combines the output of the last recurrent layer directly with the input to the network at the current step. The paper presents the results of adding this modification to both LSTM and Mogrifier-LSTM networks on the Penn Treebank and WikiText-2 datasets. | SP:9be76bd104a9119c6aa5480f7cb70aba77519907 |
The Importance of the Current Input in Sequence Modeling | 1 INTRODUCTION. Deep learning models constitute the current state of the art in most artificial intelligence applications, from computer vision to robotics or medicine. When dealing with sequential data, Recurrent Neural Networks (RNNs), especially architectures with gating mechanisms such as the LSTM (Hochreiter & Schmidhuber, 1997), the GRU (Cho et al., 2014) and other variants, are usually the default choice. One of the most interesting applications of RNNs is in the field of Natural Language Processing, where most tasks, such as machine translation, document summarization or language modeling, involve the manipulation of sequences of textual data. Of these, language modeling has been extensively used to test innovations in recurrent architectures, mainly due to the ease of obtaining very large datasets that can be used to train neural networks with millions of parameters. Sequence modeling consists of predicting the next element in a sequence given its past history. In language modeling, the sequence is a text, and hence the task is to predict the next word or the next character. In this context, some of the best performing architectures include the Mogrifier LSTM (Melis et al., 2020) and different variations of the AWD-LSTM (Merity et al., 2018), usually combined with dynamic evaluation and a mixture of softmaxes (MoS) (Wang et al., 2019; Gong et al., 2018). These models obtain state-of-the-art performance on moderate-size datasets, such as the Penn Treebank (Mikolov et al., 2010) or the WikiText-2 (Merity et al., 2017) corpora, when no additional data are used during training. When larger datasets are considered, or when external data are used to pre-train the networks, attention-based architectures usually outperform other models (Radford et al., 2019; Brown et al., 2020).
In this work we use moderate-scale language modeling datasets to explore the effect of a mechanism recently proposed by Oliva & Lago-Fernández ( 2021 ) , when combined with different LSTM-based models in the language modeling context . The idea consists of modifying a recurrent architecture by introducing a direct connection between the input and the output of the recurrent module . This has been shown to improve both the model ’ s generalization results and its readability in simple tasks related to the recognition of regular languages . In a standard RNN , the output depends only on the network ’ s hidden state , ht , which in turn depends on both the input , xt , and the recent past , ht−1 . But there is no explicit dependence of the network ’ s output on its input . In some cases this could be a shortcoming , since the transformation of xt needed to compute the network ’ s internal state is not necessarily the most appropriate to compute the output . However , an explicit dependence of the output on xt can be forced by adding a dual connection that skips the recurrent layers . We claim that this strategy may be of general application in RNN models . To test our hypothesis we perform a thorough comparison of several state-of-the-art RNN architectures , with and without the dual connection , on the Penn Treebank and the Wikitext-2 datasets . Our results show that , under all experimental conditions , the dual architectures outperform their nondual counterparts . In addition , the Mogrifier-LSTM enhanced with a dual connection establishes a new state-of-the-art word-level perplexity for the Penn Treebank dataset when no additional data are used to train the models . The remainder of the article is organized as follows . First , in section 2 , we present the different models we have used and the two possible architectures , the standard recurrent architecture and the dual architecture . In section 3 , we describe the datasets and the experimental setup . 
In section 4, we present our results. And finally, in section 5, we extract some conclusions and discuss further lines of research. 2 MODELS. We start by presenting the standard recurrent architecture, which is common to all the models. In the absence of a dual connection, the basic architecture involves an embedding layer, a recurrent layer and a fully-connected layer with softmax activation: $e_t = W^{ex} x_t$ (1), $h_t = \mathrm{REC}(e_t, S_{t-1})$ (2), $y_t = \mathrm{softmax}(W^{yh} h_t + b^y)$ (3), where the $W^{**}$ and $b^{*}$ are weight matrices and biases, respectively, and $x_t$ is the input vector at time $t$. The REC module represents an arbitrary recurrent layer, with $S_{t-1}$ being a set of vectors describing its internal state at the previous time step. In the most general case, this module will simply be an LSTM cell, but we consider other possibilities as well, as described below. The dual architecture introduces an additional layer, with ReLU activation, which is fed with both the output of the embedding layer and the output of the recurrent module: $e_t = W^{ex} x_t$ (4), $h_t = \mathrm{REC}(e_t, S_{t-1})$ (5), $d_t = \mathrm{ReLU}(W^{de} e_t + W^{dh} h_t + b^d)$ (6), $y_t = \mathrm{softmax}(W^{yd} d_t + b^y)$ (7). This way the network's input can reach the softmax layer following two different paths, through the recurrent layer and through the dual connection. In the following we consider different forms for the recurrent module in equations 2 and 5. 2.1 THE LSTM MODULE. In the simplest approach the recurrent module consists of an LSTM cell, where the internal state includes both the output and the memory, $S_t = \{h_t; c_t\}$, which are computed as follows: $f_t = \sigma(W^{fe} e_t + W^{fh} h_{t-1} + b^f)$ (8), $i_t = \sigma(W^{ie} e_t + W^{ih} h_{t-1} + b^i)$ (9), $o_t = \sigma(W^{oe} e_t + W^{oh} h_{t-1} + b^o)$ (10), $z_t = \tanh(W^{ze} e_t + W^{zh} h_{t-1} + b^z)$ (11), $c_t = f_t \odot c_{t-1} + i_t \odot z_t$ (12), $h_t = o_t \odot \tanh(c_t)$ (13), where, as before, the $W^{**}$ are weight matrices and the $b^{*}$ are bias vectors.
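As a minimal illustration of the dual forward step (equations 4 to 7), the sketch below computes the output from a given embedding and recurrent output. The recurrence itself is abstracted away (`h_t` stands in for REC(e_t, S_{t-1})), and all weight names are illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def dual_forward_step(x_t, h_t, params):
    """One step of the dual architecture (eqs. 4-7): the softmax
    layer sees both the embedding e_t and the recurrent output h_t
    through the extra ReLU layer d_t. The recurrent module is
    abstracted as the given h_t in this sketch."""
    W_ex, W_de, W_dh, b_d, W_yd, b_y = (
        params[k] for k in ("W_ex", "W_de", "W_dh", "b_d", "W_yd", "b_y"))
    e_t = W_ex @ x_t                                      # eq. 4
    d_t = np.maximum(0.0, W_de @ e_t + W_dh @ h_t + b_d)  # eq. 6
    y_t = softmax(W_yd @ d_t + b_y)                       # eq. 7
    return y_t
```

Note that without the `W_de @ e_t` term in `d_t`, this reduces to the standard ERS output layer acting on `h_t` alone.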
The operator $\odot$ denotes an element-wise product, and $\sigma$ is the logistic sigmoid function. For convenience, we summarize the joint effect of equations 8–13 as: $h_t = \mathrm{LSTM}(e_t, \{h_{t-1}; c_{t-1}\})$ (14). In the literature it is quite common to stack several LSTM layers. Here we consider a double-layer LSTM, where the output $h_t$ of the recurrent module is obtained by the concatenated application of two LSTM layers: $h'_t = \mathrm{LSTM}_1(e_t, \{h'_{t-1}; c'_{t-1}\})$ (15), $h_t = \mathrm{LSTM}_2(h'_t, \{h_{t-1}; c_{t-1}\})$ (16). We refer to this double LSTM module as dLSTM: $h_t = \mathrm{dLSTM}(e_t, \{h_{t-1}; c_{t-1}; h'_{t-1}; c'_{t-1}\})$ (17) $= \mathrm{LSTM}_2(\mathrm{LSTM}_1(e_t, \{h'_{t-1}; c'_{t-1}\}), \{h_{t-1}; c_{t-1}\})$ (18). 2.2 THE MOGRIFIER-LSTM MODULE. The Mogrifier-LSTM (Melis et al., 2020) is one of the state-of-the-art variations of the standard LSTM architecture achieving the lowest perplexity scores in language modeling tasks. It basically consists of a standard LSTM block, but the input $e_t$ and the hidden state $h_{t-1}$ are transformed before entering equations 8–13. The mogrifier transformation involves several steps where $e_t$ and $h_{t-1}$ modulate each other: $e^i_t = 2\sigma(Q^i h^{i-1}_{t-1}) \odot e^{i-2}_t$, for odd $i \in \{1, 2, \dots, r\}$ (19), $h^i_{t-1} = 2\sigma(R^i e^{i-1}_t) \odot h^{i-2}_{t-1}$, for even $i \in \{1, 2, \dots, r\}$ (20), where $Q^i$ and $R^i$ are weight matrices and we have $e^{-1}_t = e_t$ and $h^0_{t-1} = h_{t-1}$. The linear transformations $Q^i h^{i-1}_{t-1}$ and $R^i e^{i-1}_t$ can also include the addition of a bias vector, which has been omitted for the sake of clarity. The constant $r$ is a hyperparameter whose value defines the number of rounds of the transformation. We refer to this recurrent module, including the mogrifier transformation and the subsequent application of the LSTM layer, as: $h_t = \mathrm{mLSTM}(e_t, \{h_{t-1}; c_{t-1}\}) = \mathrm{LSTM}(e^{*}_t, \{h^{*}_{t-1}; c_{t-1}\})$ (21), where $e^{*}_t$ and $h^{*}_{t-1}$ are the highest indexed $e^i_t$ and $h^i_{t-1}$ in equations 19 and 20. Note that the choice $r = 0$ recovers the standard LSTM model. Melis et al.
(2020) also used a double-layer LSTM enhanced with the mogrifier transformation. This strategy can be summarized as follows: $h_t = \mathrm{mdLSTM}(e_t, \{h_{t-1}; c_{t-1}; h'_{t-1}; c'_{t-1}\})$ (22) $= \mathrm{mLSTM}_2(\mathrm{mLSTM}_1(e_t, \{h'_{t-1}; c'_{t-1}\}), \{h_{t-1}; c_{t-1}\})$ (23). 3 EXPERIMENTS. 3.1 DATASETS. We perform experiments on two datasets: the Penn Treebank corpus (Marcus et al., 1993), as preprocessed by Mikolov et al. (2010), and the WikiText-2 dataset (Merity et al., 2017). In both cases, the data are used without any additional preprocessing. The Penn Treebank (PTB) dataset has been widely used in the literature to experiment with language modeling. The standard data preprocessing is due to Mikolov et al. (2010), and includes transformation of all letters to lower case, elimination of punctuation symbols, and replacement of all numbers with a special token. The vocabulary is limited to the 10,000 most frequent words. The data are split into a training set which contains almost 930,000 tokens, and validation and test sets with around 80,000 words each. The WikiText-2 (WT2) dataset, introduced by Merity et al. (2017), is a more realistic benchmark for language modeling tasks. It consists of more than 2 million words extracted from Wikipedia articles. The training, validation and test sets contain around 2,125,000, 220,000, and 250,000 words, respectively. The vocabulary includes over 30,000 words, and the data retain capitalization, punctuation, and numbers. 3.2 EXPERIMENTAL SETUP. All the considered models follow one of the two architectures discussed in section 2, either the Embedding-Recurrent-Softmax (ERS) architecture (equations 1–3) or the dual architecture (equations 4–7). In either case, the recurrent module can be any of LSTM, dLSTM, or mdLSTM. Weight tying (Inan et al., 2017; Press & Wolf, 2017) is used to couple the weight matrices of the embedding and the output layers.
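Returning to the mogrifier transformation of equations 19 and 20 (Section 2.2), a minimal sketch of the alternating gating rounds, assuming the per-round weight matrices are supplied as dictionaries keyed by round index (biases omitted, as in the paper's presentation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mogrify(e_t, h_prev, Q, R, r):
    """Mogrifier rounds (eqs. 19-20): for r rounds, e_t and h_{t-1}
    alternately gate each other before the ordinary LSTM equations
    are applied. Q[i] (odd i) and R[i] (even i) are the per-round
    weight matrices; r = 0 recovers the plain LSTM."""
    for i in range(1, r + 1):
        if i % 2 == 1:                          # odd round: update e
            e_t = 2.0 * sigmoid(Q[i] @ h_prev) * e_t
        else:                                   # even round: update h
            h_prev = 2.0 * sigmoid(R[i] @ e_t) * h_prev
    return e_t, h_prev
```

The returned pair plays the role of $e^{*}_t$ and $h^{*}_{t-1}$ fed into the LSTM cell in equation 21.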
This reduces the number of parameters and prevents the model from learning a one-to-one correspondence between the input and the output (Merity et al., 2018). We run two different sets of experiments. First, we analyze the effect of the dual connection by comparing the performances of the two architectures (ERS vs Dual), using each of the recurrent modules, on both the PTB and the WT2 datasets. In this setting the hyperparameters are tuned for the ERS architecture, and then transferred to the dual case. Second, we search for the best hyperparameters for the dual architecture using the mdLSTM recurrence, and compare the perplexity score with current state-of-the-art values. All the experiments have been performed using the Keras library (Chollet et al., 2015), and the implementation is available in a public Github repository1. The networks are trained using the Nadam optimizer (Dozat, 2016), a variation of Adam (Kingma & Ba, 2015) in which Nesterov momentum is applied. The number of training epochs is different for each experimental condition. On the one hand, when the objective is to perform a pairwise comparison between dual and non-dual architectures, we train the models for 100 epochs. On the other hand, when the goal is to compare the dual network with state-of-the-art approaches, we let the models run for 300 epochs. We use batch sizes of 32 and 128 for the PTB and the WT2 problems, respectively, and set the sequence length to 25 in all cases. The remaining hyperparameters are searched in the ranges described in table 1. Finally, all the models are run twice, both with and without dynamic evaluation (Krause et al., 2018). Dynamic evaluation is a standard method commonly used to adapt the model parameters, learned during training, using the validation data as well. This allows the networks to adapt to the new evaluation conditions, which in general improves their performance.
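The dynamic evaluation loop described above can be sketched as follows. This is a hedged outline, not the paper's code: `model` is assumed to expose `loss_and_grads()` and `apply_grads()` (both names are illustrative), and each segment is scored before the adaptation step so no test token influences its own prediction.

```python
import math

def dynamic_evaluation(model, segments, lr=1e-4):
    """Sketch of dynamic evaluation (Krause et al., 2018): the model
    keeps adapting at evaluation time by taking a small gradient step
    on each segment *after* that segment has been scored. Returns the
    token-weighted perplexity over all segments."""
    total_loss, total_tokens = 0.0, 0
    for seg in segments:
        loss, grads = model.loss_and_grads(seg)   # score first
        total_loss += loss * len(seg)
        total_tokens += len(seg)
        model.apply_grads(grads, lr)              # then adapt
    return math.exp(total_loss / total_tokens)
```

Running the same loop with `lr=0` recovers ordinary static evaluation, which is how the "without dynamic evaluation" condition can be mimicked.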
In order to keep the models as simple as possible , no additional modifications have been considered . 1The repository is not yet available to preserve author anonymity . It will be released after the review process . | This work revisits the LSTM architecture. They propose to modify a recurrent architecture by adding a direct connection between the input and the output of the recurrent module, called "dual". They also consider a double-layer LSTM, where the output of the recurrent module is obtained by the concatenated application of two LSTM layers. In experiments, the dual modification can get consistent improvement. | SP:9be76bd104a9119c6aa5480f7cb70aba77519907 |
Adversarial Robustness as a Prior for Learned Representations | 1 INTRODUCTION. Beyond achieving remarkably high accuracy on a variety of tasks (Krizhevsky et al., 2012; He et al., 2015; Collobert & Weston, 2008), a major appeal of deep learning is the ability to learn effective representations of data. Specifically, deep neural networks can be thought of as linear classifiers acting on learned feature representations (also known as feature embeddings). A major goal in representation learning is for these embeddings to encode high-level, interpretable features of any given input (Goodfellow et al., 2016; Bengio et al., 2013; Bengio, 2019). Indeed, learned representations turn out to be quite versatile—in computer vision, for example, they are the driving force behind transfer learning (Girshick et al., 2014; Donahue et al., 2014), and image similarity metrics such as VGG distance (Dosovitskiy & Brox, 2016a; Zhang et al., 2018). Still, deep networks' feature embeddings exhibit some shortcomings that are at odds with our idealized model of a linear classifier on top of interpretable high-level features. For example, the existence of adversarial examples (Biggio et al., 2013; Szegedy et al., 2014)—and the fact that they may correspond to flipping predictive features (Ilyas et al., 2019)—suggests that deep neural networks make predictions based on features that are vastly different from what humans use, or even recognize. (This message has also been corroborated by several recent works (Brendel & Bethge, 2019; Geirhos et al., 2019; Jetley et al., 2018; Zhang & Zhu, 2019).) A more direct example of such a shortcoming is pinpointed by Jacobsen et al. (2019), who show that one can find image pairs that appear completely different to a human but are nearly identical in terms of their feature embeddings. 1https://github.com/cantankerousdolphin/robust-learned-representations. Our contributions.
Motivated by the limitations of standard representations, we propose using robust optimization (in particular, adversarial training (Goodfellow et al., 2015; Madry et al., 2018)) to enforce a prior on the features that models learn (and thus on their learned feature representations). It turns out that the resulting "robust representations" (the embeddings learned by adversarially trained neural networks) are significantly better-behaved than their standard counterparts. We demonstrate this fact by looking at two well-studied tasks that are typically used to study representations: • Representation inversion (Section 5): In contrast to standard representations—for which representation inversion is a significant challenge, e.g., (Mahendran & Vedaldi, 2015)—robust representations are naturally approximately invertible. In particular, adversarially robust networks provide an embedding of the input such that images with similar representations are semantically similar, and the salient features of an image are easily recoverable from its robust feature representation. This property also naturally enables feature interpolation between arbitrary inputs. • Feature visualization (Section 6): Direct maximization of the coordinates of robust representations suffices to visualize easily recognizable features of the model. This is again a significant departure from standard representations, where (a) without explicit regularization at visualization time, feature visualization often produces unintelligible results; and (b) even with regularization, visualized features in the representation layer are scarcely human-recognizable (Olah et al., 2017). As a result of this property, robust representations enable the addition of specific features to images through direct first-order optimization. The tasks above and their respective applications are illustrated in Figure 2.
Broadly, our results suggest robust optimization as a promising avenue for learning better-behaved image representations. 2 RELATED WORK. Our work studies the feature representations of adversarially robust networks. As discussed in Section 3, these are networks trained with the robust optimization framework (Wald, 1945; Goodfellow et al., 2015; Madry et al., 2018) and were originally proposed in the context of defending against adversarial perturbations (Biggio et al., 2013; Szegedy et al., 2014). Adversarial robustness has been studied extensively in the context of machine learning security (see e.g., Carlini & Wagner (2017); Athalye et al. (2018b;a); Papernot et al. (2017)), and as an independent phenomenon (see e.g., Gilmer et al. (2018); Schmidt et al. (2018); Jacobsen et al. (2019); Ilyas et al. (2019); Tsipras et al. (2019); Su et al. (2018)). Several recent works have studied the qualitative properties of robust models. Zhang & Zhu (2019) find that adversarially robust models behave more predictably in the face of various out-of-distribution data, appearing to use more global features. Tsipras et al. (2019) observe that large adversarial perturbations constructed for robust networks actually resemble instances of the target class. Santurkar et al. (2019) leverage this fact and use robust classifiers for a wide array of image synthesis tasks. Our work is complementary to this line of research, and focuses on understanding properties of robust representations through the lens of "benchmark" representation learning tasks (namely, inversion and component visualization). There is also a large body of work dedicated to studying each of the representation learning tasks we study below (inversion and feature visualization)—we have embedded discussions of the related work for each task within the relevant sections. 3 BACKGROUND AND MOTIVATION.
3.1 ADVERSARIAL EXAMPLES AND ROBUST TRAINING. In standard settings, supervised machine learning models are trained by minimizing the expected loss with respect to a set of parameters θ, i.e., by solving an optimization problem of the form: $\theta^* = \arg\min_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\mathcal{L}_\theta(x, y)\right]$. (1) We refer to (1) as the standard training objective—finding the optimum of this objective should guarantee high performance on unseen data from the distribution. It turns out that deep neural networks trained with the standard objective are vulnerable to adversarial examples (Biggio et al., 2013; Szegedy et al., 2014)—by changing a natural input imperceptibly, one can easily manipulate the predictions of a deep network to be arbitrarily incorrect. A natural approach (and one of the most successful) for defending against these adversarial examples is to use the robust optimization framework: a classical framework for optimization in the presence of uncertainty (Wald, 1945; Danskin, 1967). In particular, instead of just finding parameters which minimize the expected loss (as in the standard objective), a robust optimization objective also requires that the model induced by the parameters θ be robust to worst-case perturbations of the input: $\theta^* = \arg\min_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\max_{\delta \in \Delta} \mathcal{L}_\theta(x + \delta, y)\right]$. (2) This robust objective is in fact common in the context of machine learning security, where ∆ is usually chosen to be a simple convex set, e.g., an $\ell_p$-ball. Canonical instantiations of robust optimization such as adversarial training (Goodfellow et al., 2015; Madry et al., 2018) have arisen as practical ways of obtaining networks that are invariant to small $\ell_p$-bounded changes in the input while maintaining high accuracy (though a small tradeoff between robustness and accuracy has been noted by prior work (Tsipras et al., 2019; Su et al., 2018) (also cf.
Appendix Tables 4 and 5 for a comparison of accuracies of standard and robust classifiers)). 3.2 ROBUST TRAINING AS A FEATURE PRIOR. Traditionally, adversarial robustness in the deep learning setting has been explored as a goal predominantly in the context of ML security and reliability (Biggio & Roli, 2018). In this work, we consider an alternative perspective on adversarial robustness—we cast it as a prior on the features that can be learned by a model. Specifically, models trained with objective (2) must be invariant to a set of perturbations ∆. Thus, selecting ∆ to be a set of perturbations that humans are robust to (e.g., small $\ell_p$-norm perturbations) results in models that share more invariances with (and thus are encouraged to use similar features to) human perception. Note that incorporating human-selected priors and invariances in this fashion has a long history in the design of ML models—convolutional layers, for instance, were introduced as a means of introducing an invariance to translations of the input (Fukushima, 1980). In what follows, we will explore the effect of the prior induced by adversarial robustness on models' learned representations, and demonstrate that representations learned by adversarially robust models are significantly better-behaved, and enable many previously-infeasible modes of direct interaction with feature embeddings. It is worth noting that despite the value of ε used for training being quite small, we find that robust optimization globally affects the behavior of learned representations. As we demonstrate in this section, the benefits of robust representations extend to out-of-distribution inputs and far beyond ε-balls around the training distribution. 3.3 STANDARD AND ROBUST REPRESENTATIONS. Our work is primarily focused on studying the representations of trained neural networks.
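Backing up to the robust objective (2), its inner maximization is typically approximated with projected gradient ascent (the PGD method used throughout this paper). A hedged sketch with an $\ell_2$ perturbation set, where `grad_fn` (an assumed caller-supplied callable) returns the loss gradient with respect to the input:

```python
import numpy as np

def project_l2(delta, eps):
    """Project delta onto the l2 ball of radius eps: rescale onto
    the sphere when the norm exceeds eps, otherwise leave it."""
    norm = np.linalg.norm(delta)
    return delta if norm <= eps else delta * (eps / norm)

def pgd_inner_max(x, grad_fn, eps, steps=10, step_size=0.1):
    """Sketch of the inner maximization of objective (2): ascend
    the loss in the perturbation delta, projecting back onto the
    l2 ball of radius eps after every step."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta = project_l2(delta + step_size * grad_fn(x + delta), eps)
    return x + delta
```

Adversarial training then plugs `pgd_inner_max(x, ...)` in place of `x` when computing the training loss; swapping `project_l2` for an $\ell_\infty$ clipping step gives the other common instantiation.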
Throughout this work, we define the representation function R(·) as a function induced by a neural network which maps inputs $x \in \mathbb{R}^n$ to vectors $R(x) \in \mathbb{R}^k$ in the representation layer of that network (the penultimate layer). In what follows, we refer to "standard representations" as the representation functions induced by standard (non-robust) networks, trained with the objective (1)—analogously, "robust representations" refer to the representation functions induced by $\ell_2$-adversarially robust networks, i.e., networks trained with the objective (2) with ∆ being the $\ell_2$ ball: $\theta^*_{\mathrm{robust}} = \arg\min_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\max_{\|\delta\|_2 \le \varepsilon} \mathcal{L}_\theta(x + \delta, y)\right]$. 4 EXPERIMENTAL SETUP. We train robust and standard ResNet-50 (He et al., 2016) networks on the Restricted ImageNet (Tsipras et al., 2019) and ImageNet (Russakovsky et al., 2015) datasets. Dataset specifics are in Appendix A.1, training details are in Appendices A.2 and A.3, and the performance of each model is reported in Appendix A.4. In the main text, we present results for Restricted ImageNet, and link to (nearly identical) results for ImageNet in Appendices (B.1.4, B.3.2). Unless explicitly noted otherwise, our optimization method of choice for any objective function will be (projected) gradient descent (PGD), a first-order method which is known to be highly effective for minimizing neural network-based loss functions for both standard and adversarially robust neural networks (Athalye et al., 2018a; Madry et al., 2018). | The paper looks at favorable properties of feature representations of an adversarially robust model. In particular, the authors look at a model trained with PGD training with an $\ell_p$ adversary.
In terms of favourable properties, the authors look at representation inversion and feature manipulation, and claim with experimental evidence that adversarially robust models are naturally better at these tasks. | SP:33d9ed48ce72f65860ffb34e77ed8b79b95b4869 |
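The representation function R(·) defined in Section 3.3 above (a network viewed as a linear classifier on top of its penultimate-layer activations) can be illustrated with a toy stand-in. All weights below are random and purely for illustration; this is not the paper's ResNet-50 setup.

```python
import numpy as np

def make_toy_net(d=8, k=4, n_classes=3, seed=0):
    """Toy version of the paper's framing: R maps an input in R^d
    to its penultimate-layer representation in R^k, and the final
    prediction is a linear classifier applied to R(x)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(k, d))          # input -> representation layer
    W2 = rng.normal(size=(n_classes, k))  # linear classifier on top of R
    R = lambda x: np.maximum(0.0, W1 @ x)  # R: R^d -> R^k (ReLU features)
    logits = lambda x: W2 @ R(x)
    return R, logits
```

Representation inversion then amounts to searching for an input whose R(x) matches a target embedding, which is the task Section 5 uses to compare standard and robust networks.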
Adversarial Robustness as a Prior for Learned Representations | 1 INTRODUCTION. Beyond achieving remarkably high accuracy on a variety of tasks (Krizhevsky et al., 2012; He et al., 2015; Collobert & Weston, 2008), a major appeal of deep learning is the ability to learn effective representations of data. Specifically, deep neural networks can be thought of as linear classifiers acting on learned feature representations (also known as feature embeddings). A major goal in representation learning is for these embeddings to encode high-level, interpretable features of any given input (Goodfellow et al., 2016; Bengio et al., 2013; Bengio, 2019). Indeed, learned representations turn out to be quite versatile—in computer vision, for example, they are the driving force behind transfer learning (Girshick et al., 2014; Donahue et al., 2014), and image similarity metrics such as VGG distance (Dosovitskiy & Brox, 2016a; Zhang et al., 2018). Still, deep networks' feature embeddings exhibit some shortcomings that are at odds with our idealized model of a linear classifier on top of interpretable high-level features. For example, the existence of adversarial examples (Biggio et al., 2013; Szegedy et al., 2014)—and the fact that they may correspond to flipping predictive features (Ilyas et al., 2019)—suggests that deep neural networks make predictions based on features that are vastly different from what humans use, or even recognize. (This message has also been corroborated by several recent works (Brendel & Bethge, 2019; Geirhos et al., 2019; Jetley et al., 2018; Zhang & Zhu, 2019).) A more direct example of such a shortcoming is pinpointed by Jacobsen et al. (2019), who show that one can find image pairs that appear completely different to a human but are nearly identical in terms of their feature embeddings. 1https://github.com/cantankerousdolphin/robust-learned-representations. Our contributions.
Motivated by the limitations of standard representations, we propose using robust optimization (in particular, adversarial training (Goodfellow et al., 2015; Madry et al., 2018)) to enforce a prior on the features that models learn (and thus on their learned feature representations). It turns out that the resulting "robust representations" (the embeddings learned by adversarially trained neural networks) are significantly better-behaved than their standard counterparts. We demonstrate this fact by looking at two well-studied tasks that are typically used to study representations: • Representation inversion (Section 5): In contrast to standard representations—for which representation inversion is a significant challenge, e.g., (Mahendran & Vedaldi, 2015)—robust representations are naturally approximately invertible. In particular, adversarially robust networks provide an embedding of the input such that images with similar representations are semantically similar, and the salient features of an image are easily recoverable from its robust feature representation. This property also naturally enables feature interpolation between arbitrary inputs. • Feature visualization (Section 6): Direct maximization of the coordinates of robust representations suffices to visualize easily recognizable features of the model. This is again a significant departure from standard representations, where (a) without explicit regularization at visualization time, feature visualization often produces unintelligible results; and (b) even with regularization, visualized features in the representation layer are scarcely human-recognizable (Olah et al., 2017). As a result of this property, robust representations enable the addition of specific features to images through direct first-order optimization. The tasks above and their respective applications are illustrated in Figure 2.
Broadly, our results suggest robust optimization as a promising avenue for learning better-behaved image representations. 2 RELATED WORK. Our work studies the feature representations of adversarially robust networks. As discussed in Section 3, these are networks trained with the robust optimization framework (Wald, 1945; Goodfellow et al., 2015; Madry et al., 2018) and were originally proposed in the context of defending against adversarial perturbations (Biggio et al., 2013; Szegedy et al., 2014). Adversarial robustness has been studied extensively in the context of machine learning security (see e.g., Carlini & Wagner (2017); Athalye et al. (2018b;a); Papernot et al. (2017)), and as an independent phenomenon (see e.g., Gilmer et al. (2018); Schmidt et al. (2018); Jacobsen et al. (2019); Ilyas et al. (2019); Tsipras et al. (2019); Su et al. (2018)). Several recent works have studied the qualitative properties of robust models. Zhang & Zhu (2019) find that adversarially robust models behave more predictably in the face of various out-of-distribution data, appearing to use more global features. Tsipras et al. (2019) observe that large adversarial perturbations constructed for robust networks actually resemble instances of the target class. Santurkar et al. (2019) leverage this fact and use robust classifiers for a wide array of image synthesis tasks. Our work is complementary to this line of research, and focuses on understanding properties of robust representations through the lens of "benchmark" representation learning tasks (namely, inversion and component visualization). There is also a large body of work dedicated to studying each of the representation learning tasks we study below (inversion and feature visualization)—we have embedded discussions of the related work for each task within the relevant sections. 3 BACKGROUND AND MOTIVATION.
3.1 ADVERSARIAL EXAMPLES AND ROBUST TRAINING. In standard settings, supervised machine learning models are trained by minimizing the expected loss with respect to a set of parameters θ, i.e., by solving an optimization problem of the form: $\theta^* = \arg\min_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\mathcal{L}_\theta(x, y)\right]$. (1) We refer to (1) as the standard training objective—finding the optimum of this objective should guarantee high performance on unseen data from the distribution. It turns out that deep neural networks trained with the standard objective are vulnerable to adversarial examples (Biggio et al., 2013; Szegedy et al., 2014)—by changing a natural input imperceptibly, one can easily manipulate the predictions of a deep network to be arbitrarily incorrect. A natural approach (and one of the most successful) for defending against these adversarial examples is to use the robust optimization framework: a classical framework for optimization in the presence of uncertainty (Wald, 1945; Danskin, 1967). In particular, instead of just finding parameters which minimize the expected loss (as in the standard objective), a robust optimization objective also requires that the model induced by the parameters θ be robust to worst-case perturbations of the input: $\theta^* = \arg\min_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\max_{\delta \in \Delta} \mathcal{L}_\theta(x + \delta, y)\right]$. (2) This robust objective is in fact common in the context of machine learning security, where ∆ is usually chosen to be a simple convex set, e.g., an $\ell_p$-ball. Canonical instantiations of robust optimization such as adversarial training (Goodfellow et al., 2015; Madry et al., 2018) have arisen as practical ways of obtaining networks that are invariant to small $\ell_p$-bounded changes in the input while maintaining high accuracy (though a small tradeoff between robustness and accuracy has been noted by prior work (Tsipras et al., 2019; Su et al., 2018) (also cf.
Appendix Tables 4 and 5 for a comparison of accuracies of standard and robust classifiers)). 3.2 ROBUST TRAINING AS A FEATURE PRIOR. Traditionally, adversarial robustness in the deep learning setting has been explored as a goal predominantly in the context of ML security and reliability (Biggio & Roli, 2018). In this work, we consider an alternative perspective on adversarial robustness—we cast it as a prior on the features that can be learned by a model. Specifically, models trained with objective (2) must be invariant to a set of perturbations ∆. Thus, selecting ∆ to be a set of perturbations that humans are robust to (e.g., small $\ell_p$-norm perturbations) results in models that share more invariances with (and thus are encouraged to use similar features to) human perception. Note that incorporating human-selected priors and invariances in this fashion has a long history in the design of ML models—convolutional layers, for instance, were introduced as a means of introducing an invariance to translations of the input (Fukushima, 1980). In what follows, we will explore the effect of the prior induced by adversarial robustness on models' learned representations, and demonstrate that representations learned by adversarially robust models are significantly better-behaved, and enable many previously-infeasible modes of direct interaction with feature embeddings. It is worth noting that despite the value of ε used for training being quite small, we find that robust optimization globally affects the behavior of learned representations. As we demonstrate in this section, the benefits of robust representations extend to out-of-distribution inputs and far beyond ε-balls around the training distribution. 3.3 STANDARD AND ROBUST REPRESENTATIONS. Our work is primarily focused on studying the representations of trained neural networks.
Throughout this work, we define the representation function R(·) as a function induced by a neural network which maps inputs $x \in \mathbb{R}^n$ to vectors $R(x) \in \mathbb{R}^k$ in the representation layer of that network (the penultimate layer). In what follows, we refer to "standard representations" as the representation functions induced by standard (non-robust) networks, trained with the objective (1)—analogously, "robust representations" refer to the representation functions induced by $\ell_2$-adversarially robust networks, i.e., networks trained with the objective (2) with ∆ being the $\ell_2$ ball: $\theta^*_{\mathrm{robust}} = \arg\min_{\theta} \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\max_{\|\delta\|_2 \le \varepsilon} \mathcal{L}_\theta(x + \delta, y)\right]$. 4 EXPERIMENTAL SETUP. We train robust and standard ResNet-50 (He et al., 2016) networks on the Restricted ImageNet (Tsipras et al., 2019) and ImageNet (Russakovsky et al., 2015) datasets. Dataset specifics are in Appendix A.1, training details are in Appendices A.2 and A.3, and the performance of each model is reported in Appendix A.4. In the main text, we present results for Restricted ImageNet, and link to (nearly identical) results for ImageNet in Appendices (B.1.4, B.3.2). Unless explicitly noted otherwise, our optimization method of choice for any objective function will be (projected) gradient descent (PGD), a first-order method which is known to be highly effective for minimizing neural network-based loss functions for both standard and adversarially robust neural networks (Athalye et al., 2018a; Madry et al., 2018). | This paper empirically demonstrates that robust optimization encourages deep neural networks to learn a high-level encoding of inputs. Specifically, this paper first utilizes $\ell_2$-norm adversarial training to train robust neural networks.
Then, this paper leverages two visualization techniques, i.e., *representation inversion* and *feature visualization*, to demonstrate that features identified by the robust neural network are more discernible. To summarize, the main contributions of this paper are as follows: - A comprehensive literature review of findings related to features of standard and robust models; - Training an $\ell_2$-norm robust neural network and visualizing its features; - Leveraging *representation inversion* and *feature visualization* to demonstrate that the features in robust neural networks are more discernible. | SP:33d9ed48ce72f65860ffb34e77ed8b79b95b4869 |
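The robust objective above pairs an outer minimization with an inner maximization over the $\ell_2$ ball, which the text says is approximated with (projected) gradient descent. Below is a minimal numpy sketch of that $\ell_2$-PGD inner maximization; the toy quadratic loss, step size, and iteration count are illustrative assumptions of ours, not values from the paper.

```python
import numpy as np

def l2_pgd(x, grad_fn, eps=0.5, step=0.1, n_steps=20):
    """Approximate max_{||delta||_2 <= eps} L(x + delta) by projected
    gradient ascent: take a normalized gradient step, then project delta
    back onto the L2 ball of radius eps."""
    delta = np.zeros_like(x)
    for _ in range(n_steps):
        g = grad_fn(x + delta)
        g_norm = np.linalg.norm(g) + 1e-12
        delta = delta + step * g / g_norm      # normalized ascent step
        d_norm = np.linalg.norm(delta)
        if d_norm > eps:                       # projection onto the L2 ball
            delta = delta * (eps / d_norm)
    return x + delta

# Toy loss L(z) = ||z - t||^2 with gradient 2(z - t); the adversary
# pushes x away from the target t while staying inside the eps-ball.
t = np.array([0.0, 0.0])
x = np.array([1.0, 0.0])
x_adv = l2_pgd(x, lambda z: 2 * (z - t), eps=0.5)
```

In real adversarial training, `grad_fn` would be the gradient of the network loss with respect to the input, computed by backpropagation.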
Two Regimes of Generalization for Non-Linear Metric Learning | 1 INTRODUCTION. Metric Learning, Bellet et al. (2015), is the problem of finding a metric ρ on the space of features, such that ρ reflects some semantic properties of a given task. Generally, the input can be thought of as a set of labeled pairs $\{((x_i, x'_i), y_i)\}_{i=1}^n$, where $x_i, x'_i \in \mathbb{R}^d$ are the features, and $y_i$ is the label, indicating whether $x_i$ and $x'_i$ should be close in the metric or far apart. For instance, in face identification, Schroff et al. (2015), features $x_i$ and $x'_i$ corresponding to the same face should be close in ρ, while different faces should be far apart. Note that the above metric learning formulation is fairly general and one can convert supervised clustering, or even standard classification problems into metric learning simply by setting $y_i = 1$ if $x_i$ and $x'_i$ have the same original label and $y_i = 0$ otherwise (Davis et al., 2007; Weinberger & Saul, 2009; Cao et al., 2016; Khosla et al., 2020; Chicco, 2020). The metric ρ is typically assumed to be the Euclidean metric taken after a linear or non-linear embedding of the features. That is, we consider a parametric family G of embeddings into a k-dimensional space, $g: \mathbb{R}^d \to \mathbb{R}^k$, and set¹ $\rho(x, x') = \rho_g(x, x') = \|g(x) - g(x')\|_2^2$. (1) As an example, Figure 1 shows tSNE plots, Maaten & Hinton (2008), of the classical 20newsgroups dataset. Figure 1a was generated using the regular bag of words representation of the data (see Sections 5 and D for additional details) while Figure 1b was generated by applying tSNE to an embedding of the data (of the form (3) below) learned by minimizing a loss based on labels, as described above. Clearly, there is no discernible relation between the label and the metric in the raw representation, but there is a strong relation in the learned metric.
One can also obtain similar conclusions, and quantify them, by, for instance, replacing tSNE with spectral clustering. The uniform generalization problem for metric learning is the following: Given a family of embeddings G, provide bounds, that hold uniformly for all g ∈ G, on the difference between the expected value of the loss and the average value of the loss on the train set; see Section 2 for a formal definition. Such bounds guarantee, for instance, that the train set would not be overfitted. Given a family of embeddings G, consider the family F of functions $f: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ of the form $F = \{ f(x, x') = \rho_g(x, x') \mid g \in G \}$. (2) (¹Note that ρ in (1) is not strictly a metric. Nevertheless, this terminology is common.) We refer to these as the distance functions, which map a pair of features into their distance after the embedding. Well known arguments imply that one can obtain generalization bounds for F by providing upper bounds on the Rademacher complexity $R_n(F)$ of the family of the scalar functions F. Therefore in what follows, we discuss directly the upper bounds on $R_n(F)$ for various families G. We refer to Section 2 for formal definitions and details on the relation between G, F, $R_n(F)$, and generalization bounds. 1.1 OVERVIEW OF THE RESULTS. In this paper our goal is to study generalization bounds for neural network type non-linear embeddings $g: \mathbb{R}^d \to \mathbb{R}^k$ given by neural networks of depth L ≥ 1. Specifically, we consider embeddings of the form $g(x) = g_A(x) = \phi_L(A_L^t \cdot \phi_{L-1}(A_{L-1}^t \cdots \phi_1(A_1^t x) \cdots))$, (3) where $A = (A_1, \dots, A_L)$ is a tuple of matrices such that $A_i \in \mathbb{R}^{k_{i-1} \times k_i}$ for $i = 1, \dots, L$, and $k_i$ are the layer widths, with $k_0 = d$ and $k_L = k$. The activations $\phi_i: \mathbb{R} \to \mathbb{R}$ are assumed to be Lipschitz, with $\|\phi_i\|_{Lip} \le \rho_i$, and act on vectors in $\mathbb{R}^{k_i}$ coordinatewise.
A family of matrix tuples will be generally denoted by $\mathcal{A}$, while the associated embeddings family (3) will be denoted by G and the associated family of distance functions, (2), will be denoted by F. In the context of deep learning generalization guarantees for classification, there exists a large body of work on various complexity bounds related to the family of mappings G. The current strongest uniform bounds were given in Bartlett et al. (2017) (see also Neyshabur et al. (2018)). Our goal here is to translate these bounds to bounds on the Rademacher complexity of the family F, derived from G. That is, we study the aspects of the problem that are specific to the metric learning setting. To state the results, we require a few norm definitions: For a matrix $A \in \mathbb{R}^{s \times t}$, $\|A\|_{op}$ is the spectral norm, and set $\|A\|_{2,1} = \sum_{i=1}^{t} \|A_{\cdot i}\|_2$, where $A_{\cdot i}$ is the i-th column of A. For a family $\mathcal{A}$ of matrix tuples, $A = (A_1, \dots, A_L) \in \mathcal{A}$, for $i \le L$ we set $\|\mathcal{A}_i\|_{op} = \sup_{A \in \mathcal{A}} \|A_i\|_{op}$ and $\|\mathcal{A}_i\|_{2,1} = \sup_{A \in \mathcal{A}} \|A_i\|_{2,1}$. (4) Thus $\|\mathcal{A}_i\|_{op}$, $\|\mathcal{A}_i\|_{2,1}$ are the largest respective norms of the component $A_i$ in $\mathcal{A}$. With this notation, our first result is the following bound (up to logarithmic factors): $R_n(F) \le O\!\left( \frac{1}{\sqrt{n}} \, b^2 \left( \prod_{i=1}^{L} \rho_i \|\mathcal{A}_i\|_{op} \right)^{2} \left( \sum_{i=1}^{L} \frac{\|\mathcal{A}_i\|_{2,1}^{2/3}}{\|\mathcal{A}_i\|_{op}^{2/3}} \right)^{3/2} \right)$. (5) Here b is an upper bound on the inputs, $\|x_i\|_2, \|x'_i\|_2 \le b$ for $i \le n$. The full statement is given in Section 4, Theorem 1. This bound can be regarded as a natural extension of the bounds of Bartlett et al. (2017) to the metric learning setting. Indeed, it uses the same parameters – the $\|\mathcal{A}\|_{op}$ and $\|\mathcal{A}\|_{2,1}$ norms. The quantity $\prod_{i=1}^{L} \rho_i \|\mathcal{A}_i\|_{op}$ is the maximal Lipschitz constant of the mappings in G, and it will typically be the dominant term in (5). One of the appealing properties of (5), inherited from the bounds in Bartlett et al. (2017), is that it is dimension free, in the sense that the depth L and the layer widths $k_i$ do not enter the bound explicitly.
As noted above, for the case of classification, the bounds in Bartlett et al. (2017) are currently the strongest known uniform bounds. However, we now show that in the metric learning setting, one can improve the bound in some situations, by using the $\|A\|_{2,\infty} = \max_{i \le k} \|A_{\cdot i}\|_2$ norm at the last layer. In Theorem 2, Section 4, we show the following: $R_n(F) \le O\!\left( \frac{k}{\sqrt{n}} \, b^2 \left( \rho_L \|\mathcal{A}_L\|_{2,\infty} \prod_{i=1}^{L-1} \rho_i \|\mathcal{A}_i\|_{op} \right)^{2} \left( \sum_{i=1}^{L-1} \frac{\|\mathcal{A}_i\|_{2,1}^{2/3}}{\|\mathcal{A}_i\|_{op}^{2/3}} + 1 \right)^{3/2} \right)$. (6) We refer to (5) as the sparse bound, and to (6) as the non-sparse bound. To compare the bounds, and to see the relation to sparsity, let us first consider the simpler single layer setting, L = 1. In this case, the bounds read $R_n(F) \le O\!\left( \frac{b^2 \|\phi\|_{Lip}^2 \|\mathcal{A}\|_{op} \|\mathcal{A}\|_{2,1}}{\sqrt{n}} \right)$, (7) and $R_n(F) \le O\!\left( \frac{k b^2 \|\phi\|_{Lip}^2 \|\mathcal{A}\|_{2,\infty}^{2}}{\sqrt{n}} \right)$, (8) where $\mathcal{A}$ is the family of weight matrices, $A \in \mathbb{R}^{d \times k}$, φ an activation, and $\|\mathcal{A}\|_* = \sup_{A \in \mathcal{A}} \|A\|_*$ where $\|\cdot\|_*$ is one of $\|\cdot\|_{2,1}$, $\|\cdot\|_{2,\infty}$, $\|\cdot\|_{op}$. Consider first the case where $\mathcal{A}$ contains non-sparse matrices $A \in \mathcal{A}$. These are the weights A where all k neurons have roughly the same norm, or equivalently, $\|A\|_{2,1} \sim k \|A\|_{2,\infty}$. (9) We refer to this condition as the dense regime. Note that (9) holds true for neural networks at initialization, and in Section 5 we observe empirically that this also holds for SGD trained networks on MNIST data. Next, when (9) holds, we also have that $\|A\|_{op} \sim \sqrt{k} \|A\|_{2,\infty}$. (10) Indeed, one has $\|A\|_{op} \le \sqrt{k} \|A\|_{2,\infty}$ for any A, and (9) implies $\|A\|_{op} \ge const \cdot \sqrt{k/d} \, \|A\|_{2,\infty}$ (see Lemma 3 in supplementary material A). The condition (10) will also be experimentally verified for MNIST data. Now, substituting (9), (10) into (7) and (8), we obtain that in the non-sparse regime, (8) is stronger than (7) by a factor of $\sqrt{k}$, a significant improvement. On the other hand, when $\mathcal{A}$ is sparse, with most output neurons zeroed out (e.g., $\|A\|_{2,1} = \|A\|_{op} = \|A\|_{2,\infty}$), (7) will be much stronger.
Such sparse networks can be achieved by adding explicit sparsity regularization terms to the cost. This will be the case for networks trained on the 20newsgroups dataset. Finally, as evident from (5) and (6), for the multilayer case L > 1, the relation between the sparse and non-sparse bounds is more involved. However, the asymptotics of the dependence on k is similar: With all other $k_i$ fixed, (6) will be better by a factor of $\sqrt{k}$ for a non-sparse last layer $A_L$, and (5) will be better otherwise. We now discuss an additional direction in which the bounds may be improved: Note that the dependence of the bound on the coefficients of the weights in (5) and (6) is quadratic. As we show in Section 4, this is in general unavoidable, due to the quadratic form of the metric (1). However, if the highest activation $\phi_L$ is bounded, such as for instance the sigmoid $\phi(x) = (1 + e^{-x})^{-1}$, then one can obtain a linearly homogeneous dependence on the layer weights, thus potentially significantly improving the bounds. These bounds are given in (18) and (21) in Theorems 1 and 2 respectively. For comparison, in the single layer case the sparse and non-sparse bounded-$\phi_L$ bounds are given by $R_n(F) \le O\!\left( \frac{\sqrt{k} \, b \, \|\phi\|_\infty \|\phi\|_{Lip} \|\mathcal{A}\|_{2,1}}{\sqrt{n}} \right)$ and $R_n(F) \le O\!\left( \frac{k \, b \, \|\phi\|_\infty \|\phi\|_{Lip} \|\mathcal{A}\|_{2,\infty}}{\sqrt{n}} \right)$, (11) where $\|\phi\|_\infty$ is the upper bound on the values of φ. | In this paper, the authors look at the Rademacher complexity of the family of Euclidean metrics learned on a data set via an $L$ layer network where the activation functions are Lipschitz. The idea behind the proof is to use the bounds for an $\epsilon$-net for the embedding network from Bartlett, Foster, and Telgarsky 2017. This net then provides a net for the space of metrics, requiring only a change in the constants. Using this net, the authors then use standard arguments to bound the Rademacher complexity. Specifically for the metric learning setting they show that this bound can be improved.
| SP:935ed98ae6074d793f311b963ea3944a8e2c6293 |
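As a concrete illustration of the setup in the row above, the sketch below builds a depth-2 embedding of the form (3) and the induced squared-Euclidean distance function (1). The dimensions, random weights, and function names are our placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k1, k = 5, 8, 3                 # input dim, hidden width k_1, embedding dim k
A1 = rng.normal(size=(d, k1))      # A_i in R^{k_{i-1} x k_i}, as in (3)
A2 = rng.normal(size=(k1, k))
relu = lambda z: np.maximum(z, 0)  # 1-Lipschitz activation

def g(x):
    """Depth-2 embedding g_A(x) = phi_2(A2^t phi_1(A1^t x)), form (3)."""
    return relu(A2.T @ relu(A1.T @ x))

def rho(x, xp):
    """Squared Euclidean metric after the embedding, eq. (1)."""
    diff = g(x) - g(xp)
    return float(diff @ diff)
```

As the paper's footnote notes, ρ is not strictly a metric (it is a squared distance), but it is symmetric, non-negative, and vanishes on identical inputs; the family F of (2) is obtained by ranging `g` over all weight tuples.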
Two Regimes of Generalization for Non-Linear Metric Learning | 1 INTRODUCTION. Metric Learning, Bellet et al. (2015), is the problem of finding a metric ρ on the space of features, such that ρ reflects some semantic properties of a given task. Generally, the input can be thought of as a set of labeled pairs $\{((x_i, x'_i), y_i)\}_{i=1}^n$, where $x_i, x'_i \in \mathbb{R}^d$ are the features, and $y_i$ is the label, indicating whether $x_i$ and $x'_i$ should be close in the metric or far apart. For instance, in face identification, Schroff et al. (2015), features $x_i$ and $x'_i$ corresponding to the same face should be close in ρ, while different faces should be far apart. Note that the above metric learning formulation is fairly general and one can convert supervised clustering, or even standard classification problems into metric learning simply by setting $y_i = 1$ if $x_i$ and $x'_i$ have the same original label and $y_i = 0$ otherwise (Davis et al., 2007; Weinberger & Saul, 2009; Cao et al., 2016; Khosla et al., 2020; Chicco, 2020). The metric ρ is typically assumed to be the Euclidean metric taken after a linear or non-linear embedding of the features. That is, we consider a parametric family G of embeddings into a k-dimensional space, $g: \mathbb{R}^d \to \mathbb{R}^k$, and set¹ $\rho(x, x') = \rho_g(x, x') = \|g(x) - g(x')\|_2^2$. (1) As an example, Figure 1 shows tSNE plots, Maaten & Hinton (2008), of the classical 20newsgroups dataset. Figure 1a was generated using the regular bag of words representation of the data (see Sections 5 and D for additional details) while Figure 1b was generated by applying tSNE to an embedding of the data (of the form (3) below) learned by minimizing a loss based on labels, as described above. Clearly, there is no discernible relation between the label and the metric in the raw representation, but there is a strong relation in the learned metric.
One can also obtain similar conclusions, and quantify them, by, for instance, replacing tSNE with spectral clustering. The uniform generalization problem for metric learning is the following: Given a family of embeddings G, provide bounds, that hold uniformly for all g ∈ G, on the difference between the expected value of the loss and the average value of the loss on the train set; see Section 2 for a formal definition. Such bounds guarantee, for instance, that the train set would not be overfitted. Given a family of embeddings G, consider the family F of functions $f: \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ of the form $F = \{ f(x, x') = \rho_g(x, x') \mid g \in G \}$. (2) (¹Note that ρ in (1) is not strictly a metric. Nevertheless, this terminology is common.) We refer to these as the distance functions, which map a pair of features into their distance after the embedding. Well known arguments imply that one can obtain generalization bounds for F by providing upper bounds on the Rademacher complexity $R_n(F)$ of the family of the scalar functions F. Therefore in what follows, we discuss directly the upper bounds on $R_n(F)$ for various families G. We refer to Section 2 for formal definitions and details on the relation between G, F, $R_n(F)$, and generalization bounds. 1.1 OVERVIEW OF THE RESULTS. In this paper our goal is to study generalization bounds for neural network type non-linear embeddings $g: \mathbb{R}^d \to \mathbb{R}^k$ given by neural networks of depth L ≥ 1. Specifically, we consider embeddings of the form $g(x) = g_A(x) = \phi_L(A_L^t \cdot \phi_{L-1}(A_{L-1}^t \cdots \phi_1(A_1^t x) \cdots))$, (3) where $A = (A_1, \dots, A_L)$ is a tuple of matrices such that $A_i \in \mathbb{R}^{k_{i-1} \times k_i}$ for $i = 1, \dots, L$, and $k_i$ are the layer widths, with $k_0 = d$ and $k_L = k$. The activations $\phi_i: \mathbb{R} \to \mathbb{R}$ are assumed to be Lipschitz, with $\|\phi_i\|_{Lip} \le \rho_i$, and act on vectors in $\mathbb{R}^{k_i}$ coordinatewise.
A family of matrix tuples will be generally denoted by $\mathcal{A}$, while the associated embeddings family (3) will be denoted by G and the associated family of distance functions, (2), will be denoted by F. In the context of deep learning generalization guarantees for classification, there exists a large body of work on various complexity bounds related to the family of mappings G. The current strongest uniform bounds were given in Bartlett et al. (2017) (see also Neyshabur et al. (2018)). Our goal here is to translate these bounds to bounds on the Rademacher complexity of the family F, derived from G. That is, we study the aspects of the problem that are specific to the metric learning setting. To state the results, we require a few norm definitions: For a matrix $A \in \mathbb{R}^{s \times t}$, $\|A\|_{op}$ is the spectral norm, and set $\|A\|_{2,1} = \sum_{i=1}^{t} \|A_{\cdot i}\|_2$, where $A_{\cdot i}$ is the i-th column of A. For a family $\mathcal{A}$ of matrix tuples, $A = (A_1, \dots, A_L) \in \mathcal{A}$, for $i \le L$ we set $\|\mathcal{A}_i\|_{op} = \sup_{A \in \mathcal{A}} \|A_i\|_{op}$ and $\|\mathcal{A}_i\|_{2,1} = \sup_{A \in \mathcal{A}} \|A_i\|_{2,1}$. (4) Thus $\|\mathcal{A}_i\|_{op}$, $\|\mathcal{A}_i\|_{2,1}$ are the largest respective norms of the component $A_i$ in $\mathcal{A}$. With this notation, our first result is the following bound (up to logarithmic factors): $R_n(F) \le O\!\left( \frac{1}{\sqrt{n}} \, b^2 \left( \prod_{i=1}^{L} \rho_i \|\mathcal{A}_i\|_{op} \right)^{2} \left( \sum_{i=1}^{L} \frac{\|\mathcal{A}_i\|_{2,1}^{2/3}}{\|\mathcal{A}_i\|_{op}^{2/3}} \right)^{3/2} \right)$. (5) Here b is an upper bound on the inputs, $\|x_i\|_2, \|x'_i\|_2 \le b$ for $i \le n$. The full statement is given in Section 4, Theorem 1. This bound can be regarded as a natural extension of the bounds of Bartlett et al. (2017) to the metric learning setting. Indeed, it uses the same parameters – the $\|\mathcal{A}\|_{op}$ and $\|\mathcal{A}\|_{2,1}$ norms. The quantity $\prod_{i=1}^{L} \rho_i \|\mathcal{A}_i\|_{op}$ is the maximal Lipschitz constant of the mappings in G, and it will typically be the dominant term in (5). One of the appealing properties of (5), inherited from the bounds in Bartlett et al. (2017), is that it is dimension free, in the sense that the depth L and the layer widths $k_i$ do not enter the bound explicitly.
As noted above, for the case of classification, the bounds in Bartlett et al. (2017) are currently the strongest known uniform bounds. However, we now show that in the metric learning setting, one can improve the bound in some situations, by using the $\|A\|_{2,\infty} = \max_{i \le k} \|A_{\cdot i}\|_2$ norm at the last layer. In Theorem 2, Section 4, we show the following: $R_n(F) \le O\!\left( \frac{k}{\sqrt{n}} \, b^2 \left( \rho_L \|\mathcal{A}_L\|_{2,\infty} \prod_{i=1}^{L-1} \rho_i \|\mathcal{A}_i\|_{op} \right)^{2} \left( \sum_{i=1}^{L-1} \frac{\|\mathcal{A}_i\|_{2,1}^{2/3}}{\|\mathcal{A}_i\|_{op}^{2/3}} + 1 \right)^{3/2} \right)$. (6) We refer to (5) as the sparse bound, and to (6) as the non-sparse bound. To compare the bounds, and to see the relation to sparsity, let us first consider the simpler single layer setting, L = 1. In this case, the bounds read $R_n(F) \le O\!\left( \frac{b^2 \|\phi\|_{Lip}^2 \|\mathcal{A}\|_{op} \|\mathcal{A}\|_{2,1}}{\sqrt{n}} \right)$, (7) and $R_n(F) \le O\!\left( \frac{k b^2 \|\phi\|_{Lip}^2 \|\mathcal{A}\|_{2,\infty}^{2}}{\sqrt{n}} \right)$, (8) where $\mathcal{A}$ is the family of weight matrices, $A \in \mathbb{R}^{d \times k}$, φ an activation, and $\|\mathcal{A}\|_* = \sup_{A \in \mathcal{A}} \|A\|_*$ where $\|\cdot\|_*$ is one of $\|\cdot\|_{2,1}$, $\|\cdot\|_{2,\infty}$, $\|\cdot\|_{op}$. Consider first the case where $\mathcal{A}$ contains non-sparse matrices $A \in \mathcal{A}$. These are the weights A where all k neurons have roughly the same norm, or equivalently, $\|A\|_{2,1} \sim k \|A\|_{2,\infty}$. (9) We refer to this condition as the dense regime. Note that (9) holds true for neural networks at initialization, and in Section 5 we observe empirically that this also holds for SGD trained networks on MNIST data. Next, when (9) holds, we also have that $\|A\|_{op} \sim \sqrt{k} \|A\|_{2,\infty}$. (10) Indeed, one has $\|A\|_{op} \le \sqrt{k} \|A\|_{2,\infty}$ for any A, and (9) implies $\|A\|_{op} \ge const \cdot \sqrt{k/d} \, \|A\|_{2,\infty}$ (see Lemma 3 in supplementary material A). The condition (10) will also be experimentally verified for MNIST data. Now, substituting (9), (10) into (7) and (8), we obtain that in the non-sparse regime, (8) is stronger than (7) by a factor of $\sqrt{k}$, a significant improvement. On the other hand, when $\mathcal{A}$ is sparse, with most output neurons zeroed out (e.g., $\|A\|_{2,1} = \|A\|_{op} = \|A\|_{2,\infty}$), (7) will be much stronger.
Such sparse networks can be achieved by adding explicit sparsity regularization terms to the cost. This will be the case for networks trained on the 20newsgroups dataset. Finally, as evident from (5) and (6), for the multilayer case L > 1, the relation between the sparse and non-sparse bounds is more involved. However, the asymptotics of the dependence on k is similar: With all other $k_i$ fixed, (6) will be better by a factor of $\sqrt{k}$ for a non-sparse last layer $A_L$, and (5) will be better otherwise. We now discuss an additional direction in which the bounds may be improved: Note that the dependence of the bound on the coefficients of the weights in (5) and (6) is quadratic. As we show in Section 4, this is in general unavoidable, due to the quadratic form of the metric (1). However, if the highest activation $\phi_L$ is bounded, such as for instance the sigmoid $\phi(x) = (1 + e^{-x})^{-1}$, then one can obtain a linearly homogeneous dependence on the layer weights, thus potentially significantly improving the bounds. These bounds are given in (18) and (21) in Theorems 1 and 2 respectively. For comparison, in the single layer case the sparse and non-sparse bounded-$\phi_L$ bounds are given by $R_n(F) \le O\!\left( \frac{\sqrt{k} \, b \, \|\phi\|_\infty \|\phi\|_{Lip} \|\mathcal{A}\|_{2,1}}{\sqrt{n}} \right)$ and $R_n(F) \le O\!\left( \frac{k \, b \, \|\phi\|_\infty \|\phi\|_{Lip} \|\mathcal{A}\|_{2,\infty}}{\sqrt{n}} \right)$, (11) where $\|\phi\|_\infty$ is the upper bound on the values of φ. | The paper provides two new generalization bounds for non-linear metric learning with deep neural networks, by extending results of Bartlett et al. 2017 to the metric learning setting. The two bounds have been called the 'sparse' and 'non-sparse' bounds and differ in the norm used for the last layer. Experiments are performed where it is shown that either bound may dominate on real datasets. | SP:935ed98ae6074d793f311b963ea3944a8e2c6293 |
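The rows above compare bounds through the norms $\|A\|_{op}$, $\|A\|_{2,1}$, and $\|A\|_{2,\infty}$ and the dense-regime relations (9)-(10). The numpy sketch below computes the three norms and checks the general inequalities between them on a random Gaussian weight matrix; the matrix sizes and scaling are our illustrative choices, and the concentration of the dense-regime ratio near 1 is an empirical observation for this initialization, not a theorem.

```python
import numpy as np

def op_norm(A):
    """Spectral norm ||A||_op (largest singular value)."""
    return np.linalg.norm(A, 2)

def norm_2_1(A):
    """||A||_{2,1}: sum of the L2 norms of the columns."""
    return np.linalg.norm(A, axis=0).sum()

def norm_2_inf(A):
    """||A||_{2,inf}: largest column L2 norm."""
    return np.linalg.norm(A, axis=0).max()

rng = np.random.default_rng(1)
d, k = 200, 50
A = rng.normal(size=(d, k)) / np.sqrt(d)   # dense, initialization-like weights

# Inequalities that hold for every matrix:
assert op_norm(A) <= np.sqrt(k) * norm_2_inf(A) + 1e-9
assert norm_2_1(A) <= k * norm_2_inf(A) + 1e-9

# Dense regime (9): all columns have comparable norms,
# so ||A||_{2,1} is close to k * ||A||_{2,inf} (ratio near 1).
dense_ratio = norm_2_1(A) / (k * norm_2_inf(A))
```

A sparse matrix (all but one column zero) would instead give $\|A\|_{2,1} = \|A\|_{op} = \|A\|_{2,\infty}$ and a dense ratio of $1/k$, which is the regime where the sparse bound (7) wins.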
Understanding the robustness-accuracy tradeoff by rethinking robust fairness | 1 INTRODUCTION . 1.1 BACKGROUND . Deep neural networks ( DNNs ) have been proven to be vulnerable to the adversarial attacks , as demonstrated in ( Szegedy ; Goodfellow et al . ; Kurakin et al . ; Carlini & Wagner ) . By adding crafted imperceptible perturbations to the input , attackers can easily fool the model to give an incorrect prediction . To defend against adversarial attacks , tens of methods have been proposed , but most of them later proved to be ineffective ( Athalye et al. , 2018 ) . Among these many defense techniques , adversarial training ( AT ) ( Madry et al. , 2017 ) has been proven to be the most effective strategy against adversarial attacks . Although current AT algorithms can effectively improve model robustness , there are two confusing phenomena in AT models . First , there can be an inevitable robustness-accuracy tradeoff ( Tsipras et al. , 2018 ) in AT models in which increasing robustness is always accompanied by an accuracy drop . Second , recently Xu et al . ( 2021 ) found that AT tends to introduce severe disparities in accuracy and robustness between different classes . For example , as shown in Figure 1b , in a PGD-AT model ( Madry et al. , 2017 ) , both the accuracy and robustness of the 3rd class cat are much lower than those of the 1st class car , while the two classes have similar accuracies in the standard training model ( see Figure 1a ) . This phenomenon is defined as the robust fairness according to the authors . Additionally , as Xu et al . ( 2021 ) mentioned , the robust fairness problem is closely related to the robustness-accuracy tradeoff , because the average accuracy drop in the robustness-accuracy tradeoff could mainly come from the classes that are hard to classify in AT . To verify this , we have measured the accuracy drop for each class , and calculated their percentage in the total accuracy drop , as shown in Figure 1c . 
We can see that just two classes (cat and bird) contribute almost half of the accuracy drop, while these two classes also have lower accuracy and robustness than the other classes in AT. That is, these hard-to-classify classes have a significantly greater impact on the decline in accuracy, and to better understand the robustness-accuracy tradeoff, it should be determined why these classes are so difficult to classify in AT. To explain the phenomenon, Xu et al. (2021) argued that some classes are difficult to classify in AT because they are intrinsically "harder" to classify than other classes, and AT tends to hurt both the accuracy and robustness of these "hard" classes. To verify this point of view, these authors studied the effect of AT on a binary classification task under a mixture Gaussian distribution, and the "hard" class is the one with larger variance in their case. They showed that AT will push the decision boundary closer to the larger-variance class and further worsen both the accuracy and robustness of that class. However, although they showed that the class with a larger variance is more difficult to classify, there still remains a question; that is, is variance enough to describe how "hard" a class is? Imagine two Gaussian distributions that both have high variance but also an extremely large difference in mean values; they can still be well classified. On the contrary, when the two Gaussian distributions both have low variance but extremely similar mean values, which makes the two distributions severely overlap, we cannot classify them satisfactorily. That is, the inter-class similarity is also an important factor affecting the model's accuracy. With this point in mind, we have measured both inter-class similarity and intra-class variance in standard training, PGD-AT and TRADES models (Zhang et al.
, 2019) for each class in the CIFAR10 test set, as shown in Figure 2. The measurement is performed in the penultimate-layer feature space. For each class, we use the variance of the features as the class's intra-class variance. To measure the inter-class similarity of each class, we first calculate the feature mean-value vectors for all classes, and then the cosine similarity between the mean-value vectors of the measured class and the other classes. The largest cosine similarity is used as the inter-class similarity of the measured class in this paper. It is somewhat surprising to see in Figure 2b that both the PGD-AT and TRADES models have a lower variance than the standard training model, while they have worse accuracy instead. However, as shown in Figure 2a, both PGD-AT and TRADES lead to a higher inter-class similarity than standard training. In particular, we notice that the "hardest" class cat does not have the largest variance in the PGD-AT, TRADES or standard training models, but does have the largest inter-class similarity. These observations challenge Xu et al. (2021)'s theory that the "hard" classes are the large-variance classes, indicate that inter-class similarity does matter in AT, and thus motivated us to study both the robust fairness phenomenon and the robustness-accuracy tradeoff through the lens of increased inter-class similarity. 1.2 OUR CONTRIBUTIONS. Understand the robustness-accuracy tradeoff & robust fairness. To the best of our knowledge, we are the first to study the relationship between the robustness-accuracy tradeoff and robust fairness, and we find that the two phenomena could both come from the increased inter-class similarity caused by AT.
More specifically, through our single AT and binary classification AT experiments in Section 2, we find that: • AT causes a general increase in inter-class similarity for each class, which can even cause a feature overlap, and finally leads to the accuracy drop in the tradeoff. • The "hard" classes in AT are actually similar classes in standard training, and the increased inter-class similarity in AT makes them more similar and harder to classify, which causes the robust fairness problem. Re-investigate the effect of smoothing regularizers in AT. Label smoothing (LS) (Szegedy et al., 2016) has been used as a trick to benchmark robustness in AT by Pang et al. (2020); however, we noticed that LS not only helps improve the robustness but usually improves the accuracy too, which means a reduction in the robustness-accuracy tradeoff. In this paper, we find that LS helps alleviate the tradeoff because it reduces the large inter-class similarity in AT, and also provides a lower intra-class variance. Then, we investigate the effect of maximum entropy (ME) (Pereyra et al., 2017), which is also a classic smoothing regularizer, and we find that ME can help reduce both inter-class similarity and intra-class variance too. In addition, we find that the state-of-the-art AT method TRADES can be seen as a special maximum entropy learning, which could explain why the TRADES model has a lower intra-class variance than the PGD-AT model in Figure 2, and usually performs better than PGD-AT in terms of robustness. We propose maximum entropy PGD-AT (ME-AT) and maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both the tradeoff and robust fairness. 2 RETHINKING ROBUST FAIRNESS TOWARD INTER-CLASS SIMILARITY. In Figure 2a, we have shown that AT models have higher inter-class similarity than the standard training model.
In this section, we design two experiments to see how the high inter-class similarity in AT is related to both the robustness-accuracy tradeoff and robust fairness phenomena. 2.1 SINGLE ADVERSARIAL TRAINING. AT causes a feature overlap. We design single adversarial training (single AT) to see how AT affects one single class. In single AT, we only conduct adversarial training on one class while training the other classes normally. For better visualization, we adjust the penultimate layer of a ResNet-18 model to output 2-D features. In Figure 3, we show the single AT results of the two most representative classes: the "hardest" class cat and the "easiest" class car. The results of other classes and detailed settings are both provided in Appendix A.1. In Figure 3b, when the 3rd class cat is adversarially trained alone, the features of the cat severely overlap with those of the 5th class dog, and the overlapping features make the class cat almost impossible to classify (only 7.03 natural accuracy and 0 PGD-10 robustness). This observation intuitively shows how the inter-class similarity increases in AT, and proves that the accuracy-drop part of the robustness-accuracy tradeoff could come from the increased inter-class similarity (the overlapping features). The increase in inter-class similarity is general in AT. However, when single AT is carried out for the 1st class car, the features of the class car can still be split well from other classes, and both the accuracy and PGD-10 robustness of the class car achieve a high level (98.4 and 72.2 respectively, see Figure 3a). Does this mean that the "easy" classes can avoid an increase in inter-class similarity in AT? To check this, we measure the inter-class similarity in the single AT models, and for comparison, we also measure the inter-class similarity of a standard-training 2-D-features ResNet-18 model.
As shown in Figure 4, each point in the blue line represents the inter-class similarity of the corresponding class in its single AT model (e.g., the point for the class car in the blue line represents the inter-class similarity of the class car in the single-car AT model), and the yellow line is the inter-class similarity of the standard training model. We can see that even the "easiest" class car in its single AT model has a higher inter-class similarity than in the standard training model. This observation shows that the increase in inter-class similarity in AT is general across all classes. 2.2 BINARY CLASSIFICATION ADVERSARIAL TRAINING. "Hard" classes or similar classes? Since the increase in inter-class similarity is general for all classes, we assume that some classes are difficult to classify in AT possibly because they are already similar in standard training, and the increased inter-class similarity caused by AT makes them more similar so that they become the "hard" classes. To verify this assumption, we conduct the binary classification AT experiments. We set the class cat to be binary-classified against each of the other classes in the CIFAR10 dataset, and we use both PGD-AT (Madry et al., 2017) and TRADES (Zhang et al., 2019) to train our binary classification ResNet-18 models (512-D features here). We plot the natural error and PGD-10 error of the PGD-AT and TRADES trained binary classification models in Figure 5a and Figure 5b respectively. Classes on the horizontal axis represent the classes that are binary-classified against the class cat, and are sorted from small to large by their similarity with the cat in standard training. We find that both the natural error and PGD-10 error of the binary classification PGD-AT and TRADES models are highly positively correlated with the similarity in standard training.
For example, the class car is the least similar class to the cat in standard training; when cat is binary-classified against car, the model attains both low natural error and low PGD-10 error (4.6 and 11.0). However, when cat is binary-classified against its most similar class dog, the ResNet-18 model even fails to converge in PGD-AT (49.7 for both natural and PGD-10 error), and even though the model converges under TRADES, it still has the highest natural error and PGD-10 error (23.4 and 44.0). This observation indicates that the "hard" classes in AT could actually be the similar classes in standard training. | This paper identifies how adversarial training (AT) algorithms for robustness may negatively affect the notion of robust fairness and proposes two methods, ME-AT and ME-TRADES, which combine existing AT methods with a maximum entropy (ME) term, to improve the accuracy-robustness tradeoff and robust fairness. Although AT algorithms improve model robustness, they also increase inter-class similarities, which make certain classes more difficult to classify, leading to unfair accuracies. The paper then shows that label smoothing (LS) mitigates this effect and in particular investigates the ME technique. The authors show that a method called TRADES outperforms another method called PGD-AT because it is a special version of ME. Experiments show that combining ME with these AT methods outperforms PGD-AT. | SP:e85499f1a98415c4392e71b1b5702cd89a35bd12 |
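The row above measures inter-class similarity as the largest cosine similarity between a class's feature-mean vector and those of the other classes, and intra-class variance as the variance of the class's features. The numpy sketch below implements that measurement; the function name and synthetic inputs are ours, while the paper applies the same computation to penultimate-layer ResNet-18 features on CIFAR10.

```python
import numpy as np

def class_stats(feats, labels):
    """Per-class inter-class similarity and intra-class variance.

    Inter-class similarity of class c = largest cosine similarity between
    the feature-mean vector of c and the mean vector of any other class.
    Intra-class variance of class c  = total variance of c's features.
    """
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    unit = {c: m / (np.linalg.norm(m) + 1e-12) for c, m in means.items()}
    sim, var = {}, {}
    for c in classes:
        sim[c] = max(float(unit[c] @ unit[o]) for o in classes if o != c)
        var[c] = float(feats[labels == c].var(axis=0).sum())
    return sim, var
```

On the paper's data, a "hard" class such as cat would show a large `sim` value (driven by its similarity to dog) without necessarily having the largest `var`, which is exactly the observation motivating Section 2.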
Understanding the robustness-accuracy tradeoff by rethinking robust fairness | 1 INTRODUCTION. 1.1 BACKGROUND. Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks (Szegedy; Goodfellow et al.; Kurakin et al.; Carlini & Wagner). By adding crafted, imperceptible perturbations to the input, attackers can easily fool a model into giving an incorrect prediction. Dozens of defenses against adversarial attacks have been proposed, but most of them were later shown to be ineffective (Athalye et al., 2018). Among these defense techniques, adversarial training (AT) (Madry et al., 2017) has proven to be the most effective strategy against adversarial attacks. Although current AT algorithms can effectively improve model robustness, two puzzling phenomena appear in AT models. First, there is a seemingly inevitable robustness-accuracy tradeoff (Tsipras et al., 2018) in AT models, in which increasing robustness is always accompanied by a drop in accuracy. Second, Xu et al. (2021) recently found that AT tends to introduce severe disparities in accuracy and robustness between different classes. For example, as shown in Figure 1b, in a PGD-AT model (Madry et al., 2017), both the accuracy and robustness of the 3rd class, cat, are much lower than those of the 1st class, car, while the two classes have similar accuracies under standard training (see Figure 1a). The authors refer to this phenomenon as robust fairness. Additionally, as Xu et al. (2021) note, the robust fairness problem is closely related to the robustness-accuracy tradeoff, because the average accuracy drop in the tradeoff may come mainly from the classes that are hard to classify in AT. To verify this, we measured the accuracy drop for each class and calculated each class's share of the total accuracy drop, as shown in Figure 1c.
We can see that just two classes (cat and bird) contribute almost half of the total accuracy drop, and these two classes also have the lowest accuracy and robustness of all classes in AT. That is, these hard-to-classify classes have a disproportionately large impact on the decline in accuracy, so to better understand the robustness-accuracy tradeoff, we should determine why these classes are so difficult to classify in AT. To explain the phenomenon, Xu et al. (2021) argued that some classes are difficult to classify in AT because they are intrinsically “harder” to classify than other classes, and that AT tends to hurt both the accuracy and the robustness of these “hard” classes. To verify this point of view, the authors studied the effect of AT on a binary classification task under a Gaussian mixture distribution, where the “hard” class is the one with the larger variance. They showed that AT pushes the decision boundary closer to the larger-variance class, further worsening both the accuracy and the robustness of that class. However, although they showed that the class with larger variance is more difficult to classify, a question remains: is variance enough to describe how “hard” a class is? Imagine two Gaussian distributions that both have high variance but whose means differ greatly; they can still be classified well. Conversely, if the two Gaussians both have low variance but nearly identical means, so that the two distributions overlap severely, we cannot classify them satisfactorily. That is, inter-class similarity is also an important factor affecting a model's accuracy. With this point in mind, we measured both the inter-class similarity and the intra-class variance in standard training, PGD-AT, and TRADES models (Zhang et al.
, 2019) for each class in the CIFAR10 test set, as shown in Figure 2. The measurement is performed in the penultimate-layer feature space. For each class, we use the variance of its features as the class's intra-class variance. To measure the inter-class similarity of each class, we first calculate the feature mean vectors for all classes and then compute the cosine similarity between the mean vector of the measured class and those of the other classes. The largest of these cosine similarities is used as the inter-class similarity of the measured class throughout this paper. It is somewhat surprising that both the PGD-AT and TRADES models have lower variance than the standard training model in Figure 2b, yet worse accuracy. However, as shown in Figure 2a, both PGD-AT and TRADES lead to a higher inter-class similarity than standard training. In particular, we notice that the “hardest” class, cat, does not have the largest variance in PGD-AT, TRADES, or the standard training model, but it does have the largest inter-class similarity. These observations challenge Xu et al. (2021)'s theory that the “hard” classes are the large-variance classes, indicate that inter-class similarity does matter in AT, and motivated us to study both the robust fairness phenomenon and the robustness-accuracy tradeoff from the perspective of increased inter-class similarity. 1.2 OUR CONTRIBUTIONS. Understand the robustness-accuracy tradeoff & robust fairness. To the best of our knowledge, we are the first to study the relationship between the robustness-accuracy tradeoff and robust fairness, and we find that both phenomena could stem from the increased inter-class similarity caused by AT.
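The similarity and variance measurement described above can be sketched in numpy. Function and variable names are ours; in practice `features` would be the penultimate-layer features of the trained model on the test set:

```python
import numpy as np

def class_statistics(features, labels, num_classes):
    """Per-class metrics as described above: intra-class variance (total
    variance of a class's features) and inter-class similarity (largest
    cosine similarity between the class's feature mean vector and any
    other class's feature mean vector)."""
    means = np.stack([features[labels == c].mean(axis=0)
                      for c in range(num_classes)])           # (C, D) class means
    intra_var = np.array([features[labels == c].var(axis=0).sum()
                          for c in range(num_classes)])       # (C,) variances
    unit = means / np.linalg.norm(means, axis=1, keepdims=True)
    cos = unit @ unit.T                                       # pairwise cosine sims
    np.fill_diagonal(cos, -np.inf)                            # exclude self-similarity
    inter_sim = cos.max(axis=1)                               # largest sim per class
    return intra_var, inter_sim
```

For CIFAR10 one would extract 512-D (or 2-D, for the visualization experiments) features for the test set and call `class_statistics(feats, labels, 10)`.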
More specifically, through our single AT and binary classification AT experiments in Section 2, we find that: • AT causes a general increase in inter-class similarity for every class, which can even cause features to overlap and ultimately leads to the accuracy drop in the tradeoff. • The “hard” classes in AT are actually classes that are already similar in standard training; the increased inter-class similarity in AT makes them more similar and harder to classify, which causes the robust fairness problem. Re-investigate the effect of smoothing regularizers in AT. Label smoothing (LS) (Szegedy et al., 2016) has been used as a trick to benchmark robustness in AT by Pang et al. (2020); however, we notice that LS not only helps improve robustness but usually improves accuracy too, which amounts to a reduction of the robustness-accuracy tradeoff. In this paper, we find that LS helps alleviate the tradeoff because it reduces the large inter-class similarity in AT and also yields lower intra-class variance. We then investigate the effect of maximum entropy (ME) regularization (Pereyra et al., 2017), another classic smoothing regularizer, and find that ME also helps reduce both inter-class similarity and intra-class variance. In addition, we find that the state-of-the-art AT method TRADES can be seen as a special case of maximum entropy learning, which could explain why the TRADES model has lower intra-class variance than the PGD-AT model in Figure 2 and usually performs better than PGD-AT in terms of robustness. We propose maximum entropy PGD-AT (ME-AT) and maximum entropy TRADES (ME-TRADES), and experimental results show that our methods can significantly mitigate both the tradeoff and the robust fairness problem. 2 RETHINKING ROBUST FAIRNESS TOWARD INTER-CLASS SIMILARITY. In Figure 2a, we showed that AT models have higher inter-class similarity than the standard training model.
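The maximum-entropy regularizer mentioned above can be sketched in numpy. This is our reading of the confidence penalty of Pereyra et al. (2017) — cross-entropy minus a multiple of the predictive entropy — and the coefficient name `lam` is ours, not a value from the paper:

```python
import numpy as np

def me_loss(logits, labels, lam=0.5):
    """Cross-entropy minus lam times the entropy of the predictive
    distribution. With lam > 0, confident (low-entropy) predictions
    are penalized, which smooths the output distribution."""
    z = logits - logits.max(axis=1, keepdims=True)      # stabilized softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    return ce - lam * entropy
```

Since the entropy term is non-negative, increasing `lam` lowers the loss value for the same logits; what matters in training is that its gradient pushes the softmax outputs toward higher entropy.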
In this section, we design two experiments to see how the high inter-class similarity in AT relates to both the robustness-accuracy tradeoff and the robust fairness phenomenon. 2.1 SINGLE ADVERSARIAL TRAINING. AT causes a feature overlap. We design single adversarial training (single AT) to see how AT affects a single class. In single AT, we conduct adversarial training on only one class while training the other classes normally. For better visualization, we adjust the penultimate layer of a ResNet-18 model to output 2-D features. In Figure 3, we show the single AT results for the two most representative classes: the “hardest” class, cat, and the “easiest” class, car. The results for the other classes and the detailed settings are provided in Appendix A.1. In Figure 3b, when the 3rd class, cat, is singled out for adversarial training, its features severely overlap with those of the 5th class, dog, and the overlapping features make cat almost impossible to classify (only 7.03 natural accuracy and 0 PGD-10 robustness). This observation intuitively shows how inter-class similarity increases in AT and demonstrates that the accuracy-drop part of the robustness-accuracy tradeoff can come from the increased inter-class similarity (the overlapping features). The increase in inter-class similarity is general in AT. However, when single AT is carried out on the 1st class, car, its features can still be separated well from the other classes, and both the accuracy and the PGD-10 robustness of car reach a high level (98.4 and 72.2, respectively; see Figure 3a). Does this mean that the “easy” classes can avoid an increase in inter-class similarity in AT? To check this, we measure the inter-class similarity in the single AT models and, for comparison, also measure the inter-class similarity of a standard-training ResNet-18 model with 2-D features.
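A minimal sketch of how a single-AT batch could be assembled, under our reading of the setup: only one class's samples are replaced by adversarial versions, and everything else is trained on clean inputs. Here `attack` is a stand-in for a real adversarial attack such as PGD, and all names are ours:

```python
import numpy as np

def single_at_batch(x, y, target_class, attack):
    """Single AT batch construction: replace only the samples of
    `target_class` with adversarial versions produced by `attack`
    (any function x -> x_adv); leave the other samples clean."""
    x_out = x.copy()
    mask = (y == target_class)
    if mask.any():
        x_out[mask] = attack(x[mask])
    return x_out
```

During training, the loss would then be computed on `x_out` with the original labels `y`, so only the chosen class is adversarially trained.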
As shown in Figure 4, each point on the blue line gives the inter-class similarity of that class in its corresponding single AT model (e.g., the point for the class car is the inter-class similarity of car in the single-car AT model), while the yellow line gives the inter-class similarity of each class in the standard training model. Even the “easiest” class, car, has a higher inter-class similarity in the single AT model than in the standard training model. This observation shows that the increase in inter-class similarity caused by AT is general across all classes. 2.2 BINARY CLASSIFICATION ADVERSARIAL TRAINING. “Hard” classes or similar classes? Since the increase in inter-class similarity is general across all classes, we hypothesize that some classes are difficult to classify in AT because they are already similar in standard training, and the further increase in inter-class similarity caused by AT makes them even more similar, turning them into the “hard” classes. To verify this hypothesis, we conduct binary classification AT experiments. We pair the class cat with each of the other classes in the CIFAR10 dataset for binary classification, and use both PGD-AT (Madry et al., 2017) and TRADES (Zhang et al., 2019) to train binary classification ResNet-18 models (with 512-D features here). We plot the natural error and PGD-10 error of the PGD-AT- and TRADES-trained binary classification models in Figure 5a and Figure 5b, respectively. The classes on the horizontal axis are those paired with cat, sorted in ascending order of their similarity to cat in standard training. We find that both the natural error and the PGD-10 error of the binary classification PGD-AT and TRADES models are highly positively correlated with this similarity in standard training.
For example, the class car is the least similar to cat in standard training; when cat is binary classified against car, the model achieves both low natural error and low PGD-10 error (4.6 and 11.0). However, when cat is binary classified against its most similar class, dog, the ResNet-18 model even fails to converge in PGD-AT (49.7 for both natural and PGD-10 error), and although the model does converge in TRADES, it still has the highest natural error and PGD-10 error (23.4 and 44.0). This observation indicates that the “hard” classes in AT may actually be the classes that are already similar in standard training. | This paper investigates inter-class similarity and intra-class variance, and corroborates that AT will cause an increase in inter-class similarity, which could be the root of both the robustness-accuracy tradeoff and robust fairness phenomena. The authors first consider Label Smoothing (LS) as the regularizer and conclude that LS causes a loss of information in the logits, weakening the discriminative power of the trained models. The authors then confirm that ME can help reduce the excessive inter-class similarity in robust models and also provides lower intra-class variance, remedying previous AT methods. Experiments partially support the conclusions of the paper. | SP:e85499f1a98415c4392e71b1b5702cd89a35bd12 |
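Returning to the Gaussian intuition from the introduction — separability depends on the gap between class means relative to their variance, not on variance alone — this can be made concrete with the Bayes error of two equal-variance 1-D Gaussians under equal priors. This is a standard textbook result, not a computation from the paper:

```python
import math

def bayes_error(mu1, mu2, sigma):
    """Bayes error of two equal-variance 1-D Gaussians with equal priors:
    Phi(-|mu1 - mu2| / (2*sigma)), with Phi the standard normal CDF.
    The error depends only on the mean gap relative to sigma."""
    z = abs(mu1 - mu2) / (2.0 * sigma)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(bayes_error(0.0, 10.0, 2.0))   # large gap, high variance: tiny error
print(bayes_error(0.0, 0.5, 0.5))    # small gap, low variance: large error
```

Here a high-variance pair with a large mean gap is far easier to separate than a low-variance pair with a small gap, matching the argument that variance alone does not determine how “hard” a class is.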
Transformed CNNs: recasting pre-trained convolutional layers with self-attention | INTRODUCTION. Since the success of AlexNet in 2012 (Krizhevsky et al., 2017), the field of Computer Vision has been dominated by Convolutional Neural Networks (CNNs) (LeCun et al., 1998; 1989). Their local receptive fields give them a strong inductive bias to exploit the spatial structure of natural images (Scherer et al., 2010; Schmidhuber, 2015; Goodfellow et al., 2016), while allowing them to scale to large resolutions seamlessly. Yet this inductive bias limits their ability to capture long-range interactions. In this regard, self-attention (SA) layers, originally introduced in language models (Bahdanau et al., 2014; Vaswani et al., 2017; Devlin et al., 2018), have gained interest as a building block for vision (Ramachandran et al., 2019; Zhao et al., 2020). Recently, they gave rise to a plethora of Vision Transformer (ViT) models, able to compete with state-of-the-art CNNs in various tasks (Dosovitskiy et al., 2020; Touvron et al., 2020; Wu et al., 2020; Touvron et al., 2021; Liu et al., 2021; Heo et al., 2021) while demonstrating better robustness (Bhojanapalli et al., 2021; Mao et al., 2021). However, capturing long-range dependencies necessarily comes at the cost of quadratic complexity in input size, a computational burden which many recent directions have tried to alleviate (Bello, 2021; Wang et al., 2020; Choromanski et al., 2020; Katharopoulos et al., 2020). Additionally, ViTs are generally harder to train (Zhang et al., 2019; Liu et al., 2020) and require vast amounts of pre-training (Dosovitskiy et al., 2020) or distillation from a convolutional teacher (Hinton et al., 2015; Jiang et al., 2021; Graham et al., 2021) to match the performance of CNNs.
Faced with the dilemma between efficient CNNs and powerful ViTs, several approaches have aimed to bridge the gap between these architectures. On one side, hybrid models append SA layers onto convolutional backbones (Chen et al., 2018; Bello et al., 2019; Graham et al., 2021; Chen et al., 2021; Srinivas et al., 2021), and have already fueled successful results in a variety of tasks (Carion et al., 2020; Hu et al., 2018; Chen et al., 2020; Locatello et al., 2020; Sun et al., 2019). Conversely, a line of research has studied the benefit of introducing convolutional biases in Transformer architectures to ease learning (d'Ascoli et al., 2021; Wu et al., 2021; Yuan et al., 2021). Despite these interesting compromises, modelling long-range dependencies at low computational cost remains a challenge for practitioners. Contributions. At a time when pre-training on vast datasets has become common practice, we ask the following question: does one need to train the SA layers during the whole learning process? Could one instead learn cheap components such as convolutions first, leaving the SA layers to be learnt at the end? In this paper, we take a step in this direction by presenting a method to fully reparameterize any pre-trained convolutional layer as a Gated Positional Self-Attention (GPSA) layer (d'Ascoli et al., 2021). The latter is initialized to reproduce the mapping of the convolutional layer, but is then encouraged to learn more general mappings, not accessible to the CNN, by adjusting positional gating parameters. We leverage this method to reparameterize pre-trained CNNs as functionally equivalent hybrid models. After only 50 epochs of fine-tuning, the resulting Transformed CNNs (T-CNNs) boast significant performance and robustness improvements, as shown in Fig. 1, demonstrating the practical relevance of our method.
We analyze the inner workings of the T-CNNs and show that they learn more robust representations by combining convolutional heads and attentional heads in a complementary way. Related work. Our work mainly builds on two pillars. First, the idea that SA layers can express any convolution, introduced by Cordonnier et al. (2019). This idea was recently leveraged by d'Ascoli et al. (2021), who initialize the SA layers of end-to-end Transformers as random convolutions to imbue them with a local inductive bias and improve their sample efficiency. Our approach leverages the opposite idea: giving an end-to-end CNN the freedom to escape locality by learning self-attention at late times. Second, we exploit the following learning paradigm: train a simple and fast model, then reparameterize it as a more complex model for the final stages of learning. This approach was studied from a scientific point of view in d'Ascoli et al. (2019), which shows that reparameterizing a CNN as a fully-connected network (FCN) halfway through training can lead the FCN to outperform the CNN. Yet the practical relevance of this method is limited by the vast increase in the number of parameters the FCN requires to functionally represent the CNN. In contrast, our reparameterization hardly increases the parameter count of the CNN, making it easily applicable to any state-of-the-art CNN. Note that these reparameterization methods can be viewed as an informed version of dynamic architecture-growing algorithms such as AutoGrow (Wen et al., 2020). In the context of hybrid models, various works have studied the performance gains obtained by introducing MHSA layers in ResNets with minimal architectural changes (Srinivas et al., 2021; Graham et al., 2021; Chen et al., 2021). However, the MHSA layers used in these works are initialized randomly and need to be trained from scratch.
Our approach is different, as it makes use of GPSA layers, which can be initialized to represent the same function as the convolutional layers they replace. We emphasize that the novelty of our work is not in the architectures used, but in the unusual way they are blended together. 1 BACKGROUND. Multi-head self-attention. The SA mechanism is based on a trainable associative memory with (key, query) vector pairs. To extract the semantic interdependencies between the L elements of a sequence X ∈ R^{L×D_in}, a sequence of “query” embeddings Q = X W_qry ∈ R^{L×D_h} is matched against another sequence of “key” embeddings K = X W_key ∈ R^{L×D_h} using inner products. The result is an attention matrix whose entry (ij) quantifies how semantically relevant Q_i is to K_j: A = softmax(Q K^⊤ / √D_h) ∈ R^{L×L}. (1) Multi-head SA layers use several SA heads in parallel to allow the learning of different kinds of dependencies: MSA(X) := Σ_{h=1}^{N_h} SA_h(X) W^h_out, SA_h(X) := A^h X W^h_val, (2) where W^h_val ∈ R^{D_in×D_v} and W^h_out ∈ R^{D_v×D_out} are two learnable projections. To incorporate positional information, ViTs usually add absolute position information to the input at embedding time, before propagating it through the SA layers. Another possibility is to replace the vanilla SA with positional SA (PSA) by including a position-dependent term in the softmax (Ramachandran et al., 2019; Shaw et al., 2018). Although there are several ways to parametrize the positional attention, we use encodings r_ij of the relative position of pixels i and j as in Cordonnier et al. (2019): A^h_ij := softmax(Q^h_i K^{h⊤}_j + v^{h⊤}_pos r_ij). (3) Each attention head learns an embedding v^h_pos ∈ R^{D_pos}, and the relative positional encodings r_ij ∈ R^{D_pos} depend only on the distance between pixels i and j, denoted as a two-dimensional vector δ_ij. Self-attention as a generalized convolution. Cordonnier et al. (2019) show that a multi-head PSA layer (Eq.
3) with N_h heads and dimension D_pos ≥ 3 can express any convolutional layer of filter size √N_h, with D_in input channels and min(D_v, D_out) output channels, by setting: v^h_pos := −α_h (1, −2Δ^h_1, −2Δ^h_2, 0, …, 0), r_δ := (‖δ‖², δ_1, δ_2, 0, …, 0), W_qry = W_key := 0. (4) In the above, the center of attention Δ^h ∈ R² is the position to which head h pays most attention, relative to the query pixel, whereas the locality strength α_h > 0 determines how focused the attention is around its center Δ^h. When α_h is large, the attention is focused only on the pixel located at Δ^h; when α_h is small, the attention spreads out over a larger area. Thus, the PSA layer can achieve a convolutional attention map by setting the centers of attention Δ^h to each of the possible positional offsets of a √N_h × √N_h convolutional kernel, and sending the locality strengths α_h to some large value. 2 APPROACH. In this section, we introduce our method for mapping a convolutional layer to a functionally equivalent PSA layer with minimal increase in parameter count. To do this, we leverage the GPSA layers introduced in d'Ascoli et al. (2021). Loading the filters. We want each head h of the PSA layer to functionally mimic pixel h of a convolutional filter W_filter ∈ R^{N_h×D_in×D_out}, where we typically have D_out ≥ D_in. Rewriting the action of the MHSA operator in a more explicit form, we have MHSA(X) = Σ_{h=1}^{N_h} A^h X W^h_val W^h_out, with W^h := W^h_val W^h_out ∈ R^{D_in×D_out}. (5) In the convolutional configuration of Eq. 4, A^h X selects pixel h of X. Hence, we need to set W^h = W^h_filter. However, as a product of matrices, the rank of W^h is bottlenecked by D_v. To avoid this being a limitation, we need D_v ≥ D_in (since D_out ≥ D_in). To achieve this with a minimal number of parameters, we choose D_v = D_in and simply set the following initialization: W^h_val = I, W^h_out = W^h_filter.
(6) Note that this differs from the usual choice made in SA layers, where D_v = ⌊D_in/N_h⌋. However, to keep the parameter count the same, we share the same W_val across the different heads h, since it plays a symmetric role at initialization. Note that this reparameterization introduces three additional matrices compared to the convolutional filter: W_qry, W_key, W_val, each containing D_in × D_in parameters. However, since the convolutional filter contains N_h × D_in × D_out parameters, where we typically have N_h = 9 and D_out ∈ {D_in, 2D_in}, these additional matrices are much smaller than the filters and hardly increase the parameter count. This can be seen from the model sizes in Tab. 3. Gated positional self-attention. Recent work (d'Ascoli et al., 2021) has highlighted an issue with standard PSA: the content and positional terms in Eq. 3 are potentially of very different magnitudes, in which case the softmax ignores the smaller of the two. This can typically lead the PSA to adopt a greedy attitude: choosing the form of attention (content or positional) that is easiest at a given time, then sticking to it. To avoid this, the authors suggest summing the content and positional terms after the softmax, with their relative importances governed by a learnable gating parameter λ_h (one for each attention head). The resulting Gated Positional Self-Attention (GPSA) layers are parametrized as follows: A^h_ij := (1 − σ(λ_h)) softmax(Q^h_i K^{h⊤}_j) + σ(λ_h) softmax(v^{h⊤}_pos r_ij), (7) where σ: x ↦ 1/(1+e^{−x}) is the sigmoid function. In the positional part, the encodings r_ij are fixed rather than learnt (see Eq. 4), which makes changing the input resolution straightforward (see SM C) and leaves only 3 learnable parameters per head: Δ_1, Δ_2 and α (see footnote 1). How convolutional should the initialization be?
The convolutional initialization of GPSA layers involves two parameters that determine how strictly convolutional the behavior is: the initial value of the locality strength α, which determines how focused each attention head is on its dedicated pixel, and the initial value of the gating parameters λ, which determines the importance of positional versus content information. If λ_h ≫ 0 and α ≫ 1, the T-CNN perfectly reproduces the input-output function of the CNN, but may want to greedily stay in the convolutional configuration. Conversely, if λ_h ≪ 0 and α ≪ 1, the T-CNN forgets the input-output function of the CNN. Hence, we choose α = 1 and λ = 1 to lie in between these two extremes, encouraging the T-CNNs to escape locality throughout training. Architectural details. To make our setup as canonical as possible, we focus on ResNet architectures (He et al., 2016), which contain 5 stages, with the spatial resolution halved and the number of channels doubled at each stage. Our method involves reparameterizing 3×3 convolutions as GPSA layers with 9 attention heads. However, global SA is too costly in the first layers, where the spatial resolution is large. We therefore only reparameterize the last stage of the architecture, while replacing the first stride-2 convolution by a stride-1 convolution, exactly as in Srinivas et al. (2021). We also add explicit padding layers to account for the padding of the original convolutions. Footnote 1: Since α represents the temperature of the softmax, its value must stay positive at all times. To ensure this, we instead learn a rectified parameter α̃ through the softplus function: α = (1/β) log(1 + e^{βα̃}), with β = 5. | This work proposes an approach to bridge CNNs and vision transformers for image recognition. The idea is to replace the last convolutional stage of a ResNet by a self-attention layer which is initialized from the weights of the convolutional stage.
The approach is shown to improve the performance of the CNN, especially in terms of robustness to adversarial examples, corruptions in the input images, and domain shift. | SP:be490ce3cc1184daa229cab9c9dbfede83f79469 |
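The components of the row above — the convolutional positional scores (Eq. 4), the post-softmax gating of GPSA (Eq. 7), and the softplus-rectified locality strength from the footnote — can be sketched together in numpy. This is a single-head toy on a 3×3 neighborhood of relative offsets, not the authors' implementation; all names are ours:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def rectified_alpha(alpha_tilde, beta=5.0):
    # Softplus reparameterization from the footnote: alpha stays positive.
    return np.log1p(np.exp(beta * alpha_tilde)) / beta

def conv_positional_scores(offsets, delta, alpha):
    # Eq. 4: v_pos = -alpha*(1, -2*d1, -2*d2) and r_d = (||d||^2, d1, d2),
    # so v_pos . r_d = -alpha*(||d - delta||^2 - ||delta||^2): peaked at d = delta.
    v_pos = -alpha * np.array([1.0, -2.0 * delta[0], -2.0 * delta[1]])
    r = np.array([[d[0] ** 2 + d[1] ** 2, d[0], d[1]] for d in offsets])
    return r @ v_pos

def gpsa_attention(content_scores, positional_scores, lam):
    # Eq. 7: blend the two *normalized* attentions with a sigmoid gate.
    g = 1.0 / (1.0 + np.exp(-lam))
    return (1.0 - g) * softmax(content_scores) + g * softmax(positional_scores)

# Toy check: one head attending over a 3x3 neighborhood of relative offsets.
offsets = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
alpha = rectified_alpha(2.0)                  # positive locality strength
pos = conv_positional_scores(offsets, (1, 0), alpha)
content = np.zeros(len(offsets))              # W_qry = W_key = 0 at init
attn = gpsa_attention(content, pos, lam=8.0)  # gate near 1: positional dominates
print(offsets[int(np.argmax(attn))])          # most-attended relative offset
```

With W_qry = W_key = 0 the content scores are flat, and for a strongly positive gate the attention concentrates on the offset Δ = (1, 0) — exactly the convolutional behavior the initialization targets.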
Transformed CNNs: recasting pre-trained convolutional layers with self-attention | INTRODUCTION Since the success of AlexNet in 2012 ( Krizhevsky et al. , 2017 ) , the field of Computer Vision has been dominated by Convolutional Neural Networks ( CNNs ) ( LeCun et al. , 1998 ; 1989 ) . Their local receptive fields give them a strong inductive bias to exploit the spatial structure of natural images ( Scherer et al. , 2010 ; Schmidhuber , 2015 ; Goodfellow et al. , 2016 ) , while allowing them to scale to large resolutions seamlessly . Yet , this inductive bias limits their ability to capture long-range interactions . In this regard , self-attention ( SA ) layers , originally introduced in language models ( Bahdanau et al. , 2014 ; Vaswani et al. , 2017 ; Devlin et al. , 2018 ) , have gained interest as a building block for vision Ramachandran et al . ( 2019 ) ; Zhao et al . ( 2020 ) . Recently , they gave rise to a plethora of Vision Transformer ( ViT ) models , able to compete with state-of-the-art CNNs in various tasks Dosovitskiy et al . ( 2020 ) ; Touvron et al . ( 2020 ) ; Wu et al . ( 2020 ) ; Touvron et al . ( 2021 ) ; Liu et al . ( 2021 ) ; Heo et al . ( 2021 ) while demonstrating better robustness ( Bhojanapalli et al. , 2021 ; Mao et al. , 2021 ) . However , capturing long-range dependencies necessarily comes at the cost of quadratic complexity in input size , a computational burden which many recent directions have tried to alleviate ( Bello , 2021 ; Wang et al. , 2020 ; Choromanski et al. , 2020 ; Katharopoulos et al. , 2020 ) . Additionally , ViTs are generally harder to train ( Zhang et al. , 2019 ; Liu et al. , 2020 ) , and require vast amounts of pre-training ( Dosovitskiy et al. , 2020 ) or distillation from a convolutional teacher ( Hinton et al. , 2015 ; Jiang et al. , 2021 ; Graham et al. , 2021 ) to match the performance of CNNs . 
Faced with the dilemma between efficient CNNs and powerful ViTs , several approaches have aimed to bridge the gap between these architectures . On one side , hybrid models append SA layers onto convolutional backbones ( Chen et al. , 2018 ; Bello et al. , 2019 ; Graham et al. , 2021 ; Chen et al. , 2021 ; Srinivas et al. , 2021 ) , and have already fueled successful results in a variety of tasks ( Carion et al. , 2020 ; Hu et al. , 2018 ; Chen et al. , 2020 ; Locatello et al. , 2020 ; Sun et al. , 2019 ) . Conversely , a line of research has studied the benefit of introducing convolutional biases in Transformer architectures to ease learning ( d ’ Ascoli et al. , 2021 ; Wu et al. , 2021 ; Yuan et al. , 2021 ) . Despite these interesting compromises , modelling long-range dependencies at low computational cost remains a challenge for practitioners . Contributions At a time when pre-training on vast datasets has become common practice , we ask the following question : does one need to train the SA layers during the whole learning process ? Could one instead learn cheap components such as convolutions first , leaving the SA layers to be learnt at the end ? In this paper , we take a step in this direction by presenting a method to fully reparameterize any pre-trained convolutional layer as a Gated Positional Self-Attention ( GPSA ) layer ( d ’ Ascoli et al. , 2021 ) . The latter is initialized to reproduce the mapping of the convolutional layer , but is then encouraged to learn more general mappings which are not accessible to the CNN by adjusting positional gating parameters . We leverage this method to reparametrize pre-trained CNNs as functionally equivalent hybrid models . After only 50 epochs of fine-tuning , the resulting Transformed CNNs ( T-CNNs ) boast significant performance and robustness improvements as shown in Fig . 1 , demonstrating the practical relevance of our method . 
We analyze the inner workings of the T-CNNs , and show that they learn more robust representations by combining convolutional heads and attentional heads in a complementary way . Related work Our work mainly builds on two pillars . First , the idea that SA layers can express any convolution , introduced by Cordonnier et al . ( 2019 ) . This idea was recently leveraged by d ’ Ascoli et al . ( 2021 ) , which initialize the SA layers of end-to-end Transformers as random convolutions to imbue them with a local inductive bias and improve their sample efficiency . Our approach leverages the opposite idea : giving an end-to-end CNN the freedom to escape locality by learning self-attention at late times . Second , we exploit the following learning paradigm : train a simple and fast model , then reparameterize it as a more complex model for the final stages of learning . This approach was studied from a scientific point of view in d ’ Ascoli et al . ( 2019 ) , which shows that reparameterizing a CNN as a fully-connected network ( FCN ) halfway through training can lead the FCN to outperform the CNN . Yet , the practical relevance of this method is limited by the vast increase in number of parameters required by the FCN to functionally represent the CNN . In contrast , our reparameterization hardly increases the parameter count of the CNN , making it easily applicable to any state-of-the-art CNN . Note that these reparameterization methods can be viewed an informed version of dynamic architecture growing algorithms such as AutoGrow ( Wen et al. , 2020 ) . In the context of hybrid models , various works have studied the performance gains obtained by introducing MHSA layers in ResNets with minimal architectural changes ( Srinivas et al. , 2021 ; Graham et al. , 2021 ; Chen et al. , 2021 ) . However , the MHSA layers used in these works are initialized randomly and need to be trained from scratch . 
Our approach is different, as it makes use of GPSA layers, which can be initialized to represent the same function as the convolutional layer they replace. We emphasize that the novelty in our work is not in the architectures used, but in the unusual way they are blended together. 1 BACKGROUND. Multi-head self-attention. The SA mechanism is based on a trainable associative memory with (key, query) vector pairs. To extract the semantic interdependencies between the L elements of a sequence X ∈ R^{L×D_in}, a sequence of "query" embeddings Q = W_qry X ∈ R^{L×D_h} is matched against another sequence of "key" embeddings K = W_key X ∈ R^{L×D_h} using inner products. The result is an attention matrix whose entry (i, j) quantifies how semantically relevant Q_i is to K_j: A = softmax(QK^⊤ / √D_h) ∈ R^{L×L}. (1) Multi-head SA layers use several SA heads in parallel to allow the learning of different kinds of dependencies: MSA(X) := Σ_{h=1}^{N_h} [SA_h(X)] W_out^h, SA_h(X) := A^h X W_val^h, (2) where W_val^h ∈ R^{D_in×D_v} and W_out^h ∈ R^{D_v×D_out} are two learnable projections. To incorporate positional information, ViTs usually add absolute position information to the input at embedding time, before propagating it through the SA layers. Another possibility is to replace the vanilla SA with positional SA (PSA), by including a position-dependent term in the softmax (Ramachandran et al., 2019; Shaw et al., 2018). Although there are several ways to parametrize the positional attention, we use encodings r_{ij} of the relative position of pixels i and j as in (Cordonnier et al., 2019): A_{ij}^h := softmax(Q_i^h K_j^{h⊤} + v_pos^{h⊤} r_{ij}). (3) Each attention head learns an embedding v_pos^h ∈ R^{D_pos}, and the relative positional encodings r_{ij} ∈ R^{D_pos} only depend on the distance between pixels i and j, denoted as a two-dimensional vector δ_{ij}. Self-attention as a generalized convolution. Cordonnier et al. (2019) show that a multi-head PSA layer (Eq.
3) with N_h heads and dimension D_pos ≥ 3 can express any convolutional layer of filter size √N_h × √N_h, with D_in input channels and min(D_v, D_out) output channels, by setting the following: v_pos^h := −α_h (1, −2Δ_1^h, −2Δ_2^h, 0, ..., 0), r_δ := (‖δ‖², δ_1, δ_2, 0, ..., 0), W_qry = W_key := 0. (4) In the above, the center of attention Δ^h ∈ R² is the position to which head h pays most attention, relative to the query pixel, whereas the locality strength α_h > 0 determines how focused the attention is around its center Δ^h. When α_h is large, the attention is focused only on the pixel located at Δ^h; when α_h is small, the attention is spread out into a larger area. Thus, the PSA layer can achieve a convolutional attention map by setting the centers of attention Δ^h to each of the possible positional offsets of a √N_h × √N_h convolutional kernel, and sending the locality strengths α_h to some large value. 2 APPROACH. In this section, we introduce our method for mapping a convolutional layer to a functionally equivalent PSA layer with minimal increase in parameter count. To do this, we leverage the GPSA layers introduced in d'Ascoli et al. (2021). Loading the filters. We want each head h of the PSA layer to functionally mimic pixel h of a convolutional filter W_filter ∈ R^{N_h×D_in×D_out}, where we typically have D_out ≥ D_in. Rewriting the action of the MHSA operator in a more explicit form, we have MHSA(X) = Σ_{h=1}^{N_h} A^h X W_val^h W_out^h, with W^h := W_val^h W_out^h ∈ R^{D_in×D_out}. (5) In the convolutional configuration of Eq. 4, A^h X selects pixel h of X. Hence, we need to set W^h = W_filter^h. However, as a product of matrices, the rank of W^h is bottlenecked by D_v. To avoid this being a limitation, we need D_v ≥ D_in (since D_out ≥ D_in). To achieve this with a minimal number of parameters, we choose D_v = D_in, and simply set the following initialization: W_val^h = I, W_out^h = W_filter^h.
(6) Note that this differs from the usual choice made in SA layers, where D_v = ⌊D_in/N_h⌋. However, to keep the parameter count the same, we share the same W_val^h across different heads h, since it plays a symmetric role at initialization. Note that this reparameterization introduces three additional matrices compared to the convolutional filter: W_qry, W_key, W_val, each containing D_in × D_in parameters. However, since the convolutional filter contains N_h × D_in × D_out parameters, where we typically have N_h = 9 and D_out ∈ {D_in, 2D_in}, these additional matrices are much smaller than the filters and hardly increase the parameter count. This can be seen from the model sizes in Tab. 3. Gated positional self-attention. Recent work (d'Ascoli et al., 2021) has highlighted an issue with standard PSA: the content and positional terms in Eq. 3 are potentially of very different magnitudes, in which case the softmax ignores the smaller of the two. This can typically lead the PSA to adopt a greedy attitude: choosing the form of attention (content or positional) which is easiest at a given time, then sticking to it. To avoid this, the authors suggest summing the content and positional terms after the softmax, with their relative importances governed by a learnable gating parameter λ_h (one for each attention head). The resulting Gated Positional Self-Attention (GPSA) layers are parametrized as follows: A_{ij}^h := (1 − σ(λ_h)) softmax(Q_i^h K_j^{h⊤}) + σ(λ_h) softmax(v_pos^{h⊤} r_{ij}), (7) where σ: x ↦ 1/(1 + e^{−x}) is the sigmoid function. In the positional part, the encodings r_{ij} are fixed rather than learnt (see Eq. 4), which makes changing input resolution straightforward (see SM C) and leaves only 3 learnable parameters per head: Δ_1, Δ_2 and α¹. How convolutional should the initialization be?
The convolutional initialization of GPSA layers involves two parameters, determining how strictly convolutional the behavior is: the initial value of the locality strength α, which determines how focused each attention head is on its dedicated pixel, and the initial value of the gating parameters λ, which determines the importance of the positional information versus content. If λ_h ≫ 0 and α ≫ 1, the T-CNN will perfectly reproduce the input-output function of the CNN, but may want to greedily stay in the convolutional configuration. Conversely, if λ_h ≪ 0 and α ≪ 1, the T-CNN will forget about the input-output function of the CNN. Hence, we choose α = 1 and λ = 1 to lie in between these two extremes, encouraging the T-CNNs to escape locality throughout training. Architectural details. To make our setup as canonical as possible, we focus on ResNet architectures (He et al., 2016), which contain 5 stages, with spatial resolution halved and number of channels doubled at each stage. Our method involves reparameterizing 3×3 convolutions as GPSA layers with 9 attention heads. However, global SA is too costly in the first layers, where the spatial resolution is large. We therefore only reparameterize the last stage of the architecture, while replacing the first stride-2 convolution by a stride-1 convolution, exactly as in (Srinivas et al., 2021). We also add explicit padding layers to account for the padding of the original convolutions. ¹Since α represents the temperature of the softmax, its value must stay positive at all times. To ensure this, we instead learn a rectified parameter α̃ using the softplus function: α = (1/β) log(1 + e^{−βα̃}), with β = 5. | The paper explores a hybrid type of model architecture that combines the recently popular transformer-based model architectures with already well-established convolutional neural networks.
Specifically, the proposed hybrid model follows a two-stage training strategy where first a CNN model is trained, and the pre-trained weights are used to further improve the training by recasting the weights into a new transformed CNN model, termed in the paper as T-CNN. The proposed T-CNN model architecture is able to outperform its CNN counterparts and other models compared in the paper. The representations learnt by the T-CNN are also analyzed in the experimental analysis section. | SP:be490ce3cc1184daa229cab9c9dbfede83f79469 |
AS-MLP: An Axial Shifted MLP Architecture for Vision | 1 INTRODUCTION. In the past decade, Convolutional Neural Networks (CNNs) (Krizhevsky et al., 2012; He et al., 2016) have received widespread attention and have become the de-facto standard for computer vision. Furthermore, with the in-depth exploration of and research on self-attention, transformer-based architectures have also gradually emerged, and have surpassed CNN-based architectures in natural language processing (e.g., BERT (Devlin et al., 2018)) and vision understanding (e.g., ViT (Dosovitskiy et al., 2021), DeiT (Touvron et al., 2021b)) given large amounts of training data. Recently, Tolstikhin et al. (2021) first proposed an MLP-based architecture, where almost all network parameters are learned by MLPs (linear layers), achieving amazing results comparable with CNN-like models. Such promising results drive our exploration of MLP-based architectures. In MLP-Mixer (Tolstikhin et al., 2021), the model obtains a global receptive field through matrix transposition and token-mixing projection, such that long-range dependencies are covered. However, this rarely makes full use of local information, which is very important in CNN-like architectures (Simonyan & Zisserman, 2015; He et al., 2016), because not all pixels need long-range dependencies, and local information focuses more on extracting low-level features. Among the transformer-based architectures, Swin Transformer (Liu et al., 2021b) computes the self-attention within a window (7×7) instead of over the global receptive field, which is similar to directly using a convolution layer with a large kernel size (7 × 7) to cover the local receptive field. Some other papers have also emphasized the advantages of local receptive fields and introduced local information into the transformer, such as Localvit (Li et al., 2021), NesT (Zhang et al., 2021), etc.
Driven by these ideas, we mainly explore the influence of locality on MLP-based architectures. In order to introduce locality into the MLP-based architecture, one of the simplest and most intuitive ideas is to add a window to MLP-Mixer, and then perform a token-mixing projection of the local information on the features within the window, just as done in Swin Transformer (Liu et al., 2021b) and LINMAPPER (Fang et al., 2021). However, for the MLP-based architecture, if we divide the window (e.g., 7 × 7) and perform the token-mixing projection in the window, then the linear layer has 49 × 49 parameters shared between windows¹, which greatly limits the model capacity and thus affects the learning of parameters and the final results. Therefore, a more ideal way to introduce locality is to directly model the relationship between a feature point and its surrounding feature points at any position, without the need to set a fixed window (and window size) in advance. To aggregate the features of different spatial positions in the same position and model their relationships, inspired by (Wu et al., 2018; Lin et al., 2019; Wang et al., 2020; Ho et al., 2019), we propose an axial shift strategy for MLP-based architectures, where we spatially shift features in both horizontal and vertical directions. Such an approach not only aggregates features from different locations, but also means the feature channels only need to be divided into k groups instead of k² groups to obtain a receptive field of size k × k, with the help of the axial operation. After that, a channel-mixing MLP combines these features, enabling the model to obtain local dependencies. It also allows us to design the MLP structure in the same way as a convolutional kernel, for instance by choosing the kernel size and dilation rate. Based on the axial shift strategy, we design the Axial Shifted MLP architecture, named AS-MLP.
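A minimal NumPy sketch of the axial shift just described (shapes and group-to-offset assignment are illustrative, not the paper's exact implementation): channels are split into k groups, and each group is shifted by a different offset along one spatial axis with zero padding, so that a subsequent channel-mixing MLP mixes information from k neighbors per axis.

```python
import numpy as np

def axial_shift(x, shift_size=3, axis=1):
    """Split channels into `shift_size` groups and shift each group by a
    different offset along one spatial axis, zero-padding at the borders.
    x has shape (C, H, W); axis=1 shifts vertically, axis=2 horizontally."""
    offsets = range(-(shift_size // 2), shift_size // 2 + 1)  # e.g. -1, 0, 1
    out = []
    for chunk, d in zip(np.array_split(x, shift_size, axis=0), offsets):
        shifted = np.zeros_like(chunk)
        src = [slice(None)] * 3
        dst = [slice(None)] * 3
        if d > 0:
            dst[axis], src[axis] = slice(d, None), slice(None, -d)
        elif d < 0:
            dst[axis], src[axis] = slice(None, d), slice(-d, None)
        shifted[tuple(dst)] = chunk[tuple(src)]
        out.append(shifted)
    return np.concatenate(out, axis=0)

x = np.arange(3 * 4 * 4, dtype=float).reshape(3, 4, 4)
y_v = axial_shift(x, shift_size=3, axis=1)   # vertical branch
y_h = axial_shift(x, shift_size=3, axis=2)   # horizontal branch
# A channel-mixing (1x1) MLP applied to y_v / y_h now sees a k-sized
# neighborhood per axis, while the channels were split into only k groups.
```

Note that a k × k receptive field emerges from combining the two axial branches, instead of dividing the channels into k² groups.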
Our AS-MLP obtains 83.3% Top-1 accuracy with 88M parameters and 15.2 GFLOPs on the ImageNet-1K dataset without any extra training data. Such a simple yet effective method outperforms all MLP-based architectures and achieves competitive performance compared to the transformer-based architectures. It is also worth noting that the model weights of MLP-Mixer trained with a fixed image size cannot be adapted to downstream tasks with various input sizes, because the token-mixing MLP has a fixed dimension. On the contrary, the AS-MLP architecture can be transferred to downstream tasks (e.g., object detection) due to the design of the axial shift. As far as we know, it is also the first work to apply an MLP-based architecture to downstream tasks. With the model pre-trained on the ImageNet-1K dataset, AS-MLP obtains 51.5 mAP on the COCO validation set and 49.5 MS mIoU on the ADE20K dataset, which is competitive compared to the transformer-based architectures. 2 RELATED WORK. CNN-based Architectures. Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features. Subsequently, the VGG network (Simonyan & Zisserman, 2015) was proposed, which purely uses a series of 3 × 3 convolutions and fully connected layers, and obtains outstanding performance in image classification. Furthermore, ResNet (He et al., 2016) was proposed, which utilizes residual connections to transfer features across different layers, thereby alleviating the problem of gradient vanishing and obtaining superior performance. After that, the residual module became an important component of network design and is also employed in subsequent transformer-based architectures and MLP-based architectures.
Some papers have made further improvements to the convolution operation in CNN-based architectures, such as dilated convolution (Yu & Koltun, 2016) and deformable convolution (Dai et al., 2017). EfficientNet (Tan & Le, 2019; 2021) introduces neural architecture search into CNNs to search for a suitable structure. These architectures build the CNN family and are used extensively in computer vision tasks. Transformer-based Architectures. The Transformer was first proposed in (Vaswani et al., 2017), where the attention mechanism is utilized to model the relationship between features from different spatial positions. Subsequently, the popularity of BERT (Devlin et al., 2018) in NLP also promoted research on transformers in the field of vision. ViT (Dosovitskiy et al., 2021) uses a pure transformer framework to extract visual features, where an image is divided into 16 × 16 patches and the convolution layer is completely abandoned. It shows that the transformer-based architecture can perform well on large-scale datasets (e.g., JFT-300M). After that, DeiT (Touvron et al., 2021b) carefully designs training strategies and data augmentation to further improve performance on smaller datasets (e.g., ImageNet-1K). DeepViT (Zhou et al., 2021) and CaiT (Touvron et al., 2021c) consider the optimization problem when the network deepens, and train deeper transformer networks. CrossViT (Chen et al., 2021a) combines local patches and global patches using two vision transformers. CPVT (Chu et al., 2021b) uses a conditional position encoding to effectively encode the spatial positions of patches. (¹If the linear layer is not shared between windows, the model weights trained with a fixed image size cannot be adapted to downstream tasks with various input sizes, because unfixed input sizes cause a mismatch in the number of windows.) LeViT (Graham et al.
, 2021) improves ViT in many aspects, including the convolution embedding, an extra non-linear projection, and batch normalization, etc. Transformer-LS (Zhu et al., 2021) proposes a long-range attention and a short-term attention to model long sequences for both language and vision tasks. Some papers also design hierarchical backbones to extract spatial features at different scales, such as PVT (Wang et al., 2021), Swin Transformer (Liu et al., 2021b), Twins (Chu et al., 2021a) and NesT (Zhang et al., 2021), which can be applied to downstream tasks (e.g., object detection and semantic segmentation). MLP-based Architectures. MLP-Mixer (Tolstikhin et al., 2021) designed a very concise framework that utilizes matrix transposition and MLPs to transmit information between spatial features. Resorting to MLPs, skip connections between layers, and normalization layers, MLP-Mixer obtains promising experimental results. The concurrent work FF (Melas-Kyriazi, 2021) applies a similar network architecture and reaches similar conclusions. Such experimental results are surprising, showing that MLP-based architectures can achieve performance comparable with CNN-based and transformer-based architectures. Subsequently, Res-MLP (Touvron et al., 2021a) was proposed, which also obtains impressive performance with residual MLPs trained only on ImageNet-1K. gMLP (Liu et al., 2021a) and EA (Guo et al., 2021) introduce the Spatial Gating Unit (SGU) and the external attention to improve the performance of pure MLP-based architectures, respectively. Recently, Container (Gao et al., 2021) proposed a general network that unifies convolution, transformer, and MLP-Mixer. The closest concurrent methods to our work are S2-MLP (Yu et al., 2021) and ViP (Hou et al., 2021). S2-MLP uses a spatial-shift MLP for feature exchange.
ViP proposes a Permute-MLP layer for spatial information encoding to capture long-range dependencies. Different from their work, we focus on capturing local dependencies by axially shifting features in the spatial dimension, which obtains better performance and can be applied to downstream tasks. More recently, the concurrent works CycleMLP (Chen et al., 2021b) and S2-MLPv2 (Yu et al., 2021) have been proposed. S2-MLPv2 improves S2-MLP, and CycleMLP designs a Cycle Fully-Connected layer (Cycle FC) to obtain a larger receptive field than a Channel FC. Our AS-MLP obtains comparable performance with these architectures. 3 THE AS-MLP ARCHITECTURE. 3.1 OVERALL ARCHITECTURE. Figure 1 shows our Axial Shifted MLP (AS-MLP) architecture². Given an RGB image I ∈ R^{3×H×W}, where H and W are the height and width of the image, respectively, AS-MLP first performs the patch partition operation, which splits the original image into multiple patch tokens with a patch size of 4 × 4; thus the combination of all tokens has the size of 48 × H/4 × W/4. AS-MLP has four stages in total, and there are different numbers of AS-MLP blocks in the different stages. Figure 1 only shows the tiny version of AS-MLP, and other variants will be discussed in Sec. 3.4. All the tokens from the previous step go through these four stages, and the final output feature is used for image classification. In Stage 1, a linear embedding and the AS-MLP blocks are applied to each token. The output has the dimension of C × H/4 × W/4, where C is the number of channels. Stage 2 first performs patch merging on the features output by the previous stage, which groups the neighboring 2×2 patches to obtain a feature of size 4C × H/8 × W/8; then a linear layer is adopted to warp the feature size to 2C × H/8 × W/8, followed by the cascaded AS-MLP blocks.
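The patch partition and patch merging steps described above amount to plain reshapes; a simplified NumPy sketch (the linear embedding and the 4C→2C warping layer are omitted, and the channel ordering inside each token is a convention chosen here):

```python
import numpy as np

def patch_partition(img, p=4):
    """(C, H, W) -> (C*p*p, H//p, W//p): fold each p x p patch into channels."""
    C, H, W = img.shape
    x = img.reshape(C, H // p, p, W // p, p)          # split H and W into patches
    return x.transpose(0, 2, 4, 1, 3).reshape(C * p * p, H // p, W // p)

def patch_merging(x):
    """(C, H, W) -> (4C, H//2, W//2): group neighboring 2 x 2 patches."""
    return patch_partition(x, p=2)

img = np.random.default_rng(0).normal(size=(3, 224, 224))
tokens = patch_partition(img, p=4)   # 48 x 56 x 56, i.e. 48 x H/4 x W/4
merged = patch_merging(tokens)       # 192 x 28 x 28, before the linear warp to 2C
```

A usage check: for a 224 × 224 RGB image, the partition yields 48 × 56 × 56 tokens, matching the 48 × H/4 × W/4 shape stated in the text.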
Stage 3 and Stage 4 have structures similar to Stage 2, and hierarchical representations are generated in these stages. ²The design of the figure follows Swin Transformer (Liu et al., 2021b). | The paper proposes a new architecture for computer vision that is inspired by (a) the Swin Transformer, (b) MLP-Mixer (and colleagues) and (c) CNN-like local context via shifts (like Shift, TSM, ViP, S2-MLP). The architecture is based on the Swin Transformer, removes the windowed attention, and then adds "local shifts" of channels to introduce local context via MLPs. The architecture is applied to ImageNet-1k classification, COCO detection, and ADE20k segmentation with good results relative to model size and performance. | SP:ad02657d8b6e822881aac3b489431e1ee0afeddc |
AS-MLP: An Axial Shifted MLP Architecture for Vision | 1 INTRODUCTION. In the past decade, Convolutional Neural Networks (CNNs) (Krizhevsky et al., 2012; He et al., 2016) have received widespread attention and have become the de-facto standard for computer vision. Furthermore, with the in-depth exploration of and research on self-attention, transformer-based architectures have also gradually emerged, and have surpassed CNN-based architectures in natural language processing (e.g., BERT (Devlin et al., 2018)) and vision understanding (e.g., ViT (Dosovitskiy et al., 2021), DeiT (Touvron et al., 2021b)) given large amounts of training data. Recently, Tolstikhin et al. (2021) first proposed an MLP-based architecture, where almost all network parameters are learned by MLPs (linear layers), achieving amazing results comparable with CNN-like models. Such promising results drive our exploration of MLP-based architectures. In MLP-Mixer (Tolstikhin et al., 2021), the model obtains a global receptive field through matrix transposition and token-mixing projection, such that long-range dependencies are covered. However, this rarely makes full use of local information, which is very important in CNN-like architectures (Simonyan & Zisserman, 2015; He et al., 2016), because not all pixels need long-range dependencies, and local information focuses more on extracting low-level features. Among the transformer-based architectures, Swin Transformer (Liu et al., 2021b) computes the self-attention within a window (7×7) instead of over the global receptive field, which is similar to directly using a convolution layer with a large kernel size (7 × 7) to cover the local receptive field. Some other papers have also emphasized the advantages of local receptive fields and introduced local information into the transformer, such as Localvit (Li et al., 2021), NesT (Zhang et al., 2021), etc.
Driven by these ideas, we mainly explore the influence of locality on MLP-based architectures. In order to introduce locality into the MLP-based architecture, one of the simplest and most intuitive ideas is to add a window to MLP-Mixer, and then perform a token-mixing projection of the local information on the features within the window, just as done in Swin Transformer (Liu et al., 2021b) and LINMAPPER (Fang et al., 2021). However, for the MLP-based architecture, if we divide the window (e.g., 7 × 7) and perform the token-mixing projection in the window, then the linear layer has 49 × 49 parameters shared between windows¹, which greatly limits the model capacity and thus affects the learning of parameters and the final results. Therefore, a more ideal way to introduce locality is to directly model the relationship between a feature point and its surrounding feature points at any position, without the need to set a fixed window (and window size) in advance. To aggregate the features of different spatial positions in the same position and model their relationships, inspired by (Wu et al., 2018; Lin et al., 2019; Wang et al., 2020; Ho et al., 2019), we propose an axial shift strategy for MLP-based architectures, where we spatially shift features in both horizontal and vertical directions. Such an approach not only aggregates features from different locations, but also means the feature channels only need to be divided into k groups instead of k² groups to obtain a receptive field of size k × k, with the help of the axial operation. After that, a channel-mixing MLP combines these features, enabling the model to obtain local dependencies. It also allows us to design the MLP structure in the same way as a convolutional kernel, for instance by choosing the kernel size and dilation rate. Based on the axial shift strategy, we design the Axial Shifted MLP architecture, named AS-MLP.
Our AS-MLP obtains 83.3% Top-1 accuracy with 88M parameters and 15.2 GFLOPs on the ImageNet-1K dataset without any extra training data. Such a simple yet effective method outperforms all MLP-based architectures and achieves competitive performance compared to the transformer-based architectures. It is also worth noting that the model weights of MLP-Mixer trained with a fixed image size cannot be adapted to downstream tasks with various input sizes, because the token-mixing MLP has a fixed dimension. On the contrary, the AS-MLP architecture can be transferred to downstream tasks (e.g., object detection) due to the design of the axial shift. As far as we know, it is also the first work to apply an MLP-based architecture to downstream tasks. With the model pre-trained on the ImageNet-1K dataset, AS-MLP obtains 51.5 mAP on the COCO validation set and 49.5 MS mIoU on the ADE20K dataset, which is competitive compared to the transformer-based architectures. 2 RELATED WORK. CNN-based Architectures. Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features. Subsequently, the VGG network (Simonyan & Zisserman, 2015) was proposed, which purely uses a series of 3 × 3 convolutions and fully connected layers, and obtains outstanding performance in image classification. Furthermore, ResNet (He et al., 2016) was proposed, which utilizes residual connections to transfer features across different layers, thereby alleviating the problem of gradient vanishing and obtaining superior performance. After that, the residual module became an important component of network design and is also employed in subsequent transformer-based architectures and MLP-based architectures.
Some papers have made further improvements to the convolution operation in CNN-based architectures, such as dilated convolution (Yu & Koltun, 2016) and deformable convolution (Dai et al., 2017). EfficientNet (Tan & Le, 2019; 2021) introduces neural architecture search into CNNs to search for a suitable structure. These architectures build the CNN family and are used extensively in computer vision tasks. Transformer-based Architectures. The Transformer was first proposed in (Vaswani et al., 2017), where the attention mechanism is utilized to model the relationship between features from different spatial positions. Subsequently, the popularity of BERT (Devlin et al., 2018) in NLP also promoted research on transformers in the field of vision. ViT (Dosovitskiy et al., 2021) uses a pure transformer framework to extract visual features, where an image is divided into 16 × 16 patches and the convolution layer is completely abandoned. It shows that the transformer-based architecture can perform well on large-scale datasets (e.g., JFT-300M). After that, DeiT (Touvron et al., 2021b) carefully designs training strategies and data augmentation to further improve performance on smaller datasets (e.g., ImageNet-1K). DeepViT (Zhou et al., 2021) and CaiT (Touvron et al., 2021c) consider the optimization problem when the network deepens, and train deeper transformer networks. CrossViT (Chen et al., 2021a) combines local patches and global patches using two vision transformers. CPVT (Chu et al., 2021b) uses a conditional position encoding to effectively encode the spatial positions of patches. (¹If the linear layer is not shared between windows, the model weights trained with a fixed image size cannot be adapted to downstream tasks with various input sizes, because unfixed input sizes cause a mismatch in the number of windows.) LeViT (Graham et al.
, 2021) improves ViT in many aspects, including the convolution embedding, an extra non-linear projection, and batch normalization, etc. Transformer-LS (Zhu et al., 2021) proposes a long-range attention and a short-term attention to model long sequences for both language and vision tasks. Some papers also design hierarchical backbones to extract spatial features at different scales, such as PVT (Wang et al., 2021), Swin Transformer (Liu et al., 2021b), Twins (Chu et al., 2021a) and NesT (Zhang et al., 2021), which can be applied to downstream tasks (e.g., object detection and semantic segmentation). MLP-based Architectures. MLP-Mixer (Tolstikhin et al., 2021) designed a very concise framework that utilizes matrix transposition and MLPs to transmit information between spatial features. Resorting to MLPs, skip connections between layers, and normalization layers, MLP-Mixer obtains promising experimental results. The concurrent work FF (Melas-Kyriazi, 2021) applies a similar network architecture and reaches similar conclusions. Such experimental results are surprising, showing that MLP-based architectures can achieve performance comparable with CNN-based and transformer-based architectures. Subsequently, Res-MLP (Touvron et al., 2021a) was proposed, which also obtains impressive performance with residual MLPs trained only on ImageNet-1K. gMLP (Liu et al., 2021a) and EA (Guo et al., 2021) introduce the Spatial Gating Unit (SGU) and the external attention to improve the performance of pure MLP-based architectures, respectively. Recently, Container (Gao et al., 2021) proposed a general network that unifies convolution, transformer, and MLP-Mixer. The closest concurrent methods to our work are S2-MLP (Yu et al., 2021) and ViP (Hou et al., 2021). S2-MLP uses a spatial-shift MLP for feature exchange.
ViP proposes a Permute-MLP layer for spatial information encoding to capture long-range dependencies. Different from their work, we focus on capturing local dependencies by axially shifting features in the spatial dimension, which obtains better performance and can be applied to downstream tasks. More recently, the concurrent works CycleMLP (Chen et al., 2021b) and S2-MLPv2 (Yu et al., 2021) have been proposed. S2-MLPv2 improves S2-MLP, and CycleMLP designs a Cycle Fully-Connected layer (Cycle FC) to obtain a larger receptive field than a Channel FC. Our AS-MLP obtains comparable performance with these architectures. 3 THE AS-MLP ARCHITECTURE. 3.1 OVERALL ARCHITECTURE. Figure 1 shows our Axial Shifted MLP (AS-MLP) architecture². Given an RGB image I ∈ R^{3×H×W}, where H and W are the height and width of the image, respectively, AS-MLP first performs the patch partition operation, which splits the original image into multiple patch tokens with a patch size of 4 × 4; thus the combination of all tokens has the size of 48 × H/4 × W/4. AS-MLP has four stages in total, and there are different numbers of AS-MLP blocks in the different stages. Figure 1 only shows the tiny version of AS-MLP, and other variants will be discussed in Sec. 3.4. All the tokens from the previous step go through these four stages, and the final output feature is used for image classification. In Stage 1, a linear embedding and the AS-MLP blocks are applied to each token. The output has the dimension of C × H/4 × W/4, where C is the number of channels. Stage 2 first performs patch merging on the features output by the previous stage, which groups the neighboring 2×2 patches to obtain a feature of size 4C × H/8 × W/8; then a linear layer is adopted to warp the feature size to 2C × H/8 × W/8, followed by the cascaded AS-MLP blocks.
Stage 3 and Stage 4 have structures similar to Stage 2, and hierarchical representations are generated in these stages. ²The design of the figure follows Swin Transformer (Liu et al., 2021b). | This paper proposes to use the shift operation (Wu et al., CVPR 2018) in an axial manner for MLP-Mixer architectures. The proposed method performs much better than previous MLP-based methods on ImageNet-1K and on par with Swin Transformer. | SP:ad02657d8b6e822881aac3b489431e1ee0afeddc |
Unsupervised Federated Learning is Possible | 1 INTRODUCTION . Federated Learning (FL) has received significant attention from both academic and industrial perspectives, in that it can bring together separate data sources and allow multiple clients to train a central model in a collaborative but private manner (McMahan et al., 2017; Kairouz et al., 2019; Yang et al., 2019). So far, the majority of FL research has focused on the supervised setting, requiring the data collected at each client to be fully labeled—this may hinder the spread of FL in practice, since labeling large-scale training data can be extremely costly due to laborious manual annotation, and sometimes may not even be possible due to privacy concerns, e.g., in medical diagnosis (Ng et al., 2021). In this paper, we are interested in a challenging unsupervised FL problem where only unlabeled (U) data are available at each client. Our goal remains to train a classification model that can predict accurate class labels in the FL setup. It is often observed that each client collects its own data at different time points or from different geographic locations; e.g., a hospital (data center) may store patient data every month, or a particular user (mobile device) may take photos at different places. Therefore, we consider a realistic setting in which the U data at a client come in the form of separate data sets: the data distribution of each U set and the number of U sets available at each client may vary. Without any labels, FL becomes significantly harder than before. In this work, we show that the aforementioned unsupervised FL problem can be solved if the class prior probability shifts between the available U sets at each client, and these class priors are known to the clients. Such a learning scenario is conceivable in many real-world problems.
For example, a hospital may not release the patients' diagnostic labels due to privacy reasons, but the morbidity rates of diseases (corresponding to the class priors) may change over time and can be accessible from related public medical reports (Croft et al., 2018); moreover, in many cases it is possible to estimate the class priors much more cheaply than the ground-truth labels (Quadrianto et al., 2009a). Based on these findings, we propose the federation of unsupervised learning (FedUL) (cf. Figure 1), where the original clients with U data are transformed into surrogate clients with (surrogate) labeled data, so that supervised FL can be applied. The difficulty is how to infer our wanted model for the original classification task from the surrogate model learned on the surrogate task. We solve this problem by bridging the original and surrogate class probabilities with certain injective transition functions. We implement this by adding a transition layer to the output of each client model, so that the learned model is guaranteed to be a good approximation of the original class probability. On one hand, FedUL is very general and flexible: it works as a wrapper that transforms the original clients into surrogate ones, such that it is compatible with many supervised FL methods, e.g., FedAvg, and adds no additional communication cost. On the other hand, FedUL is computationally efficient and easy to implement: since the added transition layer is fixed and determined by the class priors, FedUL adds no extra burden of hyper-parameter tuning and/or optimization. Our contributions can be summarized as follows: • Methodologically, we propose a novel FedUL method that employs a surrogate FL task, providing new perspectives for dealing with U data in FL.
• Theoretically, we prove that the optimal global model learned by FedUL with only U data converges to the optimal global model learned by supervised FL with fully labeled data, under mild conditions. • Empirically, we demonstrate the effectiveness of FedUL on benchmark and real-world datasets. Related Work: FL has been extensively studied in the supervised setting. One of the most popular supervised FL paradigms is FedAvg (McMahan et al., 2017), which aggregates the local updates at the server and transmits the averaged model back to the local clients. In contrast, FL in the unsupervised setting is less explored. Recent studies have shown the possibility of federated clustering with U data (Ghosh et al., 2020; Dennis et al., 2021). However, these clustering-based methods rely on geometric or information-theoretic assumptions to build their learning objectives (Chapelle et al., 2006), and are sub-optimal for classification goals. Our work is intrinsically different from them in the sense that we show the possibility of federated classification with U data based on empirical risk minimization (ERM) (Vapnik, 1998), so that recovery of the optimal model can be guaranteed. Learning from U datasets has also been studied in classical centralized learning. Mainstream directions include learning with label proportions (LLP) (Quadrianto et al., 2009b) and Um classification (Lu et al., 2021). The former learns a multi-class classifier from U datasets based on empirical proportion risk minimization (EPRM) (Yu et al., 2014), while the latter learns a binary classifier from U datasets based on ERM. EPRM is inferior to ERM since its learning is not consistent. Yet, it remains unexplored how to learn a multi-class classification model from U datasets based on ERM.
To the best of our knowledge, this work is the first attempt to tackle this problem in the FL setup; it is built upon binary Um classification in centralized learning. 2 STANDARD FEDERATED LEARNING . In this section, we review standard FL of a classification model. We consider a K-class classification problem with the feature space X ⊂ ℝ^d and the label space Y = [K], where d is the input dimension and [K] := {1, ..., K}. Let x ∈ X and y ∈ Y be the input and output random variables following an underlying joint distribution with density p(x, y), which can be identified via the class priors {π_k = p(y = k)}_{k=1}^K and the class-conditional densities {p(x | y = k)}_{k=1}^K. Let f : X → ℝ^K be a classification model that assigns a score to each of the K classes for a given input x and then outputs the predicted label y_pred = argmax_{k ∈ [K]} (f(x))_k, where (f(x))_k is the k-th element of f(x). In the standard FL setup with C clients, each client c ∈ [C] has access to a labeled training set D_c = {(x_i^c, y_i^c)}_{i=1}^{n_c} ∼ p_c(x, y) of sample size n_c and learns its local model f_c by minimizing the following risk: R_c(f_c) := E_{(x,y) ∼ p_c(x,y)}[ℓ(f_c(x), y)], (1) where E denotes the expectation and ℓ : ℝ^K × Y → ℝ_+ is a proper loss function, e.g., the softmax cross-entropy loss ℓ_ce(f_c(x), y) = − Σ_{k=1}^K 1(y = k) log (f_c(x))_k = − log (f_c(x))_y, where 1(·) is the indicator function. In practice, ERM computes an approximation of R_c(f_c) based on the client's local data D_c: R̂_c(f_c; D_c) = (1/n_c) Σ_{i=1}^{n_c} ℓ(f_c(x_i^c), y_i^c). The goal of standard FL (McMahan et al., 2017) is for the C clients to collaboratively train a global classification model f that generalizes well with respect to p(x, y), without sharing their local data D_c.
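The local ERM objective R̂_c above can be made concrete with a small sketch that evaluates the softmax cross-entropy loss, i.e., the average of −log(softmax(f_c(x_i)))_{y_i} over the client's data. All numbers below are made up for illustration.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def empirical_risk(scores, labels):
    """Empirical risk R-hat_c with the softmax cross-entropy loss:
    the average of -log p(y_i | x_i) over the local data set."""
    probs = softmax(scores)
    n = len(labels)
    return -np.mean(np.log(probs[np.arange(n), labels]))

# Toy local data: scores f_c(x_i) for K = 3 classes, n_c = 2 samples.
scores = np.array([[2.0, 0.0, 0.0],
                   [0.0, 0.0, 2.0]])
labels = np.array([0, 2])
r_hat = empirical_risk(scores, labels)
print(r_hat)  # ≈ 0.2395: both samples give the true class probability e²/(e² + 2)
```

Each client would minimize this quantity with respect to its model parameters during the local update steps.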
The problem can be formalized as minimizing the aggregated risk: R(f) = (1/C) Σ_{c=1}^C R_c(f). (2) In practice, we typically employ a server to coordinate the iterative distributed training as follows: • in each global round of training, the server broadcasts its current model f to all C clients; • each client c copies the current server model, f_c = f, performs L local update steps, f_c ← f_c − α_l · ∇R̂_c(f_c; D_c), (3) where α_l is the local step-size, and sends f_c − f back to the server; • the server aggregates the updates {f_c − f}_{c=1}^C from all clients to form the new server model using FedAvg: f ← f + α_g · Σ_{c=1}^C (f_c − f), (4) where α_g is the global step-size. 3 FEDERATED LEARNING: NO LABEL NO CRY . In this section, we formulate the novel problem setting of FL with only unlabeled datasets, propose our method, called federation of unsupervised learning (FedUL), and provide a theoretical analysis of FedUL. All proofs are given in Appendix C. 3.1 PROBLEM FORMULATION . In this paper, we consider a challenging setting: instead of fully labeled training data, each client c ∈ [C] observes M_c (M_c ≥ K, ∀c)¹ sets of unlabeled samples, U_c = {U_c^m}_{m=1}^{M_c}, where U_c^m = {x_i^{c,m}}_{i=1}^{n_{c,m}} denotes the m-th U set of client c, with sample size n_{c,m}. Each U set can be seen as a set of data points drawn from a mixture of the original class-conditional densities: U_c^m ∼ p_c^m(x) = Σ_{k=1}^K π_c^{m,k} p(x | y = k), (5) where π_c^{m,k} = p_c^m(y = k) denotes the k-th class prior of the m-th U set. These class priors form a full-column-rank matrix Π_c ∈ ℝ^{M_c × K}, with the constraint that each row sums to 1. ¹Note that we do not require each client to have the same number of U sets. We only assume the condition M_c ≥ K, ∀c, to ensure a full-column-rank matrix Π_c, which is essential for FL with U datasets.
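The full-column-rank condition on Π_c is easy to check numerically. A small sketch with made-up priors (M_c = 3 unlabeled sets, K = 2 classes) also shows why identical priors across sets would break the condition:

```python
import numpy as np

# Class priors of M_c = 3 unlabeled sets over K = 2 classes at one client.
# Each row is the class distribution inside one U set (rows sum to 1);
# the numbers are illustrative, not from the paper.
Pi_c = np.array([[0.9, 0.1],
                 [0.5, 0.5],
                 [0.2, 0.8]])

assert np.allclose(Pi_c.sum(axis=1), 1.0)            # rows are distributions
rank_good = np.linalg.matrix_rank(Pi_c)              # full column rank: 2

# If all U sets shared the same priors, the rank condition would fail and
# the sets would carry no signal to separate the classes.
Pi_bad = np.tile([[0.5, 0.5]], (3, 1))
rank_bad = np.linalg.matrix_rank(Pi_bad)
print(rank_good, rank_bad)  # 2 1
```

This is exactly the "class prior shift between U sets" assumption stated in the introduction: the rows of Π_c must differ enough to span all K class directions.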
Note that we do not require any labels to be known; we only assume that the class priors of each of the involved data sets are available, i.e., the training class priors π_c^{m,k} and the test class priors π_k = p(y = k). Our goal is the same as in standard FL: the C clients are interested in collaboratively training a global classification model f that generalizes well with respect to p(x, y), but using only the unlabeled datasets U_c on each local client. | This work presents a novel federated learning scheme to address the problem of learning from only unlabeled data. The main idea is clean and interesting: a global (server) model is constructed by aggregating the surrogate clients' tasks, which observe only unlabeled data for the classification tasks. The unlabeled data are transformed at each client to make them compatible with supervised federated learning; consequently, the learned models are transformed accordingly. Existing federated aggregation techniques, e.g., FedAvg, are applied to these transformed clients (which they call surrogate clients). Theoretical results are provided for learning the optimal model, and experiments on benchmark and real data show superior performance compared to baselines. | SP:a69fd07798a781ba3b1f06d26c6c4554abcfd30f |
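Given the known priors above, the kind of fixed, prior-determined transition layer that FedUL appends to each client model can be sketched. The Bayes-style form q(m | x) ∝ ρ_m Σ_k Π_{m,k} g_k(x) / π_k, mapping class posteriors g to surrogate set-index posteriors, is our assumption for illustration; the exact transition function used by FedUL is derived later in the paper.

```python
import numpy as np

def transition_layer(g, Pi, pi, rho):
    """Fixed mapping from class posteriors g[k] = p(y = k | x) to surrogate
    set posteriors q(m | x) ∝ rho[m] * sum_k Pi[m, k] * g[k] / pi[k].
    Pi[m, k] is the class-k prior of U set m, pi the test class priors,
    rho the sampling proportions of the sets -- all assumed known."""
    q = rho * (Pi @ (g / pi))
    return q / q.sum()

Pi = np.array([[0.8, 0.2],
               [0.3, 0.7]])   # class priors of two U sets (rows sum to 1)
pi = np.array([0.5, 0.5])     # test class priors
rho = np.array([0.5, 0.5])    # equal set sizes

q_flat = transition_layer(np.array([0.5, 0.5]), Pi, pi, rho)  # uninformative g
q_peak = transition_layer(np.array([1.0, 0.0]), Pi, pi, rho)  # g certain of class 1
print(q_flat, q_peak)
```

Because the layer contains no trainable parameters, training the composite model on the surrogate set-index labels implicitly trains the underlying class-posterior model g, which is the wrapper idea described in the introduction.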
Unsupervised Federated Learning is Possible | 1 INTRODUCTION . Federated Learning (FL) has received significant attention from both academic and industrial perspectives, in that it can bring together separate data sources and allow multiple clients to train a central model in a collaborative but private manner (McMahan et al., 2017; Kairouz et al., 2019; Yang et al., 2019). So far, the majority of FL research has focused on the supervised setting, requiring the data collected at each client to be fully labeled—this may hinder the spread of FL in practice, since labeling large-scale training data can be extremely costly due to laborious manual annotation, and sometimes may not even be possible due to privacy concerns, e.g., in medical diagnosis (Ng et al., 2021). In this paper, we are interested in a challenging unsupervised FL problem where only unlabeled (U) data are available at each client. Our goal remains to train a classification model that can predict accurate class labels in the FL setup. It is often observed that each client collects its own data at different time points or from different geographic locations; e.g., a hospital (data center) may store patient data every month, or a particular user (mobile device) may take photos at different places. Therefore, we consider a realistic setting in which the U data at a client come in the form of separate data sets: the data distribution of each U set and the number of U sets available at each client may vary. Without any labels, FL becomes significantly harder than before. In this work, we show that the aforementioned unsupervised FL problem can be solved if the class prior probability shifts between the available U sets at each client, and these class priors are known to the clients. Such a learning scenario is conceivable in many real-world problems.
For example, a hospital may not release the patients' diagnostic labels due to privacy reasons, but the morbidity rates of diseases (corresponding to the class priors) may change over time and can be accessible from related public medical reports (Croft et al., 2018); moreover, in many cases it is possible to estimate the class priors much more cheaply than the ground-truth labels (Quadrianto et al., 2009a). Based on these findings, we propose the federation of unsupervised learning (FedUL) (cf. Figure 1), where the original clients with U data are transformed into surrogate clients with (surrogate) labeled data, so that supervised FL can be applied. The difficulty is how to infer our wanted model for the original classification task from the surrogate model learned on the surrogate task. We solve this problem by bridging the original and surrogate class probabilities with certain injective transition functions. We implement this by adding a transition layer to the output of each client model, so that the learned model is guaranteed to be a good approximation of the original class probability. On one hand, FedUL is very general and flexible: it works as a wrapper that transforms the original clients into surrogate ones, such that it is compatible with many supervised FL methods, e.g., FedAvg, and adds no additional communication cost. On the other hand, FedUL is computationally efficient and easy to implement: since the added transition layer is fixed and determined by the class priors, FedUL adds no extra burden of hyper-parameter tuning and/or optimization. Our contributions can be summarized as follows: • Methodologically, we propose a novel FedUL method that employs a surrogate FL task, providing new perspectives for dealing with U data in FL.
• Theoretically, we prove that the optimal global model learned by FedUL with only U data converges to the optimal global model learned by supervised FL with fully labeled data, under mild conditions. • Empirically, we demonstrate the effectiveness of FedUL on benchmark and real-world datasets. Related Work: FL has been extensively studied in the supervised setting. One of the most popular supervised FL paradigms is FedAvg (McMahan et al., 2017), which aggregates the local updates at the server and transmits the averaged model back to the local clients. In contrast, FL in the unsupervised setting is less explored. Recent studies have shown the possibility of federated clustering with U data (Ghosh et al., 2020; Dennis et al., 2021). However, these clustering-based methods rely on geometric or information-theoretic assumptions to build their learning objectives (Chapelle et al., 2006), and are sub-optimal for classification goals. Our work is intrinsically different from them in the sense that we show the possibility of federated classification with U data based on empirical risk minimization (ERM) (Vapnik, 1998), so that recovery of the optimal model can be guaranteed. Learning from U datasets has also been studied in classical centralized learning. Mainstream directions include learning with label proportions (LLP) (Quadrianto et al., 2009b) and Um classification (Lu et al., 2021). The former learns a multi-class classifier from U datasets based on empirical proportion risk minimization (EPRM) (Yu et al., 2014), while the latter learns a binary classifier from U datasets based on ERM. EPRM is inferior to ERM since its learning is not consistent. Yet, it remains unexplored how to learn a multi-class classification model from U datasets based on ERM.
To the best of our knowledge, this work is the first attempt to tackle this problem in the FL setup; it is built upon binary Um classification in centralized learning. 2 STANDARD FEDERATED LEARNING . In this section, we review standard FL of a classification model. We consider a K-class classification problem with the feature space X ⊂ ℝ^d and the label space Y = [K], where d is the input dimension and [K] := {1, ..., K}. Let x ∈ X and y ∈ Y be the input and output random variables following an underlying joint distribution with density p(x, y), which can be identified via the class priors {π_k = p(y = k)}_{k=1}^K and the class-conditional densities {p(x | y = k)}_{k=1}^K. Let f : X → ℝ^K be a classification model that assigns a score to each of the K classes for a given input x and then outputs the predicted label y_pred = argmax_{k ∈ [K]} (f(x))_k, where (f(x))_k is the k-th element of f(x). In the standard FL setup with C clients, each client c ∈ [C] has access to a labeled training set D_c = {(x_i^c, y_i^c)}_{i=1}^{n_c} ∼ p_c(x, y) of sample size n_c and learns its local model f_c by minimizing the following risk: R_c(f_c) := E_{(x,y) ∼ p_c(x,y)}[ℓ(f_c(x), y)], (1) where E denotes the expectation and ℓ : ℝ^K × Y → ℝ_+ is a proper loss function, e.g., the softmax cross-entropy loss ℓ_ce(f_c(x), y) = − Σ_{k=1}^K 1(y = k) log (f_c(x))_k = − log (f_c(x))_y, where 1(·) is the indicator function. In practice, ERM computes an approximation of R_c(f_c) based on the client's local data D_c: R̂_c(f_c; D_c) = (1/n_c) Σ_{i=1}^{n_c} ℓ(f_c(x_i^c), y_i^c). The goal of standard FL (McMahan et al., 2017) is for the C clients to collaboratively train a global classification model f that generalizes well with respect to p(x, y), without sharing their local data D_c.
The problem can be formalized as minimizing the aggregated risk: R(f) = (1/C) Σ_{c=1}^C R_c(f). (2) In practice, we typically employ a server to coordinate the iterative distributed training as follows: • in each global round of training, the server broadcasts its current model f to all C clients; • each client c copies the current server model, f_c = f, performs L local update steps, f_c ← f_c − α_l · ∇R̂_c(f_c; D_c), (3) where α_l is the local step-size, and sends f_c − f back to the server; • the server aggregates the updates {f_c − f}_{c=1}^C from all clients to form the new server model using FedAvg: f ← f + α_g · Σ_{c=1}^C (f_c − f), (4) where α_g is the global step-size. 3 FEDERATED LEARNING: NO LABEL NO CRY . In this section, we formulate the novel problem setting of FL with only unlabeled datasets, propose our method, called federation of unsupervised learning (FedUL), and provide a theoretical analysis of FedUL. All proofs are given in Appendix C. 3.1 PROBLEM FORMULATION . In this paper, we consider a challenging setting: instead of fully labeled training data, each client c ∈ [C] observes M_c (M_c ≥ K, ∀c)¹ sets of unlabeled samples, U_c = {U_c^m}_{m=1}^{M_c}, where U_c^m = {x_i^{c,m}}_{i=1}^{n_{c,m}} denotes the m-th U set of client c, with sample size n_{c,m}. Each U set can be seen as a set of data points drawn from a mixture of the original class-conditional densities: U_c^m ∼ p_c^m(x) = Σ_{k=1}^K π_c^{m,k} p(x | y = k), (5) where π_c^{m,k} = p_c^m(y = k) denotes the k-th class prior of the m-th U set. These class priors form a full-column-rank matrix Π_c ∈ ℝ^{M_c × K}, with the constraint that each row sums to 1. ¹Note that we do not require each client to have the same number of U sets. We only assume the condition M_c ≥ K, ∀c, to ensure a full-column-rank matrix Π_c, which is essential for FL with U datasets.
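Eq. (5) says each U set is a mixture of the shared class-conditionals with set-specific priors. A small simulation, using 1-D Gaussian class-conditionals as an illustrative stand-in (the class-conditionals are arbitrary in the paper), shows how shifted priors leave a visible footprint in the unlabeled data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_u_set(n, priors, means):
    """Draw one unlabeled set from the mixture
    p(x) = sum_k priors[k] * p(x | y = k), with toy Gaussian
    class-conditionals N(means[k], 1)."""
    ys = rng.choice(len(priors), size=n, p=priors)
    xs = rng.normal(loc=np.asarray(means)[ys], scale=1.0)
    return xs  # the labels ys are discarded: the set is unlabeled

# Two U sets sharing class-conditionals N(-2, 1) and N(2, 1),
# but with shifted class priors.
u1 = sample_u_set(20000, [0.8, 0.2], [-2.0, 2.0])
u2 = sample_u_set(20000, [0.3, 0.7], [-2.0, 2.0])
print(u1.mean(), u2.mean())  # near -1.2 and 0.8: the priors differ
```

The two sets have different marginal distributions purely because of the prior shift, which is the signal FedUL exploits in place of labels.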
Note that we do not require any labels to be known; we only assume that the class priors of each of the involved data sets are available, i.e., the training class priors π_c^{m,k} and the test class priors π_k = p(y = k). Our goal is the same as in standard FL: the C clients are interested in collaboratively training a global classification model f that generalizes well with respect to p(x, y), but using only the unlabeled datasets U_c on each local client. | The authors propose an approach for unsupervised Federated Learning. They propose to use empirical risk minimization for unsupervised learning: labels are assigned to each class, and a prior over the classes is used so that each client can learn without any labels. The authors show theoretical properties of their proposed solution: Theorem 1 shows that there is a map to transform the data to the classes, and Lemma 2 shows that there is a transition function for each of the clients. The authors also show convergence of the algorithm by deriving an upper bound. | SP:a69fd07798a781ba3b1f06d26c6c4554abcfd30f |
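The FedAvg-style training loop reviewed in Section 2 (local gradient steps, then server aggregation of the updates f_c − f) can be sketched end to end on a toy problem. Here each client minimizes a simple quadratic with a different optimum; with α_g = 1/C the aggregation reduces to plain model averaging, and the server model converges to the average of the client optima:

```python
import numpy as np

def local_update(f, grad_fn, alpha_l=0.1, steps=5):
    """Run L local gradient steps starting from the server model f (Eq. 3)."""
    fc = f.copy()
    for _ in range(steps):
        fc -= alpha_l * grad_fn(fc)
    return fc

def server_aggregate(f, client_models, alpha_g):
    """FedAvg-style aggregation of the client updates fc - f (Eq. 4)."""
    return f + alpha_g * sum(fc - f for fc in client_models)

# Toy setup: client c minimizes ||f - t_c||^2 / 2, so its gradient is f - t_c.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
f = np.zeros(2)
for _ in range(50):  # global rounds
    clients = [local_update(f, lambda w, t=t: w - t) for t in targets]
    f = server_aggregate(f, clients, alpha_g=1.0 / len(clients))
print(f)  # approaches the average of the client optima, [0.5, 0.5]
```

FedUL reuses exactly this loop unchanged; only the local loss (computed through the surrogate labels and the transition layer) differs from the supervised case.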
Does Entity Abstraction Help Generative Transformers Reason? | 1 INTRODUCTION . Transformer language models (TLMs; Vaswani et al. 2017) have enabled rapid progress in natural language processing (NLP). When pre-trained on large corpora (such as the web) to predict the next token or a set of masked tokens from an input sequence, TLMs can capture linguistic knowledge (Peters et al., 2018; Goldberg, 2019; Tenney et al., 2019b) and yield state-of-the-art performance on many NLP tasks with little to no task supervision (Devlin et al., 2019; Radford et al., 2018; 2019; Brown et al., 2020). However, it is not clear whether these models can capture higher-level knowledge, such as reasoning skills that can be re-used in arbitrary contexts and in ways that leverage the compositionality of those skills (Lake & Baroni, 2018; Liška et al., 2018), something logical reasoners can do relatively well on a smaller scale (De Raedt et al., 2007; Fierens et al., 2015). Simple compositional tasks such as SCAN (Lake & Baroni, 2018), CLUTRR (Sinha et al., 2019), and ProofWriter (Clark et al., 2020; Tafjord et al., 2020) can help diagnose the compositional generalization behavior of language models. Recent work on some of these datasets showed that TLMs still struggle to learn reasoning strategies that can be re-used in out-of-training-distribution settings (Lake & Baroni, 2018; Gontier et al., 2020). If we look at how logical reasoners operate, we find that they have an important abstraction component (going from grounded entities to higher-level concepts) before logical reasoning can start (De Raedt et al., 2007). Going from an original text sequence to its higher-order meaning is an important part of the NLP pipeline (part of it being entity type tagging).
Similarly, in mathematics, the introduction of generic variables allows one to progress in a logical reasoning process without keeping track of every (grounded) atomic entity. Overall, this idea, which we call abstraction, seems to be an important part of logical reasoning. Recent work suggests that incorporating external knowledge about grounded entities could improve language models' abilities to reason and generalize (Ji et al., 2017; Zhang et al., 2019; Moosavi et al., 2020; Rosset et al., 2020). However, the empirical effect of incorporating generic entity types remains unclear, especially with recent studies suggesting that pre-trained models already encode some of that linguistic knowledge in their parameters (Hewitt & Manning, 2019; Liu et al., 2019a; Clark et al., 2019; Tenney et al., 2019a). In this work, we study the effect of explicitly providing entity type abstraction in addition to the original input to pre-trained Transformers. We explore and evaluate different ways to incorporate entity type abstraction and observe that some methods are more efficient than others. To construct the abstract representation incorporated into TLMs, we leverage symbolic NLP representations such as the entity type information given by popular NLP libraries. This allows for automatic and reproducible data processing. In general, our approach is the following: given an input sequence, we build a simplified version of it by replacing entity names with their corresponding entity types. This simplified sequence can then be used as extra input (Sections 3.1 & 3.2) or as an extra training signal for the model (Section 3.3). In particular, we explore three different ways to augment pre-trained Transformers with this abstract knowledge: 1. by combining token embeddings from both the original and the abstract sequence before encoding (Section 3.1) (Figures 1a & 1b); 2.
by encoding both the original and the abstract sequence and combining them before decoding the target output (Section 3.2) (Figures 1c & 1d); 3. by adding a second language model head on top of the Transformer decoder to predict the abstract sequence (Section 3.3) (Figure 1e). A series of controlled experiments on two synthetic datasets shows that models with access to abstract knowledge about entity types yield better performance at inference time, both when interpolating and when extrapolating to unseen lengths of reasoning chains. Synthetic data is used in order to control for the degree of compositional generalization required. Furthermore, in order to understand whether the benefits observed in the synthetic cases carry over to more realistic settings, we ran a series of experiments on two question answering datasets requiring various degrees of multi-hop reasoning. Unfortunately, results on these more natural language datasets show that abstraction-aware models are not significantly better than baseline models. Overall, our contributions are the following: 1. we introduce and compare empirically different ways to incorporate abstraction into pre-trained TLMs; 2. we show that incorporating abstract knowledge can significantly improve reasoning and compositional generalization, in both interpolation and extrapolation, when the environment is formally defined in a logical reasoning setting; 3. we show that abstraction-aware models may not benefit much when language is more natural and less procedural. We hope that our work will inspire future research in the field to look for simple inductive biases that can complement pre-trained models in their quest to achieve logical reasoning at scale. 2 RELATED WORK . Augmenting neural language models with knowledge about entities has been a popular method to improve their functionality. Ji et al.
(2017) trained an entity neural language model to predict sequences of entities with an LSTM (Hochreiter & Schmidhuber, 1997). At each sampling step, they predict the next word alongside a categorical variable indicating the current token's entity ID. They obtained lower perplexity and better results on co-reference resolution and entity prediction tasks than a variety of baselines. Similarly, Rosset et al. (2020) trained a GPT2 model (Radford et al., 2019) by giving it access to entity knowledge at the input level and as an additional pre-training loss. Their model achieved better factual correctness on benchmarks such as LAMA (Petroni et al., 2019), and performed better than a baseline GPT2 model on various question answering tasks. Inspired by this work, and motivated by the goal of building better reasoning language models, we instead focus on the prediction of entity types rather than entity identifiers taken from a fixed list of entities. This makes our solution robust to new entities. In addition, we explore and compare different ways to incorporate the entity knowledge in an encoder-decoder architecture. Besides entity knowledge, other types of explicit information have also been given to Transformer models. Levine et al. (2019) trained a BERT-like model to learn word senses. They gave their model access to WordNet supersenses at the input level and as an additional training loss. They achieve better performance than other baselines on the SemEval Word Sense Disambiguation task (Raganato et al., 2017). In addition, Moosavi et al. (2020) propose to improve robustness to data biases by augmenting the training data with predicate-argument structures. They train a BERT-base model (Devlin et al., 2019) with PropBank-style semantic role labeling (Shi & Lin, 2019) on the MultiNLI (Williams et al., 2017) and SWAG (Zellers et al., 2018) datasets.
Their results show that incorporating predicate-argument structure in the input sequence (only during training) makes the model more robust to adversarial examples on MultiNLI. Furthermore, Sachan et al. (2020) ask whether syntax trees can help Transformers better extract information. They augmented a pre-trained BERT model with a syntax graph neural network in order to encode syntax trees, and measured the performance of their model against a BERT model on various tasks. Their results showed that the quality of the trees is highly tied to the performance boost observed. More recently, Porada et al. (2021) extended a RoBERTa model (Liu et al., 2019b) with hypernym abstraction based on WordNet to evaluate the plausibility of events. Their model is able to better predict human plausibility judgements than other RoBERTa baselines. Although different in application, all these prior works leverage the general idea of explicitly giving more abstract knowledge to Transformer models, showing how flexible and generic this strategy can be. We take a similar approach with entity types, in the hope of improving the reasoning skills of our baseline model. A flurry of recent work has also examined ways to augment TLMs with entities from external knowledge bases (Zhang et al., 2019; Peters et al., 2019; Févry et al., 2020; Verga et al., 2020). However, most of the time, these solutions rely on external components such as knowledge graphs with pre-trained entity embeddings and/or an additional memory. While they often use entity linking as a way to perform co-reference resolution, they do not incorporate higher levels of abstraction such as entity types, as we do here. 3 INTRODUCING ABSTRACTION INDUCTIVE BIASES . In this section we describe five different ways to incorporate abstraction into a pre-trained encoder-decoder model.
Given an input sequence X, we use existing NLP tools such as the spacy named entity tagger¹ to make a simplified copy Xs of the input. This is a more generic copy of X. We run the spacy recognizer on X to extract entity tags such as PERSON, ORG, GPE, etc. For each entity type, we create n additional vocabulary entries (with randomly initialized embeddings), such as [PERSON 1, ..., PERSON n, ORG 1, ..., ORG n, ...]. Every entity token in X is then replaced by its (randomly numbered) entity tag to make the simplified sequence Xs. If the same entity is present multiple times in X, each occurrence is replaced by the same entity tag in Xs. If no entity is found for a token in X, the original token's text is kept in Xs (e.g., "Bob Smith has a cat that he loves. Bob also loves Alexandra." would be transformed into "PERSON 11 has a cat that he loves. PERSON 11 also loves PERSON 23."). The hyper-parameter n is set such that each distinct entity within the same sequence gets a different ID. If the same entity appears more than once in a single example, it gets the same ID every time within that sequence. However, we re-use the same set of entity tags across different examples. The value of n depends on each dataset; details can be seen in Appendix A. Although this may look similar to attributing a different entity ID to each entity regardless of its type, we believe that, after seeing many examples, all IDs of the same type will have similar embeddings, since we randomly select IDs across examples and across epochs. The number of overlapping entity tags across the entire dataset over many epochs will be large, pushing the embeddings of same-type entity tokens closer together. To test the effect of having multiple tags for the same entity type, we also ran experiments in which we set n = 1, thus forcing overlapping entity tags within each example.
However, we did not notice any conclusive difference, so the rest of this work focuses on the n > 1 setting described above. ¹https://spacy.io/models/en In the following subsections, we describe different strategies to incorporate Xs into an encoder-decoder Transformer model. | This paper investigates incorporating entity abstraction into transformer language models for text reasoning tasks. The paper proposes different methods to inject entity abstraction information into transformer LMs, and experiments on a synthetic dataset show that the proposed method helps compositional generalization. However, experiments on two realistic datasets show that the proposed method fails to effectively improve performance. | SP:944ebd7c4c81d2266fe950501917c948f6925bb2 |
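The entity-tag replacement that builds Xs from X can be sketched deterministically. The toy "tagger" below is a fixed dictionary standing in for spacy's NER, and tag IDs are assigned in order of first appearance rather than randomly, so the output is reproducible; both choices are simplifications of the pipeline described above.

```python
import re

# Toy stand-in for a named-entity tagger: surface form -> entity type.
# The forms and types are illustrative assumptions, not spacy output.
TOY_ENTITIES = {"Bob": "PERSON", "Alexandra": "PERSON"}

def abstract_sequence(text, entities=TOY_ENTITIES):
    """Build the simplified copy Xs of X: each entity mention is replaced
    by a numbered type tag, and repeated mentions of the same entity
    within one sequence reuse the same tag. Non-entity tokens are kept.
    Overlapping surface forms would need longest-match-first ordering."""
    pattern = "|".join(re.escape(name) for name in entities)
    tag_of, counts = {}, {}

    def replace(match):
        name = match.group(0)
        if name not in tag_of:
            etype = entities[name]
            counts[etype] = counts.get(etype, 0) + 1
            tag_of[name] = f"{etype} {counts[etype]}"
        return tag_of[name]

    return re.sub(pattern, replace, text)

xs = abstract_sequence("Bob has a cat that he loves. Bob also loves Alexandra.")
print(xs)  # PERSON 1 has a cat that he loves. PERSON 1 also loves PERSON 2.
```

In the real pipeline the IDs would be drawn from the n tags per type at random per example, which is what pushes the embeddings of same-type tags together over training.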