Shape Defense

1 INTRODUCTION

Deep neural networks (LeCun et al., 2015) remain the state of the art across many areas and are employed in a wide range of applications. They also provide the leading model of biological neural networks, especially in visual processing (Kriegeskorte, 2015). Despite this unprecedented success, however, they can be easily fooled by adding carefully crafted, imperceptible noise to normal inputs (Szegedy et al., 2014; Goodfellow et al., 2015). This poses serious threats to their use in safety- and security-critical domains, and intensive efforts are ongoing to remedy the problem. Our primary goal here is to learn robust models for visual recognition, inspired by two observations. First, object shape remains largely invariant under imperceptible adversarial perturbations (Fig. 1). Shape is a distinctive attribute of an object and plays a vital role in recognition (Biederman, 1987). Humans rely heavily on edges and object boundaries, whereas CNNs rely more on texture (Geirhos et al., 2018). Second, unlike CNNs, we recognize objects one at a time through attention and background subtraction (e.g., Itti & Koch (2001)). These observations may explain why adversarial examples are so perplexing. The convolution operation in CNNs is biased toward capturing texture, since the number of pixels constituting texture far exceeds the number of pixels that fall on the object boundary. This, in turn, provides a large opportunity for adversarial image manipulation. Some attempts have been made to emphasize edges more, for example by utilizing normalization layers (e.g., contrast and divisive normalization (Krizhevsky et al., 2012)). Such attempts, however, have not been fully investigated for adversarial defense. Overall, how shape and texture should be reconciled in CNNs remains an open question. Here we propose two solutions that can be easily implemented and integrated into existing defenses.
We also investigate possible adaptive attacks against them. Extensive experiments across ten datasets, over which shape and texture have different relative importance, demonstrate the effectiveness of our solutions against strong attacks. Our first method performs adversarial training on edge-augmented inputs. The second method uses a conditional GAN (Isola et al., 2017) to translate edge maps to clean images, essentially finding a perturbation-invariant transformation; there is no need for adversarial training (and hence less computation) in this method. Further, and perhaps less surprisingly, we find that incorporating edges also makes CNNs more robust to natural image corruptions and backdoor attacks. The versatility and effectiveness of these approaches, achieved without significant parameter tuning, is very promising. Ultimately, our study shows that shape is key to building robust models and opens a new direction for future research in adversarial robustness.

2 RELATED WORK

Here we provide a brief overview of closely related research, with an emphasis on adversarial defenses. For a detailed survey of this topic, please refer to Akhtar & Mian (2018).

Adversarial attacks. The goal of the adversary is to craft an adversarial input x̃ ∈ R^d by adding an imperceptible perturbation ε to the (legitimate) input x ∈ R^d (here in the range [0, 1]), i.e., x̃ = x + ε. Here, we consider two attacks based on the ℓ∞-norm of ε: the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) and the Projected Gradient Descent (PGD) method (Madry et al., 2017). Both white-box and black-box attacks in the untargeted setting are considered. Deep models are also susceptible to image transformations other than adversarial attacks (e.g., noise, blur), as shown in Hendrycks & Dietterich (2019) and Azulay & Weiss (2018).

Adversarial defenses.
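The FGSM and PGD updates just described can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the signed-gradient step and the ℓ∞ projection follow the cited papers, but the function names and the toy gradient interface (`grad`, `grad_fn`) are our own assumptions.

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-step FGSM: move x by eps in the sign direction of the loss
    gradient, then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def pgd(x, grad_fn, eps, alpha, steps):
    """PGD: iterate small signed-gradient steps of size alpha, projecting
    the result back into the l-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

Note that PGD with one step and alpha = eps reduces to FGSM followed by the same projection.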
Recently, there has been a surge of methods to mitigate the threat of adversarial attacks, either by making models robust to perturbations or by detecting and rejecting malicious inputs. A popular defense is adversarial training, in which a network is trained on adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015). In particular, adversarial training with a PGD adversary remains empirically robust to this day (Athalye et al., 2018). Drawbacks of adversarial training include degraded clean performance, high computational cost, and overfitting to the attacks it is trained on. Some defenses, such as Feature Squeezing (Xu et al., 2017), Feature Denoising (Xie et al., 2019), PixelDefend (Song et al., 2017), JPEG Compression (Dziugaite et al., 2016), and Input Transformation (Guo et al., 2017), attempt to purify maliciously perturbed images by transforming them back toward the distribution seen during training. MagNet (Meng & Chen, 2017) trains a reformer network (one or multiple auto-encoders) to move the adversarial image closer to the manifold of legitimate images. Likewise, Defense-GAN (Samangouei et al., 2018) uses GANs (Goodfellow et al., 2014) to project samples onto the manifold of the generator before classifying them. A similar approach based on Variational AutoEncoders (VAEs) is proposed in Li & Ji (2019). Unlike these works, which are based on texture (and hence are fragile (Athalye et al., 2018)), our GAN-based defense is built upon edge maps. Some defenses are inspired by biology (e.g., Dapello et al. (2020), Li et al. (2019), Strisciuglio et al. (2020), Reddy et al. (2020)).

Shape vs. texture. Geirhos et al. (2018) discovered that CNNs routinely latch on to object texture, whereas humans pay more attention to shape. When presented with stimuli with conflicting cues (e.g., a cat shape with elephant skin texture; Appx.
A), human subjects correctly labeled them based on their shape. In sharp contrast, predictions made by CNNs were mostly based on texture (see also Hermann & Kornblith (2019)). Similar results are reported by Baker et al. (2018). Hermann et al. (2020) studied the factors that produce texture bias in CNNs and found that data augmentation plays a significant role in mitigating it. Xiao et al. (2019), in parallel to our work, have also proposed methods to utilize shape for adversarial defense. They perform classification on the edge map rather than the image itself; this is a baseline method against which we compare our algorithms. Similar to us, they also use GANs to purify the input image.

Algorithm 1: Edge-guided adversarial training (EAT) for T epochs, perturbation budget ε, and loss balance ratio α, over a dataset of size M for a network f_θ (performed in minibatches in practice). β ∈ {edge, img, imgedge} indicates the network type, and redetect_train means edge re-detection during training.

for t = 1 ... T do
  for i = 1 ... M do
    // launch adversarial attack (here FGSM and PGD attacks)
    x̃_i = clip(x_i + ε sign(∇_x ℓ(f_θ(x_i), y_i)))
    if β == imgedge and redetect_train then
      x̃_i = detect_edge(x̃_i)   // recompute and replace the edge map
    end if
    ℓ = α ℓ(f_θ(x_i), y_i) + (1 − α) ℓ(f_θ(x̃_i), y_i)   // here α = 0.5
    θ = θ − ∇_θ ℓ   // update model weights with some optimizer, e.g., Adam
  end for
end for

Algorithm 2: GAN-based shape defense (GSD)

// Training
1. Create a dataset of images X = {x_i, y_i}_{i=1...N} including clean and/or perturbed images
2. Extract edge maps (e_i) for all images in the dataset
3. Train a conditional GAN p_g(x|e) to map edge image e to clean image x   // here pix2pix
4. Train a classifier p_c(y|x) to map generated image x to class label y

// Inference
1. For input image x, clean or perturbed, first compute the edge image e
2.
Then, compute p_c(y|x′), where x′ is the generated image corresponding to e.

3 PROPOSED METHODS

Edge-guided Adversarial Training (EAT). The intuition here is that the edge map retains the structure of the image and helps disambiguate the classification (see Fig. 1). In its simplest form (Fig. 7(A) in Appx. A; Alg. 1), adversarial training is performed over the 2-channel (Gray+Edge) or 4-channel (RGB+Edge) input (denoted Img+Edge). In a slightly more elaborate form (Fig. 7(B)), for each input (clean or adversarial), the old edge map is first replaced with a newly extracted one. The edge map can be computed from the average of the image channels only, or of all available channels (i.e., image plus edge). The latter can sometimes improve results, since the old edge map (although perturbed; Fig. 10 and Appx. B) still contains unaltered shape structure. Adversarial training is then performed over the new input. The reason for adversarial training with re-detected edges is to expose the network to possible damage to image structure. The training loss is a weighted combination of the loss over clean images and the loss over adversarial images. At inference time, the edge map is first computed, and classification is then done over the edge-augmented input. As a baseline, we also consider first detecting the input's edge map and then feeding it to a model trained on edges for classification; we refer to this model as Img2Edge.

GAN-based Shape Defense (GSD). Here, a conditional GAN is first trained to map the edge image, from clean or adversarial images, to its corresponding clean image (Alg. 2). Any image-translation method (here pix2pix by Isola et al. (2017), using this code1) can be employed for this purpose. Next, a CNN is trained over the generated images.
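One EAT minibatch update (Alg. 1), together with the Img+Edge input construction, can be sketched as follows. This is a framework-agnostic sketch under our own assumptions: `loss_and_grad`, `attack`, and `detect_edge` are caller-supplied placeholders standing in for the network loss/gradient, the FGSM/PGD attack, and edge (re-)detection; they are not names from the authors' code.

```python
import numpy as np

def augment_with_edge(img, detect_edge):
    """Build the Img+Edge input: append the edge map as an extra channel
    (2 channels for grayscale input, 4 for RGB). img is C x H x W."""
    e = detect_edge(img)  # H x W edge map
    return np.concatenate([img, e[None]], axis=0)  # (C+1) x H x W

def eat_step(x, y, loss_and_grad, attack, detect_edge,
             redetect=True, alpha=0.5):
    """One EAT minibatch step (Alg. 1).

    loss_and_grad(x, y) -> (loss, param_grad); attack(x, y) -> adversarial x.
    Returns the combined loss and its parameter gradient."""
    x_adv = attack(x, y)
    if redetect:
        # recompute and replace the edge channel on the adversarial input
        x_adv = detect_edge(x_adv)
    clean_loss, clean_grad = loss_and_grad(x, y)
    adv_loss, adv_grad = loss_and_grad(x_adv, y)
    loss = alpha * clean_loss + (1.0 - alpha) * adv_loss
    grad = alpha * clean_grad + (1.0 - alpha) * adv_grad
    return loss, grad  # caller applies grad with an optimizer, e.g., Adam
```

The α = 0.5 default mirrors the balance used in Alg. 1; setting `redetect=False` recovers plain adversarial training on the fixed edge-augmented input.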
At inference time, the edge map is first computed, and classification is then done over the image generated from this edge map. The intuition is that the edge map remains nearly the same for small perturbation budgets (see Appx. A). Note that the conditional GAN can also be trained on perturbed images (similar to Samangouei et al. (2018) and Li & Ji (2019)) or on edge-augmented perturbed images (similar to the above).

4 EXPERIMENTS AND RESULTS

4.1 DATASETS AND MODELS

Experiments span 10 datasets covering a variety of stimulus types; sample images from the datasets are given in Fig. 2. Models are trained with the cross-entropy loss and the Adam optimizer (Kingma & Ba, 2014) with a batch size of 100, for 20 epochs over MNIST and FashionMNIST, 30 over DogVsCat, and 10 over the remaining datasets. The Canny method (Canny, 1986) is used for edge detection over all datasets except DogBreeds, for which Sobel is used. Edge-detection parameters are adjusted separately for each dataset. We did not carry out an exhaustive hyperparameter search, since we are interested in the additional benefits edges may bring rather than in training the best possible models. The first two datasets are MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017). A CNN with 2 convolution, 2 pooling, and 2 fully connected layers is trained. Each of these datasets contains 60K training images (resolution 28×28) and 6K test images over 10 classes. The third dataset, DogVsCat2, contains 18,085 training and 8,204 test images. Images in this dataset are of varying dimensions and are resized here to 150×150 pixels to save computation. A CNN with 4 convolution, 4 pooling, and 2 fully connected layers is trained from scratch. Over the remaining datasets, we fine-tune a ResNet18 (He et al., 2016) pre-trained on ImageNet (Deng et al., 2009), and normalize images using the ImageNet mean and standard deviation.

1 https://github.com/mrzhu-cool/pix2pix-pytorch
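The GSD inference path just described (edge map → generator → classifier) is a straightforward composition. The sketch below uses placeholder callables, our own names rather than anything from the pix2pix code, to make the pipeline explicit:

```python
def gsd_predict(x, detect_edge, generator, classifier):
    """GSD inference (Alg. 2): compute the edge map of the (possibly
    perturbed) input, translate it to a clean-looking image with the
    conditional GAN generator, and classify the generated image."""
    e = detect_edge(x)    # edge map is nearly invariant to small perturbations
    x_gen = generator(e)  # edge -> image translation (e.g., pix2pix)
    return classifier(x_gen)
```

Because the classifier never sees the raw input, any perturbation that leaves the edge map unchanged is discarded before classification.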
The fourth dataset, CIFAR10 (Krizhevsky, 2009), contains 50K training and 10K test images at a resolution of 32×32, resized here to 64×64 for better edge detection. The fifth dataset is DogBreeds (see footnote); it contains 1,421 training and 356 test images at resolution 224×224 over 16 classes. The sixth dataset, GTSRB (Stallkamp et al., 2012), includes 39,209 training and 12,630 test images over 43 classes (resolution 64×64 pixels). The seventh dataset, Icons-50, includes 6,975 training and 3,025 test images over 50 classes (Hendrycks & Dietterich, 2019); the original 120×120 images are resized to 64×64. The eighth dataset, Sketch, contains 14K training and 6K test images over 250 classes; images are 1111×1111 and are resized to 64×64 in the experiments (Eitz et al., 2012). The ninth and tenth datasets are derived from ImageNet3. The Imagenette2-160 dataset has 9,469 training and 3,925 test images (resolution 160×160) over 10 classes (tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, and parachute). The Tiny ImageNet dataset has 100K training images (resolution 64×64) and 10K validation images (used here as the test set) over 200 classes. For attacks, we use https://github.com/Harry24k/adversarial-attacks-pytorch, except for the Boundary attack, for which we use https://github.com/bethgelab/foolbox.

Summary: This paper aims to improve adversarial robustness by taking object shape into account via edge maps. Two strategies are proposed to increase model robustness using edge maps: i) conduct adversarial training on input images concatenated with their corresponding edge maps as an additional input channel.
Here, the edge maps are recomputed and concatenated to the adversarial inputs after the adversarial examples are generated during training. ii) Use a conditional GAN, conditioned on the edge maps, to generate images from the clean data distribution, and evaluate the classifier on the generated images. The authors claim that these two strategies improve classifier robustness, based on experiments across 10 different datasets. They also study the effectiveness of their strategy when combined with background subtraction, as a defense against poisoning attacks, and in terms of robustness against natural image corruptions.
Summary: This paper investigates incorporating shape information into deep neural networks to improve their adversarial robustness. It proposes two methods: the first augments the input with its corresponding edge map and then adversarially trains a CNN on the augmented input. The second trains a conditional GAN to reconstruct images from edge maps and uses the reconstructed image as input to a standard classifier.
Shape Defense | 1 INTRODUCTION . Deep neural networks ( LeCun et al. , 2015 ) remain the state of the art across many areas and are employed in a wide range of applications . They also provide the leading model of biological neural networks , especially in visual processing ( Kriegeskorte , 2015 ) . Despite the unprecedented success , however , they can be easily fooled by adding carefully-crafted imperceptible noise to normal inputs ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) . This poses serious threats in using them in safety- and security-critical domains . Intensive efforts are ongoing to remedy this problem . Our primary goal here is to learn robust models for visual recognition inspired by two observations . First , object shape remains largely invariant to imperceptible adversarial perturbations ( Fig . 1 ) . Shape is a sign of an object and plays a vital role in recognition ( Biederman , 1987 ) . We rely heavily on edges and object boundaries , whereas CNNs emphasize more on texture ( Geirhos et al. , 2018 ) . Second , unlike CNNs , we recognize objects one at a time through attention and background subtraction ( e.g. , Itti & Koch ( 2001 ) ) . These may explain why adversarial examples are perplexing . The convolution operation in CNNs is biased towards capturing texture since the number of pixels constituting texture far exceeds the number of pixels that fall on the object boundary . This in turn provides a big opportunity for adversarial image manipulation . Some attempts have been made to emphasize more on edges , for example by utilizing normalization layers ( e.g. , contrast and divisive normalization ( Krizhevsky et al. , 2012 ) ) . Such attempts , however , have not been fully investigated for adversarial defense . Overall , how shape and texture should be reconciled in CNNs continues to be an open question . Here we propose two solutions that can be easily implemented and integrated in existing defenses . 
We also investigate possible adaptive attacks against them . Extensive experiments across ten datasets , over which shape and texture have different relative importance , demonstrate the effectiveness of our solutions against strong attacks . Our first method performs adversarial training on edge-augmented inputs . The second method uses a conditional GAN ( Isola et al. , 2017 ) to translate edge maps to clean images , essentially finding a perturbation-invariant transformation . There is no need for adversarial training ( and hence less computation ) in this method . Further , and perhaps less surprising , we find that incorporating edges also makes CNNs more robust to natural images corruptions and backdoor attacks . The versatility and effectiveness of these approaches , without significant parameter tuning , is very promising . Ultimately , our study shows that shape is the key to build robust models and opens a new direction for future research in adversarial robustness . 2 RELATED WORK . Here , we provide a brief overview of the closely related research with an emphasis on adversarial defenses . For detailed comments on this topic , please refer to Akhtar & Mian ( 2018 ) . Adversarial attacks . The goal of the adversary is to craft an adversarial input x̃ ∈ Rd by adding an imperceptible perturbation to the ( legitimate ) input x ∈ Rd ( here in the range [ 0,1 ] ) , i.e. , x̃ = x + . Here , we consider two attacks based on the ` ∞-norm of , the Fast Gradient Sign Method ( FGSM ) ( Goodfellow et al. , 2015 ) , as well as the Projected Gradient Descent ( PGD ) method ( Madry et al. , 2017 ) . Both white-box and black-box attacks in the untargeted condition are considered . Deep models are also susceptible to image transformations other than adversarial attacks ( e.g. , noise , blur ) , as is shown in Hendrycks & Dietterich ( 2019 ) and Azulay & Weiss ( 2018 ) . Adversarial defenses . 
Recently , there has been a surge of methods to mitigate the threat from adversarial attacks either by making models robust to perturbations or by detecting and rejecting malicious inputs . A popular defense is adversarial training in which a network is trained on adversarial examples ( Szegedy et al. , 2014 ; Goodfellow et al. , 2015 ) . In particular , adversarial training with a PGD adversary remains empirically robust to this day ( Athalye et al. , 2018 ) . Drawbacks of adversarial training include impacting clean performance , being computationally expensive , and overfitting to the attacks it is trained on . Some defenses , such as Feature Squeezing ( Xu et al. , 2017 ) , Feature Denoising ( Xie et al. , 2019 ) , PixelDefend ( Song et al. , 2017 ) , JPEG Compression ( Dziugaite et al. , 2016 ) and Input Transformation ( Guo et al. , 2017 ) , attempt to purify the maliciously perturbed images by transforming them back towards the distribution seen during training . MagNet ( Meng & Chen , 2017 ) trains a reformer network ( one or multiple auto-encoders ) to move the adversarial image closer to the manifold of legitimate images . Likewise , Defense-GAN ( Samangouei et al. , 2018 ) uses GANs ( Goodfellow et al. , 2014 ) to project samples onto the manifold of the generator before classifying them . A similar approach based on Variational AutoEncoders ( VAE ) is proposed in Li & Ji ( 2019 ) . Unlike these works which are based on texture ( and hence are fragile ( Athalye et al. , 2018 ) ) , our GAN-based defense is built upon edge maps . Some defenses are inspired by biology ( e.g. , Dapello et al . ( 2020 ) , Li et al . ( 2019 ) , Strisciuglio et al . ( 2020 ) , Reddy et al . ( 2020 ) ) . Shape vs. texture . Geirhos et al . ( 2018 ) discovered that CNNs routinely latch on to the object texture , whereas humans pay more attention to shape . When presented with stimuli with conflicting cues ( e.g. , a cat shape with elephant skin texture ; Appx . 
A), human subjects correctly labeled them based on their shape. In sharp contrast, predictions made by CNNs were mostly based on the texture (see also Hermann & Kornblith (2019)). Similar results are also reported by Baker et al. (2018). Hermann et al. (2020) studied the factors that produce texture bias in CNNs and found that data augmentation plays a significant role in mitigating texture bias. Xiao et al. (2019), in parallel to our work, have also proposed methods that utilize shape for adversarial defense. They perform classification on the edge map rather than the image itself. This is a baseline method against which we compare our algorithms. Similar to us, they also use GANs to purify the input image.

Algorithm 1: Edge-guided adversarial training (EAT) for T epochs, perturbation budget ε, and loss balance ratio α, over a dataset of size M for a network fθ (performed in minibatches in practice). β ∈ {edge, img, imgedge} indicates the network type, and redetect_train means edge redetection during training.

for t = 1 ... T do
  for i = 1 ... M do
    // launch adversarial attack (here FGSM and PGD attacks)
    x̃i = clip(xi + ε · sign(∇x ℓ(fθ(xi), yi)))
    if β == imgedge and redetect_train then
      x̃i = detect_edge(x̃i)  // recompute and replace the edge map
    end if
    ℓ = α ℓ(fθ(xi), yi) + (1 − α) ℓ(fθ(x̃i), yi)  // here α = 0.5
    θ = θ − ∇θ ℓ  // update model weights with some optimizer, e.g., Adam
  end for
end for

Algorithm 2: GAN-based shape defense (GSD).

// Training
1. Create a dataset of images X = {xi, yi} i=1···N including clean and/or perturbed images
2. Extract edge maps (ei) for all images in the dataset
3. Train a conditional GAN pg(x|e) to map edge image e to clean image x  // here pix2pix
4. Train a classifier pc(y|x) to map generated image x to class label y

// Inference
1. For input image x, clean or perturbed, first compute the edge image e
2. Then, compute pc(y|x′), where x′ is the generated image corresponding to e

3 PROPOSED METHODS . Edge-guided Adversarial Training (EAT). The intuition here is that the edge map retains the structure in the image and helps disambiguate the classification (see Fig. 1). In its simplest form (Fig. 7(A) in Appx. A; Alg. 1), adversarial training is performed over the 2D (Gray+Edge) or 4D (RGB+Edge) input (i.e., number of channels; denoted as Img+Edge). In a slightly more complicated form (Fig. 7(B)), first, for each input (clean or adversarial), the old edge map is replaced with the newly extracted one. The edge map can be computed from the average of only the image channels or of all available channels (i.e., image plus edge). The latter can sometimes improve the results, since the old edge map (although perturbed; Fig. 10 and Appx. B) still contains unaltered shape structures. Then, adversarial training is performed over the new input. The reason for adversarial training with redetected edges is to expose the network to possible damage to the image structure. The training loss is a weighted combination of the loss over clean images and the loss over adversarial images. At inference time, first the edge map is computed, and then classification is done over the edge-augmented input. As a baseline model, we also consider first detecting the input’s edge map and then feeding it to the model trained on the edges for classification. We refer to this model as Img2Edge. GAN-based Shape Defense (GSD). Here, first, a conditional GAN is trained to map the edge image, from clean or adversarial images, to its corresponding clean image (Alg. 2). Any image translation method (here pix2pix by Isola et al. (2017), using this code¹) can be employed for this purpose. Next, a CNN is trained over the generated images.
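The FGSM step and the weighted clean/adversarial loss of Algorithm 1 can be sketched with a minimal NumPy logistic-regression stand-in for the network fθ; the model, step size, and loss below are illustrative assumptions, not the paper's CNN implementation, and the edge channel handling is omitted for brevity.

```python
import numpy as np

def fgsm(x, y, w, eps):
    """One FGSM step: x_adv = clip(x + eps * sign(grad_x loss))."""
    p = 1.0 / (1.0 + np.exp(-x @ w))       # sigmoid prediction of a logistic model
    grad_x = (p - y) * w                   # gradient of binary cross-entropy w.r.t. x
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def eat_step(x, y, w, eps=0.1, alpha=0.5, lr=0.1):
    """One training step on the weighted clean + adversarial loss (alpha = 0.5)."""
    x_adv = fgsm(x, y, w, eps)
    grad_w = np.zeros_like(w)
    for xi, wt in ((x, alpha), (x_adv, 1.0 - alpha)):
        p = 1.0 / (1.0 + np.exp(-xi @ w))
        grad_w += wt * (p - y) * xi        # gradient of binary cross-entropy w.r.t. w
    return w - lr * grad_w

rng = np.random.default_rng(0)
w = rng.normal(size=4)
x, y = rng.random(4), 1.0
w_new = eat_step(x, y, w)
```

In the full method, x would carry an extra edge channel that is recomputed by detect_edge after the attack when redetect_train is on.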
At inference time, first the edge map is computed, and then classification is done over the image generated from this edge map. The intuition is that the edge map remains nearly the same over small perturbation budgets (see Appx. A). Notice that the conditional GAN can also be trained on perturbed images (similar to Samangouei et al. (2018) and Li & Ji (2019)) or edge-augmented perturbed images (similar to above). 4 EXPERIMENTS AND RESULTS . 4.1 DATASETS AND MODELS . Experiments are spread across 10 datasets covering a variety of stimulus types. Sample images from the datasets are given in Fig. 2. Models are trained with the cross-entropy loss and the Adam optimizer (Kingma & Ba, 2014) with a batch size of 100, for 20 epochs over MNIST and FashionMNIST, 30 over DogVsCat, and 10 over the remaining datasets. The Canny method (Canny, 1986) is used for edge detection over all datasets, except DogBreeds for which Sobel is used. Edge detection parameters are adjusted separately for each dataset. We did not carry out an exhaustive hyperparameter search, since we are interested in the additional benefits edges may bring rather than in training the best possible models. ¹https://github.com/mrzhu-cool/pix2pix-pytorch The first two datasets are MNIST (LeCun et al., 1998) and FashionMNIST (Xiao et al., 2017). A CNN with 2 convolution, 2 pooling, and 2 fc layers is trained. Each of these datasets contains 60K training images (resolution 28×28) and 6K test images over 10 classes. The third dataset, DogVsCat², contains 18,085 training and 8,204 test images. Images in this dataset are of varying dimensions. They are resized here to 150×150 pixels to save computation. A CNN with 4 convolution, 4 pooling, and 2 fc layers is trained from scratch. Over the remaining datasets, we finetune a pre-trained ResNet18 (He et al., 2016), trained over ImageNet (Deng et al., 2009), and normalize images using the ImageNet mean and standard deviation.
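Edge extraction itself is standard; below is a minimal NumPy Sobel gradient-magnitude sketch. The paper uses Canny for most datasets and Sobel only for DogBreeds, and the zero padding and normalization here are simplifying assumptions.

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel filters (zero padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    p = np.pad(img.astype(float), 1)
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = p[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

# vertical step image -> strong response around the boundary column
img = np.zeros((8, 8)); img[:, 4:] = 1.0
edges = sobel_edges(img)
```

In practice one would use a vectorized or library implementation (e.g., OpenCV's Canny) and tune its thresholds per dataset, as the paper does.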
The fourth dataset, CIFAR10 (Krizhevsky, 2009), contains 50K training and 10K test images with a resolution of 32×32, which are resized here to 64×64 for better edge detection. The fifth dataset is DogBreeds (see footnote). It contains 1,421 training and 356 test images at resolution 224×224 over 16 classes. The sixth dataset is GTSRB (Stallkamp et al., 2012) and includes 39,209 and 12,631 training and test images, respectively, over 43 classes (resolution 64×64 pixels). The seventh dataset, Icons-50, includes 6,975 training and 3,025 test images over 50 classes (Hendrycks & Dietterich, 2019). The original image size is 120×120, which is resized to 64×64. The eighth dataset, Sketch, contains 14K training and 6K test images over 250 classes. Images have size 1111×1111 and are resized to 64×64 in experiments (Eitz et al., 2012). The ninth and tenth datasets are derived from ImageNet³. The Imagenette2-160 dataset has 3,925 training and 9,469 test images (resolution 160×160) over 10 classes (tench, English springer, cassette player, chain saw, church, French horn, garbage truck, gas pump, golf ball, and parachute). The Tiny Imagenet dataset has 100K training images (resolution 64×64) and 10K validation images (used here as the test set) over 200 classes. For attacks, we use https://github.com/Harry24k/adversarial-attacks-pytorch, except for the Boundary attack, for which we use https://github.com/bethgelab/foolbox . | This paper studies how to incorporate shape (particularly depth map) into CNN for more robust models. The study focuses on image classification. Specifically, this paper proposes two depth-map-based defenses: 1) Edge-guided Adversarial Training (EAT), which uses the depth map as an additional input; 2) GAN-based Shape Defense (GSD), which learns a generator from depth map to reconstructed images, which is then used as net input. 
Experiments on 10 datasets show the effectiveness of the proposed two defenses against white-box attacks including FGSM and PGD40. To further demonstrate the effectiveness, the authors also conduct several other experiments: 1) the proposed EAT goes well with two fast AT algorithms; 2) the proposed algorithm can also be used to defend against backdoor attacks; 3) edges make CNNs more robust to common image corruptions. | SP:aba0fd37465ee59982d617e32243307543cb0cb0
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space | 1 INTRODUCTION . Individualization proposes to leverage omni-channel data to meet individual needs . Individualized decision making plays a vital role in a wide variety of applications . Examples include customized pricing strategy in economics ( Qiang & Bayati , 2016 ; Turvey , 2017 ) , individualized treatment regime in medicine ( Chakraborty , 2013 ; Collins & Varmus , 2015 ) , personalized recommendation system in marketing ( McInerney et al. , 2018 ; Fong et al. , 2018 ) , etc . Prior to adopting any decision rule in practice , it is crucial to know the impact of implementing such a policy . In many applications , it is risky to run a policy online to estimate its value ( see , e.g. , Li et al. , 2011 ) . Off-policy evaluation ( OPE ) thus attracts a lot of attention by learning the policy value offline using logged historical data . Despite the popularity of developing OPE methods with a finite set of actions ( see e.g. , Dudı́k et al. , 2011 ; 2014 ; Swaminathan et al. , 2017 ; Wang et al. , 2017 ) , less attention has been paid to continuous action domains , such as dynamic pricing ( den Boer & Keskin , 2020 ) and personalized dose finding ( Chen et al. , 2016 ) . Recently , a few OPE methods have been proposed to handle continuous actions ( Kallus & Zhou , 2018 ; Sondhi et al. , 2020 ; Colangelo & Lee , 2020 ) . All these methods rely on the use of a kernel function to extend the inverse probability weighting ( IPW ) or doubly robust ( DR ) approaches developed in discrete action domains . They suffer from three limitations . First , the validity of these methods requires the conditional mean of the reward given the feature-action pair to be a smooth function over the action space . 
This assumption could be violated in applications such as dynamic pricing , where the expected demand for a product has jump discontinuities as a function of the charged price ( den Boer & Keskin , 2020 ) . Second , the value estimator could be sensitive to the choice of the bandwidth parameter in the kernel function . It remains challenging to select this hyperparameter . Kallus & Zhou ( 2018 ) proposed to tune this parameter by minimizing the mean squared error of the resulting value estimator . However , their method is extremely computationally intensive in moderate or high-dimensional feature space ; see Section 5 for details . Third , these kernel-based methods typically use a single bandwidth parameter . This is sub-optimal in cases where the second-order derivative of the conditional mean function has an abrupt change in the action space ; see the toy example in Section 3.1 for details . To address these limitations , we develop a deep jump Q-evaluation ( DJQE ) method by integrating multi-scale change point detection ( see e.g. , Fryzlewicz , 2014 ) , deep learning ( LeCun et al. , 2015 ) and OPE in discrete action domains . The key ingredient of our method lies in adaptively discretizing the action space using deep jump Q-learning . This allows us to apply IPW or DR methods to handle continuous actions . It is worth mentioning that our method does not require kernel bandwidth selection . Theoretically , we show it allows the conditional mean to be either a continuous or piecewise function of the action ( Theorems 1 and 2 ) and converges faster than kernel-based OPE ( Theorem 3 ) . Empirically , we show it outperforms state-of-the-art OPE methods in synthetic and real datasets . 2 PRELIMINARIES . We first formulate the OPE problem . We next discuss the kernel-based OPE methods and multi-scale change point detection , since our proposal is closely related to them . 2.1 OFF-POLICY EVALUATION . 
The observed dataset can be summarized as {(Xᵢ, Aᵢ, Yᵢ)}₁≤ᵢ≤ₙ, where Oᵢ = (Xᵢ, Aᵢ, Yᵢ) denotes the feature-action-reward triplet for the ith subject and n denotes the total sample size. We assume these data triplets are independent copies of some population variables (X, A, Y). Let X and A denote the feature and action space, respectively. We focus on the setting where A is one-dimensional, as in dynamic pricing and personalized dose finding. A deterministic policy π : X → A determines the action to be assigned given the observed feature. We use b to denote the behavior policy that generates the observed data. Specifically, b(·|x) denotes the probability density or mass function of A given X = x, depending on whether A is continuous or not. Define the expected reward function conditional on the feature-action pair as Q(x, a) = E{Y | X = x, A = a}. We refer to this function as the Q-function, to be consistent with the literature on developing individualized treatment regimes (Murphy, 2003). As standard in the OPE and causal inference literature (see e.g., Chen et al., 2016), we assume the stable unit treatment value assumption (SUTVA), the no unmeasured confounders assumption, and the positivity assumption are satisfied. These assumptions guarantee that a policy's value is estimable from the observed data. Specifically, for a given target policy π, its value can be represented by V(π) = E{Q(X, π(X))}. The goal of OPE is to learn the value V(π) based on the observed data. 2.2 KERNEL-BASED OPE . For discrete actions, Zhang et al. (2012) and Dudík et al. (2011) proposed the following DR estimator of V(π):

$$\frac{1}{n}\sum_{i=1}^{n} \psi(O_i, \pi, \hat{Q}, \hat{b}) = \frac{1}{n}\sum_{i=1}^{n}\Big[\hat{Q}(X_i, \pi(X_i)) + \frac{\mathbb{I}(A_i = \pi(X_i))}{\hat{b}(A_i \mid X_i)}\big\{Y_i - \hat{Q}(X_i, \pi(X_i))\big\}\Big], \qquad (1)$$

where I denotes the indicator function, and Q̂ and b̂ denote some estimators of the Q-function and the behavior policy.
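Equation 1 is straightforward to write down for discrete actions; below is a small NumPy sketch on a simulated bandit, where the target policy, Q̂, and b̂ are toy assumptions. Setting Q̂ = 0 recovers the IPW estimator.

```python
import numpy as np

def dr_value(X, A, Y, pi, Q_hat, b_hat):
    """Doubly robust estimator of V(pi) for discrete actions (equation 1)."""
    a_pi = pi(X)                                       # actions the target policy would take
    direct = Q_hat(X, a_pi)                            # model-based (direct) term
    aug = (A == a_pi).astype(float) / b_hat(A, X) * (Y - Q_hat(X, a_pi))
    return float(np.mean(direct + aug))                # average of the per-sample terms

# toy check: binary action, true Q(x, a) = a, uniform behavior policy
rng = np.random.default_rng(1)
n = 20000
X = rng.random(n)
A = rng.integers(0, 2, size=n).astype(float)
Y = A + rng.normal(scale=0.1, size=n)                  # reward depends only on the action
pi = lambda X: np.ones_like(X)                         # target policy: always a = 1, so V(pi) = 1
Q_hat = lambda X, a: np.zeros_like(X)                  # deliberately misspecified -> pure IPW
b_hat = lambda A, X: np.full_like(A, 0.5)
v = dr_value(X, A, Y, pi, Q_hat, b_hat)
```

Even with Q̂ = 0, the estimate stays consistent because b̂ is correct, which is the double-robustness property discussed next.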
The second term, b̂⁻¹(Aᵢ|Xᵢ) I(Aᵢ = π(Xᵢ)){Yᵢ − Q̂(Xᵢ, π(Xᵢ))}, inside the bracket corresponds to an augmentation term. Its expectation equals zero when Q̂ = Q. The purpose of adding this term is to offer additional protection against potential model misspecification of the Q-function. Such an estimator is doubly robust in the sense that its consistency relies on either Q̂ or b̂ being correctly specified. By setting Q̂ = 0, equation 1 reduces to the IPW estimator. In continuous action domains, the indicator function I(Aᵢ = π(Xᵢ)) equals zero almost surely. Consequently, naively applying equation 1 yields the plug-in estimator (1/n) ∑ᵢ Q̂(Xᵢ, π(Xᵢ)). To address this concern, kernel-based OPE replaces the indicator function in equation 1 with a kernel function K{(Aᵢ − π(Xᵢ))/h} for some bandwidth parameter h, i.e.,

$$\frac{1}{n}\sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b}) = \frac{1}{n}\sum_{i=1}^{n}\Big[\hat{Q}(X_i, \pi(X_i)) + \frac{K\{(A_i - \pi(X_i))/h\}}{\hat{b}(A_i \mid X_i)}\big\{Y_i - \hat{Q}(X_i, \pi(X_i))\big\}\Big].$$

The bandwidth h represents a trade-off: the variance of the resulting value estimator decays with h, yet its bias increases with h. More specifically, it follows from Theorem 1 of Kallus & Zhou (2018) that the leading term of the bias equals

$$\frac{h^2 \int u^2 K(u)\, du}{2}\; \mathbb{E}\left(\frac{\partial^2 Q(X, a)}{\partial a^2}\bigg|_{a = \pi(X)}\right). \qquad (2)$$

To ensure the term in equation 2 decays to zero as h goes to 0, the expected second derivative of the Q-function must exist, and thus Q(x, a) needs to be a smooth function of a. However, as commented in the introduction, this assumption could be violated in applications such as dynamic pricing. 2.3 MULTI-SCALE CHANGE POINT DETECTION . Change point analysis considers an ordered sequence of data, Y1:n = {Y1, · · · , Yn}, with unknown change point locations τ = {τ1, · · · , τK} for some unknown integer K. Here, τi is an integer between 1 and n − 1 inclusive, and satisfies τi < τj for i < j.
These change points split the data into K + 1 segments. Within each segment, the expected response is a constant function (see the left panel of Figure 1 for details). A number of methods have been proposed for estimating change points (see, for example, Boysen et al., 2009; Killick et al., 2012; Frick et al., 2014; Fryzlewicz, 2014, and the references therein) by minimizing a penalized objective function:

$$\arg\min_{\tau, K}\;\left(\frac{1}{n}\sum_{i=1}^{K+1} C\{Y_{(\tau_{i-1}+1):\tau_i}\} + \gamma K\right),$$

where C is a cost function that measures the goodness of fit of the constant function within each segment, and γK penalizes the number of change points with some regularization parameter γ. We remark that all the above cited works focused on models without features. Our proposal goes beyond these works in that we consider models with features and use deep neural networks (DNN) to capture the complex relationship between the response and the features. 3 DEEP JUMP Q-EVALUATION . In Section 3.1, we use a toy example to demonstrate the limitation of kernel-based methods. We present the main idea of our algorithm in Section 3.2. Details are given in Section 3.3. 3.1 TOY EXAMPLE . As discussed in the introduction, existing kernel-based OPE methods use a single bandwidth to construct the value estimator. Ideally, the bandwidth h in the kernel K{(Aᵢ − π(Xᵢ))/h} should vary with π(Xᵢ) to improve the accuracy of the value estimator. To illustrate this, consider the Q-function Q(x, a) = 10 max{a² − 0.25, 0} log(x + 2) for any x, a ∈ [0, 1]. By definition, the Q-function is smooth over the entire feature-action space. However, it has different "patterns" when the action belongs to different intervals. Specifically, for a ∈ [0, 0.5], Q(x, a) is constant as a function of a. For a ∈ (0.5, 1], Q(x, a) depends quadratically on a. See the middle panel of Figure 1 for details. Consider the target policy π(x) = x.
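For the squared-error cost, the penalized change-point objective in Section 2.3 can be minimized exactly by an O(n²) dynamic program. The sketch below covers the classical feature-free setting only, not the DNN-based extension the paper proposes.

```python
import numpy as np

def segment_cost(y):
    """Squared-error cost C of fitting a constant (the mean) to segment y."""
    return float(np.sum((y - y.mean()) ** 2))

def penalized_changepoints(y, gamma):
    """Exactly minimize sum of segment costs + gamma * K over all segmentations."""
    n = len(y)
    best = np.full(n + 1, np.inf)       # best[j] = optimal cost of y[:j]
    best[0] = -gamma                    # offsets the per-segment penalty of the first segment
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for s in range(j):              # last segment is y[s:j]
            c = best[s] + segment_cost(y[s:j]) + gamma
            if c < best[j]:
                best[j], prev[j] = c, s
    cps, j = [], n                      # backtrack interior boundaries (the change points)
    while j > 0:
        j = prev[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)

y = np.concatenate([np.zeros(20), np.full(20, 5.0)])   # one jump at index 20
cps = penalized_changepoints(y, gamma=1.0)
```

With a clean jump and a moderate penalty, the program recovers the single boundary and does not over-segment, mirroring the bias-variance role of γ.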
We decompose the value V(π) into V⁽¹⁾(π) + V⁽²⁾(π), where V⁽¹⁾(π) = E[Q(X, π(X)) I(π(X) ≤ 0.5)] and V⁽²⁾(π) = E[Q(X, π(X)) I(π(X) > 0.5)]. Similarly, denote the corresponding kernel-based value estimators by

$$\hat{V}^{(1)}_h(\pi) = \frac{1}{n}\sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b})\, \mathbb{I}(\pi(X_i) \le 0.5) \quad \text{and} \quad \hat{V}^{(2)}_h(\pi) = \frac{1}{n}\sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b})\, \mathbb{I}(\pi(X_i) > 0.5).$$

Since Q(x, a) is a constant function of a ∈ [0, 0.5], its second-order derivative ∂²Q(x, a)/∂a² equals zero there. In view of equation 2, when π(x) ≤ 0.5, the bias of V̂_h^(1)(π) will be small even with a sufficiently large h. As such, a large h is preferred to reduce the variance of V̂_h^(1)(π). When π(x) > 0.5, a small h is preferred to reduce the bias of V̂_h^(2)(π). See Table 1 for details, where we report the bias and standard deviation of V̂_h^(1)(π) and V̂_h^(2)(π) with two different bandwidths. Due to the use of a single bandwidth, the kernel-based estimator suffers from either a large bias or a large variance. To overcome this limitation, we propose to adaptively discretize the action space into a union of disjoint intervals such that within each interval I, the Q-function {Q(x, a) : a ∈ I} can be well approximated by some function Q_I(x) that is constant in a ∈ I. Based on the discretization, one can apply IPW or DR to evaluate the value. The advantage of adaptive discretization is illustrated in the right panel of Figure 1. When a ≤ 0.5, the Q-function is constant in a; it is likely that our procedure will not further split the interval [0, 0.5]. Consequently, the corresponding DR estimator for V⁽¹⁾(π) will not suffer from large variance. When a > 0.5, our procedure will split (0.5, 1] into a series of sub-intervals, approximating Q by a step function. This guarantees that the resulting DR estimator for V⁽²⁾(π) will not suffer from large bias.
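The effect of adaptive discretization on the toy Q-function can be checked numerically: one constant suffices on [0, 0.5], while (0.5, 1] needs several sub-intervals before a step function fits well. This NumPy check is purely illustrative and is not the paper's deep jump Q-learning procedure.

```python
import numpy as np

def Q(x, a):
    """Toy Q-function: constant in a on [0, 0.5], quadratic in a on (0.5, 1]."""
    return 10.0 * np.maximum(a ** 2 - 0.25, 0.0) * np.log(x + 2.0)

def max_step_error(x, interval, n_grid=200):
    """Sup-norm error of the best constant-in-a approximation on one interval."""
    a = np.linspace(interval[0], interval[1], n_grid)
    q = Q(x, a)
    const = 0.5 * (q.max() + q.min())   # best constant fit in sup norm
    return float(np.abs(q - const).max())

x = 0.3
flat = max_step_error(x, (0.0, 0.5))    # zero: Q is exactly constant here
coarse = max_step_error(x, (0.5, 1.0))  # large: one interval is too crude
fine = max(max_step_error(x, (lo, lo + 0.1)) for lo in np.arange(0.5, 1.0, 0.1))
```

So the discretization should leave [0, 0.5] whole (low variance) and split (0.5, 1] finely (low bias), which is exactly the behavior described above.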
Consequently , the proposed value estimator achieves a smaller mean squared error than kernel-based estimators . See Table 1 for details . | The main contribution of this paper is a new algorithm to learn the expected reward function for a given target policy using the historical data generated by a different behavior policy in continuous action domains. All current Offline-Policy Evaluation (OPE) methods for handling continuous action domains use a kernel function to extend Inverse Probability Weighting (IPW) or Doubly Robust (DR) approaches for discrete action domains. The algorithm proposed in this work adaptively discretizes the action space by combining methods in multi-scale changepoint detection, multi-layer perceptron regression and OPE in discrete action domains. The finite sample performance of the proposed method, known as Deep-Jump Q-Evaluation (DJQE), is compared to that of two kernel-based methods, one due to Kallus and Zhou (2018) and another due to Colangelo and Lee (2020), on synthetic as well as real-world data. To generate synthetic data, four scenarios are considered, where in each case the Q-function is continuous in the action domain or is a piecewise function of the action. In almost all of these cases, DJQE outperforms the two kernel-based methods. Similarly, when applied to real-world Warfarin data (after calibration), DJQE outperforms the two kernel-based methods with respect to the bias, standard deviation and mean squared error, even when the sample size is small (n=50). The average runtime of DJQE in each scenario (for synthetic or real-world data) is about 5 minutes. | SP:9b5ab25b377e76d0e9aa753c7f043952724b5451 |
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space | 1 INTRODUCTION . Individualization proposes to leverage omni-channel data to meet individual needs . Individualized decision making plays a vital role in a wide variety of applications . Examples include customized pricing strategy in economics ( Qiang & Bayati , 2016 ; Turvey , 2017 ) , individualized treatment regime in medicine ( Chakraborty , 2013 ; Collins & Varmus , 2015 ) , personalized recommendation system in marketing ( McInerney et al. , 2018 ; Fong et al. , 2018 ) , etc . Prior to adopting any decision rule in practice , it is crucial to know the impact of implementing such a policy . In many applications , it is risky to run a policy online to estimate its value ( see , e.g. , Li et al. , 2011 ) . Off-policy evaluation ( OPE ) thus attracts a lot of attention by learning the policy value offline using logged historical data . Despite the popularity of developing OPE methods with a finite set of actions ( see e.g. , Dudı́k et al. , 2011 ; 2014 ; Swaminathan et al. , 2017 ; Wang et al. , 2017 ) , less attention has been paid to continuous action domains , such as dynamic pricing ( den Boer & Keskin , 2020 ) and personalized dose finding ( Chen et al. , 2016 ) . Recently , a few OPE methods have been proposed to handle continuous actions ( Kallus & Zhou , 2018 ; Sondhi et al. , 2020 ; Colangelo & Lee , 2020 ) . All these methods rely on the use of a kernel function to extend the inverse probability weighting ( IPW ) or doubly robust ( DR ) approaches developed in discrete action domains . They suffer from three limitations . First , the validity of these methods requires the conditional mean of the reward given the feature-action pair to be a smooth function over the action space . 
This assumption could be violated in applications such as dynamic pricing , where the expected demand for a product has jump discontinuities as a function of the charged price ( den Boer & Keskin , 2020 ) . Second , the value estimator could be sensitive to the choice of the bandwidth parameter in the kernel function . It remains challenging to select this hyperparameter . Kallus & Zhou ( 2018 ) proposed to tune this parameter by minimizing the mean squared error of the resulting value estimator . However , their method is extremely computationally intensive in moderate or high-dimensional feature space ; see Section 5 for details . Third , these kernel-based methods typically use a single bandwidth parameter . This is sub-optimal in cases where the second-order derivative of the conditional mean function has an abrupt change in the action space ; see the toy example in Section 3.1 for details . To address these limitations , we develop a deep jump Q-evaluation ( DJQE ) method by integrating multi-scale change point detection ( see e.g. , Fryzlewicz , 2014 ) , deep learning ( LeCun et al. , 2015 ) and OPE in discrete action domains . The key ingredient of our method lies in adaptively discretizing the action space using deep jump Q-learning . This allows us to apply IPW or DR methods to handle continuous actions . It is worth mentioning that our method does not require kernel bandwidth selection . Theoretically , we show it allows the conditional mean to be either a continuous or piecewise function of the action ( Theorems 1 and 2 ) and converges faster than kernel-based OPE ( Theorem 3 ) . Empirically , we show it outperforms state-of-the-art OPE methods in synthetic and real datasets . 2 PRELIMINARIES . We first formulate the OPE problem . We next discuss the kernel-based OPE methods and multi-scale change point detection , since our proposal is closely related to them . 2.1 OFF-POLICY EVALUATION . 
The observed dataset can be summarized as {(Xᵢ, Aᵢ, Yᵢ)}₁≤ᵢ≤ₙ, where Oᵢ = (Xᵢ, Aᵢ, Yᵢ) denotes the feature-action-reward triplet for the ith subject and n denotes the total sample size. We assume these data triplets are independent copies of some population variables (X, A, Y). Let X and A denote the feature and action space, respectively. We focus on the setting where A is one-dimensional, as in dynamic pricing and personalized dose finding. A deterministic policy π : X → A determines the action to be assigned given the observed feature. We use b to denote the behavior policy that generates the observed data. Specifically, b(·|x) denotes the probability density or mass function of A given X = x, depending on whether A is continuous or not. Define the expected reward function conditional on the feature-action pair as Q(x, a) = E{Y | X = x, A = a}. We refer to this function as the Q-function, to be consistent with the literature on developing individualized treatment regimes (Murphy, 2003). As standard in the OPE and causal inference literature (see e.g., Chen et al., 2016), we assume the stable unit treatment value assumption (SUTVA), the no unmeasured confounders assumption, and the positivity assumption are satisfied. These assumptions guarantee that a policy's value is estimable from the observed data. Specifically, for a given target policy π, its value can be represented by V(π) = E{Q(X, π(X))}. The goal of OPE is to learn the value V(π) based on the observed data. 2.2 KERNEL-BASED OPE . For discrete actions, Zhang et al. (2012) and Dudík et al. (2011) proposed the following DR estimator of V(π):

$$\frac{1}{n}\sum_{i=1}^{n} \psi(O_i, \pi, \hat{Q}, \hat{b}) = \frac{1}{n}\sum_{i=1}^{n}\Big[\hat{Q}(X_i, \pi(X_i)) + \frac{\mathbb{I}(A_i = \pi(X_i))}{\hat{b}(A_i \mid X_i)}\big\{Y_i - \hat{Q}(X_i, \pi(X_i))\big\}\Big], \qquad (1)$$

where I denotes the indicator function, and Q̂ and b̂ denote some estimators of the Q-function and the behavior policy.
The second term, b̂⁻¹(Aᵢ|Xᵢ) I(Aᵢ = π(Xᵢ)){Yᵢ − Q̂(Xᵢ, π(Xᵢ))}, inside the bracket corresponds to an augmentation term. Its expectation equals zero when Q̂ = Q. The purpose of adding this term is to offer additional protection against potential model misspecification of the Q-function. Such an estimator is doubly robust in the sense that its consistency relies on either Q̂ or b̂ being correctly specified. By setting Q̂ = 0, equation 1 reduces to the IPW estimator. In continuous action domains, the indicator function I(Aᵢ = π(Xᵢ)) equals zero almost surely. Consequently, naively applying equation 1 yields the plug-in estimator (1/n) ∑ᵢ Q̂(Xᵢ, π(Xᵢ)). To address this concern, kernel-based OPE replaces the indicator function in equation 1 with a kernel function K{(Aᵢ − π(Xᵢ))/h} for some bandwidth parameter h, i.e.,

$$\frac{1}{n}\sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b}) = \frac{1}{n}\sum_{i=1}^{n}\Big[\hat{Q}(X_i, \pi(X_i)) + \frac{K\{(A_i - \pi(X_i))/h\}}{\hat{b}(A_i \mid X_i)}\big\{Y_i - \hat{Q}(X_i, \pi(X_i))\big\}\Big].$$

The bandwidth h represents a trade-off: the variance of the resulting value estimator decays with h, yet its bias increases with h. More specifically, it follows from Theorem 1 of Kallus & Zhou (2018) that the leading term of the bias equals

$$\frac{h^2 \int u^2 K(u)\, du}{2}\; \mathbb{E}\left(\frac{\partial^2 Q(X, a)}{\partial a^2}\bigg|_{a = \pi(X)}\right). \qquad (2)$$

To ensure the term in equation 2 decays to zero as h goes to 0, the expected second derivative of the Q-function must exist, and thus Q(x, a) needs to be a smooth function of a. However, as commented in the introduction, this assumption could be violated in applications such as dynamic pricing. 2.3 MULTI-SCALE CHANGE POINT DETECTION . Change point analysis considers an ordered sequence of data, Y1:n = {Y1, · · · , Yn}, with unknown change point locations τ = {τ1, · · · , τK} for some unknown integer K. Here, τi is an integer between 1 and n − 1 inclusive, and satisfies τi < τj for i < j.
These change points split the data into K + 1 segments. Within each segment, the expected response is a constant function (see the left panel of Figure 1 for details). A number of methods have been proposed for estimating change points (see, for example, Boysen et al., 2009; Killick et al., 2012; Frick et al., 2014; Fryzlewicz, 2014, and the references therein) by minimizing a penalized objective function:

$$\arg\min_{\tau, K}\;\left(\frac{1}{n}\sum_{i=1}^{K+1} C\{Y_{(\tau_{i-1}+1):\tau_i}\} + \gamma K\right),$$

where C is a cost function that measures the goodness of fit of the constant function within each segment, and γK penalizes the number of change points with some regularization parameter γ. We remark that all the above cited works focused on models without features. Our proposal goes beyond these works in that we consider models with features and use deep neural networks (DNN) to capture the complex relationship between the response and the features. 3 DEEP JUMP Q-EVALUATION . In Section 3.1, we use a toy example to demonstrate the limitation of kernel-based methods. We present the main idea of our algorithm in Section 3.2. Details are given in Section 3.3. 3.1 TOY EXAMPLE . As discussed in the introduction, existing kernel-based OPE methods use a single bandwidth to construct the value estimator. Ideally, the bandwidth h in the kernel K{(Aᵢ − π(Xᵢ))/h} should vary with π(Xᵢ) to improve the accuracy of the value estimator. To illustrate this, consider the Q-function Q(x, a) = 10 max{a² − 0.25, 0} log(x + 2) for any x, a ∈ [0, 1]. By definition, the Q-function is smooth over the entire feature-action space. However, it has different "patterns" when the action belongs to different intervals. Specifically, for a ∈ [0, 0.5], Q(x, a) is constant as a function of a. For a ∈ (0.5, 1], Q(x, a) depends quadratically on a. See the middle panel of Figure 1 for details. Consider the target policy π(x) = x.
We decompose the value V(π) into V⁽¹⁾(π) + V⁽²⁾(π), where V⁽¹⁾(π) = E[Q(X, π(X)) I(π(X) ≤ 0.5)] and V⁽²⁾(π) = E[Q(X, π(X)) I(π(X) > 0.5)]. Similarly, denote the corresponding kernel-based value estimators by

$$\hat{V}^{(1)}_h(\pi) = \frac{1}{n}\sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b})\, \mathbb{I}(\pi(X_i) \le 0.5) \quad \text{and} \quad \hat{V}^{(2)}_h(\pi) = \frac{1}{n}\sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b})\, \mathbb{I}(\pi(X_i) > 0.5).$$

Since Q(x, a) is a constant function of a ∈ [0, 0.5], its second-order derivative ∂²Q(x, a)/∂a² equals zero there. In view of equation 2, when π(x) ≤ 0.5, the bias of V̂_h^(1)(π) will be small even with a sufficiently large h. As such, a large h is preferred to reduce the variance of V̂_h^(1)(π). When π(x) > 0.5, a small h is preferred to reduce the bias of V̂_h^(2)(π). See Table 1 for details, where we report the bias and standard deviation of V̂_h^(1)(π) and V̂_h^(2)(π) with two different bandwidths. Due to the use of a single bandwidth, the kernel-based estimator suffers from either a large bias or a large variance. To overcome this limitation, we propose to adaptively discretize the action space into a union of disjoint intervals such that within each interval I, the Q-function {Q(x, a) : a ∈ I} can be well approximated by some function Q_I(x) that is constant in a ∈ I. Based on the discretization, one can apply IPW or DR to evaluate the value. The advantage of adaptive discretization is illustrated in the right panel of Figure 1. When a ≤ 0.5, the Q-function is constant in a; it is likely that our procedure will not further split the interval [0, 0.5]. Consequently, the corresponding DR estimator for V⁽¹⁾(π) will not suffer from large variance. When a > 0.5, our procedure will split (0.5, 1] into a series of sub-intervals, approximating Q by a step function. This guarantees that the resulting DR estimator for V⁽²⁾(π) will not suffer from large bias.
Consequently, the proposed value estimator achieves a smaller mean squared error than kernel-based estimators. See Table 1 for details. | This paper proposes a new method for offline evaluation when the action space is continuous and one-dimensional. This overcomes the drawbacks of the kernel-based method, which cannot be applied to non-smooth Q-functions and requires heavy computation to optimize the bandwidth. The proposed method can be applied to discontinuous Q-functions such as step functions, and achieves smaller bias. This is made possible by the adaptive jump Q-learning method. | SP:9b5ab25b377e76d0e9aa753c7f043952724b5451
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space | 1 INTRODUCTION. Individualization proposes to leverage omni-channel data to meet individual needs. Individualized decision making plays a vital role in a wide variety of applications. Examples include customized pricing strategies in economics (Qiang & Bayati, 2016; Turvey, 2017), individualized treatment regimes in medicine (Chakraborty, 2013; Collins & Varmus, 2015), personalized recommendation systems in marketing (McInerney et al., 2018; Fong et al., 2018), etc. Prior to adopting any decision rule in practice, it is crucial to know the impact of implementing such a policy. In many applications, it is risky to run a policy online to estimate its value (see, e.g., Li et al., 2011). Off-policy evaluation (OPE) thus attracts a lot of attention, as it learns the policy value offline using logged historical data. Despite the popularity of developing OPE methods with a finite set of actions (see, e.g., Dudík et al., 2011; 2014; Swaminathan et al., 2017; Wang et al., 2017), less attention has been paid to continuous action domains, such as dynamic pricing (den Boer & Keskin, 2020) and personalized dose finding (Chen et al., 2016). Recently, a few OPE methods have been proposed to handle continuous actions (Kallus & Zhou, 2018; Sondhi et al., 2020; Colangelo & Lee, 2020). All these methods rely on the use of a kernel function to extend the inverse probability weighting (IPW) or doubly robust (DR) approaches developed in discrete action domains. They suffer from three limitations. First, the validity of these methods requires the conditional mean of the reward given the feature-action pair to be a smooth function over the action space.
This assumption could be violated in applications such as dynamic pricing, where the expected demand for a product has jump discontinuities as a function of the charged price (den Boer & Keskin, 2020). Second, the value estimator can be sensitive to the choice of the bandwidth parameter in the kernel function. It remains challenging to select this hyperparameter. Kallus & Zhou (2018) proposed to tune this parameter by minimizing the mean squared error of the resulting value estimator. However, their method is extremely computationally intensive in moderate- or high-dimensional feature spaces; see Section 5 for details. Third, these kernel-based methods typically use a single bandwidth parameter. This is sub-optimal in cases where the second-order derivative of the conditional mean function has an abrupt change over the action space; see the toy example in Section 3.1 for details. To address these limitations, we develop a deep jump Q-evaluation (DJQE) method by integrating multi-scale change point detection (see, e.g., Fryzlewicz, 2014), deep learning (LeCun et al., 2015), and OPE in discrete action domains. The key ingredient of our method lies in adaptively discretizing the action space using deep jump Q-learning. This allows us to apply IPW or DR methods to handle continuous actions. It is worth mentioning that our method does not require kernel bandwidth selection. Theoretically, we show that it allows the conditional mean to be either a continuous or a piecewise function of the action (Theorems 1 and 2) and converges faster than kernel-based OPE (Theorem 3). Empirically, we show that it outperforms state-of-the-art OPE methods on synthetic and real datasets. 2 PRELIMINARIES. We first formulate the OPE problem. We next discuss the kernel-based OPE methods and multi-scale change point detection, since our proposal is closely related to them. 2.1 OFF-POLICY EVALUATION.
The observed dataset can be summarized as {(X_i, A_i, Y_i)}_{1≤i≤n}, where O_i = (X_i, A_i, Y_i) denotes the feature-action-reward triplet for the i-th subject and n denotes the total sample size. We assume these data triplets are independent copies of some population variables (X, A, Y). Let X and A denote the feature and action space, respectively. We focus on the setting where A is one-dimensional, as in dynamic pricing and personalized dose finding. A deterministic policy π : X → A determines the action to be assigned given the observed feature. We use b to denote the behavior policy that generates the observed data. Specifically, b(·|x) denotes the probability density or mass function of A given X = x, depending on whether A is continuous or not. Define the expected reward function conditional on the feature-action pair as Q(x, a) = E{Y | X = x, A = a}. We refer to this function as the Q-function, to be consistent with the literature on developing individualized treatment regimes (Murphy, 2003). As is standard in the OPE and causal inference literature (see, e.g., Chen et al., 2016), we assume the stable unit treatment value assumption (SUTVA), the no-unmeasured-confounders assumption, and the positivity assumption are satisfied. These assumptions guarantee that a policy's value is estimable from the observed data. Specifically, for a given target policy π, its value can be represented by V(π) = E{Q(X, π(X))}. The goal of OPE is to learn the value V(π) based on the observed data. 2.2 KERNEL-BASED OPE. For discrete actions, Zhang et al. (2012) and Dudík et al. (2011) proposed a DR estimator of V(π) given by

$$\frac{1}{n} \sum_{i=1}^{n} \psi(O_i, \pi, \hat{Q}, \hat{b}) = \frac{1}{n} \sum_{i=1}^{n} \Big[ \hat{Q}(X_i, \pi(X_i)) + \frac{I(A_i = \pi(X_i))}{\hat{b}(A_i|X_i)} \{ Y_i - \hat{Q}(X_i, \pi(X_i)) \} \Big], \quad (1)$$

where I denotes the indicator function, and Q̂ and b̂ denote some estimators of the Q-function and the behavior policy.
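To make equation 1 concrete, here is a minimal sketch of the DR estimator for a discrete action space (plain Python; the triplet format and the `q_hat`/`b_hat` callables are illustrative assumptions, not the paper's implementation):

```python
def dr_estimate(data, pi, q_hat, b_hat):
    """Doubly robust estimate of V(pi) as in equation 1.

    data  : iterable of (x, a, y) feature-action-reward triplets
    pi    : target policy, pi(x) -> action
    q_hat : estimated Q-function, q_hat(x, a) -> expected reward
    b_hat : estimated behavior policy, b_hat(a, x) -> propensity P(A=a | X=x)
    """
    total, n = 0.0, 0
    for x, a, y in data:
        term = q_hat(x, pi(x))                  # plug-in (direct) term
        if a == pi(x):                          # augmentation / IPW term
            term += (y - q_hat(x, pi(x))) / b_hat(a, x)
        total += term
        n += 1
    return total / n
```

Setting `q_hat` to the zero function recovers the IPW estimator, as the text notes below equation 1.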
The second term b̂⁻¹(A_i|X_i) I(A_i = π(X_i)) {Y_i − Q̂(X_i, π(X_i))} inside the bracket corresponds to an augmentation term. Its expectation equals zero when Q̂ = Q. The purpose of adding this term is to offer additional protection against potential model misspecification of the Q-function. Such an estimator is doubly robust in the sense that its consistency relies on either Q̂ or b̂ being correctly specified. By setting Q̂ = 0, equation 1 is reduced to the IPW estimator. In continuous action domains, the indicator function I(A_i = π(X_i)) equals zero almost surely. Consequently, naively applying equation 1 yields the plug-in estimator Σ_{i=1}^{n} Q̂(X_i, π(X_i))/n. To address this concern, kernel-based OPE proposes to replace the indicator function in equation 1 with a kernel function K{(A_i − π(X_i))/h} for some bandwidth parameter h, i.e.,

$$\frac{1}{n} \sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b}) = \frac{1}{n} \sum_{i=1}^{n} \Big[ \hat{Q}(X_i, \pi(X_i)) + \frac{K\{(A_i - \pi(X_i))/h\}}{\hat{b}(A_i|X_i)} \{ Y_i - \hat{Q}(X_i, \pi(X_i)) \} \Big].$$

The bandwidth h represents a trade-off: the variance of the resulting value estimator decays with h, yet its bias increases with h. More specifically, it follows from Theorem 1 of Kallus & Zhou (2018) that the leading term of the bias is equal to

$$\frac{h^2 \int u^2 K(u)\, du}{2}\; E\left( \frac{\partial^2 Q(X, a)}{\partial a^2} \Big|_{a = \pi(X)} \right). \quad (2)$$

To ensure that the term in equation 2 decays to zero as h goes to 0, the expected second derivative of the Q-function must exist, and thus Q(x, a) needs to be a smooth function of a. However, as commented in the introduction, this assumption could be violated in applications such as dynamic pricing. 2.3 MULTI-SCALE CHANGE POINT DETECTION. Change point analysis considers an ordered sequence of data, Y_{1:n} = {Y_1, · · · , Y_n}, with unknown change point locations τ = {τ_1, · · · , τ_K} for some unknown integer K. Here, τ_i is an integer between 1 and n − 1 inclusive, and satisfies τ_i < τ_j for i < j.
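A sketch of the kernel-smoothed estimator above, using an Epanechnikov kernel (the kernel choice, and folding the usual 1/h density normalization into the weight, are our assumptions; the display in the text leaves the normalization implicit):

```python
def epanechnikov(u):
    """A standard second-order kernel: non-zero on [-1, 1], integrates to 1."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kernel_dr_estimate(data, pi, q_hat, b_hat, h):
    """Kernel-based DR estimate: the indicator I(A_i = pi(X_i)) of
    equation 1 is replaced by the smooth weight K((A_i - pi(X_i))/h) / h."""
    total, n = 0.0, 0
    for x, a, y in data:
        w = epanechnikov((a - pi(x)) / h) / h
        total += q_hat(x, pi(x)) + w * (y - q_hat(x, pi(x))) / b_hat(a, x)
        n += 1
    return total / n
```

The bias-variance trade-off of equation 2 is visible here: a large h gives many observations a non-zero weight (low variance, high bias where Q curves in a), while a small h keeps only actions close to π(X_i) (low bias, high variance).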
These change points split the data into K + 1 segments. Within each segment, the expected response is a constant function (see the left panel of Figure 1 for details). A number of methods have been proposed for estimating change points (see, for example, Boysen et al., 2009; Killick et al., 2012; Frick et al., 2014; Fryzlewicz, 2014, and the references therein) by minimizing a penalized objective function:

$$\arg\min_{\tau, K}\ \Big( \frac{1}{n} \sum_{i=1}^{K+1} C\{ Y_{(\tau_{i-1}+1):\tau_i} \} + \gamma K \Big),$$

where C is a cost function that measures the goodness of fit of the constant function within each segment and γK penalizes the number of change points with some regularization parameter γ. We remark that all the above-cited works focused on models without features. Our proposal goes beyond these works in that we consider models with features and use deep neural networks (DNN) to capture the complex relationship between the response and features. 3 DEEP JUMP Q-EVALUATION. In Section 3.1, we use a toy example to demonstrate the limitation of kernel-based methods. We present the main idea of our algorithm in Section 3.2. Details are given in Section 3.3. 3.1 TOY EXAMPLE. As discussed in the introduction, existing kernel-based OPE methods use a single bandwidth to construct the value estimator. Ideally, the bandwidth h in the kernel K{(A_i − π(X_i))/h} should vary with π(X_i) to improve the accuracy of the value estimator. To elaborate on this, consider the Q-function Q(x, a) = 10 max{a² − 0.25, 0} log(x + 2) for any x, a ∈ [0, 1]. By definition, the Q-function is smooth over the entire feature-action space. However, it has different “patterns” when the action belongs to different intervals. Specifically, for a ∈ [0, 0.5], Q(x, a) is constant as a function of a. For a ∈ (0.5, 1], Q(x, a) depends quadratically on a. See the middle panel of Figure 1 for details. Consider the target policy π(x) = x.
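The penalized segmentation objective above can be solved exactly by a simple dynamic program when C is the within-segment residual sum of squares (here we fold the 1/n factor into γ; this O(n²) "optimal partitioning" scheme is a simplification of the cited multi-scale methods, not the paper's own algorithm):

```python
def segment(y, gamma):
    """Minimize  sum over segments of RSS(segment) + gamma * (#change points).

    Returns sorted change point locations: index i means the split falls
    between y[i-1] and y[i]."""
    n = len(y)
    s = [0.0] * (n + 1)     # prefix sums of y
    s2 = [0.0] * (n + 1)    # prefix sums of y squared
    for i, v in enumerate(y):
        s[i + 1] = s[i] + v
        s2[i + 1] = s2[i] + v * v

    def cost(i, j):
        """Residual sum of squares of y[i:j] around its mean."""
        m = (s[j] - s[i]) / (j - i)
        return (s2[j] - s2[i]) - (j - i) * m * m

    best = [0.0] + [float("inf")] * n   # best[j]: optimal objective for y[:j]
    last = [0] * (n + 1)                # last[j]: start of the final segment
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j) + (gamma if i > 0 else 0.0)
            if c < best[j]:
                best[j], last[j] = c, i
    cps, j = [], n
    while j > 0:                        # backtrack through segment starts
        if last[j] > 0:
            cps.append(last[j])
        j = last[j]
    return sorted(cps)
```

For a noiseless piecewise-constant series the recovered splits coincide with the true jumps, and a constant series yields no change points.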
We decompose the value V(π) into V^(1)(π) + V^(2)(π), where V^(1)(π) = E[Q(X, π(X)) I(π(X) ≤ 0.5)] and V^(2)(π) = E[Q(X, π(X)) I(π(X) > 0.5)]. Similarly, denote the corresponding kernel-based value estimators by

$$\hat{V}^{(1)}_h(\pi) = \frac{1}{n} \sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b})\, I(\pi(X_i) \le 0.5) \quad\text{and}\quad \hat{V}^{(2)}_h(\pi) = \frac{1}{n} \sum_{i=1}^{n} \psi_h(O_i, \pi, \hat{Q}, \hat{b})\, I(\pi(X_i) > 0.5).$$

Since Q(x, a) is a constant function of a ∈ [0, 0.5], its second-order derivative ∂²Q(x, a)/∂a² equals zero. In view of equation 2, when π(x) ≤ 0.5, the bias of V̂^(1)_h(π) will be small even with a sufficiently large h. As such, a large h is preferred to reduce the variance of V̂^(1)_h(π). When π(x) > 0.5, a small h is preferred to reduce the bias of V̂^(2)_h(π). See Table 1 for details, where we report the bias and standard deviation of V̂^(1)_h(π) and V̂^(2)_h(π) under two different bandwidths. Due to the use of a single bandwidth, the kernel-based estimator suffers from either a large bias or a large variance. To overcome this limitation, we propose to adaptively discretize the action space into a union of disjoint intervals such that within each interval I, the Q-function {Q(x, a) : a ∈ I} can be well approximated by some function Q_I(x) that is constant in a ∈ I. Based on the discretization, one can apply IPW or DR to evaluate the value. The advantage of adaptive discretization is illustrated in the right panel of Figure 1. When a ≤ 0.5, the Q-function is constant in a, and it is likely that our procedure will not further split the interval [0, 0.5]. Consequently, the corresponding DR estimator for V^(1)(π) will not suffer from large variance. When a > 0.5, our procedure will split (0.5, 1] into a series of sub-intervals, approximating Q by a step function. This guarantees that the resulting DR estimator for V^(2)(π) will not suffer from large bias.
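The adaptive discretization of the toy example can be checked numerically: a piecewise-constant surrogate of Q that keeps [0, 0.5] as one interval and splits (0.5, 1] finely is exact on the flat part and accurate on the quadratic part (the midpoint rule and this particular grid are our illustrative choices, not the paper's learned partition):

```python
import math

def Q(x, a):
    """The toy Q-function of Section 3.1."""
    return 10.0 * max(a * a - 0.25, 0.0) * math.log(x + 2.0)

def step_approx(x, a, cuts):
    """Replace Q(x, .) on each interval [cuts[k], cuts[k+1]] by its value
    at the interval midpoint, i.e. a step function in a."""
    for lo, hi in zip(cuts, cuts[1:]):
        if lo <= a <= hi:
            return Q(x, (lo + hi) / 2.0)
    raise ValueError("action outside the discretized range")

# adaptive grid: [0, 0.5] left whole (Q is constant there),
# (0.5, 1] split into ten sub-intervals (Q is quadratic there)
cuts = [0.0, 0.5] + [0.5 + 0.05 * k for k in range(1, 11)]
```

On [0, 0.5] the surrogate is exact (Q is identically zero there), so no further splitting is needed; on (0.5, 1] the approximation error shrinks with the sub-interval width.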
Consequently, the proposed value estimator achieves a smaller mean squared error than kernel-based estimators. See Table 1 for details. | This paper considers the problem of off-policy evaluation with continuous actions. The main idea is to first use multi-scale change point detection to discretize the action space and then apply traditional IPW or DR methods to estimate the value. The DJQE method is theoretically analyzed in both the case where the Q-function is a piecewise function and the case where it is a continuous function. For continuous functions, it is not surprising that as the number of splits m goes to infinity with n, the estimation is consistent, while additional results in Theorem 2 also show that for limited m, the estimator can be shown to be a uniform approximation of the Q-value. Experiments consider both a toy dataset and a real problem in personalized dose finding, and the results show that the DJQE method is superior to existing methods for continuous Q-evaluation. | SP:9b5ab25b377e76d0e9aa753c7f043952724b5451
Achieving Explainability in a Visual Hard Attention Model through Content Prediction | 1 INTRODUCTION. Though deep convolutional networks achieve state-of-the-art performance on the image classification task, it is difficult to explain which input regions affected the output. A technique called visual hard attention provides this explanation by design. A hard attention model sequentially attends to small but informative subregions of the input, called glimpses, to make predictions. While the attention mechanism explains the task-specific decisions, the attention policies learned by the model remain unexplainable. For example, one cannot explain the attention policy of a caption generation model that correctly predicts the word ‘frisbee’ while looking at a region far from an actual frisbee (Xu et al. (2015)). The majority of hard attention models first analyze a complete image to locate the task-relevant subregions and then attend to these locations to make predictions (Ba et al. (2014); Elsayed et al. (2019)). However, in practice, we often do not have access to the entire scene, and we gradually attend to the important subregions to collect task-specific information. At each step in the process, we decide the next attention-worthy location based on the partial observations collected so far. Explainable attention policies are more desirable under such partial observability. Pioneering work by Mnih et al. (2014) presents a model that functions under partial observability, but their attention policies are not explainable. They train their model with the REINFORCE algorithm (Williams (1992)), which is challenging to optimize. Moreover, the model's performance is affected adversely if the parameterization of the attention policy is not optimal. For example, an object classification model with a unimodal Gaussian policy learns to attend to the background region in the middle of the two objects (Sermanet et al. (2014)).
This paper develops a hard attention model with an explainable attention policy for classifying images through a series of partial observations. We formulate the problem of hard attention as Bayesian Optimal Experiment Design (BOED). A recurrent model finds an optimal location that gains maximum expected information about the class label and attends to this location. To estimate the expected information gain (EIG) under partial observability, the model predicts the content of the unseen regions based on the regions observed so far. Using the knowledge gained by attending to various locations in an image, the model predicts the class label. To the best of our knowledge, ours is the first hard attention model that is entirely explainable under partial observability. Our main contributions are as follows. First, our attention policies are explainable by design. One can explain that the model attends to a specific location because it expects the corresponding glimpse to maximize the expected information gain. Second, the model does not rely on the complete image to predict the attention locations and provides good performance under partial observability. Third, the training objective is differentiable and can be optimized using standard gradient backpropagation. We train the model using discriminative and generative objectives to predict the label and the image content, respectively. Fourth, our attention policy is non-parametric and can be implicitly multi-modal. 2 RELATED WORKS. A hard attention model prioritizes task-relevant regions to extract meaningful features from an input. Early attempts to model attention employed image saliency as a priority map. High-priority regions were selected using methods such as winner-take-all (Koch & Ullman (1987); Itti et al. (1998); Itti & Koch (2000)), searching by throwing out all features but the one with minimal activity (Ahmad (1992)), and dynamic routing of information (Olshausen et al.
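The location-selection rule described above can be sketched as follows: for each candidate location, average the entropy of the class posterior over glimpses imagined by the generative model, and attend where the expected entropy reduction (the EIG) is largest. The data structures here are illustrative assumptions, not the paper's implementation:

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(q * math.log(q) for q in p if q > 0.0)

def pick_location(prior, candidates):
    """prior      : current class distribution p(y | h_t)
    candidates : per location, a list of (weight, posterior) pairs --
                 weights of imagined glimpses and the class posterior the
                 model would hold after observing each one.
    EIG(location) = H(prior) - E[H(posterior)].  Returns (best index, EIGs)."""
    h0 = entropy(prior)
    eigs = [h0 - sum(w * entropy(p) for w, p in samples)
            for samples in candidates]
    return max(range(len(eigs)), key=eigs.__getitem__), eigs
```

An uninformative location (the posterior stays at the prior) has EIG 0; a location whose imagined glimpses sharpen the posterior has positive EIG and is selected.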
(1993)). A few works used graphical models to model visual attention. Rimey & Brown (1991) used augmented hidden Markov models to model the attention policy. Larochelle & Hinton (2010) used a Restricted Boltzmann Machine (RBM) with third-order connections between the attention location, the glimpse, and the representation of a scene. Motivated by this, Zheng et al. (2015) proposed an autoregressive model to compute exact gradients, unlike in an RBM. Tang et al. (2014) used an RBM as a generative model and searched for informative locations using the Hamiltonian Monte Carlo algorithm. Many works used reinforcement learning to train attention models. Paletta et al. (2005) used Q-learning with a reward that measures the objectness of the attended region. Denil et al. (2012) estimated rewards using particle filters and employed a policy based on a Gaussian Process and the upper confidence bound. Butko & Movellan (2008) modeled attention as a partially observable Markov decision process and used a policy gradient algorithm for learning. Later, Butko & Movellan (2009) extended this approach to multiple objects. Recently, the machine learning community has used the REINFORCE policy gradient algorithm to train hard attention models (Mnih et al. (2014); Ba et al. (2014); Xu et al. (2015); Elsayed et al. (2019)). Among these, only Elsayed et al. (2019) claims explainability by design. Other works use an EM-style learning procedure (Ranzato (2014)), the wake-sleep algorithm (Ba et al. (2015)), voting-based region selection (Alexe et al. (2012)), and differentiable models (Gregor et al. (2015); Jaderberg et al. (2015); Eslami et al. (2016)). Among the recent models, Ba et al. (2014); Ranzato (2014); Ba et al. (2015) look at a low-resolution gist of the input at the beginning, and Xu et al. (2015); Elsayed et al. (2019); Gregor et al. (2015); Jaderberg et al. (2015); Eslami et al.
(2016) consume the whole image to predict the locations to attend. In contrast, our model does not look at the entire image, at low resolution or otherwise. Moreover, our attention policies are explainable. We can apply our model in a wide range of scenarios where explainable predictions are desirable for partially observable images. 3 MODEL. In this paper, we consider a recurrent attention model that sequentially captures glimpses from an image x and predicts a label y. The model runs from time t = 0 to T − 1. It uses a recurrent net to maintain a hidden state h_{t−1} that summarizes the glimpses observed until time t − 1. At time t, it predicts coordinates l_t based on the hidden state h_{t−1} and captures a square glimpse g_t centered at l_t in the image x, i.e., g_t = g(x, l_t). It uses g_t and l_t to update the hidden state to h_t and predicts the label y based on the updated state h_t. 3.1 ARCHITECTURE. As shown in Figure 1(a), our model comprises the following three building blocks. A recurrent feature aggregator (F and R) maintains a hidden state h_t. A classifier (C) predicts the class probabilities p(y|h_t). A normalizing-flow-based variational autoencoder (S and D) predicts a complete image given the hidden state h_t; a flow-based encoder S predicts the posterior of a latent variable z from h_t, and a decoder D predicts a complete image from z. The BOED, as discussed in Section 3.2, uses the predicted image to find an optimal location to attend at the next time-step. To distinguish the predicted image from the actual image, let us call the former x̃. Henceforth, we crown any quantity derived from the predicted image x̃ with a (˜). Next, we provide details about the three building blocks of the model, followed by a discussion of the BOED in the context of hard attention. 3.1.1 A RECURRENT FEATURE AGGREGATOR.
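The glimpse operator g(x, l_t) can be sketched as a simple square crop; clamping the window at the image border is our assumption, since the text does not specify boundary handling:

```python
def glimpse(image, center, size):
    """Extract the size x size square glimpse g_t = g(x, l_t) centered at
    l_t = (row, col), shifting the window inward when it would otherwise
    run past the image border.

    image is a 2-D list of pixel rows (H x W)."""
    h, w = len(image), len(image[0])
    r0 = min(max(center[0] - size // 2, 0), h - size)
    c0 = min(max(center[1] - size // 2, 0), w - size)
    return [row[c0:c0 + size] for row in image[r0:r0 + size]]
```

The same crop, applied to the predicted image x̃ instead of x, gives the imagined glimpses g̃ that the BOED step evaluates.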
Given a glimpse g_t and its location l_t, a feed-forward module extracts features f_t = F(g_t, l_t), and a recurrent network updates the hidden state to h_t = R(h_{t−1}, f_t). Following Mnih et al. (2014), we define F(g_t, l_t) = BN(LeakyReLU(F_g(g_t) + F_l(l_t))), where F_g and F_l are deep networks, and R(h_{t−1}, f_t) = LN(LeakyReLU(Linear(h_{t−1}) + Linear(f_t))). Here, BN is a BatchNorm layer (Ioffe & Szegedy (2015)) and LN is a LayerNorm layer (Ba et al. (2016)). 3.1.2 A CLASSIFIER. At each time-step t, a linear classifier predicts the distribution p(y|h_t) = C(h_t) from the hidden state h_t. As the goal of the model is to predict a label y for an image x, we learn the distribution p(y|h_t) by minimizing KL[p(y|x) || p(y|h_t)]. Optimization of this KL divergence is equivalent to minimization of the following cross-entropy loss:

$$L_{CE}(t) = -p(y|x) \log(p(y|h_t)). \quad (1)$$

3.1.3 A PARTIAL VARIATIONAL AUTOENCODER. We adapt a variational autoencoder (VAE) to predict the complete image x from the hidden state h_t. A VAE learns a joint distribution between the image x and the latent variable z given h_t, p(x, z|h_t) = p(x|z) p(z|h_t). An amortized encoder infers the posterior q(z|x, h_t), which is an approximation of the true posterior p(z|x, h_t), and a decoder infers the likelihood p(x|z). The training of a VAE requires optimizing the Evidence Lower Bound (ELBO), which involves calculating KL[q(z|x, h_t) || p(z|h_t)] (Kingma & Welling (2013)). As the hard attention model does not observe the complete image x, it cannot estimate q(z|x, h_t). Hence, we cannot incorporate the standard VAE directly into a hard attention framework. At time t, we separate an image x into two parts: o_t, the set of regions observed up to t, and u_t, the set of regions as yet unobserved. Ma et al.
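The per-step classification loss of equation 1 is an ordinary cross-entropy; with a one-hot p(y|x) it reduces to the negative log-probability the classifier assigns to the true class. A minimal sketch (presumably this loss is accumulated over the T steps during training, though the text defines it per step):

```python
import math

def cross_entropy(p_true, p_pred):
    """L_CE(t) = -sum_y p(y|x) log p(y|h_t), the per-step loss of equation 1;
    p_true is the label distribution (typically one-hot) and p_pred the
    classifier output p(y | h_t)."""
    return -sum(pt * math.log(pp)
                for pt, pp in zip(p_true, p_pred) if pt > 0.0)
```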
(2018) observed that in a VAE, o_t and u_t are conditionally independent given z, i.e., p(x|z) = p(u_t|z) p(o_t|z). They predict u_t independently from the sample z ∼ q(z|o_t), while learning the approximate posterior q(z|o_t) by optimizing the ELBO on log(p(o_t)). They refer to the resultant VAE as a Partial VAE. Recall that the hidden state h_t calculated by our attention model is a summary of the glimpses observed up to t, which is equivalent to o_t, the set of observed regions. Hence, we can write q(z|o_t) as q(z|h_t) in the ELBO of the Partial VAE:

$$L_{PVAE}(t) = E_{q(z|o_t)} \log(p(o_t|z)) - KL[q(z|o_t)\,||\,p(z)] = E_{q(z|h_t)} \log(p(o_t|z)) - KL[q(z|h_t)\,||\,p(z)]. \quad (2)$$

In a Partial VAE, p(x, z|h_t) = p(u_t|z) p(o_t|z) p(z|h_t). We implement a decoder D that predicts the complete image given the sample z ∼ q(z|h_t). Let m_t be a binary mask with value 1 for the pixels observed by the model up to t and 0 otherwise; hence, o_t = m_t ⊙ x, where ⊙ is element-wise multiplication. We write the log-likelihood in equation 2 using the mask m_t as follows:

$$\log(p(o_t|z)) = -0.5 \sum |m_t \odot D(z) - m_t \odot x|^2 = -0.5 \sum m_t \odot |D(z) - x|^2. \quad (3)$$

In equation 2, the prior p(z) is a Gaussian distribution with zero mean and unit variance. To obtain an expressive posterior q(z|h_t), we use normalizing flows (Kingma et al. (2016)). As an explicit inversion of the flows is not required, we use auto-regressive Neural Spline Flows (NSF) (Durkan et al. (2019)) and efficiently implement them using a single feed-forward network with masked weights, as in De Cao et al. (2019). Between two consecutive flow layers, we flip the input (Dinh et al. (2016)) and normalize it using ActNorm (Kingma & Dhariwal (2018)). In Figure 1(a), the flow-based encoder S infers the posterior q(z|h_t) = S(h_t). As mentioned earlier, we refer to the prediction from the Partial VAE as x̃.
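The masked log-likelihood of equation 3 can be sketched directly; because m_t is binary, masking both D(z) and x before the squared error is the same as masking the squared error itself, which is exactly the identity the two sides of equation 3 express:

```python
def masked_recon_loglik(decoded, image, mask):
    """log p(o_t | z) of equation 3: squared reconstruction error between
    the decoder output D(z) and the true image, counted only over pixels
    the model has observed so far (mask entries are 1 there, 0 elsewhere)."""
    total = 0.0
    for d_row, x_row, m_row in zip(decoded, image, mask):
        for d, x, m in zip(d_row, x_row, m_row):
            total += m * (d - x) ** 2
    return -0.5 * total
```

Unobserved pixels contribute nothing, so the decoder is free to imagine their content; that imagined content is what drives the EIG computation.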
The BOED uses x̃ to find an optimal location to attend. | This paper proposed a new hard attention model for image classification. The authors designed the hard attention mechanism as a Bayesian optimal experimental design setting. Compared to other hard attention models, the policies of the proposed hard attention are explainable and differentiable, and the policy is non-parametric. They evaluated their model on four different image classification datasets, and their model outperformed the other baseline models. | SP:131084bc72c0513e72f8514d48e27b0bf2cd66d1
Achieving Explainability in a Visual Hard Attention Model through Content Prediction | 1 INTRODUCTION . Though deep convolution networks achieve state of the art performance on the image classification task , it is difficult to explain which input regions affected the output . A technique called visual hard attention provides this explanation by design . The hard attention model sequentially attends small but informative subregions of the input called glimpses to make predictions . While the attention mechanism explains the task-specific decisions , the attention policies learned by the model remain unexplainable . For example , one can not explain the attention policy of a caption generation model that correctly predicts the word ‘ frisbee ’ while looking at a region far from an actual frisbee ( Xu et al . ( 2015 ) ) . The majority of hard attention models first analyze a complete image to locate the task-relevant subregions and then attend to these locations to make predictions ( Ba et al . ( 2014 ) ; Elsayed et al . ( 2019 ) ) . However , in practice , we often do not have access to the entire scene , and we gradually attend to the important subregions to collect task-specific information . At each step in the process , we decide the next attention-worthy location based on the partial observations collected so far . The explainable attention policies are more desirable under such partial observability . Pioneering work by Mnih et al . ( 2014 ) presents a model that functions under partial observability but their attention policies are not explainable . They train their model with the REINFORCE algorithm ( Williams ( 1992 ) ) , which is challenging to optimize . Moreover , the model ’ s performance is affected adversely if the parameterization of the attention policy is not optimal . For example , an object classification model with unimodal Gaussian policy learns to attend the background region in the middle of the two objects ( Sermanet et al . ( 2014 ) ) . 
This paper develops a hard-attention model with an explainable attention policy for classifying images through a series of partial observations . We formulate the problem of hard attention as a Bayesian Optimal Experiment Design ( BOED ) . A recurrent model finds an optimal location that gains maximum expected information about the class label and attends to this location . To estimate expected information gain ( EIG ) under partial observability , the model predicts content of the un- seen regions based on the regions observed so far . Using the knowledge gained by attending various locations in an image , the model predicts the class label . To the best of our knowledge , ours is the first hard attention model that is entirely explainable under partial observability . Our main contributions are as follows . First , our attention policies are explainable by design . One can explain that the model attends a specific location because it expects the corresponding glimpse to maximize the expected information gain . Second , the model does not rely on the complete image to predict the attention locations and provides good performance under partial observability . Third , the training objective is differentiable and can be optimized using standard gradient backpropagation . We train the model using discriminative and generative objectives to predict the label and the image content , respectively . Fourth , our attention policy is non-parametric and can be implicitly multi-modal . 2 RELATED WORKS . A hard attention model prioritizes task-relevant regions to extract meaningful features from an input . Early attempts to model attention employed image saliency as a priority map . High priority regions were selected using methods such as winner-take-all ( Koch & Ullman ( 1987 ) ; Itti et al . ( 1998 ) ; Itti & Koch ( 2000 ) ) , searching by throwing out all features but the one with minimal activity ( Ahmad ( 1992 ) ) , and dynamic routing of information ( Olshausen et al . 
( 1993 ) ) . Few used graphical models to model visual attention . Rimey & Brown ( 1991 ) used augmented hidden Markov models to model attention policy . Larochelle & Hinton ( 2010 ) used a Restricted Boltzmann Machine ( RBM ) with third-order connections between attention location , glimpse , and the representation of a scene . Motivated by this , Zheng et al . ( 2015 ) proposed an autoregressive model to compute exact gradients , unlike in an RBM . Tang et al . ( 2014 ) used an RBM as a generative model and searched for informative locations using the Hamiltonian Monte Carlo algorithm . Many used reinforcement learning to train attention models . Paletta et al . ( 2005 ) used Q-learning with the reward that measures the objectness of the attended region . Denil et al . ( 2012 ) estimated rewards using particle filters and employed a policy based on the Gaussian Process and the upper confidence bound . Butko & Movellan ( 2008 ) modeled attention as a partially observable Markov decision process and used a policy gradient algorithm for learning . Later , Butko & Movellan ( 2009 ) extended this approach to multiple objects . Recently , the machine learning community use the REINFORCE policy gradient algorithm to train hard attention models ( Mnih et al . ( 2014 ) ; Ba et al . ( 2014 ) ; Xu et al . ( 2015 ) ; Elsayed et al . ( 2019 ) ) . Among these , only Elsayed et al . ( 2019 ) claims explainability by design . Other works use EMstyle learning procedure ( Ranzato ( 2014 ) ) , wake-sleep algorithm ( Ba et al . ( 2015 ) ) , a voting based region selection ( Alexe et al . ( 2012 ) ) , and differentiable models ( Gregor et al . ( 2015 ) ; Jaderberg et al . ( 2015 ) ; Eslami et al . ( 2016 ) ) . Among the recent models , Ba et al . ( 2014 ) ; Ranzato ( 2014 ) ; Ba et al . ( 2015 ) look at the lowresolution gist of an input at the beginning , and Xu et al . ( 2015 ) ; Elsayed et al . ( 2019 ) ; Gregor et al . ( 2015 ) ; Jaderberg et al . ( 2015 ) ; Eslami et al . 
( 2016 ) consume the whole image to predict the locations to attend . In contrast , our model does not look at the entire image at low resolution or otherwise . Moreover , our attention policies are explainable . We can apply our model in a wide range of scenarios where explainable predictions are desirable for the partially observable images . 3 MODEL . In this paper , we consider a recurrent attention model that sequentially captures glimpses from an image x and predicts a label y . The model runs for time t = 0 to T − 1 . It uses a recurrent net to maintain a hidden state ht−1 that summarizes glimpses observed until time t − 1 . At time t , it predicts coordinates lt based on the hidden state ht−1 and captures a square glimpse gt centered at lt in an image x , i.e . gt = g ( x , lt ) . It uses gt and lt to update the hidden state to ht and predicts the label y based on the updated state ht . 3.1 ARCHITECTURE . As shown in Figure 1 ( a ) , our model comprises the following three building blocks . A recurrent feature aggregator ( F and R ) maintains a hidden state ht . A classifier ( C ) predicts the class probabilities p ( y|ht ) . A normalizing flow-based variational autoencoder ( S and D ) predicts a complete image given the hidden state ht ; a flow-based encoder S predicts the posterior of a latent variable z from ht , and a decoderD predicts a complete image from z . The BOED , as discussed in section 3.2 , uses the predicted image to find an optimal location to attend at the next time-step . To distinguish the predicted image from the actual image , let us call the former x̃ . Henceforth , we crown any quantity derived from the predicted image x̃ with a ( ˜ ) . Next , we provide details about the three building blocks of the model , followed by a discussion of the BOED in the context of hard attention . 3.1.1 A RECURRENT FEATURE AGGREGATOR . 
Given a glimpse gt and its location lt , a feed-forward module extracts features ft = F ( gt , lt ) , and a recurrent network updates the hidden state to ht = R ( ht−1 , ft ) . Following Mnih et al . ( 2014 ) , we define F ( gt , lt ) = BN ( LeakyReLU ( Fg ( gt ) + Fl ( lt ) ) ) where Fg and Fl are deep networks , and R ( ht−1 , ft ) = LN ( LeakyReLU ( Linear ( ht−1 ) + Linear ( ft ) ) ) . Here , BN is a BatchNorm layer ( Ioffe & Szegedy ( 2015 ) ) and LN is a LayerNorm layer ( Ba et al . ( 2016 ) ) . 3.1.2 A CLASSIFIER . At each time-step t , a linear classifier predicts the distribution p ( y|ht ) = C ( ht ) from the hidden state ht . As the goal of the model is to predict a label y for an image x , we learn the distribution p ( y|ht ) by minimizing KL [ p ( y|x ) ||p ( y|ht ) ] . Minimizing this KL divergence is equivalent to minimizing the following cross-entropy loss . LCE ( t ) = −p ( y|x ) log ( p ( y|ht ) ) ( 1 ) 3.1.3 A PARTIAL VARIATIONAL AUTOENCODER . We adapt a variational autoencoder ( VAE ) to predict the complete image x from the hidden state ht . A VAE learns a joint distribution between the image x and the latent variable z given ht , p ( x , z|ht ) = p ( x|z ) p ( z|ht ) . An amortized encoder infers the posterior q ( z|x , ht ) , which is an approximation of the true posterior p ( z|x , ht ) , and a decoder infers the likelihood p ( x|z ) . Training a VAE requires optimizing the Evidence Lower Bound ( ELBO ) , which involves calculating KL [ q ( z|x , ht ) ||p ( z|ht ) ] ( Kingma & Welling ( 2013 ) ) . As the hard attention model does not observe the complete image x , it cannot estimate q ( z|x , ht ) . Hence , we cannot incorporate the standard VAE directly into a hard attention framework . At time t , we separate an image x into two parts : ot — the set of regions observed up to t , and ut — the set of regions as yet unobserved . Ma et al .
( 2018 ) observed that in a VAE , ot and ut are conditionally independent given z , i.e . p ( x|z ) = p ( ut|z ) p ( ot|z ) . They predict ut independently from the sample z ∼ q ( z|ot ) , while learning the approximate posterior q ( z|ot ) by optimizing the ELBO on log ( p ( ot ) ) . They refer to the resultant VAE as a Partial VAE . Recall that the hidden state ht calculated by our attention model is a summary of the glimpses observed up to t , which is equivalent to ot , the set of observed regions . Hence , we can write q ( z|ot ) as q ( z|ht ) in the ELBO of the Partial VAE . LPVAE ( t ) = Eq ( z|ot ) log ( p ( ot|z ) ) − KL [ q ( z|ot ) ||p ( z ) ] = Eq ( z|ht ) log ( p ( ot|z ) ) − KL [ q ( z|ht ) ||p ( z ) ] ( 2 ) In a Partial VAE , p ( x , z|ht ) = p ( ut|z ) p ( ot|z ) p ( z|ht ) . We implement a decoder D that predicts the complete image given the sample z ∼ q ( z|ht ) . Let mt be a binary mask with value 1 for the pixels observed by the model up to t and 0 otherwise ; hence , ot = mt ⊙ x , where ⊙ denotes element-wise multiplication . We write the log-likelihood in equation 2 using the mask mt as follows . log ( p ( ot|z ) ) = −0.5 ∑ |mt ⊙ D ( z ) − mt ⊙ x|2 = −0.5 ∑ mt ⊙ |D ( z ) − x|2 ( 3 ) In equation 2 , the prior p ( z ) is a Gaussian distribution with zero mean and unit variance . To obtain an expressive posterior q ( z|ht ) , we use normalizing flows ( Kingma et al . ( 2016 ) ) . As an explicit inversion of the flows is not required , we use auto-regressive Neural Spline Flows ( NSF ) ( Durkan et al . ( 2019 ) ) and efficiently implement them using a single feed-forward network with masked weights as in De Cao et al . ( 2019 ) . Between the two flow layers , we flip the input ( Dinh et al . ( 2016 ) ) and normalize it using ActNorm ( Kingma & Dhariwal ( 2018 ) ) . In Figure 1 ( a ) , the flow-based encoder S infers the posterior q ( z|ht ) = S ( ht ) . As mentioned earlier , we refer to the prediction from the Partial VAE as x̃ .
The BOED uses x̃ to find an optimal location to attend . | This paper follows a less explored strategy for achieving explainability via hard attention. The authors propose a recurrent architecture which sequentially observes regions (glimpses) from an image. To decide where to look next, the model maintains a hidden state and uses it to estimate the full image (or features of the image). This "content prediction" module allows the model to look ahead and make a decision based on the expected information gain (EIG) over different locations. The objective function (i.e. the partial VAE loss and the classification loss) in this system is differentiable, so the system can be trained with gradient descent. The authors validated the system on several benchmarks and showed performance comparable to baselines. | SP:131084bc72c0513e72f8514d48e27b0bf2cd66d1
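As a concrete illustration of the glimpse operation gt = g ( x , lt ) used by the model above , here is a minimal numpy sketch . The clamping behavior at image borders is an assumption made for illustration ; the paper does not specify it .

```python
import numpy as np

def extract_glimpse(x, center, size):
    """Crop a square size-by-size glimpse from image x, centered at
    center = (row, col); the window is clamped so it stays fully inside
    the image (the clamping rule is an illustrative assumption)."""
    h, w = x.shape[:2]
    half = size // 2
    r = int(np.clip(center[0], half, h - (size - half)))
    c = int(np.clip(center[1], half, w - (size - half)))
    return x[r - half : r - half + size, c - half : c - half + size]

image = np.arange(64, dtype=float).reshape(8, 8)
g = extract_glimpse(image, center=(3, 4), size=4)  # g_t = g(x, l_t)
assert g.shape == (4, 4)
assert g[0, 0] == 10.0  # top-left pixel of the crop is image[1, 2]
```

A center near the border , e.g . ( 0 , 0 ) , is pushed inward so the returned patch is always the full requested size .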
Achieving Explainability in a Visual Hard Attention Model through Content Prediction | 1 INTRODUCTION . Though deep convolutional networks achieve state-of-the-art performance on the image classification task , it is difficult to explain which input regions affected the output . A technique called visual hard attention provides this explanation by design . A hard attention model sequentially attends to small but informative subregions of the input , called glimpses , to make predictions . While the attention mechanism explains the task-specific decisions , the attention policies learned by the model remain unexplainable . For example , one cannot explain the attention policy of a caption generation model that correctly predicts the word ‘ frisbee ’ while looking at a region far from an actual frisbee ( Xu et al . ( 2015 ) ) . The majority of hard attention models first analyze a complete image to locate the task-relevant subregions and then attend to these locations to make predictions ( Ba et al . ( 2014 ) ; Elsayed et al . ( 2019 ) ) . However , in practice , we often do not have access to the entire scene , and we gradually attend to the important subregions to collect task-specific information . At each step in the process , we decide the next attention-worthy location based on the partial observations collected so far . Explainable attention policies are more desirable under such partial observability . Pioneering work by Mnih et al . ( 2014 ) presents a model that functions under partial observability , but its attention policies are not explainable . They train their model with the REINFORCE algorithm ( Williams ( 1992 ) ) , which is challenging to optimize . Moreover , the model ’ s performance is adversely affected if the parameterization of the attention policy is not optimal . For example , an object classification model with a unimodal Gaussian policy learns to attend to the background region in the middle of the two objects ( Sermanet et al . ( 2014 ) ) .
This paper develops a hard-attention model with an explainable attention policy for classifying images through a series of partial observations . We formulate the problem of hard attention as Bayesian Optimal Experiment Design ( BOED ) . A recurrent model finds an optimal location that gains maximum expected information about the class label and attends to this location . To estimate the expected information gain ( EIG ) under partial observability , the model predicts the content of the unseen regions based on the regions observed so far . Using the knowledge gained by attending to various locations in an image , the model predicts the class label . To the best of our knowledge , ours is the first hard attention model that is entirely explainable under partial observability . Our main contributions are as follows . First , our attention policies are explainable by design . One can explain that the model attends to a specific location because it expects the corresponding glimpse to maximize the expected information gain . Second , the model does not rely on the complete image to predict the attention locations and provides good performance under partial observability . Third , the training objective is differentiable and can be optimized using standard gradient backpropagation . We train the model using discriminative and generative objectives to predict the label and the image content , respectively . Fourth , our attention policy is non-parametric and can be implicitly multi-modal . 2 RELATED WORKS . A hard attention model prioritizes task-relevant regions to extract meaningful features from an input . Early attempts to model attention employed image saliency as a priority map . High-priority regions were selected using methods such as winner-take-all ( Koch & Ullman ( 1987 ) ; Itti et al . ( 1998 ) ; Itti & Koch ( 2000 ) ) , searching by throwing out all features but the one with minimal activity ( Ahmad ( 1992 ) ) , and dynamic routing of information ( Olshausen et al .
( 1993 ) ) . A few used graphical models to model visual attention . Rimey & Brown ( 1991 ) used augmented hidden Markov models to model the attention policy . Larochelle & Hinton ( 2010 ) used a Restricted Boltzmann Machine ( RBM ) with third-order connections between the attention location , the glimpse , and the representation of a scene . Motivated by this , Zheng et al . ( 2015 ) proposed an autoregressive model to compute exact gradients , unlike in an RBM . Tang et al . ( 2014 ) used an RBM as a generative model and searched for informative locations using the Hamiltonian Monte Carlo algorithm . Many used reinforcement learning to train attention models . Paletta et al . ( 2005 ) used Q-learning with a reward that measures the objectness of the attended region . Denil et al . ( 2012 ) estimated rewards using particle filters and employed a policy based on the Gaussian Process and the upper confidence bound . Butko & Movellan ( 2008 ) modeled attention as a partially observable Markov decision process and used a policy gradient algorithm for learning . Later , Butko & Movellan ( 2009 ) extended this approach to multiple objects . Recently , the machine learning community has used the REINFORCE policy gradient algorithm to train hard attention models ( Mnih et al . ( 2014 ) ; Ba et al . ( 2014 ) ; Xu et al . ( 2015 ) ; Elsayed et al . ( 2019 ) ) . Among these , only Elsayed et al . ( 2019 ) claim explainability by design . Other works use an EM-style learning procedure ( Ranzato ( 2014 ) ) , the wake-sleep algorithm ( Ba et al . ( 2015 ) ) , a voting-based region selection ( Alexe et al . ( 2012 ) ) , and differentiable models ( Gregor et al . ( 2015 ) ; Jaderberg et al . ( 2015 ) ; Eslami et al . ( 2016 ) ) . Among the recent models , Ba et al . ( 2014 ) ; Ranzato ( 2014 ) ; Ba et al . ( 2015 ) look at the low-resolution gist of an input at the beginning , and Xu et al . ( 2015 ) ; Elsayed et al . ( 2019 ) ; Gregor et al . ( 2015 ) ; Jaderberg et al . ( 2015 ) ; Eslami et al .
( 2016 ) consume the whole image to predict the locations to attend . In contrast , our model does not look at the entire image , at low resolution or otherwise . Moreover , our attention policies are explainable . We can apply our model in a wide range of scenarios where explainable predictions are desirable for partially observable images . 3 MODEL . In this paper , we consider a recurrent attention model that sequentially captures glimpses from an image x and predicts a label y . The model runs for time t = 0 to T − 1 . It uses a recurrent net to maintain a hidden state ht−1 that summarizes the glimpses observed until time t − 1 . At time t , it predicts coordinates lt based on the hidden state ht−1 and captures a square glimpse gt centered at lt in the image x , i.e . gt = g ( x , lt ) . It uses gt and lt to update the hidden state to ht and predicts the label y based on the updated state ht . 3.1 ARCHITECTURE . As shown in Figure 1 ( a ) , our model comprises the following three building blocks . A recurrent feature aggregator ( F and R ) maintains a hidden state ht . A classifier ( C ) predicts the class probabilities p ( y|ht ) . A normalizing flow-based variational autoencoder ( S and D ) predicts a complete image given the hidden state ht ; a flow-based encoder S predicts the posterior of a latent variable z from ht , and a decoder D predicts a complete image from z . The BOED , as discussed in section 3.2 , uses the predicted image to find an optimal location to attend at the next time-step . To distinguish the predicted image from the actual image , let us call the former x̃ . Henceforth , we mark any quantity derived from the predicted image x̃ with a tilde ( ˜ ) . Next , we provide details about the three building blocks of the model , followed by a discussion of the BOED in the context of hard attention . 3.1.1 A RECURRENT FEATURE AGGREGATOR .
Given a glimpse gt and its location lt , a feed-forward module extracts features ft = F ( gt , lt ) , and a recurrent network updates the hidden state to ht = R ( ht−1 , ft ) . Following Mnih et al . ( 2014 ) , we define F ( gt , lt ) = BN ( LeakyReLU ( Fg ( gt ) + Fl ( lt ) ) ) where Fg and Fl are deep networks , and R ( ht−1 , ft ) = LN ( LeakyReLU ( Linear ( ht−1 ) + Linear ( ft ) ) ) . Here , BN is a BatchNorm layer ( Ioffe & Szegedy ( 2015 ) ) and LN is a LayerNorm layer ( Ba et al . ( 2016 ) ) . 3.1.2 A CLASSIFIER . At each time-step t , a linear classifier predicts the distribution p ( y|ht ) = C ( ht ) from the hidden state ht . As the goal of the model is to predict a label y for an image x , we learn the distribution p ( y|ht ) by minimizing KL [ p ( y|x ) ||p ( y|ht ) ] . Minimizing this KL divergence is equivalent to minimizing the following cross-entropy loss . LCE ( t ) = −p ( y|x ) log ( p ( y|ht ) ) ( 1 ) 3.1.3 A PARTIAL VARIATIONAL AUTOENCODER . We adapt a variational autoencoder ( VAE ) to predict the complete image x from the hidden state ht . A VAE learns a joint distribution between the image x and the latent variable z given ht , p ( x , z|ht ) = p ( x|z ) p ( z|ht ) . An amortized encoder infers the posterior q ( z|x , ht ) , which is an approximation of the true posterior p ( z|x , ht ) , and a decoder infers the likelihood p ( x|z ) . Training a VAE requires optimizing the Evidence Lower Bound ( ELBO ) , which involves calculating KL [ q ( z|x , ht ) ||p ( z|ht ) ] ( Kingma & Welling ( 2013 ) ) . As the hard attention model does not observe the complete image x , it cannot estimate q ( z|x , ht ) . Hence , we cannot incorporate the standard VAE directly into a hard attention framework . At time t , we separate an image x into two parts : ot — the set of regions observed up to t , and ut — the set of regions as yet unobserved . Ma et al .
( 2018 ) observed that in a VAE , ot and ut are conditionally independent given z , i.e . p ( x|z ) = p ( ut|z ) p ( ot|z ) . They predict ut independently from the sample z ∼ q ( z|ot ) , while learning the approximate posterior q ( z|ot ) by optimizing the ELBO on log ( p ( ot ) ) . They refer to the resultant VAE as a Partial VAE . Recall that the hidden state ht calculated by our attention model is a summary of the glimpses observed up to t , which is equivalent to ot , the set of observed regions . Hence , we can write q ( z|ot ) as q ( z|ht ) in the ELBO of the Partial VAE . LPVAE ( t ) = Eq ( z|ot ) log ( p ( ot|z ) ) − KL [ q ( z|ot ) ||p ( z ) ] = Eq ( z|ht ) log ( p ( ot|z ) ) − KL [ q ( z|ht ) ||p ( z ) ] ( 2 ) In a Partial VAE , p ( x , z|ht ) = p ( ut|z ) p ( ot|z ) p ( z|ht ) . We implement a decoder D that predicts the complete image given the sample z ∼ q ( z|ht ) . Let mt be a binary mask with value 1 for the pixels observed by the model up to t and 0 otherwise ; hence , ot = mt ⊙ x , where ⊙ denotes element-wise multiplication . We write the log-likelihood in equation 2 using the mask mt as follows . log ( p ( ot|z ) ) = −0.5 ∑ |mt ⊙ D ( z ) − mt ⊙ x|2 = −0.5 ∑ mt ⊙ |D ( z ) − x|2 ( 3 ) In equation 2 , the prior p ( z ) is a Gaussian distribution with zero mean and unit variance . To obtain an expressive posterior q ( z|ht ) , we use normalizing flows ( Kingma et al . ( 2016 ) ) . As an explicit inversion of the flows is not required , we use auto-regressive Neural Spline Flows ( NSF ) ( Durkan et al . ( 2019 ) ) and efficiently implement them using a single feed-forward network with masked weights as in De Cao et al . ( 2019 ) . Between the two flow layers , we flip the input ( Dinh et al . ( 2016 ) ) and normalize it using ActNorm ( Kingma & Dhariwal ( 2018 ) ) . In Figure 1 ( a ) , the flow-based encoder S infers the posterior q ( z|ht ) = S ( ht ) . As mentioned earlier , we refer to the prediction from the Partial VAE as x̃ .
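The masked log-likelihood in equation 3 relies on the mask mt being binary , so that mt ⊙ mt = mt and its two forms coincide . A minimal numpy check of this identity ( the shapes and the random mask are illustrative , not from the paper ) :

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))                    # ground-truth image
x_hat = rng.normal(size=(8, 8))                # decoder output D(z)
m = (rng.random((8, 8)) < 0.3).astype(float)   # binary observation mask m_t

# log p(o_t | z) up to an additive constant, written both ways as in eq. 3;
# the forms agree because the mask is binary, so m * m = m element-wise.
ll_a = -0.5 * np.sum((m * x_hat - m * x) ** 2)
ll_b = -0.5 * np.sum(m * (x_hat - x) ** 2)

assert np.isclose(ll_a, ll_b)
```

The second form is the one typically implemented , since it applies the mask once to the squared residual .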
The BOED uses x̃ to find an optimal location to attend . | This paper presents a visual hard-attention image classification model. The difference from standard classification methods such as CNNs is that the model provides an explainable inner structure by default, which can be inspected to see what the model focused on. The difference from other state-of-the-art hard-attention models is that this model is differentiable, allowing for more robust and stable optimization. | SP:131084bc72c0513e72f8514d48e27b0bf2cd66d1
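The recurrent update R ( ht−1 , ft ) = LN ( LeakyReLU ( Linear ( ht−1 ) + Linear ( ft ) ) ) from section 3.1.1 above can be sketched in plain numpy . This is a hedged sketch : the parameter shapes and initialization are illustrative , biases are dropped , and the BatchNorm inside F is omitted since it requires batch statistics .

```python
import numpy as np

def layer_norm(v, eps=1e-5):
    # LN over the feature vector of a single example
    return (v - v.mean()) / np.sqrt(v.var() + eps)

def leaky_relu(v, slope=0.01):
    return np.where(v > 0, v, slope * v)

rng = np.random.default_rng(1)
d_f, d_h = 16, 32                             # illustrative feature / hidden sizes
W_h = rng.normal(scale=0.1, size=(d_h, d_h))  # Linear(h_{t-1}), bias omitted
W_f = rng.normal(scale=0.1, size=(d_h, d_f))  # Linear(f_t), bias omitted

def R(h_prev, f):
    # R(h_{t-1}, f_t) = LN(LeakyReLU(Linear(h_{t-1}) + Linear(f_t)))
    return layer_norm(leaky_relu(W_h @ h_prev + W_f @ f))

h = np.zeros(d_h)
for _ in range(5):                            # five glimpse steps, random features
    h = R(h, rng.normal(size=d_f))

assert h.shape == (d_h,) and abs(h.mean()) < 1e-6  # LN output is zero-mean
```

Running the recursion over glimpses accumulates a fixed-size summary state , which is exactly what the classifier and the Partial VAE consume .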
On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis | 1 INTRODUCTION . Recurrent neural networks ( RNNs ) ( Rumelhart et al. , 1986 ) are among the most frequently employed methods to build machine learning models on temporal data . Despite their ubiquitous application ( Baldi et al. , 1999 ; Graves & Schmidhuber , 2009 ; Graves , 2013 ; Graves et al. , 2013 ; Graves & Jaitly , 2014 ; Gregor et al. , 2015 ) , some fundamental theoretical questions remain to be answered . These come in several flavors . First , one may pose the approximation problem , which asks what kinds of temporal input-output relationships RNNs can model to arbitrary precision . Second , one may also consider the optimization problem , which concerns the dynamics of training ( say , by gradient descent ) the RNN . While such questions can be posed for any machine learning model , the crux of the problem for RNNs is how the recurrent structure of the model and the dynamical nature of the data shape the answers to these problems . For example , it is often observed that when there are long-term dependencies in the data ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) , RNNs may encounter problems in learning , but such statements have rarely been put on precise footing . In this paper , we make a step in this direction by studying the approximation and optimization properties of RNNs . Compared with the static feed-forward setting , the key distinguishing feature here is the presence of temporal dynamics in terms of both the recurrent architectures in the model and the dynamical structures in the data . Hence , understanding the influence of dynamics on learning is of fundamental importance . As is often the case , the key effects of dynamics can already be revealed in the simplest linear setting . For this reason , we will focus our analysis on linear RNNs , i.e . those with linear activations .
Further , we will employ a continuous-time analysis initially studied in the context of feed-forward architectures ( E , 2017 ; Haber & Ruthotto , 2017 ; Li et al. , 2017 ) and recently in recurrent settings ( Ceni et al. , 2019 ; Chang et al. , 2019 ; Lim , 2020 ; Sherstinsky , 2018 ; Niu et al. , 2019 ; Herrera et al. , 2020 ; Rubanova et al. , 2019 ) , and idealize the RNN as a continuous-time dynamical system . This allows us to phrase the problems under investigation in convenient analytical settings that accentuate the effect of dynamics . In this case , the RNNs serve to approximate relationships represented by sequences of linear functionals . At first glance the setting appears to be simple , but we show that it yields representative results that underlie key differences in the dynamical setting as opposed to static supervised learning problems . In fact , we show that memory , which can be made precise by the decay rates of the target linear functionals , can affect both approximation rates and optimization dynamics in a non-trivial way . Our main results are : 1 . We give a systematic analysis of the approximation of linear functionals by continuous-time linear RNNs , including a precise characterization of the approximation rates in terms of the regularity and memory of the target functional . 2 . We give a fine-grained analysis of the optimization dynamics when training linear RNNs , and show that training efficiency is adversely affected by the presence of long-term memory . These results together paint a comprehensive picture of the interaction of learning and dynamics , and make concrete the heuristic observations that the presence of long-term memory affects RNN learning in a negative manner ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) .
In particular , mirroring the classical curse of dimensionality ( Bellman , 1957 ) , we introduce the concept of the curse of memory , which captures the new phenomena that arise from learning temporal relationships : when there is long-term memory in the data , one requires an exponentially large number of neurons for approximation , and the learning dynamics suffers from exponential slowdowns . These results form a basic step towards a mathematical understanding of the recurrent structure and its effects on learning from temporal data . 2 RELATED WORK . We will discuss related work on RNNs on three fronts concerning the central results in this paper , namely approximation theory , optimization analysis , and the role of memory in learning . A number of universal approximation results for RNNs have been obtained in discrete ( Matthews , 1993 ; Doya , 1993 ; Schäfer & Zimmermann , 2006 ; 2007 ) and continuous time ( Funahashi & Nakamura , 1993 ; Chow & Xiao-Dong Li , 2000 ; Li et al. , 2005 ; Maass et al. , 2007 ; Nakamura & Nakagawa , 2009 ) . Most of these focus on the case where the target relationship is generated from a hidden dynamical system in the form of difference or differential equations . The formulation of functional approximation here is more general , although our results are currently limited to the linear setting . Nevertheless , this is already sufficient to reveal new phenomena involving the interaction of learning and dynamics . This will be especially apparent when we discuss approximation rates and optimization dynamics . We also note that functional/operator approximation using neural networks has been explored in Chen & Chen ( 1993 ) ; Tianping Chen & Hong Chen ( 1995 ) ; Lu et al . ( 2019 ) for non-recurrent structures and for reservoir systems , for which approximation results similar to random feature models are derived ( Gonon et al. , 2020 ) .
The main difference here is that we explicitly study the effect of memory in target functionals on learning using recurrent structures . On the optimization side , there are a number of recent results concerning the training of RNNs using gradient methods , and they are mostly positive in the sense that trainability is proved under specific settings . These include recovering linear dynamics ( Hardt et al. , 2018 ) or training in overparameterized settings ( Allen-Zhu et al. , 2019 ) . Here , our result concerns the general setting of learning linear functionals that need not come from some underlying differential/difference equations , and is also away from the over-parameterized regime . In our case , we discover on the contrary that training can become very difficult even in the linear case , and this can be understood in a quantitative way , in relation to long-term memory in the target functionals . This connects to the practical literature on memory and learning . The dynamical analysis here puts the ubiquitous but heuristic observations - that long-term memory negatively impacts training efficiency ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) - on concrete theoretical footing , at least in idealized settings . This may serve to justify or improve current heuristic methods ( Tseng et al. , 2016 ; Dieng et al. , 2017 ; Trinh et al. , 2018 ) developed in applications to deal with the difficulty of training with long-term memory . At the same time , we also complement general results on “ vanishing and exploding gradients ” ( Pascanu et al. , 2013 ; Hanin & Rolnick , 2018 ; Hanin , 2018 ) , which are typically restricted to initialization settings , with more precise characterizations in the dynamical regime during training . Long-range dependence within temporal data has been studied for a long time in the time series literature , although its effect on learning input-output relationships is rarely covered .
For example , the Hurst exponent ( Hurst , 1951 ) is often used as a measure of long-term memory in time series , e.g . fractional Brownian motion ( Mandelbrot & Ness , 1968 ) . In contrast with the setting in this paper , where memory involves the dependence of the output time series on the input , the Hurst exponent measures temporal variations and dependence within the input time series itself . Much of the time series literature investigates statistical properties and estimation methods for data with long-range dependence ( Samorodnitsky , 2006 ; Taqqu et al. , 1995 ; Beran , 1992 ; Doukhan et al. , 2003 ) . One can also combine these classic statistical methodologies with RNN-like architectures to design hybrid models for various applications ( Loukas & Oke , 2007 ; Diaconescu , 2008 ; Mohan & Gaitonde , 2018 ; Bukhari et al. , 2020 ) . 3 PROBLEM FORMULATION . The basic problem of supervised learning on time series data is to learn a mapping from an input temporal sequence to an output sequence . Formally , one can think of the output at each time as being produced from the input via an unknown function that depends on the entire input sequence , or at least up to the time at which the prediction is made . In the discrete-time case , one can write the data generation process as yk = Hk ( x0 , . . . , xk−1 ) , k = 0 , 1 , . . . ( 1 ) where xk , yk denote respectively the input data and output response , and { Hk : k = 0 , 1 , . . . } is a sequence of ground-truth functions of increasing input dimension accounting for temporal evolution . The goal of supervised learning is to learn an approximation of the sequence of functions { Hk } given observation data . Recurrent neural networks ( RNNs ) ( Rumelhart et al. , 1986 ) give a natural way to parameterize such a sequence of functions . In the simplest case , the one-layer RNN is given by hk+1 = σ ( Whk + Uxk ) , ŷk = c⊤hk .
( 2 ) Here , { hk } are the hidden/latent states and their evolution is governed by a recursive application of a feed-forward layer with activation σ , and ŷk is called the observation or readout . We omit the bias term here and only consider a linear readout or output layer . For each time step k , the mapping { x0 , . . . , xk−1 } ↦ ŷk parameterizes a function Ĥk ( · ) through the adjustable parameters ( c , W , U ) . Hence , for a particular choice of these parameters , a sequence of functions { Ĥk } is constructed at the same time . To understand the working principles of RNNs , we need to characterize how { Ĥk } approximates { Hk } . The model ( 2 ) is not easy to analyze due to its discrete iterative nature . Hence , here we employ a continuous-time idealization that replaces the time-step index k by a continuous time parameter t. This allows us to employ a large variety of continuum analysis tools to gain insights into the learning problem . Let us now introduce this framework . Continuous-time formulation . Consider a sequence of inputs indexed by a real-valued variable t ∈ R instead of the discrete variable k considered previously . Concretely , we consider the input space X = C0 ( R , Rd ) , ( 3 ) which is the linear space of continuous functions from R ( time ) to Rd that vanish at infinity . Here d is the dimension of each point in the time series . We denote an element in X by x : = { xt ∈ Rd : t ∈ R } and equip X with the supremum norm ‖x‖X : = supt∈R ‖xt‖∞ . For the space of outputs we will take a scalar time series , i.e . the space of bounded continuous functions from R to R : Y = Cb ( R , R ) . ( 4 ) This is because vector-valued outputs can be handled by considering each output separately . In continuous time , the target relationship ( ground truth ) to be learned is yt = Ht ( x ) , t ∈ R ( 5 ) where for each t ∈ R , Ht is a functional Ht : X → R.
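Before passing to the continuous-time idealization , the discrete one-layer RNN of equation ( 2 ) can be sketched directly ( the dimensions and the choice σ = tanh are illustrative ) :

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, T = 4, 2, 10            # hidden width, input dimension, sequence length
W = rng.normal(scale=0.1, size=(m, m))
U = rng.normal(scale=0.1, size=(m, d))
c = rng.normal(size=m)
xs = rng.normal(size=(T, d))  # input sequence x_0, ..., x_{T-1}

h = np.zeros(m)               # initial hidden state
ys = []
for x_k in xs:
    h = np.tanh(W @ h + U @ x_k)  # h_{k+1} = sigma(W h_k + U x_k)
    ys.append(c @ h)              # y_hat_k = c^T h_k  (linear readout)

assert len(ys) == T and np.all(np.isfinite(ys))
```

For a fixed ( c , W , U ) , the loop above realizes the whole sequence of functions { Ĥk } at once : the k-th output depends on the inputs only through the recursively updated state .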
Correspondingly , we define a continuous version of ( 2 ) as a hypothesis space to model continuous-time functionals dht/dt = σ ( Wht + Uxt ) , ŷt = c⊤ht , ( 6 ) whose Euler discretization corresponds to a discrete-time residual RNN . The dynamics then naturally defines a sequence of functionals { Ĥt ( x ) = ŷt : t ∈ R } , which can be used to approximate the target functionals { Ht } by adjusting ( c , W , U ) . Linear RNNs in continuous time . In this paper we mainly investigate the approximation and optimization properties of linear RNNs , which already reveal the essential effect of dynamics . The linear RNN obeys ( 6 ) with σ being the identity map . Notice that in the theoretical setup , the initial time of the system goes back to −∞ with limt→−∞ xt = 0 , ∀x ∈ X , thus by linearity ( Ht ( 0 ) = 0 ) we specify the initial condition of the hidden state h−∞ = 0 for consistency . In this case , ( 6 ) has the following solution ŷt = ∫_0^∞ c⊤eWsUxt−s ds . ( 7 ) Since we will investigate uniform approximations over large time intervals , we will consider stable RNNs , where W ∈ Wm with Wm = { W ∈ Rm×m : eigenvalues of W have negative real parts } . ( 8 ) Owing to the representation of solutions in ( 7 ) , the linear RNN defines a family of functionals Ĥ : = ∪m≥1Ĥm , Ĥm : = { { Ĥt ( x ) , t ∈ R } : Ĥt ( x ) = ∫_0^∞ c⊤eWsUxt−s ds , W ∈ Wm , U ∈ Rm×d , c ∈ Rm } . ( 9 ) Here , m is the width of the network and controls the complexity of the hypothesis space . Clearly , the family of functionals the RNN can represent is not arbitrary , and must possess some structure . Let us now introduce some definitions of functionals that make these structures precise . Definition 3.1 . Let { Ht : t ∈ R } be a sequence of functionals . 1 . Ht is causal if it does not depend on future values of the input : for every pair x , x′ ∈ X such that xs = x′s for all s ≤ t , we have Ht ( x ) = Ht ( x′ ) . 2 .
Ht is linear and continuous if Ht ( λx + λ′x′ ) = λHt ( x ) + λ′Ht ( x′ ) for any x , x′ ∈ X and λ , λ′ ∈ R , and supx∈X , ‖x‖X≤1 |Ht ( x ) | < ∞ , in which case the induced norm can be defined as ‖Ht‖ : = supx∈X , ‖x‖X≤1 |Ht ( x ) | . 3 . Ht is regular if for any sequence { x ( n ) ∈ X : n ∈ N } such that x ( n ) t → 0 for Lebesgue-almost every t ∈ R , we have limn→∞Ht ( x ( n ) ) = 0 . 4 . { Ht : t ∈ R } is time-homogeneous if Ht ( x ) = Ht+τ ( x ( τ ) ) for any t , τ ∈ R , where x ( τ ) s = xs−τ for all s ∈ R , i.e . x ( τ ) is x whose time index is shifted to the right by τ . Linearity , continuity and causality are standard notions . One can think of regular functionals as those that are not determined by values of the inputs on an arbitrarily small time interval , e.g . an infinitely thin spike input . Time-homogeneous functionals , on the other hand , are those with no special reference point in time : if the time indices of both the input sequence and the functional are shifted in coordination , the output value remains the same . Given these definitions , the following observation can be verified directly ; its proof is immediate and hence omitted . Proposition 3.1 . Let { Ĥt : t ∈ R } be a sequence of functionals in the RNN hypothesis space Ĥ ( see ( 9 ) ) . Then for each t ∈ R , Ĥt is a causal , continuous , linear and regular functional . Moreover , the sequence of functionals { Ĥt : t ∈ R } is time-homogeneous . | This paper studies the approximation and optimization of linear RNNs for learning linear functionals, from the perspective of the memory properties of the temporal sequence. It shows that linear functionals can be approximated by a linear RNN, with the rate of approximation depending on the long-term memory of the process. It also shows that the training dynamics slow down for certain linear functionals with long-term memory. | SP:2757a1c9fc4d7c81c497024bd6f3eec65027e352
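A numpy sketch of the stable hypothesis space ( 7 ) – ( 9 ) : the matrix W below is shifted to be diagonally dominant so that , by the Gershgorin circle theorem , its eigenvalues have negative real parts ( i.e . W ∈ Wm ) , and the induced memory kernel ρ ( s ) = c⊤eWsU then decays . The construction and step sizes are illustrative , not from the paper .

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.normal(size=(m, m))
# Shift the diagonal so every Gershgorin disc lies in the left half-plane:
# all eigenvalues of W then have negative real parts, i.e. W is in W_m.
W = A - (np.abs(A).sum() + 1.0) * np.eye(m)
assert np.all(np.linalg.eigvals(W).real < 0)

U = rng.normal(size=(m, 1))
c = rng.normal(size=m)

# Memory kernel rho(s) = c^T e^{Ws} U, computed by Euler-stepping
# v(s) = e^{Ws} U, which solves dv/ds = W v with v(0) = U.
dt = 1e-3
v = U[:, 0].copy()
rho = []
for _ in range(5000):  # integrate up to s = 5
    rho.append(c @ v)
    v = v + dt * (W @ v)
rho = np.array(rho)

# Stability implies the kernel decays: the tail is far smaller than the head.
assert np.abs(rho[-100:]).max() < 1e-3 * np.abs(rho[:100]).max()
```

The decay rate of ρ is exactly the notion of memory used in the paper : targets whose kernels decay slowly ( long-term memory ) are the ones for which approximation and training become hard .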
On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis | 1 INTRODUCTION . Recurrent neural networks ( RNNs ) ( Rumelhart et al. , 1986 ) are among the most frequently employed methods to build machine learning models on temporal data . Despite their ubiquitous applications ( Baldi et al. , 1999 ; Graves & Schmidhuber , 2009 ; Graves , 2013 ; Graves et al. , 2013 ; Graves & Jaitly , 2014 ; Gregor et al. , 2015 ) , some fundamental theoretical questions remain to be answered . These come in several flavors . First , one may pose the approximation problem , which asks what kinds of temporal input-output relationships RNNs can model to arbitrary precision . Second , one may also consider the optimization problem , which concerns the dynamics of training the RNN ( say , by gradient descent ) . While such questions can be posed for any machine learning model , the crux of the problem for RNNs is how the recurrent structure of the model and the dynamical nature of the data shape the answers to these problems . For example , it is often observed that when there are long-term dependencies in the data ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) , RNNs may encounter problems in learning , but such statements have rarely been put on precise footing . In this paper , we make a step in this direction by studying the approximation and optimization properties of RNNs . ( †Equal contribution . ‡Corresponding author . ) Compared with the static feed-forward setting , the key distinguishing feature here is the presence of temporal dynamics in terms of both the recurrent architectures in the model and the dynamical structures in the data . Hence , understanding the influence of dynamics on learning is of fundamental importance . As is often the case , the key effects of dynamics can already be revealed in the simplest linear setting . For this reason , we will focus our analysis on linear RNNs , i.e . those with linear activations .
Further , we will employ a continuous-time analysis initially studied in the context of feed-forward architectures ( E , 2017 ; Haber & Ruthotto , 2017 ; Li et al. , 2017 ) and recently in recurrent settings ( Ceni et al. , 2019 ; Chang et al. , 2019 ; Lim , 2020 ; Sherstinsky , 2018 ; Niu et al. , 2019 ; Herrera et al. , 2020 ; Rubanova et al. , 2019 ) , and idealize the RNN as a continuous-time dynamical system . This allows us to phrase the problems under investigation in convenient analytical settings that accentuate the effect of dynamics . In this case , the RNNs serve to approximate relationships represented by sequences of linear functionals . At first glance the setting appears simple , but we show that it yields representative results that underlie key differences between the dynamical setting and static supervised learning problems . In fact , we show that memory , which can be made precise by the decay rates of the target linear functionals , can affect both approximation rates and optimization dynamics in a non-trivial way . Our main results are : 1 . We give a systematic analysis of the approximation of linear functionals by continuous-time linear RNNs , including a precise characterization of the approximation rates in terms of the regularity and memory of the target functional . 2 . We give a fine-grained analysis of the optimization dynamics when training linear RNNs , and show that training efficiency is adversely affected by the presence of long-term memory . These results together paint a comprehensive picture of the interaction of learning and dynamics , and make concrete the heuristic observation that the presence of long-term memory affects RNN learning in a negative manner ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) .
In particular , mirroring the classical curse of dimensionality ( Bellman , 1957 ) , we introduce the concept of the curse of memory , which captures the new phenomena that arise from learning temporal relationships : when there is long-term memory in the data , one requires an exponentially large number of neurons for approximation , and the learning dynamics suffers from exponential slowdowns . These results form a basic step towards a mathematical understanding of the recurrent structure and its effects on learning from temporal data . 2 RELATED WORK . We will discuss related work on RNNs on three fronts concerning the central results in this paper , namely approximation theory , optimization analysis and the role of memory in learning . A number of universal approximation results for RNNs have been obtained in discrete ( Matthews , 1993 ; Doya , 1993 ; Schäfer & Zimmermann , 2006 ; 2007 ) and continuous time ( Funahashi & Nakamura , 1993 ; Chow & Xiao-Dong Li , 2000 ; Li et al. , 2005 ; Maass et al. , 2007 ; Nakamura & Nakagawa , 2009 ) . Most of these focus on the case where the target relationship is generated from a hidden dynamical system in the form of difference or differential equations . The formulation of functional approximation here is more general , although our results are currently limited to the linear setting . Nevertheless , this is already sufficient to reveal new phenomena involving the interaction of learning and dynamics . This will be especially apparent when we discuss approximation rates and optimization dynamics . We also note that functional/operator approximation using neural networks has been explored in Chen & Chen ( 1993 ) ; Tianping Chen & Hong Chen ( 1995 ) ; Lu et al . ( 2019 ) for non-recurrent structures and reservoir systems , for which approximation results similar to random feature models are derived ( Gonon et al. , 2020 ) .
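The flavor of this "curse" can be illustrated with a toy least-squares experiment (my own sketch under simplifying assumptions, not the paper's construction): approximate a memory kernel ρ(s) by a sum of exponentials Σ c_i e^{−λ_i s}, which is the form a diagonal stable linear RNN realizes. The kernel choices and the fixed decay rates below are arbitrary; a fast-decaying kernel is matched with few terms, while a power-law (long-memory) kernel is matched far less well by the same small basis.

```python
import numpy as np

s = np.linspace(0.0, 50.0, 2001)                    # time grid for the kernels
rates = np.array([2.0 ** (-i) for i in range(8)])   # fixed decay rates lambda_i
basis = np.exp(-np.outer(s, rates))                 # columns: e^{-lambda_i s}

def fit_error(target, n_terms):
    """Relative L2 error of the best least-squares fit using the first n_terms exponentials."""
    A = basis[:, :n_terms]
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.linalg.norm(A @ coef - target) / np.linalg.norm(target)

exp_kernel = np.exp(-s)        # exponential decay: short memory
pow_kernel = 1.0 / (1.0 + s)   # power-law decay: long memory

err_exp = fit_error(exp_kernel, 1)    # e^{-s} is itself in the basis (lambda_0 = 1)
err_pow3 = fit_error(pow_kernel, 3)   # few exponentials: large residual in the tail
err_pow8 = fit_error(pow_kernel, 8)   # more (slower) exponentials help, slowly
```

This is only suggestive of the phenomenon; the paper's quantitative statement concerns approximation rates of functionals, not this particular fitting setup.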
The main difference here is that we explicitly study the effect of memory in target functionals on learning using recurrent structures . On the optimization side , there are a number of recent results concerning the training of RNNs using gradient methods , and they are mostly positive in the sense that trainability is proved under specific settings . These include recovering linear dynamics ( Hardt et al. , 2018 ) or training in overparameterized settings ( Allen-Zhu et al. , 2019 ) . Here , our result concerns the general setting of learning linear functionals that need not come from some underlying differential/difference equations , and is also away from the over-parameterized regime . In our case , we discover on the contrary that training can become very difficult even in the linear case , and this can be understood in a quantitative way , in relation to long-term memory in the target functionals . This points to the practical literature in relation to memory and learning . The dynamical analysis here puts the ubiquitous but heuristic observations - that long-term memory negatively impacts training efficiency ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) - on concrete theoretical footing , at least in idealized settings . This may serve to justify or improve current heuristic methods ( Tseng et al. , 2016 ; Dieng et al. , 2017 ; Trinh et al. , 2018 ) developed in applications to deal with the difficulty in training with long-term memory . At the same time , we also complement general results on “ vanishing and explosion of gradients ” ( Pascanu et al. , 2013 ; Hanin & Rolnick , 2018 ; Hanin , 2018 ) that are typically restricted to initialization settings with more precise characterizations in the dynamical regime during training . The long range dependency within temporal data has been studied for a long time in the time series literature , although its effect on learning input-output relationships is rarely covered . 
For example , the Hurst exponent ( Hurst , 1951 ) is often used as a measure of long-term memory in time series , e.g . fractional Brownian motion ( Mandelbrot & Ness , 1968 ) . In contrast with the setting in this paper , where memory involves the dependence of the output time series on the input , the Hurst exponent measures temporal variations and dependence within the input time series itself . Much of the time series literature investigates statistical properties and estimation methods of data with long range dependence ( Samorodnitsky , 2006 ; Taqqu et al. , 1995 ; Beran , 1992 ; Doukhan et al. , 2003 ) . One can also combine these classic statistical methodologies with RNN-like architectures to design hybrid models with various applications ( Loukas & Oke , 2007 ; Diaconescu , 2008 ; Mohan & Gaitonde , 2018 ; Bukhari et al. , 2020 ) . 3 PROBLEM FORMULATION . The basic problem of supervised learning on time series data is to learn a mapping from an input temporal sequence to an output sequence . Formally , one can think of the output at each time as being produced from the input via an unknown function that depends on the entire input sequence , or at least on the input up to the time at which the prediction is made . In the discrete-time case , one can write the data generation process as y_k = H_k ( x_0 , . . . , x_{k−1} ) , k = 0 , 1 , . . . ( 1 ) where x_k , y_k denote respectively the input data and output response , and { H_k : k = 0 , 1 , . . . } is a sequence of ground truth functions of increasing input dimension accounting for temporal evolution . The goal of supervised learning is to learn an approximation of the sequence of functions { H_k } given observation data . Recurrent neural networks ( RNNs ) ( Rumelhart et al. , 1986 ) give a natural way to parameterize such a sequence of functions . In the simplest case , the one-layer RNN is given by h_{k+1} = σ ( W h_k + U x_k ) , ŷ_k = c⊤ h_k .
( 2 ) Here , { h_k } are the hidden/latent states , whose evolution is governed by a recursive application of a feed-forward layer with activation σ , and ŷ_k is called the observation or readout . We omit the bias term here and only consider a linear readout or output layer . For each time step k , the mapping { x_0 , . . . , x_{k−1} } ↦ ŷ_k parameterizes a function Ĥ_k ( · ) through the adjustable parameters ( c , W , U ) . Hence , for a particular choice of these parameters , a sequence of functions { Ĥ_k } is constructed at the same time . To understand the working principles of RNNs , we need to characterize how { Ĥ_k } approximates { H_k } . The model ( 2 ) is not easy to analyze due to its discrete iterative nature . Hence , we employ a continuous-time idealization that replaces the time-step index k by a continuous time parameter t . This allows us to employ a large variety of continuum analysis tools to gain insight into the learning problem . Let us now introduce this framework . Continuous-time formulation . Consider a sequence of inputs indexed by a real-valued variable t ∈ R instead of the discrete variable k considered previously . Concretely , we consider the input space X = C_0 ( R , R^d ) , ( 3 ) which is the linear space of continuous functions from R ( time ) to R^d that vanish at infinity . Here d is the dimension of each point in the time series . We denote an element of X by x := { x_t ∈ R^d : t ∈ R } and equip X with the supremum norm ‖x‖_X := sup_{t∈R} ‖x_t‖_∞ . For the space of outputs we will take scalar time series , i.e . the space of bounded continuous functions from R to R : Y = C_b ( R , R ) . ( 4 ) This is because vector-valued outputs can be handled by considering each output component separately . In continuous time , the target relationship ( ground truth ) to be learned is y_t = H_t ( x ) , t ∈ R , ( 5 ) where for each t ∈ R , H_t is a functional H_t : X → R .
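A minimal NumPy sketch of the discrete model (2) may help fix the conventions (random weights chosen purely for illustration). Note that with the readout taken before consuming x_k, each ŷ_k depends only on x_0 , . . . , x_{k−1}, i.e. the parameterized functions are causal by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, K = 4, 3, 10              # hidden width, input dimension, sequence length
W = 0.1 * rng.normal(size=(m, m))
U = rng.normal(size=(m, d))
c = rng.normal(size=m)

def rnn_forward(xs, sigma=np.tanh):
    """One-layer RNN (2): h_{k+1} = sigma(W h_k + U x_k), yhat_k = c^T h_k."""
    h = np.zeros(m)
    ys = []
    for x_k in xs:
        ys.append(c @ h)        # readout first: yhat_k sees only x_0..x_{k-1}
        h = sigma(W @ h + U @ x_k)
    return np.array(ys)

xs = rng.normal(size=(K, d))
ys = rnn_forward(xs)            # ys[0] = 0 since h_0 = 0
```

Perturbing inputs from step 5 onward leaves ŷ_0 , . . . , ŷ_5 unchanged, a direct check of causality.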
Correspondingly , we define a continuous version of ( 2 ) as a hypothesis space to model continuous-time functionals : dh_t/dt = σ ( W h_t + U x_t ) , ŷ_t = c⊤ h_t , ( 6 ) whose Euler discretization corresponds to a discrete-time residual RNN . The dynamics then naturally defines a sequence of functionals { Ĥ_t ( x ) = ŷ_t : t ∈ R } , which can be used to approximate the target functionals { H_t } via adjusting ( c , W , U ) . Linear RNNs in continuous time . In this paper we mainly investigate the approximation and optimization properties of linear RNNs , which already reveal the essential effect of dynamics . The linear RNN obeys ( 6 ) with σ being the identity map . Notice that in the theoretical setup , the initial time of the system goes back to −∞ with lim_{t→−∞} x_t = 0 , ∀x ∈ X ; thus by linearity ( H_t ( 0 ) = 0 ) we specify the initial condition of the hidden state h_{−∞} = 0 for consistency . In this case , ( 6 ) has the following solution : ŷ_t = ∫_0^∞ c⊤ e^{Ws} U x_{t−s} ds . ( 7 ) Since we will investigate uniform approximations over large time intervals , we will consider stable RNNs , where W ∈ W_m with W_m = { W ∈ R^{m×m} : eigenvalues of W have negative real parts } . ( 8 ) Owing to the representation of solutions in ( 7 ) , the linear RNN defines a family of functionals Ĥ := ∪_{m≥1} Ĥ_m , Ĥ_m := { { Ĥ_t ( x ) , t ∈ R } : Ĥ_t ( x ) = ∫_0^∞ c⊤ e^{Ws} U x_{t−s} ds , W ∈ W_m , U ∈ R^{m×d} , c ∈ R^m } . ( 9 ) Here , m is the width of the network and controls the complexity of the hypothesis space . Clearly , the family of functionals the RNN can represent is not arbitrary , and must possess some structure . Let us now introduce some definitions of functionals that make these structures precise . Definition 3.1 . Let { H_t : t ∈ R } be a sequence of functionals . 1 . H_t is causal if it does not depend on future values of the input : for every pair of x , x′ ∈ X such that x_s = x′_s for all s ≤ t , we have H_t ( x ) = H_t ( x′ ) . 2 .
H_t is linear and continuous if H_t ( λx + λ′x′ ) = λH_t ( x ) + λ′H_t ( x′ ) for any x , x′ ∈ X and λ , λ′ ∈ R , and sup_{x∈X , ‖x‖_X≤1} |H_t ( x ) | < ∞ , in which case the induced norm can be defined as ‖H_t‖ := sup_{x∈X , ‖x‖_X≤1} |H_t ( x ) | . 3 . H_t is regular if for any sequence { x^{(n)} ∈ X : n ∈ N } such that x^{(n)}_t → 0 for Lebesgue almost every t ∈ R , we have lim_{n→∞} H_t ( x^{(n)} ) = 0 . 4 . { H_t : t ∈ R } is time-homogeneous if H_t ( x ) = H_{t+τ} ( x^{(τ)} ) for any t , τ ∈ R , where x^{(τ)}_s = x_{s−τ} for all s ∈ R , i.e . x^{(τ)} is x whose time index is shifted to the right by τ . Linear , continuous and causal functionals are standard notions . One can think of regular functionals as those that are not determined by the values of the input on an arbitrarily small time interval , e.g . an infinitely thin spike input . Time-homogeneous functionals , on the other hand , are those for which there is no special reference point in time : if the time indices of both the input sequence and the functional are shifted in coordination , the output value remains the same . Given these definitions , the following observation can be verified directly ; its proof is immediate and hence omitted . Proposition 3.1 . Let { Ĥ_t : t ∈ R } be a sequence of functionals in the RNN hypothesis space Ĥ ( see ( 9 ) ) . Then for each t ∈ R , Ĥ_t is a causal , continuous , linear and regular functional . Moreover , the sequence of functionals { Ĥ_t : t ∈ R } is time-homogeneous . | The paper provides a theoretical examination of the challenge of fitting recurrent neural networks (RNNs) to processes with long memory (or long-range dependence).
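The defining properties in Definition 3.1 can also be checked numerically for the linear RNN functional (7). A small sketch with a scalar system (all parameter values and test inputs are arbitrary choices of mine) verifies linearity and time-homogeneity via a truncated quadrature of the integral:

```python
import numpy as np

# Scalar stable linear RNN functional (7): H_t(x) = int_0^inf c*exp(w*s)*u*x(t - s) ds
w, u, c = -0.5, 1.0, 1.0
ds = 1e-3
s = np.arange(0.0, 40.0, ds)       # truncate: exp(w*s) is negligible beyond s = 40

def H(t, x):
    return np.sum(c * np.exp(w * s) * u * x(t - s)) * ds

x1 = lambda t: np.exp(-t ** 2)
x2 = lambda t: np.exp(-(t - 1.0) ** 2)

# time-homogeneity: H_{t+tau}(x^{(tau)}) = H_t(x), with x^{(tau)}(r) = x(r - tau)
tau = 1.7
x1_shift = lambda t: x1(t - tau)
homog_gap = abs(H(3.0 + tau, x1_shift) - H(3.0, x1))

# linearity: H_t(a*x1 + b*x2) = a*H_t(x1) + b*H_t(x2)
a, b = 2.0, -3.0
lin_gap = abs(H(3.0, lambda t: a * x1(t) + b * x2(t)) - (a * H(3.0, x1) + b * H(3.0, x2)))
```

Both gaps vanish up to floating-point error, consistent with Proposition 3.1.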
Dubbed the “curse of memory”, the author(s) restrict to the case of linear activation functions, and show that for processes with increased temporal dependence: (a) the width of the layers must increase exponentially to _guarantee_ accurate approximations under a provided bound, and (b) a gradient-based optimization algorithm will take exponentially more time to converge. Sufficient details for reproducing the experiments are provided. | SP:2757a1c9fc4d7c81c497024bd6f3eec65027e352 |
On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis | 1 INTRODUCTION . Recurrent neural networks ( RNNs ) ( Rumelhart et al. , 1986 ) are among the most frequently employed methods to build machine learning models on temporal data . Despite their ubiquitous applications ( Baldi et al. , 1999 ; Graves & Schmidhuber , 2009 ; Graves , 2013 ; Graves et al. , 2013 ; Graves & Jaitly , 2014 ; Gregor et al. , 2015 ) , some fundamental theoretical questions remain to be answered . These come in several flavors . First , one may pose the approximation problem , which asks what kinds of temporal input-output relationships RNNs can model to arbitrary precision . Second , one may also consider the optimization problem , which concerns the dynamics of training the RNN ( say , by gradient descent ) . While such questions can be posed for any machine learning model , the crux of the problem for RNNs is how the recurrent structure of the model and the dynamical nature of the data shape the answers to these problems . For example , it is often observed that when there are long-term dependencies in the data ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) , RNNs may encounter problems in learning , but such statements have rarely been put on precise footing . In this paper , we make a step in this direction by studying the approximation and optimization properties of RNNs . Compared with the static feed-forward setting , the key distinguishing feature here is the presence of temporal dynamics in terms of both the recurrent architectures in the model and the dynamical structures in the data . Hence , understanding the influence of dynamics on learning is of fundamental importance . As is often the case , the key effects of dynamics can already be revealed in the simplest linear setting . For this reason , we will focus our analysis on linear RNNs , i.e . those with linear activations .
Further , we will employ a continuous-time analysis initially studied in the context of feed-forward architectures ( E , 2017 ; Haber & Ruthotto , 2017 ; Li et al. , 2017 ) and recently in recurrent settings ( Ceni et al. , 2019 ; Chang et al. , 2019 ; Lim , 2020 ; Sherstinsky , 2018 ; Niu et al. , 2019 ; Herrera et al. , 2020 ; Rubanova et al. , 2019 ) , and idealize the RNN as a continuous-time dynamical system . This allows us to phrase the problems under investigation in convenient analytical settings that accentuate the effect of dynamics . In this case , the RNNs serve to approximate relationships represented by sequences of linear functionals . At first glance the setting appears simple , but we show that it yields representative results that underlie key differences between the dynamical setting and static supervised learning problems . In fact , we show that memory , which can be made precise by the decay rates of the target linear functionals , can affect both approximation rates and optimization dynamics in a non-trivial way . Our main results are : 1 . We give a systematic analysis of the approximation of linear functionals by continuous-time linear RNNs , including a precise characterization of the approximation rates in terms of the regularity and memory of the target functional . 2 . We give a fine-grained analysis of the optimization dynamics when training linear RNNs , and show that training efficiency is adversely affected by the presence of long-term memory . These results together paint a comprehensive picture of the interaction of learning and dynamics , and make concrete the heuristic observation that the presence of long-term memory affects RNN learning in a negative manner ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) .
In particular , mirroring the classical curse of dimensionality ( Bellman , 1957 ) , we introduce the concept of the curse of memory , which captures the new phenomena that arise from learning temporal relationships : when there is long-term memory in the data , one requires an exponentially large number of neurons for approximation , and the learning dynamics suffers from exponential slowdowns . These results form a basic step towards a mathematical understanding of the recurrent structure and its effects on learning from temporal data . 2 RELATED WORK . We will discuss related work on RNNs on three fronts concerning the central results in this paper , namely approximation theory , optimization analysis and the role of memory in learning . A number of universal approximation results for RNNs have been obtained in discrete ( Matthews , 1993 ; Doya , 1993 ; Schäfer & Zimmermann , 2006 ; 2007 ) and continuous time ( Funahashi & Nakamura , 1993 ; Chow & Xiao-Dong Li , 2000 ; Li et al. , 2005 ; Maass et al. , 2007 ; Nakamura & Nakagawa , 2009 ) . Most of these focus on the case where the target relationship is generated from a hidden dynamical system in the form of difference or differential equations . The formulation of functional approximation here is more general , although our results are currently limited to the linear setting . Nevertheless , this is already sufficient to reveal new phenomena involving the interaction of learning and dynamics . This will be especially apparent when we discuss approximation rates and optimization dynamics . We also note that functional/operator approximation using neural networks has been explored in Chen & Chen ( 1993 ) ; Tianping Chen & Hong Chen ( 1995 ) ; Lu et al . ( 2019 ) for non-recurrent structures and reservoir systems , for which approximation results similar to random feature models are derived ( Gonon et al. , 2020 ) .
The main difference here is that we explicitly study the effect of memory in target functionals on learning using recurrent structures . On the optimization side , there are a number of recent results concerning the training of RNNs using gradient methods , and they are mostly positive in the sense that trainability is proved under specific settings . These include recovering linear dynamics ( Hardt et al. , 2018 ) or training in overparameterized settings ( Allen-Zhu et al. , 2019 ) . Here , our result concerns the general setting of learning linear functionals that need not come from some underlying differential/difference equations , and is also away from the over-parameterized regime . In our case , we discover on the contrary that training can become very difficult even in the linear case , and this can be understood in a quantitative way , in relation to long-term memory in the target functionals . This points to the practical literature in relation to memory and learning . The dynamical analysis here puts the ubiquitous but heuristic observations - that long-term memory negatively impacts training efficiency ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) - on concrete theoretical footing , at least in idealized settings . This may serve to justify or improve current heuristic methods ( Tseng et al. , 2016 ; Dieng et al. , 2017 ; Trinh et al. , 2018 ) developed in applications to deal with the difficulty in training with long-term memory . At the same time , we also complement general results on “ vanishing and explosion of gradients ” ( Pascanu et al. , 2013 ; Hanin & Rolnick , 2018 ; Hanin , 2018 ) that are typically restricted to initialization settings with more precise characterizations in the dynamical regime during training . The long range dependency within temporal data has been studied for a long time in the time series literature , although its effect on learning input-output relationships is rarely covered . 
For example , the Hurst exponent ( Hurst , 1951 ) is often used as a measure of long-term memory in time series , e.g . fractional Brownian motion ( Mandelbrot & Ness , 1968 ) . In contrast with the setting in this paper , where memory involves the dependence of the output time series on the input , the Hurst exponent measures temporal variations and dependence within the input time series itself . Much of the time series literature investigates statistical properties and estimation methods of data with long range dependence ( Samorodnitsky , 2006 ; Taqqu et al. , 1995 ; Beran , 1992 ; Doukhan et al. , 2003 ) . One can also combine these classic statistical methodologies with RNN-like architectures to design hybrid models with various applications ( Loukas & Oke , 2007 ; Diaconescu , 2008 ; Mohan & Gaitonde , 2018 ; Bukhari et al. , 2020 ) . 3 PROBLEM FORMULATION . The basic problem of supervised learning on time series data is to learn a mapping from an input temporal sequence to an output sequence . Formally , one can think of the output at each time as being produced from the input via an unknown function that depends on the entire input sequence , or at least on the input up to the time at which the prediction is made . In the discrete-time case , one can write the data generation process as y_k = H_k ( x_0 , . . . , x_{k−1} ) , k = 0 , 1 , . . . ( 1 ) where x_k , y_k denote respectively the input data and output response , and { H_k : k = 0 , 1 , . . . } is a sequence of ground truth functions of increasing input dimension accounting for temporal evolution . The goal of supervised learning is to learn an approximation of the sequence of functions { H_k } given observation data . Recurrent neural networks ( RNNs ) ( Rumelhart et al. , 1986 ) give a natural way to parameterize such a sequence of functions . In the simplest case , the one-layer RNN is given by h_{k+1} = σ ( W h_k + U x_k ) , ŷ_k = c⊤ h_k .
( 2 ) Here , { h_k } are the hidden/latent states , whose evolution is governed by a recursive application of a feed-forward layer with activation σ , and ŷ_k is called the observation or readout . We omit the bias term here and only consider a linear readout or output layer . For each time step k , the mapping { x_0 , . . . , x_{k−1} } ↦ ŷ_k parameterizes a function Ĥ_k ( · ) through the adjustable parameters ( c , W , U ) . Hence , for a particular choice of these parameters , a sequence of functions { Ĥ_k } is constructed at the same time . To understand the working principles of RNNs , we need to characterize how { Ĥ_k } approximates { H_k } . The model ( 2 ) is not easy to analyze due to its discrete iterative nature . Hence , we employ a continuous-time idealization that replaces the time-step index k by a continuous time parameter t . This allows us to employ a large variety of continuum analysis tools to gain insight into the learning problem . Let us now introduce this framework . Continuous-time formulation . Consider a sequence of inputs indexed by a real-valued variable t ∈ R instead of the discrete variable k considered previously . Concretely , we consider the input space X = C_0 ( R , R^d ) , ( 3 ) which is the linear space of continuous functions from R ( time ) to R^d that vanish at infinity . Here d is the dimension of each point in the time series . We denote an element of X by x := { x_t ∈ R^d : t ∈ R } and equip X with the supremum norm ‖x‖_X := sup_{t∈R} ‖x_t‖_∞ . For the space of outputs we will take scalar time series , i.e . the space of bounded continuous functions from R to R : Y = C_b ( R , R ) . ( 4 ) This is because vector-valued outputs can be handled by considering each output component separately . In continuous time , the target relationship ( ground truth ) to be learned is y_t = H_t ( x ) , t ∈ R , ( 5 ) where for each t ∈ R , H_t is a functional H_t : X → R .
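The continuous-time idealization connects back to the discrete model through forward-Euler discretization: one Euler step of a continuous hidden dynamic gives a residual-style update h_{k+1} = h_k + δ·σ(W h_k + U x_k). The sketch below (my own illustration; the weights are random and the step size δ is a hypothetical choice) checks consistency of the discretization by halving δ and observing that the endpoint barely moves.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 3, 2
W = -np.eye(m) + 0.1 * rng.normal(size=(m, m))  # eigenvalues near -1: stable regime
U = rng.normal(size=(m, d))
x_const = rng.normal(size=d)                    # hold the input fixed for simplicity

def euler_endpoint(n_steps, delta, sigma=np.tanh):
    """Residual update h <- h + delta * sigma(W h + U x), n_steps times from h = 0."""
    h = np.zeros(m)
    for _ in range(n_steps):
        h = h + delta * sigma(W @ h + U @ x_const)
    return h

T = 2.0
h_coarse = euler_endpoint(200, T / 200)   # delta = 0.01
h_fine = euler_endpoint(400, T / 400)     # delta = 0.005
gap = np.linalg.norm(h_coarse - h_fine)   # O(delta) discretization error
```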
Correspondingly , we define a continuous version of ( 2 ) as a hypothesis space to model continuous-time functionals : dh_t/dt = σ ( W h_t + U x_t ) , ŷ_t = c⊤ h_t , ( 6 ) whose Euler discretization corresponds to a discrete-time residual RNN . The dynamics then naturally defines a sequence of functionals { Ĥ_t ( x ) = ŷ_t : t ∈ R } , which can be used to approximate the target functionals { H_t } via adjusting ( c , W , U ) . Linear RNNs in continuous time . In this paper we mainly investigate the approximation and optimization properties of linear RNNs , which already reveal the essential effect of dynamics . The linear RNN obeys ( 6 ) with σ being the identity map . Notice that in the theoretical setup , the initial time of the system goes back to −∞ with lim_{t→−∞} x_t = 0 , ∀x ∈ X ; thus by linearity ( H_t ( 0 ) = 0 ) we specify the initial condition of the hidden state h_{−∞} = 0 for consistency . In this case , ( 6 ) has the following solution : ŷ_t = ∫_0^∞ c⊤ e^{Ws} U x_{t−s} ds . ( 7 ) Since we will investigate uniform approximations over large time intervals , we will consider stable RNNs , where W ∈ W_m with W_m = { W ∈ R^{m×m} : eigenvalues of W have negative real parts } . ( 8 ) Owing to the representation of solutions in ( 7 ) , the linear RNN defines a family of functionals Ĥ := ∪_{m≥1} Ĥ_m , Ĥ_m := { { Ĥ_t ( x ) , t ∈ R } : Ĥ_t ( x ) = ∫_0^∞ c⊤ e^{Ws} U x_{t−s} ds , W ∈ W_m , U ∈ R^{m×d} , c ∈ R^m } . ( 9 ) Here , m is the width of the network and controls the complexity of the hypothesis space . Clearly , the family of functionals the RNN can represent is not arbitrary , and must possess some structure . Let us now introduce some definitions of functionals that make these structures precise . Definition 3.1 . Let { H_t : t ∈ R } be a sequence of functionals . 1 . H_t is causal if it does not depend on future values of the input : for every pair of x , x′ ∈ X such that x_s = x′_s for all s ≤ t , we have H_t ( x ) = H_t ( x′ ) . 2 .
H_t is linear and continuous if H_t ( λx + λ′x′ ) = λH_t ( x ) + λ′H_t ( x′ ) for any x , x′ ∈ X and λ , λ′ ∈ R , and sup_{x∈X , ‖x‖_X≤1} |H_t ( x ) | < ∞ , in which case the induced norm can be defined as ‖H_t‖ := sup_{x∈X , ‖x‖_X≤1} |H_t ( x ) | . 3 . H_t is regular if for any sequence { x^{(n)} ∈ X : n ∈ N } such that x^{(n)}_t → 0 for Lebesgue almost every t ∈ R , we have lim_{n→∞} H_t ( x^{(n)} ) = 0 . 4 . { H_t : t ∈ R } is time-homogeneous if H_t ( x ) = H_{t+τ} ( x^{(τ)} ) for any t , τ ∈ R , where x^{(τ)}_s = x_{s−τ} for all s ∈ R , i.e . x^{(τ)} is x whose time index is shifted to the right by τ . Linear , continuous and causal functionals are standard notions . One can think of regular functionals as those that are not determined by the values of the input on an arbitrarily small time interval , e.g . an infinitely thin spike input . Time-homogeneous functionals , on the other hand , are those for which there is no special reference point in time : if the time indices of both the input sequence and the functional are shifted in coordination , the output value remains the same . Given these definitions , the following observation can be verified directly ; its proof is immediate and hence omitted . Proposition 3.1 . Let { Ĥ_t : t ∈ R } be a sequence of functionals in the RNN hypothesis space Ĥ ( see ( 9 ) ) . Then for each t ∈ R , Ĥ_t is a causal , continuous , linear and regular functional . Moreover , the sequence of functionals { Ĥ_t : t ∈ R } is time-homogeneous . | This paper reports a mathematical study of approximation properties of linear RNNs. The first part reports a universal approximation theorem, and presents an analysis of how efficient the approximation is. In particular, it is shown that approximating a slowly decaying, power-law temporal filter requires a large number of neurons, a property the authors refer to as the "curse of memory". | SP:2757a1c9fc4d7c81c497024bd6f3eec65027e352 |
Deep Graph Neural Networks with Shallow Subgraph Samplers | 1 INTRODUCTION . Graph Neural Networks ( GNNs ) have now become the state-of-the-art models for graph mining ( Wu et al. , 2020 ; Hamilton et al. , 2017b ; Zhang et al. , 2019 ) , facilitating applications such as social recommendation ( Monti et al. , 2017 ; Ying et al. , 2018 ; Pal et al. , 2020 ) , knowledge understanding ( Schlichtkrull et al. , 2018 ; Park et al. , 2019 ; Zhang et al. , 2020 ) and drug discovery ( Stokes et al. , 2020 ; Lo et al. , 2018 ) . Despite the numerous architectures proposed ( Kipf & Welling , 2016 ; Hamilton et al. , 2017a ; Veličković et al. , 2018 ) , it still remains an open question how to effectively design deep GNNs . There are two fundamental obstacles intrinsic to the underlying graph structure : • Expressivity challenge : deep GNNs tend to oversmooth ( Li et al. , 2018 ) . They collapse embeddings of different nodes into a fixed low-dimensional subspace after repeated neighbor mixing . • Computation challenge : deep GNNs recursively expand the adjacent nodes along message passing edges . The neighborhood size may grow exponentially with model depth ( Chen et al. , 2017 ) . Due to oversmoothing , one of the most popular GNN architectures , the Graph Convolutional Network ( GCN ) ( Kipf & Welling , 2016 ) , has been theoretically proven incapable of scaling to deep layers ( Oono & Suzuki , 2020 ; Rong et al. , 2020 ; Huang et al. , 2020 ) . Remedies to overcome the GCN limitations are two-fold . From the neural architecture perspective , researchers are actively seeking more expressive neighbor aggregation operations ( Veličković et al. , 2018 ; Hamilton et al. , 2017a ; Xu et al. , 2018a ) , or transferring design components ( such as residual connections ) from deep CNNs to GNNs ( Xu et al. , 2018b ; Li et al. , 2019 ; Huang et al. , 2018 ) . From the data perspective , various works ( Klicpera et al. , 2019a ; b ; Bojchevski et al.
, 2020 ) revisit classic graph analytic algorithms to reconstruct a graph with nicer topological properties . The two kinds of works can also be combined to jointly improve the quality of message passing in deep GNNs . All the above GNN variants take a “ global ” view on the input graph G ( V , E ) — i.e. , all nodes are considered as belonging to the same G , whose size can often be massive . To generate the node embedding , no matter how we modify the architecture and the graph structure , a deep enough GNN would always propagate the influence from the entire node set V into a single target node . Intuitively , for a large graph , most nodes in V barely provide any useful information to the target nodes . We thus regard such “ global view ” on G as one of the root causes for both the expressivity and computation challenges discussed above . In this work , for the node embedding task , we take an alternative “ local view ” and interpret the GNN input as V = ⋃_{v∈V} V [ v ] and E = ⋃_{v∈V} E [ v ] . In other words , each target node v belongs to some small graph G [ v ] capturing the characteristics of only the node v. The entire input graph G is observed as the union of all such local yet latent G [ v ] . Such a simple global-to-local switch of perspective enables us to address both the expressivity and computation challenges without resorting to alternative GNN architectures or reconstructing the graph . Present work : SHADOW-GNN . We propose a “ Deep GNN , shallow sampler ” design principle that helps improve the expressive power and inference efficiency of various GNN architectures . We break the conventional thinking that an L-layer ( deep ) GNN has to aggregate L-hop ( faraway ) neighbors . We argue that the GNN receptive field for a target node should be shallower than the GNN depth . In other words , an L-layer GNN should only operate on a small subgraph G [ v ] surrounding the target node v , where G [ v ] consists of ( part of ) the L0-hop neighborhood .
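The local subgraph G[v] underlying this principle can be sketched with a plain breadth-first search. This is an illustrative k-hop (here, L0-hop) extractor of ours over a dict adjacency, not the authors' implementation:

```python
from collections import deque

# Illustrative "local view": extract the k-hop subgraph G[v] around a target
# node v by BFS over a plain-dict adjacency. A toy sketch of the simplest
# k-hop sampler, not the authors' code.

def khop_subgraph(adj, v, k):
    """Return (nodes, edges) of the k-hop neighborhood subgraph around v."""
    dist = {v: 0}
    frontier = deque([v])
    while frontier:
        u = frontier.popleft()
        if dist[u] == k:
            continue  # do not expand beyond k hops
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                frontier.append(w)
    nodes = set(dist)
    edges = {(u, w) for u in nodes for w in adj[u] if w in nodes}
    return nodes, edges

adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}  # path 0-1-2-3 plus leaf 4
nodes, edges = khop_subgraph(adj, 0, 2)
print(sorted(nodes))  # → [0, 1, 2, 4]
```

Running this per target node yields the family of local graphs G[v] whose union recovers G.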
The deep vs. shallow comparison is reflected by setting L0 < L. We name such a GNN on G [ v ] as a SHADOW-GNN . We justify our design principle from two aspects . Firstly , why do we need the neighborhood to be shallow ? As a motivating example , the average number of 4-hop neighbors for the ogbn-products graph ( Hu et al. , 2020 ) is 0.6M , corresponding to 25 % of the full graph size . Blindly encoding the 0.6M node features into a single embedding vector can create the “ information bottleneck ” ( Alon & Yahav , 2020 ) . The irrelevant information from the majority of the 0.6M nodes may also “ dilute ” the truly useful signals from a small set of close neighbors . A simple solution to the above issues is to manually create a shallow neighborhood by subgraph sampling . The second question regarding SHADOW-GNN is : why do we still need deep GNNs ? Using more layers than the number of hops means the same pair of nodes may exchange messages with each other multiple times . Intuitively , this helps the GNN better absorb the subgraph information . Theoretically , we prove that a GNN deeper than the hops of the subgraph can be more powerful than the 1-dimensional Weisfeiler-Lehman test ( Shervashidze et al. , 2011 ) . A shallow GNN , on the contrary , cannot accurately learn certain simple functions such as the unweighted mean of the shallow neighborhood features . Note that with GCN as the backbone , a SHADOW-GCN still performs signal smoothing in each layer . However , the important distinction is that a deep GCN smooths the full G regardless of the target node , while a SHADOW-GCN constructs a customized smoothing domain G [ v ] for each target v. The variance in those smoothing domains created by SHADOW-GCN encourages variance in the node embedding vectors . With such intuition , our analysis shows that SHADOW-GNN does not oversmooth .
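The oversmoothing contrast above can be checked numerically. The toy demonstration below is ours (graph and feature values are illustrative): repeatedly replacing each node's scalar feature by the mean over itself and its neighbors (random-walk normalization with self-loops) collapses all features on a connected graph toward one common value.

```python
# Toy numeric illustration (ours) of oversmoothing: repeated mean aggregation
# over {self} ∪ neighbors drives all node features on a connected graph to a
# single common value, so embeddings become indistinguishable.

def smooth_step(adj, h):
    return {u: (h[u] + sum(h[w] for w in adj[u])) / (1 + len(adj[u])) for u in adj}

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a 4-node path graph
h = {0: 1.0, 1: -1.0, 2: 2.0, 3: 0.0}         # toy input features
for _ in range(100):                           # 100 "layers" of smoothing
    h = smooth_step(adj, h)
spread = max(h.values()) - min(h.values())
print(spread < 1e-6)  # → True: the features have collapsed
```

Running the same smoothing only inside a small subgraph G[v] around each target v gives each target its own smoothing domain, and hence a different limit value per node, which matches the intuition that SHADOW-GCN keeps embeddings distinct.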
Finally , since the sizes of the shallow neighborhoods are independent of the GNN depth , the computation challenge due to neighbor explosion is automatically addressed . We propose various subgraph samplers for SHADOW-GNN , including the simplest k-hop sampler and a sampler based on personalized PageRank , to improve the inference accuracy and computation efficiency . By experiments on five standard benchmarks , our SHADOW-SAGE and SHADOW-GAT models achieve significant accuracy gains compared with the original GraphSAGE and GAT models . In the meantime , the inference cost is reduced by orders of magnitude . 2 RELATED WORK AND PRELIMINARIES . Deep GNNs . Recently , numerous GNN models ( Kipf & Welling , 2016 ; Defferrard et al. , 2016 ; Hamilton et al. , 2017a ; Veličković et al. , 2018 ; Xu et al. , 2018b ; a ) have been proposed . In general , the input to a GNN is the graph G , and the outputs are representation vectors for each node , capturing both the feature and structural information of the neighborhood . Most state-of-the-art GNNs use shallow models ( i.e. , 2 to 3 layers ) . As first proposed by Li et al . ( 2018 ) and further elaborated by Luan et al . ( 2019 ) ; Oono & Suzuki ( 2020 ) ; Zhao & Akoglu ( 2020 ) ; Huang et al . ( 2020 ) , one of the major challenges to deepen GNNs is the “ oversmoothing ” of node features — each layer aggregation pushes the neighbor features towards similar values . Repeated aggregation over many layers results in node features being averaged over the full graph . A deep GNN may thus generate indistinguishable embeddings for different nodes . Viewing oversmoothing as a limitation of the layer aggregation , researchers develop alternative architectures . AS-GCN ( Huang et al. , 2018 ) , DeepGCN ( Li et al. , 2019 ) and JK-net ( Xu et al. , 2018b ) use skip-connection across layers . MixHop ( Abu-El-Haija et al. , 2019 ) , Snowball ( Luan et al. , 2019 ) and DAGNN ( Liu et al. 
, 2020 ) enable multi-hop message passing within a single layer . GraphSAGE ( Hamilton et al. , 2017a ) and GCNII ( Ming Chen et al. , 2020 ) encourage self-to-self message passing which effectively form an implicit skip-connection . GIN ( Xu et al. , 2018a ) and DeeperGCN ( Li et al. , 2020a ) propose more expressive neighbor aggregation operations . All the above focus on architectural exploration , which is a research direction orthogonal to ours . We can construct the SHADOW version of these GNNs in a plug-and-play fashion . Lastly , DropEdge ( Rong et al. , 2020 ) and Bayesian-GDC ( Hasanzadeh et al. , 2020 ) propose regularization techniques by adapting dropout ( Srivastava et al. , 2014 ) to graphs . Such techniques are only applied during training , and so oversmoothing during inference may not be alleviated . Learning from structural information . Another line of research is to go beyond the layer-wise message passing and more explicitly utilize the graph structural information ( Wu et al. , 2019 ; Klicpera et al. , 2019a ; Bojchevski et al. , 2020 ; Liu et al. , 2020 ; Frasca et al. , 2020 ; You et al. , 2019 ; Li et al. , 2020b ) . In particular , APPNP ( Klicpera et al. , 2019a ) and PPRGo ( Bojchevski et al. , 2020 ) utilize the personalized PageRank ( Page et al. , 1999 ) algorithm to re-define neighbor connections — instead of propagating features along the ( noisy ) graph edges , any nodes of structural significance can directly propagate to the target node . Other related methods such as GDC ( Klicpera et al. , 2019b ) and AM-GCN ( Wang et al. , 2020 ) reconstructs the adjacency matrix in each GNN layer to short-cut important multi-hop neighbors . Note that all the above methods takes a global view on G and operate the neural networks on the full graph . On the other hand , the idea of using subgraph samples to improve the GNN efficiency has also been explored . 
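The personalized PageRank idea referenced above can be sketched by power iteration: r ← α·e_v + (1 − α)·r·P, with P the random-walk transition matrix. High-scoring nodes then form a structurally important neighborhood of the target, which is the idea behind APPNP/PPRGo-style propagation. The value of α, the iteration count, and the toy graph below are our illustrative choices, not those methods' implementations:

```python
# Sketch of personalized PageRank (PPR) scores from a target node v by power
# iteration. Illustrative parameters; every node must have >= 1 neighbor.

def ppr_scores(adj, v, alpha=0.15, iters=200):
    r = {u: 0.0 for u in adj}
    r[v] = 1.0
    for _ in range(iters):
        # teleport mass alpha back to the target node v each step
        nxt = {u: (alpha if u == v else 0.0) for u in adj}
        for u in adj:
            share = (1 - alpha) * r[u] / len(adj[u])
            for w in adj[u]:
                nxt[w] += share
        r = nxt
    return r

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
r = ppr_scores(adj, 0)
print(max(r, key=r.get))  # → 0: the target node itself scores highest
```

Keeping only the top-scoring nodes yields a PPR-based shallow neighborhood, the second sampler family mentioned for SHADOW-GNN.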
For example , SEAL ( Zhang & Chen , 2018 ) extracts local k-hop enclosing subgraphs to perform link prediction . GraphSAINT ( Zeng et al. , 2020 ) proposes random walk samplers to construct minibatches during training . Notations . We focus on the node classification task , although our design principle can be naturally extended to other tasks . Let G ( V , E , X ) be an undirected graph , with node set V , edge set E ⊆ V×V and node feature matrix X ∈ R^{|V|×d} . The u-th row of X corresponds to the length-d feature of node u . Let A be the adjacency matrix of G where A_{u,v} = 1 if edge ( u , v ) ∈ E and A_{u,v} = 0 otherwise . Denote Ã as the adjacency matrix after symmetric normalization ( used by GCN ) , and Â as the one after random walk normalization ( used by GraphSAGE ) . Let subscript “ [ u ] ” mark the quantities corresponding to a small subgraph surrounding node u . For example , the subgraph itself is G [ u ] . For an L-layer GNN , let superscript “ ( ℓ ) ” denote the layer-ℓ quantities ( 1 ≤ ℓ ≤ L ) . Let d^(ℓ) be the number of channels for layer ℓ ; H^(ℓ−1) ∈ R^{|V|×d^(ℓ−1)} and H^(ℓ) ∈ R^{|V|×d^(ℓ)} be the input and output feature matrices . Thus , H^(0) = X and d^(0) = d. Further , let Y = H^(L) . The operation of a layer can be abstracted as H^(ℓ) = f ( H^(ℓ−1) , A ; W^(ℓ) ) , where W^(ℓ) are the learnable weights . | To address the oversmoothing problem and reduce the computational cost of GNNs, this paper proposes to train deep GNNs with shallow subgraph samplers. The following two theoretical proofs provide insightful motivations of Shadow-GNN: (1) Obtaining node embeddings within shallow subgraphs can avoid oversmoothing; (2) Deep GNNs are strictly more expressive than a shallow one. Experiments are performed on five different graph datasets with 3 and 5 layer GNNs coupled with a k-hop sampler or a Personalized PageRank (PPR) sampler. | SP:add3ccfc58941a3bf72517666a38ca1d473b278d |
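The abstract layer operation H^(ℓ) = f(H^(ℓ−1), A; W^(ℓ)) from the notations above can be instantiated, for GCN, as ReLU(Ã·H^(ℓ−1)·W^(ℓ)) with Ã = D̃^{-1/2}(A + I)D̃^{-1/2}. The pure-Python sketch below is ours, with toy sizes and toy weights; real implementations use sparse tensor libraries:

```python
import math

# Toy instantiation (ours) of one GNN layer H(l) = f(H(l-1), A; W(l)):
# the GCN form ReLU(Ã · H · W), Ã = D̃^{-1/2}(A + I)D̃^{-1/2}.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def gcn_layer(A, H, W):
    n = len(A)
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]  # add self-loops
    deg = [sum(row) for row in A_hat]
    A_norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
              for i in range(n)]                    # symmetric normalization Ã
    Z = matmul(matmul(A_norm, H), W)
    return [[max(0.0, z) for z in row] for row in Z]  # ReLU nonlinearity

A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]        # 3-node path graph
H0 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # H(0) = X, with d = 2
W1 = [[1.0, -1.0], [0.5, 1.0]]               # W(1): toy "learnable" weights
H1 = gcn_layer(A, H0, W1)
print(len(H1), len(H1[0]))  # → 3 2
```

A SHADOW variant would call the same layer with the adjacency of the sampled subgraph G[v] in place of the full A.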
Deep Graph Neural Networks with Shallow Subgraph Samplers | 1 INTRODUCTION . Graph Neural Networks ( GNNs ) have now become the state-of-the-art models for graph mining ( Wu et al. , 2020 ; Hamilton et al. , 2017b ; Zhang et al. , 2019 ) , facilitating applications such as social recommendation ( Monti et al. , 2017 ; Ying et al. , 2018 ; Pal et al. , 2020 ) , knowledge understanding ( Schlichtkrull et al. , 2018 ; Park et al. , 2019 ; Zhang et al. , 2020 ) and drug discovery ( Stokes et al. , 2020 ; Lo et al. , 2018 ) . With the numerous architectures proposed ( Kipf & Welling , 2016 ; Hamilton et al. , 2017a ; Veličković et al. , 2018 ) , it still remains an open question how to effectively design deep GNNs . There are two fundamental obstacles that are intrinsic to the underlying graph structure : • Expressivity challenge : deep GNNs tend to oversmooth ( Li et al. , 2018 ) . They collapse embeddings of different nodes into a fixed low-dimensional subspace after repeated neighbor mixing . • Computation challenge : deep GNNs recursively expand the adjacent nodes along message passing edges . The neighborhood size may grow exponentially with model depth ( Chen et al. , 2017 ) . Due to oversmoothing , one of the most popular GNN architectures , Graph Convolutional Network ( GCN ) ( Kipf & Welling , 2016 ) , has been theoretically proven to be incapable of scaling to deep layers ( Oono & Suzuki , 2020 ; Rong et al. , 2020 ; Huang et al. , 2020 ) . Remedies to overcome the GCN limitations are two-fold . From the neural architecture perspective , researchers are actively seeking more expressive neighbor aggregation operations ( Veličković et al. , 2018 ; Hamilton et al. , 2017a ; Xu et al. , 2018a ) , or transferring design components ( such as residual connection ) from deep CNNs to GNNs ( Xu et al. , 2018b ; Li et al. , 2019 ; Huang et al. , 2018 ) . From the data perspective , various works ( Klicpera et al. , 2019a ; b ; Bojchevski et al.
, 2020 ) revisit classic graph analytic algorithms to reconstruct a graph with nicer topological properties . The two kinds of works can also be combined to jointly improve the quality of message passing in deep GNNs . All the above GNN variants take a “ global ” view on the input graph G ( V , E ) — i.e. , all nodes are considered as belonging to the same G , whose size can often be massive . To generate the node embedding , no matter how we modify the architecture and the graph structure , a deep enough GNN would always propagate the influence from the entire node set V into a single target node . Intuitively , for a large graph , most nodes in V barely provide any useful information to the target nodes . We thus regard such “ global view ” on G as one of the root causes for both the expressivity and computation challenges discussed above . In this work , for the node embedding task , we take an alternative “ local view ” and interpret the GNN input as V = ⋃_{v∈V} V [ v ] and E = ⋃_{v∈V} E [ v ] . In other words , each target node v belongs to some small graph G [ v ] capturing the characteristics of only the node v. The entire input graph G is observed as the union of all such local yet latent G [ v ] . Such a simple global-to-local switch of perspective enables us to address both the expressivity and computation challenges without resorting to alternative GNN architectures or reconstructing the graph . Present work : SHADOW-GNN . We propose a “ Deep GNN , shallow sampler ” design principle that helps improve the expressive power and inference efficiency of various GNN architectures . We break the conventional thinking that an L-layer ( deep ) GNN has to aggregate L-hop ( faraway ) neighbors . We argue that the GNN receptive field for a target node should be shallower than the GNN depth . In other words , an L-layer GNN should only operate on a small subgraph G [ v ] surrounding the target node v , where G [ v ] consists of ( part of ) the L0-hop neighborhood .
The deep vs. shallow comparison is reflected by setting L0 < L. We name such a GNN on G [ v ] as a SHADOW-GNN . We justify our design principle from two aspects . Firstly , why do we need the neighborhood to be shallow ? As a motivating example , the average number of 4-hop neighbors for the ogbn-products graph ( Hu et al. , 2020 ) is 0.6M , corresponding to 25 % of the full graph size . Blindly encoding the 0.6M node features into a single embedding vector can create the “ information bottleneck ” ( Alon & Yahav , 2020 ) . The irrelevant information from the majority of the 0.6M nodes may also “ dilute ” the truly useful signals from a small set of close neighbors . A simple solution to the above issues is to manually create a shallow neighborhood by subgraph sampling . The second question regarding SHADOW-GNN is : why do we still need deep GNNs ? Using more layers than the number of hops means the same pair of nodes may exchange messages with each other multiple times . Intuitively , this helps the GNN better absorb the subgraph information . Theoretically , we prove that a GNN deeper than the hops of the subgraph can be more powerful than the 1-dimensional Weisfeiler-Lehman test ( Shervashidze et al. , 2011 ) . A shallow GNN , on the contrary , cannot accurately learn certain simple functions such as the unweighted mean of the shallow neighborhood features . Note that with GCN as the backbone , a SHADOW-GCN still performs signal smoothing in each layer . However , the important distinction is that a deep GCN smooths the full G regardless of the target node , while a SHADOW-GCN constructs a customized smoothing domain G [ v ] for each target v. The variance in those smoothing domains created by SHADOW-GCN encourages variance in the node embedding vectors . With such intuition , our analysis shows that SHADOW-GNN does not oversmooth .
Finally , since the sizes of the shallow neighborhoods are independent of the GNN depth , the computation challenge due to neighbor explosion is automatically addressed . We propose various subgraph samplers for SHADOW-GNN , including the simplest k-hop sampler and a sampler based on personalized PageRank , to improve the inference accuracy and computation efficiency . By experiments on five standard benchmarks , our SHADOW-SAGE and SHADOW-GAT models achieve significant accuracy gains compared with the original GraphSAGE and GAT models . In the meantime , the inference cost is reduced by orders of magnitude . 2 RELATED WORK AND PRELIMINARIES . Deep GNNs . Recently , numerous GNN models ( Kipf & Welling , 2016 ; Defferrard et al. , 2016 ; Hamilton et al. , 2017a ; Veličković et al. , 2018 ; Xu et al. , 2018b ; a ) have been proposed . In general , the input to a GNN is the graph G , and the outputs are representation vectors for each node , capturing both the feature and structural information of the neighborhood . Most state-of-the-art GNNs use shallow models ( i.e. , 2 to 3 layers ) . As first proposed by Li et al . ( 2018 ) and further elaborated by Luan et al . ( 2019 ) ; Oono & Suzuki ( 2020 ) ; Zhao & Akoglu ( 2020 ) ; Huang et al . ( 2020 ) , one of the major challenges to deepen GNNs is the “ oversmoothing ” of node features — each layer aggregation pushes the neighbor features towards similar values . Repeated aggregation over many layers results in node features being averaged over the full graph . A deep GNN may thus generate indistinguishable embeddings for different nodes . Viewing oversmoothing as a limitation of the layer aggregation , researchers develop alternative architectures . AS-GCN ( Huang et al. , 2018 ) , DeepGCN ( Li et al. , 2019 ) and JK-net ( Xu et al. , 2018b ) use skip-connection across layers . MixHop ( Abu-El-Haija et al. , 2019 ) , Snowball ( Luan et al. , 2019 ) and DAGNN ( Liu et al. 
, 2020 ) enable multi-hop message passing within a single layer . GraphSAGE ( Hamilton et al. , 2017a ) and GCNII ( Ming Chen et al. , 2020 ) encourage self-to-self message passing which effectively form an implicit skip-connection . GIN ( Xu et al. , 2018a ) and DeeperGCN ( Li et al. , 2020a ) propose more expressive neighbor aggregation operations . All the above focus on architectural exploration , which is a research direction orthogonal to ours . We can construct the SHADOW version of these GNNs in a plug-and-play fashion . Lastly , DropEdge ( Rong et al. , 2020 ) and Bayesian-GDC ( Hasanzadeh et al. , 2020 ) propose regularization techniques by adapting dropout ( Srivastava et al. , 2014 ) to graphs . Such techniques are only applied during training , and so oversmoothing during inference may not be alleviated . Learning from structural information . Another line of research is to go beyond the layer-wise message passing and more explicitly utilize the graph structural information ( Wu et al. , 2019 ; Klicpera et al. , 2019a ; Bojchevski et al. , 2020 ; Liu et al. , 2020 ; Frasca et al. , 2020 ; You et al. , 2019 ; Li et al. , 2020b ) . In particular , APPNP ( Klicpera et al. , 2019a ) and PPRGo ( Bojchevski et al. , 2020 ) utilize the personalized PageRank ( Page et al. , 1999 ) algorithm to re-define neighbor connections — instead of propagating features along the ( noisy ) graph edges , any nodes of structural significance can directly propagate to the target node . Other related methods such as GDC ( Klicpera et al. , 2019b ) and AM-GCN ( Wang et al. , 2020 ) reconstructs the adjacency matrix in each GNN layer to short-cut important multi-hop neighbors . Note that all the above methods takes a global view on G and operate the neural networks on the full graph . On the other hand , the idea of using subgraph samples to improve the GNN efficiency has also been explored . 
For example , SEAL ( Zhang & Chen , 2018 ) extracts local k-hop enclosing subgraphs to perform link prediction . GraphSAINT ( Zeng et al. , 2020 ) proposes random walk samplers to construct minibatches during training . Notations . We focus on the node classification task , although our design principle can be naturally extended to other tasks . Let G ( V , E , X ) be an undirected graph , with node set V , edge set E ⊆ V×V and node feature matrix X ∈ R^{|V|×d} . The u-th row of X corresponds to the length-d feature of node u . Let A be the adjacency matrix of G where A_{u,v} = 1 if edge ( u , v ) ∈ E and A_{u,v} = 0 otherwise . Denote Ã as the adjacency matrix after symmetric normalization ( used by GCN ) , and Â as the one after random walk normalization ( used by GraphSAGE ) . Let subscript “ [ u ] ” mark the quantities corresponding to a small subgraph surrounding node u . For example , the subgraph itself is G [ u ] . For an L-layer GNN , let superscript “ ( ℓ ) ” denote the layer-ℓ quantities ( 1 ≤ ℓ ≤ L ) . Let d^(ℓ) be the number of channels for layer ℓ ; H^(ℓ−1) ∈ R^{|V|×d^(ℓ−1)} and H^(ℓ) ∈ R^{|V|×d^(ℓ)} be the input and output feature matrices . Thus , H^(0) = X and d^(0) = d. Further , let Y = H^(L) . The operation of a layer can be abstracted as H^(ℓ) = f ( H^(ℓ−1) , A ; W^(ℓ) ) , where W^(ℓ) are the learnable weights . | This paper proposes a new extension of GNNs to deep GNNs, which uses subgraphs to keep the computational costs low for training large graphs. It addresses the two main reasons that GNNs have not previously been extended to deep GNNs: expressivity and computational cost. Increasing the number of layers in a GNN leads to averaging over more nodes, which in turn collapses the learned embeddings. The paper claims that using shallow graphs instead of the full graphs avoids this oversmoothing issue. Additionally, using the full graph is computationally expensive since the neighborhood sizes grow with the number of neighbors.
Using shallow subgraphs instead allows the size of the neighborhoods to remain constant as the number of layers increase. To this end, the paper presents SHADOW-GNN, a Deep GNN with shallow sampling. They extend this framework to GraphSAGE and GAT models and show that it improves performance over the original model with a lower computational cost. Overall, this method seems well-motivated, and both theoretical and empirical results support their claims. There are a few points on which clarification from the authors would be helpful. | SP:add3ccfc58941a3bf72517666a38ca1d473b278d |
Deep Graph Neural Networks with Shallow Subgraph Samplers | 1 INTRODUCTION . Graph Neural Networks ( GNNs ) have now become the state-of-the-art models for graph mining ( Wu et al. , 2020 ; Hamilton et al. , 2017b ; Zhang et al. , 2019 ) , facilitating applications such as social recommendation ( Monti et al. , 2017 ; Ying et al. , 2018 ; Pal et al. , 2020 ) , knowledge understanding ( Schlichtkrull et al. , 2018 ; Park et al. , 2019 ; Zhang et al. , 2020 ) and drug discovery ( Stokes et al. , 2020 ; Lo et al. , 2018 ) . With the numerous architectures proposed ( Kipf & Welling , 2016 ; Hamilton et al. , 2017a ; Veličković et al. , 2018 ) , it still remains an open question how to effectively design deep GNNs . There are two fundamental obstacles that are intrinsic to the underlying graph structure : • Expressivity challenge : deep GNNs tend to oversmooth ( Li et al. , 2018 ) . They collapse embeddings of different nodes into a fixed low-dimensional subspace after repeated neighbor mixing . • Computation challenge : deep GNNs recursively expand the adjacent nodes along message passing edges . The neighborhood size may grow exponentially with model depth ( Chen et al. , 2017 ) . Due to oversmoothing , one of the most popular GNN architectures , Graph Convolutional Network ( GCN ) ( Kipf & Welling , 2016 ) , has been theoretically proven to be incapable of scaling to deep layers ( Oono & Suzuki , 2020 ; Rong et al. , 2020 ; Huang et al. , 2020 ) . Remedies to overcome the GCN limitations are two-fold . From the neural architecture perspective , researchers are actively seeking more expressive neighbor aggregation operations ( Veličković et al. , 2018 ; Hamilton et al. , 2017a ; Xu et al. , 2018a ) , or transferring design components ( such as residual connection ) from deep CNNs to GNNs ( Xu et al. , 2018b ; Li et al. , 2019 ; Huang et al. , 2018 ) . From the data perspective , various works ( Klicpera et al. , 2019a ; b ; Bojchevski et al.
, 2020 ) revisit classic graph analytic algorithms to reconstruct a graph with nicer topological properties . The two kinds of works can also be combined to jointly improve the quality of message passing in deep GNNs . All the above GNN variants take a “ global ” view on the input graph G ( V , E ) — i.e. , all nodes are considered as belonging to the same G , whose size can often be massive . To generate the node embedding , no matter how we modify the architecture and the graph structure , a deep enough GNN would always propagate the influence from the entire node set V into a single target node . Intuitively , for a large graph , most nodes in V barely provide any useful information to the target nodes . We thus regard such “ global view ” on G as one of the root causes for both the expressivity and computation challenges discussed above . In this work , for the node embedding task , we take an alternative “ local view ” and interpret the GNN input as V = ⋃_{v∈V} V [ v ] and E = ⋃_{v∈V} E [ v ] . In other words , each target node v belongs to some small graph G [ v ] capturing the characteristics of only the node v. The entire input graph G is observed as the union of all such local yet latent G [ v ] . Such a simple global-to-local switch of perspective enables us to address both the expressivity and computation challenges without resorting to alternative GNN architectures or reconstructing the graph . Present work : SHADOW-GNN . We propose a “ Deep GNN , shallow sampler ” design principle that helps improve the expressive power and inference efficiency of various GNN architectures . We break the conventional thinking that an L-layer ( deep ) GNN has to aggregate L-hop ( faraway ) neighbors . We argue that the GNN receptive field for a target node should be shallower than the GNN depth . In other words , an L-layer GNN should only operate on a small subgraph G [ v ] surrounding the target node v , where G [ v ] consists of ( part of ) the L0-hop neighborhood .
The deep vs. shallow comparison is reflected by setting L0 < L. We name such a GNN on G [ v ] as a SHADOW-GNN . We justify our design principle from two aspects . Firstly , why do we need the neighborhood to be shallow ? As a motivating example , the average number of 4-hop neighbors for the ogbn-products graph ( Hu et al. , 2020 ) is 0.6M , corresponding to 25 % of the full graph size . Blindly encoding the 0.6M node features into a single embedding vector can create the “ information bottleneck ” ( Alon & Yahav , 2020 ) . The irrelevant information from the majority of the 0.6M nodes may also “ dilute ” the truly useful signals from a small set of close neighbors . A simple solution to the above issues is to manually create a shallow neighborhood by subgraph sampling . The second question regarding SHADOW-GNN is : why do we still need deep GNNs ? Using more layers than the number of hops means the same pair of nodes may exchange messages with each other multiple times . Intuitively , this helps the GNN better absorb the subgraph information . Theoretically , we prove that a GNN deeper than the hops of the subgraph can be more powerful than the 1-dimensional Weisfeiler-Lehman test ( Shervashidze et al. , 2011 ) . A shallow GNN , on the contrary , cannot accurately learn certain simple functions such as the unweighted mean of the shallow neighborhood features . Note that with GCN as the backbone , a SHADOW-GCN still performs signal smoothing in each layer . However , the important distinction is that a deep GCN smooths the full G regardless of the target node , while a SHADOW-GCN constructs a customized smoothing domain G [ v ] for each target v. The variance in those smoothing domains created by SHADOW-GCN encourages variance in the node embedding vectors . With such intuition , our analysis shows that SHADOW-GNN does not oversmooth .
Finally , since the sizes of the shallow neighborhoods are independent of the GNN depth , the computation challenge due to neighbor explosion is automatically addressed . We propose various subgraph samplers for SHADOW-GNN , including the simplest k-hop sampler and a sampler based on personalized PageRank , to improve the inference accuracy and computation efficiency . By experiments on five standard benchmarks , our SHADOW-SAGE and SHADOW-GAT models achieve significant accuracy gains compared with the original GraphSAGE and GAT models . In the meantime , the inference cost is reduced by orders of magnitude . 2 RELATED WORK AND PRELIMINARIES . Deep GNNs . Recently , numerous GNN models ( Kipf & Welling , 2016 ; Defferrard et al. , 2016 ; Hamilton et al. , 2017a ; Veličković et al. , 2018 ; Xu et al. , 2018b ; a ) have been proposed . In general , the input to a GNN is the graph G , and the outputs are representation vectors for each node , capturing both the feature and structural information of the neighborhood . Most state-of-the-art GNNs use shallow models ( i.e. , 2 to 3 layers ) . As first proposed by Li et al . ( 2018 ) and further elaborated by Luan et al . ( 2019 ) ; Oono & Suzuki ( 2020 ) ; Zhao & Akoglu ( 2020 ) ; Huang et al . ( 2020 ) , one of the major challenges to deepen GNNs is the “ oversmoothing ” of node features — each layer aggregation pushes the neighbor features towards similar values . Repeated aggregation over many layers results in node features being averaged over the full graph . A deep GNN may thus generate indistinguishable embeddings for different nodes . Viewing oversmoothing as a limitation of the layer aggregation , researchers develop alternative architectures . AS-GCN ( Huang et al. , 2018 ) , DeepGCN ( Li et al. , 2019 ) and JK-net ( Xu et al. , 2018b ) use skip-connection across layers . MixHop ( Abu-El-Haija et al. , 2019 ) , Snowball ( Luan et al. , 2019 ) and DAGNN ( Liu et al. 
, 2020 ) enable multi-hop message passing within a single layer . GraphSAGE ( Hamilton et al. , 2017a ) and GCNII ( Ming Chen et al. , 2020 ) encourage self-to-self message passing which effectively form an implicit skip-connection . GIN ( Xu et al. , 2018a ) and DeeperGCN ( Li et al. , 2020a ) propose more expressive neighbor aggregation operations . All the above focus on architectural exploration , which is a research direction orthogonal to ours . We can construct the SHADOW version of these GNNs in a plug-and-play fashion . Lastly , DropEdge ( Rong et al. , 2020 ) and Bayesian-GDC ( Hasanzadeh et al. , 2020 ) propose regularization techniques by adapting dropout ( Srivastava et al. , 2014 ) to graphs . Such techniques are only applied during training , and so oversmoothing during inference may not be alleviated . Learning from structural information . Another line of research is to go beyond the layer-wise message passing and more explicitly utilize the graph structural information ( Wu et al. , 2019 ; Klicpera et al. , 2019a ; Bojchevski et al. , 2020 ; Liu et al. , 2020 ; Frasca et al. , 2020 ; You et al. , 2019 ; Li et al. , 2020b ) . In particular , APPNP ( Klicpera et al. , 2019a ) and PPRGo ( Bojchevski et al. , 2020 ) utilize the personalized PageRank ( Page et al. , 1999 ) algorithm to re-define neighbor connections — instead of propagating features along the ( noisy ) graph edges , any nodes of structural significance can directly propagate to the target node . Other related methods such as GDC ( Klicpera et al. , 2019b ) and AM-GCN ( Wang et al. , 2020 ) reconstructs the adjacency matrix in each GNN layer to short-cut important multi-hop neighbors . Note that all the above methods takes a global view on G and operate the neural networks on the full graph . On the other hand , the idea of using subgraph samples to improve the GNN efficiency has also been explored . 
For example , SEAL ( Zhang & Chen , 2018 ) extracts local k-hop enclosing subgraphs to perform link prediction . GraphSAINT ( Zeng et al. , 2020 ) proposes random walk samplers to construct minibatches during training . Notations . We focus on the node classification task , although our design principle can be naturally extended to other tasks . Let G ( V , E , X ) be an undirected graph , with node set V , edge set E ⊆ V×V and node feature matrix X ∈ R^{|V|×d} . The u-th row of X corresponds to the length-d feature of node u . Let A be the adjacency matrix of G where A_{u,v} = 1 if edge ( u , v ) ∈ E and A_{u,v} = 0 otherwise . Denote Ã as the adjacency matrix after symmetric normalization ( used by GCN ) , and Â as the one after random walk normalization ( used by GraphSAGE ) . Let subscript “ [ u ] ” mark the quantities corresponding to a small subgraph surrounding node u . For example , the subgraph itself is G [ u ] . For an L-layer GNN , let superscript “ ( ℓ ) ” denote the layer-ℓ quantities ( 1 ≤ ℓ ≤ L ) . Let d^(ℓ) be the number of channels for layer ℓ ; H^(ℓ−1) ∈ R^{|V|×d^(ℓ−1)} and H^(ℓ) ∈ R^{|V|×d^(ℓ)} be the input and output feature matrices . Thus , H^(0) = X and d^(0) = d. Further , let Y = H^(L) . The operation of a layer can be abstracted as H^(ℓ) = f ( H^(ℓ−1) , A ; W^(ℓ) ) , where W^(ℓ) are the learnable weights . | The paper proposes a simple but interesting new graph sampling method for graph neural networks, called “deep GNN, shallow sampler”. Centered on the target nodes, they only sample shallow subgraphs within the $L_0$-hop neighborhood and then run an $L$-layer GNN ($L>L_0$) on these subgraphs and aggregate their embeddings. In this way, they can limit the message passing only within a shallow neighborhood to exclude noisy nodes; and they can also improve the expressivity by using a deep GNN. To my understanding, the two most similar works are GraphSAGE and GraphSAINT.
Compared to GraphSAGE, it samples subgraphs instead of just $l$-hop nodes (which means the samples may contain more edges/cycles), and it can be more expressive; compared to GraphSAINT, it requires the samples to be centered around target nodes and shallow, and it also changes the way of subgraph ensembling and applies it to the testing phase as well. | SP:add3ccfc58941a3bf72517666a38ca1d473b278d |
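The abstract layer operation H^(ℓ) = f(H^(ℓ−1), A; W^(ℓ)) from the notation above, instantiated with GCN-style symmetric normalization Ã = D^{-1/2}(A + I)D^{-1/2}, can be sketched in NumPy. Shapes and weights here are made up for illustration; a real GNN would learn W^(ℓ) by gradient descent.

```python
import numpy as np

def sym_normalize(A):
    """GCN-style symmetric normalization: A_tilde = D^{-1/2} (A + I) D^{-1/2}."""
    A_loop = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_loop.sum(axis=1))
    return A_loop * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(H_prev, A_tilde, W):
    """One layer H^{(l)} = ReLU(A_tilde @ H^{(l-1)} @ W^{(l)})."""
    return np.maximum(A_tilde @ H_prev @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # 3-node star graph
A_tilde = sym_normalize(A)
H0 = rng.normal(size=(3, 4))   # node features X, so d^{(0)} = 4
W1 = rng.normal(size=(4, 8))   # layer-1 weights, d^{(1)} = 8
H1 = gcn_layer(H0, A_tilde, W1)
```

A SHADOW-style variant would run this same layer on the extracted subgraph adjacency A_{[u]} rather than the full-graph A.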
Attention Based Joint Learning for Supervised Electrocardiogram Arrhythmia Differentiation with Unsupervised Abnormal Beat Segmentation | 1 INTRODUCTION . Arrhythmia in an electrocardiogram ( ECG ) is a reflection of heart conduction abnormality and occurs randomly among normal beats . Deep learning based methods have demonstrated strong performance in classifying different types of arrhythmia . There are plenty of works on classifying a single beat , involving convolutional neural networks ( CNN ) ( Acharya et al. , 2017b ; Zubair et al. , 2016 ) , long short-term memory ( LSTM ) ( Yildirim , 2018 ) , and generative adversarial networks ( GAN ) ( Golany & Radinsky , 2019 ) . For these methods to work in a clinical setting , however , a good segmenter is needed to accurately extract a single beat from an ECG segment , which may be hard when abnormal beats are present . Alternatively , other works ( Acharya et al. , 2017a ; Hannun et al. , 2019 ) try to directly identify the genres of arrhythmia present in an ECG segment . The limitation of these works is that they work as a black box and fail to provide cardiologists with any clue on how the prediction is made , such as the location of the associated abnormal beats . In terms of ECG segmentation , there are different tasks , such as segmenting ECG records into beats or into the P wave , QRS complex , and T wave . On one hand , some existing works take advantage of signal processing techniques to locate fiducial points of the PQRST complex so that the ECG signals can be divided . For example , the Pan-Tompkins algorithm ( Pan & Tompkins , 1985 ) uses a combination of filters , squaring , and moving window integration to detect the QRS complex . The shortcomings of these methods are that handcrafted selection of filter parameters and thresholds is needed . More importantly , they are unable to distinguish abnormal heartbeats from normal ones . To address these issues , Moskalenko et al . ( 2019 ) ; Oh et al . 
( 2019 ) deploy CNNs for automatic beat segmentation . However , the quality of these methods highly depends on the labels for fiducial points of ECG signals , the annotation process of which can be laborious and sometimes very hard . Besides , due to the high morphological variation of arrhythmia , strong variations exist even between annotations from experienced cardiologists . As such , unsupervised learning based approaches might be a better choice . Inspired by humans ' perception of ECG signals , our proposed framework first locates the abnormal beats in an ECG segment in the form of an attention map and then performs abnormal beat classification by focusing on these beats . Thus , the framework not only differentiates arrhythmia types but also identifies the location of the associated abnormal beats for better interpretability of the result . It is worth noting that , in our workflow , we only make use of annotations for the type of abnormality in each ECG segment , without abnormal beat localization information during training , given the difficulty and tedious effort in obtaining the latter . We validate our methods on two datasets from different sources . The first one contains 508 12-lead ECG records of Premature Ventricular Contraction patients , which are categorized into different classes by the origin of premature contraction ( e.g. , left ventricle ( LV ) or right ventricle ( RV ) ) . For the other dataset , we process signals in the MIT-BIH Arrhythmia dataset into segments of standard length . This dataset includes various types of abnormal beats , and we select 2627 segments with PVC present and 356 segments with Atrial Premature Beat ( APB ) present . 
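The Pan-Tompkins front end mentioned in the introduction can be sketched roughly as a derivative, squaring, and moving-window-integration pipeline. This is only an illustrative simplification: the bandpass filtering stage of the real algorithm is omitted, and the sampling rate, window length, and synthetic spike signal below are assumptions, not from the paper.

```python
import numpy as np

def pan_tompkins_feature(ecg, fs=360, window_ms=150):
    """Simplified Pan-Tompkins front end (bandpass stage omitted):
    derivative -> squaring -> moving-window integration."""
    deriv = np.diff(ecg, prepend=ecg[0])    # emphasize the steep QRS slope
    squared = deriv ** 2                    # rectify and amplify large slopes
    k = max(1, int(fs * window_ms / 1000))  # ~150 ms integration window
    kernel = np.ones(k) / k
    return np.convolve(squared, kernel, mode="same")

# Synthetic trace: flat baseline with one sharp spike standing in for a QRS complex
ecg = np.zeros(720)
ecg[360] = 1.0
feat = pan_tompkins_feature(ecg)
peak = int(np.argmax(feat))  # integrated energy peaks near the spike
```

A real detector would then threshold `feat` adaptively to pick QRS locations; as the text notes, that thresholding is exactly the handcrafted step these methods require.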
Experiments on both datasets show quantitative evidence that introducing the segmentation of abnormal beats through an attention map , although unsupervised , can in fact benefit the arrhythmia classification performance as measured by accuracy , sensitivity , specificity , and area under the Receiver Operating Characteristic ( ROC ) curve . At the same time , a grading study by experts qualitatively demonstrates our method 's promising capability to segment abnormal beats among normal ones , which can provide useful insight into the classification result . Our code and dataset , which is the first for the challenging PVC differentiation problem , will be released to the public . 2 RELATED WORKS . Multitask learning There are many works devoted to training one deep learning model for multiple tasks rather than one specific task , like simultaneous segmentation and classification . ( Yang et al. , 2017 ) solves skin lesion segmentation and classification at the same time by utilizing similarities and differences across tasks . In the area of ECG signals , ( Oh et al. , 2019 ) modifies UNet to output the localization of R peaks and arrhythmia predictions simultaneously . What those two works have in common is that different tasks share certain layers in feature extraction . In contrast , our segmenter and classifier are independent models and there is no layer sharing between them . As can be seen in Figure 1 , we use attention maps as a bridge connecting the two models . ( Mehta et al. , 2018 ) segments different types of tissues in breast biopsy images with a UNet and applies a discriminative map , generated by a subbranch of the UNet , to the segmentation result as input to an MLP for diagnosis . However , their segmentation and classification tasks are not trained end-to-end . ( Zhou et al. , 2019 ) proposes a method for collaborative learning of disease grading and lesion segmentation . 
They first perform a traditional semantic segmentation task with a small portion of annotated labels , and then they jointly train the segmenter and classifier for fine-tuning with an attention mechanism , which is applied on the latent features in the classification model , different from our method . Another difference is that for most existing multitask learning works , labels for each task are necessary , i.e. , all tasks are supervised . Our method , on the other hand , only requires the labels of one task ( classification ) , leading to a joint supervised/unsupervised scheme . Attention mechanism After being first proposed for machine translation ( Bahdanau et al. , 2014 ) , the attention model became a prevalent concept in deep learning and has led to improved performance in various tasks in natural language processing and computer vision . ( Vaswani et al. , 2017 ) exploits self-attention in an encoder-decoder architecture to draw dependencies between input and output sentences . ( Wang et al. , 2017 ) builds a very deep network with attention modules which generate attention-aware features for image classification , and ( Oktay et al. , 2018 ) integrates attention gates into U-Net ( Ronneberger et al. , 2015 ) to highlight latent channels informative for the segmentation task . When it comes to ECG , ( Hong et al. , 2019 ) proposes a multilevel knowledge guided attention network to discriminate Atrial Fibrillation ( AF ) patients , making the learned models explainable at the beat level , rhythm level , and frequency level , which is highly related to our work . Our method and theirs , however , are quite different in the way attention weights are derived and applied , as well as in the output of the attention network . First , in that work , the attention weights are obtained from the outputs of hidden layers , while ours are directly from the input . 
Second , domain knowledge about AF is needed to help the attention extraction , so the process is weakly supervised , while ours does not use any external information and is fully unsupervised . Third , their attention weights are applied to latent features , while ours are applied to the input for better interpretability . Finally , in that work , the input ECG segment is divided into equal-length segments in advance and the attention network output only indicates which segment contains the target arrhythmia . The quality highly depends on how the segment is divided , and it does not provide the exact locations of abnormal beats . On the other hand , our method directly locates the abnormal beats on the entire input ECG , offering potentially better interpretability and robustness . 3 METHOD . 3.1 OVERVIEW OF THE FRAMEWORK . Here we briefly introduce the workflow of our joint learning framework for supervised classification and unsupervised segmentation . Firstly , in this work , we choose to model the input signal as a one-dimensional signal D ∈ R^{M×N} , where M is the number of leads and N is the length of the input ECG segment ( number of samples over time ) . We then use a one-dimensional ( 1D ) fully convolutional network called the segmenter S to output a feature map L = S ( D ) ∈ R^{M×N} . After that , we apply a pooling layer to generate a window-style element-wise attention map A ∈ R^{M×N} , containing weights directly for every sample in the input ECG . The after-attention signal X = A ⊙ D ∈ R^{M×N} , where ⊙ represents element-wise product , is then fed into a multi-layer CNN called the classifier C , in which the outermost fully connected layer gives the prediction of the arrhythmia types . After training , the abnormal areas are highlighted in X , thus achieving the goal of segmenting abnormal beats from normal ones . Moreover , X , which indicates those beats that are highly associated with the differentiation task , also serves as an explanation for C 's decision . 
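The data flow described above (D → L = S(D) → A = pool(L) → X = A ⊙ D → logits = C(X)) can be sketched end to end with toy stand-ins. Neither stand-in is the paper's actual model: the real segmenter S is a UNet and the real classifier C is a multi-layer CNN; here `segmenter` is just an absolute-value saliency map and `classifier` a global-average-pool plus linear layer, so only the wiring is faithful.

```python
import numpy as np

M, N = 12, 500  # leads x samples per ECG segment

def segmenter(D):
    """Stand-in for the UNet segmenter S: |signal| as a crude saliency map L."""
    return np.abs(D)

def max_pool_1d(L, k=25):
    """Sliding-window max over time (window-style attention), padded to length N."""
    pad = np.pad(L, ((0, 0), (k // 2, k - 1 - k // 2)))
    return np.stack([pad[:, m:m + k].max(axis=1) for m in range(L.shape[1])], axis=1)

def classifier(X, W):
    """Stand-in for the CNN classifier C: global average pooling + linear layer."""
    feats = X.mean(axis=1)  # (M,)
    return W @ feats        # class logits

rng = np.random.default_rng(0)
D = rng.normal(size=(M, N))
A = max_pool_1d(segmenter(D))  # window-style attention map
X = A * D                      # X = A (element-wise product) D
W = rng.normal(size=(4, M))    # 4 arrhythmia classes, purely illustrative
logits = classifier(X, W)
```

In the real framework, gradients from the classification loss flow back through X into the segmenter, which is what lets the attention map learn to highlight abnormal beats without segmentation labels.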
The architecture of our framework is illustrated in Fig . 1 . 3.2 SEGMENTER AND CLASSIFIER . In most existing works , the attention map is fused with the deep features in a neural network . However , for our specific purposes of enhancing interpretability of the classification results as well as unsupervised segmentation , the best result would be obtained by directly applying it to the input signal D . In order to generate attention weights of the same length as D , we choose to utilize UNet ( Ronneberger et al. , 2015 ) , a fully convolutional network highlighted by skip connections between different stages . The encoding path extracts features recursively and the decoding path reconstructs the data as instructed by the loss function . Note that the output of S has only 1 channel and we expand it channel-wise so that it matches the channel dimension of the ECG signal and at the same time each channel gets the same attention . The reason is that the 12 leads are measured synchronously and the abnormal beats occur at the same time across all the leads . Both recurrent neural networks ( RNN ) and CNNs are candidate architectures for many arrhythmia classification works . An RNN takes an ECG signal as sequential data and is good at dealing with temporal relationships . A CNN focuses on the recognition of shapes and patterns in ECG and thus is less sensitive to the relative position of abnormal beats with respect to normal ones . Because abnormal beats may occur randomly among normal beats , we decide to use a CNN as the backbone of our classifier . The detailed implementation of C is shown in Fig . 1 ( b ) . 3.3 POOLING FOR WINDOW-STYLE ATTENTION . We do not use the output of the segmenter L as the attention map directly but instead perform pooling with a large kernel size first . This is out of considerations for both interpretability and performance . Regarding interpretability , it is desirable that each abnormal beat is uniformly highlighted , i.e. 
, the attention weights should be almost constant and smooth for all the samples within each abnormal beat . Regarding performance , it is desirable that the attention map A does not distort the shape of abnormal beats after it is applied to the input X . A pooling layer is the easiest way to achieve this goal , functioning as a sliding window over multiple samples in an ECG signal for global information extraction . Max pooling outputs the same value around a local maximum , and average pooling reduces fluctuation by averaging over multiple samples . The kernel size cannot be too large , however , as that may fuse sharp changes from neighboring areas and lead to the loss of local information . Therefore , deciding the proper pooling kernel size is essentially finding a balance between local and global information preservation . Through experiments to be shown in Section 5.3 , we find that setting the kernel size to nearly half the length of a normal beat yields the best balance between performance and interpretability . Padding of zeros on both sides of the segmenter output L is implemented to keep the length of the resulting attention map A after pooling the same as that of the input X . Meanwhile , the polarization of the QRS complex is a critical feature of an ECG signal , while commonly used pooling layers , like max pooling and average pooling , fail to control the sign of the output , leading to differentiation performance degradation . The rectified linear unit ( ReLU ) σ ( l_{c,m} ) = max ( 0 , l_{c,m} ) , where c and m denote the channel number and spatial position in L respectively , is usually used as an activation function to add non-linearity to a neural network for stronger representation ability . In this work , we can apply ReLU on L before pooling so that all the weights in A generated by the following max pooling are positive . Alternatively , we replace average pooling with L2-norm pooling , which takes the L2 norm of the input within the pooling window . 
In that case , ReLU is not needed . The two pooling implementations can be expressed as : $a^{\max}_{c,m} = P_{\mathrm{MAX}}(L)_{c,m} = \max\{\sigma(l_{c,m}), \sigma(l_{c,m+1}), \ldots, \sigma(l_{c,m+k})\}$ ( 1 ) and $a^{L2}_{c,m} = P_{L2}(L)_{c,m} = \sqrt{\sum_{i=m}^{m+k} l^2_{c,i}}$ ( 2 ) , where $a_{c,m}$ is the m-th data point in channel c of A and k is the kernel size of the pooling . | This paper proposes a deep neural network for Premature Ventricular Contraction (PVC) differentiation and segmentation from electrocardiogram (ECG) signals. The network is jointly trained as a segmenter and a classifier in a multitask learning manner. Differentiation is achieved by the classifier, and segmentation is achieved by pooling for window-style attention from the segmenter's output. Quantitative experiments show better performance than baselines on differentiation tasks. Qualitative experiments show the effectiveness of segmentation tasks. | SP:0bc3eb9022a39f1bf9770699468b667c3f09d4d3 |
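The two pooling variants of Eqs. (1) and (2), ReLU-then-max pooling and L2-norm pooling, can be sketched directly in NumPy. Note one assumption: this toy version pads zeros only at the end to match the one-sided window m..m+k written in the equations, whereas the paper pads on both sides of L; the function names are illustrative.

```python
import numpy as np

def relu_max_pool(L, k):
    """Eq. (1): a_{c,m} = max(relu(l_{c,m}), ..., relu(l_{c,m+k})), zero-padded."""
    Lr = np.maximum(L, 0.0)                      # ReLU keeps all attention weights positive
    pad = np.pad(Lr, ((0, 0), (0, k)))
    return np.stack([pad[:, m:m + k + 1].max(axis=1)
                     for m in range(L.shape[1])], axis=1)

def l2_pool(L, k):
    """Eq. (2): a_{c,m} = sqrt(sum_{i=m}^{m+k} l_{c,i}^2); squares are already
    nonnegative, so no ReLU is needed."""
    pad = np.pad(L ** 2, ((0, 0), (0, k)))
    return np.sqrt(np.stack([pad[:, m:m + k + 1].sum(axis=1)
                             for m in range(L.shape[1])], axis=1))

L = np.array([[1.0, -2.0, 3.0, 0.0]])  # one channel, four samples
a_max = relu_max_pool(L, k=1)          # -> [[1, 3, 3, 0]]
a_l2 = l2_pool(L, k=1)                 # -> [[sqrt(5), sqrt(13), 3, 0]]
```

Both outputs are nonnegative and nearly constant over a window, which is exactly the smooth, sign-controlled attention the text argues for.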
Attention Based Joint Learning for Supervised Electrocardiogram Arrhythmia Differentiation with Unsupervised Abnormal Beat Segmentation | 1 INTRODUCTION . Arrhythmia in an electrocardiogram ( ECG ) is a reflection of heart conduction abnormality and occurs randomly among normal beats . Deep learning based methods have demonstrated strong performance in classifying different types of arrhythmia . There are plenty of works on classifying a single beat , involving convolutional neural networks ( CNN ) ( Acharya et al. , 2017b ; Zubair et al. , 2016 ) , long short-term memory ( LSTM ) ( Yildirim , 2018 ) , and generative adversarial networks ( GAN ) ( Golany & Radinsky , 2019 ) . For these methods to work in a clinical setting , however , a good segmenter is needed to accurately extract a single beat from an ECG segment , which may be hard when abnormal beats are present . Alternatively , other works ( Acharya et al. , 2017a ; Hannun et al. , 2019 ) try to directly identify the genres of arrhythmia present in an ECG segment . The limitation of these works is that they work as a black box and fail to provide cardiologists with any clue on how the prediction is made , such as the location of the associated abnormal beats . In terms of ECG segmentation , there are different tasks , such as segmenting ECG records into beats or into the P wave , QRS complex , and T wave . On one hand , some existing works take advantage of signal processing techniques to locate fiducial points of the PQRST complex so that the ECG signals can be divided . For example , the Pan-Tompkins algorithm ( Pan & Tompkins , 1985 ) uses a combination of filters , squaring , and moving window integration to detect the QRS complex . The shortcomings of these methods are that handcrafted selection of filter parameters and thresholds is needed . More importantly , they are unable to distinguish abnormal heartbeats from normal ones . To address these issues , Moskalenko et al . ( 2019 ) ; Oh et al . 
( 2019 ) deploy CNNs for automatic beat segmentation . However , the quality of these methods highly depends on the labels for fiducial points of ECG signals , the annotation process of which can be laborious and sometimes very hard . Besides , due to the high morphological variation of arrhythmia , strong variations exist even between annotations from experienced cardiologists . As such , unsupervised learning based approaches might be a better choice . Inspired by humans ' perception of ECG signals , our proposed framework first locates the abnormal beats in an ECG segment in the form of an attention map and then performs abnormal beat classification by focusing on these beats . Thus , the framework not only differentiates arrhythmia types but also identifies the location of the associated abnormal beats for better interpretability of the result . It is worth noting that , in our workflow , we only make use of annotations for the type of abnormality in each ECG segment , without abnormal beat localization information during training , given the difficulty and tedious effort in obtaining the latter . We validate our methods on two datasets from different sources . The first one contains 508 12-lead ECG records of Premature Ventricular Contraction patients , which are categorized into different classes by the origin of premature contraction ( e.g. , left ventricle ( LV ) or right ventricle ( RV ) ) . For the other dataset , we process signals in the MIT-BIH Arrhythmia dataset into segments of standard length . This dataset includes various types of abnormal beats , and we select 2627 segments with PVC present and 356 segments with Atrial Premature Beat ( APB ) present . 
Experiments on both datasets show quantitative evidence that introducing the segmentation of abnormal beats through an attention map , although unsupervised , can in fact benefit the arrhythmia classification performance as measured by accuracy , sensitivity , specificity , and area under the Receiver Operating Characteristic ( ROC ) curve . At the same time , a grading study by experts qualitatively demonstrates our method 's promising capability to segment abnormal beats among normal ones , which can provide useful insight into the classification result . Our code and dataset , which is the first for the challenging PVC differentiation problem , will be released to the public . 2 RELATED WORKS . Multitask learning There are many works devoted to training one deep learning model for multiple tasks rather than one specific task , like simultaneous segmentation and classification . ( Yang et al. , 2017 ) solves skin lesion segmentation and classification at the same time by utilizing similarities and differences across tasks . In the area of ECG signals , ( Oh et al. , 2019 ) modifies UNet to output the localization of R peaks and arrhythmia predictions simultaneously . What those two works have in common is that different tasks share certain layers in feature extraction . In contrast , our segmenter and classifier are independent models and there is no layer sharing between them . As can be seen in Figure 1 , we use attention maps as a bridge connecting the two models . ( Mehta et al. , 2018 ) segments different types of tissues in breast biopsy images with a UNet and applies a discriminative map , generated by a subbranch of the UNet , to the segmentation result as input to an MLP for diagnosis . However , their segmentation and classification tasks are not trained end-to-end . ( Zhou et al. , 2019 ) proposes a method for collaborative learning of disease grading and lesion segmentation . 
They first perform a traditional semantic segmentation task with a small portion of annotated labels , and then they jointly train the segmenter and classifier for fine-tuning with an attention mechanism , which is applied on the latent features in the classification model , different from our method . Another difference is that for most existing multitask learning works , labels for each task are necessary , i.e. , all tasks are supervised . Our method , on the other hand , only requires the labels of one task ( classification ) , leading to a joint supervised/unsupervised scheme . Attention mechanism After being first proposed for machine translation ( Bahdanau et al. , 2014 ) , the attention model became a prevalent concept in deep learning and has led to improved performance in various tasks in natural language processing and computer vision . ( Vaswani et al. , 2017 ) exploits self-attention in an encoder-decoder architecture to draw dependencies between input and output sentences . ( Wang et al. , 2017 ) builds a very deep network with attention modules which generate attention-aware features for image classification , and ( Oktay et al. , 2018 ) integrates attention gates into U-Net ( Ronneberger et al. , 2015 ) to highlight latent channels informative for the segmentation task . When it comes to ECG , ( Hong et al. , 2019 ) proposes a multilevel knowledge guided attention network to discriminate Atrial Fibrillation ( AF ) patients , making the learned models explainable at the beat level , rhythm level , and frequency level , which is highly related to our work . Our method and theirs , however , are quite different in the way attention weights are derived and applied , as well as in the output of the attention network . First , in that work , the attention weights are obtained from the outputs of hidden layers , while ours are directly from the input . 
Second , domain knowledge about AF is needed to help the attention extraction , so the process is weakly supervised , while ours does not use any external information and is fully unsupervised . Third , their attention weights are applied to latent features , while ours are applied to the input for better interpretability . Finally , in that work , the input ECG segment is divided into equal-length segments in advance and the attention network output only indicates which segment contains the target arrhythmia . The quality highly depends on how the segment is divided , and it does not provide the exact locations of abnormal beats . On the other hand , our method directly locates the abnormal beats on the entire input ECG , offering potentially better interpretability and robustness . 3 METHOD . 3.1 OVERVIEW OF THE FRAMEWORK . Here we briefly introduce the workflow of our joint learning framework for supervised classification and unsupervised segmentation . Firstly , in this work , we choose to model the input signal as a one-dimensional signal D ∈ R^{M×N} , where M is the number of leads and N is the length of the input ECG segment ( number of samples over time ) . We then use a one-dimensional ( 1D ) fully convolutional network called the segmenter S to output a feature map L = S ( D ) ∈ R^{M×N} . After that , we apply a pooling layer to generate a window-style element-wise attention map A ∈ R^{M×N} , containing weights directly for every sample in the input ECG . The after-attention signal X = A ⊙ D ∈ R^{M×N} , where ⊙ represents element-wise product , is then fed into a multi-layer CNN called the classifier C , in which the outermost fully connected layer gives the prediction of the arrhythmia types . After training , the abnormal areas are highlighted in X , thus achieving the goal of segmenting abnormal beats from normal ones . Moreover , X , which indicates those beats that are highly associated with the differentiation task , also serves as an explanation for C 's decision . 
The architecture of our framework is illustrated in Fig . 1 . 3.2 SEGMENTER AND CLASSIFIER . In most existing works , the attention map is fused with the deep features in a neural network . However , for our specific purposes of enhancing interpretability of the classification results as well as unsupervised segmentation , the best result would be obtained by directly applying it to the input signal D . In order to generate attention weights of the same length as D , we choose to utilize UNet ( Ronneberger et al. , 2015 ) , a fully convolutional network highlighted by skip connections between different stages . The encoding path extracts features recursively and the decoding path reconstructs the data as instructed by the loss function . Note that the output of S has only 1 channel and we expand it channel-wise so that it matches the channel dimension of the ECG signal and at the same time each channel gets the same attention . The reason is that the 12 leads are measured synchronously and the abnormal beats occur at the same time across all the leads . Both recurrent neural networks ( RNN ) and CNNs are candidate architectures for many arrhythmia classification works . An RNN takes an ECG signal as sequential data and is good at dealing with temporal relationships . A CNN focuses on the recognition of shapes and patterns in ECG and thus is less sensitive to the relative position of abnormal beats with respect to normal ones . Because abnormal beats may occur randomly among normal beats , we decide to use a CNN as the backbone of our classifier . The detailed implementation of C is shown in Fig . 1 ( b ) . 3.3 POOLING FOR WINDOW-STYLE ATTENTION . We do not use the output of the segmenter L as the attention map directly but instead perform pooling with a large kernel size first . This is out of considerations for both interpretability and performance . Regarding interpretability , it is desirable that each abnormal beat is uniformly highlighted , i.e. 
, the attention weights should be almost constant and smooth for all the samples within each abnormal beat . Regarding performance , it is desirable that the attention map A does not distort the shape of abnormal beats after it is applied to the input X . A pooling layer is the easiest way to achieve this goal , functioning as a sliding window over multiple samples in an ECG signal for global information extraction . Max pooling outputs the same value around a local maximum , and average pooling reduces fluctuation by averaging over multiple samples . The kernel size cannot be too large , however , as that may fuse sharp changes from neighboring areas and lead to the loss of local information . Therefore , deciding the proper pooling kernel size is essentially finding a balance between local and global information preservation . Through experiments to be shown in Section 5.3 , we find that setting the kernel size to nearly half the length of a normal beat yields the best balance between performance and interpretability . Padding of zeros on both sides of the segmenter output L is implemented to keep the length of the resulting attention map A after pooling the same as that of the input X . Meanwhile , the polarization of the QRS complex is a critical feature of an ECG signal , while commonly used pooling layers , like max pooling and average pooling , fail to control the sign of the output , leading to differentiation performance degradation . The rectified linear unit ( ReLU ) σ ( l_{c,m} ) = max ( 0 , l_{c,m} ) , where c and m denote the channel number and spatial position in L respectively , is usually used as an activation function to add non-linearity to a neural network for stronger representation ability . In this work , we can apply ReLU on L before pooling so that all the weights in A generated by the following max pooling are positive . Alternatively , we replace average pooling with L2-norm pooling , which takes the L2 norm of the input within the pooling window . 
In that case , ReLU is not needed . The two pooling implementations can be expressed as : $a^{\max}_{c,m} = P_{\mathrm{MAX}}(L)_{c,m} = \max\{\sigma(l_{c,m}), \sigma(l_{c,m+1}), \ldots, \sigma(l_{c,m+k})\}$ ( 1 ) and $a^{L2}_{c,m} = P_{L2}(L)_{c,m} = \sqrt{\sum_{i=m}^{m+k} l^2_{c,i}}$ ( 2 ) , where $a_{c,m}$ is the m-th data point in channel c of A and k is the kernel size of the pooling . | The paper proposes a deep neural network for Premature Ventricular Contraction (PVC) differentiation and segmentation from electrocardiogram (ECG) signals. The proposed approach performs segmentation and classification of the ECG signal. The segmenter performs segmentation of the signal (also called attention map) even though the term segmentation is not quite correct. This attention-modulated signal is then classified to identify the origin of Premature Ventricular Contraction (PVC). The proposed approach is evaluated on a dataset from a single machine consisting of 508 segments (I am not sure what “segments” means in this context). The results seem ok, but it is not clear to me what level of performance is required in order to achieve a similar level of performance as an expert. | SP:0bc3eb9022a39f1bf9770699468b667c3f09d4d3 |
Attention Based Joint Learning for Supervised Electrocardiogram Arrhythmia Differentiation with Unsupervised Abnormal Beat Segmentation | 1 INTRODUCTION . Arrhythmia in an electrocardiogram ( ECG ) is a reflection of heart conduction abnormality and occurs randomly among normal beats . Deep learning based methods have demonstrated strong performance in classifying different types of arrhythmia . There are plenty of works on classifying a single beat , involving convolutional neural networks ( CNN ) ( Acharya et al. , 2017b ; Zubair et al. , 2016 ) , long short-term memory ( LSTM ) ( Yildirim , 2018 ) , and generative adversarial networks ( GAN ) ( Golany & Radinsky , 2019 ) . For these methods to work in a clinical setting , however , a good segmenter is needed to accurately extract a single beat from an ECG segment , which may be hard when abnormal beats are present . Alternatively , other works ( Acharya et al. , 2017a ; Hannun et al. , 2019 ) try to directly identify the genres of arrhythmia present in an ECG segment . The limitation of these works is that they work as a black box and fail to provide cardiologists with any clue on how the prediction is made , such as the location of the associated abnormal beats . In terms of ECG segmentation , there are different tasks , such as segmenting ECG records into beats or into the P wave , QRS complex , and T wave . On one hand , some existing works take advantage of signal processing techniques to locate fiducial points of the PQRST complex so that the ECG signals can be divided . For example , the Pan-Tompkins algorithm ( Pan & Tompkins , 1985 ) uses a combination of filters , squaring , and moving window integration to detect the QRS complex . The shortcomings of these methods are that handcrafted selection of filter parameters and thresholds is needed . More importantly , they are unable to distinguish abnormal heartbeats from normal ones . To address these issues , Moskalenko et al . ( 2019 ) ; Oh et al . 
( 2019 ) deploy CNNs for automatic beat segmentation . However , the quality of these methods highly depends on the labels for fiducial points of ECG signals , the annotation process of which can be laborious and sometimes very hard . Besides , due to the high morphological variation of arrhythmia , strong variations exist even between annotations from experienced cardiologists . As such , unsupervised learning based approaches might be a better choice . Inspired by humans ' perception of ECG signals , our proposed framework first locates the abnormal beats in an ECG segment in the form of an attention map and then performs abnormal beat classification by focusing on these beats . Thus , the framework not only differentiates arrhythmia types but also identifies the location of the associated abnormal beats for better interpretability of the result . It is worth noting that , in our workflow , we only make use of annotations for the type of abnormality in each ECG segment , without abnormal beat localization information during training , given the difficulty and tedious effort in obtaining the latter . We validate our methods on two datasets from different sources . The first one contains 508 12-lead ECG records of Premature Ventricular Contraction patients , which are categorized into different classes by the origin of premature contraction ( e.g. , left ventricle ( LV ) or right ventricle ( RV ) ) . For the other dataset , we process signals in the MIT-BIH Arrhythmia dataset into segments of standard length . This dataset includes various types of abnormal beats , and we select 2627 segments with PVC present and 356 segments with Atrial Premature Beat ( APB ) present . 
Experiments on both datasets show quantitative evidence that introducing the segmentation of abnormal beats through an attention map, although unsupervised, can in fact benefit arrhythmia classification performance as measured by accuracy, sensitivity, specificity, and area under the Receiver Operating Characteristic (ROC) curve. At the same time, a grading study by experts qualitatively demonstrates our method's promising capability to segment abnormal beats among normal ones, which can provide useful insight into the classification result. Our code and dataset, which is the first for the challenging PVC differentiation problem, will be released to the public. 2 RELATED WORKS. Multitask learning Many works are devoted to training one deep learning model for multiple tasks rather than one specific task, such as simultaneous segmentation and classification. Yang et al. (2017) solve skin lesion segmentation and classification at the same time by utilizing similarities and differences across tasks. In the area of ECG signals, Oh et al. (2019) modify UNet to output the localization of R peaks and the arrhythmia prediction simultaneously. What those two works have in common is that different tasks share certain layers in feature extraction. In contrast, our segmenter and classifier are independent models, and there is no layer sharing between them. As can be seen in Figure 1, we use attention maps as a bridge connecting the two models. Mehta et al. (2018) segment different types of tissue in breast biopsy images with a UNet and apply a discriminative map, generated by a sub-branch of the UNet, to the segmentation result as input to an MLP for diagnosis. However, their segmentation and classification tasks are not trained end-to-end. Zhou et al. (2019) propose a method for collaborative learning of disease grading and lesion segmentation.
They first perform a traditional semantic segmentation task with a small portion of annotated labels, and then jointly train the segmenter and classifier for fine-tuning with an attention mechanism, which is applied to the latent features in the classification model, different from our method. Another difference is that for most existing multitask learning works, labels for each task are necessary, i.e., all tasks are supervised. Our method, on the other hand, only requires the labels of one task (classification), leading to a joint supervised/unsupervised scheme. Attention mechanism After being first proposed for machine translation (Bahdanau et al., 2014), the attention model became a prevalent concept in deep learning and has led to improved performance in various tasks in natural language processing and computer vision. Vaswani et al. (2017) exploit self-attention in their encoder-decoder architecture to draw dependencies between input and output sentences. Wang et al. (2017) build a very deep network with attention modules that generate attention-aware features for image classification, and Oktay et al. (2018) integrate attention gates into U-Net (Ronneberger et al., 2015) to highlight latent channels informative for the segmentation task. When it comes to ECG, Hong et al. (2019) propose a multilevel knowledge guided attention network to discriminate Atrial Fibrillation (AF) patients, making the learned models explainable at the beat level, rhythm level, and frequency level, which is highly related to our work. Our method and theirs, however, are quite different in the way attention weights are derived and applied, as well as in the output of the attention network. First, in that work, the attention weights are obtained from the outputs of hidden layers, while ours are derived directly from the input.
Second, domain knowledge about AF is needed to help the attention extraction, so the process is weakly supervised, while ours does not use any external information and is fully unsupervised. Third, their attention weights are applied to latent features, while ours are applied to the input for better interpretability. Finally, in that work, the input ECG segment is divided into equal-length segments in advance, and the attention network output only indicates which segment contains the target arrhythmia. The quality highly depends on how the segment is divided, and it does not provide the exact locations of abnormal beats. On the other hand, our method directly locates the abnormal beats on the entire input ECG, offering potentially better interpretability and robustness. 3 METHOD. 3.1 OVERVIEW OF THE FRAMEWORK. Here we briefly introduce the workflow of our joint learning framework for supervised classification and unsupervised segmentation. First, we model the input signal as a one-dimensional signal $D \in \mathbb{R}^{M \times N}$, where $M$ is the number of leads and $N$ is the length of the input ECG segment (number of samples over time). We then use a one-dimensional (1D) fully convolutional network called the segmenter $S$ to output a feature map $L = S(D) \in \mathbb{R}^{M \times N}$. After that, we apply a pooling layer to generate window-style element-wise attention $A \in \mathbb{R}^{M \times N}$, containing weights directly for every sample in the input ECG. The after-attention signal $X = A \odot D \in \mathbb{R}^{M \times N}$, where $\odot$ denotes element-wise multiplication, is then fed into a multi-layer CNN called the classifier $C$, in which the outermost fully connected layer gives the prediction of the arrhythmia types. After training, the abnormal areas are highlighted in $X$, thus achieving the goal of segmenting abnormal beats from normal ones. Moreover, $X$, which indicates those beats that are highly associated with the differentiation task, also serves as an explanation for $C$'s decision.
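The overall pipeline (segmenter output, ReLU-plus-pooling attention, element-wise gating of the input, then a classifier) can be sketched in a few lines of NumPy. Everything here is a stand-in for illustration: the random "segmenter output", the kernel size of 50, and the helper `max_pool_1d` are assumptions, not the paper's actual UNet/CNN implementation.

```python
import numpy as np

# Sketch of the attention pipeline, assuming a 12-lead x 1000-sample segment.
M, N = 12, 1000
rng = np.random.default_rng(0)

D = rng.standard_normal((M, N))      # input ECG segment D
L_map = rng.standard_normal((M, N))  # placeholder for segmenter output S(D)

def max_pool_1d(x, k):
    """Sliding-window max with zero padding so the output keeps the input length."""
    pad = np.pad(x, ((0, 0), (0, k - 1)))
    return np.stack([pad[:, i:i + k].max(axis=1) for i in range(x.shape[1])], axis=1)

A = max_pool_1d(np.maximum(L_map, 0.0), k=50)  # ReLU then pooling -> attention A
X = A * D                                      # after-attention signal X = A ⊙ D
```

The gating `A * D` is what makes the explanation readable: samples with near-zero attention are suppressed before the classifier ever sees them, so whatever survives in `X` is what the classifier's decision rests on.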
The architecture of our framework is illustrated in Fig. 1. 3.2 SEGMENTER AND CLASSIFIER. In most existing works, the attention map is fused with the deep features in a neural network. However, for our specific purposes of enhancing interpretability of the classification results as well as unsupervised segmentation, the best result is obtained by directly applying it to the input signal $D$. In order to generate attention weights of the same length as $D$, we choose to utilize UNet (Ronneberger et al., 2015), a fully convolutional network characterized by skip connections between different stages. The encoding path extracts features recursively and the decoding path reconstructs the data as instructed by the loss function. Note that the output of $S$ has only 1 channel, and we expand it channel-wise so that it matches the channel dimension of the ECG signal and at the same time each channel gets the same attention. The reason is that the 12 leads are measured synchronously and the abnormal beats occur at the same time across all the leads. Both recurrent neural networks (RNN) and CNNs are candidate architectures for many arrhythmia classification works. An RNN treats an ECG signal as sequential data and is good at dealing with temporal relationships. A CNN focuses on the recognition of shapes and patterns in the ECG, and is thus less sensitive to the relative position of abnormal beats with respect to normal ones. Because abnormal beats may occur randomly among normal beats, we decide to use a CNN as the backbone of our classifier. The detailed implementation of $C$ is shown in Figure 1(b). 3.3 POOLING FOR WINDOW-STYLE ATTENTION. We do not use the output of the segmenter $L$ as the attention map directly but instead perform pooling with a large kernel size first. This is out of consideration for both interpretability and performance. Regarding interpretability, it is desirable that each abnormal beat is uniformly highlighted, i.e.
, the attention weights should be almost constant and smooth for all the samples within each abnormal beat. Regarding performance, it is desirable that the attention map $A$ does not distort the shape of abnormal beats after it is applied to produce the classifier input $X$. A pooling layer is the easiest way to achieve this goal, functioning as a sliding window over multiple samples in an ECG signal for global information extraction. Max pooling outputs the same value around a local maximum, and average pooling reduces fluctuation by averaging over multiple samples. The kernel size cannot be too large, as this may fuse sharp changes from neighboring areas and lead to the loss of local information. Therefore, deciding the proper pooling kernel size is essentially finding a balance between local and global information preservation. Through experiments to be shown in Section 5.3, we find that setting the kernel size to nearly half the length of a normal beat yields the best balance between performance and interpretability. Zero padding on both sides of the segmenter output $L$ is applied to keep the length of the resulting attention map $A$ after pooling the same as that of the input $X$. Meanwhile, the polarization of the QRS complex is a critical feature of an ECG signal, while the commonly used pooling layers, like max pooling and average pooling, fail to control the sign of the output, leading to differentiation performance degradation. The rectified linear unit (ReLU), $\sigma(l_{c,m}) = \max(0, l_{c,m})$, where $c$ and $m$ denote the channel number and spatial position in $L$ respectively, is usually used as an activation function to add non-linearity to a neural network for stronger representation ability. In this work, we can apply ReLU on $L$ before pooling so that all the weights in $A$ generated by the following max pooling are positive. Alternatively, we replace average pooling with L2-norm pooling, which computes the L2 norm (the square root of the sum of squares) of the input within the pooling window.
In that case, ReLU is not needed. The two pooling implementations can be expressed as: $$a^{\max}_{c,m} = P_{MAX}(L)_{c,m} = \max\{\sigma(l_{c,m}), \sigma(l_{c,m+1}), \dots, \sigma(l_{c,m+k})\} \quad (1)$$ $$a^{L2}_{c,m} = P_{L2}(L)_{c,m} = \sqrt{\sum_{i=m}^{m+k} l_{c,i}^2} \quad (2)$$ where $a_{c,m}$ is the $m$-th data point in channel $c$ of $A$ and $k$ is the kernel size for the pooling. | This manuscript contributes a neural architecture to classify arrhythmia type from ECG data. The signal is treated as 1D, and the architecture performs joint segmentation-classification, detecting the abnormal beats and then classifying them as a function of their origin. It uses U-nets for segmentation and, for classification, a CNN and one fully-connected layer. The UNet segmentation generates weights that are considered as an attention map and multiplied with the original time series after pooling on a window (which amounts to smoothing). | SP:0bc3eb9022a39f1bf9770699468b667c3f09d4d3 |
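The two window-style pooling variants in Eqs. (1) and (2) above can be sketched directly in NumPy. This is an illustrative re-implementation under the stated convention (a window of $k+1$ samples from $m$ to $m+k$, right zero padding), not the authors' released code.

```python
import numpy as np

def relu_max_pool(L, k):
    """Eq. (1): a_max[c, m] = max{ relu(l[c, m]), ..., relu(l[c, m + k]) }."""
    pad = np.pad(np.maximum(L, 0.0), ((0, 0), (0, k)))  # zero-pad on the right
    return np.stack([pad[:, m:m + k + 1].max(axis=1)
                     for m in range(L.shape[1])], axis=1)

def l2_pool(L, k):
    """Eq. (2): a_L2[c, m] = sqrt( sum_{i=m}^{m+k} l[c, i]^2 ); no ReLU needed."""
    pad = np.pad(L, ((0, 0), (0, k)))
    return np.stack([np.sqrt((pad[:, m:m + k + 1] ** 2).sum(axis=1))
                     for m in range(L.shape[1])], axis=1)
```

Both variants produce non-negative attention weights of the same length as their input, which is exactly the property the ReLU/L2 discussion above is after.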
Analyzing the Expressive Power of Graph Neural Networks in a Spectral Perspective | 1 INTRODUCTION. Over the last five years, many Graph Neural Networks (GNNs) have been proposed in the literature of geometric deep learning (Veličković et al., 2018; Gilmer et al., 2017; Bronstein et al., 2017; Battaglia et al., 2018), in order to generalize the very efficient deep learning paradigm to the world of graphs. This large number of contributions explains a new challenge recently tackled by the community, which consists in assessing the expressive power of GNNs. In this area of research, there is a consensus to evaluate the theoretical expressive power of GNNs according to equivalence with a Weisfeiler-Lehman (WL) test order (Morris et al., 2019; Xu et al., 2019; Maron et al., 2019b;a). Hence, GNN models are frequently classified as "as powerful as 1-WL", "as powerful as 2-WL", ..., "as powerful as k-WL". However, this perspective cannot distinguish between two methods if they are as powerful as the same WL test order. Moreover, it does not always explain the success or failure of a given GNN on common benchmark datasets. In this paper, we claim that analyzing GNNs theoretically and experimentally from a spectral point of view can bring a new perspective on their expressive power. So far, GNNs have generally been studied separately as spectral based or as spatial based (Wu et al., 2019b; Chami et al., 2020). To the best of our knowledge, Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017) and GraphNets (Battaglia et al., 2018) are the only attempts to merge both approaches in the same framework. However, these models are not able to generalize custom designed spectral filters, nor the effect of each convolution support in a multi-convolution case. The spatial-spectral connection is also mentioned indirectly in several cornerstone studies by Defferrard et al.
(2016); Kipf & Welling (2017); Levie et al. (2019). Since the spectral-spatial interchangeability was missing, they did not show the spectral behavior of arbitrary graph convolutions. Recent studies have also attempted to show, for a limited number of spatial GNNs, that they act as low-pass filters (NT & Maehara, 2019; Wu et al., 2019a). NT & Maehara (2019) concluded that using the adjacency matrix induces low-pass effects, while Wu et al. (2019a) studied a single spatial GNN's spectral behavior under the assumption that adding self-connections changes the given topology of the graph. In this paper, we bridge the gap between the spectral and spatial domains for GNNs. Our first contribution consists in demonstrating the equivalence of convolution processes regardless of whether they are defined as spatial or spectral GNNs. Using this connection, we propose a new general framework and taxonomy for GNNs as the second contribution. Taking advantage of this equivalence, our third contribution is to provide a spectral analysis of any GNN model. This spectral analysis is another perspective for the analysis of the expressive power of GNNs. Our theoretical spectral analysis is confirmed by experiments on various well-known graph datasets. Furthermore, we show the necessity of high-pass and/or band-pass filters in our experiments, while the majority of GNNs are limited to only low-pass filters and thus inevitably fail when dealing with these problems. The code used in this paper is available at https://github.com/balcilar/gnn-spectral-expressive-power. The remainder of this paper is organized as follows. Section 2 introduces convolutional GNNs and presents existing approaches. In Sections 3 and 4, we describe the main contributions mentioned above. Section 5 presents a series of experiments and results which validate our propositions. Finally, Section 6 concludes this paper. 2 PROBLEM STATEMENT AND STATE OF THE ART.
Let $G$ be a graph with $n$ nodes and an arbitrary number of edges. Connectivity is given by the adjacency matrix $A \in \{0,1\}^{n \times n}$ and features are defined on nodes by $X \in \mathbb{R}^{n \times f_0}$, with $f_0$ the length of the feature vectors. For any matrix $X$, we use $X_i$, $X_{:j}$ and $X_{i,j}$ to refer to its $i$-th column vector, $j$-th row vector and the scalar value at location $(i,j)$, respectively. A graph Laplacian is $L = D - A$ (or $L = I - D^{-1/2} A D^{-1/2}$), where $D \in \mathbb{R}^{n \times n}$ is the diagonal degree matrix and $I$ is the identity. Through eigendecomposition, $L$ can be written as $L = U \mathrm{diag}(\lambda) U^\top$, where each column of $U \in \mathbb{R}^{n \times n}$ is an eigenvector of $L$, $\lambda \in \mathbb{R}^n$ gathers the eigenvalues of $L$, and the $\mathrm{diag}(\cdot)$ function creates a diagonal matrix whose diagonal elements come from the given vector. We use superscripts to distinguish variables of the same kind. For instance, $H^{(l)} \in \mathbb{R}^{n \times f_l}$ refers to the node representation at layer $l$, whose feature dimension is $f_l$. A graph convolution layer takes the node representation of the previous layer $H^{(l-1)}$ as input and produces a new representation $H^{(l)}$, with $H^{(0)} = X$. 2.1 SPECTRAL APPROACHES. Spectral GNNs rely on spectral graph theory (Chung, 1997). In this framework, signals on graphs are filtered using the eigendecomposition of the graph Laplacian (Shuman et al., 2013). By transposing the convolution theorem to graphs, spectral filtering in the frequency domain can be defined by $x_{flt} = U \mathrm{diag}(\Phi(\lambda)) U^\top x$, where $\Phi(\cdot)$ is the desired filter function. As a consequence, a graph convolution layer in the spectral domain can be written as a sum of filtered signals followed by an activation function as in (Bruna et al., 2013), namely $$H^{(l+1)}_j = \sigma\left(\sum_{i=1}^{f_l} U \mathrm{diag}(F^{(l,j)}_i) U^\top H^{(l)}_i\right), \quad \text{for } j \in \{1, \dots, f_{l+1}\}. \quad (1)$$ Here, $\sigma$ is the activation function and $F^{(l,j)} \in \mathbb{R}^{n \times f_l}$ holds the corresponding weights to be tuned, as used in (Henaff et al., 2015) for the single-graph problem, known as non-parametric spectral GNN.
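The spectral filtering step $x_{flt} = U\,\mathrm{diag}(\Phi(\lambda))\,U^\top x$ is easy to try on a toy graph. The 4-node path graph and the low-pass response $\Phi(\lambda) = e^{-\lambda}$ below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy graph: a path 0-1-2-3, unnormalized Laplacian L = D - A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Dg = np.diag(A.sum(axis=1))
Lap = Dg - A

# Eigendecomposition L = U diag(lam) U^T (eigh returns ascending eigenvalues).
lam, U = np.linalg.eigh(Lap)

phi = np.exp(-lam)                     # example low-pass filter Phi(lambda)
x = np.array([1.0, -1.0, 1.0, -1.0])   # a high-frequency signal on the nodes
x_flt = U @ np.diag(phi) @ U.T @ x     # filtered signal
```

With $\Phi(\lambda) \equiv 1$ the transform reduces to $U U^\top x = x$, which is a quick sanity check that the forward/inverse graph Fourier pair is consistent.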
A first drawback is the necessity of the Fourier and inverse Fourier transforms through matrix multiplication by $U$ and $U^\top$. Another drawback occurs when generalizing the approach to multi-graph learning problems. Indeed, the $k$-th element of the vector $F^{(l,j)}_i$ weights the contribution of the $k$-th eigenvector to the output. Those weights are not shareable between graphs of different sizes, which means a different length of $F^{(l,j)}_i$ is needed. Moreover, even if the graphs have the same number of nodes, their eigenvalues will differ if their structures differ. To overcome these issues, a few spatially-localized filters have been defined, such as cubic B-spline (Bruna et al., 2013), polynomial and Chebyshev polynomial (Defferrard et al., 2016), and Cayley polynomial parameterizations (Levie et al., 2019). With such approaches, trainable parameters are defined by $F^{(l,j)}_i = B [W^{(l,1)}_{i,j}, \dots, W^{(l,s_e)}_{i,j}]^\top$, where each column of $B \in \mathbb{R}^{n \times s_e}$ is designed as a function of the eigenvalues, namely $B_{k,s} = \Phi_s(\lambda_k)$, where $k = 1, \dots, n$ denotes the eigenvalue index, $s = 1, \dots, s_e$ denotes the filter index, and $s_e$ is the number of desired filters. Here, $W^{(l,s)} \in \mathbb{R}^{f_l \times f_{l+1}}$ is the trainable matrix for the $l$-th layer's $s$-th filter. 2.2 SPATIAL APPROACHES. Spatial GNNs consider an agg operator, which aggregates over the neighborhood nodes, and an upd operator, which updates the concerned node as follows: $$H^{(l+1)}_{:v} = \mathrm{upd}\left(g_0(H^{(l)}_{:v}), \mathrm{agg}(g_1(H^{(l)}_{:u}) : u \in N(v))\right), \quad (2)$$ where $N(v)$ is the set of neighborhood nodes and $g_0, g_1 : \mathbb{R}^{n \times f_l} \to \mathbb{R}^{n \times f_{l+1}}$ are trainable models. The choice of agg, upd, $g_0$, $g_1$, and even $N(v)$, determines the capability of the model. The vanilla GNN (known as GIN-0 in (Xu et al., 2019)) uses the same weights in $g_0$ and $g_1$.
$N(v)$ is the set of nodes connected to $v$, agg is the sum of all connected node values, and $\mathrm{upd}(x,y) := \sigma(x+y)$, where $\sigma$ is an elementwise nonlinearity. GCN makes the same selection but normalizes the features as in (Kipf & Welling, 2017). Hamilton et al. (2017) used separate weights in $g_0$ and $g_1$, which means that two sets of trainable weights are applied to the self feature and the neighbor nodes. Other approaches defined multiple neighborhoods and used a different $g_i$ for each kind of neighborhood. For instance, Duvenaud et al. (2015) defined the neighborhood according to node label and/or degree, and Niepert et al. (2016) reordered the neighbor nodes and applied the same model $g_i$ to neighbors according to their order. These spatial GNNs use a sum or normalized sum over $g_i$ in equation 2. Other methods weight this summation by another trainable parameter, where the weights can be written as a function of node and/or edge features in order to make the convolutions more productive, such as graph attention networks (Veličković et al., 2018), MoNet (Monti et al., 2017), GatedGCN (Bresson & Laurent, 2018) and SplineCNN (Fey et al., 2018). 3 BRIDGING SPATIAL AND SPECTRAL GNNS. In this section, we define a general framework which includes most of the well-known GNN models, including Euclidean convolution and models which use an anisotropic update schema such as in Veličković et al. (2018); Bresson & Laurent (2018).
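One layer of the agg/upd scheme of equation 2 in the vanilla (GIN-0-like) setting can be written in a few lines: $g_0 = g_1$ is a shared linear map, agg is a sum over neighbors, and $\mathrm{upd}(x,y) = \sigma(x+y)$ with ReLU as $\sigma$. Sizes and weights below are toy placeholders for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, f_in, f_out = 4, 3, 5
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency (no self-loops)
H = rng.standard_normal((n, f_in))          # node features H^(l)
W = rng.standard_normal((f_in, f_out))      # shared weights for g0 and g1

self_term = H @ W            # g0 applied to the node itself
neigh_term = A @ (H @ W)     # agg: sum of g1 over the neighbors N(v)
H_next = np.maximum(self_term + neigh_term, 0.0)   # upd(x, y) = relu(x + y)
```

Because the two branches share `W`, the whole layer collapses to `relu((I + A) @ H @ W)`, i.e. a single fixed convolution support `I + A`; this is exactly the kind of rewriting that the unified form in the next section makes systematic.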
When $\mathrm{upd}(x,y) = \sigma(x+y)$, agg is a sum (or weighted sum) of the defined neighborhood nodes' contributions, and $g_i$ applies a linear transformation, one can trivially show that the mentioned spatial GNNs can be generalized as propagation of the node features to the neighboring nodes followed by a feature transformation and an activation function, of the form $$H^{(l+1)} = \sigma\left(\sum_s C^{(s)} H^{(l)} W^{(l,s)}\right), \quad (3)$$ where $C^{(s)} \in \mathbb{R}^{n \times n}$ is the $s$-th convolution support, which defines how the node features are propagated to the neighboring nodes. Within this generalization, GNNs differ from each other by the choice of convolution supports $C^{(s)}$. This formulation generalizes many different kinds of graph convolutions, as well as Euclidean domain convolutions, as can be seen in Appendix A with a detailed schema. Definition 1. A trainable-support graph convolution has a support $C^{(s)}$ with at least one trainable parameter that can be tuned during training. If $C^{(s)}$ has no trainable parameters, i.e., when the supports are pre-designed, it is called a fixed-support graph convolution. In the trainable-support case, the supports can be different in each layer, which is denoted by $C^{(l,s)}$ for the $s$-th support in layer $l$. Formally, we can define a trainable support by: $$(C^{(l,s)})_{v,u} = h_{s,l}\left(H^{(l)}_{:v}, H^{(l)}_{:u}, E^{(l)}_{v,u}, A\right), \quad (4)$$ where $E^{(l)}_{v,u}$ denotes the edge features at layer $l$ from node $v$ to node $u$, if available, and $h(\cdot)$ is any trainable model parametrized by $(s,l)$. Theorem 1. The spectral GNN parameterized with $B$ of entries $B_{i,j} = \Phi_j(\lambda_i)$, defined as $$H^{(l+1)}_j = \sigma\left(\sum_{i=1}^{f_l} U \mathrm{diag}\left(B [W^{(l,1)}_{i,j}, \dots, W^{(l,s_e)}_{i,j}]^\top\right) U^\top H^{(l)}_i\right), \quad (5)$$ is a particular case of the framework in equation 3 with the convolution kernel set to $$C^{(s)} = U \mathrm{diag}(\Phi_s(\lambda)) U^\top. \quad (6)$$ The proof can be found in Appendix B.
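Theorem 1 can be checked numerically on a toy graph: building a support from a frequency response via equation 6 and plugging it into equation 3 reproduces the per-channel spectral filtering of equation 5. The sketch below is a deliberately simplified single-support, pre-activation check with an arbitrary response $\Phi(\lambda) = 1/(1+\lambda)$; it is an illustration of the statement, not the Appendix B proof.

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)     # triangle graph
Lap = np.diag(A.sum(1)) - A                # L = D - A
lam, U = np.linalg.eigh(Lap)

phi = 1.0 / (1.0 + lam)                    # chosen frequency response Phi(lambda)
C = U @ np.diag(phi) @ U.T                 # Eq. (6): spatial support from Phi

H = rng.standard_normal((3, 2))            # node features, f_l = 2
W = rng.standard_normal((2, 2))            # trainable weights W^(l,s)

spatial = C @ H @ W                        # Eq. (3) with one support, pre-activation
spectral = (U @ np.diag(phi) @ U.T @ H) @ W  # Eq. (5) style filtering, pre-activation
```

The two paths are the same linear map applied in a different order, which is the content of the theorem: once the support is spectrally designed, the explicit Fourier transforms in equation 5 are redundant.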
This theorem is general and covers many well-known spectral GNNs, such as non-parametric spectral graph convolution (Henaff et al., 2015), polynomial parameterization (Defferrard et al., 2016), cubic B-spline parameterization (Bruna et al., 2013), CayleyNet (Levie et al., 2019), and also any custom designed graph convolution. From Theorem 1, one can see that spatial and spectral GNNs all work the same way. Therefore, Fourier calculations are not necessary when convolutions are parameterized by $B$. As a consequence of Theorem 1, one can see that the separation of spectral and spatial GNNs is just an interpretation. The only difference is the way the convolution supports are designed: either in the spectral domain or in the spatial one. Definition 2. A spectral-designed graph convolution refers to a convolution whose supports are written as a function of the eigenvalues ($\Phi_s(\lambda)$) and eigenvectors ($U$) of the corresponding graph Laplacian (equation 6). Thus, each convolution support $C^{(s)}$ has the same frequency response $\Phi_s(\lambda)$ over different graphs. A graph convolution outside this definition is called a spatial-designed graph convolution. Corollary 1.1. The frequency profile of any given graph convolution support $C^{(s)}$ can be defined in the spectral domain by $$\Phi_s(\lambda) = \mathrm{diag}^{-1}(U^\top C^{(s)} U), \quad (7)$$ where $\mathrm{diag}^{-1}(\cdot)$ returns the vector made of the diagonal elements of the given matrix. The proof of this corollary is given in Appendix C. This corollary leads to the spectral analysis of any given graph convolution support, including spatial-designed convolutions. Since spatial-designed convolutions do not fit equation 6, $U^\top C^{(s)} U$ is not a diagonal matrix. Therefore, we also compute the full frequency profile as $\Phi_s = U^\top C^{(s)} U$, which includes all eigenvectors' pairwise contributions for spatial-designed convolutions.
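Corollary 1.1 gives a recipe that is easy to run: take any support $C$, project it with the Laplacian eigenvectors, and read the frequency response off the diagonal of $U^\top C U$. The sketch below applies it to the renormalized GCN support $\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ with $\tilde{A} = A + I$ (Kipf & Welling, 2017) on a 3-node path graph, using the eigenvectors of the normalized Laplacian; the graph choice is illustrative.

```python
import numpy as np

# Path graph 0-1-2 and its normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
deg = A.sum(1)
L_norm = np.eye(3) - np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)
lam, U = np.linalg.eigh(L_norm)            # eigenvalues in ascending order

# GCN support (a spatial-designed convolution): D~^{-1/2} (A + I) D~^{-1/2}.
A_t = A + np.eye(3)
d_t = A_t.sum(1)
C = np.diag(d_t ** -0.5) @ A_t @ np.diag(d_t ** -0.5)

full_profile = U.T @ C @ U                 # full frequency profile Phi_s
freq_response = np.diag(full_profile)      # Eq. (7): diag^{-1}(U^T C U)
```

On this graph the diagonal response decreases from the lowest to the highest eigenvalue, which is the low-pass behavior that Section 5 argues limits such supports; the off-diagonal entries of `full_profile` are nonzero, as expected for a spatial-designed support.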
| This paper shows an equivalence framework between Graph Neural Networks (GNNs) defined in the spatial domain (based on local node neighbourhood updates) and the spectral domain (based on filters defined on eigenvalues of the graph Laplacian). Using this framework, the paper derives the spectral equivalent of common spatial GNNs, hence showing these act as low-pass filters in the spectral domain. The paper shows experimentally that on the MNIST superpixel dataset, spatial GNNs show poor results compared to spectral GNNs. | SP:fb7d909ac287383723943fb71bbbb5abb9dee7a1 |
This theorem is general and it covers many well-known spectral GNNs , such as non-parametric spectral graph convolution ( Henaff et al. , 2015 ) , polynomial parameterization ( Defferrard et al. , 2016 ) , cubic B-spline parameterization ( Bruna et al. , 2013 ) , CayleyNet ( Levie et al. , 2019 ) and also any custom designed graph convolution . From Theorem 1 , one can see that the spatial and spectral GNNs work all the same way . Therefore , Fourier calculations are not necessary when convolutions are parameterized by B . As a consequence of Theorem 1 , one can see that the separation of spectral and spatial GNNs is just an interpretation . The only difference is the way convolution supports are designed : either in the spectral domain or in the spatial one . Definition 2 . A Spectral-designed graph convolution refers to a convolution where supports are written as a function of eigenvalues ( Φs ( λ ) ) and eigenvectors ( U ) of the corresponding graph Laplacian ( equation 6 ) . Thus , each convolution supportC ( s ) has the same frequency response Φs ( λ ) over different graphs . Graph convolution out of this definition is called spatial-designed graph convolution . Corollary 1.1 . The frequency profile of any given graph convolution support C ( s ) can be defined in spectral domain by Φs ( λ ) = diag−1 ( U > C ( s ) U ) . ( 7 ) where diag−1 ( . ) returns the vector made of the diagonal elements from the given matrix . The proof of this corollary is given in Appendix C. This corollary leads to the spectral analysis of any given graph convolution support , including spatial-designed convolutions . Since the spatialdesigned convolutions do not fit into equation 6 , U > C ( s ) U is not a diagonal matrix . Therefore , we also compute the full frequency profile by Φs = U > C ( s ) U , which includes all eigenvectors pairwise contributions for spatial-designed convolutions . 
| This work studies the performance of different Graph Neural Networks (GNNs) from a spectral perspective. In particular, it shows that the kernels of all kinds of proposed GNN models can be expressed in a general form with a specific frequency-response definition, which indicates the spectral property (spectrum) of the kernel. Based on this definition, this work empirically studies the band-pass properties of the kernels in different models and demonstrates the importance of this spectral perspective. | SP:fb7d909ac287383723943fb71bbbb5abb9dee7a1 |
Analyzing the Expressive Power of Graph Neural Networks in a Spectral Perspective | 1 INTRODUCTION

Over the last five years, many Graph Neural Networks (GNNs) have been proposed in the geometric deep learning literature (Veličković et al., 2018; Gilmer et al., 2017; Bronstein et al., 2017; Battaglia et al., 2018), in order to generalize the very efficient deep learning paradigm to the world of graphs. This large number of contributions explains a new challenge recently tackled by the community, which consists in assessing the expressive power of GNNs. In this area of research, there is a consensus to evaluate the theoretical expressive power of GNNs according to equivalence with a Weisfeiler-Lehman (WL) test order (Morris et al., 2019; Xu et al., 2019; Maron et al., 2019b;a). Hence, GNN models are frequently classified as "as powerful as 1-WL", "as powerful as 2-WL", ..., "as powerful as k-WL". However, this perspective cannot differentiate between two methods that are as powerful as the same WL test order. Moreover, it does not always explain the success or failure of a given GNN on common benchmark datasets. In this paper, we claim that analyzing GNNs theoretically and experimentally from a spectral point of view can bring a new perspective on their expressive power. So far, GNNs have generally been studied separately, either as spectral-based or as spatial-based (Wu et al., 2019b; Chami et al., 2020). To the best of our knowledge, Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017) and GraphNets (Battaglia et al., 2018) are the only attempts to merge both approaches in the same framework. However, these models cannot generalize custom-designed spectral filters, nor capture the effect of each convolution support in a multi-convolution setting. The spatial-spectral connection is also mentioned indirectly in several cornerstone studies by Defferrard et al. (2016); Kipf & Welling (2017); Levie et al. (2019). Since the spectral-spatial interchangeability was missing, they did not show the spectral behavior of arbitrary graph convolutions. Recent studies have also attempted to show, for a limited number of spatial GNNs, that they act as low-pass filters (NT & Maehara, 2019; Wu et al., 2019a). NT & Maehara (2019) concluded that using the adjacency matrix induces low-pass effects, while Wu et al. (2019a) studied a single spatial GNN's spectral behavior by assuming that adding self-connections changes the given topology of the graph. In this paper, we bridge the gap between the spectral and spatial domains for GNNs. Our first contribution consists in demonstrating the equivalence of convolution processes regardless of whether they are defined as spatial or spectral GNNs. Using this connection, we propose a new general framework and taxonomy for GNNs as the second contribution. Taking advantage of this equivalence, our third contribution is to provide a spectral analysis of any GNN model. This spectral analysis offers another perspective on the expressive power of GNNs. Our theoretical spectral analysis is confirmed by experiments on various well-known graph datasets. Furthermore, our experiments show the necessity of high-pass and/or band-pass filters, while the majority of GNNs are limited to low-pass filters and thus inevitably fail when dealing with these problems. The code used in this paper is available at https://github.com/balcilar/gnn-spectral-expressive-power. The remainder of this paper is organized as follows. Section 2 introduces convolutional GNNs and presents existing approaches. In Sections 3 and 4, we describe the main contributions mentioned above. Section 5 presents a series of experiments and results which validate our propositions. Finally, Section 6 concludes this paper.

2 PROBLEM STATEMENT AND STATE OF THE ART
Let G be a graph with n nodes and an arbitrary number of edges. Connectivity is given by the adjacency matrix $A \in \{0,1\}^{n \times n}$, and features are defined on nodes by $X \in \mathbb{R}^{n \times f_0}$, with $f_0$ the length of the feature vectors. For any matrix $X$, we use $X_i$, $X_{:j}$ and $X_{i,j}$ to refer to its $i$-th column vector, $j$-th row vector and the scalar at location $(i,j)$, respectively. A graph Laplacian is $L = D - A$ (or $L = I - D^{-1/2} A D^{-1/2}$), where $D \in \mathbb{R}^{n \times n}$ is the diagonal degree matrix and $I$ is the identity. Through eigendecomposition, $L$ can be written as $L = U \,\mathrm{diag}(\lambda)\, U^\top$, where each column of $U \in \mathbb{R}^{n \times n}$ is an eigenvector of $L$, $\lambda \in \mathbb{R}^n$ gathers the eigenvalues of $L$, and the $\mathrm{diag}(\cdot)$ function creates a diagonal matrix whose diagonal elements come from the given vector. We use superscripts to distinguish variables of the same kind; for instance, $H^{(l)} \in \mathbb{R}^{n \times f_l}$ refers to the node representation at layer $l$, whose feature dimension is $f_l$. A graph convolution layer takes the node representation of the previous layer $H^{(l-1)}$ as input and produces a new representation $H^{(l)}$, with $H^{(0)} = X$.

2.1 SPECTRAL APPROACHES

Spectral GNNs rely on spectral graph theory (Chung, 1997). In this framework, signals on graphs are filtered using the eigendecomposition of the graph Laplacian (Shuman et al., 2013). By transposing the convolution theorem to graphs, spectral filtering in the frequency domain is defined by $x_{flt} = U \,\mathrm{diag}(\Phi(\lambda))\, U^\top x$, where $\Phi(\cdot)$ is the desired filter function. As a consequence, a graph convolution layer in the spectral domain can be written as a sum of filtered signals followed by an activation function, as in (Bruna et al., 2013), namely

$$H^{(l+1)}_j = \sigma\Big( \sum_{i=1}^{f_l} U \,\mathrm{diag}\big(F^{(l,j)}_i\big)\, U^\top H^{(l)}_i \Big), \quad \text{for } j \in \{1, \dots, f_{l+1}\}. \qquad (1)$$

Here, $\sigma$ is the activation function and $F^{(l,j)} \in \mathbb{R}^{n \times f_l}$ is the corresponding weight vector to be tuned, as used in (Henaff et al., 2015) for the single-graph problem, known as the non-parametric spectral GNN.
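The spectral filtering step $x_{flt} = U \,\mathrm{diag}(\Phi(\lambda))\, U^\top x$ can be sketched numerically. The graph and the filter function below are hypothetical choices for illustration only:

```python
import numpy as np

# Toy graph: a 4-node path. A is the adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))          # diagonal degree matrix
L = D - A                           # combinatorial graph Laplacian

# Eigendecomposition L = U diag(lam) U^T (L is symmetric, so eigh applies).
lam, U = np.linalg.eigh(L)

def spectral_filter(x, phi):
    """Filter a graph signal x with frequency response phi(lambda)."""
    return U @ np.diag(phi(lam)) @ U.T @ x

x = np.array([1.0, 0.0, 0.0, 0.0])  # impulse signal on node 0
# Hypothetical low-pass response: phi(lam) = 1 / (1 + lam)
x_flt = spectral_filter(x, lambda l: 1.0 / (1.0 + l))
```

Note that an all-pass response $\Phi(\lambda) \equiv 1$ recovers the input signal exactly, since $U$ is orthonormal.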
A first drawback is the need for Fourier and inverse Fourier transforms via matrix multiplication by $U$ and $U^\top$. Another drawback occurs when generalizing the approach to multi-graph learning problems. Indeed, the $k$-th element of the vector $F^{(l,j)}_i$ weights the contribution of the $k$-th eigenvector to the output. These weights are not shareable between graphs of different sizes, which means a different length of $F^{(l,j)}_i$ is needed. Moreover, even when graphs have the same number of nodes, their eigenvalues will differ if their structures differ. To overcome these issues, a few spatially-localized filters have been defined, such as cubic B-splines (Bruna et al., 2013), polynomial and Chebyshev polynomials (Defferrard et al., 2016), and the Cayley polynomial parameterization (Levie et al., 2019). With such approaches, the trainable parameters are defined by $F^{(l,j)}_i = B \,[W^{(l,1)}_{i,j}, \dots, W^{(l,s_e)}_{i,j}]^\top$, where each column of $B \in \mathbb{R}^{n \times s_e}$ is designed as a function of the eigenvalues, namely $B_{k,s} = \Phi_s(\lambda_k)$, with $k = 1, \dots, n$ the eigenvalue index, $s = 1, \dots, s_e$ the filter index, and $s_e$ the number of desired filters. Here, $W^{(l,s)} \in \mathbb{R}^{f_l \times f_{l+1}}$ is the trainable matrix for the $s$-th filter of the $l$-th layer.

2.2 SPATIAL APPROACHES

Spatial GNNs consider an agg operator, which aggregates the neighboring nodes, and an upd operator, which updates the concerned node, as follows:

$$H^{(l+1)}_{:v} = \mathrm{upd}\Big( g_0\big(H^{(l)}_{:v}\big),\ \mathrm{agg}\big(g_1(H^{(l)}_{:u}) : u \in N(v)\big) \Big), \qquad (2)$$

where $N(v)$ is the set of neighboring nodes and $g_0, g_1 : \mathbb{R}^{n \times f_l} \to \mathbb{R}^{n \times f_{l+1}}$ are trainable models. The choice of agg, upd, $g_0$, $g_1$, and even $N(v)$, determines the capability of the model. The vanilla GNN (known as GIN-0 in (Xu et al., 2019)) uses the same weights in $g_0$ and $g_1$.
$N(v)$ is the set of nodes connected to $v$, agg is the sum over all connected nodes' values, and $\mathrm{upd}(x, y) := \sigma(x + y)$, where $\sigma$ is an elementwise nonlinearity. GCN makes the same choices but normalizes the features, as in (Kipf & Welling, 2017). Hamilton et al. (2017) used separate weights in $g_0$ and $g_1$, which means that two sets of trainable weights are applied to the self features and to the neighbor nodes. Other approaches define multiple neighborhoods and use a different $g_i$ for each kind of neighborhood. For instance, Duvenaud et al. (2015) defined the neighborhood according to node label and/or degree, while Niepert et al. (2016) reordered the neighbor nodes and applied the same model $g_i$ to neighbors according to their order. These spatial GNNs use a sum or normalized sum over the $g_i$ in Equation 2. Other methods weight this summation by additional trainable parameters, where the weights can be written as a function of node and/or edge features in order to make the convolutions more expressive; examples include graph attention networks (Veličković et al., 2018), MoNet (Monti et al., 2017), GatedGCN (Bresson & Laurent, 2018) and SplineCNN (Fey et al., 2018).

3 BRIDGING SPATIAL AND SPECTRAL GNNS

In this section, we define a general framework that includes most well-known GNN models, covering Euclidean convolution as well as models that use an anisotropic update schema, such as those of Veličković et al. (2018); Bresson & Laurent (2018).
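The vanilla update described above (sum aggregation with $\mathrm{upd}(x,y) = \sigma(x+y)$ and shared weights for self and neighbor contributions) can be sketched as a single layer; the graph, sizes and values below are illustrative only:

```python
import numpy as np

def vanilla_gnn_layer(A, H, W, sigma=np.tanh):
    """One vanilla spatial GNN layer: upd(x, y) = sigma(x + y), with
    x = self contribution H @ W and y = neighbor sum (A @ H) @ W.
    A single shared weight matrix W plays the role of g0 = g1."""
    return sigma(H @ W + A @ H @ W)   # equals sigma((I + A) @ H @ W)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)   # toy 3-node graph
H = rng.standard_normal((3, 4))          # n = 3 nodes, f_l = 4 features
W = rng.standard_normal((4, 2))          # f_{l+1} = 2
H_next = vanilla_gnn_layer(A, H, W)
```

The comment in the code anticipates the next section: summing self and neighbor terms with shared weights collapses into a single propagation matrix $(I + A)$, i.e., one convolution support.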
When $\mathrm{upd}(x, y) = \sigma(x + y)$, agg is a sum (or weighted sum) of the contributions of the defined neighborhood nodes, and the $g_i$ apply linear transformations, one can trivially show that the aforementioned spatial GNNs can be generalized as propagation of the node features to the neighboring nodes, followed by a feature transformation and an activation function, of the form

$$H^{(l+1)} = \sigma\Big( \sum_s C^{(s)} H^{(l)} W^{(l,s)} \Big), \qquad (3)$$

where $C^{(s)} \in \mathbb{R}^{n \times n}$ is the $s$-th convolution support, which defines how the node features are propagated to the neighboring nodes. Within this generalization, GNNs differ from each other by their choice of convolution supports $C^{(s)}$. This formulation generalizes many different kinds of graph convolutions, as well as Euclidean-domain convolutions, as detailed with a schema in Appendix A.

Definition 1. A trainable support is a graph convolution support $C^{(s)}$ with at least one trainable parameter that can be tuned during training. If $C^{(s)}$ has no trainable parameters, i.e., when the supports are pre-designed, it is called a fixed-support graph convolution. In the trainable case, the supports can differ in each layer, denoted $C^{(l,s)}$ for the $s$-th support in layer $l$. Formally, a trainable support is defined by

$$\big(C^{(l,s)}\big)_{v,u} = h_{s,l}\big(H^{(l)}_{:v},\ H^{(l)}_{:u},\ E^{(l)}_{v,u},\ A\big), \qquad (4)$$

where $E^{(l)}_{v,u}$ denotes the edge features at layer $l$ from node $v$ to node $u$, if available, and $h(\cdot)$ is any trainable model parameterized by $(s, l)$.

Theorem 1. The spectral GNN parameterized with $B$ of entries $B_{i,j} = \Phi_j(\lambda_i)$, defined as

$$H^{(l+1)}_j = \sigma\Big( \sum_{i=1}^{f_l} U \,\mathrm{diag}\big(B \,[W^{(l,1)}_{i,j}, \dots, W^{(l,s_e)}_{i,j}]^\top\big)\, U^\top H^{(l)}_i \Big), \qquad (5)$$

is a particular case of the framework in Equation 3 with the convolution support set to

$$C^{(s)} = U \,\mathrm{diag}(\Phi_s(\lambda))\, U^\top. \qquad (6)$$

The proof can be found in Appendix B.
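Theorem 1 can be checked numerically on a toy graph: applying the spectral-designed support $C = U\,\mathrm{diag}(\Phi_s(\lambda))\,U^\top$ in the spatial form of Equation 3 matches the per-channel spectral filtering of Equation 5. A minimal sketch, in which the graph, the response $\Phi_s$ and all sizes are hypothetical:

```python
import numpy as np

# Toy symmetric Laplacian: 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)

phi = lambda l: np.exp(-l)            # hypothetical frequency response Phi_s
C = U @ np.diag(phi(lam)) @ U.T       # Eq. (6): spectral-designed support

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 3))       # node features H^(l), f_l = 3
W = rng.standard_normal((3, 2))       # weights W^(l,s), f_{l+1} = 2

# Spatial form (Eq. 3, one support, identity activation):
out_spatial = C @ H @ W

# Spectral form (Eq. 5 with F_i^(l,j) = phi(lam) * W_{i,j}): filter each
# input channel in the frequency domain, then mix channels with W.
out_spectral = np.zeros((4, 2))
for j in range(2):
    for i in range(3):
        out_spectral[:, j] += W[i, j] * (U @ (phi(lam) * (U.T @ H[:, i])))
```

The two outputs coincide exactly, which is the content of the theorem for a single support.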
This theorem is general: it covers many well-known spectral GNNs, such as the non-parametric spectral graph convolution (Henaff et al., 2015), the polynomial parameterization (Defferrard et al., 2016), the cubic B-spline parameterization (Bruna et al., 2013), CayleyNet (Levie et al., 2019), as well as any custom-designed graph convolution. From Theorem 1, one can see that spatial and spectral GNNs all work in the same way; therefore, Fourier calculations are not necessary when convolutions are parameterized by $B$. As a consequence of Theorem 1, the separation between spectral and spatial GNNs is just an interpretation: the only difference is how the convolution supports are designed, either in the spectral domain or in the spatial one.

Definition 2. A spectral-designed graph convolution refers to a convolution whose supports are written as a function of the eigenvalues ($\Phi_s(\lambda)$) and eigenvectors ($U$) of the corresponding graph Laplacian (Equation 6). Thus, each convolution support $C^{(s)}$ has the same frequency response $\Phi_s(\lambda)$ over different graphs. A graph convolution outside this definition is called a spatial-designed graph convolution.

Corollary 1.1. The frequency profile of any given graph convolution support $C^{(s)}$ can be defined in the spectral domain by

$$\Phi_s(\lambda) = \mathrm{diag}^{-1}\big(U^\top C^{(s)} U\big), \qquad (7)$$

where $\mathrm{diag}^{-1}(\cdot)$ returns the vector of diagonal elements of the given matrix. The proof of this corollary is given in Appendix C. This corollary enables the spectral analysis of any given graph convolution support, including spatial-designed convolutions. Since spatial-designed convolutions do not fit Equation 6, $U^\top C^{(s)} U$ is not a diagonal matrix; therefore, we also compute the full frequency profile $\Phi_s = U^\top C^{(s)} U$, which includes all pairwise eigenvector contributions for spatial-designed convolutions.
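Corollary 1.1 suggests a direct way to read off the frequency profile of any support. As a sketch (the 5-node cycle graph and the GCN-style renormalized-adjacency support are hypothetical choices for illustration), on this regular graph the support reduces to $C = I - \tfrac{2}{3}L$, so the profile works out to $1 - 2\lambda/3$, a low-pass response:

```python
import numpy as np

# Toy graph: 5-node cycle; use the normalized Laplacian L = I - D^(-1/2) A D^(-1/2).
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
Dm12 = np.diag(A.sum(1) ** -0.5)
L = np.eye(n) - Dm12 @ A @ Dm12
lam, U = np.linalg.eigh(L)            # lam is in ascending order

# GCN-style support (Kipf & Welling, 2017): renormalized adjacency.
A_tilde = A + np.eye(n)
Dm12_t = np.diag(A_tilde.sum(1) ** -0.5)
C = Dm12_t @ A_tilde @ Dm12_t

# Corollary 1.1: the frequency profile is the diagonal of U^T C U.
full_profile = U.T @ C @ U
profile = np.diag(full_profile)       # here: 1 - 2*lam/3 (low-pass)
```

Because the cycle is regular, $C$ shares eigenvectors with $L$ and `full_profile` is diagonal; on irregular graphs the off-diagonal entries of the full profile are generally nonzero, which is exactly why the paper also inspects $\Phi_s = U^\top C^{(s)} U$ in full.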
| In this paper, the authors propose a spectral-based analysis method to analyze the modeling abilities of major GNNs. Specifically, they first use the concept of a convolution support to unite the ideas of spatial-based and spectral-based methods. By further identifying the frequency profiles of different models, the authors obtain an overview of which spectrum range different models may cover. The evaluation on regular and irregular graph datasets validates their arguments. In general, the paper brings an interesting perspective in addition to the WL test to reveal the expressive power of GNNs, and both the theory and evaluation sound solid. It would be better if the authors could further address a few issues. | SP:fb7d909ac287383723943fb71bbbb5abb9dee7a1 |
Towards Understanding the Cause of Error in Few-Shot Learning | 1 INTRODUCTION

Learning novel concepts from few samples is one of the most important abilities of the human cognition system (Chen et al. (2018); Dhillon et al. (2019); Wang et al. (2020)). By contrast, the massive achievements of modern artificial intelligence systems depend on large amounts of data and annotation, which are hard to acquire in many scenarios. Blocked by the difficulty of obtaining large labeled datasets, the community shows growing interest in developing algorithms with high data efficiency. This is so-called few-shot learning, which learns to generalize well to new categories with scarce labeled samples (Sung et al. (2018); Vinyals et al. (2016)). Existing methods address few-shot learning in the general framework of meta-learning, where a base learner is developed and optimized across different episodes (or tasks). Episodes are formed in an N-way K-shot fashion, where K support samples per class are available for training. The overall objective is to enable the base learner to exploit the base classes and to transfer the learned knowledge to recognize novel classes with few support data. Since training and evaluation are performed on different tasks, the base learner holds different task-specific classifiers that depend on data sampling. In general, a classification model has two components: a feature extractor and a classifier (Simonyan & Zisserman (2015); He et al. (2016); Zagoruyko & Komodakis (2016)). Most approaches to few-shot learning work from the corresponding perspectives: learning a good embedding and finding the right base learner. Rethinking-FSC (Tian et al. (2020)) demonstrates that a well-learned embedding space can be more effective than many sophisticated meta-learning algorithms, arguing from performance on the meta set, where embeddings are learned in a supervised or self-supervised way. Goldblum et al. (2020) reveal the importance of feature clustering in few-shot learning. Since classifier performance is sample-dependent, especially in the one-shot scenario, the variance of features is expected to be small so as to retain good performance; this shows that classifier performance is not stable across different tasks. MetaOptNet (Lee et al. (2019)) and R2-D2 (Bertinetto et al. (2018)) explore training and optimization routines for linear classifiers, enabling good few-shot performance through simple base learners. These works develop specific algorithms from the perspectives of learning good representations or optimizing the base learner. Most recent methods use a linear classifier as the base learner, so we also consider linear models in this paper. To the best of our knowledge, there has been little research focusing on how the two components (i.e., feature representation and classifier) respectively influence performance on novel classes in FSL. In this paper, we introduce an upper bound on the error rate in few-shot learning, indicating that the error comes from two aspects: 1) linear separability in the embedding space, and 2) classifier discrepancy between task-specific and task-independent classifiers. The ideal classifier is viewed as task-independent, since its performance is not sample-dependent (Goldblum et al. (2020)). To quantitatively estimate each term, we perform an experiment in which the error rate of supervised classification tasks on novel classes measures feature separability, and the disagreement between results obtained from different classifiers measures discrepancy. This leads to an interesting observation: features learned through simple methods are sufficiently discriminative, and the error mainly comes from classifier discrepancy. Based on our observation and theoretical analysis, we propose a simple method for reducing classifier discrepancy so as to boost few-shot performance.
Experiments on three benchmarks are conducted to empirically verify our theory. Results on different datasets with various base learners show consistent improvements, supporting our finding and theory in few-shot learning. The main contributions of this paper are:

1. The upper bound of the error rate on novel classes is theoretically analyzed. From the derived equations, we show that the error in FSL is caused by linear separability in the feature space and by the discrepancy between task-specific and task-independent classifiers.

2. Quantitative experiments are conducted to verify the theoretical analysis. Results show that the error is dominantly caused by classifier discrepancy.

3. Based on the theoretical analysis and the experimental results, a constraint is proposed to reduce classifier discrepancy so as to decrease the upper bound of the error rate in FSL.

4. Further experiments on mini-ImageNet, tiered-ImageNet and CIFAR-FS confirm the effectiveness of the proposed method, showing that decreasing classifier discrepancy consistently achieves improvements in most cases.

2 RELATED WORK

Algorithms of Few-Shot Learning. Prototypical Network (Snell et al. (2017)) is a classical algorithm, valued for its simplicity and effectiveness, which performs few-shot classification by nearest-prototype matching. Since a class prototype is the mean of features, the linear separability of the feature space has a direct impact on classification results. The performance of the subsequent series of prototype-based methods (Allen et al. (2019); Liu et al. (2020)) is also limited by feature separability. Unlike these methods, which use a nearest-neighbor classifier, Bertinetto et al. (2018) adopt ridge regression and logistic regression as base learners. Similarly, Lee et al. (2019) use the classical linear classifier SVM in few-shot learning to learn representations.
Simple linear classifiers show competitive performance, and in this paper we use linear classifiers when measuring linear separability and classifier discrepancy.

Theoretical Analysis of Few-Shot Learning. Cao et al. (2019) introduce a bound on the accuracy of Prototypical Network (Snell et al. (2017)), demonstrating that the intrinsic dimension of the embedding function's output space varies with the number of shots. They further propose a method to overcome the negative impact of mismatched shots between the meta-train and meta-test stages. Liu et al. (2020) give a lower bound on the accuracy of a cosine-similarity-based prototypical network, theoretically formulating two key factors: intra-class bias and cross-class bias. We also analyze theoretical bounds in few-shot learning. The theory in this paper does not focus on a specific algorithm such as Prototypical Network but on general scenarios, from the perspective of feature separability and classifier discrepancy.

Theoretical Analysis of Domain Adaptation. Methods of domain adaptation (Ben-David et al. (2007; 2010); Ganin & Lempitsky (2015)) solve the problem of how to train a classifier on a source domain while guaranteeing that it performs well on a target domain. A classifier's target error is bounded by its source error and the divergence between the two domains (Ben-David et al. (2010)). They utilize the H-divergence and H∆H-divergence to measure the discrepancy between two domains. The H∆H-divergence can be computed from finite unlabeled data, allowing one to directly estimate the error of a source-trained classifier on the target domain. Inspired by their work, we also use the H∆H-divergence to measure the discrepancy between sets of novel classes and base classes.

3 BACKGROUND

3.1 PROBLEM SETUP

The common setup of few-shot learning used in this paper is described below. A space of classes is divided into two parts: base classes $C_{base}$ and novel classes $C_{novel}$, where $C_{base} \cap C_{novel} = \emptyset$.
The dataset $D_{base}$ of base classes is used for model training, and the model is evaluated on the dataset $D_{novel}$, whose samples belong to classes unseen during training. The model is composed of a feature extractor $F$ and a classifier $h$. In few-shot learning, we usually consider $N$-way $K$-shot $Q$-query tasks $T$. In a task $\tau_i = (D^s_i, D^q_i, h)$, the support set $D^s_i$ includes $K$ data points $x \in \mathbb{R}^d$ per class together with their true labels $y \in \{c_1, \dots, c_N\}$. The goal is to predict labels for the query data in $D^q_i$ given $D^s_i$. In this paper, we use the error rate on novel classes to evaluate the few-shot performance of a trained model. The error rate is formulated as:

$$\epsilon_{novel} = \mathbb{E}[\epsilon_\tau] = \frac{1}{M \times Q} \sum_{i=1}^{M} \sum_{j=1}^{Q} \mathbb{1}\big(h(F(x_{i,j})) \neq y_{i,j}\big) \qquad (1)$$

where $M$ is the number of sampled tasks $\tau_i \sim T_{novel}$ and $\mathbb{1}(\cdot)$ is the indicator function.

3.2 DISTRIBUTION DIVERGENCE

We adopt the following concepts to explore the cause of error in few-shot scenarios.

Definition 1. Given a set $D = \{(x_1, y_1), \dots, (x_m, y_m)\}$ where $x_i \in X$ and $y_i \in Y$, for any mappings $h_1, h_2$ on $X$, the disagreement is defined in Equation 2 to measure the difference between the two mappings:

$$\mathrm{dis}(h_1, h_2) = P_{x \sim D_X}\big(h_1(x) \neq h_2(x)\big) \qquad (2)$$

Definition 2. Given a domain $X$ with $D_1$ and $D_2$ probability distributions over $X$, let $H$ be a hypothesis class on $X$ and denote by $I(h)$ the set for which $h \in H$ is the characteristic function; that is, $x \in I(h) \Leftrightarrow h(x) = 1$. The $H$-divergence between $D_1$ and $D_2$ is

$$d_H(D_1, D_2) = 2 \sup_{h \in H} \big|\, \Pr_{D_1}[I(h)] - \Pr_{D_2}[I(h)] \,\big| \qquad (3)$$

Definition 3. For hypotheses $h, h' \in H$, the symmetric difference hypothesis space $H \Delta H$ is the set of hypotheses $g$ such that $g \in H \Delta H \Leftrightarrow g(x) = h(x) \oplus h'(x)$, where $\oplus$ is the XOR function. The $H \Delta H$-divergence between distributions is defined as follows:

$$d_{H \Delta H}(D_1, D_2) = 2 \sup_{h, h' \in H} \big|\, \Pr_{x \sim D_1}[h(x) \neq h'(x)] - \Pr_{x \sim D_2}[h(x) \neq h'(x)] \,\big| \qquad (4)$$

4 PROPOSED METHOD

4.1 MEASURING CLASSIFICATION PERFORMANCE ON NOVEL CLASSES
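Before the measurements of this section, note that the error rate of Equation 1 and the disagreement of Equation 2 reduce to simple empirical averages over query predictions. A minimal sketch (the task predictions and labels below are illustrative):

```python
import numpy as np

def error_rate(pred, true):
    """Empirical error over query samples: mean of 1(h(F(x)) != y)."""
    return float(np.mean(np.asarray(pred) != np.asarray(true)))

def disagreement(pred_a, pred_b):
    """Empirical estimate of dis(h1, h2) in Eq. (2): the fraction of
    points on which the two mappings differ."""
    return float(np.mean(np.asarray(pred_a) != np.asarray(pred_b)))

# Eq. (1): average the per-task error over M sampled tasks. Here M = 2
# tasks with four query predictions each (illustrative labels only).
tasks = [([0, 1, 2, 1], [0, 1, 2, 2]),   # one of four queries wrong -> 0.25
         ([1, 1, 0, 0], [1, 1, 0, 0])]   # all queries correct       -> 0.0
eps_novel = np.mean([error_rate(p, y) for p, y in tasks])  # 0.125
```

Averaging per-task errors equals the pooled $\frac{1}{M \times Q}$ average in Equation 1 whenever every task contributes the same number of query samples.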
A classification model generally consists of two parts: a feature extractor and a classifier. Hence, two expectations are placed on the model: 1) the extracted features are expected to be discriminative for classification (Snell et al. (2017); Lee et al. (2019)); 2) the classifier is expected to be stable across different tasks (Cao et al. (2019); Liu et al. (2020)). Considering linear classifiers in this paper, we investigate linear separability and classifier stability to measure classification performance on novel classes. To quantitatively estimate these two terms, we design experiments using a base model with a ResNet-12 backbone and a fully-connected (FC) classifier. The base model is first trained on the base classes in a supervised way. When testing on novel classes, we replace the FC classifier with ProtoNet (Snell et al. (2017)) and ridge regression (RR), as in Ye et al. (2020). The quantity $\epsilon_{novel}(h^*)$ (i.e., the error rate on $C_{novel}$ with classifier $h^*$) is used to approximate and quantify feature separability on novel classes (Equation 5):

$$\epsilon_{novel}(h^*) = \frac{1}{N \times Q^*} \sum_{i=1}^{N \times Q^*} \mathbb{1}(\hat{y}_i \neq y_i) \qquad (5)$$

In Equation 5, $Q^*$ is the number of samples of each class $c \in \{c_1, \dots, c_N\}$, $\hat{y}_i$ is the predicted label and $y_i$ the true label. $h^*$ is trained in a supervised way on a large set of samples and is used to approximate the expected ideal $N$-way classifier. On the other hand, we use the disagreement defined in Equation 6 to measure classifier discrepancy:

$$\mathrm{dis}(h, h^*) = \frac{1}{N \times Q} \sum_{i=1}^{N \times Q} \mathbb{1}(\hat{y}_i \neq \hat{y}^*_i) \qquad (6)$$

where $Q$ is the number of query samples, $h$ is the task-specific classifier, which varies across tasks because it is determined by the sampled support data, and $h^*$ is task-independent with respect to these $N$ classes. Thus, $\mathrm{dis}(h, h^*)$ indicates the discrepancy between the task-specific classifiers and the ideal classifier. Table 1 shows results on two benchmarks: mini-ImageNet and tiered-ImageNet.
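To illustrate how $\epsilon_{novel}(h^*)$ (Equation 5) and $\mathrm{dis}(h, h^*)$ (Equation 6) could be estimated, here is a toy simulation. A nearest-centroid rule stands in for both classifiers (the paper uses ProtoNet and ridge regression on ResNet-12 features); the Gaussian features, dimensions and seed are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Q, d = 3, 50, 16                         # 3-way tasks, 50 queries/class, 16-d features
means = 1.5 * rng.standard_normal((N, d))   # toy class means in feature space

def sample(cls, m):
    """Draw m feature vectors for class cls (unit-variance Gaussian clusters)."""
    return means[cls] + rng.standard_normal((m, d))

def nearest_centroid(centroids, x):
    """Predict the class of each row of x by its nearest centroid."""
    return np.argmin(((x[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

# Approximate ideal task-independent classifier h*: centroids from many samples.
c_star = np.stack([sample(c, 500).mean(0) for c in range(N)])
# Task-specific classifier h: centroids from a single support shot (1-shot).
c_task = np.stack([sample(c, 1).mean(0) for c in range(N)])

Xq = np.concatenate([sample(c, Q) for c in range(N)])  # query set
yq = np.repeat(np.arange(N), Q)                        # true query labels

y_star = nearest_centroid(c_star, Xq)
y_task = nearest_centroid(c_task, Xq)
eps_novel = np.mean(y_star != yq)      # Eq. (5): separability proxy
dis = np.mean(y_task != y_star)        # Eq. (6): classifier discrepancy
```

Both quantities depend on the cluster separation and the number of shots in this synthetic setting; the paper's Table 1 reports the analogous measurements on real few-shot benchmarks.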
From Table 1, we can see that $\epsilon_{novel}(h^*)$ is generally lower than $\mathrm{dis}(h, h^*)$ by a large margin. For example, the 1-shot $\mathrm{dis}(h, h^*)$ on mini-ImageNet with RR reaches 38.34%, while $\epsilon_{novel}(h^*)$ is 6.72%, roughly five times lower. Furthermore, the clear drop in $\mathrm{dis}(h, h^*)$ from 1-shot to 5-shot indicates a clear reduction of classifier discrepancy as more support samples become available. An interesting conclusion can be drawn from this experiment: in low-data regimes, the error on novel classes is dominantly caused by classifier discrepancy rather than by limited linear separability. More details about this experiment are presented in the appendix. | This paper seeks to understand theoretically the current bottleneck in few-shot learning and address it with a new way of training the embedding. The authors find that the key issue in few-shot learning is not the separability of the novel classes but the discrepancy between classifiers trained on large datasets and few-shot classifiers. They then propose a way to reduce this discrepancy. | SP:68adb3cef88d96379ca9e717817a25d6ba545ef9 |
Towards Understanding the Cause of Error in Few-Shot Learning | 1 INTRODUCTION . Learning novel concepts from few samples is one of the most important abilities of the human cognition system ( Chen et al . ( 2018 ) ; Dhillon et al . ( 2019 ) ; Wang et al . ( 2020 ) ) . By contrast , the massive achievements of modern artificial intelligence systems depend on large amounts of data and annotation , which are hard to acquire in many scenarios . Blocked by the difficulty of obtaining large labeled datasets , the community shows growing interest in developing algorithms with high data efficiency . This is so-called few-shot learning , which learns to generalize well to new categories with scarce labeled samples ( Sung et al . ( 2018 ) ; Vinyals et al . ( 2016 ) ) . Existing methods deal with few-shot learning in the general framework of meta-learning , where a base learner is developed and optimized across different episodes ( or tasks ) . Episodes are formed in an N-way K-shot fashion where K support samples per class are available for training . The overall objective is to enable the base learner to exploit base classes and to transfer learnt knowledge to recognize novel classes with few support data . Since training and evaluation are performed on different tasks , the base learner holds different task-specific classifiers that depend on data sampling . In general , a classification model has two components : a feature extractor and a classifier ( Simonyan & Zisserman ( 2015 ) ; He et al . ( 2016 ) ; Zagoruyko & Komodakis ( 2016 ) ) . Most approaches to few-shot learning work from the corresponding perspectives : learning a good embedding and finding the right base learner . Rethinking-FSC ( Tian et al . ( 2020 ) ) demonstrates that a well-learned embedding space can be more effective than many sophisticated meta-learning algorithms ; it evaluates performance on the meta set where embeddings are learnt in a supervised or self-supervised way . Goldblum et al . 
( 2020 ) reveal the importance of feature clustering in few-shot learning . Since classifier performance is sample-dependent , especially in the one-shot scenario , the variance of features is expected to be small so as to retain good performance ; this shows that classifier performance is not stable across different tasks . MetaOptNet ( Lee et al . ( 2019 ) ) and R2-D2 ( Bertinetto et al . ( 2018 ) ) explore training and optimization routines for linear classifiers , enabling good few-shot performance through a simple base learner . These works develop specific algorithms from the aspects of learning a good representation or optimizing the base learner . Most recent methods use a linear classifier as the base learner , so we also consider linear models in this paper . To the best of our knowledge , there has been little research focusing on how the two components ( i.e. , feature representation and classifier ) respectively influence the performance on novel classes in FSL . In this paper , we introduce an upper bound on the error rate in few-shot learning , indicating that the error comes from two aspects : 1 ) linear separability in the embedding space and 2 ) classifier discrepancy between task-specific and task-independent classifiers . The ideal classifier is viewed as task-independent since its performance is not sample-dependent ( Goldblum et al . ( 2020 ) ) . To quantitatively estimate each term , we perform an experiment in which the error rate of supervised classification tasks on novel classes measures feature separability , and the disagreement between results obtained from different classifiers measures discrepancy . This leads to an interesting observation : features learned through simple methods are sufficiently discriminative , and the error mainly comes from classifier discrepancy . Based on our observation and theoretical analysis , we propose a simple method of reducing classifier discrepancy so as to boost few-shot performance . 
Experiments on three benchmarks are conducted to empirically verify our theory . Results on different datasets with various base learners show consistent improvements , supporting our finding and theory in few-shot learning . The main contributions of this paper are : 1 . The upper bound of the error rate on novel classes is theoretically analyzed . From the derived equations we find that the error in FSL is caused by linear separability in the feature space and by the discrepancy between task-specific and task-independent classifiers . 2 . Quantitative experiments are conducted to verify the theoretical analysis . Results show that the error is dominantly caused by classifier discrepancy . 3 . Based on the theoretical analysis and the experimental results , a constraint is proposed to reduce classifier discrepancy so as to decrease the upper bound of the error rate in FSL . 4 . Further experiments on mini-ImageNet , tiered-ImageNet and CIFAR-FS confirm the effectiveness of the proposed method , showing that decreasing classifier discrepancy consistently achieves improvements in most cases . 2 RELATED WORK . Algorithms of Few-Shot Learning : Prototypical Network ( Snell et al . ( 2017 ) ) is a classic algorithm known for its simplicity and effectiveness , which performs few-shot classification by nearest-prototype matching . Since a class prototype is the mean of features , the linear separability of the feature space has a direct impact on classification results . The performance of the subsequent series of prototype-based methods ( Allen et al . ( 2019 ) ; Liu et al . ( 2020 ) ) is also limited by feature separability . Different from these methods using nearest-neighbor classifiers , Bertinetto et al . ( 2018 ) adopt ridge regression and logistic regression as base learners . Similarly , Lee et al . ( 2019 ) use the classical linear classifier SVM in few-shot learning to learn representations . 
Simple linear classifiers show competitive performance , and in this paper we use a linear classifier for measuring linear separability and classifier discrepancy . Theoretical Analysis of Few-Shot Learning : Cao et al . ( 2019 ) introduce a bound on the accuracy of Prototypical Network ( Snell et al . ( 2017 ) ) , demonstrating that the intrinsic dimension of the embedding function 's output space varies with the number of shots . They further propose a method to overcome the negative impact of mismatched shots between the meta-train and meta-test stages . Liu et al . ( 2020 ) give a lower bound on the accuracy of a cosine-similarity-based prototypical network , theoretically formulating two key factors : intra-class bias and cross-class bias . We also analyze theoretical bounds in few-shot learning . The theory in this paper does not focus on a specific algorithm like Prototypical Network but on general scenarios , from the perspective of feature separability and classifier discrepancy . Theoretical Analysis of Domain Adaptation : Methods of domain adaptation ( Ben-David et al . ( 2007 ; 2010 ) ; Ganin & Lempitsky ( 2015 ) ) solve the problem of how to train a classifier on a source domain while guaranteeing that the classifier performs well on the target domain . A classifier 's target error is bounded by its source error and the divergence between the two domains ( Ben-David et al . ( 2010 ) ) . They utilize the H-divergence and the H∆H-divergence to measure the discrepancy between two domains . The H∆H-divergence can be computed from finite unlabeled data , allowing us to directly estimate the error of a source-trained classifier on the target domain . Inspired by their work , we also use the H∆H-divergence to measure the discrepancy between sets of novel classes and base classes . 3 BACKGROUND . 3.1 PROBLEM SETUP . The common setup of few-shot learning used in this paper is described below . The class space is divided into two parts : base classes $C_{base}$ and novel classes $C_{novel}$ , where $C_{base} \cap C_{novel} = \emptyset$ . 
Dataset $D_{base}$ of base classes is used for model training and the model is evaluated on dataset $D_{novel}$ , whose samples belong to classes unseen during training . The model is composed of a feature extractor $F$ and a classifier $h$ . In few-shot learning , we usually consider $N$-way $K$-shot $Q$-query tasks $T$ . In task $\tau_i = (D^s_i, D^q_i, h)$ , the support set $D^s_i$ includes $K$ data points $x \in \mathbb{R}^d$ per class together with their true labels $y \in \{c_1, ..., c_N\}$ . The goal is to predict labels for query data in $D^q_i$ given $D^s_i$ . In this paper , we use the error rate on novel classes to evaluate the few-shot performance of a trained model . The error rate is formulated as :

$$\epsilon_{novel} = \mathbb{E}[\epsilon_\tau] = \frac{1}{M \times Q} \sum_{i=1}^{M} \sum_{j=1}^{Q} \mathbb{1}\big(h(F(x_{i,j})) \neq y_{i,j}\big) \quad (1)$$

where $M$ is the number of sampled tasks $\tau_i \sim T_{novel}$ and $\mathbb{1}(\cdot)$ is the indicator function . 3.2 DISTRIBUTION DIVERGENCE . We adopt the following concepts to explore the cause of error in few-shot scenarios . Definition 1 : Given a set $D = \{(x_1, y_1), ..., (x_m, y_m)\}$ where $x_i \in X$ and $y_i \in Y$ , for any mappings $h_1, h_2 : X \to Y$ , the disagreement defined in Eqn . 2 measures the difference between these two mappings :

$$dis(h_1, h_2) = P_{x \sim D_X}\big(h_1(x) \neq h_2(x)\big) \quad (2)$$

Definition 2 : Given a domain $X$ with $D_1$ and $D_2$ probability distributions over $X$ , let $H$ be a hypothesis class on $X$ and denote by $I(h)$ the set for which $h \in H$ is the characteristic function ; that is , $x \in I(h) \Leftrightarrow h(x) = 1$ . The $H$-divergence between $D_1$ and $D_2$ is

$$d_H(D_1, D_2) = 2 \sup_{h \in H} \left| \Pr_{D_1}[I(h)] - \Pr_{D_2}[I(h)] \right| \quad (3)$$

Definition 3 : For hypotheses $h, h' \in H$ , the symmetric difference hypothesis space $H \Delta H$ is the set of hypotheses $g$ with $g \in H \Delta H \Leftrightarrow g(x) = h(x) \oplus h'(x)$ , where $\oplus$ is the XOR function . The $H \Delta H$-divergence over distributions is defined as follows :

$$d_{H \Delta H}(D_1, D_2) = 2 \sup_{h, h' \in H} \left| \Pr_{x \sim D_1}[h(x) \neq h'(x)] - \Pr_{x \sim D_2}[h(x) \neq h'(x)] \right| \quad (4)$$

4 PROPOSED METHOD . 4.1 MEASURING CLASSIFICATION PERFORMANCE ON NOVEL CLASSES . 
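The evaluation protocol of Eqn. 1 is a Monte-Carlo estimate: sample M episodes, classify each episode's queries, and average the per-episode error. A minimal sketch is below; `model_fn` and the task-tuple layout are our own simplification of the episodic setup, not the paper's actual interface:

```python
import numpy as np

def few_shot_error(model_fn, tasks):
    """Monte-Carlo estimate of Eqn. 1: mean query error over M sampled tasks.

    model_fn(support_x, support_y, query_x) -> predicted query labels.
    tasks: iterable of (support_x, support_y, query_x, query_y) tuples,
    one per sampled episode tau_i ~ T_novel.
    """
    errors = []
    for sx, sy, qx, qy in tasks:
        pred = model_fn(sx, sy, qx)
        # per-episode error rate: fraction of misclassified query samples
        errors.append(np.mean(np.asarray(pred) != np.asarray(qy)))
    return float(np.mean(errors))
```

In practice papers report the mean over several hundred or thousand sampled episodes together with a 95% confidence interval; the sketch above computes only the mean.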
A classification model generally consists of two parts : a feature extractor and a classifier . Hence , the model holds two expectations : 1 ) extracted features are expected to be discriminative for classification ( Snell et al . ( 2017 ) ; Lee et al . ( 2019 ) ) ; 2 ) the classifier is supposed to be stable across different tasks ( Cao et al . ( 2019 ) ; Liu et al . ( 2020 ) ) . Considering linear classifiers in this paper , we measure classification performance on novel classes from the perspectives of linear separability and classifier stability . To quantitatively estimate these two terms , we design experiments using a base model with a ResNet-12 backbone and an FC ( fully-connected ) layer classifier . The base model is first trained on base classes in a supervised way . When testing on novel classes , we replace the FC layer classifier with ProtoNet ( Snell et al . ( 2017 ) ) and ridge regression ( RR ) as in Ye et al . ( 2020 ) . The error rate $\epsilon_{novel}(h^*)$ ( i.e. , the error rate on $C_{novel}$ with classifier $h^*$ ) is used to approximate and quantify feature separability on novel classes ( Eqn . 5 ) :

$$\epsilon_{novel}(h^*) = \frac{1}{N \times Q^*} \sum_{i=1}^{N \times Q^*} \mathbb{1}\big(\hat{y}_i \neq y_i\big) \quad (5)$$

In Eqn . 5 , $Q^*$ is the number of all samples of each class $c \in \{c_1, ..., c_N\}$ , $\hat{y}_i$ is the predicted label and $y_i$ is the true label . $h^*$ is trained in a supervised way from a large set of samples and is used to approximate the expected ideal $N$-way classifier . On the other hand , we use the disagreement defined in Eqn . 6 to measure classifier discrepancy :

$$dis(h, h^*) = \frac{1}{N \times Q} \sum_{i=1}^{N \times Q} \mathbb{1}\big(\hat{y}_i \neq \hat{y}^*_i\big) \quad (6)$$

where $Q$ is the number of query samples , $h$ is the task-specific classifier that differs among tasks , decided by the sampled support data , and $h^*$ is task-independent with respect to these $N$ classes . Thus , $dis(h, h^*)$ indicates the discrepancy between the task-specific classifiers and the ideal classifier . Table 1 shows results on two benchmarks : mini-ImageNet and tiered-ImageNet . 
From Table 1 , we can see that $\epsilon_{novel}(h^*)$ is generally lower than $dis(h, h^*)$ by a large margin . For example , the 1-shot $dis(h, h^*)$ on mini-ImageNet with RR is up to 38.34 % while $\epsilon_{novel}(h^*)$ is 6.72 % , which is more than five times lower . Furthermore , the obvious drop of $dis(h, h^*)$ from 1-shot to 5-shot indicates that classifier discrepancy decreases markedly as more support samples become available . An interesting conclusion can be drawn from this experiment : the error on novel classes in low-data regimes is dominantly caused by classifier discrepancy rather than by a lack of linear separability . More details about this experiment are presented in the appendix . | This paper analyzes the upper bound of error rate on novel classes in few-shot learning theoretically. It derives that the upper bound is decided by feature separability and classifier discrepancy and shows that classification error is mainly caused by classifier discrepancy in few-shot scenarios. In addition, this paper proposes a new method to lower the upper bound of classification error by reducing classifier discrepancy. | SP:68adb3cef88d96379ca9e717817a25d6ba545ef9 |
Towards Understanding the Cause of Error in Few-Shot Learning | 1 INTRODUCTION . Learning novel concepts from few samples is one of the most important abilities of the human cognition system ( Chen et al . ( 2018 ) ; Dhillon et al . ( 2019 ) ; Wang et al . ( 2020 ) ) . By contrast , the massive achievements of modern artificial intelligence systems depend on large amounts of data and annotation , which are hard to acquire in many scenarios . Blocked by the difficulty of obtaining large labeled datasets , the community shows growing interest in developing algorithms with high data efficiency . This is so-called few-shot learning , which learns to generalize well to new categories with scarce labeled samples ( Sung et al . ( 2018 ) ; Vinyals et al . ( 2016 ) ) . Existing methods deal with few-shot learning in the general framework of meta-learning , where a base learner is developed and optimized across different episodes ( or tasks ) . Episodes are formed in an N-way K-shot fashion where K support samples per class are available for training . The overall objective is to enable the base learner to exploit base classes and to transfer learnt knowledge to recognize novel classes with few support data . Since training and evaluation are performed on different tasks , the base learner holds different task-specific classifiers that depend on data sampling . In general , a classification model has two components : a feature extractor and a classifier ( Simonyan & Zisserman ( 2015 ) ; He et al . ( 2016 ) ; Zagoruyko & Komodakis ( 2016 ) ) . Most approaches to few-shot learning work from the corresponding perspectives : learning a good embedding and finding the right base learner . Rethinking-FSC ( Tian et al . ( 2020 ) ) demonstrates that a well-learned embedding space can be more effective than many sophisticated meta-learning algorithms ; it evaluates performance on the meta set where embeddings are learnt in a supervised or self-supervised way . Goldblum et al . 
( 2020 ) reveal the importance of feature clustering in few-shot learning . Since classifier performance is sample-dependent , especially in the one-shot scenario , the variance of features is expected to be small so as to retain good performance ; this shows that classifier performance is not stable across different tasks . MetaOptNet ( Lee et al . ( 2019 ) ) and R2-D2 ( Bertinetto et al . ( 2018 ) ) explore training and optimization routines for linear classifiers , enabling good few-shot performance through a simple base learner . These works develop specific algorithms from the aspects of learning a good representation or optimizing the base learner . Most recent methods use a linear classifier as the base learner , so we also consider linear models in this paper . To the best of our knowledge , there has been little research focusing on how the two components ( i.e. , feature representation and classifier ) respectively influence the performance on novel classes in FSL . In this paper , we introduce an upper bound on the error rate in few-shot learning , indicating that the error comes from two aspects : 1 ) linear separability in the embedding space and 2 ) classifier discrepancy between task-specific and task-independent classifiers . The ideal classifier is viewed as task-independent since its performance is not sample-dependent ( Goldblum et al . ( 2020 ) ) . To quantitatively estimate each term , we perform an experiment in which the error rate of supervised classification tasks on novel classes measures feature separability , and the disagreement between results obtained from different classifiers measures discrepancy . This leads to an interesting observation : features learned through simple methods are sufficiently discriminative , and the error mainly comes from classifier discrepancy . Based on our observation and theoretical analysis , we propose a simple method of reducing classifier discrepancy so as to boost few-shot performance . 
Experiments on three benchmarks are conducted to empirically verify our theory . Results on different datasets with various base learners show consistent improvements , supporting our finding and theory in few-shot learning . The main contributions of this paper are : 1 . The upper bound of the error rate on novel classes is theoretically analyzed . From the derived equations we find that the error in FSL is caused by linear separability in the feature space and by the discrepancy between task-specific and task-independent classifiers . 2 . Quantitative experiments are conducted to verify the theoretical analysis . Results show that the error is dominantly caused by classifier discrepancy . 3 . Based on the theoretical analysis and the experimental results , a constraint is proposed to reduce classifier discrepancy so as to decrease the upper bound of the error rate in FSL . 4 . Further experiments on mini-ImageNet , tiered-ImageNet and CIFAR-FS confirm the effectiveness of the proposed method , showing that decreasing classifier discrepancy consistently achieves improvements in most cases . 2 RELATED WORK . Algorithms of Few-Shot Learning : Prototypical Network ( Snell et al . ( 2017 ) ) is a classic algorithm known for its simplicity and effectiveness , which performs few-shot classification by nearest-prototype matching . Since a class prototype is the mean of features , the linear separability of the feature space has a direct impact on classification results . The performance of the subsequent series of prototype-based methods ( Allen et al . ( 2019 ) ; Liu et al . ( 2020 ) ) is also limited by feature separability . Different from these methods using nearest-neighbor classifiers , Bertinetto et al . ( 2018 ) adopt ridge regression and logistic regression as base learners . Similarly , Lee et al . ( 2019 ) use the classical linear classifier SVM in few-shot learning to learn representations . 
Simple linear classifiers show competitive performance , and in this paper we use a linear classifier for measuring linear separability and classifier discrepancy . Theoretical Analysis of Few-Shot Learning : Cao et al . ( 2019 ) introduce a bound on the accuracy of Prototypical Network ( Snell et al . ( 2017 ) ) , demonstrating that the intrinsic dimension of the embedding function 's output space varies with the number of shots . They further propose a method to overcome the negative impact of mismatched shots between the meta-train and meta-test stages . Liu et al . ( 2020 ) give a lower bound on the accuracy of a cosine-similarity-based prototypical network , theoretically formulating two key factors : intra-class bias and cross-class bias . We also analyze theoretical bounds in few-shot learning . The theory in this paper does not focus on a specific algorithm like Prototypical Network but on general scenarios , from the perspective of feature separability and classifier discrepancy . Theoretical Analysis of Domain Adaptation : Methods of domain adaptation ( Ben-David et al . ( 2007 ; 2010 ) ; Ganin & Lempitsky ( 2015 ) ) solve the problem of how to train a classifier on a source domain while guaranteeing that the classifier performs well on the target domain . A classifier 's target error is bounded by its source error and the divergence between the two domains ( Ben-David et al . ( 2010 ) ) . They utilize the H-divergence and the H∆H-divergence to measure the discrepancy between two domains . The H∆H-divergence can be computed from finite unlabeled data , allowing us to directly estimate the error of a source-trained classifier on the target domain . Inspired by their work , we also use the H∆H-divergence to measure the discrepancy between sets of novel classes and base classes . 3 BACKGROUND . 3.1 PROBLEM SETUP . The common setup of few-shot learning used in this paper is described below . The class space is divided into two parts : base classes $C_{base}$ and novel classes $C_{novel}$ , where $C_{base} \cap C_{novel} = \emptyset$ . 
Dataset $D_{base}$ of base classes is used for model training and the model is evaluated on dataset $D_{novel}$ , whose samples belong to classes unseen during training . The model is composed of a feature extractor $F$ and a classifier $h$ . In few-shot learning , we usually consider $N$-way $K$-shot $Q$-query tasks $T$ . In task $\tau_i = (D^s_i, D^q_i, h)$ , the support set $D^s_i$ includes $K$ data points $x \in \mathbb{R}^d$ per class together with their true labels $y \in \{c_1, ..., c_N\}$ . The goal is to predict labels for query data in $D^q_i$ given $D^s_i$ . In this paper , we use the error rate on novel classes to evaluate the few-shot performance of a trained model . The error rate is formulated as :

$$\epsilon_{novel} = \mathbb{E}[\epsilon_\tau] = \frac{1}{M \times Q} \sum_{i=1}^{M} \sum_{j=1}^{Q} \mathbb{1}\big(h(F(x_{i,j})) \neq y_{i,j}\big) \quad (1)$$

where $M$ is the number of sampled tasks $\tau_i \sim T_{novel}$ and $\mathbb{1}(\cdot)$ is the indicator function . 3.2 DISTRIBUTION DIVERGENCE . We adopt the following concepts to explore the cause of error in few-shot scenarios . Definition 1 : Given a set $D = \{(x_1, y_1), ..., (x_m, y_m)\}$ where $x_i \in X$ and $y_i \in Y$ , for any mappings $h_1, h_2 : X \to Y$ , the disagreement defined in Eqn . 2 measures the difference between these two mappings :

$$dis(h_1, h_2) = P_{x \sim D_X}\big(h_1(x) \neq h_2(x)\big) \quad (2)$$

Definition 2 : Given a domain $X$ with $D_1$ and $D_2$ probability distributions over $X$ , let $H$ be a hypothesis class on $X$ and denote by $I(h)$ the set for which $h \in H$ is the characteristic function ; that is , $x \in I(h) \Leftrightarrow h(x) = 1$ . The $H$-divergence between $D_1$ and $D_2$ is

$$d_H(D_1, D_2) = 2 \sup_{h \in H} \left| \Pr_{D_1}[I(h)] - \Pr_{D_2}[I(h)] \right| \quad (3)$$

Definition 3 : For hypotheses $h, h' \in H$ , the symmetric difference hypothesis space $H \Delta H$ is the set of hypotheses $g$ with $g \in H \Delta H \Leftrightarrow g(x) = h(x) \oplus h'(x)$ , where $\oplus$ is the XOR function . The $H \Delta H$-divergence over distributions is defined as follows :

$$d_{H \Delta H}(D_1, D_2) = 2 \sup_{h, h' \in H} \left| \Pr_{x \sim D_1}[h(x) \neq h'(x)] - \Pr_{x \sim D_2}[h(x) \neq h'(x)] \right| \quad (4)$$

4 PROPOSED METHOD . 4.1 MEASURING CLASSIFICATION PERFORMANCE ON NOVEL CLASSES . 
A classification model generally consists of two parts : a feature extractor and a classifier . Hence , the model holds two expectations : 1 ) extracted features are expected to be discriminative for classification ( Snell et al . ( 2017 ) ; Lee et al . ( 2019 ) ) ; 2 ) the classifier is supposed to be stable across different tasks ( Cao et al . ( 2019 ) ; Liu et al . ( 2020 ) ) . Considering linear classifiers in this paper , we measure classification performance on novel classes from the perspectives of linear separability and classifier stability . To quantitatively estimate these two terms , we design experiments using a base model with a ResNet-12 backbone and an FC ( fully-connected ) layer classifier . The base model is first trained on base classes in a supervised way . When testing on novel classes , we replace the FC layer classifier with ProtoNet ( Snell et al . ( 2017 ) ) and ridge regression ( RR ) as in Ye et al . ( 2020 ) . The error rate $\epsilon_{novel}(h^*)$ ( i.e. , the error rate on $C_{novel}$ with classifier $h^*$ ) is used to approximate and quantify feature separability on novel classes ( Eqn . 5 ) :

$$\epsilon_{novel}(h^*) = \frac{1}{N \times Q^*} \sum_{i=1}^{N \times Q^*} \mathbb{1}\big(\hat{y}_i \neq y_i\big) \quad (5)$$

In Eqn . 5 , $Q^*$ is the number of all samples of each class $c \in \{c_1, ..., c_N\}$ , $\hat{y}_i$ is the predicted label and $y_i$ is the true label . $h^*$ is trained in a supervised way from a large set of samples and is used to approximate the expected ideal $N$-way classifier . On the other hand , we use the disagreement defined in Eqn . 6 to measure classifier discrepancy :

$$dis(h, h^*) = \frac{1}{N \times Q} \sum_{i=1}^{N \times Q} \mathbb{1}\big(\hat{y}_i \neq \hat{y}^*_i\big) \quad (6)$$

where $Q$ is the number of query samples , $h$ is the task-specific classifier that differs among tasks , decided by the sampled support data , and $h^*$ is task-independent with respect to these $N$ classes . Thus , $dis(h, h^*)$ indicates the discrepancy between the task-specific classifiers and the ideal classifier . Table 1 shows results on two benchmarks : mini-ImageNet and tiered-ImageNet . 
From Table 1 , we can see that $\epsilon_{novel}(h^*)$ is generally lower than $dis(h, h^*)$ by a large margin . For example , the 1-shot $dis(h, h^*)$ on mini-ImageNet with RR is up to 38.34 % while $\epsilon_{novel}(h^*)$ is 6.72 % , which is more than five times lower . Furthermore , the obvious drop of $dis(h, h^*)$ from 1-shot to 5-shot indicates that classifier discrepancy decreases markedly as more support samples become available . An interesting conclusion can be drawn from this experiment : the error on novel classes in low-data regimes is dominantly caused by classifier discrepancy rather than by a lack of linear separability . More details about this experiment are presented in the appendix . | This paper aims to understand the cause of error in few-shot classification. The authors are particularly interested in the upper-bound of the error rate, which they break down into linear separability in the feature space of the meta-train classes and classifier discrepancy on the meta-train classes, among other terms. Empirical results show that the latter is the dominant term. After identifying this, the authors propose a method, Reducing Classifier Discrepancy, to reduce the classifier discrepancy, lowering the upper-bound of the error rate. Empirical results show the benefits of the proposed method on three few-shot datasets. | SP:68adb3cef88d96379ca9e717817a25d6ba545ef9 |
Learning Spatiotemporal Features via Video and Text Pair Discrimination | 1 INTRODUCTION . Deep learning has made remarkable progress in visual recognition in both the image and video domains ( Krizhevsky et al. , 2012 ; He et al. , 2016 ; Carreira & Zisserman , 2017 ; Feichtenhofer et al. , 2018 ) by training powerful neural networks on large-scale manually annotated datasets ( e.g. , ImageNet ( Deng et al. , 2009 ) and Kinetics ( Kay et al. , 2017 ) ) . More importantly , it is well-established that this supervised pre-training on large-scale datasets benefits downstream tasks ( e.g. , object detection ( Ren et al. , 2015 ) , pose estimation ( He et al. , 2017 ) , and temporal action detection ( Zhao et al. , 2017 ) ) , in particular when the target datasets are relatively small . Yet , annotating a large-scale dataset for training such deep neural networks is costly and time-consuming , and even more challenging for video due to its varied temporal structure and complex semantics . As a result , existing video datasets are still smaller than ImageNet in terms of training samples and classes . On the other hand , videos typically contain richer structure with abundant side information such as motion ( Diba et al. , 2019 ; Ng et al. , 2018 ) , audio ( Arandjelovic & Zisserman , 2017 ; Korbar et al. , 2018 ) , and text ( Miech et al. , 2019 ; Sun et al. , 2019b ) . These associated modalities are expected to provide useful cues for learning video representations in a more efficient way . Language , or text , is probably the most natural and easy way to describe the semantic information of a video , and the associated textual information can be easily acquired when collecting video datasets ( Rohrbach et al. , 2017 ; Miech et al. , 2019 ) from the Internet or movies . We argue that this correlation between a clip and its associated text can serve as an alternative source of supervision for learning video representations from scratch . 
This is different from some recent works ( Sun et al. , 2019b ; Miech et al. , 2019 ) , in which this abundant textual information has been used to learn a high-level visual-text embedding applied to text-to-video retrieval or video captioning . Intuitively , it is more challenging to learn a general visual representation solely from text information without any human annotation , for reasons such as the large amount of noise in text , the lack of careful initialization , and the difficulty of designing an effective objective . In this paper , we aim to learn effective video representations from noisy and diverse textual information , which can serve as the basis for a variety of downstream tasks . Basically , we learn a mapping of text and video into a shared embedding space and leverage their correlation as a supervision signal . The technical difficulty is how to design an effective objective function that is capable of modeling this complex visual-textual correlation and can also be easily optimized when training from scratch on noisy datasets . Inspired by unsupervised feature learning in images ( Wu et al. , 2018 ; Tian et al. , 2019 ) , we present a cross-modal pair discrimination ( CPD ) framework , which tries to recognize each video and text pair as a distinct class via a non-parametric classifier . To solve the computational issues imposed by the huge number of pair classes , we adapt the noise-contrastive estimation technique ( Gutmann & Hyvärinen , 2010 ) to approximate the original loss function . Specifically , we learn the CPD framework from web videos with their associated titles or captions , which can be directly crawled from web platforms such as YouTube ( Kay et al. , 2017 ) and Instagram ( Duan et al. , 2020 ) . We utilize off-the-shelf language models such as BERT ( Devlin et al. , 2019 ) or word2vec ( Mikolov et al. , 2013 ) and devise a curriculum learning strategy to progressively train the video models . 
We first test the generalization ability of the video representations learned by CPD on the Kinetics dataset ( Kay et al. , 2017 ) using shallow classifiers such as k-NN and a linear classifier . The learned spatiotemporal features obtain promising results that are comparable to some supervised learning methods on the Kinetics dataset ( Kay et al. , 2017 ) . Then , we investigate the transfer power of the learned spatiotemporal features of CPD by fine-tuning on the Kinetics ( Kay et al. , 2017 ) , UCF101 ( Soomro et al. , 2012 ) and HMDB51 ( Kuehne et al. , 2011 ) datasets , demonstrating that our method obtains superior performance to previous state-of-the-art self-supervised methods and comparable performance to very recent methods that use orders of magnitude more videos ( 70M-100M vs. 0.3M ) . 2 RELATED WORK . Self/Weakly Supervised Representation Learning . Self-supervised representation learning has been popular in both the image and video domains through the design of various proxy tasks . In the image domain , for instance , these tasks include predicting image context ( Doersch et al. , 2015 ) , counting objects ( Noroozi et al. , 2017 ) , colorizing gray images ( Zhang et al. , 2016 ) , and keeping global and local consistency ( Hjelm et al. , 2019 ) . In the video domain , typical examples include frame prediction ( Diba et al. , 2019 ; Vondrick et al. , 2016 ) , optical flow estimation ( Ng et al. , 2018 ; Zhou et al. , 2017 ; Jayaraman & Grauman , 2017 ) , instance tracking ( Wang & Gupta , 2015 ; Wang et al. , 2019b ) , and temporal order or structure prediction ( Misra et al. , 2016 ; Fernando et al. , 2017 ; Wei et al. , 2018 ; Xu et al. , 2019a ) . These learnt representations may capture some aspects of low-level image or video structure , but are generally outperformed by those using cross-modal information . 
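The shallow-classifier evaluation mentioned above (k-NN on frozen features) can be sketched as follows. This is a simplified illustration using cosine similarity and majority voting; the exact protocol details (value of k, similarity measure) are our assumptions, not necessarily those used in the paper:

```python
import numpy as np

def knn_probe(train_feats, train_labels, test_feats, test_labels, k=5):
    """Evaluate frozen features with a cosine-similarity k-NN classifier.

    train_feats, test_feats: (num_samples, d) feature matrices from the frozen backbone;
    train_labels, test_labels: integer class labels. Returns top-1 accuracy.
    """
    # L2-normalize so the dot product equals cosine similarity
    tr = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    te = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sims = te @ tr.T                           # (num_test, num_train) similarities
    nn = np.argsort(-sims, axis=1)[:, :k]      # indices of the k nearest neighbours
    votes = train_labels[nn]                   # (num_test, k) neighbour labels
    preds = np.array([np.bincount(v).argmax() for v in votes])  # majority vote
    return float(np.mean(preds == test_labels))
```

A linear-probe evaluation would instead fit a logistic-regression classifier on the frozen train features; the k-NN variant above requires no training and is a common sanity check for representation quality.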
Several cross-modal self-supervised tasks have been proposed to enhance single-modality representation power ; a typical example is audio-visual representation learning ( Aytar et al. , 2016 ; Arandjelovic & Zisserman , 2017 ; Korbar et al. , 2018 ) . Meanwhile , some weakly-supervised methods were developed by utilizing web supervision obtained in an automatic way , such as query IDs ( Chen & Gupta , 2015 ; Ghadiyaram et al. , 2019 ) and hashtags ( Mahajan et al. , 2018 ) . Concurrent work ( Miech et al. , 2020 ) tried to learn video representations by using narration as supervision with instructional videos ( e.g. , HowTo100M ( Miech et al. , 2019 ) ) ; however , it is limited to this video type . Our CPD is applicable to more general video types , and we experiment with a much smaller dataset ( 0.3M vs. 100M ) of both PGC and UGC videos , yet achieve a similar performance on UCF101 and HMDB51 . Concurrent work ( Stroud et al. , 2020 ) proposed a similar framework but required more training videos ( 0.3M vs. 70M ) and richer textual information to obtain performance similar to ours . Motion , Audio , and Text . Multi-modal information in videos provides natural cues for learning deep models . Motion or temporal information has been used to design proxy tasks that assist cross-modal learning , such as optical flow or tracking ( Ng et al. , 2018 ; Wang & Gupta , 2015 ) , frame prediction ( Diba et al. , 2019 ; Vondrick et al. , 2016 ) , or high-level temporal structure ( Wei et al. , 2018 ; Xu et al. , 2019a ; Fernando et al. , 2017 ) . As most videos contain synchronized audio and visual signals , audio information has served as another common modality to supervise visual learning ( Aytar et al. , 2016 ; Arandjelovic & Zisserman , 2017 ; Korbar et al. , 2018 ) . However , both motion and audio are relatively low-level signals and may lack the high-level semantics needed for cross-modal learning . 
Speech or text has been widely studied as another cross-modal setting in video learning ( Sun et al. , 2019b ; Miech et al. , 2019 ; Dong et al. , 2019 ; Miech et al. , 2018 ; Pan et al. , 2016 ; Plummer et al. , 2017 ) . These works mainly aimed to learn a joint video-text embedding where visual and textual cues are adjacent if they are semantically . However , these works focused on learn high-level visual- textual embedding by using the off-the-shelf models as feature extractors . Instead , our proposed CPD framework addresses a different issue of video representation learning from scratch . 3 CROSS-MODAL PAIR DISCRIMINATION . In this section we provide an detailed description on our proposed cross-modal pair discrimination ( CPD ) for weakly supervised spatiotemporal feature learning . First , we present the whole framework and analyze its important properties . Then , we describe the training strategy of CPD framework . Finally , we introduce text and video feature extraction networks . 3.1 FRAMEWORK AND ANALYSIS . Our goal is to propose a weakly supervised representation learning method by exploiting the correlation between each video clip and its associated text information , which could be easily obtained from a variety of sources such as YouTube titles , Instagram captions and automatic speech recognition ( ASR ) . It is generally assumed that these text information contains semantic information , but also might be noisy and irrelevant . Therefore , from technical perspective , we need to design an effective objective function and training strategy to capture this semantic correlation and as well also suppress the effect of noisy and irrelevant information . To this end , we devise a video-text pair discrimination objective and a curriculum learning strategy as follows . 
More formally , as shown in Figure 1 , we aim to learn a modality-specific embedding function Fv and Ft for the visual and textual information from a set of N video clips and their associated textual information { ( vi , ti ) i=1 } N . Let fvi and f ti denote Fv ( vi ) and Ft ( ti ) , respectively . These embedding functions would map these two modality into a common space ( i.e. , fvi ∈ Rd and fvi ∈ Rd ) , and related visual and text information should be close to each other . The embedding functions could be implemented by neural networks which will be clarified in next section . We first focus on how to devise objective function to optimize these embedding functions . Inspired by the work of unsupervised learning in images ( Wu et al. , 2018 ) , we design a cross-modal pair discrimination objective to learn these two embedding functions . Self-instance discrimination . In the original instance-level discrimination framework ( Wu et al. , 2018 ) , each image is treated as a distinct class and it would learn a classifier to categorize each image into its own class . This framework could be naturally extended into the setting of video and text pair by directly using feature concatenation , and we call this extension as self-instance discrimination . Formally , this video-text level instance discrimination objective could be implemented with the following softmax criterion : p ( i| ( v , t ) ) = exp ( w vT i f v +wtTi f t ) ∑N j=1 exp ( w vT j f v +wtTj f t ) , ( 1 ) where the ith video-text pair define a class i , ( wvi , w t i ) is a weight for class i , and the class number is equal to training sample numberN . This class weight represent a class prototype for each video-text instance and is probably not easy to optimize as we only have a single sample for each class . 
Thus , the above parametric classifier could be refined with the following non-parametric variant : p ( i| ( v , t ) ) = exp ( f vT i f v/τ + f tTi f t/τ ) ∑N j=1 exp ( f vT j f v/τ + f tTj f t/τ ) , ( 2 ) where τ is a temperature parameter to control the class concentration level and our training objective is to optimize the likelihood ∏N i=1 p ( i| ( vi , ti ) ) . This straight forward extension shares the advantage of instance-level discrimination by directly modeling in the joint video-text space . Yet , in fact , the semantic information of text modality is higher than video pixels and we aims at learning video features with the supervision of textual information . To meet this requirement , we propose a refined objective function from the perspective of conditional distribution . Cross-pair discrimination . According to the above analysis , we design the objective function by considering conditional distribution p ( it|v ) and p ( iv|t ) rather than implicitly modeling distribution p ( v , t ) . Specifically , we design the following conditional distribution : p ( it|v ) = exp ( f tTi f v/τ ) ∑N j=1 exp ( f tT j f v/τ ) , ( 3 ) where ith text define a text class it , and both f t and fv with unit-norm constraint . The conditional distribution p ( iv|t ) could be defined at the same way . We call this framework as cross-pair discrimination , and during training phase , the objective is to maximize the likelihood∏N i=1 p ( it|vi ) ∏N i=1 p ( iv|ti ) . The key difference between Equation ( 2 ) and ( 3 ) is that we propose to use cross-correlation term f tT fv to replace the self-correlation term ( fvT fv+f tT f t ) . This cross correlation is more effective to capture the mutual information between visual and textual information , and thereby better at guiding the spatiotemporal feature learning from video with text information as supervision . Ranking loss . There is some common ranking loss for cross-modal matching . 
To well study the effectiveness of proposed cross-modal pair discrimination objective , we also compare with a baseline of ranking loss , which is defined as follows : L ( vi , ti ) = 1 n− 1 ∑ j 6=i max ( 0 , δ + S ( f tj , fvi ) − S ( f ti , fvi ) ) , ( 4 ) where each video vi has a associated text ti and unrelated text tj from current batch . S ( f tj , fvi ) is the cosine similarity , n is the batch size and δ is a margin . We apply Equation ( 4 ) in both ways of video with its associated text and text with its video . In experiment , we empirically compare this ranking loss with our designed cross-pair discrimination objective . | The paper proposes an approach to learn a video feature backbone in an unsupervised manner through the use of video titles (text modality) associated with user generated content from Youtube or Instagram. The key idea is to use a contrastive loss that increases the similarity score between a positive pair vs. a negative pair. Contrary to previous works in this direction that require millions to hundreds of millions of paired clips, this work shows that good performance can be achieved by using much fewer (on the order of 100k) clips. The learned video model achieves good performance on standard action recognition datasets. | SP:c33e067199692acdfb5377694f8b4945415d0321 |
Learning Spatiotemporal Features via Video and Text Pair Discrimination

1 INTRODUCTION. Deep learning has made remarkable progress in visual recognition in both the image and video domains (Krizhevsky et al., 2012; He et al., 2016; Carreira & Zisserman, 2017; Feichtenhofer et al., 2018) by training powerful neural networks on large-scale manually annotated datasets (e.g., ImageNet (Deng et al., 2009) and Kinetics (Kay et al., 2017)). More importantly, it is well established that supervised pre-training on large-scale datasets benefits downstream tasks (e.g., object detection (Ren et al., 2015), pose estimation (He et al., 2017), and temporal action detection (Zhao et al., 2017)), in particular when the target datasets are relatively small. Yet annotating a large-scale dataset for training such deep neural networks is costly and time-consuming, and even more challenging for video due to its varied temporal structure and complex semantics. As a result, existing video datasets are still smaller than ImageNet in terms of training samples and classes. On the other hand, videos typically contain richer structure with abundant side information such as motion (Diba et al., 2019; Ng et al., 2018), audio (Arandjelovic & Zisserman, 2017; Korbar et al., 2018), and text (Miech et al., 2019; Sun et al., 2019b), so these associated modalities are expected to provide useful cues for learning video representations more efficiently. Language is probably the most natural and easy way to describe the semantic content of a video, and the associated textual information can be easily acquired when collecting video datasets (Rohrbach et al., 2017; Miech et al., 2019) from the Internet or from movies. We argue that the correlation between a clip and its associated text can serve as an alternative supervision signal for learning video representations from scratch.
This is different from some recent works (Sun et al., 2019b; Miech et al., 2019), in which such abundant textual information has been used to learn a high-level visual-text embedding applied to text-to-video retrieval or video captioning. Intuitively, it is more challenging to learn a general visual representation solely from text information without any human annotation, for reasons such as the large amount of noise in text, the lack of careful initialization, and the difficulty of designing an effective objective. In this paper, we aim to learn effective video representations from noisy and diverse textual information, which can serve as the basis for a variety of downstream tasks. Basically, we learn a mapping of text and video into a shared embedding space and leverage their correlation as the supervision signal. The technical difficulty is how to design an effective objective function that is capable of modeling this complex visual-textual correlation and can also be easily optimized when training from scratch on noisy datasets. Inspired by unsupervised feature learning in images (Wu et al., 2018; Tian et al., 2019), we present a cross-modal pair discrimination (CPD) framework, which tries to recognize each video and text pair as a distinct class via a non-parametric classifier. To solve the computational issues imposed by the huge number of pair classes, we adapt the noise-contrastive estimation technique (Gutmann & Hyvärinen, 2010) to approximate the original loss function. Specifically, we learn the CPD framework from web videos with the associated title or caption that can be directly crawled from web platforms such as YouTube (Kay et al., 2017) and Instagram (Duan et al., 2020). We utilize off-the-shelf language models such as BERT (Devlin et al., 2019) or Word2vec (Mikolov et al., 2013) and devise a curriculum learning strategy to progressively train the video models.
We first test the generalization ability of the video representation learned by CPD on the Kinetics dataset (Kay et al., 2017) using shallow classifiers such as k-NN and a linear classifier. Our learned spatiotemporal features obtain promising results that are comparable to some supervised learning methods on Kinetics. We then investigate the generalization power of the learned spatiotemporal features by fine-tuning on the Kinetics (Kay et al., 2017), UCF101 (Soomro et al., 2012), and HMDB51 (Kuehne et al., 2011) datasets, demonstrating that our method obtains performance superior to previous state-of-the-art self-supervised methods and comparable to very recent methods that use orders of magnitude more videos (70M-100M vs. 0.3M).

2 RELATED WORK. Self/Weakly Supervised Representation Learning. Self-supervised representation learning has been popular in both the image and video domains through the design of various proxy tasks. In the image domain, for instance, these tasks include predicting the image context (Doersch et al., 2015), counting objects (Noroozi et al., 2017), colorizing grayscale images (Zhang et al., 2016), and keeping global and local consistency (Hjelm et al., 2019). In the video domain, typical examples include frame prediction (Diba et al., 2019; Vondrick et al., 2016), optical flow estimation (Ng et al., 2018; Zhou et al., 2017; Jayaraman & Grauman, 2017), instance tracking (Wang & Gupta, 2015; Wang et al., 2019b), and temporal order or structure prediction (Misra et al., 2016; Fernando et al., 2017; Wei et al., 2018; Xu et al., 2019a). These learned representations may capture some aspects of low-level image or video structure, but are generally outperformed by those using cross-modal information.
Several cross-modal self-supervised tasks have been proposed to enhance single-modality representation power; a typical example is audio-visual representation learning (Aytar et al., 2016; Arandjelovic & Zisserman, 2017; Korbar et al., 2018). Meanwhile, some weakly-supervised methods have been developed by utilizing web supervision obtained in an automatic way, such as query IDs (Chen & Gupta, 2015; Ghadiyaram et al., 2019) and hashtags (Mahajan et al., 2018). Concurrent work (Miech et al., 2020) tried to learn video representations by using narration as supervision on instructional videos (e.g., HowTo100M (Miech et al., 2019)), but it is limited to that video type. Our CPD is applicable to more general video types: we experiment with a much smaller dataset (0.3M vs. 100M) of both PGC and UGC videos, yet achieve similar performance on UCF101 and HMDB51. Concurrent work (Stroud et al., 2020) proposed a similar framework but required more training videos (70M vs. 0.3M) and richer textual information to obtain performance similar to ours. Motion, Audio, and Text. Multi-modal information in videos provides natural cues for learning deep models. Motion or temporal information has been used to design proxy tasks that assist cross-modal learning, such as optical flow or tracking (Ng et al., 2018; Wang & Gupta, 2015), frame prediction (Diba et al., 2019; Vondrick et al., 2016), or high-level temporal structure (Wei et al., 2018; Xu et al., 2019a; Fernando et al., 2017). As most videos contain synchronized audio and visual signals, audio has served as another common modality for supervising visual learning (Aytar et al., 2016; Arandjelovic & Zisserman, 2017; Korbar et al., 2018). However, both motion and audio seem to be low-level signals and may lack the high-level semantics needed for cross-modal learning.
Speech or text has been widely studied as another cross-modal setting in video learning (Sun et al., 2019b; Miech et al., 2019; Dong et al., 2019; Miech et al., 2018; Pan et al., 2016; Plummer et al., 2017). These works mainly aimed to learn a joint video-text embedding in which visual and textual cues are close if they are semantically related. However, they focused on learning a high-level visual-textual embedding using off-the-shelf models as feature extractors. In contrast, our proposed CPD framework addresses a different issue: video representation learning from scratch.

3 CROSS-MODAL PAIR DISCRIMINATION. In this section we provide a detailed description of our proposed cross-modal pair discrimination (CPD) framework for weakly supervised spatiotemporal feature learning. First, we present the whole framework and analyze its important properties. Then, we describe the training strategy of the CPD framework. Finally, we introduce the text and video feature extraction networks.

3.1 FRAMEWORK AND ANALYSIS. Our goal is to propose a weakly supervised representation learning method that exploits the correlation between each video clip and its associated text, which can be easily obtained from a variety of sources such as YouTube titles, Instagram captions, and automatic speech recognition (ASR). It is generally assumed that this textual information carries semantic content, but it might also be noisy and irrelevant. Therefore, from a technical perspective, we need to design an effective objective function and training strategy that capture this semantic correlation while also suppressing the effect of noisy and irrelevant information. To this end, we devise a video-text pair discrimination objective and a curriculum learning strategy as follows.
More formally, as shown in Figure 1, we aim to learn modality-specific embedding functions $F_v$ and $F_t$ for the visual and textual information from a set of $N$ video clips and their associated texts $\{(v_i, t_i)\}_{i=1}^{N}$. Let $f^v_i$ and $f^t_i$ denote $F_v(v_i)$ and $F_t(t_i)$, respectively. These embedding functions map the two modalities into a common space (i.e., $f^v_i \in \mathbb{R}^d$ and $f^t_i \in \mathbb{R}^d$), in which related visual and textual information should be close to each other. The embedding functions are implemented by neural networks, which will be clarified in the next section. We first focus on how to devise an objective function to optimize these embedding functions. Inspired by work on unsupervised learning in images (Wu et al., 2018), we design a cross-modal pair discrimination objective to learn the two embedding functions. Self-instance discrimination. In the original instance-level discrimination framework (Wu et al., 2018), each image is treated as a distinct class, and a classifier is learned to categorize each image into its own class. This framework can be naturally extended to the setting of video-text pairs by directly concatenating features, and we call this extension self-instance discrimination. Formally, this video-text instance discrimination objective can be implemented with the following softmax criterion:
$$p(i \mid (v, t)) = \frac{\exp(w_i^{v\top} f^v + w_i^{t\top} f^t)}{\sum_{j=1}^{N} \exp(w_j^{v\top} f^v + w_j^{t\top} f^t)}, \qquad (1)$$
where the $i$-th video-text pair defines a class $i$, $(w_i^v, w_i^t)$ are the weights for class $i$, and the number of classes equals the number of training samples $N$. These class weights represent a class prototype for each video-text instance and are probably not easy to optimize, as we only have a single sample per class.
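As an illustration, the parametric pair-level softmax of Equation (1) can be sketched in a few lines of NumPy. The sizes, random weights, and function name below are toy assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def pair_softmax(f_v, f_t, W_v, W_t):
    """Parametric pair-level softmax of Eq. (1): the logit of class j is
    w_j^v . f^v + w_j^t . f^t, normalized over all N video-text classes."""
    logits = W_v @ f_v + W_t @ f_t        # one logit per instance class, shape (N,)
    logits = logits - logits.max()        # numerical stability; softmax is shift-invariant
    e = np.exp(logits)
    return e / e.sum()

# toy sizes: N = 3 instance classes, d = 4 dimensional embeddings (assumptions)
rng = np.random.default_rng(0)
N, d = 3, 4
W_v, W_t = rng.normal(size=(N, d)), rng.normal(size=(N, d))
f_v, f_t = rng.normal(size=d), rng.normal(size=d)
p = pair_softmax(f_v, f_t, W_v, W_t)      # a distribution over the N pair classes
```

Note that there is one weight pair $(w_i^v, w_i^t)$ per training instance, which is exactly why the text observes that these prototypes are hard to optimize from a single sample each.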
Thus, the above parametric classifier can be refined into the following non-parametric variant:
$$p(i \mid (v, t)) = \frac{\exp(f_i^{v\top} f^v/\tau + f_i^{t\top} f^t/\tau)}{\sum_{j=1}^{N} \exp(f_j^{v\top} f^v/\tau + f_j^{t\top} f^t/\tau)}, \qquad (2)$$
where $\tau$ is a temperature parameter that controls the concentration level of the class distribution, and the training objective is to maximize the likelihood $\prod_{i=1}^{N} p(i \mid (v_i, t_i))$. This straightforward extension shares the advantage of instance-level discrimination by modeling directly in the joint video-text space. Yet the text modality carries higher-level semantic information than raw video pixels, and our aim is to learn video features under the supervision of textual information. To meet this requirement, we propose a refined objective function from the perspective of conditional distributions. Cross-pair discrimination. Following the above analysis, we design the objective function by considering the conditional distributions $p(i_t \mid v)$ and $p(i_v \mid t)$ rather than implicitly modeling the joint distribution $p(v, t)$. Specifically, we define the following conditional distribution:
$$p(i_t \mid v) = \frac{\exp(f_i^{t\top} f^v/\tau)}{\sum_{j=1}^{N} \exp(f_j^{t\top} f^v/\tau)}, \qquad (3)$$
where the $i$-th text defines a text class $i_t$, and both $f^t$ and $f^v$ are subject to a unit-norm constraint. The conditional distribution $p(i_v \mid t)$ is defined in the same way. We call this framework cross-pair discrimination; during training, the objective is to maximize the likelihood $\prod_{i=1}^{N} p(i_t \mid v_i) \prod_{i=1}^{N} p(i_v \mid t_i)$. The key difference between Equations (2) and (3) is that we use the cross-correlation term $f^{t\top} f^v$ in place of the self-correlation term $(f^{v\top} f^v + f^{t\top} f^t)$. This cross-correlation is more effective at capturing the mutual information between the visual and textual modalities, and thereby better at guiding spatiotemporal feature learning from video with text as supervision. Ranking loss. Ranking losses are commonly used for cross-modal matching.
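For concreteness, the cross-pair objective of Equation (3) can be sketched batch-wise in NumPy, treating the current batch as the set of classes in the style of the paper's NCE approximation. The temperature value and function name are illustrative assumptions:

```python
import numpy as np

def cross_pair_loss(F_v, F_t, tau=0.07):
    """Batch-wise sketch of Eq. (3): for each L2-normalized video embedding
    f^v_i, its paired text f^t_i should win a softmax over all texts in the
    batch, and symmetrically for p(i_v | t)."""
    F_v = F_v / np.linalg.norm(F_v, axis=1, keepdims=True)   # unit-norm constraint
    F_t = F_t / np.linalg.norm(F_t, axis=1, keepdims=True)
    sim = F_t @ F_v.T / tau               # sim[j, i] = f^t_j . f^v_i / tau
    sim = sim - sim.max()                 # numerical stability; softmax is shift-invariant
    # p(i_t | v_i): normalize column i over all texts j; p(i_v | t_j): row j over videos
    log_p_t_v = sim - np.log(np.exp(sim).sum(axis=0, keepdims=True))
    log_p_v_t = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    idx = np.arange(len(F_v))
    # negative log-likelihood of the matching (diagonal) pairs in both directions
    return -(log_p_t_v[idx, idx].mean() + log_p_v_t[idx, idx].mean())
```

With matched orthonormal embeddings the diagonal pairs dominate both softmaxes and the loss is near zero; mismatched pairings drive it up, which is the gradient signal that pulls $f^t_i$ and $f^v_i$ together.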
To study the effectiveness of the proposed cross-modal pair discrimination objective, we also compare with a ranking-loss baseline, defined as follows:
$$L(v_i, t_i) = \frac{1}{n-1} \sum_{j \neq i} \max\big(0,\; \delta + S(f_j^t, f_i^v) - S(f_i^t, f_i^v)\big), \qquad (4)$$
where each video $v_i$ has an associated text $t_i$ and unrelated texts $t_j$ from the current batch, $S(\cdot, \cdot)$ is the cosine similarity, $n$ is the batch size, and $\delta$ is a margin. We apply Equation (4) in both directions: video against its associated text, and text against its associated video. In the experiments, we empirically compare this ranking loss with our cross-pair discrimination objective.
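A minimal NumPy sketch of the ranking-loss baseline in Equation (4), for one direction (a video against all texts in the batch); the margin value and function name are illustrative assumptions:

```python
import numpy as np

def ranking_loss(F_v, F_t, i, delta=0.2):
    """Hinge ranking loss of Eq. (4) for one video v_i: every unrelated text
    t_j in the batch should score at least `delta` below the paired text t_i."""
    F_v = F_v / np.linalg.norm(F_v, axis=1, keepdims=True)   # so dot product = cosine
    F_t = F_t / np.linalg.norm(F_t, axis=1, keepdims=True)
    pos = F_t[i] @ F_v[i]                 # S(f^t_i, f^v_i)
    neg = F_t @ F_v[i]                    # S(f^t_j, f^v_i) for every j in the batch
    hinge = np.maximum(0.0, delta + neg - pos)
    hinge[i] = 0.0                        # the sum in Eq. (4) excludes j = i
    return hinge.sum() / (len(F_t) - 1)
```

The symmetric direction of Equation (4) would apply the same function with the roles of `F_v` and `F_t` swapped; unlike the softmax of Equation (3), each negative contributes independently through the hinge, with no normalization over the batch.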
Learning Spatiotemporal Features via Video and Text Pair Discrimination | 1 INTRODUCTION . Deep learning has made a remarkable progress for visual recognition in both image and video domain ( Krizhevsky et al. , 2012 ; He et al. , 2016 ; Carreira & Zisserman , 2017 ; Feichtenhofer et al. , 2018 ) by training powerful neural networks on large-scale manually annotated datasets ( e.g. , ImageNet ( Deng et al. , 2009 ) and Kinetics ( Kay et al. , 2017 ) ) . More importantly , it is well-established that this supervised pre-training on large-scale datasets would benefit the downstream tasks ( e.g. , object detection ( Ren et al. , 2015 ) , pose estimation ( He et al. , 2017 ) , and temporal action detection ( Zhao et al. , 2017 ) ) , in particular when the target datasets are relatively small . Yet , annotating a large-scale dataset for training such deep neural networks is costly and time-consuming , and even more challenging for video due to its various temporal structure and complex semantics . As a result , the existing video datasets size is still smaller than ImageNet in terms of training samples and classes . On the other hand , videos typically contain richer structure with abundant side information such as motion ( Diba et al. , 2019 ; Ng et al. , 2018 ) , audio ( Arandjelovic & Zisserman , 2017 ; Korbar et al. , 2018 ) , and text ( Miech et al. , 2019 ; Sun et al. , 2019b ) . So these expected these associated modalities are expected to provide useful cues to learn video representations in a more efficient way . Language or text is probably the most natural and easy way to describe the semantic information of a video , and the associated textual information could be easily acquired when collecting video dataset ( Rohrbach et al. , 2017 ; Miech et al. , 2019 ) from Internet or Movie . We argue that this correlation between a clip and its associated text could serve as an alternative supervision to learn video representation from scratch . 
This is different from some recent works ( Sun et al. , 2019b ; Miech et al. , 2019 ) , in which these abundant textual information has been used to learn a high-level visual-text embedding applied to text-to-video retrieval or video captioning . Intuitively , it is more challenging to learn a general visual representation solely from text information without any human annotation , for reasons such as large numbers of noise in text , lacking careful initialization , and being hard to design an effective objective . In this paper , we aim to learn effective video representation from noisy and diverse textual information , which could serves as the basis for a variety of downstream tasks . Basically , we learn a mapping of text and video into a shared embedding space and leverage their correlation as supervision signal . The technical difficulty is how to design an effective objective function , that is capable of modeling this complex visual-textual correlation and as well easily optimized by training from scratch on noisy datasets . Inspired by unsupervised feature learning in images ( Wu et al. , 2018 ; Tian et al. , 2019 ) , we present a cross-modal pair discrimination ( CPD ) framework , which tries to recognize each video and text pair into a class via a non-parametric classifier . To solve the computational issues imposed by the huge numbers of pair classes , we adapt noise-contrastive estimation technique ( Gutmann & Hyvärinen , 2010 ) to approximate the original loss function . Specifically , we learn the CPD framework from web videos with the associated title or caption that could be directly crawled from web platforms such as YouTube ( Kay et al. , 2017 ) and Instagram ( Duan et al. , 2020 ) . We utilize the off-the-shelf language models such as BERT ( Devlin et al. , 2019 ) or Word2vec ( Mikolov et al. , 2013 ) and devise a curriculum learning strategy to progressively train the video models . 
We first test the generalization ability of learned video representation by CPD on the Kinetics dataset ( Kay et al. , 2017 ) by using shallow classifiers such k-NN and linear classifier . It shows that our learned spatiotemporal features obtain promising results which are comparable to some supervised learning methods on the Kinetics dataset ( Kay et al. , 2017 ) . Then , we investigate the generalization power of learned spatiotemporal features of CPD by fine-tuning on the Kinetics ( Kay et al. , 2017 ) , UCF101 ( Soomro et al. , 2012 ) and HMDB51 ( Kuehne et al. , 2011 ) datasets , demonstrating that our method obtain superior performance to previous state-of-the-art self-supervised methods and comparable performance to the very recent methods of using orders of magnitude more videos ( 70M-100M vs. 0.3M ) . 2 RELATED WORK . Self/Weakly Supervised Representation Learning . Self supervised representation was popular in both image and video domains by designing various proxy tasks . In image domain , for instance , these tasks could be predicting the image context ( Doersch et al. , 2015 ) , counting the objects ( Noroozi et al. , 2017 ) , converting gray images to color one ( Zhang et al. , 2016 ) , keeping global and local consistency ( Hjelm et al. , 2019 ) . In video domain , typical examples include frame prediction ( Diba et al. , 2019 ; Vondrick et al. , 2016 ) , optical flow estimation ( Ng et al. , 2018 ; Zhou et al. , 2017 ; Jayaraman & Grauman , 2017 ) , instance tracking ( Wang & Gupta , 2015 ; Wang et al. , 2019b ) , temporal order or structure prediction ( Misra et al. , 2016 ; Fernando et al. , 2017 ; Wei et al. , 2018 ; Xu et al. , 2019a ) . These learnt representations may capture some aspects of low-level image or video structures , but are generally outperformed by those using cross modal information . 
Several cross-modal self-supervised tasks was proposed to enhance single-modality representation power and typical example is audio-visual representation learning ( Aytar et al. , 2016 ; Arandjelovic & Zisserman , 2017 ; Korbar et al. , 2018 ) . Meanwhile , some weakly-supervised methods were developed by utilizing web supervision obtained in an automatic way , such as query ID ( Chen & Gupta , 2015 ; Ghadiyaram et al. , 2019 ) , and hashtag ( Mahajan et al. , 2018 ) . Concurrent work ( Miech et al. , 2020 ) tried to learn video representations by using narration as supervision with instructional videos ( e.g. , HowTo100M ( Miech et al. , 2019 ) ) . However , they are limited by the video type . Our CPD is applicable to more general video type and we experiment with a much smaller dataset ( 0.3M vs. 100M ) of both PGC and UGC videos , but achieves a similar performance on UCF101 and HMDB51 . Concurrent work ( Stroud et al. , 2020 ) proposed a similar framework but required more training videos ( 0.3M vs. 70M ) and richer textual information to obtain similar performance to ours . Motion , Audio , and Text . Multi-modal information in videos provides natural cues for learning deep models . Motion or temporal information has been studied as to design proxy tasks to assist cross-modal learning , such as optical flow or tracking ( Ng et al. , 2018 ; Wang & Gupta , 2015 ) , frame prediction ( Diba et al. , 2019 ; Vondrick et al. , 2016 ) , or high-level temporal structure ( Wei et al. , 2018 ; Xu et al. , 2019a ; Fernando et al. , 2017 ) . As most video contain synchronized audio and visual signals , audio information has served another common modality to supervised visual learning ( Aytar et al. , 2016 ; Arandjelovic & Zisserman , 2017 ; Korbar et al. , 2018 ) . However , both motion and audio information seem to be low-level signals and may lack high-level semantic for cross-modal learning . 
Speech or text has been widely studied as another cross-modal setting in video learning ( Sun et al. , 2019b ; Miech et al. , 2019 ; Dong et al. , 2019 ; Miech et al. , 2018 ; Pan et al. , 2016 ; Plummer et al. , 2017 ) . These works mainly aimed to learn a joint video-text embedding where visual and textual cues are adjacent if they are semantically . However , these works focused on learn high-level visual- textual embedding by using the off-the-shelf models as feature extractors . Instead , our proposed CPD framework addresses a different issue of video representation learning from scratch . 3 CROSS-MODAL PAIR DISCRIMINATION . In this section we provide an detailed description on our proposed cross-modal pair discrimination ( CPD ) for weakly supervised spatiotemporal feature learning . First , we present the whole framework and analyze its important properties . Then , we describe the training strategy of CPD framework . Finally , we introduce text and video feature extraction networks . 3.1 FRAMEWORK AND ANALYSIS . Our goal is to propose a weakly supervised representation learning method by exploiting the correlation between each video clip and its associated text information , which could be easily obtained from a variety of sources such as YouTube titles , Instagram captions and automatic speech recognition ( ASR ) . It is generally assumed that these text information contains semantic information , but also might be noisy and irrelevant . Therefore , from technical perspective , we need to design an effective objective function and training strategy to capture this semantic correlation and as well also suppress the effect of noisy and irrelevant information . To this end , we devise a video-text pair discrimination objective and a curriculum learning strategy as follows . 
More formally , as shown in Figure 1 , we aim to learn a modality-specific embedding function Fv and Ft for the visual and textual information from a set of N video clips and their associated textual information { ( vi , ti ) i=1 } N . Let fvi and f ti denote Fv ( vi ) and Ft ( ti ) , respectively . These embedding functions would map these two modality into a common space ( i.e. , fvi ∈ Rd and fvi ∈ Rd ) , and related visual and text information should be close to each other . The embedding functions could be implemented by neural networks which will be clarified in next section . We first focus on how to devise objective function to optimize these embedding functions . Inspired by the work of unsupervised learning in images ( Wu et al. , 2018 ) , we design a cross-modal pair discrimination objective to learn these two embedding functions . Self-instance discrimination . In the original instance-level discrimination framework ( Wu et al. , 2018 ) , each image is treated as a distinct class and it would learn a classifier to categorize each image into its own class . This framework could be naturally extended into the setting of video and text pair by directly using feature concatenation , and we call this extension as self-instance discrimination . Formally , this video-text level instance discrimination objective could be implemented with the following softmax criterion : p ( i| ( v , t ) ) = exp ( w vT i f v +wtTi f t ) ∑N j=1 exp ( w vT j f v +wtTj f t ) , ( 1 ) where the ith video-text pair define a class i , ( wvi , w t i ) is a weight for class i , and the class number is equal to training sample numberN . This class weight represent a class prototype for each video-text instance and is probably not easy to optimize as we only have a single sample for each class . 
Thus, the above parametric classifier can be refined into the following non-parametric variant: $$p(i \mid (v,t)) = \frac{\exp(f^{v\top}_i f^v/\tau + f^{t\top}_i f^t/\tau)}{\sum_{j=1}^{N} \exp(f^{v\top}_j f^v/\tau + f^{t\top}_j f^t/\tau)}, \quad (2)$$ where $\tau$ is a temperature parameter that controls the class concentration level, and our training objective is to maximize the likelihood $\prod_{i=1}^{N} p(i \mid (v_i, t_i))$. This straightforward extension shares the advantage of instance-level discrimination by modeling directly in the joint video-text space. Yet the semantic level of the text modality is higher than that of raw video pixels, and we aim at learning video features with the supervision of textual information. To meet this requirement, we propose a refined objective function from the perspective of conditional distributions. Cross-pair discrimination. Following the above analysis, we design the objective function by considering the conditional distributions $p(i_t \mid v)$ and $p(i_v \mid t)$ rather than implicitly modeling the joint distribution $p(v, t)$. Specifically, we define the following conditional distribution: $$p(i_t \mid v) = \frac{\exp(f^{t\top}_i f^v/\tau)}{\sum_{j=1}^{N} \exp(f^{t\top}_j f^v/\tau)}, \quad (3)$$ where the $i$th text defines a text class $i_t$, and both $f^t$ and $f^v$ are subject to a unit-norm constraint. The conditional distribution $p(i_v \mid t)$ is defined in the same way. We call this framework cross-pair discrimination; during training, the objective is to maximize the likelihood $\prod_{i=1}^{N} p(i_t \mid v_i) \prod_{i=1}^{N} p(i_v \mid t_i)$. The key difference between Equations (2) and (3) is that we use the cross-correlation term $f^{t\top} f^v$ in place of the self-correlation term $(f^{v\top} f^v + f^{t\top} f^t)$. This cross-correlation is more effective at capturing the mutual information between visual and textual information, and thereby better at guiding spatiotemporal feature learning from video with text as supervision. Ranking loss. Ranking losses are commonly used for cross-modal matching.
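Returning to Eq. (3): concretely, the cross-pair objective is a softmax cross-entropy over cross-modal similarity logits, with the stored instance embeddings acting as class prototypes. Below is a minimal numpy sketch; the memory-bank names and shapes are our own assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def cross_pair_nll(f_v, f_t, idx, bank_v, bank_t, tau=0.07):
    """Negative log-likelihood of Eq. (3) in both directions:
    -log p(i_t | v) - log p(i_v | t), averaged over the batch.
    f_v, f_t: (B, d) unit-norm batch embeddings; idx: (B,) instance ids;
    bank_v, bank_t: (N, d) unit-norm banks of all N instance embeddings,
    which serve as the non-parametric class prototypes."""
    def nll(queries, bank):
        logits = queries @ bank.T / tau                   # (B, N) similarity logits
        logz = np.log(np.exp(logits).sum(axis=1))         # log partition per row
        return (logz - logits[np.arange(len(idx)), idx]).mean()
    # video queries scored against text prototypes, and vice versa
    return nll(f_v, bank_t) + nll(f_t, bank_v)
```

When a video embedding coincides with its own text prototype (and vice versa), the loss is near zero; mismatched pairs yield a much larger loss.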
To study the effectiveness of the proposed cross-modal pair discrimination objective, we also compare with a ranking-loss baseline, defined as follows: $$L(v_i, t_i) = \frac{1}{n-1} \sum_{j \neq i} \max\big(0,\; \delta + S(f^t_j, f^v_i) - S(f^t_i, f^v_i)\big), \quad (4)$$ where each video $v_i$ has an associated text $t_i$ and unrelated texts $t_j$ from the current batch, $S(f^t_j, f^v_i)$ is the cosine similarity, $n$ is the batch size, and $\delta$ is a margin. We apply Equation (4) in both directions: video against its associated text and text against its associated video. In the experiments, we empirically compare this ranking loss with our cross-pair discrimination objective. | The paper proposes a weakly supervised method for learning spatiotemporal features by video and text pair discrimination, namely cross-modal pair discrimination (CPD). This can be considered an extension of (Wu et al. 2018) to video and text. From a technical perspective, the original method of Wu et al. is applied to images, while CPD is applied to video and text (video titles for Kinetics or hashtag search for Instagram). The most novel technical contribution of this paper is making Wu et al. 2018 cross-modal (between video and text). However, compared with Wu et al. 2018, it requires more supervision (weakly supervised vs. unsupervised). On the experiments, some comparisons are unfair and some experimental setups are biased (details below). | SP:c33e067199692acdfb5377694f8b4945415d0321
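For reference, the per-sample loss of Eq. (4) (video-to-text direction) can be computed batch-wise from the cosine-similarity matrix. A minimal numpy sketch, under the assumption that embeddings are pre-normalized:

```python
import numpy as np

def ranking_loss(f_v, f_t, delta=0.2):
    """Margin ranking loss of Eq. (4), video-to-text direction, averaged
    over the batch. f_v, f_t: (n, d) unit-normalized embeddings, where
    row i of f_v and row i of f_t form a matched video-text pair."""
    sim = f_v @ f_t.T                    # sim[i, j] = S(f^t_j, f^v_i)
    pos = np.diag(sim)                   # S(f^t_i, f^v_i): matched-pair similarity
    margins = np.maximum(0.0, delta + sim - pos[:, None])
    np.fill_diagonal(margins, 0.0)       # exclude the j == i term from the sum
    n = sim.shape[0]
    return margins.sum(axis=1).mean() / (n - 1)
```

If every matched pair beats all in-batch negatives by more than the margin, the loss is exactly zero, which is the intended behavior of a hinge-style ranking loss.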
Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory | 1 INTRODUCTION . Bayesian Neural Networks (BNNs) offer a probabilistic interpretation of deep learning by inferring distributions over the model's weights (Neal, 1996). With the potential to combine the scalability and performance of neural networks (NNs) with a framework for uncertainty quantification, BNNs have lately received increased attention (Blundell et al., 2015; Gal & Ghahramani, 2016). In particular, their ability to express epistemic uncertainty makes them highly relevant for applications such as active learning (Hernández-Lobato & Adams, 2015) and reinforcement learning (Riquelme et al., 2018). However, BNNs face two major issues: 1) the intractability of posterior inference and 2) the difficulty of choosing good Bayesian priors. While the former has been addressed in an extensive body of literature on variational inference (e.g. Blundell et al., 2015; Blei et al., 2016; Mishkin et al., 2018; Liu & Wang, 2016), the latter has only received limited attention (Vladimirova et al., 2019; Ghosh & Doshi-Velez, 2017). Choosing an informative prior for BNNs is particularly difficult due to the high-dimensional and hardly interpretable parameter space of NNs. Due to the lack of good alternatives, a zero-centered, isotropic Gaussian is often used, reflecting (almost) no a priori knowledge about the problem at hand. This not only leads to poor generalization when data is scarce, but also renders the Bayesian uncertainty estimates poorly calibrated (Kuleshov et al., 2018). Meta-learning (Schmidhuber, 1987; Thrun & Pratt, 1998) acquires inductive bias in a data-driven way, thus constituting an alternative route for addressing this issue. In particular, meta-learners attempt to extract shared (prior) knowledge from a set of related learning tasks (i.e.
, datasets), aiming to learn in the face of a new, related task. Our work develops a principled and scalable algorithm for meta-learning BNN priors. We build on the PAC-Bayesian framework (McAllester, 1999), a methodology from statistical learning theory for deriving generalization bounds. Previous PAC-Bayesian bounds for meta-learners (Pentina & Lampert, 2014; Amit & Meir, 2018) require solving a difficult optimization problem, involving the optimization of the prior as well as multiple variational posteriors in a nested manner. Aiming to overcome this issue, we present a PAC-Bayesian bound that does not rely on nested optimization and, unlike (Rothfuss et al., 2020), can be tractably optimized for BNNs. This makes the resulting meta-learner, referred to as PACOH-NN, not only much more computationally efficient and scalable than previous approaches for meta-learning BNN priors (Amit & Meir, 2018), but also agnostic to the choice of approximate posterior inference method, which allows us to combine it freely with recent advances in MCMC (e.g. Chen et al., 2014) or variational inference (e.g. Wang et al., 2019). Our experiments demonstrate that the computational advantages of PACOH-NN do not come at the cost of degraded predictive performance. In fact, across several regression and classification environments, PACOH-NN achieves comparable or better predictive accuracy than several popular meta-learning approaches, while improving the quality of the uncertainty estimates. Finally, we showcase how meta-learned PACOH-NN priors can be used in a real-world bandit task concerning the development of vaccines, suggesting that many other challenging real-world problems may benefit from our approach. 2 RELATED WORK . Bayesian Neural Networks . The majority of research on BNNs focuses on approximating the intractable posterior distribution (Graves, 2011; Blundell et al., 2015; Liu & Wang, 2016; Wang et al., 2019).
In particular, we employ the approximate inference method of Liu & Wang (2016). Another crucial question is how to select a good BNN prior (Vladimirova et al., 2019). While the majority of work (e.g. Louizos & Welling, 2016; Huang et al., 2020) employs a simple zero-centered, isotropic Gaussian, Ghosh & Doshi-Velez (2017) and Pearce et al. (2020) have proposed other prior distributions for BNNs. In contrast, we take the alternative route of choosing priors in a data-driven way. Meta-learning . A range of popular methods in meta-learning attempt to learn the “learning program” in the form of a recurrent model (Hochreiter et al., 2001; Andrychowicz et al., 2016; Chen et al., 2017), to learn an embedding space shared across tasks (Snell et al., 2017; Vinyals et al., 2016), or to learn the initialization of a NN such that it can be quickly adapted to new tasks (Finn et al., 2017; Nichol et al., 2018; Rothfuss et al., 2019b). A group of recent methods also uses probabilistic modeling to allow for uncertainty quantification (Kim et al., 2018; Finn et al., 2018; Garnelo et al., 2018). Although the mentioned approaches are able to learn complex inference patterns, they rely on settings where meta-training tasks are abundant, and they fall short of performance guarantees. In contrast, we provide a formal assessment of the generalization properties of our algorithm. Moreover, PACOH-NN allows for principled uncertainty quantification, including separate treatment of epistemic and aleatoric uncertainty. This makes it particularly useful for sequential decision algorithms (Lattimore & Szepesvari, 2020). PAC-Bayesian theory . Previous work presents generalization bounds for randomized predictors, assuming the prior to be given exogenously (McAllester, 1999; Catoni, 2007; Germain et al., 2016; Alquier et al., 2016). More recent work explores data-dependent priors (Parrado-Hernandez et al.
, 2012; Dziugaite & Roy, 2016) or extends previous bounds to the scenario where priors are meta-learned (Pentina & Lampert, 2014; Amit & Meir, 2018). However, these meta-generalization bounds are hard to minimize, as they leave both the hyper-posterior and the posterior unspecified, which leads to nested optimization problems. Our work builds on the results of Rothfuss et al. (2020), who introduce the methodology to derive the closed-form solution of the PAC-Bayesian meta-learning problem. However, unlike ours, their approach suffers from (asymptotically) non-vanishing terms in the bounds and relies on a closed-form solution of the marginal log-likelihood. By contributing a numerically stable score estimator for the generalized marginal log-likelihood, we are able to overcome such limitations, making PAC-Bayesian meta-learning both tractable and scalable for a much larger array of models, including BNNs. 3 BACKGROUND : THE PAC-BAYESIAN FRAMEWORK . Bayesian Neural Networks . Consider a supervised learning task with data $S = \{(x_j, y_j)\}_{j=1}^{m}$ drawn from an unknown distribution $\mathcal{D}$. Here, $X = \{x_j\}_{j=1}^{m} \in \mathcal{X}^m$ denotes the training inputs and $Y = \{y_j\}_{j=1}^{m} \in \mathcal{Y}^m$ the targets. For brevity, we also write $z_j := (x_j, y_j) \in \mathcal{Z}$. Let $h_\theta : \mathcal{X} \to \mathcal{Y}$ be a function parametrized by a NN with weights $\theta \in \Theta$. Using the NN mapping, we define a conditional distribution $p(y|x, \theta)$. For regression, we set $p(y|x, \theta) = \mathcal{N}(y|h_\theta(x), \sigma^2)$, where $\sigma^2$ is the observation noise variance. For classification, we choose $p(y|x, \theta) = \mathrm{Categorical}(\mathrm{softmax}(h_\theta(x)))$. For Bayesian inference, one presumes a prior distribution $p(\theta)$ over the model parameters $\theta$, which is combined with the training data $S$ into a posterior distribution $p(\theta|X, Y) \propto p(\theta)\, p(Y|X, \theta)$. For unseen test points $x^*$, we form the predictive distribution $p(y^*|x^*, X, Y) = \int p(y^*|x^*, \theta)\, p(\theta|X, Y)\, d\theta$.
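The predictive integral above is generally intractable for NNs and is commonly approximated by Monte Carlo averaging over posterior samples. A toy numpy sketch follows; the linear "network" and the Gaussian stand-in posterior are our illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def h(theta, x):
    """Toy stand-in for the network h_theta: a linear model (slope, offset)."""
    return theta[0] * x + theta[1]

def predictive(x_star, theta_samples, sigma2=0.25):
    """Monte Carlo approximation of p(y*|x*, X, Y) for the regression
    likelihood N(y | h_theta(x), sigma2): a mixture of K Gaussians.
    The mixture mean is the average member mean; the mixture variance
    decomposes into sigma2 (aleatoric) plus the variance of the member
    means (epistemic)."""
    means = np.array([h(t, x_star) for t in theta_samples])
    return means.mean(), sigma2 + means.var()

# K = 1000 samples from a stand-in Gaussian posterior over theta = (slope, offset)
thetas = rng.normal(loc=[2.0, -1.0], scale=0.1, size=(1000, 2))
mu, var = predictive(1.0, thetas)
```

At x* = 1 the mixture mean lands near 2·1 − 1 = 1, and the variance slightly exceeds σ² because of the epistemic spread of the sampled parameters.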
The Bayesian framework presumes partial knowledge of the data-generating process in the form of a prior distribution. However, due to the practical difficulties in choosing an appropriate BNN prior, the prior is typically strongly misspecified (Syring & Martin, 2018). As a result, modulating the influence of the prior relative to the likelihood during inference typically improves the empirical performance of BNNs and is thus a common practice (Wenzel et al., 2020). Using such a “tempered” posterior $p_\tau(\theta|X, Y) \propto p(\theta)\, p(Y|X, \theta)^\tau$ with $\tau > 0$ is also referred to as generalized Bayesian learning (Guedj, 2019). The PAC-Bayesian Framework . In the following, we introduce the most relevant concepts of PAC-Bayesian learning theory. For more details, we refer to Guedj (2019). Given a loss function $l : \Theta \times \mathcal{Z} \to \mathbb{R}$, we typically want to minimize the generalization error $L(\theta, \mathcal{D}) = \mathbb{E}_{z^* \sim \mathcal{D}}\, l(\theta, z^*)$. Since $\mathcal{D}$ is unknown, the empirical error $\hat{L}(\theta, S) = \frac{1}{m} \sum_{i=1}^{m} l(\theta, z_i)$ is usually employed instead. In the PAC-Bayesian framework, we are concerned with randomized predictors, i.e., probability measures on the parameter space $\Theta$, allowing us to reason about epistemic uncertainty. In particular, we consider two such probability measures, a prior $P \in \mathcal{M}(\Theta)$ and a posterior $Q \in \mathcal{M}(\Theta)$, where $\mathcal{M}(\Theta)$ denotes the set of all probability measures on $\Theta$. While in Bayesian inference the prior and posterior are tightly coupled through Bayes' theorem, the PAC-Bayesian framework only requires the prior to be independent of the data $S$. Using the definitions above, the so-called Gibbs error of a randomized predictor $Q$ is defined as $L(Q, \mathcal{D}) = \mathbb{E}_{h \sim Q}\, L(h, \mathcal{D})$. Similarly, we define its empirical counterpart as $\hat{L}(Q, S) = \mathbb{E}_{h \sim Q}\, \hat{L}(h, S)$. The PAC-Bayesian framework provides upper bounds on the unknown Gibbs error in the following form: Theorem 1 . (Alquier et al.
, 2016) Given a data distribution $\mathcal{D}$, a prior $P \in \mathcal{M}(\Theta)$, and a confidence level $\delta \in (0, 1]$, with probability at least $1 - \delta$ over samples $S \sim \mathcal{D}^m$, we have: $$\forall Q \in \mathcal{M}(\Theta):\; L(Q, \mathcal{D}) \leq \hat{L}(Q, S) + \frac{1}{\sqrt{m}} \left[ D_{KL}(Q \| P) + \ln \frac{1}{\delta} + \Psi(\sqrt{m}) \right] \quad (1)$$ where $\Psi(\sqrt{m}) = \ln \mathbb{E}_{\theta \sim P}\, \mathbb{E}_{S \sim \mathcal{D}^m} \exp\left[ \sqrt{m}\, \big(L(\theta, \mathcal{D}) - \hat{L}(\theta, S)\big) \right]$. Here, $\Psi(\sqrt{m})$ is a log moment-generating function that quantifies how strongly the empirical error deviates from the Gibbs error. By making additional assumptions about the loss function $l$, we can bound $\Psi(\sqrt{m})$ and thereby obtain tractable bounds. For instance, if $l(\theta, z)$ is bounded in $[a, b]$, we obtain $\Psi(\sqrt{m}) \leq (b - a)^2 / 8$ by Hoeffding's lemma. For unbounded loss functions, it is common to assume bounded moments. For instance, a loss is considered sub-gamma with variance factor $s^2$ and scale parameter $c$, under a prior $P$ and data distribution $\mathcal{D}$, if its deviations from the mean can be characterized by a random variable $V := L(h, \mathcal{D}) - l(h, z)$ whose moment-generating function is upper bounded by that of a Gamma distribution $\Gamma(s, c)$ (Boucheron et al., 2013). In that case, we obtain $\Psi(\sqrt{m}) \leq s^2 / \big(2(1 - c/\sqrt{m})\big)$. Connecting the PAC-Bayesian framework and generalized Bayesian learning . In PAC-Bayesian learning, we aim to find the posterior that minimizes the bound in (1), which is in general a challenging optimization problem over the space of measures $\mathcal{M}(\Theta)$. However, to our benefit, it can be shown that the Gibbs posterior is the probability measure that minimizes (1). For details, we refer to Lemma 2 in the Appendix or Catoni (2007) and Germain et al. (2016). In particular, this gives us $$Q^*(\theta) := \arg\min_{Q \in \mathcal{M}(\Theta)} \sqrt{m}\, \hat{L}(Q, S) + D_{KL}(Q \| P) = P(\theta)\, e^{-\sqrt{m}\, \hat{L}(\theta, S)} / Z(S, P), \quad (2)$$ where $Z(S, P)$ is a normalization constant. In a probabilistic setting, our loss function is the negative log-likelihood, i.e., $l(\theta, z_i) := -\log p(z_i|\theta)$.
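On a finite hypothesis set, Eq. (2) can be checked numerically: the closed-form Gibbs posterior attains a lower value of the free-energy objective $\sqrt{m}\,\hat{L}(Q,S) + D_{KL}(Q\|P)$ than any other candidate distribution. A small sketch on a 5-hypothesis toy grid (our own illustration, not the paper's code):

```python
import numpy as np

def free_energy(q, p, emp_losses, m):
    """Objective of Eq. (2): sqrt(m) * L_hat(Q, S) + KL(Q || P),
    for distributions q, p over a finite set of hypotheses."""
    kl = np.sum(q * np.log(q / p))
    return np.sqrt(m) * np.sum(q * emp_losses) + kl

def gibbs_posterior(p, emp_losses, m):
    """Closed-form minimizer: Q*(theta) proportional to
    P(theta) * exp(-sqrt(m) * L_hat(theta, S))."""
    w = p * np.exp(-np.sqrt(m) * emp_losses)
    return w / w.sum()

rng = np.random.default_rng(1)
prior = np.full(5, 0.2)                   # uniform prior over 5 hypotheses
losses = rng.uniform(0.0, 1.0, size=5)    # empirical losses L_hat(theta_k, S)
q_star = gibbs_posterior(prior, losses, m=100)
```

Sampling random distributions from the simplex and comparing their free energy against that of `q_star` confirms the optimality of the closed form on this toy grid.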
In this case, the optimal Gibbs posterior coincides with the generalized Bayesian posterior $Q^*(\theta; P, S) = P(\theta)\, p(S|\theta)^{1/\sqrt{m}} / Z(S, P)$, where $Z(S, P) = \int_\Theta P(\theta)\, p(S|\theta)^{1/\sqrt{m}}\, d\theta$ is the generalized marginal likelihood of the data sample $S$. | One of the main issues that BNNs face is the choice of good informative priors in order to provide precise information about the uncertainty of predictions. The present work connects BNNs and PAC-Bayesian theory to formulate a general-purpose approach for obtaining meaningful priors for Bayesian NNs. This is done by employing the closed-form solution for the PAC-Bayesian meta-learning problem. The meta-learner here learns a hyper-posterior over the priors of the parameters of the BNN by using the closed-form expression PACOH (PAC-Optimal Hyper-Posterior). This is applied in the context of NNs, where the priors are to be defined over the NN parameters. Extensive experiments are carried out to show the performance both on regression and classification datasets, as well as its scalability. In all of these regards, the system seems to be competitive, improving on the results of previous state-of-the-art methods and producing promising results in real-world problems. | SP:90cfa7a84021909d38e23a7013a29f990f8f8f02
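The Gibbs/generalized-posterior equivalence above rests on the identity $\exp(-\sqrt{m}\,\hat{L}(\theta,S)) = p(S|\theta)^{1/\sqrt{m}}$ for $l(\theta,z) = -\log p(z|\theta)$, since $\sqrt{m}\,\hat{L}(\theta,S) = \frac{1}{\sqrt{m}}\sum_i -\log p(z_i|\theta)$. A quick numerical check (toy unit-variance Gaussian likelihood, assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 16
z = rng.normal(size=m)        # data sample S of size m
theta = 0.3                   # candidate mean parameter

# per-point log-likelihood under N(z | theta, 1)
log_p = -0.5 * np.log(2 * np.pi) - 0.5 * (z - theta) ** 2
L_hat = -log_p.mean()         # empirical NLL loss L_hat(theta, S)

lhs = np.exp(-np.sqrt(m) * L_hat)                 # exp(-sqrt(m) * L_hat)
rhs = np.exp(log_p.sum()) ** (1 / np.sqrt(m))     # p(S|theta)^(1/sqrt(m))
```

The two quantities agree up to floating-point rounding, confirming the rescaling of the likelihood exponent that defines the generalized marginal likelihood.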
Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory | 1 INTRODUCTION . Bayesian Neural Networks (BNNs) offer a probabilistic interpretation of deep learning by inferring distributions over the model's weights (Neal, 1996). With the potential to combine the scalability and performance of neural networks (NNs) with a framework for uncertainty quantification, BNNs have lately received increased attention (Blundell et al., 2015; Gal & Ghahramani, 2016). In particular, their ability to express epistemic uncertainty makes them highly relevant for applications such as active learning (Hernández-Lobato & Adams, 2015) and reinforcement learning (Riquelme et al., 2018). However, BNNs face two major issues: 1) the intractability of posterior inference and 2) the difficulty of choosing good Bayesian priors. While the former has been addressed in an extensive body of literature on variational inference (e.g. Blundell et al., 2015; Blei et al., 2016; Mishkin et al., 2018; Liu & Wang, 2016), the latter has only received limited attention (Vladimirova et al., 2019; Ghosh & Doshi-Velez, 2017). Choosing an informative prior for BNNs is particularly difficult due to the high-dimensional and hardly interpretable parameter space of NNs. Due to the lack of good alternatives, a zero-centered, isotropic Gaussian is often used, reflecting (almost) no a priori knowledge about the problem at hand. This not only leads to poor generalization when data is scarce, but also renders the Bayesian uncertainty estimates poorly calibrated (Kuleshov et al., 2018). Meta-learning (Schmidhuber, 1987; Thrun & Pratt, 1998) acquires inductive bias in a data-driven way, thus constituting an alternative route for addressing this issue. In particular, meta-learners attempt to extract shared (prior) knowledge from a set of related learning tasks (i.e.
, datasets), aiming to learn in the face of a new, related task. Our work develops a principled and scalable algorithm for meta-learning BNN priors. We build on the PAC-Bayesian framework (McAllester, 1999), a methodology from statistical learning theory for deriving generalization bounds. Previous PAC-Bayesian bounds for meta-learners (Pentina & Lampert, 2014; Amit & Meir, 2018) require solving a difficult optimization problem, involving the optimization of the prior as well as multiple variational posteriors in a nested manner. Aiming to overcome this issue, we present a PAC-Bayesian bound that does not rely on nested optimization and, unlike (Rothfuss et al., 2020), can be tractably optimized for BNNs. This makes the resulting meta-learner, referred to as PACOH-NN, not only much more computationally efficient and scalable than previous approaches for meta-learning BNN priors (Amit & Meir, 2018), but also agnostic to the choice of approximate posterior inference method, which allows us to combine it freely with recent advances in MCMC (e.g. Chen et al., 2014) or variational inference (e.g. Wang et al., 2019). Our experiments demonstrate that the computational advantages of PACOH-NN do not come at the cost of degraded predictive performance. In fact, across several regression and classification environments, PACOH-NN achieves comparable or better predictive accuracy than several popular meta-learning approaches, while improving the quality of the uncertainty estimates. Finally, we showcase how meta-learned PACOH-NN priors can be used in a real-world bandit task concerning the development of vaccines, suggesting that many other challenging real-world problems may benefit from our approach. 2 RELATED WORK . Bayesian Neural Networks . The majority of research on BNNs focuses on approximating the intractable posterior distribution (Graves, 2011; Blundell et al., 2015; Liu & Wang, 2016; Wang et al., 2019).
In particular, we employ the approximate inference method of Liu & Wang (2016). Another crucial question is how to select a good BNN prior (Vladimirova et al., 2019). While the majority of work (e.g. Louizos & Welling, 2016; Huang et al., 2020) employs a simple zero-centered, isotropic Gaussian, Ghosh & Doshi-Velez (2017) and Pearce et al. (2020) have proposed other prior distributions for BNNs. In contrast, we take the alternative route of choosing priors in a data-driven way. Meta-learning . A range of popular methods in meta-learning attempt to learn the “learning program” in the form of a recurrent model (Hochreiter et al., 2001; Andrychowicz et al., 2016; Chen et al., 2017), to learn an embedding space shared across tasks (Snell et al., 2017; Vinyals et al., 2016), or to learn the initialization of a NN such that it can be quickly adapted to new tasks (Finn et al., 2017; Nichol et al., 2018; Rothfuss et al., 2019b). A group of recent methods also uses probabilistic modeling to allow for uncertainty quantification (Kim et al., 2018; Finn et al., 2018; Garnelo et al., 2018). Although the mentioned approaches are able to learn complex inference patterns, they rely on settings where meta-training tasks are abundant, and they fall short of performance guarantees. In contrast, we provide a formal assessment of the generalization properties of our algorithm. Moreover, PACOH-NN allows for principled uncertainty quantification, including separate treatment of epistemic and aleatoric uncertainty. This makes it particularly useful for sequential decision algorithms (Lattimore & Szepesvari, 2020). PAC-Bayesian theory . Previous work presents generalization bounds for randomized predictors, assuming the prior to be given exogenously (McAllester, 1999; Catoni, 2007; Germain et al., 2016; Alquier et al., 2016). More recent work explores data-dependent priors (Parrado-Hernandez et al.
, 2012; Dziugaite & Roy, 2016) or extends previous bounds to the scenario where priors are meta-learned (Pentina & Lampert, 2014; Amit & Meir, 2018). However, these meta-generalization bounds are hard to minimize, as they leave both the hyper-posterior and the posterior unspecified, which leads to nested optimization problems. Our work builds on the results of Rothfuss et al. (2020), who introduce the methodology to derive the closed-form solution of the PAC-Bayesian meta-learning problem. However, unlike ours, their approach suffers from (asymptotically) non-vanishing terms in the bounds and relies on a closed-form solution of the marginal log-likelihood. By contributing a numerically stable score estimator for the generalized marginal log-likelihood, we are able to overcome such limitations, making PAC-Bayesian meta-learning both tractable and scalable for a much larger array of models, including BNNs. 3 BACKGROUND : THE PAC-BAYESIAN FRAMEWORK . Bayesian Neural Networks . Consider a supervised learning task with data $S = \{(x_j, y_j)\}_{j=1}^{m}$ drawn from an unknown distribution $\mathcal{D}$. Here, $X = \{x_j\}_{j=1}^{m} \in \mathcal{X}^m$ denotes the training inputs and $Y = \{y_j\}_{j=1}^{m} \in \mathcal{Y}^m$ the targets. For brevity, we also write $z_j := (x_j, y_j) \in \mathcal{Z}$. Let $h_\theta : \mathcal{X} \to \mathcal{Y}$ be a function parametrized by a NN with weights $\theta \in \Theta$. Using the NN mapping, we define a conditional distribution $p(y|x, \theta)$. For regression, we set $p(y|x, \theta) = \mathcal{N}(y|h_\theta(x), \sigma^2)$, where $\sigma^2$ is the observation noise variance. For classification, we choose $p(y|x, \theta) = \mathrm{Categorical}(\mathrm{softmax}(h_\theta(x)))$. For Bayesian inference, one presumes a prior distribution $p(\theta)$ over the model parameters $\theta$, which is combined with the training data $S$ into a posterior distribution $p(\theta|X, Y) \propto p(\theta)\, p(Y|X, \theta)$. For unseen test points $x^*$, we form the predictive distribution $p(y^*|x^*, X, Y) = \int p(y^*|x^*, \theta)\, p(\theta|X, Y)\, d\theta$.
The Bayesian framework presumes partial knowledge of the data-generating process in the form of a prior distribution. However, due to the practical difficulties in choosing an appropriate BNN prior, the prior is typically strongly misspecified (Syring & Martin, 2018). As a result, modulating the influence of the prior relative to the likelihood during inference typically improves the empirical performance of BNNs and is thus a common practice (Wenzel et al., 2020). Using such a “tempered” posterior $p_\tau(\theta|X, Y) \propto p(\theta)\, p(Y|X, \theta)^\tau$ with $\tau > 0$ is also referred to as generalized Bayesian learning (Guedj, 2019). The PAC-Bayesian Framework . In the following, we introduce the most relevant concepts of PAC-Bayesian learning theory. For more details, we refer to Guedj (2019). Given a loss function $l : \Theta \times \mathcal{Z} \to \mathbb{R}$, we typically want to minimize the generalization error $L(\theta, \mathcal{D}) = \mathbb{E}_{z^* \sim \mathcal{D}}\, l(\theta, z^*)$. Since $\mathcal{D}$ is unknown, the empirical error $\hat{L}(\theta, S) = \frac{1}{m} \sum_{i=1}^{m} l(\theta, z_i)$ is usually employed instead. In the PAC-Bayesian framework, we are concerned with randomized predictors, i.e., probability measures on the parameter space $\Theta$, allowing us to reason about epistemic uncertainty. In particular, we consider two such probability measures, a prior $P \in \mathcal{M}(\Theta)$ and a posterior $Q \in \mathcal{M}(\Theta)$, where $\mathcal{M}(\Theta)$ denotes the set of all probability measures on $\Theta$. While in Bayesian inference the prior and posterior are tightly coupled through Bayes' theorem, the PAC-Bayesian framework only requires the prior to be independent of the data $S$. Using the definitions above, the so-called Gibbs error of a randomized predictor $Q$ is defined as $L(Q, \mathcal{D}) = \mathbb{E}_{h \sim Q}\, L(h, \mathcal{D})$. Similarly, we define its empirical counterpart as $\hat{L}(Q, S) = \mathbb{E}_{h \sim Q}\, \hat{L}(h, S)$. The PAC-Bayesian framework provides upper bounds on the unknown Gibbs error in the following form: Theorem 1 . (Alquier et al.
, 2016) Given a data distribution $\mathcal{D}$, a prior $P \in \mathcal{M}(\Theta)$, and a confidence level $\delta \in (0, 1]$, with probability at least $1 - \delta$ over samples $S \sim \mathcal{D}^m$, we have: $$\forall Q \in \mathcal{M}(\Theta):\; L(Q, \mathcal{D}) \leq \hat{L}(Q, S) + \frac{1}{\sqrt{m}} \left[ D_{KL}(Q \| P) + \ln \frac{1}{\delta} + \Psi(\sqrt{m}) \right] \quad (1)$$ where $\Psi(\sqrt{m}) = \ln \mathbb{E}_{\theta \sim P}\, \mathbb{E}_{S \sim \mathcal{D}^m} \exp\left[ \sqrt{m}\, \big(L(\theta, \mathcal{D}) - \hat{L}(\theta, S)\big) \right]$. Here, $\Psi(\sqrt{m})$ is a log moment-generating function that quantifies how strongly the empirical error deviates from the Gibbs error. By making additional assumptions about the loss function $l$, we can bound $\Psi(\sqrt{m})$ and thereby obtain tractable bounds. For instance, if $l(\theta, z)$ is bounded in $[a, b]$, we obtain $\Psi(\sqrt{m}) \leq (b - a)^2 / 8$ by Hoeffding's lemma. For unbounded loss functions, it is common to assume bounded moments. For instance, a loss is considered sub-gamma with variance factor $s^2$ and scale parameter $c$, under a prior $P$ and data distribution $\mathcal{D}$, if its deviations from the mean can be characterized by a random variable $V := L(h, \mathcal{D}) - l(h, z)$ whose moment-generating function is upper bounded by that of a Gamma distribution $\Gamma(s, c)$ (Boucheron et al., 2013). In that case, we obtain $\Psi(\sqrt{m}) \leq s^2 / \big(2(1 - c/\sqrt{m})\big)$. Connecting the PAC-Bayesian framework and generalized Bayesian learning . In PAC-Bayesian learning, we aim to find the posterior that minimizes the bound in (1), which is in general a challenging optimization problem over the space of measures $\mathcal{M}(\Theta)$. However, to our benefit, it can be shown that the Gibbs posterior is the probability measure that minimizes (1). For details, we refer to Lemma 2 in the Appendix or Catoni (2007) and Germain et al. (2016). In particular, this gives us $$Q^*(\theta) := \arg\min_{Q \in \mathcal{M}(\Theta)} \sqrt{m}\, \hat{L}(Q, S) + D_{KL}(Q \| P) = P(\theta)\, e^{-\sqrt{m}\, \hat{L}(\theta, S)} / Z(S, P), \quad (2)$$ where $Z(S, P)$ is a normalization constant. In a probabilistic setting, our loss function is the negative log-likelihood, i.e., $l(\theta, z_i) := -\log p(z_i|\theta)$.
In this case, the optimal Gibbs posterior coincides with the generalized Bayesian posterior $Q^*(\theta; P, S) = P(\theta)\, p(S|\theta)^{1/\sqrt{m}} / Z(S, P)$, where $Z(S, P) = \int_\Theta P(\theta)\, p(S|\theta)^{1/\sqrt{m}}\, d\theta$ is the generalized marginal likelihood of the data sample $S$. | The paper proposes a method for learning BNN priors based on optimising a PAC-Bayes bound for meta-learning, which they call PACOH-NN. This extends previous work on optimising PAC-Bayes bounds (PACOH), and is attractive for being principled in construction and effective in practice. The main claims are that fewer tasks are needed to achieve good performance in terms of both utility and calibration, and that the algorithm comes with performance guarantees. | SP:90cfa7a84021909d38e23a7013a29f990f8f8f02
Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory | 1 INTRODUCTION . Bayesian Neural Networks (BNNs) offer a probabilistic interpretation of deep learning by inferring distributions over the model's weights (Neal, 1996). With the potential to combine the scalability and performance of neural networks (NNs) with a framework for uncertainty quantification, BNNs have lately received increased attention (Blundell et al., 2015; Gal & Ghahramani, 2016). In particular, their ability to express epistemic uncertainty makes them highly relevant for applications such as active learning (Hernández-Lobato & Adams, 2015) and reinforcement learning (Riquelme et al., 2018). However, BNNs face two major issues: 1) the intractability of posterior inference and 2) the difficulty of choosing good Bayesian priors. While the former has been addressed in an extensive body of literature on variational inference (e.g. Blundell et al., 2015; Blei et al., 2016; Mishkin et al., 2018; Liu & Wang, 2016), the latter has only received limited attention (Vladimirova et al., 2019; Ghosh & Doshi-Velez, 2017). Choosing an informative prior for BNNs is particularly difficult due to the high-dimensional and hardly interpretable parameter space of NNs. Due to the lack of good alternatives, a zero-centered, isotropic Gaussian is often used, reflecting (almost) no a priori knowledge about the problem at hand. This not only leads to poor generalization when data is scarce, but also renders the Bayesian uncertainty estimates poorly calibrated (Kuleshov et al., 2018). Meta-learning (Schmidhuber, 1987; Thrun & Pratt, 1998) acquires inductive bias in a data-driven way, thus constituting an alternative route for addressing this issue. In particular, meta-learners attempt to extract shared (prior) knowledge from a set of related learning tasks (i.e.
, datasets), aiming to learn in the face of a new, related task. Our work develops a principled and scalable algorithm for meta-learning BNN priors. We build on the PAC-Bayesian framework (McAllester, 1999), a methodology from statistical learning theory for deriving generalization bounds. Previous PAC-Bayesian bounds for meta-learners (Pentina & Lampert, 2014; Amit & Meir, 2018) require solving a difficult optimization problem, involving the optimization of the prior as well as multiple variational posteriors in a nested manner. Aiming to overcome this issue, we present a PAC-Bayesian bound that does not rely on nested optimization and, unlike (Rothfuss et al., 2020), can be tractably optimized for BNNs. This makes the resulting meta-learner, referred to as PACOH-NN, not only much more computationally efficient and scalable than previous approaches for meta-learning BNN priors (Amit & Meir, 2018), but also agnostic to the choice of approximate posterior inference method, which allows us to combine it freely with recent advances in MCMC (e.g. Chen et al., 2014) or variational inference (e.g. Wang et al., 2019). Our experiments demonstrate that the computational advantages of PACOH-NN do not come at the cost of degraded predictive performance. In fact, across several regression and classification environments, PACOH-NN achieves comparable or better predictive accuracy than several popular meta-learning approaches, while improving the quality of the uncertainty estimates. Finally, we showcase how meta-learned PACOH-NN priors can be used in a real-world bandit task concerning the development of vaccines, suggesting that many other challenging real-world problems may benefit from our approach. 2 RELATED WORK . Bayesian Neural Networks . The majority of research on BNNs focuses on approximating the intractable posterior distribution (Graves, 2011; Blundell et al., 2015; Liu & Wang, 2016; Wang et al., 2019).
In particular , we employ the approximate inference method of Liu & Wang ( 2016 ) . Another crucial question is how to select a good BNN prior ( Vladimirova et al. , 2019 ) . While the majority of work ( e.g . Louizos & Welling , 2016 ; Huang et al. , 2020 ) employs a simple zero-centered , isotropic Gaussian , Ghosh & Doshi-Velez ( 2017 ) and Pearce et al . ( 2020 ) have proposed other prior distributions for BNNs . In contrast , we go the alternative route of choosing priors in a data-driven way . Meta-learning . A range of popular methods in meta-learning attempt to learn the “ learning program ” in the form of a recurrent model ( Hochreiter et al. , 2001 ; Andrychowicz et al. , 2016 ; Chen et al. , 2017 ) , learn an embedding space shared across tasks ( Snell et al. , 2017 ; Vinyals et al. , 2016 ) , or learn the initialization of a NN such that it can be quickly adapted to new tasks ( Finn et al. , 2017 ; Nichol et al. , 2018 ; Rothfuss et al. , 2019b ) . A group of recent methods also uses probabilistic modeling to allow for uncertainty quantification ( Kim et al. , 2018 ; Finn et al. , 2018 ; Garnelo et al. , 2018 ) . Although the mentioned approaches are able to learn complex inference patterns , they rely on settings where meta-training tasks are abundant and fall short of performance guarantees . In contrast , we provide a formal assessment of the generalization properties of our algorithm . Moreover , PACOH-NN allows for principled uncertainty quantification , including separate treatment of epistemic and aleatoric uncertainty . This makes it particularly useful for sequential decision algorithms ( Lattimore & Szepesvari , 2020 ) . PAC-Bayesian theory . Previous work presents generalization bounds for randomized predictors , assuming a prior to be given exogenously ( McAllester , 1999 ; Catoni , 2007 ; Germain et al. , 2016 ; Alquier et al. , 2016 ) . More recent work explores data-dependent priors ( Parrado-Hernandez et al.
, 2012 ; Dziugaite & Roy , 2016 ) or extends previous bounds to the scenario where priors are meta-learned ( Pentina & Lampert , 2014 ; Amit & Meir , 2018 ) . However , these meta-generalization bounds are hard to minimize as they leave both the hyper-posterior and posterior unspecified , which leads to nested optimization problems . Our work builds on the results of Rothfuss et al . ( 2020 ) who introduce the methodology to derive the closed-form solution of the PAC-Bayesian meta-learning problem . However , unlike ours , their approach suffers from ( asymptotically ) non-vanishing terms in the bounds and relies on a closed-form solution of the marginal log-likelihood . By contributing a numerically stable score estimator for the generalized marginal log-likelihood , we are able to overcome such limitations , making PAC-Bayesian meta-learning both tractable and scalable for a much larger array of models , including BNNs . 3 BACKGROUND : THE PAC-BAYESIAN FRAMEWORK . Bayesian Neural Networks . Consider a supervised learning task with data $S = \{(x_j, y_j)\}_{j=1}^{m}$ drawn from an unknown distribution $\mathcal{D}$ . Here , $X = \{x_j\}_{j=1}^{m} \in \mathcal{X}^m$ denotes the training inputs and $Y = \{y_j\}_{j=1}^{m} \in \mathcal{Y}^m$ the targets . For brevity , we also write $z_j := (x_j, y_j) \in \mathcal{Z}$ . Let $h_\theta : \mathcal{X} \to \mathcal{Y}$ be a function parametrized by a NN with weights $\theta \in \Theta$ . Using the NN mapping , we define a conditional distribution $p(y|x, \theta)$ . For regression , we set $p(y|x, \theta) = \mathcal{N}(y \,|\, h_\theta(x), \sigma^2)$ , where $\sigma^2$ is the observation noise variance . For classification , we choose $p(y|x, \theta) = \mathrm{Categorical}(\mathrm{softmax}(h_\theta(x)))$ . For Bayesian inference , one presumes a prior distribution $p(\theta)$ over the model parameters $\theta$ , which is combined with the training data $S$ into a posterior distribution $p(\theta|X, Y) \propto p(\theta)\, p(Y|X, \theta)$ . For unseen test data points $x^*$ , we form the predictive distribution as $p(y^*|x^*, X, Y) = \int p(y^*|x^*, \theta)\, p(\theta|X, Y)\, d\theta$ .
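The predictive distribution above is rarely tractable in closed form ; in practice it is approximated by Monte Carlo averaging over ( approximate ) posterior samples . A minimal sketch , assuming a toy linear "network" $h_\theta(x) = \theta^\top x$ , a Gaussian likelihood , and a stand-in Gaussian approximate posterior ( all illustrative choices , not the paper 's actual inference method ) :

```python
import numpy as np

def predictive_distribution(x_star, posterior_samples, sigma2=0.1):
    """Monte Carlo approximation of the BNN predictive distribution
    p(y*|x*, X, Y) ~ (1/K) sum_k p(y*|x*, theta_k), theta_k ~ posterior.

    Each theta_k parametrizes h_theta(x) = theta^T x (a linear 'network'
    used purely for illustration). Returns the mean and variance of the
    resulting Gaussian mixture via the law of total variance."""
    means = posterior_samples @ x_star       # h_theta(x*) per sample, shape (K,)
    pred_mean = means.mean()
    pred_var = sigma2 + means.var()          # aleatoric + epistemic variance
    return pred_mean, pred_var

rng = np.random.default_rng(0)
K, D = 1000, 3
# stand-in for posterior samples of theta (e.g. from SVGD or VI)
theta_samples = rng.normal(loc=1.0, scale=0.2, size=(K, D))
x_star = np.ones(D)
mean, var = predictive_distribution(x_star, theta_samples)
```

The epistemic component ( the variance of the per-sample means ) shrinks as the posterior concentrates , while the aleatoric term $\sigma^2$ stays fixed .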
The Bayesian framework presumes partial knowledge of the data-generating process in the form of a prior distribution . However , due to the practical difficulties in choosing an appropriate BNN prior , the prior is typically strongly misspecified ( Syring & Martin , 2018 ) . As a result , modulating the influence of the prior relative to the likelihood during inference typically improves the empirical performance of BNNs and is thus common practice ( Wenzel et al. , 2020 ) . Using such a “ tempered ” posterior $p_\tau(\theta|X, Y) \propto p(\theta)\, p(Y|X, \theta)^\tau$ with $\tau > 0$ is also referred to as generalized Bayesian learning ( Guedj , 2019 ) . The PAC-Bayesian Framework . In the following , we introduce the most relevant concepts of PAC-Bayesian learning theory . For more details , we refer to Guedj ( 2019 ) . Given a loss function $l : \Theta \times \mathcal{Z} \to \mathbb{R}$ , we typically want to minimize the generalization error $L(\theta, \mathcal{D}) = \mathbb{E}_{z^* \sim \mathcal{D}}\, l(\theta, z^*)$ . Since $\mathcal{D}$ is unknown , the empirical error $\hat{L}(\theta, S) = \frac{1}{m} \sum_{i=1}^{m} l(\theta, z_i)$ is usually employed instead . In the PAC-Bayesian framework , we are concerned with randomized predictors , i.e. , probability measures on the parameter space $\Theta$ , allowing us to reason about epistemic uncertainty . In particular , we consider two such probability measures , a prior $P \in \mathcal{M}(\Theta)$ and a posterior $Q \in \mathcal{M}(\Theta)$ . Here , by $\mathcal{M}(\Theta)$ we denote the set of all probability measures on $\Theta$ . While in Bayesian inference the prior and posterior are tightly coupled through Bayes ’ theorem , the PAC-Bayesian framework only requires the prior to be independent of the data $S$ . Using the definitions above , the so-called Gibbs error for a randomized predictor $Q$ is defined as $L(Q, \mathcal{D}) = \mathbb{E}_{\theta \sim Q}\, L(\theta, \mathcal{D})$ . Similarly , we define its empirical counterpart as $\hat{L}(Q, S) = \mathbb{E}_{\theta \sim Q}\, \hat{L}(\theta, S)$ . The PAC-Bayesian framework provides upper bounds for the unknown Gibbs error in the following form : Theorem 1 . ( Alquier et al.
, 2016 ) Given a data distribution $\mathcal{D}$ , a prior $P \in \mathcal{M}(\Theta)$ , and a confidence level $\delta \in (0, 1]$ , with probability at least $1 - \delta$ over samples $S \sim \mathcal{D}^m$ , we have : $$\forall Q \in \mathcal{M}(\Theta): \quad L(Q, \mathcal{D}) \le \hat{L}(Q, S) + \frac{1}{\sqrt{m}} \left[ D_{KL}(Q \| P) + \ln \frac{1}{\delta} + \Psi(\sqrt{m}) \right] \quad (1)$$ where $\Psi(\sqrt{m}) = \ln \mathbb{E}_{\theta \sim P}\, \mathbb{E}_{S \sim \mathcal{D}^m} \exp\left[ \sqrt{m} \left( L(\theta, \mathcal{D}) - \hat{L}(\theta, S) \right) \right]$ . Here , $\Psi(\sqrt{m})$ is a log moment generating function that quantifies how strongly the empirical error deviates from the Gibbs error . By making additional assumptions about the loss function $l$ , we can bound $\Psi(\sqrt{m})$ and thereby obtain tractable bounds . For instance , if $l(\theta, z)$ is bounded in $[a, b]$ , we obtain $\Psi(\sqrt{m}) \le (b - a)^2 / 8$ by Hoeffding ’ s lemma . For unbounded loss functions , it is common to assume bounded moments . For instance , a loss is considered sub-gamma with variance factor $s^2$ and scale parameter $c$ , under a prior $P$ and data distribution $\mathcal{D}$ , if its deviations from the mean can be characterized by a random variable $V := L(\theta, \mathcal{D}) - l(\theta, z)$ whose moment generating function is upper bounded by that of a Gamma distribution $\Gamma(s, c)$ ( Boucheron et al. , 2013 ) . In such a case , we obtain $\Psi(\sqrt{m}) \le s^2 / \left( 2(1 - c/\sqrt{m}) \right)$ . Connecting the PAC-Bayesian framework and generalized Bayesian learning . In PAC-Bayesian learning , we aim to find the posterior that minimizes the bound in ( 1 ) , which is in general a challenging optimization problem over the space of measures $\mathcal{M}(\Theta)$ . However , to our benefit , it can be shown that the Gibbs posterior is the probability measure that minimizes ( 1 ) . For details , we refer to Lemma 2 in the Appendix or to Catoni ( 2007 ) and Germain et al . ( 2016 ) . In particular , this gives us $$Q^*(\theta) := \arg\min_{Q \in \mathcal{M}(\Theta)} \sqrt{m}\, \hat{L}(Q, S) + D_{KL}(Q \| P) = P(\theta)\, e^{-\sqrt{m}\, \hat{L}(\theta, S)} / Z(S, P) , \quad (2)$$ where $Z(S, P)$ is a normalization constant . In a probabilistic setting , our loss function is the negative log-likelihood , i.e . $l(\theta, z_i) := -\log p(z_i | \theta)$ .
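Because the Gibbs posterior of Eq . ( 2 ) is available in closed form up to the normalizer $Z(S, P)$ , it can be computed exactly on a discretized parameter space . A minimal numerical sketch ( the 1-D grid , uniform prior , and squared loss are illustrative assumptions ) :

```python
import numpy as np

def gibbs_posterior(prior, emp_loss, m):
    """Closed-form minimizer of the PAC-Bayesian bound on a discrete grid:
    Q*(theta) ∝ P(theta) * exp(-sqrt(m) * L_hat(theta, S))  (Eq. 2)."""
    log_q = np.log(prior) - np.sqrt(m) * emp_loss
    log_q -= log_q.max()              # stabilize before exponentiating
    q = np.exp(log_q)
    return q / q.sum()                # divide by the normalizer Z(S, P)

thetas = np.linspace(-2.0, 2.0, 401)              # 1-D parameter grid
prior = np.full_like(thetas, 1.0 / len(thetas))   # uniform prior on the grid
S = np.array([0.9, 1.1, 1.0, 0.95])               # toy sample, m = 4
emp_loss = np.array([np.mean((S - t) ** 2) for t in thetas])  # squared loss
q_star = gibbs_posterior(prior, emp_loss, m=len(S))
theta_map = thetas[np.argmax(q_star)]
```

Under a flat prior , $Q^*$ concentrates around the empirical risk minimizer ( here , the sample mean ) , and the concentration sharpens as $m$ grows because of the $\sqrt{m}$ factor in the exponent .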
In this case , the optimal Gibbs posterior coincides with the generalized Bayesian posterior $Q^*(\theta; P, S) = P(\theta)\, p(S|\theta)^{1/\sqrt{m}} / Z(S, P)$ , where $Z(S, P) = \int_\Theta P(\theta)\, p(S|\theta)^{1/\sqrt{m}}\, d\theta$ is the generalized marginal likelihood of the data sample $S$ . | The paper addresses the problem of learning data-driven priors for Bayesian neural networks. Assuming zero-centered Gaussian priors for BNNs often results in poor generalization and uncertainty quantification, whereas choosing informative priors is challenging due to the limited interpretability of network weights. To overcome these issues, the authors propose a meta-learning framework based on PAC-Bayesian theory, in which they optimize a PAC bound called PACOH in the space of possible posterior distributions of BNN weights. Unlike previous approaches in the literature, their approach does not rely on nested optimization schemes; instead, they directly minimize the PAC bound via a variational algorithm called PACOH-NN, which is based on SVGD and the reparameterization trick. | SP:90cfa7a84021909d38e23a7013a29f990f8f8f02
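The generalized marginal likelihood $Z(S, P)$ can be estimated by Monte Carlo sampling from the prior ; working in log-space with the log-sum-exp trick avoids underflow , in the spirit of ( but not identical to ) the numerically stable estimator the paper contributes . A hedged sketch with a toy Gaussian model ( the prior , likelihood , and sample sizes are all illustrative assumptions ) :

```python
import numpy as np

def log_generalized_marginal_likelihood(log_lik_fn, prior_sampler, S, n_samples=5000):
    """Monte Carlo estimate of
    log Z(S, P) = log E_{theta ~ P}[ p(S|theta)^(1/sqrt(m)) ],
    computed in log-space (log-sum-exp) for numerical stability."""
    m = len(S)
    thetas = prior_sampler(n_samples)
    log_terms = np.array([log_lik_fn(S, t) / np.sqrt(m) for t in thetas])
    a = log_terms.max()                       # log-sum-exp shift
    return a + np.log(np.mean(np.exp(log_terms - a)))

# toy model: S ~ N(theta, 1) with prior theta ~ N(0, 1)
def log_lik(S, theta):
    return -0.5 * np.sum((S - theta) ** 2) - 0.5 * len(S) * np.log(2 * np.pi)

rng = np.random.default_rng(0)
S = rng.normal(0.5, 1.0, size=9)
log_Z = log_generalized_marginal_likelihood(
    log_lik, lambda n: rng.normal(0.0, 1.0, size=n), S)
```

Exponentiating the raw likelihood terms directly would underflow for larger samples ; the shift by the maximum log term keeps every exponent non-positive .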
Improving Tail Label Prediction for Extreme Multi-label Learning | Extreme multi-label learning ( XML ) works to annotate objects with relevant labels from an extremely large label set . Many previous methods treat labels uniformly , such that the learned model tends to perform better on head labels , while performance deteriorates severely on tail labels . However , it is often desirable to predict more tail labels in many real-world applications . To alleviate this problem , in this work , we show theoretical and experimental evidence for the inferior performance of representative XML methods on tail labels . Our finding is that the norm of label classifier weights typically follows a long-tailed distribution similar to the label frequency , which results in the over-suppression of tail labels . Based on this new finding , we present two new modules : ( 1 ) RANKNET learns to re-rank the predictions by optimizing a population-aware loss , which predicts tail labels with high rank ; ( 2 ) TAUG augments tail labels via a decoupled learning scheme , which can yield a more balanced classification boundary . We conduct experiments on commonly used XML benchmarks with hundreds of thousands of labels , showing that the proposed methods improve the performance of many state-of-the-art XML models by a considerable margin ( 6 % performance gain with respect to PSP @ 1 on average ) . 1 INTRODUCTION . Extreme multi-label learning ( XML ) aims to annotate objects with relevant labels from an extremely large candidate label set . Recently , XML has demonstrated its broad applications . For example , in webpage categorization Partalas et al . ( 2015 ) , millions of labels ( categories ) are collected in Wikipedia and one wishes to annotate new webpages with relevant labels from a huge candidate set ; in recommender systems McAuley et al . ( 2015 ) , one hopes to make informative personalized recommendations from millions of items .
Because of the high dimensionality of the label space , classic multi-label learning algorithms , such as Zhang & Zhou ( 2007 ) ; Tsoumakas & Vlahavas ( 2007 ) , become infeasible . To this end , a number of computationally efficient XML approaches have been proposed Weston et al . ( 2011 ) ; Agrawal et al . ( 2013 ) ; Bi & Kwok ( 2013 ) ; Yu et al . ( 2014 ) ; Bhatia et al . ( 2015 ) ; E.-H . Yen et al . ( 2016 ) ; Yeh et al . ( 2017 ) ; Yen et al . ( 2017 ) ; Tagami ( 2017 ) . In XML , one important statistical characteristic is that labels follow a long-tailed distribution , as illustrated in Figure 4 ( left ) . Most labels occur only a few times in the dataset . Infrequently occurring labels ( referred to as tail labels ) possess limited training samples and are harder to predict than frequently occurring ones ( referred to as head labels ) . Many existing XML approaches treat labels with equal importance , such as Prabhu & Varma ( 2014 ) ; Babbar & Schölkopf ( 2017 ) ; Khandagale et al . ( 2019 ) , while Wei & Li ( 2018 ) demonstrates that most predictions of well-established methods are head labels . However , in many real-world applications , it is still desirable to predict more tail labels , which are more rewarding and informative , such as in recommender systems Jain et al . ( 2016 ) ; Babbar & Schölkopf ( 2019 ) ; Wei & Li ( 2018 ) ; Wei et al . ( 2019 ) . To improve the performance on tail labels , existing solutions typically involve optimizing loss functions that are suitable for tail labels Jain et al . ( 2016 ) ; Babbar & Schölkopf ( 2019 ) , leveraging the sparsity of tail labels in the annotated label matrix Xu et al . ( 2016 ) , and transferring knowledge from data-rich head labels to data-scarce tail labels K. Dahiya ( 2019 ) . These methods typically achieve better performance on tail labels than standard XML methods which treat labels equally , while they usually involve high computational costs .
Moreover , previous studies do not explicitly explain the underlying cause of the inferior performance of many standard XML methods on tail labels . In this work , we disclose theoretical and experimental evidence for the inferior performance of previous XML methods on tail labels . Our finding is that the norm of label classifier weights follows a long-tailed distribution similar to the label frequency , as shown in Figure 4 ( middle ) , and the prediction score of tail labels is thereby underestimated . To alleviate this problem , we propose to rectify the classifier ’ s outputs and the training data distribution such that the prediction of tail labels is enhanced . We present two general modules suitable for any well-established XML method : ( 1 ) RANKNET learns to re-rank the predictions by optimizing a population-aware loss function , which predicts tail labels with high rank ; ( 2 ) TAUG augments tail labels via a decoupled learning scheme , which reduces the skewness of the training data and yields a more balanced classification boundary . We conduct experiments to verify the effectiveness of the aforementioned instantiations . From our extensive studies across four benchmark datasets , we make the following contributions : • We show , from both theoretical and experimental perspectives , that the norm of label classifier weights follows a long-tailed distribution , i.e. , the norms of head label classifier weights are considerably larger than those of tail label classifiers , which is a key cause of the inferior performance of many XML methods on tail labels . • We propose two general modules : RANKNET for prediction score re-ranking by optimizing a new population-aware loss , and TAUG for decoupled tail label augmentation . Both modules can be paired with any XML model without changing the model . • Experiments verify that our proposed modules achieve significant improvements ( 6 % w.r.t .
PSP @ 1 on average ) for well-established XML methods on benchmark datasets . • We provide an ablation study to highlight the effectiveness of each individual factor . 2 PREVIOUS EFFORTS . Existing work on XML can be roughly categorized into three directions : One-vs-all methods . This branch of work trains classifiers for each label separately . Due to the huge size of the label set , parallelization Babbar & Schölkopf ( 2017 ) , label partitioning Khandagale et al . ( 2019 ) , and label filter Niculescu-Mizil & Abbasnejad ( 2017 ) techniques are used to facilitate efficient training and testing . To alleviate memory overhead , recent works restrict the model capacity by imposing sparse constraints E.-H . Yen et al . ( 2016 ) or removing spurious parameters Babbar & Schölkopf ( 2017 ) . One criticism of one-vs-all methods is that they fail to capture label correlations . Embedding-based methods . Along this direction , researchers have proposed to embed the feature space and label space into a joint low-dimensional space , then model the correlation between features and labels in the hidden space Tai & Lin ( 2012 ) ; Chen & Lin ( 2012 ) ; Yu et al . ( 2014 ) ; Bhatia et al . ( 2015 ) ; Tagami ( 2017 ) ; Evron et al . ( 2018 ) . These methods can dramatically reduce the number of model parameters compared with one-vs-all methods , but involve solving complex optimization problems . Tree-based methods . In comparison to other types of approaches , tree-based methods greatly reduce inference time , which generally scales logarithmically in the number of labels . There are typically two types of trees , instance trees Prabhu & Varma ( 2014 ) ; Siblini et al . ( 2018 ) and label trees Daume III et al . ( 2016 ) ; You et al . ( 2018 ) , depending on whether instances or labels are partitioned in tree nodes . Tree-based methods usually suffer from low prediction accuracy due to the cascading effect , where a prediction error at the top cannot be corrected at a lower level .
These methods can readily scale up to problems with hundreds of thousands of labels . However , Wei & Li ( 2018 ; 2019 ) claim that head labels make a significantly higher contribution to the performance than tail labels . Therefore , much work has been conducted to improve the performance on tail labels . Optimization . Jain et al . ( 2016 ) proposes propensity scored loss functions that promote the prediction of tail labels with high ranks . Xu et al . ( 2016 ) decomposes the label matrix into a low-rank matrix and a sparse matrix . The low-rank matrix is expected to capture label correlations , and the sparse matrix is used to capture tail labels . Babbar & Schölkopf ( 2019 ) views tail labels from an adversarial perspective and optimizes hamming loss to yield a robust model . Knowledge transfer . K. Dahiya ( 2019 ) trains two deep models on head labels and tail labels . The semantic representations learned from head labels are transferred to the tail label model . These methods achieve better performance on tail labels than standard XML methods which treat labels equally , but they do not explicitly explain the underlying cause of the inferior performance of many standard XML methods on tail labels . In this work , we find that the classification boundary of existing XML methods is skewed towards head labels , causing the inferior performance . 3 METHODOLOGY . In XML , as we possess fewer data about tail labels , models learned on long-tailed datasets tend to exhibit inferior performance on tail labels Wei & Li ( 2018 ) . In practice , however , it is more informative and rewarding to accurately predict tail labels than head labels Jain et al . ( 2016 ) . In this work , we attempt to alleviate this problem from the perspective of the classification boundary . We make the observation that the norm of label classifier weights follows a long-tailed distribution similar to the label frequency , which means that the prediction of tail labels is over-suppressed .
This finding provides guidance for improving the prediction of tail labels . We present ways of rectifying the classifier ’ s outputs and the data distribution via re-ranking and tail label augmentation , respectively . Notations . We first describe the notations used throughout the paper . Let $X = \{x_i\}_{i=1}^N$ , $Y = \{y_i\}_{i=1}^N$ be a training set of size $N$ , where $y_i$ is the label vector for data point $x_i$ . Formally , XML is the task of learning a function $f$ that maps an input ( or instance ) $x \in \mathbb{R}^D$ to its target $y \in \{0, 1\}^L$ . We denote by $n_j = \sum_{i=1}^N y_{ij}$ the frequency of the $j$-th label . Without loss of generality , we assume that the labels are sorted by cardinality in non-increasing order , i.e. , if $j < k$ , then $n_j \ge n_k$ , where $1 \le j, k \le L$ . In our setting , we have $n_1 \gg n_L$ . According to the label frequency , we can split the label set into head labels and tail labels by a threshold $\tau \in (0, 1)$ . We denote the head label set $H = \{1, \dots, \lfloor \tau L \rfloor\}$ and the tail label set $T = \{\lfloor \tau L \rfloor + 1, \dots, L\}$ . $\tau$ is a user-specified parameter . 3.1 THE LONG-TAILED DISTRIBUTION OF CLASSIFIER WEIGHT NORMS . We present a different perspective on XML models , showing that their inferior performance on tail labels is due to the imbalanced classification boundary . In Figure 4 ( middle ) , we empirically observe that the norm of label classifier weights follows a similar long-tailed distribution as the label frequency . The results are produced on the EUR-Lex dataset using a representative one-vs-all method , Bonsai Khandagale et al . ( 2019 ) . A similar observation on the Wiki10-31K dataset is presented in the supplementary material . Since the norms of tail label classifier weights are considerably smaller than those of head label classifier weights , the predicted scores of tail labels are typically underestimated at inference time . We further support our finding theoretically and demonstrate that the small norm of tail label classifier weights is the root cause of the inferior performance .
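The head/tail split defined in the notation above is straightforward to compute from a binary label matrix . A minimal sketch ( the toy label matrix and the choice $\tau = 0.2$ are illustrative assumptions ) :

```python
import numpy as np

def head_tail_split(Y, tau=0.1):
    """Split labels into head set H and tail set T by frequency.
    Y: (N, L) binary label matrix; tau in (0, 1) is the user-specified threshold.
    Returns (H, T) as arrays of label indices, with |H| = floor(tau * L)."""
    n = Y.sum(axis=0)                       # n_j = sum_i y_ij, label frequencies
    order = np.argsort(-n)                  # sort labels so n_1 >= ... >= n_L
    cut = int(np.floor(tau * Y.shape[1]))
    return order[:cut], order[cut:]

# toy long-tailed label matrix: label 0 frequent, later labels increasingly rare
rng = np.random.default_rng(0)
N, L = 100, 10
Y = (rng.random((N, L)) < np.linspace(0.9, 0.01, L)).astype(int)
H, T = head_tail_split(Y, tau=0.2)
```

Sorting by frequency first guarantees that every head label is at least as frequent as every tail label , matching the paper 's assumption that labels are indexed in non-increasing order of cardinality .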
We make the following mild assumption on the data : every input $x$ is sampled from the feature space completely at random , and there exists a constant threshold $t > 0$ for the input $x$ , such that the top-$k$ prediction for $x$ is made as $\beta(k) = \{ y_l \mid \hat{P}(y_l \mid x) \ge t , \; 1 \le l \le L \}$ , where $\hat{P}(y_l \mid x)$ denotes the estimated label distribution . Let $W = \{w_j\}_{j=1}^L$ be the weight matrix of a standard XML method . In particular , for binary relevance and tree-based classifiers , $W$ can be obtained by optimizing Eq . ( 1 ) , where $\mathcal{L}$ denotes the loss function , e.g. , squared hinge loss , and the constant $\lambda$ is a trade-off parameter . Note that for some tree-based methods , such as Bonsai Khandagale et al . ( 2019 ) and Parabel Prabhu et al . ( 2018 ) , we consider $W$ to be the label classifier weights in the leaf nodes , i.e. , excluding meta-labels of internal tree nodes . $$\min_{w_j} \; \|w_j\|_2^2 + \lambda \sum_{i=1}^{N} \mathcal{L}(Y_{i,j}, w_j^T x_i) , \quad \forall 1 \le j \le L \quad (1)$$ For deep learning methods , we denote by $W$ the weights of the last linear layer for classification , obtained by optimizing Eq . ( 2 ) , where $\sigma$ is the softmax function , $f_\theta$ is the feature extractor parametrized by $\theta$ , and $\mathcal{L}$ denotes the selected loss function , e.g. , binary cross entropy . Note that this interpretation can also be adapted to typical embedding-based methods , such as Yu et al . ( 2014 ) , where $f_\theta$ is linear and $\sigma$ is the identity function . $$\min_{W} \; \sum_{i=1}^{N} \mathcal{L}(y_i, \sigma(W^\top f_\theta(x_i))) \quad (2)$$ With the above setup , we summarize our findings in Theorem 1 . Theorem 1 . Let $D = \{(x_i, y_i)\}_{i=1}^N$ be a sample set and $W$ , which can be decomposed as $\{w_j\}_{j=1}^L$ , be the label classifier weights learned on $D$ by optimizing Eq . ( 1 ) or Eq . ( 2 ) . For a uniformly sampled point $x$ which is i.i.d . with the points in $D$ , we have $\|w_j\| \propto \mathbb{E}[\, y_j \in \beta(k) \,] , \; \forall 1 \le j \le L$ , where $\beta(k)$ denotes the $k$ top-ranked indices of predicted labels in $\hat{P}(y \mid x)$ .
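The norm–frequency relationship can be observed even in a toy setting . The sketch below solves Eq . ( 1 ) with squared loss in closed form per label on synthetic data whose features carry no label information , so the weight norms are driven purely by label frequency ( the data , the regularization constant , and the closed-form ridge solution are illustrative assumptions , not the paper 's experimental setup ) :

```python
import numpy as np

def per_label_ridge(X, Y, lam=1.0):
    """Solve Eq. (1) with squared loss for each label j:
    min_w ||w||^2 + lam * sum_i (Y_ij - w^T x_i)^2,
    whose closed form is w_j = lam * (I + lam * X^T X)^{-1} X^T y_j."""
    D = X.shape[1]
    A = np.eye(D) + lam * X.T @ X
    return lam * np.linalg.solve(A, X.T @ Y)   # (D, L) weight matrix

rng = np.random.default_rng(0)
N, D, L = 500, 20, 5
X = rng.normal(size=(N, D))
# long-tailed labels: label j is positive with sharply decreasing frequency
freqs = np.array([0.5, 0.2, 0.1, 0.05, 0.01])
Y = (rng.random((N, L)) < freqs).astype(float)
W = per_label_ridge(X, Y)
norms = np.linalg.norm(W, axis=0)              # ||w_j|| per label
```

On this construction the weight norm grows roughly like the square root of the label count , so the norms inherit the long-tailed shape of the label frequencies , mirroring the pattern reported in Figure 4 ( middle ) .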
This theorem shows the need to re-balance the classifier weights to improve the performance on tail labels . Motivated by our finding , in the following we propose two new modules and discuss their effectiveness on tail labels . The proof of this theorem can be found in the supplementary material . | This paper considers the setting of extreme multi-label classification, where labels typically follow a power-law distribution with many infrequently-observed labels (so-called tail labels). In this setting it often happens that multi-label classifiers more often predict frequent labels as positive than infrequent labels. In practical applications this is not always wanted, and the authors present a new algorithm that favors tail labels over frequent labels. To this end, a specific ranking-based loss function that consists of two parts is minimized. The first part of the loss ranks positive tail labels higher than positive frequent labels. The second part is more standard, and ranks positive labels higher than negative labels. | SP:25a306a19b267dfcdcd927fa9e65f3d8f7487918
Improving Tail Label Prediction for Extreme Multi-label Learning | Extreme multi-label learning ( XML ) works to annotate objects with relevant labels from an extremely large label set . Many previous methods treat labels uniformly such that the learned model tends to perform better on head labels , while the performance is severely deteriorated for tail labels . However , it is often desirable to predict more tail labels in many real-world applications . To alleviate this problem , in this work , we show theoretical and experimental evidence for the inferior performance of representative XML methods on tail labels . Our finding is that the norm of label classifier weights typically follows a long-tailed distribution similar to the label frequency , which results in the over-suppression of tail labels . Base on this new finding , we present two new modules : ( 1 ) RANKNET learns to re-rank the predictions by optimizing a population-aware loss , which predicts tail labels with high rank ; ( 2 ) TAUG augments tail labels via a decoupled learning scheme , which can yield more balanced classification boundary . We conduct experiments on commonly used XML benchmarks with hundreds of thousands of labels , showing that the proposed methods improve the performance of many state-of-the-art XML models by a considerable margin ( 6 % performance gain with respect to PSP @ 1 on average ) . 1 INTRODUCTION . Extreme multi-label learning ( XML ) aims to annotate objects with relevant labels from an extremely large candidate label set . Recently , XML has demonstrated its broad applications . For example , in webpage categorization Partalas et al . ( 2015 ) , millions of labels ( categories ) are collected in Wikipedia and one wishes to annotate new webpages with relevant labels from a huge candidate set ; in recommender systems McAuley et al . ( 2015 ) , one hopes to make informative personalized recommendations from millions of items . 
Because of the high dimensionality of label space , classic multi-label learning algorithms , such as Zhang & Zhou ( 2007 ) ; Tsoumakas & Vlahavas ( 2007 ) , become infeasible . To this end , a number of computational efficient XML approaches are proposed Weston et al . ( 2011 ) ; Agrawal et al . ( 2013 ) ; Bi & Kwok ( 2013 ) ; Yu et al . ( 2014 ) ; Bhatia et al . ( 2015 ) ; E.-H . Yen et al . ( 2016 ) ; Yeh et al . ( 2017 ) ; Yen et al . ( 2017 ) ; Tagami ( 2017 ) . In XML , one important statistical characteristic is that labels follow a long-tailed distribution as illustrated in Figure 4 ( left ) . Most labels occur only a few times in the dataset . Infrequently occurring labels ( referred to as tail label ) possess limited training samples and are harder to predict than frequently occurring ones ( referred to as head label ) . Many existing XML approaches treat labels with equal importance , such as Prabhu & Varma ( 2014 ) ; Babbar & Schölkopf ( 2017 ) ; Khandagale et al . ( 2019 ) , while Wei & Li ( 2018 ) demonstrates that most predictions of well-established methods are heads labels . However , in many real-world applications , it is still desirable to predict more tail labels which are more rewarding and informative , such as recommender systems Jain et al . ( 2016 ) ; Babbar & Schölkopf ( 2019 ) ; Wei & Li ( 2018 ) ; Wei et al . ( 2019 ) . To improve the performance for tail labels , existing solutions typically involve optimizing loss functions that are suitable for tail labels Jain et al . ( 2016 ) ; Babbar & Schölkopf ( 2019 ) , leveraging the sparsity of tail labels in the annotated label matrix Xu et al . ( 2016 ) , and transferring knowledge from data-rich head labels to data-scarce tail labels K. Dahiya ( 2019 ) . These methods typically achieve better performance on tail labels than standard XML methods which treat labels equally , while they usually involve high computational costs . 
Moreover , previous studies do not explicitly explain the underlying cause of the inferior performance of many standard XML methods for tail labels . In this work , we disclose theoretical and experimental evidence for the inferior performance of previous XML methods on tail labels . Our finding is that the norm of label classifier weights follows a long-tailed distribution similar to the label frequency as shown in Figure 4 ( middle ) , and the prediction score of tail labels thereby is underrated . To alleviate this problem , we propose to rectify the classifier ’ s outputs and training data distribution such that the prediction of tail labels is enhanced . We present two general modules suitable for any well-established XML methods : ( 1 ) RANKNET learns to re-rank the predictions by optimizing a population-aware loss function , which predicts tail labels with high rank ; ( 2 ) TAUG augments tail labels via a decoupled learning scheme , which reduces the skewness of training data and yields more balanced classification boundary . We conduct experiments to verify the effectiveness of the aforementioned instantiations . From our extensive studies across four benchmark datasets , we make the following intriguing contributions : • We show that from both theoretical and experimental perspectives , the norm of label classifier weights follow a long-tailed distribution , i.e. , the norms of head label classifier weights are considerably larger than that of tail label classifiers , which is a key cause of the inferior performance of many XML methods on tail labels . • We propose two general modules : RANKNET for prediction score re-ranking by optimizing a new population-aware loss , and TAUG for decoupled tail label augmentation . Both methods can be paired with any XML model without changing the model . • Experiments verify that our proposed modules achieve significant improvements ( 6 % w.r.t . 
PSP @ 1 on average ) for well-established XML methods on benchmark datasets . • We provide an ablation study to highlight the effectiveness of each individual factor . 2 PREVIOUS EFFORTS . Existing work on XML can be roughly categorized as three directions : One-vs-all methods . This branch of work trains classifiers for each label separately . Due to the huge size of label set , parallelization Babbar & Schölkopf ( 2017 ) , label partitioning Khandagale et al . ( 2019 ) , and label filter Niculescu-Mizil & Abbasnejad ( 2017 ) techniques are used to facilitate efficient training and testing . To alleviate memory overhead , recent works restrict the model capacity by imposing sparse constraints E.-H . Yen et al . ( 2016 ) or removing spurious parameters Babbar & Schölkopf ( 2017 ) . One criticism of one-vs-all methods is that it fails to capture label correlations . Embedding-based methods . Along this direction , researchers have proposed to embed the feature space and label space onto a joint low-dimensional space , then model the correlation between features and labels in hidden space Tai & Lin ( 2012 ) ; Chen & Lin ( 2012 ) ; Yu et al . ( 2014 ) ; Bhatia et al . ( 2015 ) ; Tagami ( 2017 ) ; Evron et al . ( 2018 ) . This method can dramatically reduce the model parameters compared with the one-vs-all methods , but involves solving complex optimization problems . Tree-based methods . In comparison to other types of approaches , tree-based methods greatly reduce inference time , which generally scales logarithmically in the number of labels . There are typically two types of trees including instance trees Prabhu & Varma ( 2014 ) ; Siblini et al . ( 2018 ) and label trees Daume III et al . ( 2016 ) ; You et al . ( 2018 ) , depending whether instance or label is partitioned in tree nodes . Tree-based methods usually suffer from low prediction accuracy affected by the cascading effect , where the prediction error at the top can not be corrected at a lower level . 
These methods can readily scale up to problems with hundreds of thousands of labels . However , Wei & Li ( 2018 ; 2019 ) claims that head labels make a significantly higher contribution to the performance than tail labels . Therefore , many work are conducted to improve the performance for tail labels . Optimization . Jain et al . ( 2016 ) proposes propensity scored loss functions that promote the prediction of tail label with high ranks . Xu et al . ( 2016 ) decomposes the label matrix into a low-rank matrix and a sparse matrix . The low-rank matrix is expected to capture label correlations , and the sparse matrix is used to capture tail labels . Babbar & Schölkopf ( 2019 ) views tail label from an adversarial perspective and optimizes hamming loss to yield a robust model . Knowledge transfer . K. Dahiya ( 2019 ) trains two deep models on head labels and tail labels . The semantic representations learned from head labels are transferred to the tail label model . These methods achieve better performance on tail labels than standard XML methods which treat labels equally , while they do not explicitly explain the underlying cause of the inferior performance of many standard XML methods for tail labels . In this work , we find that the classification boundary of existing XML methods is skewed to head labels , causing the inferior performance . 3 METHODOLOGY . In XML , as we possess fewer data about tail labels , models learned on long-tailed datasets tend to exhibit inferior performance on tail labels Wei & Li ( 2018 ) . However in practice , it is more informative and rewarding to accurately predict tail labels than head labels Jain et al . ( 2016 ) . In this work , we attempt to alleviate this problem from the perspective of the classification boundary . We make an observation that the norm of label classifier weights follow a long-tailed distribution similar to the label frequency , which means that the prediction of tail labels is over-suppressed . 
This finding provides evidence for how to improve the prediction of tail labels. We present ways of rectifying the classifier's outputs and the data distribution via re-ranking and tail label augmentation, respectively. Notations. We first describe the notation used throughout the paper. Let X = {x_i}_{i=1}^N, Y = {y_i}_{i=1}^N be a training set of size N, where y_i is the label vector for data point x_i. Formally, XML is the task of learning a function f that maps an input (or instance) x ∈ R^D to its target y ∈ {0, 1}^L. We denote n_j = ∑_{i=1}^N y_{ij} as the frequency of the j-th label. Without loss of generality, we assume that the labels are sorted by cardinality in non-increasing order, i.e., if j < k, then n_j ≥ n_k, where 1 ≤ j, k ≤ L. In our setting, we have n_1 ≫ n_L. According to the label frequency, we can split the label set into head labels and tail labels by a threshold τ ∈ (0, 1). We denote the head label set H = {1, ..., ⌊τL⌋} and the tail label set T = {⌊τL⌋ + 1, ..., L}; τ is a user-specified parameter. 3.1 THE LONG-TAILED DISTRIBUTION OF CLASSIFIER WEIGHT NORMS. We present a different perspective on XML models, showing that their inferior performance on tail labels is due to the imbalanced classification boundary. In Figure 4 (middle), we empirically observe that the norm of the label classifier weights follows a long-tailed distribution similar to the label frequency. The results are produced on the EUR-Lex dataset using a representative one-vs-all method, Bonsai (Khandagale et al., 2019). A similar observation on the Wiki10-31K dataset is presented in the supplementary material. Since the norms of tail label classifier weights are considerably smaller than those of head label classifier weights, the predicted scores of tail labels are typically underestimated at inference. We further support this finding theoretically and demonstrate that the small norm of tail label classifier weights is the root cause of the inferior performance.
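The frequency count n_j and the head/tail split described above can be written down directly; the toy label matrix and the threshold value below are illustrative, not from the paper:

```python
import numpy as np

def split_head_tail(Y, tau=0.1):
    """Split labels into head and tail sets by frequency.

    Y   : (N, L) binary label matrix; column j holds label j's assignments.
    tau : fraction of (frequency-sorted) labels treated as head labels.
    """
    n = Y.sum(axis=0)              # n_j = sum_i y_ij, the label frequencies
    order = np.argsort(-n)         # label indices in non-increasing frequency
    cut = int(tau * Y.shape[1])    # |H| = floor(tau * L)
    return set(order[:cut].tolist()), set(order[cut:].tolist())

# Toy example: 6 samples, 5 labels; label 0 is frequent, the rest are rare.
Y = np.array([[1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 1, 0],
              [1, 0, 0, 0, 1],
              [1, 1, 0, 0, 0]])
head, tail = split_head_tail(Y, tau=0.2)   # floor(0.2 * 5) = 1 head label
print(head, tail)                          # {0} and the remaining labels
```

The paper assumes labels are already sorted by frequency; the explicit `argsort` here just makes that assumption concrete for arbitrary input.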
We make the following mild assumption on the data: every input x is sampled from the feature space completely at random, and there exists a constant threshold t > 0 such that the top-k prediction for x is made as β(k) = { y_l | P̂(y_l | x) ≥ t, 1 ≤ l ≤ L }, where P̂(y_l | x) denotes the estimated label distribution. Let W = {w_j}_{j=1}^L be the weight matrix of a standard XML method. In particular, for binary relevance and tree-based classifiers, W can be obtained by optimizing Eq. (1), where L denotes the loss function, e.g., squared hinge loss, and the constant λ is a trade-off parameter. Note that for some tree-based methods, such as Bonsai (Khandagale et al., 2019) and Parabel (Prabhu et al., 2018), we take W to be the label classifier weights in the leaf nodes, i.e., excluding the meta-labels of internal tree nodes. min_{w_j} ‖w_j‖_2^2 + λ ∑_{i=1}^{N} L(Y_{i,j}, w_j^T x_i), ∀ 1 ≤ j ≤ L (1) For deep learning methods, we take W to be the weights of the last linear layer for classification, obtained by optimizing Eq. (2), where σ is the softmax function, f_θ is the feature extractor parameterized by θ, and L denotes the selected loss function, e.g., binary cross entropy. Note that this interpretation can also be adapted to typical embedding-based methods, such as Yu et al. (2014), where f_θ is linear and σ is the identity function. min_W ∑_{i=1}^{N} L(y_i, σ(W^⊤ f_θ(x_i))) (2) With the above setup, we summarize our findings in Theorem 1. Theorem 1. Let D = {(x_i, y_i)}_{i=1}^N be a sample set and let W, which can be decomposed as {w_j}_{j=1}^L, be the label classifier weights learned on D by optimizing Eq. (1) or Eq. (2). For a uniformly sampled point x that is i.i.d. with the points in D, we have ‖w_j‖ ∝ E[y_j ∈ β(k)], ∀ 1 ≤ j ≤ L, where β(k) denotes the k top-ranked indices of predicted labels in P̂(y | x).
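Eq. (1) is a standard per-label regularized problem. As a sketch (one concrete reading with the squared hinge loss on ±1 targets; the toy data, step size, and iteration count are illustrative), it can be minimized by plain gradient descent:

```python
import numpy as np

def train_label_classifier(X, y, lam=1.0, lr=0.001, steps=2000):
    """One concrete reading of Eq. (1): minimize
        ||w||_2^2 + lam * sum_i max(0, 1 - t_i * w^T x_i)^2
    (squared hinge loss) by gradient descent, with targets
    t_i = +1 if y_i = 1 else -1."""
    t = np.where(y == 1, 1.0, -1.0)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = np.maximum(0.0, 1.0 - t * (X @ w))        # active hinge margins
        grad = 2.0 * w - 2.0 * lam * (X.T @ (margins * t))  # gradient of the objective
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)      # a label fully determined by the first feature
w = train_label_classifier(X, y)
acc = ((X @ w > 0).astype(int) == y).mean()
print(np.linalg.norm(w), acc)
```

Running this per label j (with `y = Y[:, j]`) yields exactly the per-label weight vectors w_j whose norms the paper studies.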
This theorem shows the need for re-balancing the classifier weights to improve the performance on tail labels. Motivated by our finding, in the following we propose two new modules and discuss their effectiveness on tail labels. The proof of this theorem can be found in the supplementary material. | In prediction problems with millions of labels, also known as Extreme Multi-label Learning (XML) problems, e.g., recommender systems, the model predictions are not as good for the tail (rarer) labels. This paper proposes two models for this problem. The first model is re-ranking-based, that is, it reranks the prediction scores of a standard XML model. The second model tries to augment the rarer labels to reduce the skew in the data. Results shown on several real-world datasets highlight the superior predictive ability of the proposed reranking model for tail labels compared to a host of competitive baselines. | SP:25a306a19b267dfcdcd927fa9e65f3d8f7487918
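A direct, if crude, way to act on Theorem 1 is to divide each label's score by (a power of) its classifier-weight norm before ranking. Note that this normalization is only an illustration of the re-balancing idea, not the RANKNET or TAUG module proposed in the paper; the weight matrix and input below are dummies:

```python
import numpy as np

def rebalanced_topk(x, W, k=3, alpha=1.0):
    """Rank labels by scores divided by ||w_j||^alpha, countering the bias of
    Theorem 1 that large-norm (head) classifiers dominate the top-k.
    alpha = 0 recovers the raw ranking."""
    scores = W.T @ x                               # one score per label
    norms = np.linalg.norm(W, axis=0)              # ||w_j|| for each column w_j
    adjusted = scores / np.maximum(norms, 1e-12) ** alpha
    return np.argsort(-adjusted)[:k]

# Dummy classifiers: label 0 has a large-norm weight vector, label 2 a small one.
W = np.array([[3.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.3]])
x = np.array([0.5, 1.0, 0.9])
print(rebalanced_topk(x, W, k=3, alpha=0.0))   # raw ranking: head label 0 first
print(rebalanced_topk(x, W, k=3, alpha=1.0))   # after normalization, label 0 drops
```

With alpha = 0 the raw scores [1.5, 1.0, 0.27] rank the large-norm label first; with alpha = 1 the norm-adjusted scores [0.5, 1.0, 0.9] promote the small-norm (tail-like) labels.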
Improving Tail Label Prediction for Extreme Multi-label Learning | Extreme multi-label learning (XML) aims to annotate objects with relevant labels from an extremely large label set. Many previous methods treat labels uniformly, so the learned model tends to perform better on head labels while performance deteriorates severely on tail labels. However, it is often desirable to predict more tail labels in many real-world applications. To alleviate this problem, in this work, we show theoretical and experimental evidence for the inferior performance of representative XML methods on tail labels. Our finding is that the norm of the label classifier weights typically follows a long-tailed distribution similar to the label frequency, which results in the over-suppression of tail labels. Based on this new finding, we present two new modules: (1) RANKNET learns to re-rank the predictions by optimizing a population-aware loss, which predicts tail labels with high rank; (2) TAUG augments tail labels via a decoupled learning scheme, which can yield a more balanced classification boundary. We conduct experiments on commonly used XML benchmarks with hundreds of thousands of labels, showing that the proposed methods improve the performance of many state-of-the-art XML models by a considerable margin (6% performance gain with respect to PSP@1 on average). 1 INTRODUCTION. Extreme multi-label learning (XML) aims to annotate objects with relevant labels from an extremely large candidate label set. Recently, XML has demonstrated broad applicability. For example, in webpage categorization (Partalas et al., 2015), millions of labels (categories) are collected in Wikipedia and one wishes to annotate new webpages with relevant labels from a huge candidate set; in recommender systems (McAuley et al., 2015), one hopes to make informative personalized recommendations from millions of items.
Because of the high dimensionality of the label space, classic multi-label learning algorithms, such as Zhang & Zhou (2007); Tsoumakas & Vlahavas (2007), become infeasible. To this end, a number of computationally efficient XML approaches have been proposed (Weston et al., 2011; Agrawal et al., 2013; Bi & Kwok, 2013; Yu et al., 2014; Bhatia et al., 2015; E.-H. Yen et al., 2016; Yeh et al., 2017; Yen et al., 2017; Tagami, 2017). In XML, one important statistical characteristic is that labels follow a long-tailed distribution, as illustrated in Figure 4 (left). Most labels occur only a few times in the dataset. Infrequently occurring labels (referred to as tail labels) possess limited training samples and are harder to predict than frequently occurring ones (referred to as head labels). Many existing XML approaches treat labels with equal importance, such as Prabhu & Varma (2014); Babbar & Schölkopf (2017); Khandagale et al. (2019), while Wei & Li (2018) demonstrate that most predictions of well-established methods are head labels. However, in many real-world applications, it is still desirable to predict more tail labels, which are more rewarding and informative, for example in recommender systems (Jain et al., 2016; Babbar & Schölkopf, 2019; Wei & Li, 2018; Wei et al., 2019). To improve the performance on tail labels, existing solutions typically involve optimizing loss functions that are suitable for tail labels (Jain et al., 2016; Babbar & Schölkopf, 2019), leveraging the sparsity of tail labels in the annotated label matrix (Xu et al., 2016), or transferring knowledge from data-rich head labels to data-scarce tail labels (K. Dahiya, 2019). These methods typically achieve better performance on tail labels than standard XML methods that treat labels equally, but they usually involve high computational costs.
Moreover, previous studies do not explicitly explain the underlying cause of the inferior performance of many standard XML methods on tail labels. In this work, we provide theoretical and experimental evidence for the inferior performance of previous XML methods on tail labels. Our finding is that the norm of the label classifier weights follows a long-tailed distribution similar to the label frequency, as shown in Figure 4 (middle), and the prediction scores of tail labels are thereby underestimated. To alleviate this problem, we propose to rectify the classifier's outputs and the training data distribution such that the prediction of tail labels is enhanced. We present two general modules suitable for any well-established XML method: (1) RANKNET learns to re-rank the predictions by optimizing a population-aware loss function, which predicts tail labels with high rank; (2) TAUG augments tail labels via a decoupled learning scheme, which reduces the skewness of the training data and yields a more balanced classification boundary. We conduct experiments to verify the effectiveness of the aforementioned instantiations. From our extensive studies across four benchmark datasets, we make the following contributions: • We show, from both theoretical and experimental perspectives, that the norm of the label classifier weights follows a long-tailed distribution, i.e., the norms of head label classifier weights are considerably larger than those of tail label classifiers, which is a key cause of the inferior performance of many XML methods on tail labels. • We propose two general modules: RANKNET for prediction score re-ranking by optimizing a new population-aware loss, and TAUG for decoupled tail label augmentation. Both methods can be paired with any XML model without changing the model. • Experiments verify that our proposed modules achieve significant improvements (6% w.r.t.
PSP@1 on average) for well-established XML methods on benchmark datasets. • We provide an ablation study to highlight the effectiveness of each individual factor. 2 PREVIOUS EFFORTS. Existing work on XML can be roughly categorized into three directions. One-vs-all methods. This branch of work trains a classifier for each label separately. Due to the huge size of the label set, parallelization (Babbar & Schölkopf, 2017), label partitioning (Khandagale et al., 2019), and label filter (Niculescu-Mizil & Abbasnejad, 2017) techniques are used to facilitate efficient training and testing. To alleviate memory overhead, recent works restrict the model capacity by imposing sparsity constraints (E.-H. Yen et al., 2016) or removing spurious parameters (Babbar & Schölkopf, 2017). One criticism of one-vs-all methods is that they fail to capture label correlations. Embedding-based methods. Along this direction, researchers have proposed to embed the feature space and label space into a joint low-dimensional space, and then model the correlation between features and labels in the hidden space (Tai & Lin, 2012; Chen & Lin, 2012; Yu et al., 2014; Bhatia et al., 2015; Tagami, 2017; Evron et al., 2018). These methods can dramatically reduce the number of model parameters compared with one-vs-all methods, but involve solving complex optimization problems. Tree-based methods. In comparison to other types of approaches, tree-based methods greatly reduce inference time, which generally scales logarithmically in the number of labels. There are typically two types of trees, instance trees (Prabhu & Varma, 2014; Siblini et al., 2018) and label trees (Daume III et al., 2016; You et al., 2018), depending on whether instances or labels are partitioned in the tree nodes. Tree-based methods usually suffer from low prediction accuracy due to the cascading effect, where a prediction error at the top of the tree cannot be corrected at a lower level.
These methods can readily scale up to problems with hundreds of thousands of labels. However, Wei & Li (2018; 2019) claim that head labels make a significantly higher contribution to performance than tail labels. Therefore, much work has been conducted to improve the performance on tail labels. Optimization. Jain et al. (2016) propose propensity-scored loss functions that promote the prediction of tail labels with high rank. Xu et al. (2016) decompose the label matrix into a low-rank matrix and a sparse matrix: the low-rank matrix is expected to capture label correlations, and the sparse matrix is used to capture tail labels. Babbar & Schölkopf (2019) view tail labels from an adversarial perspective and optimize the Hamming loss to yield a robust model. Knowledge transfer. K. Dahiya (2019) trains two deep models, one on head labels and one on tail labels; the semantic representations learned from head labels are transferred to the tail label model. These methods achieve better performance on tail labels than standard XML methods that treat labels equally, but they do not explicitly explain the underlying cause of the inferior performance of many standard XML methods on tail labels. In this work, we find that the classification boundary of existing XML methods is skewed towards head labels, causing the inferior performance. 3 METHODOLOGY. In XML, as we possess less data about tail labels, models learned on long-tailed datasets tend to exhibit inferior performance on tail labels (Wei & Li, 2018). In practice, however, it is more informative and rewarding to accurately predict tail labels than head labels (Jain et al., 2016). In this work, we attempt to alleviate this problem from the perspective of the classification boundary. We observe that the norm of the label classifier weights follows a long-tailed distribution similar to the label frequency, which means that the prediction of tail labels is over-suppressed.
This finding provides evidence for how to improve the prediction of tail labels. We present ways of rectifying the classifier's outputs and the data distribution via re-ranking and tail label augmentation, respectively. Notations. We first describe the notation used throughout the paper. Let X = {x_i}_{i=1}^N, Y = {y_i}_{i=1}^N be a training set of size N, where y_i is the label vector for data point x_i. Formally, XML is the task of learning a function f that maps an input (or instance) x ∈ R^D to its target y ∈ {0, 1}^L. We denote n_j = ∑_{i=1}^N y_{ij} as the frequency of the j-th label. Without loss of generality, we assume that the labels are sorted by cardinality in non-increasing order, i.e., if j < k, then n_j ≥ n_k, where 1 ≤ j, k ≤ L. In our setting, we have n_1 ≫ n_L. According to the label frequency, we can split the label set into head labels and tail labels by a threshold τ ∈ (0, 1). We denote the head label set H = {1, ..., ⌊τL⌋} and the tail label set T = {⌊τL⌋ + 1, ..., L}; τ is a user-specified parameter. 3.1 THE LONG-TAILED DISTRIBUTION OF CLASSIFIER WEIGHT NORMS. We present a different perspective on XML models, showing that their inferior performance on tail labels is due to the imbalanced classification boundary. In Figure 4 (middle), we empirically observe that the norm of the label classifier weights follows a long-tailed distribution similar to the label frequency. The results are produced on the EUR-Lex dataset using a representative one-vs-all method, Bonsai (Khandagale et al., 2019). A similar observation on the Wiki10-31K dataset is presented in the supplementary material. Since the norms of tail label classifier weights are considerably smaller than those of head label classifier weights, the predicted scores of tail labels are typically underestimated at inference. We further support this finding theoretically and demonstrate that the small norm of tail label classifier weights is the root cause of the inferior performance.
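The qualitative observation above is easy to reproduce on synthetic long-tailed data: training one L2-regularized linear classifier per label (here closed-form ridge regression as a stand-in for Eq. (1)) yields weight norms that decay with label frequency. The data-generating process and all constants below are illustrative, not the paper's EUR-Lex setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, L = 2000, 16, 8
freqs = (400 / 2 ** np.arange(L)).astype(int).clip(min=5)  # long-tailed frequencies

X = rng.normal(scale=0.5, size=(N, D))
Y = np.zeros((N, L))
for j, nj in enumerate(freqs):
    pos = rng.choice(N, size=nj, replace=False)
    Y[pos, j] = 1
    X[pos, j] += 2.0           # positives of label j carry a signal on coordinate j

# One ridge-regression classifier per label:
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ Y)    # column j is w_j
norms = np.linalg.norm(W, axis=0)
print(freqs)     # [400 200 100  50  25  12   6   5]
print(norms)     # decays with label frequency: head norms >> tail norms
```

Because regularization shrinks each w_j towards zero while the data-fitting pull grows with the number of positives n_j, the rare labels end up with much smaller weight norms — the same skew the paper observes on real benchmarks.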
We make the following mild assumption on the data: every input x is sampled from the feature space completely at random, and there exists a constant threshold t > 0 such that the top-k prediction for x is made as β(k) = { y_l | P̂(y_l | x) ≥ t, 1 ≤ l ≤ L }, where P̂(y_l | x) denotes the estimated label distribution. Let W = {w_j}_{j=1}^L be the weight matrix of a standard XML method. In particular, for binary relevance and tree-based classifiers, W can be obtained by optimizing Eq. (1), where L denotes the loss function, e.g., squared hinge loss, and the constant λ is a trade-off parameter. Note that for some tree-based methods, such as Bonsai (Khandagale et al., 2019) and Parabel (Prabhu et al., 2018), we take W to be the label classifier weights in the leaf nodes, i.e., excluding the meta-labels of internal tree nodes. min_{w_j} ‖w_j‖_2^2 + λ ∑_{i=1}^{N} L(Y_{i,j}, w_j^T x_i), ∀ 1 ≤ j ≤ L (1) For deep learning methods, we take W to be the weights of the last linear layer for classification, obtained by optimizing Eq. (2), where σ is the softmax function, f_θ is the feature extractor parameterized by θ, and L denotes the selected loss function, e.g., binary cross entropy. Note that this interpretation can also be adapted to typical embedding-based methods, such as Yu et al. (2014), where f_θ is linear and σ is the identity function. min_W ∑_{i=1}^{N} L(y_i, σ(W^⊤ f_θ(x_i))) (2) With the above setup, we summarize our findings in Theorem 1. Theorem 1. Let D = {(x_i, y_i)}_{i=1}^N be a sample set and let W, which can be decomposed as {w_j}_{j=1}^L, be the label classifier weights learned on D by optimizing Eq. (1) or Eq. (2). For a uniformly sampled point x that is i.i.d. with the points in D, we have ‖w_j‖ ∝ E[y_j ∈ β(k)], ∀ 1 ≤ j ≤ L, where β(k) denotes the k top-ranked indices of predicted labels in P̂(y | x).
This theorem shows the need for re-balancing the classifier weights to improve the performance on tail labels. Motivated by our finding, in the following we propose two new modules and discuss their effectiveness on tail labels. The proof of this theorem can be found in the supplementary material. | The paper presents a method for improving tail-label performance in the extreme multi-label learning setup, where the number of target labels can be extremely large. It is based on the finding that the distribution of the norms of the learnt weight vectors also follows a power law, as does the distribution of the samples among labels. The main contribution of the paper is proposing a re-ranking method which encourages precedence of tail labels, and a data augmentation mechanism. It achieves improvements when applied to SOTA methods on the relevant PSP metrics. | SP:25a306a19b267dfcdcd927fa9e65f3d8f7487918
Semantic Re-tuning with Contrastive Tension | 1 INTRODUCTION . Representation learning concerns the pursuit of automatically learning representations of data that are useful for future extraction of information ( Bengio et al. , 2013 ) . Recent work has predominantly been focused on training and extracting such representations from various deep neural architectures . However , as these deep models are mostly trained via error minimization of an objective function applied to the final layers ( Rumelhart et al. , 1988 ) , features residing in layers close to the objective function will be task-specific Yosinski et al . ( 2014 ) . Therefore , to reduce the representation ’ s bias towards the objective function it is common to discard one or several of the final layers , or alternatively consider features of other intermediate layers , as with AutoEncoders ( Rumelhart et al. , 1986 ) . One domain where this issue is particularly striking is learning semantic sentence embeddings with deep Transformer networks ( Vaswani et al. , 2017 ) pre-trained towards some language modeling task . Although utilizing pre-trained Transformer models such as BERT , XLnet , ELECTRA and GPT-2 ( Devlin et al. , 2019 ; Yang et al. , 2019 ; Clark et al. , 2020 ; Brown et al. , 2020 ) has become the dominant approach within the field of Natural Language Processing ( NLP ) , with current State Of The Art ( SOTA ) results in basically all NLP tasks belonging to fine-tuned versions of such models , it has been shown that simply extracting features from the layers of such models does not produce competitive sentence embeddings ( Reimers & Gurevych , 2019 ; Liu et al. , 2019a ) . Our interpretation of this phenomenon , which we will demonstrate in this paper , is that the currently used language modeling objectives enforce a task-bias at the final layers of the Transformer , and that this bias is not beneficial for the learning of semantic sentence representations . 
Reimers & Gurevych ( 2019 ) propose to solve this by pooling a fixed size sentence embedding from the final Transformer layer and fine-tune towards a Natural Language Inference ( NLI ) task , an approach that when applied to Transformers is known as Sentence-BERT ( or S-BERT in short ) . While Hill et al . ( 2016a ) empirically show that fine-tuning language models towards NLI data yields good results on Semantic Textual Similarity ( STS ) , there exists no convincing argument for why NLI is preferred over other tasks . Hence , it is unclear whether the impressive improvements of S-BERT are to be mainly attributed to the NLI task itself , or if this merely trains the model to output sentence embeddings , in turn exposing the semantics learned during pre-training . Since NLI requires labeled data , it would be highly valuable if an alternative method that requires no such labels was possible . ∗Main contribution . We therefore propose a fully self-supervised training objective that aims to remove the bias posed by the pre-training objective and to encourage the model to output semantically useful sentence representations . Our method trains two separate language models on the task of maximizing the dot product between the two models ’ representations for identical sentences , and minimizing the dot product between the models ’ representations for different sentences . When applied to pre-trained BERT models , our method achieves SOTA results for multiple unsupervised STS tasks , and when applied to the S-BERT model it outperforms previous SOTA by a clear margin . To further bolster the robustness of our method , we demonstrate that CT drastically improves STS scores for various models , across multiple languages . Additionally , we contribute with a layer-wise STS survey for the most common Transformer-based language models , in which we find great variability in performance between different architectures and pre-training objectives . 
Finally, by introducing an alteration to the supervised regression task of S-BERT, we are able to improve upon the supervised STS embedding results for all tested models. In summary, the main contributions of our paper are as follows: 1. A novel self-supervised approach for learning sentence embeddings from pre-trained language models. 2. Analytical results of the layer-wise STS performance for commonly used language models. 3. An improvement to the supervised regression task of S-BERT that yields higher performance for all tested models. Code and models are available at Github.com/FreddeFrallan/Contrastive-Tension. 2 RELATED WORK. Where earlier work for learning sentence embeddings focused on the composition of pre-trained word embeddings (Le & Mikolov, 2014; Wieting et al., 2015; Arora et al., 2016), recent work has instead favored extracting features from deep neural networks. The training methods of such networks can be divided into supervised and self-supervised. A systematic comparison of pre-Transformer sentence embedding methods is available in the work of Hill et al. (2016b). Self-supervised methods typically rely on the assumption that sentences sharing similar adjacent sentences have similar meaning. Utilizing this assumption, Kiros et al. (2015) introduced SkipThoughts, which trains an encoder-decoder to reconstruct the surrounding sentences from an encoded passage. Logeswaran & Lee (2018) proposed QuickThoughts, which instead frames the training objective as a sentence context classification task. Recently, and still under peer review, Giorgi et al. (2020) proposed DeCLUTR, which uses a setup similar to QuickThoughts but allows positive sentences to be overlapping or subsuming (one being a subsequence of the other), further improving results. Supervised methods utilize labeled datasets to introduce a semantic learning signal.
As the amount of explicitly labeled STS data is very limited, supervised methods often rely on various proxy tasks where more labeled data is available. Conneau et al. (2017) introduced InferSent, which learns sentence embeddings via a siamese BiLSTM trained on NLI data. The Universal Sentence Encoder (USE) of Cer et al. (2018) is a Transformer encoder trained with both unlabeled data and labeled NLI data. S-BERT by Reimers & Gurevych (2019) adopts the training objective of InferSent but instead applies pre-trained BERT models. Finally, Wang & Kuo (2020) recently proposed S-BERT-WK, an extension to S-BERT that further increases performance by subspace analysis of the model's layer-wise word features. Recently, Grill et al. (2020) introduced the self-supervised BYOL framework, which attains useful image representations comparable with previous supervised methods. Although their method also utilizes two untied dual networks, the main training objective and the underlying motivation for this differ greatly: where BYOL trains using solely positive samples generated via data augmentation, our method mainly aims to dissipate negative examples and relies on two networks in order to stabilize the training process. To the best of our knowledge, our work is the first to suggest learning sentence representations by removing the bias imposed by the pre-training objective. 3 LAYER-WISE STUDY OF TRANSFORMER MODELS. Previous work analyzing the downstream applicability of layer-wise features in Transformer models reports similar trends of performance increasing until the middle layers before decreasing towards the final layers. Merchant et al. (2020) found that the best-suited features for linguistic tasks such as entity typing and relation classification reside in the intermediate layers of BERT, and Chen et al. (2020) found the most useful representations for image classification in the intermediate layers of Image-GPT.
We contribute with a layer-wise study of the semantic quality of the sentence representations found in a selected number of common Transformer architectures. Following the approach of S-BERT, we generate sentence embeddings by mean pooling over the word-piece features of a given layer. These sentence embeddings are directly evaluated on the STS-b test set (Cer et al., 2017), without any additional training, from which we report the Spearman correlation between the cosine similarity of the embeddings and the manually collected similarity scores. The test partition of the dataset contains 1,379 sentence pairs, with decimal human similarity scores ranging from 0.0 (two sentences having completely different meanings) to 5.0 (two sentences having identical meaning). Figure 1 shows the results for BERT, Electra, XLNet and GPT-2, with results for additional models in Appendix B.4. Although the different models display different layer-wise patterns, a common theme is that it is not obvious where to extract features for semantic sentence embeddings; the worst-performing representations are often found in the layers close to the objective function, with the exception of RoBERTa base (Liu et al., 2019b). Considering the discrepancy between BERT and Electra, which share an almost identical architecture but differ drastically in their pre-training objectives, it is clear that the semantic quality of a model's sentence representations is heavily impacted by the choice of pre-training objective. 4 METHOD. To counter the negative trend found in Section 3, where the lacking STS performance of the sentence representations in the final layers became apparent, we define a training objective meant to encourage the model to retain a semantically distinguishable sentence representation until the final layer.
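The evaluation pipeline above reduces to three steps: mean-pool a layer's word-piece features into sentence vectors, compute pairwise cosine similarities, and report Spearman's ρ against the human scores. A minimal sketch, with random arrays standing in for a Transformer layer's token features (the tie-free rank correlation below is a simplification of a full Spearman implementation):

```python
import numpy as np

def mean_pool(token_feats):
    """Sentence embedding = mean over word-piece features, (T, D) -> (D,)."""
    return token_feats.mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def spearman(a, b):
    """Spearman's rho = Pearson correlation of the ranks (no tie handling)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

rng = np.random.default_rng(0)
# Stand-ins for one layer's word-piece features of 4 sentence pairs.
pairs = [(rng.normal(size=(7, 16)), rng.normal(size=(9, 16))) for _ in range(4)]
sims = [cosine(mean_pool(s1), mean_pool(s2)) for s1, s2 in pairs]
gold = [0.0, 1.5, 3.2, 5.0]          # STS-b style human scores in [0, 5]
rho = spearman(np.array(sims), np.array(gold))
print(rho)
```

Running this once per layer, with real token features in place of the random arrays, reproduces the layer-wise curves described in this section.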
We name this method Contrastive Tension (CT): two independent models, with identically initialized weights, are set to maximise the dot product between their sentence representations for identical sentences, and to minimize the dot product between their sentence representations for differing sentences. Hence, the CT objective is defined as: z = f1(s1)^⊤ · f2(s2), L(z, s1, s2) = −log σ(z) if s1 = s2, and −log σ(1 − z) if s1 ≠ s2 (1), where f1 and f2 are two independently parameterized models that, given a sentence s, produce a fixed-size vector representation, and σ refers to the logistic function. Following the work of Reimers & Gurevych (2019), we generate fixed-size sentence representations by mean pooling over the features in the final layer of pre-trained Transformer models. Training data is randomly generated from a given corpus: for each randomly selected sentence s, K negative sentences are sampled to generate K + 1 training samples, by pairing s with the negative sentences and copying s into an identical sentence pair. This yields one positive training sample and K negative training samples. We include the K + 1 training samples in the same batch and always use f2 to embed the K negative sentences (see Appendix A.1 for a visual example). Our approach for generating negative samples is based on the assumption that two randomly selected sentences are very likely to be semantically dissimilar. As the models are initialized with identical weights, the CT objective creates a tension between having the two models retain similar representations for identical sentences and encouraging them to distinguish their representations for differing sentences. Our intuition is that this creates a training dynamic where the two models act as smooth anchors to each other, and the tension to remain synchronized mitigates the downsides of simply distancing the embeddings of differing sentences.
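Written out for one sentence and its K negatives, the CT objective can be computed as follows; the numpy vectors stand in for the pooled sentence embeddings produced by the two models (in the paper these come from the two Transformer copies f1 and f2):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ct_loss(e1_pos, e2_pos, e2_negs):
    """Contrastive Tension loss for one group of K + 1 pairs.

    e1_pos  : model 1's embedding of sentence s             (D,)
    e2_pos  : model 2's embedding of the identical s        (D,)
    e2_negs : model 2's embeddings of K negative sentences  (K, D)
    """
    z_pos = e1_pos @ e2_pos            # identical pair: push sigma(z) towards 1
    z_neg = e2_negs @ e1_pos           # differing pairs: push sigma(1 - z) towards 1
    loss = -np.log(sigmoid(z_pos))
    loss += -np.log(sigmoid(1.0 - z_neg)).sum()
    return loss

rng = np.random.default_rng(0)
D, K = 8, 7
e1 = rng.normal(size=D)
print(ct_loss(e1, e1.copy(), rng.normal(size=(K, D))))   # loss for one K + 1 group
```

Note that f2 always embeds the negatives, matching the batching rule described above; increasing the positive dot product drives the positive term of the loss towards zero.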
This makes CT a non-destructive method for distinguishing the sentence embeddings of semantically dissimilar sentences. 5 EXPERIMENTS. Unless stated otherwise, the following set of hyperparameters is applied when using CT throughout all experiments: training data is randomly sampled from English Wikipedia (see Appendix C.2), where we collect K = 7 negative sentence pairs for each positive sentence pair. The batch size is set to 16, which results in every batch having 2 positive sentence pairs and 14 negative sentence pairs. We apply an RMSProp optimizer (Hinton, 2012) with a fixed learning rate schedule that decreases from 1e−5 to 2e−6 (Appendix A.3). To showcase the robustness and unsupervised applicability of CT, we strictly perform 50,000 update steps before evaluating, and for all unsupervised tasks we report results for the worst-performing of the two models used in the CT setup. The experiment section follows the model naming convention elaborated upon in Appendix A.2, which describes which training objectives have been applied to a model, and in what order. There exists a clear discrepancy between previously reported STS scores for various methods and models. To improve upon this state of confusion, we perform all evaluation with the SentEval package (Conneau & Kiela, 2018), for which we provide code and models for full reproducibility of all tested methods. A discussion regarding our experience with trying to reproduce previous work is available in Appendix A.4. A comprehensive list of all used model checkpoints is available in Appendix C.1
The proposed method counters this bias by introducing a sentence-level self-supervised task where two different models are encouraged to generate similar representations for the same input sentence, and different representations for different inputs. Experiments show the proposed method significantly improves over previous SOTA methods on STS benchmarks. | SP:b5dbddb2672f4567426094a6b52f84fcdce01d50
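The batch construction described in the experiments section (K = 7 negatives per anchor sentence, two such groups forming a batch of 16 pairs) can be sketched as follows; `make_ct_batch` and its parameters are hypothetical names for illustration, not the authors' implementation:

```python
import random

def make_ct_batch(corpus, k=7, groups_per_batch=2, rng=random):
    """Build one CT training batch of (s1, s2, is_positive) pairs.

    For each randomly chosen anchor sentence s: one positive pair (s, s)
    plus k negative pairs (s, random other sentence). With k=7 and two
    groups this yields the paper's batch of 16 pairs (2 positive + 14
    negative). s1 is fed to model f1 and s2 to model f2, so f2 always
    embeds the negatives, as in the paper.
    """
    batch = []
    for _ in range(groups_per_batch):
        s = rng.choice(corpus)
        batch.append((s, s, True))
        for _ in range(k):
            # Assumed semantically dissimilar because it is drawn at random.
            neg = rng.choice(corpus)
            batch.append((s, neg, False))
    return batch
```

Note that a random negative can coincidentally repeat the anchor; the paper's assumption is simply that two randomly selected sentences are very likely dissimilar, so such collisions are rare in a large corpus.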
Semantic Re-tuning with Contrastive Tension | 1 INTRODUCTION. Representation learning concerns the pursuit of automatically learning representations of data that are useful for future extraction of information (Bengio et al., 2013). Recent work has predominantly focused on training and extracting such representations from various deep neural architectures. However, as these deep models are mostly trained via error minimization of an objective function applied to the final layers (Rumelhart et al., 1988), features residing in layers close to the objective function will be task-specific (Yosinski et al., 2014). Therefore, to reduce the representation's bias towards the objective function, it is common to discard one or several of the final layers, or alternatively to consider features of other intermediate layers, as with autoencoders (Rumelhart et al., 1986). One domain where this issue is particularly striking is learning semantic sentence embeddings with deep Transformer networks (Vaswani et al., 2017) pre-trained towards some language modeling task. Although utilizing pre-trained Transformer models such as BERT, XLNet, ELECTRA and GPT-2 (Devlin et al., 2019; Yang et al., 2019; Clark et al., 2020; Brown et al., 2020) has become the dominant approach within the field of Natural Language Processing (NLP), with current State Of The Art (SOTA) results in essentially all NLP tasks belonging to fine-tuned versions of such models, it has been shown that simply extracting features from the layers of such models does not produce competitive sentence embeddings (Reimers & Gurevych, 2019; Liu et al., 2019a). Our interpretation of this phenomenon, which we will demonstrate in this paper, is that the currently used language modeling objectives enforce a task bias at the final layers of the Transformer, and that this bias is not beneficial for the learning of semantic sentence representations.
Reimers & Gurevych (2019) propose to solve this by pooling a fixed-size sentence embedding from the final Transformer layer and fine-tuning towards a Natural Language Inference (NLI) task, an approach that, when applied to Transformers, is known as Sentence-BERT (or S-BERT for short). While Hill et al. (2016a) empirically show that fine-tuning language models towards NLI data yields good results on Semantic Textual Similarity (STS), there exists no convincing argument for why NLI is preferred over other tasks. Hence, it is unclear whether the impressive improvements of S-BERT are mainly to be attributed to the NLI task itself, or whether this merely trains the model to output sentence embeddings, in turn exposing the semantics learned during pre-training. Since NLI requires labeled data, an alternative method that requires no such labels would be highly valuable. We therefore propose a fully self-supervised training objective that aims to remove the bias posed by the pre-training objective and to encourage the model to output semantically useful sentence representations. Our method trains two separate language models on the task of maximizing the dot product between the two models' representations for identical sentences, and minimizing the dot product between the models' representations for different sentences. When applied to pre-trained BERT models, our method achieves SOTA results for multiple unsupervised STS tasks, and when applied to the S-BERT model it outperforms the previous SOTA by a clear margin. To further demonstrate the robustness of our method, we show that CT drastically improves STS scores for various models, across multiple languages. Additionally, we contribute a layer-wise STS survey of the most common Transformer-based language models, in which we find great variability in performance between different architectures and pre-training objectives.
Finally, by introducing an alteration to the supervised regression task of S-BERT, we are able to improve upon the supervised STS embedding results for all tested models. In summary, the main contributions of our paper are as follows: 1. A novel self-supervised approach for learning sentence embeddings from pre-trained language models. 2. Analytical results of the layer-wise STS performance of commonly used language models. 3. An improvement to the supervised regression task of S-BERT that yields higher performance for all tested models. Code and models are available at Github.com/FreddeFrallan/Contrastive-Tension 2 RELATED WORK. Where earlier work on learning sentence embeddings focused on the composition of pre-trained word embeddings (Le & Mikolov, 2014; Wieting et al., 2015; Arora et al., 2016), recent work has instead favored extracting features from deep neural networks. The training methods of such networks can be divided into supervised and self-supervised. A systematic comparison of pre-Transformer sentence embedding methods is available in the work of Hill et al. (2016b). Self-supervised methods typically rely on the assumption that sentences sharing similar adjacent sentences have similar meaning. Utilizing this assumption, Kiros et al. (2015) introduced SkipThoughts, which trains an encoder-decoder to reconstruct surrounding sentences from an encoded passage. Logeswaran & Lee (2018) proposed QuickThoughts, which instead frames the training objective as a sentence context classification task. Recently, and still under peer review, Giorgi et al. (2020) proposed DeCLUTR, which uses a setup similar to QuickThoughts but allows positive sentences to be overlapping or subsuming (one being a subsequence of the other), which further improves results. Supervised methods utilize labeled datasets to introduce a semantic learning signal.
As the amount of explicitly labeled STS data is very limited, supervised methods often rely on various proxy tasks where more labeled data is available. Conneau et al. (2017) introduced InferSent, which learns sentence embeddings via a siamese BiLSTM trained on NLI data. The Universal Sentence Encoder (USE) of Cer et al. (2018) is a Transformer encoder trained with both unlabeled data and labeled NLI data. S-BERT by Reimers & Gurevych (2019) adopts the training objective of InferSent but instead applies pre-trained BERT models. Finally, Wang & Kuo (2020) recently proposed S-BERT-WK, an extension to S-BERT that further increases performance by subspace analysis of the model's layer-wise word features. Recently, Grill et al. (2020) introduced the self-supervised BYOL framework, which attains useful image representations comparable with previous supervised methods. Although their method also utilizes two untied dual networks, the main training objective and the underlying motivation for it differ greatly. Where BYOL trains using solely positive samples generated via data augmentation, our method mainly aims to distance negative examples and relies on two networks in order to stabilize the training process. To the best of our knowledge, our work is the first that suggests learning sentence representations by removing the bias imposed by the pre-training objective. 3 LAYER-WISE STUDY OF TRANSFORMER MODELS. Previous work analyzing the downstream applicability of layer-wise features in Transformer models reports similar trends of performance increasing until the middle layers before decreasing towards the final layers. Merchant et al. (2020) found that the features best suited for linguistic tasks such as entity typing and relation classification reside in the intermediate layers of BERT, and Chen et al. (2020) found the most useful representations for image classification in the intermediate layers of Image-GPT.
We contribute a layer-wise study of the semantic quality of the sentence representations found in a selection of common Transformer architectures. Following the approach of S-BERT, we generate sentence embeddings by mean pooling over the word-piece features of a given layer. These sentence embeddings are directly evaluated on the STS-b test set (Cer et al., 2017), without any additional training, and we report the Spearman correlation between the cosine similarity of the embeddings and the manually collected similarity scores. The test partition of the dataset contains 1,379 sentence pairs, with decimal human similarity scores ranging from 0.0 (the two sentences have completely different meanings) to 5.0 (the two sentences have identical meaning). Figure 1 shows the results for BERT, Electra, XLNet and GPT-2, with results for additional models in Appendix B.4. Although the different models display different layer-wise patterns, a common theme is that it is not obvious where to extract features for semantic sentence embeddings; the worst-performing representations are often found in the layers close to the objective function, with the exception of RoBERTa base (Liu et al., 2019b). Considering the discrepancy between BERT and Electra, which share an almost identical architecture but differ drastically in their pre-training objectives, it is clear that the semantic quality of a model's sentence representations is heavily impacted by the choice of pre-training objective. 4 METHOD. To counter the negative trend found in Section 3, where the lacking STS performance of the sentence representations in the final layers became apparent, we define a training objective meant to encourage the model to retain a semantically distinguishable sentence representation through to the final layer.
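The evaluation protocol described here (mean pooling a layer's token features, scoring pairs by cosine similarity, and correlating with human scores via Spearman correlation) can be sketched as below. This hand-rolled `spearman` ranks via double argsort and therefore ignores tied ranks, which SentEval's implementation handles properly:

```python
import numpy as np

def mean_pool(token_features):
    """Sentence embedding = mean over the layer's token (word-piece) vectors."""
    return np.asarray(token_features, dtype=float).mean(axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks.
    (Assumes no ties; tie-corrected ranking is needed for real STS scores.)"""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])
```

For a layer under study, one would compute `cosine(mean_pool(a), mean_pool(b))` for every test pair (a, b) and report `spearman(similarities, human_scores)`.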
We name this method Contrastive Tension (CT): two independent models, with identically initialized weights, are set to maximize the dot product between their sentence representations for identical sentences, and to minimize the dot product between their sentence representations for differing sentences. Hence, the CT objective is defined as:

$$z = f_1(s_1)^\top \cdot f_2(s_2), \qquad L(z, s_1, s_2) = \begin{cases} -\log \sigma(z) & \text{if } s_1 = s_2 \\ -\log \sigma(1 - z) & \text{if } s_1 \neq s_2 \end{cases} \tag{1}$$

where f1 and f2 are two independently parameterized models that, given a sentence s, produce a fixed-size vector representation, and where σ refers to the logistic function. Following Reimers & Gurevych (2019), we generate fixed-size sentence representations by mean pooling over the features in the final layer of pre-trained Transformer models. Training data is randomly generated from a given corpus: for each randomly selected sentence s, K negative sentences are sampled to generate K + 1 training samples, by pairing s with the negative sentences and copying s into an identical sentence pair. This yields one positive training sample and K negative training samples. We include the K + 1 training samples in the same batch and always use f2 to embed the K negative sentences (see Appendix A.1 for a visual example). Our approach for generating negative samples is based on the assumption that two randomly selected sentences are very likely to be semantically dissimilar. As the models are initialized with identical weights, the CT objective creates a tension between having the two models retain similar representations for identical sentences, while at the same time encouraging the two models to distinguish their representations for differing sentences. Our intuition is that this creates a training dynamic where the two models act as smooth anchors for each other, where the tension to remain synchronized mitigates the downsides of simply distancing the embeddings of differing sentences.
This makes CT a nondestructive method for distinguishing the sentence embeddings of semantically dissimilar sentences. 5 EXPERIMENTS. Unless stated otherwise, the following set of hyperparameters is applied when using CT throughout all experiments: training data is randomly sampled from English Wikipedia (see Appendix C.2), where we collect K = 7 negative sentence pairs for each positive sentence pair. The batch size is set to 16, which results in every batch having 2 positive sentence pairs and 14 negative sentence pairs. We apply an RMSProp optimizer (Hinton, 2012) with a fixed learning rate schedule that decreases from 1e−5 to 2e−6 (Appendix A.3). To showcase the robustness and unsupervised applicability of CT, we strictly perform 50,000 update steps before evaluating, and for all unsupervised tasks we report results for the worst-performing of the two models used in the CT setup. The experiment section follows the model naming convention elaborated upon in A.2, which describes the order in which training objectives have been applied to a model. There exists a clear discrepancy between previously reported STS scores for various methods and models. To improve upon this state of confusion, we perform all evaluations with the SentEval package (Conneau & Kiela, 2018), for which we provide code and models for full reproducibility of all tested methods. A discussion regarding our experience with trying to reproduce previous work is available in Appendix A.4. A comprehensive list of all used model checkpoints is available in Appendix C.1. | The paper studies the problem of finding effective representations for semantic textual similarity (STS). The paper first investigates the effectiveness of pre-trained masked language models (e.g. BERT) on the STS task. They found that different layers of BERT perform differently when employed on the STS task -- in particular, the popular method of using the last layer does not usually lead to good performance.
In fact, the last layers are worse than the preceding layers on STS. This holds universally across several models including BERT, Electra, XLNet, and GPT-2, but not RoBERTa. The paper then proposes a dual contrastive training method (basically using two BERT branches) to further fine-tune BERT, where the additional objective is defined by bringing the two models' outputs closer for the same input sentence and further apart for randomly sampled different sentences. | SP:b5dbddb2672f4567426094a6b52f84fcdce01d50
Semantic Re-tuning with Contrastive Tension | 1 INTRODUCTION. Representation learning concerns the pursuit of automatically learning representations of data that are useful for future extraction of information (Bengio et al., 2013). Recent work has predominantly focused on training and extracting such representations from various deep neural architectures. However, as these deep models are mostly trained via error minimization of an objective function applied to the final layers (Rumelhart et al., 1988), features residing in layers close to the objective function will be task-specific (Yosinski et al., 2014). Therefore, to reduce the representation's bias towards the objective function, it is common to discard one or several of the final layers, or alternatively to consider features of other intermediate layers, as with autoencoders (Rumelhart et al., 1986). One domain where this issue is particularly striking is learning semantic sentence embeddings with deep Transformer networks (Vaswani et al., 2017) pre-trained towards some language modeling task. Although utilizing pre-trained Transformer models such as BERT, XLNet, ELECTRA and GPT-2 (Devlin et al., 2019; Yang et al., 2019; Clark et al., 2020; Brown et al., 2020) has become the dominant approach within the field of Natural Language Processing (NLP), with current State Of The Art (SOTA) results in essentially all NLP tasks belonging to fine-tuned versions of such models, it has been shown that simply extracting features from the layers of such models does not produce competitive sentence embeddings (Reimers & Gurevych, 2019; Liu et al., 2019a). Our interpretation of this phenomenon, which we will demonstrate in this paper, is that the currently used language modeling objectives enforce a task bias at the final layers of the Transformer, and that this bias is not beneficial for the learning of semantic sentence representations.
Reimers & Gurevych (2019) propose to solve this by pooling a fixed-size sentence embedding from the final Transformer layer and fine-tuning towards a Natural Language Inference (NLI) task, an approach that, when applied to Transformers, is known as Sentence-BERT (or S-BERT for short). While Hill et al. (2016a) empirically show that fine-tuning language models towards NLI data yields good results on Semantic Textual Similarity (STS), there exists no convincing argument for why NLI is preferred over other tasks. Hence, it is unclear whether the impressive improvements of S-BERT are mainly to be attributed to the NLI task itself, or whether this merely trains the model to output sentence embeddings, in turn exposing the semantics learned during pre-training. Since NLI requires labeled data, an alternative method that requires no such labels would be highly valuable. We therefore propose a fully self-supervised training objective that aims to remove the bias posed by the pre-training objective and to encourage the model to output semantically useful sentence representations. Our method trains two separate language models on the task of maximizing the dot product between the two models' representations for identical sentences, and minimizing the dot product between the models' representations for different sentences. When applied to pre-trained BERT models, our method achieves SOTA results for multiple unsupervised STS tasks, and when applied to the S-BERT model it outperforms the previous SOTA by a clear margin. To further demonstrate the robustness of our method, we show that CT drastically improves STS scores for various models, across multiple languages. Additionally, we contribute a layer-wise STS survey of the most common Transformer-based language models, in which we find great variability in performance between different architectures and pre-training objectives.
Finally, by introducing an alteration to the supervised regression task of S-BERT, we are able to improve upon the supervised STS embedding results for all tested models. In summary, the main contributions of our paper are as follows: 1. A novel self-supervised approach for learning sentence embeddings from pre-trained language models. 2. Analytical results of the layer-wise STS performance of commonly used language models. 3. An improvement to the supervised regression task of S-BERT that yields higher performance for all tested models. Code and models are available at Github.com/FreddeFrallan/Contrastive-Tension 2 RELATED WORK. Where earlier work on learning sentence embeddings focused on the composition of pre-trained word embeddings (Le & Mikolov, 2014; Wieting et al., 2015; Arora et al., 2016), recent work has instead favored extracting features from deep neural networks. The training methods of such networks can be divided into supervised and self-supervised. A systematic comparison of pre-Transformer sentence embedding methods is available in the work of Hill et al. (2016b). Self-supervised methods typically rely on the assumption that sentences sharing similar adjacent sentences have similar meaning. Utilizing this assumption, Kiros et al. (2015) introduced SkipThoughts, which trains an encoder-decoder to reconstruct surrounding sentences from an encoded passage. Logeswaran & Lee (2018) proposed QuickThoughts, which instead frames the training objective as a sentence context classification task. Recently, and still under peer review, Giorgi et al. (2020) proposed DeCLUTR, which uses a setup similar to QuickThoughts but allows positive sentences to be overlapping or subsuming (one being a subsequence of the other), which further improves results. Supervised methods utilize labeled datasets to introduce a semantic learning signal.
As the amount of explicitly labeled STS data is very limited, supervised methods often rely on various proxy tasks where more labeled data is available. Conneau et al. (2017) introduced InferSent, which learns sentence embeddings via a siamese BiLSTM trained on NLI data. The Universal Sentence Encoder (USE) of Cer et al. (2018) is a Transformer encoder trained with both unlabeled data and labeled NLI data. S-BERT by Reimers & Gurevych (2019) adopts the training objective of InferSent but instead applies pre-trained BERT models. Finally, Wang & Kuo (2020) recently proposed S-BERT-WK, an extension to S-BERT that further increases performance by subspace analysis of the model's layer-wise word features. Recently, Grill et al. (2020) introduced the self-supervised BYOL framework, which attains useful image representations comparable with previous supervised methods. Although their method also utilizes two untied dual networks, the main training objective and the underlying motivation for it differ greatly. Where BYOL trains using solely positive samples generated via data augmentation, our method mainly aims to distance negative examples and relies on two networks in order to stabilize the training process. To the best of our knowledge, our work is the first that suggests learning sentence representations by removing the bias imposed by the pre-training objective. 3 LAYER-WISE STUDY OF TRANSFORMER MODELS. Previous work analyzing the downstream applicability of layer-wise features in Transformer models reports similar trends of performance increasing until the middle layers before decreasing towards the final layers. Merchant et al. (2020) found that the features best suited for linguistic tasks such as entity typing and relation classification reside in the intermediate layers of BERT, and Chen et al. (2020) found the most useful representations for image classification in the intermediate layers of Image-GPT.
We contribute a layer-wise study of the semantic quality of the sentence representations found in a selection of common Transformer architectures. Following the approach of S-BERT, we generate sentence embeddings by mean pooling over the word-piece features of a given layer. These sentence embeddings are directly evaluated on the STS-b test set (Cer et al., 2017), without any additional training, and we report the Spearman correlation between the cosine similarity of the embeddings and the manually collected similarity scores. The test partition of the dataset contains 1,379 sentence pairs, with decimal human similarity scores ranging from 0.0 (the two sentences have completely different meanings) to 5.0 (the two sentences have identical meaning). Figure 1 shows the results for BERT, Electra, XLNet and GPT-2, with results for additional models in Appendix B.4. Although the different models display different layer-wise patterns, a common theme is that it is not obvious where to extract features for semantic sentence embeddings; the worst-performing representations are often found in the layers close to the objective function, with the exception of RoBERTa base (Liu et al., 2019b). Considering the discrepancy between BERT and Electra, which share an almost identical architecture but differ drastically in their pre-training objectives, it is clear that the semantic quality of a model's sentence representations is heavily impacted by the choice of pre-training objective. 4 METHOD. To counter the negative trend found in Section 3, where the lacking STS performance of the sentence representations in the final layers became apparent, we define a training objective meant to encourage the model to retain a semantically distinguishable sentence representation through to the final layer.
We name this method Contrastive Tension (CT): two independent models, with identically initialized weights, are set to maximize the dot product between their sentence representations for identical sentences, and to minimize the dot product between their sentence representations for differing sentences. Hence, the CT objective is defined as:

$$z = f_1(s_1)^\top \cdot f_2(s_2), \qquad L(z, s_1, s_2) = \begin{cases} -\log \sigma(z) & \text{if } s_1 = s_2 \\ -\log \sigma(1 - z) & \text{if } s_1 \neq s_2 \end{cases} \tag{1}$$

where f1 and f2 are two independently parameterized models that, given a sentence s, produce a fixed-size vector representation, and where σ refers to the logistic function. Following Reimers & Gurevych (2019), we generate fixed-size sentence representations by mean pooling over the features in the final layer of pre-trained Transformer models. Training data is randomly generated from a given corpus: for each randomly selected sentence s, K negative sentences are sampled to generate K + 1 training samples, by pairing s with the negative sentences and copying s into an identical sentence pair. This yields one positive training sample and K negative training samples. We include the K + 1 training samples in the same batch and always use f2 to embed the K negative sentences (see Appendix A.1 for a visual example). Our approach for generating negative samples is based on the assumption that two randomly selected sentences are very likely to be semantically dissimilar. As the models are initialized with identical weights, the CT objective creates a tension between having the two models retain similar representations for identical sentences, while at the same time encouraging the two models to distinguish their representations for differing sentences. Our intuition is that this creates a training dynamic where the two models act as smooth anchors for each other, where the tension to remain synchronized mitigates the downsides of simply distancing the embeddings of differing sentences.
This makes CT a nondestructive method for distinguishing the sentence embeddings of semantically dissimilar sentences. 5 EXPERIMENTS. Unless stated otherwise, the following set of hyperparameters is applied when using CT throughout all experiments: training data is randomly sampled from English Wikipedia (see Appendix C.2), where we collect K = 7 negative sentence pairs for each positive sentence pair. The batch size is set to 16, which results in every batch having 2 positive sentence pairs and 14 negative sentence pairs. We apply an RMSProp optimizer (Hinton, 2012) with a fixed learning rate schedule that decreases from 1e−5 to 2e−6 (Appendix A.3). To showcase the robustness and unsupervised applicability of CT, we strictly perform 50,000 update steps before evaluating, and for all unsupervised tasks we report results for the worst-performing of the two models used in the CT setup. The experiment section follows the model naming convention elaborated upon in A.2, which describes the order in which training objectives have been applied to a model. There exists a clear discrepancy between previously reported STS scores for various methods and models. To improve upon this state of confusion, we perform all evaluations with the SentEval package (Conneau & Kiela, 2018), for which we provide code and models for full reproducibility of all tested methods. A discussion regarding our experience with trying to reproduce previous work is available in Appendix A.4. A comprehensive list of all used model checkpoints is available in Appendix C.1. | The paper investigates a new training objective, contrastive tension (CT), for obtaining unsupervised sentence embeddings. The objective operates by initializing two models with identical weights and then training the models to produce similar sentence embeddings to each other for identical sentences and dissimilar representations for different sentences.
This objective encourages the paired models to agree on positive examples, but at the same time encourages divergence in their model weights by providing different sentences to each encoder for the negative pairs. The new objective is applied as an unsupervised fine-tuning task for BERT, Sentence-BERT, DistilBERT, multilingual BERT, XLNet and XLM-R. | SP:b5dbddb2672f4567426094a6b52f84fcdce01d50
Weak and Strong Gradient Directions: Explaining Memorization, Generalization, and Hardness of Examples at Scale | 1 INTRODUCTION . Generalization in over-parameterized neural networks trained using Stochastic Gradient Descent ( SGD ) is not well understood . Such networks typically have sufficient capacity to memorize their training set ( Zhang et al. , 2017 ) which naturally leads to the question : Among all the maps that are consistent with the training set , why does SGD learn one that generalizes well to the test set ? This question has spawned a lot of research in the past few years ( Arora et al. , 2018 ; Arpit et al. , 2017 ; Bartlett et al. , 2017 ; Belkin et al. , 2019 ; Fort et al. , 2020 ; Kawaguchi et al. , 2017 ; Neyshabur et al. , 2018 ; Sankararaman et al. , 2019 ; Rahaman et al. , 2019 ; Zhang et al. , 2017 ) . There have been many attempts to extend classical algorithm-independent techniques for reasoning about generalization ( e.g. , VC-dimension ) to incorporate the “ implicit bias ” of SGD to get tighter bounds ( by limiting the size of the hypothesis space to that reachable through SGD ) . Although this line of work is too large to review here , the recent paper of Nagarajan & Kolter ( 2019 ) provides a nice overview . However , they also point out some fundamental problems with this approach ( particularly , poor asymptotics ) , and come to the conclusion that the underlying proof technique itself ( uniform convergence ) may be inadequate . They argue instead for looking at algorithmic stability ( Bousquet & Elisseeff , 2002 ) . While there has been work on analysing the algorithmic stability of SGD ( Hardt et al. , 2016 ; Kuzborskij & Lampert , 2018 ) , it does not take into account the training data . Since SGD can memorize training data with random labels , and yet generalize on real data ( i.e. , its generalization behavior is data-dependent ( Arpit et al. 
, 2017)), any such analysis must lead to vacuous bounds in practical settings (Zhang et al., 2017). Thus, in order for an algorithmic-stability-based argument to work, what is needed is an approach that takes into account both the algorithmic details of SGD and the training data. Recently, a new approach for understanding generalization along these lines has been proposed in Chatterjee (2020). Called the Coherent Gradients Hypothesis (CGH), its key observation is that descent directions that are common to multiple examples (i.e., similar) add up in the overall gradient (i.e., reinforce each other), whereas directions that are idiosyncratic to particular examples fail to add up. Thus, the biggest changes to the network parameters are those that benefit multiple examples. In other words, certain directions in the tangent space of the loss function are "strong" gradient directions supported by multiple examples, whereas other directions are "weak" directions supported by only a few examples. Intuitively (CGH is only a qualitative theory at this point), strong directions are algorithmically stable (in the sense of Bousquet & Elisseeff (2002), i.e., altered marginally by the removal of a single example), whereas weak directions are algorithmically unstable (they could disappear entirely if the example supporting them is removed). Therefore, a change to the parameters along a strong direction should generalize better than one along a weak direction. Since the overall gradient is the mean of per-example gradients, if strong directions exist, the overall gradient has large components along them, and thus the parameter updates are biased towards algorithmic stability. Since CGH is a causal explanation for generalization, Chatterjee (2020) tested the theory by performing two causal interventions.
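CGH's central claim (gradient components shared across examples reinforce under averaging, while idiosyncratic components cancel) can be illustrated with a small synthetic example; this construction is ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 1000  # examples, parameters

# A "strong" direction shared by every example's gradient.
common = np.zeros(d)
common[0] = 1.0

# Each example also contributes its own random unit-norm "weak" direction,
# supported by that example alone.
weak = rng.standard_normal((n, d))
weak /= np.linalg.norm(weak, axis=1, keepdims=True)

per_example = common + weak     # per-example gradients, shape (n, d)
g = per_example.mean(axis=0)    # the averaged gradient SGD descends along

strong_part = g[0]              # stays close to 1: the shared direction survives
weak_part = np.linalg.norm(g[1:])  # shrinks roughly like 1/sqrt(n)
```

Here `weak_part` is much smaller than `strong_part`, matching the intuition that the biggest parameter updates are along directions supported by many examples.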
Although they found good agreement between the qualitative predictions of the theory and experiments , an important limitation of their work is that their experiments were on shallow ( 1-3 hidden layers ) fully connected networks trained on MNIST using SGD with a fixed learning rate . In this work , we test CGH on large convolutional networks such as ResNet , Inception and VGG on ImageNet . While one of the tests of Chatterjee ( 2020 ) ( reducing similarity ) scales to this setting , the more compelling test ( suppressing weak gradients by winsorization ) does not . We propose a new class of scalable techniques for suppressing weak gradients , and also propose an entirely new test of CGH which is not based on causal intervention but on analyzing why some examples are learned earlier in training than others . 2 PRELIMINARY : REDUCING SIMILARITY ON IMAGENET . One test of CGH proposed in Chatterjee ( 2020 ) is to study how dataset similarity impacts training . Since directly studying similarity is difficult because which examples are considered similar may change during training ( in CGH , examples are similar if their gradients are similar ) , Chatterjee ( 2020 ) proposed adding label noise to a dataset , based on the intuition that no matter what the notion of similarity is , adding label noise is likely to decrease it . Therefore , if CGH is true , we should expect that :
• As the label noise increases , the rate at which examples are learned decreases ,
• Examples whose labels have not been corrupted ( pristine examples ) should be learned faster than the rest ( corrupt examples ) , and ,
• With increasing noise , since there are fewer pristine examples , the rate at which they are learned should decrease .
As a preliminary experiment , we ran this test on ImageNet and the results for ResNet-18 are shown in Figure 1 . The results for Inception-V3 and VGG-13 are very similar ( please see Appendix A ) . 
We note the good agreement with the predictions from CGH , thus providing initial evidence that CGH holds at scale . 3 ABLATING SGD TO TEST THE COHERENT GRADIENT HYPOTHESIS : SCALABLE TECHNIQUES TO SUPPRESS WEAK GRADIENT DIRECTIONS . Since weak directions are supported by few examples , CGH holds that overfitting and memorization in SGD are caused by descending down weak directions . The original CGH paper proposed to test CGH by modifying SGD to suppress weak directions in order to verify that this significantly reduces overfitting ( Chatterjee , 2020 ) , i.e. , improves generalization through greater algorithmic stability ( Bousquet & Elisseeff , 2002 ) . 3.1 REVIEW OF THE WINSORIZATION TECHNIQUE . The test modified SGD by using the ( coordinate-wise ) winsorized mean of the per-example gradients in a mini-batch ( instead of the usual mean ) to update the weights in each step . Everything else , including the learning rate , was kept the same . The winsorized mean limits the influence of outliers by clipping them to a specified percentile of the data ( called the winsorization level ) . As expected from CGH , they found that as the winsorization level increased , the rate of overfitting decreased . We replicated this study with a ResNet-32 on CIFAR-10 and confirmed the results of the original study on this new dataset and architecture ( please see Appendix B ) . We note that since the original study was only on MNIST , which has a low generalization gap , the effect of suppressing weak directions only manifested with label noise . On CIFAR-10 , even the real label case ( i.e. , 0 % noise ) has significant overfitting , which is reduced by winsorization . This provides stronger evidence in support of CGH . However , a big challenge with winsorization is the need to compute and store per-example gradients , which makes training much slower . For example , in our CIFAR-10 experiment we had to reduce the mini-batch size to 32 to make training feasible . 
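The coordinate-wise winsorized mean used in the test above can be sketched as follows. This is an illustrative version only: the exact percentile convention of the original study is not specified here, and the nearest-rank rule below is an assumption.

```python
# Sketch of a coordinate-wise winsorized mean for one gradient coordinate:
# values are clipped to the [level, 100-level] percentile range before
# averaging. Nearest-rank percentiles; conventions in the paper may differ.

def winsorized_mean(values, level):
    """Clip `values` to the [level, 100-level] percentile range, then average."""
    s = sorted(values)
    n = len(s)
    lo = s[int(level / 100.0 * (n - 1))]
    hi = s[int((100.0 - level) / 100.0 * (n - 1))]
    clipped = [min(max(v, lo), hi) for v in values]
    return sum(clipped) / n

# One gradient coordinate across 5 examples; one example is an outlier.
coord = [0.1, 0.2, 0.15, 0.12, 10.0]
print(winsorized_mean(coord, 0))    # plain mean, dominated by the outlier
print(winsorized_mean(coord, 25))   # outlier clipped, mean stays near the bulk
```

In the actual test, this operation is applied to every parameter coordinate across the per-example gradients of a mini-batch, which is why per-example gradients must be materialized.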
This is exacerbated with larger models and more complex datasets such as ImageNet , and new techniques are needed to scale up the test . 3.2 TECHNIQUES BASED ON MEDIAN OF MEANS . We start by observing that the problem of suppressing weak gradient directions can be posed as a robust mean estimation problem from the robust statistics literature ( Huber , 1981 ) . Although winsorization is one way of obtaining robust mean estimates , there are others . In particular , the median of means algorithm ( Minsker , 2013 ) is an optimal estimation technique in the sense that the deviation from the true mean is bounded above by O(1/√m) with high probability ( where m is the number of samples ) . The sample mean satisfies this property only if the observations are Gaussian . The main idea of the median of means algorithm is to divide the samples into k groups , compute the sample mean of each group , and then return the geometric median of these k means . The geometric median of k vectors x_1 , x_2 , … , x_k ∈ ℝ^d is the vector y* = argmin_{y ∈ ℝ^d} ∑_{i=1}^{k} ‖ y − x_i ‖_2 . When d = 1 , the geometric median is just the ordinary median of scalars . However , in high dimensions the algorithm to compute the geometric median ( Weiszfeld , 1937 ) is iterative and is expensive to integrate into a traditional training loop . A simpler technique is to apply the median of means algorithm to each coordinate , which gives a dimension-dependent bound on the performance of the estimator . The M3 Technique . The most obvious way to apply this idea to SGD is to divide a mini-batch into k groups of equal size . We compute the mean gradients of each group as usual , and then take their coordinate-wise median . The median is then used to update the weights of the network.¹ Even though the algorithm is straightforward , its most efficient implementation ( i.e.
, where the k groups are large and processed in parallel ) on modern hardware accelerators requires low-level changes to the stack to allow for a median-based aggregation instead of the mean . Therefore , in this work , we simply compute the mean gradient of each group as a separate micro-batch and only update the network weights with the median every k micro-batches , i.e. , we process the groups serially . ¹Since we are simply replacing the mini-batch gradient with a more robust alternative , this technique may be used to study optimizers other than vanilla SGD ( such as SGD with momentum , ADAM , etc. ) . A systematic exploration of that is outside the scope of this study . In the serial implementation , k = 3 is a sweet spot . We have to remember only 2 previous micro-batches , and since median(x_1 , x_2 , x_3) = ∑_i x_i − min_i x_i − max_i x_i ( where i ∈ { 1 , 2 , 3 } ) , we can compute the median with simple operations . We call this median-of-3 micro-batches ( M3 ) . Example ( Effectiveness of M3 ) . Fix d and consider a set of m < d training examples . At some point in training , let g_i ∈ ℝ^d ( 1 ≤ i ≤ m ) be their gradients . Suppose further that each gradient g_i has an idiosyncratic component u_i and a common component c , i.e. , g_i = u_i + c with u_i · u_j = 0 ( for j ≠ i ) and u_i · c = 0 . Since M3 involves coordinate-wise operations , let us assume that we are working in a basis where the non-zero coordinates of the u_i do not overlap with each other or with c . Now , consider a mini-batch of size 3b ≤ m constructed by picking 3b examples ( i.e. , their gradients ) uniformly at random without replacement . The expected value of the update if we take an SGD step ( i.e. , simply take the mean of the mini-batch ) is g_SGD = c + (1/m) ∑_i u_i . On the other hand , the expected value of the update with M3 is g_M3 = c , since any non-zero coordinate of any u_i cannot be the median value for that coordinate across the 3 groups , since it can appear at most once . 
In this extreme case , we see that M3 suppresses the weak gradient directions ( the u_i ) while preserving the strong gradient direction c . The RM3 Technique . Now , rather than update the weights every k micro-batches , as we do in M3 , we can update the weights every micro-batch using the median from that micro-batch and the previous two . In this rolling setup , there is no longer a difference between mini-batches and micro-batches , i.e. , this is essentially the same as computing the median over the current mini-batch gradient and the mini-batch gradients from the previous 2 steps . We call this rolling median of 3 mini-batches ( RM3 ) . Two remarks concerning RM3 are in order . First , RM3 may be seen as an approximation to M3 chosen for implementation efficiency . The assumption is that the loss function does not change very much in 1 gradient step , i.e. , it is locally stationary . Second , since RM3 uses recent gradient history , one may be tempted to think of RM3 as a form of momentum , but that would be wrong . We can understand this better by replacing the median operation in RM3 with the mean . We call this RA3 , for rolling average of 3 mini-batches . As we shall see , it is the median v/s mean that makes a significant difference in suppressing weak gradient directions , not the rolling v/s non-rolling . Schematically , in their ability to suppress weak directions , we find that SGD ≈ RA3 ≪ RM3 < M3 . ( 1 ) | The paper gives algorithms to test the Coherent Gradients Hypothesis (CGH) for larger models and larger datasets. CGH is a recently proposed hypothesis that explains generalization of neural networks using algorithmic stability and other empirical observations from deep learning. It claims that there are strong gradient directions which are shared by many examples and that those directions lead to better generalization, whereas overfitting occurs along weak gradient directions to which only a few examples contribute. 
This paper extends the experiments to larger datasets like ImageNet and CIFAR and bigger models like ResNet, Inception, and VGG, whereas the original paper looked at small fully connected networks on the MNIST dataset. To test this hypothesis on larger datasets, they pose suppressing weak gradients as a robust mean estimation problem and propose two algorithms. This paper also gives another empirical test that can be used to confirm CGH. | SP:02bc759eabcd069bc67c0f50daa6fe99f82779f0 
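The serial M3 update described in Section 3.2 can be sketched in a few lines, using the median-of-3 identity noted in the paper. This is an illustrative sketch, not the paper's implementation; gradients are plain lists of floats here, standing in for the flattened parameter gradients of three consecutive micro-batches.

```python
# Sketch of the serial M3 update: the coordinate-wise median of 3 micro-batch
# mean gradients, using median(x1, x2, x3) = x1 + x2 + x3 - min(...) - max(...).

def median3(a, b, c):
    return a + b + c - min(a, b, c) - max(a, b, c)

def m3_update(micro_batch_grads):
    """Coordinate-wise median of exactly 3 micro-batch mean gradients."""
    g1, g2, g3 = micro_batch_grads
    return [median3(x, y, z) for x, y, z in zip(g1, g2, g3)]

# A weak direction (last coordinate) appears in only one micro-batch,
# so the median suppresses it; the shared component survives.
g1 = [1.0, 0.5, 0.0]
g2 = [1.0, 0.5, 5.0]   # idiosyncratic spike in the last coordinate
g3 = [1.0, 0.5, 0.0]
print(m3_update([g1, g2, g3]))  # -> [1.0, 0.5, 0.0]
```

In a training loop, the result would be used in place of the mini-batch mean gradient for the weight update every 3 micro-batches.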
The paper aims at providing experimental evidence to support the Coherent Gradients Hypothesis (CGH), which was published previously. The hypothesis suggests that the ability of large neural networks to generalise comes from the aligned gradients of the examples in the dataset. Once SGD follows common gradients, ignoring the rare directions, the model will generalise better. For the experimental evidence, the authors present two algorithms for approximately suppressing insignificant gradient directions in large-scale networks and datasets, as opposed to the original CGH paper, where experiments were performed on MNIST with small fully connected networks. The proposed techniques allow for large-scale experiments and show a decreased generalisation gap compared to vanilla SGD. The authors also propose to analyse “easy” examples (ones that are learned first) and “hard” examples (ones that require lots of training). The claim is that the easy examples are the ones having coherent gradients, while hard ones push the network towards rare directions. | SP:02bc759eabcd069bc67c0f50daa6fe99f82779f0 
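The rolling RM3 and RA3 variants from Section 3.2 can be sketched as follows. This is a minimal illustration, not the paper's code; the function names are ours, and gradients are single-coordinate lists for clarity.

```python
# Sketch of RM3 vs RA3: update every step using the coordinate-wise median
# (RM3) or mean (RA3) of the current mini-batch gradient and the previous two.
from collections import deque

def rolling_updates(grad_stream, combine):
    """Yield one combined update per step once 3 gradients are available."""
    window = deque(maxlen=3)
    for g in grad_stream:
        window.append(g)
        if len(window) == 3:
            yield [combine(col) for col in zip(*window)]

def median_of_3(col):
    return sum(col) - min(col) - max(col)

def mean_of_3(col):
    return sum(col) / 3.0

# One coordinate with an idiosyncratic spike at step 2.
stream = [[1.0], [1.0], [7.0], [1.0]]
print(list(rolling_updates(stream, median_of_3)))  # RM3: spike suppressed
print(list(rolling_updates(stream, mean_of_3)))    # RA3: spike leaks into both updates
```

The only difference between the two runs is the combining function, which mirrors the paper's point that it is the median v/s mean, not the rolling window, that suppresses weak directions.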
Weak and Strong Gradient Directions: Explaining Memorization, Generalization, and Hardness of Examples at Scale | 1 INTRODUCTION . Generalization in over-parameterized neural networks trained using Stochastic Gradient Descent ( SGD ) is not well understood . Such networks typically have sufficient capacity to memorize their training set ( Zhang et al. , 2017 ) which naturally leads to the question : Among all the maps that are consistent with the training set , why does SGD learn one that generalizes well to the test set ? This question has spawned a lot of research in the past few years ( Arora et al. , 2018 ; Arpit et al. , 2017 ; Bartlett et al. , 2017 ; Belkin et al. , 2019 ; Fort et al. , 2020 ; Kawaguchi et al. , 2017 ; Neyshabur et al. , 2018 ; Sankararaman et al. , 2019 ; Rahaman et al. , 2019 ; Zhang et al. , 2017 ) . There have been many attempts to extend classical algorithm-independent techniques for reasoning about generalization ( e.g. , VC-dimension ) to incorporate the “ implicit bias ” of SGD to get tighter bounds ( by limiting the size of the hypothesis space to that reachable through SGD ) . Although this line of work is too large to review here , the recent paper of Nagarajan & Kolter ( 2019 ) provides a nice overview . However , they also point out some fundamental problems with this approach ( particularly , poor asymptotics ) , and come to the conclusion that the underlying proof technique itself ( uniform convergence ) may be inadequate . They argue instead for looking at algorithmic stability ( Bousquet & Elisseeff , 2002 ) . While there has been work on analysing the algorithmic stability of SGD ( Hardt et al. , 2016 ; Kuzborskij & Lampert , 2018 ) , it does not take into account the training data . Since SGD can memorize training data with random labels , and yet generalize on real data ( i.e. , its generalization behavior is data-dependent ( Arpit et al. 
, 2017 ) ) , any such analysis must lead to vacuous bounds in practical settings ( Zhang et al. , 2017 ) . Thus , in order for an algorithmic stability based argument to work , what is needed is an approach that takes into account both the algorithmic details of SGD as well as the training data . Recently , a new approach , for understanding generalization along these lines has been proposed in Chatterjee ( 2020 ) . Called the Coherent Gradients Hypothesis ( CGH ) , the key observation is that descent directions that are common to multiple examples ( i.e. , similar ) add up in the overall gradient ( i.e. , reinforce each other ) whereas directions that are idiosyncratic to particular examples fail to add up . Thus , the biggest changes to the network parameters are those that benefit multiple examples . In other words , certain directions in the tangent space of the loss function are “ strong ” gradient directions supported by multiple examples whereas other directions are “ weak ” directions supported by only a few examples . Intuitively–and CGH is only a qualitative theory at this point–strong directions are ( algorithmically ) stable ( in the sense of Bousquet & Elisseeff ( 2002 ) , i.e. , altered marginally by the removal of a single example ) whereas weak directions are ( algorithmically ) unstable ( could disappear entirely if the example supporting it is removed ) . Therefore , a change to the parameters along a strong direction should generalize better than one along a weak direction . Since the overall gradient is the mean of per-example gradients , if strong directions exist , the overall gradient has large components along it , and thus the parameter updates are biased towards algorithmic stability . Since CGH is a causal explanation for generalization , Chatterjee ( 2020 ) tested the theory by performing two causal interventions . 
Although they found good agreement between the qualitative predictions of the theory and experiments , an important limitation of their work is that their experiments were on shallow ( 1-3 hidden layers ) fully connected networks trained on MNIST using SGD with a fixed learning rate . In this work , we test CGH on large convolutional networks such as ResNet , Inception and VGG on ImageNet . While one of the tests of Chatterjee ( 2020 ) ( reducing similarity ) scales to this setting , the more compelling test ( suppressing weak gradients by winsorization ) does not . We propose a new class of scalable techniques for suppressing weak gradients , and also propose an entirely new test of CGH which is not based on causal intervention but on analyzing why some examples are learned earlier in training than others . 2 PRELIMINARY : REDUCING SIMILARITY ON IMAGENET . One test of CGH proposed in Chatterjee ( 2020 ) is to study how dataset similarity impacts training . Since directly studying similarity is difficult because which examples are considered similar may change during training ( in CGH examples are similar if their gradients are similar ) , Chatterjee ( 2020 ) proposed adding label noise to a dataset based on the intuition is that no matter what the notion of similarity , adding label noise is likely to decrease it . Therefore , if CGH is true , we should expect that : • As the label noise increases , the rate at which examples are learned decreases , • Examples whose labels have not been corrupted ( pristine examples ) should be learned faster than the rest ( corrupt examples ) , and , • With increasing noise , since there are fewer pristine examples , the rate at which they are learned should decrease . As preliminary experiment , we ran this test on ImageNet and the results for ResNet-18 are shown in Figure 1 . The results for Inception-V3 and VGG-13 are very similar ( please see Appendix A ) . 
We note the good agreement with the predictions from CGH thus providing initial evidence that CGH holds at scale . 3 ABLATING SGD TO TEST THE COHERENT GRADIENT HYPOTHESIS : SCALABLE TECHNIQUES TO SUPPRESS WEAK GRADIENT DIRECTIONS . Since weak directions are supported by few examples , CGH holds that overfitting and memorization in SGD is caused by descending down weak directions . The original CGH paper proposed to test CGH by modifying SGD to suppressing weak directions in order to verify that it significantly reduces overfitting ( Chatterjee , 2020 ) , i.e. , improves generalization through greater algorithmic stability ( Bousquet & Elisseeff , 2002 ) . 3.1 REVIEW OF THE WINSORIZATION TECHNIQUE . The test modified SGD by using the ( coordinate-wise ) winsorized mean of the per-example gradients in a mini-batch ( instead of the usual mean ) to update the weights in each step . Everything else , including learning rate was kept the same . The winsorized mean limits the influence of outliers by clipping them to a specified percentile of the data ( called the winsorization level ) . As expected from CGH , they found that as the winsorization level increased , the rate of overfitting decreased . We replicated this study with a ResNet-32 on CIFAR-10 and confirmed the results of the original study on this new dataset and architecture ( please see Appendix B ) . We note that since the original study was only on MNIST which has low generalization gap , the effect of suppressing weak directions only manifested with label noise . On CIFAR-10 even the real label case ( i.e. , 0 % noise ) has significant overfitting which is reduced by winsorization . This provides stronger evidence in support of CGH . However , a big challenge with winsorization is the need to compute and store per-example gradients which makes training much slower . For example , in our CIFAR-10 experiment we had to reduce the mini-batch size to 32 to make training feasible . 
This is exacerbated with larger models and more complex datasets such as ImageNet , and new techniques are needed to scale up the test . 3.2 TECHNIQUES BASED ON MEDIAN OF MEANS . We start by observing that the problem of suppressing weak gradient directions can be posed as a robust mean estimation problem from the robust statistics literature ( Huber , 1981 ) . Although winsorization is one way of obtaining robust mean estimates , there are others . In particular , the median of means algorithm ( Minsker , 2013 ) is an optimal estimation technique in the sense that deviation from the true mean is bounded above by O ( 1/ √ m ) with high probability ( m is the number of samples ) . The sample mean satisfies this property only if the observations are Gaussian . The main idea of the median of means algorithm is to divide the samples into k groups , computing the sample mean of each group , and then returning the geometric median of these k means . The geometric median of k vectors x1 , x2 , . . . xk ∈ Rd is the vector y∗ such that y∗ = argminy∈Rd ∑k i=1 ‖ y − xi ‖2 . When d = 1 , the geometric median is just the ordinary median of scalars . However , in high dimensions the algorithm to compute the geometric median ( Weiszfeld , 1937 ) is iterative and is expensive to integrate in a traditional training loop . A simpler technique is to apply the median of means algorithm to each coordinate that gives a dimension dependent bound on the performance of the estimator . The M3 Technique . The most obvious way to apply this idea to SGD is to divide a mini-batch into k groups of equal size . We compute the mean gradients of each group as usual , and then take their coordinate-wise median . The median is then used to update the weights of the network.1 Even though the algorithm is straightforward , its most efficient implementation ( i.e. 
, where the k groups are large and processed in parallel) on modern hardware accelerators requires low-level changes to the stack to allow for a median-based aggregation instead of the mean. Therefore, in this work, we simply compute the mean gradient of each group as a separate micro-batch and only update the network weights with the median every k micro-batches, i.e., we process the groups serially. ¹ Since we are simply replacing the mini-batch gradient with a more robust alternative, this technique may be used to study optimizers other than vanilla SGD (such as SGD with momentum, ADAM, etc.). A systematic exploration of that is outside the scope of this study. In the serial implementation, k = 3 is a sweet spot. We have to remember only 2 previous micro-batches, and since median(x_1, x_2, x_3) = Σ_i x_i − min_i x_i − max_i x_i (where i ∈ {1, 2, 3}), we can compute the median with simple operations. We call this median-of-3 micro-batches (M3). Example (Effectiveness of M3). Fix d and consider a set of m < d training examples. At some point in training, let g_i ∈ R^d (1 ≤ i ≤ m) be their gradients. Suppose further that each gradient g_i has an idiosyncratic component u_i and a common component c, i.e., g_i = u_i + c with u_i · u_j = 0 (for j ≠ i) and u_i · c = 0. Since M3 involves coordinate-wise operations, let us assume that we are working in a basis where the non-zero coordinates of the u_i do not overlap with each other or with c. Now, consider a mini-batch of size 3b ≤ m constructed by picking 3b examples (i.e., their gradients) uniformly at random without replacement. The expected value of the update if we take an SGD step (i.e., simply take the mean of the mini-batch) is g_SGD = c + (1/m) Σ_i u_i. On the other hand, the expected value of the update with M3 is g_M3 = c, since any non-zero coordinate of any u_i cannot be the median value for that coordinate across the 3 groups, as it can appear at most once.
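The median-of-3 identity and the worked example above can be sketched in a few lines (the helper name is hypothetical; the three inputs stand for the mean gradients of three micro-batches coming out of a training loop):

```python
import numpy as np

def m3_median(g1, g2, g3):
    """Coordinate-wise median of three micro-batch mean gradients,
    using median(x1, x2, x3) = x1 + x2 + x3 - min - max."""
    g1, g2, g3 = (np.asarray(g, dtype=float) for g in (g1, g2, g3))
    lo = np.minimum(np.minimum(g1, g2), g3)
    hi = np.maximum(np.maximum(g1, g2), g3)
    return g1 + g2 + g3 - lo - hi

# Toy version of the example: a common component c shared by all three
# groups survives, while an idiosyncratic component appearing in only
# one group is suppressed by the median.
c = np.array([1.0, 1.0])
g1 = c + np.array([5.0, 0.0])   # idiosyncratic spike in coordinate 0
g2 = c
g3 = c + np.array([0.0, 3.0])   # idiosyncratic spike in coordinate 1
```

Coordinate 0 sees the values (6, 1, 1) and coordinate 1 sees (1, 1, 4); the coordinate-wise median recovers c = (1, 1) in both, while the plain mean would retain part of each spike.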
In this extreme case, we see that M3 suppresses the weak gradient directions (the u_i) while preserving the strong gradient direction c. The RM3 Technique. Now, rather than update the weights every k micro-batches, as we do in M3, we can update the weights every micro-batch using the median from that micro-batch and the previous two. In this rolling setup, there is no longer a difference between mini-batches and micro-batches, i.e., this is essentially the same as computing the median over the current mini-batch gradient and the mini-batch gradients from the previous 2 steps. We call this rolling median of 3 mini-batches (RM3). Two remarks concerning RM3 are in order. First, RM3 may be seen as an approximation to M3 chosen for implementation efficiency. The assumption is that the loss function does not change very much in 1 gradient step, i.e., it is locally stationary. Second, since RM3 uses recent gradient history, one may be tempted to think of RM3 as a form of momentum, but that would be wrong. We can understand this better by replacing the median operation in RM3 with the mean. We call this RA3, for rolling average of 3 mini-batches. As we shall see, it is the median vs. the mean that makes a significant difference in suppressing weak gradient directions, not the rolling vs. non-rolling aggregation. Schematically, in their ability to suppress weak directions, we find that SGD ≈ RA3 ≪ RM3 < M3. (1) | Builds on and tests the "coherent gradients hypothesis" (CGH), which proposes that SGD is able to generalize well because of 'coherence' (similar direction) of gradients. The methodology of CGH involved comparing gradient directions on individual examples, infeasible for large datasets, so this work proposes and compares two mean- and median-based methods for testing CGH at scale.
It also investigates easy/hard examples and thereby proposes a test of coherence of gradients based on the number of times an example of a certain difficulty is correctly classified. | SP:02bc759eabcd069bc67c0f50daa6fe99f82779f0 |
Unsupervised Hierarchical Concept Learning | 1 INTRODUCTION. Consider a video (Figure 1) that demonstrates how to cook an egg. Humans subconsciously learn concepts (such as boiling water) that describe different skills in such demonstrations (Pammi et al., 2004). These learned skills can be composed and reused in different ways to learn new concepts. Discovering such concepts automatically from demonstration data is a non-trivial problem. Shankar et al. (2019) introduce a sequence-to-sequence architecture that clusters long-horizon action trajectories into shorter temporal skills. However, their approach treats skills as independent concepts. In contrast, humans organize these concepts in hierarchies where lower-level concepts can be grouped to define higher-level concepts (Naim et al., 2019). We extend the architecture in Shankar et al. (2019) to simultaneously discover concepts along with their hierarchical organization without any supervision. We propose an end-to-end trainable architecture, UNHCLE, for hierarchical representation learning from demonstrations. UNHCLE takes as input a long-horizon trajectory of high-dimensional images demonstrating a complex task (in our case, chess and cooking) and the associated textual commentary, and isolates semantically meaningful subsequences in the input trajectories. We emphasize that it does not require temporal annotations which link subsequences in the trajectories of images to the free-flowing commentary, but instead autonomously discovers this mapping. Therefore, this work takes a step towards unsupervised video understanding of high-dimensional data. Our contributions can be summarized as follows: • We introduce a transformer-based architecture to learn a multi-modal hierarchical latent embedding space to encode the various concepts in long-horizon demonstration trajectories.
UNHCLE abstracts these concepts (shown through visual qualitative analysis) without requiring any temporal supervision, i.e., it divides long-horizon trajectories into semantically meaningful subsequences without access to any temporal annotations that split these trajectories optimally. • We show the quantitative effectiveness of learning high-level concepts in a hierarchical manner compared to learning them in isolation, while outperforming several baselines on YouCook2 (Zhou et al., 2017) and the Chess Opening dataset¹. • We further introduce a mechanism to incorporate the commentary accompanying demonstrations in UNHCLE and show improvements in the hierarchical concepts discovered. ¹https://www.kaggle.com/residentmario/recommending-chess-openings • We introduce TimeWarped IoU (TW-IoU), an evaluation metric that we use to compare the alignment of our discovered concepts and ground-truth events. Existing approaches to representation learning for demonstrations or videos typically require significant supervision. Typically, sequence-to-sequence architectures are trained on datasets segmented by humans. During inference, these architectures generate proposals for timestamps that segment the input trajectory into semantically meaningful sequences. These complex sequence-to-sequence models require significant amounts of annotated data, making them costly to train. More generally, video and scene understanding is an important research area with wide-ranging applications. Most recently, Chen et al. (2019) utilize semantic awareness to perform complex depth estimation tasks to acquire the geometric properties of 3-dimensional space from 2-dimensional images. Tosi et al. (2020) utilize similar semantic information for depth estimation, optical flow and motion segmentation. Boggust et al. (2019) attempt to ground words in the video, but apply significant supervision to synchronize them, requiring human intervention.
We attempt to learn similar embeddings but do so in a completely unsupervised manner, not utilizing any of the temporal labels available. The field of learning from demonstrations (Nicolescu & Mataric, 2003) seeks to learn to perform tasks from a set of demonstrated behaviors. Behavioral Cloning is one popular scheme (Esmaili et al., 1995). Atkeson & Schaal (1997) and Schaal (1997) show how agents can learn simple tasks like cartpole simply from demonstrations. Pastor et al. (2009) also study how robots can learn from human demonstrations of tasks. Peters et al. (2013) and Kober & Peters (2009) fit a parametric model to the demonstrations. Niekum et al. (2012), Murali et al. (2016), and Meier et al. (2011) first segment trajectories into subsequences and then apply a parametric model to each subsequence. More recently, Schmeckpeper et al. (2019) show that agents can learn to maximize external reward using a large corpus of observation data, i.e., trajectories of states, and a relatively smaller corpus of interaction data, i.e., trajectories of state-action pairs. Hierarchical task representations have been studied as well. Instead of treating demonstrations in a flat manner, one may also infer their hierarchical structure. A few recent works attempt to do so (Xu et al., 2018; Sun et al., 2018), or represent tasks as task graphs (Huang et al., 2019). Both Xu et al. (2018) and Huang et al. (2019) address generalizing to new instances of manipulation tasks in the few-shot regime by abstracting away low-level controls. However, all of these approaches require an environment, i.e., a transition and reward function, to learn from. On the contrary, humans show an ability to learn by watching demonstrations, which we attempt to replicate. Temporal abstractions of action sequences, or skill/primitive learning, is also a related field. Eysenbach et al.
(2018) learn a large number of low-level sequences of actions by forcing the agent to produce skills that are different from those previously acquired. However, due to the diversity bias, the agent ends up learning many useless skills that cannot be used for any semantically meaningful task. Similarly, Sharma et al. (2019) attempt to learn skills such that their transitions are almost deterministic in a given environment. These approaches also require access to an environment, whereas we try to learn without one. 2 APPROACH. 2.1 UNHCLE: UNSUPERVISED HIERARCHICAL CONCEPT LEARNING. Intuitively, we define a concept as a short sequence of states which repeatedly occurs across several demonstration trajectories. Concepts have an upper limit on their length in time-steps. These concepts can be obtained from the images of demonstrations, denoted by z, and from the associated textual description, represented by c. We also refer to them as Observational Concepts (OC) and Instructional Concepts (IC), respectively, and the modules that encode these concepts are referred to as the Observational Concept Abstraction (OCA) and Instructional Concept Abstraction (ICA) modules. Additionally, concepts are hierarchical in nature; thus, lower-level and higher-level observational (image) concepts are denoted by z^L and z^H, respectively. Analogously, lower-level and higher-level textual concepts are represented by c^L and c^H, respectively. Similarly, the lower-level and higher-level modules are denoted by the superscripts low and high for the corresponding level. Once we obtain these concept abstractions across levels, we can also transform them across levels using a Concept Regeneration (CR) Module to transform low-level concepts to high-level concepts and vice versa.
Instead of just traversing the concept-level hierarchy, we can also use the Concept Instruction Regeneration Module, or CIR, to obtain the original instructions that map to a concept in its respective concept modality. Subsequently, we provide details about the different stages of our proposed technique. Encoding Observations: Given a long-horizon trajectory of demonstration images along with its associated textual description, UNHCLE is able to abstract a hierarchy of concepts from the demonstration images. We first pass these input images through ResNet-32 (He et al., 2016) to get a sequence of image vectors S = s_{1:m}, and the associated text is converted into word vectors W = w_{1:n} using BERT-base (Devlin et al., 2018). Observations combine to produce lower-level concepts, whereas higher-level concepts are simply aggregations of such lower-level concepts. Thus, the Lower-level Observation Concept Abstraction module (OCA^low) is trained to embed a sequence of image vectors of a video (s_1, s_2, …, s_m) into a sequence of concept vectors (z^L_1, z^L_2, …, z^L_u), where u ≪ m, such that z^L_{1:u} = OCA^low(s_{1:m}). Subsequently, lower-level concepts combine together to form a higher level of concepts using the Higher-level Observation Concept Abstraction module (OCA^high), such that z^H_{1:v} = OCA^high(z^L_{1:u}). Encoding Instructions: We also endeavour to discover these higher-level and lower-level concepts through natural language instructions. The Lower-level Instruction Concept Abstraction module (ICA^low) and the Higher-level Instruction Concept Abstraction module (ICA^high) are responsible for this functionality. From a corpus of words (w_1, w_2, …, w_n), the ICA^low module generates concepts (c^L_1, c^L_2, …, c^L_u), u ≪ n: c^L_{1:u} = ICA^low(w_{1:n}). Subsequently, the ICA^high module encodes the lower-level language concepts c^L_{1:u} into higher-level concepts as c^H_{1:v} = ICA^high(c^L_{1:u}).
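As a toy illustration of the shape contract these abstraction stages satisfy (this is not the paper's learned transformer module), each stage maps a long sequence of vectors to a much shorter one; a mean-pooling stand-in makes the u ≪ m and v < u relationships concrete:

```python
import numpy as np

def pool_abstraction(seq, factor):
    """Toy stand-in for an OCA/ICA stage: shorten a (T, d) sequence of
    vectors by mean-pooling non-overlapping windows of length `factor`."""
    seq = np.asarray(seq, dtype=float)
    t = (len(seq) // factor) * factor           # drop any ragged tail
    return seq[:t].reshape(-1, factor, seq.shape[1]).mean(axis=1)

rng = np.random.default_rng(0)
s = rng.normal(size=(16, 8))         # m = 16 image vectors s_{1:m}
z_low = pool_abstraction(s, 4)       # u = 4 low-level concepts z^L_{1:u}
z_high = pool_abstraction(z_low, 2)  # v = 2 high-level concepts z^H_{1:v}
```

The real OCA^low and OCA^high are trained end-to-end, but the same progressive shortening of the sequence (16 → 4 → 2 here) is what the notation z^L_{1:u} = OCA^low(s_{1:m}) and z^H_{1:v} = OCA^high(z^L_{1:u}) expresses.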
Traversing across the concept hierarchy: Learning concepts at different levels has the added advantage of allowing traversal of the concept hierarchy. This additionally allows us to utilize these hierarchical traversals to obtain coarse- or fine-grained concepts at any level. We can thus regenerate the lower-level concepts from higher-level instruction concepts using the Lower-level Concept Regeneration Module (CR^low), such that z′^L_{1:u′} = CR^low(c^H_{1:v}). We can then later utilize this to obtain lower-level concepts from the higher-level concepts and regenerate the demonstration images S^L = s^L_{1:m_L} in a cross-modal fashion. Observation and Instruction Regeneration: Under a concept, the sequence of frames is nearly deterministic, i.e., the knowledge of a concept uniquely identifies the accompanying sequence of images in the demonstration trajectory. Subsequently, we regenerate the demonstration image vectors S^L = s^L_{1:m_L} from lower-level concepts using the Lower-level Concept Observation Regeneration Module (COR^low), such that s^L_{1:m_L} = COR^low(z′^L_{1:u′}). We also regenerate the demonstration image vectors S^U = s^U_{1:m_U} from higher-level concepts abstracted from language using the Higher-level Concept Observation Regeneration Module (COR^high), such that s^U_{1:m_U} = COR^high(c^H_{1:v}), in a similar cross-modal manner. Finally, inspired by humans, who can easily describe concept representations using free-flowing natural language, we first regenerate lower-level concepts from higher-level observation concepts using the Lower-level Concept Regeneration Module (CR^low), such that c′^L_{1:u′} = CR^low(z^H_{1:v}), and subsequently regenerate the word vectors W^L = w^L_{1:n_L} from lower-level concepts using the Lower-level Concept Instruction Regeneration Module (CIR^low), such that w^L_{1:n_L} = CIR^low(c′^L_{1:u′}).
Additionally, the higher-level concepts identified by the OCA^high module from demonstration frames are also described using a meaningful free-flowing commentary by the Higher-level Concept Instruction Regeneration module, or CIR^high. Thus, we regenerate the word vectors W^U = w^U_{1:n_U} from higher-level concepts using CIR^high, such that w^U_{1:n_U} = CIR^high(z^H_{1:v}). | This paper addresses the problem of extracting a hierarchy of concepts in an unsupervised way from demonstration data. The authors present a Transformer-based concept abstraction architecture called UNHCLE and show how it discovers meaningful hierarchies using datasets from Chess and Cooking domains. In particular, the model is designed to function without specific temporal supervision, which makes it potentially practical for real-world applications. | SP:bfdb68759a70c3ced66ef2952a05cbcb0dd2aca7 |
Unsupervised Hierarchical Concept Learning | 1 INTRODUCTION. Consider a video (Figure 1) that demonstrates how to cook an egg. Humans subconsciously learn concepts (such as boiling water) that describe different skills in such demonstrations (Pammi et al., 2004). These learned skills can be composed and reused in different ways to learn new concepts. Discovering such concepts automatically from demonstration data is a non-trivial problem. Shankar et al. (2019) introduce a sequence-to-sequence architecture that clusters long-horizon action trajectories into shorter temporal skills. However, their approach treats skills as independent concepts. In contrast, humans organize these concepts in hierarchies where lower-level concepts can be grouped to define higher-level concepts (Naim et al., 2019). We extend the architecture in Shankar et al. (2019) to simultaneously discover concepts along with their hierarchical organization without any supervision. We propose an end-to-end trainable architecture, UNHCLE, for hierarchical representation learning from demonstrations. UNHCLE takes as input a long-horizon trajectory of high-dimensional images demonstrating a complex task (in our case, chess and cooking) and the associated textual commentary, and isolates semantically meaningful subsequences in the input trajectories. We emphasize that it does not require temporal annotations which link subsequences in the trajectories of images to the free-flowing commentary, but instead autonomously discovers this mapping. Therefore, this work takes a step towards unsupervised video understanding of high-dimensional data. Our contributions can be summarized as follows: • We introduce a transformer-based architecture to learn a multi-modal hierarchical latent embedding space to encode the various concepts in long-horizon demonstration trajectories.
UNHCLE abstracts these concepts (shown through visual qualitative analysis) without requiring any temporal supervision, i.e., it divides long-horizon trajectories into semantically meaningful subsequences without access to any temporal annotations that split these trajectories optimally. • We show the quantitative effectiveness of learning high-level concepts in a hierarchical manner compared to learning them in isolation, while outperforming several baselines on YouCook2 (Zhou et al., 2017) and the Chess Opening dataset¹. • We further introduce a mechanism to incorporate the commentary accompanying demonstrations in UNHCLE and show improvements in the hierarchical concepts discovered. ¹https://www.kaggle.com/residentmario/recommending-chess-openings • We introduce TimeWarped IoU (TW-IoU), an evaluation metric that we use to compare the alignment of our discovered concepts and ground-truth events. Existing approaches to representation learning for demonstrations or videos typically require significant supervision. Typically, sequence-to-sequence architectures are trained on datasets segmented by humans. During inference, these architectures generate proposals for timestamps that segment the input trajectory into semantically meaningful sequences. These complex sequence-to-sequence models require significant amounts of annotated data, making them costly to train. More generally, video and scene understanding is an important research area with wide-ranging applications. Most recently, Chen et al. (2019) utilize semantic awareness to perform complex depth estimation tasks to acquire the geometric properties of 3-dimensional space from 2-dimensional images. Tosi et al. (2020) utilize similar semantic information for depth estimation, optical flow and motion segmentation. Boggust et al. (2019) attempt to ground words in the video, but apply significant supervision to synchronize them, requiring human intervention.
We attempt to learn similar embeddings but do so in a completely unsupervised manner, not utilizing any of the temporal labels available. The field of learning from demonstrations (Nicolescu & Mataric, 2003) seeks to learn to perform tasks from a set of demonstrated behaviors. Behavioral Cloning is one popular scheme (Esmaili et al., 1995). Atkeson & Schaal (1997) and Schaal (1997) show how agents can learn simple tasks like cartpole simply from demonstrations. Pastor et al. (2009) also study how robots can learn from human demonstrations of tasks. Peters et al. (2013) and Kober & Peters (2009) fit a parametric model to the demonstrations. Niekum et al. (2012), Murali et al. (2016), and Meier et al. (2011) first segment trajectories into subsequences and then apply a parametric model to each subsequence. More recently, Schmeckpeper et al. (2019) show that agents can learn to maximize external reward using a large corpus of observation data, i.e., trajectories of states, and a relatively smaller corpus of interaction data, i.e., trajectories of state-action pairs. Hierarchical task representations have been studied as well. Instead of treating demonstrations in a flat manner, one may also infer their hierarchical structure. A few recent works attempt to do so (Xu et al., 2018; Sun et al., 2018), or represent tasks as task graphs (Huang et al., 2019). Both Xu et al. (2018) and Huang et al. (2019) address generalizing to new instances of manipulation tasks in the few-shot regime by abstracting away low-level controls. However, all of these approaches require an environment, i.e., a transition and reward function, to learn from. On the contrary, humans show an ability to learn by watching demonstrations, which we attempt to replicate. Temporal abstractions of action sequences, or skill/primitive learning, is also a related field. Eysenbach et al.
(2018) learn a large number of low-level sequences of actions by forcing the agent to produce skills that are different from those previously acquired. However, due to the diversity bias, the agent ends up learning many useless skills that cannot be used for any semantically meaningful task. Similarly, Sharma et al. (2019) attempt to learn skills such that their transitions are almost deterministic in a given environment. These approaches also require access to an environment, whereas we try to learn without one. 2 APPROACH. 2.1 UNHCLE: UNSUPERVISED HIERARCHICAL CONCEPT LEARNING. Intuitively, we define a concept as a short sequence of states which repeatedly occurs across several demonstration trajectories. Concepts have an upper limit on their length in time-steps. These concepts can be obtained from the images of demonstrations, denoted by z, and from the associated textual description, represented by c. We also refer to them as Observational Concepts (OC) and Instructional Concepts (IC), respectively, and the modules that encode these concepts are referred to as the Observational Concept Abstraction (OCA) and Instructional Concept Abstraction (ICA) modules. Additionally, concepts are hierarchical in nature; thus, lower-level and higher-level observational (image) concepts are denoted by z^L and z^H, respectively. Analogously, lower-level and higher-level textual concepts are represented by c^L and c^H, respectively. Similarly, the lower-level and higher-level modules are denoted by the superscripts low and high for the corresponding level. Once we obtain these concept abstractions across levels, we can also transform them across levels using a Concept Regeneration (CR) Module to transform low-level concepts to high-level concepts and vice versa.
Instead of just traversing the concept-level hierarchy, we can also use the Concept Instruction Regeneration Module, or CIR, to obtain the original instructions that map to a concept in its respective concept modality. Subsequently, we provide details about the different stages of our proposed technique. Encoding Observations: Given a long-horizon trajectory of demonstration images along with its associated textual description, UNHCLE is able to abstract a hierarchy of concepts from the demonstration images. We first pass these input images through ResNet-32 (He et al., 2016) to get a sequence of image vectors S = s_{1:m}, and the associated text is converted into word vectors W = w_{1:n} using BERT-base (Devlin et al., 2018). Observations combine to produce lower-level concepts, whereas higher-level concepts are simply aggregations of such lower-level concepts. Thus, the Lower-level Observation Concept Abstraction module (OCA^low) is trained to embed a sequence of image vectors of a video (s_1, s_2, …, s_m) into a sequence of concept vectors (z^L_1, z^L_2, …, z^L_u), where u ≪ m, such that z^L_{1:u} = OCA^low(s_{1:m}). Subsequently, lower-level concepts combine together to form a higher level of concepts using the Higher-level Observation Concept Abstraction module (OCA^high), such that z^H_{1:v} = OCA^high(z^L_{1:u}). Encoding Instructions: We also endeavour to discover these higher-level and lower-level concepts through natural language instructions. The Lower-level Instruction Concept Abstraction module (ICA^low) and the Higher-level Instruction Concept Abstraction module (ICA^high) are responsible for this functionality. From a corpus of words (w_1, w_2, …, w_n), the ICA^low module generates concepts (c^L_1, c^L_2, …, c^L_u), u ≪ n: c^L_{1:u} = ICA^low(w_{1:n}). Subsequently, the ICA^high module encodes the lower-level language concepts c^L_{1:u} into higher-level concepts as c^H_{1:v} = ICA^high(c^L_{1:u}).
Traversing across the concept hierarchy: Learning concepts at different levels has the added advantage of allowing traversal of the concept hierarchy. This additionally allows us to utilize these hierarchical traversals to obtain coarse- or fine-grained concepts at any level. We can thus regenerate the lower-level concepts from higher-level instruction concepts using the Lower-level Concept Regeneration Module (CR^low), such that z′^L_{1:u′} = CR^low(c^H_{1:v}). We can then later utilize this to obtain lower-level concepts from the higher-level concepts and regenerate the demonstration images S^L = s^L_{1:m_L} in a cross-modal fashion. Observation and Instruction Regeneration: Under a concept, the sequence of frames is nearly deterministic, i.e., the knowledge of a concept uniquely identifies the accompanying sequence of images in the demonstration trajectory. Subsequently, we regenerate the demonstration image vectors S^L = s^L_{1:m_L} from lower-level concepts using the Lower-level Concept Observation Regeneration Module (COR^low), such that s^L_{1:m_L} = COR^low(z′^L_{1:u′}). We also regenerate the demonstration image vectors S^U = s^U_{1:m_U} from higher-level concepts abstracted from language using the Higher-level Concept Observation Regeneration Module (COR^high), such that s^U_{1:m_U} = COR^high(c^H_{1:v}), in a similar cross-modal manner. Finally, inspired by humans, who can easily describe concept representations using free-flowing natural language, we first regenerate lower-level concepts from higher-level observation concepts using the Lower-level Concept Regeneration Module (CR^low), such that c′^L_{1:u′} = CR^low(z^H_{1:v}), and subsequently regenerate the word vectors W^L = w^L_{1:n_L} from lower-level concepts using the Lower-level Concept Instruction Regeneration Module (CIR^low), such that w^L_{1:n_L} = CIR^low(c′^L_{1:u′}).
Additionally, the higher-level concepts identified by the OCA^high module from demonstration frames are also described using a meaningful free-flowing commentary by the Higher-level Concept Instruction Regeneration module, or CIR^high. Thus, we regenerate the word vectors W^U = w^U_{1:n_U} from higher-level concepts using CIR^high, such that w^U_{1:n_U} = CIR^high(z^H_{1:v}). | This paper addresses a relatively new topic: learning the hierarchical concepts in videos and commentary in an unsupervised manner. The authors propose a hierarchical, traversing encoder-decoder network architecture to tackle the problem, where a pre-trained ResNet-32 and BERT are used as feature extractors, and transformers and GRUs serve as the concept encoders and decoders. As the network is unsupervised, the starting and ending times of each concept are quite crucial, and this problem was tackled by training with a soft-DTW loss. A metric (TimeWarped IoU) is also proposed for quantitative evaluation. The experiments indicate the effectiveness of the network, which was tested on different datasets and scenarios. | SP:bfdb68759a70c3ced66ef2952a05cbcb0dd2aca7 |
Unsupervised Hierarchical Concept Learning | 1 INTRODUCTION. Consider a video (Figure 1) that demonstrates how to cook an egg. Humans subconsciously learn concepts (such as boiling water) that describe different skills in such demonstrations (Pammi et al., 2004). These learned skills can be composed and reused in different ways to learn new concepts. Discovering such concepts automatically from demonstration data is a non-trivial problem. Shankar et al. (2019) introduce a sequence-to-sequence architecture that clusters long-horizon action trajectories into shorter temporal skills. However, their approach treats skills as independent concepts. In contrast, humans organize these concepts in hierarchies where lower-level concepts can be grouped to define higher-level concepts (Naim et al., 2019). We extend the architecture in Shankar et al. (2019) to simultaneously discover concepts along with their hierarchical organization without any supervision. We propose an end-to-end trainable architecture, UNHCLE, for hierarchical representation learning from demonstrations. UNHCLE takes as input a long-horizon trajectory of high-dimensional images demonstrating a complex task (in our case, chess and cooking) and the associated textual commentary, and isolates semantically meaningful subsequences in the input trajectories. We emphasize that it does not require temporal annotations which link subsequences in the trajectories of images to the free-flowing commentary, but instead autonomously discovers this mapping. Therefore, this work takes a step towards unsupervised video understanding of high-dimensional data. Our contributions can be summarized as follows: • We introduce a transformer-based architecture to learn a multi-modal hierarchical latent embedding space to encode the various concepts in long-horizon demonstration trajectories.
UNHCLE abstracts these concepts (shown through visual qualitative analysis) without requiring any temporal supervision, i.e., it divides long-horizon trajectories into semantically meaningful subsequences without access to any temporal annotations that split these trajectories optimally. • We show the quantitative effectiveness of learning high-level concepts in a hierarchical manner compared to learning them in isolation, while outperforming several baselines on YouCook2 (Zhou et al., 2017) and the Chess Opening dataset¹. • We further introduce a mechanism to incorporate the commentary accompanying demonstrations in UNHCLE and show improvements in the hierarchical concepts discovered. ¹https://www.kaggle.com/residentmario/recommending-chess-openings • We introduce TimeWarped IoU (TW-IoU), an evaluation metric that we use to compare the alignment of our discovered concepts and ground-truth events. Existing approaches to representation learning for demonstrations or videos typically require significant supervision. Typically, sequence-to-sequence architectures are trained on datasets segmented by humans. During inference, these architectures generate proposals for timestamps that segment the input trajectory into semantically meaningful sequences. These complex sequence-to-sequence models require significant amounts of annotated data, making them costly to train. More generally, video and scene understanding is an important research area with wide-ranging applications. Most recently, Chen et al. (2019) utilize semantic awareness to perform complex depth estimation tasks to acquire the geometric properties of 3-dimensional space from 2-dimensional images. Tosi et al. (2020) utilize similar semantic information for depth estimation, optical flow and motion segmentation. Boggust et al. (2019) attempt to ground words in the video, but apply significant supervision to synchronize them, requiring human intervention.
We attempt to learn similar embeddings but do so in a completely unsupervised manner, not utilizing any of the available temporal labels. The field of learning from demonstrations (Nicolescu & Mataric, 2003) seeks to learn to perform tasks from a set of demonstrated behaviors. Behavioral cloning is one popular scheme (Esmaili et al., 1995). Atkeson & Schaal (1997) and Schaal (1997) show how agents can learn simple tasks like cartpole purely from demonstrations. Pastor et al. (2009) also study how robots can learn from human demonstrations of tasks. Peters et al. (2013) and Kober & Peters (2009) fit a parametric model to the demonstrations. Niekum et al. (2012), Murali et al. (2016), and Meier et al. (2011) first segment trajectories into subsequences and then apply a parametric model to each subsequence. More recently, Schmeckpeper et al. (2019) show that agents can learn to maximize external reward using a large corpus of observation data, i.e., trajectories of states, together with a relatively smaller corpus of interaction data, i.e., trajectories of state-action pairs. Hierarchical task representations have been studied as well. Instead of treating demonstrations in a flat manner, one may also infer their hierarchical structure. A few recent works attempt to do so (Xu et al., 2018; Sun et al., 2018), or represent tasks as task graphs (Huang et al., 2019). Both Xu et al. (2018) and Huang et al. (2019) address generalizing to new instances of manipulation tasks in the few-shot regime by abstracting away low-level controls. However, all of these approaches require an environment, i.e., a transition and reward function, to learn from. On the contrary, humans show an ability to learn by watching demonstrations, which we attempt to replicate. Temporal abstraction of action sequences, or skill/primitive learning, is also a related field. Eysenbach et al.
(2018) learn a large number of low-level sequences of actions by forcing the agent to produce skills that are different from those previously acquired. However, due to this diversity bias, the agent ends up learning many useless skills that cannot be used for any semantically meaningful task. Similarly, Sharma et al. (2019) attempt to learn skills such that their transitions are almost deterministic in a given environment. These approaches also require access to an environment, whereas we try to learn without one. 2 APPROACH. 2.1 UNHCLE: UNSUPERVISED HIERARCHICAL CONCEPT LEARNING. Intuitively, we define a concept as a short sequence of states which repeatedly occurs across several demonstration trajectories. Concepts have an upper limit on their length in time-steps. These concepts can be obtained from the images of demonstrations, denoted by $z$, and from the associated textual description, represented by $c$. We also refer to them as Observational Concepts (OC) and Instructional Concepts (IC) respectively, and the modules that encode these concepts are referred to as the Observational Concept Abstraction (OCA) and Instructional Concept Abstraction (ICA) modules. Additionally, concepts are hierarchical in nature; thus, lower-level and higher-level observational (image) concepts are denoted by $z^L$ and $z^H$, respectively. Analogously, lower-level and higher-level textual concepts are represented by $c^L$ and $c^H$, respectively. Similarly, the corresponding modules are marked low or high for each level. Once we obtain these concept abstractions across levels, we can also transform them across levels using a Concept Regeneration (CR) module that maps low-level concepts to high-level concepts and vice versa.
Instead of just traversing the concept hierarchy, we can also use the Concept Instruction Regeneration (CIR) module to obtain the original instructions that map to a concept in its respective modality. Subsequently, we provide details about the different stages of our proposed technique. Encoding Observations: Given a long-horizon trajectory of demonstration images along with its associated textual description, UNHCLE is able to abstract a hierarchy of concepts from the demonstration images. We first pass these input images through ResNet-32 (He et al., 2016) to get a sequence of image vectors $S = s_{1:m}$, and the associated text is converted into word vectors $W = w_{1:n}$ using BERT-base (Devlin et al., 2018). Observations combine to produce lower-level concepts, whereas higher-level concepts are simply aggregations of such lower-level concepts. Thus, the Lower-level Observation Concept Abstraction module ($\mathrm{OCA}^{low}$) is trained to embed a sequence of image vectors of a video $(s_1, s_2, \dots, s_m)$ into a sequence of concept vectors $(z^L_1, z^L_2, \dots, z^L_u)$, where $u \ll m$, such that $z^L_{1:u} = \mathrm{OCA}^{low}(s_{1:m})$. Subsequently, lower-level concepts combine to form a higher level of concepts using the Higher-level Observation Concept Abstraction module ($\mathrm{OCA}^{high}$), such that $z^H_{1:v} = \mathrm{OCA}^{high}(z^L_{1:u})$. Encoding Instructions: We also endeavour to discover these higher-level and lower-level concepts through natural-language instructions. The Lower-level Instruction Concept Abstraction module ($\mathrm{ICA}^{low}$) and Higher-level Instruction Concept Abstraction module ($\mathrm{ICA}^{high}$) are responsible for this functionality. From a corpus of words $(w_1, w_2, \dots, w_n)$, the $\mathrm{ICA}^{low}$ module generates concepts $(c^L_1, c^L_2, \dots, c^L_u)$, $u \ll n$: $c^L_{1:u} = \mathrm{ICA}^{low}(w_{1:n})$. Subsequently, $\mathrm{ICA}^{high}$ encodes the lower-level language concepts $c^L_{1:u}$ into higher-level concepts as $c^H_{1:v} = \mathrm{ICA}^{high}(c^L_{1:u})$.
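The two-level abstraction above can be sketched with a hypothetical stand-in. The paper's OCA/ICA modules are learned transformers; the `abstract` function below is only an illustrative chunk-averaging placeholder (its name, the shapes, and the averaging scheme are assumptions, not the authors' implementation) showing the interface $z^L_{1:u} = \mathrm{OCA}^{low}(s_{1:m})$ followed by $z^H_{1:v} = \mathrm{OCA}^{high}(z^L_{1:u})$:

```python
import numpy as np

def abstract(seq, out_len):
    """Compress a sequence of vectors (m, d) into out_len concept vectors
    (out_len, d) by averaging contiguous chunks. A real OCA/ICA module is
    a learned transformer; this only mimics its input/output shapes."""
    chunks = np.array_split(seq, out_len)
    return np.stack([chunk.mean(axis=0) for chunk in chunks])

rng = np.random.default_rng(0)
s = rng.normal(size=(32, 8))          # m = 32 frame embeddings, d = 8
z_low = abstract(s, out_len=6)        # u = 6 lower-level concepts, u << m
z_high = abstract(z_low, out_len=2)   # v = 2 higher-level concepts
```

The same interface applies to the language side ($\mathrm{ICA}^{low}$ / $\mathrm{ICA}^{high}$), with word vectors in place of frame embeddings.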
Traversing the concept hierarchy: Learning concepts at different levels has the added advantage of allowing traversal of the concept hierarchy. This additionally allows us to utilize these hierarchical traversals to obtain coarse- or fine-grained concepts at any level. We can thus regenerate the lower-level concepts from higher-level instruction concepts using the Lower-level Concept Regeneration module ($\mathrm{CR}^{low}$), such that $z'^L_{1:u'} = \mathrm{CR}^{low}(c^H_{1:v})$. We can later utilize this to obtain lower-level concepts from the higher-level concepts and regenerate the demonstration images $S^L = s^L_{1:m^L}$ in a cross-modal fashion. Observation and Instruction Regeneration: Under a concept, the sequence of frames is nearly deterministic, i.e., knowledge of a concept uniquely identifies the accompanying sequence of images in the demonstration trajectory. Subsequently, we regenerate the demonstration image vectors $S^L = s^L_{1:m^L}$ from lower-level concepts using the Lower-level Concept Observation Regeneration module ($\mathrm{COR}^{low}$), such that $s^L_{1:m^L} = \mathrm{COR}^{low}(z'^L_{1:u'})$. We also regenerate the demonstration image vectors $S^U = s^U_{1:m^U}$ from higher-level concepts abstracted from language using the Higher-level Concept Observation Regeneration module ($\mathrm{COR}^{high}$), such that $s^U_{1:m^U} = \mathrm{COR}^{high}(c^H_{1:v})$, in a similar cross-modal manner. Finally, inspired by humans, who can easily describe concept representations using free-flowing natural language, we first regenerate lower-level concepts from higher-level observation concepts using the Lower-level Concept Regeneration module ($\mathrm{CR}^{low}$), such that $c'^L_{1:u'} = \mathrm{CR}^{low}(z^H_{1:v})$, and subsequently regenerate the word vectors $W^L = w^L_{1:n^L}$ from lower-level concepts using the Lower-level Concept Instruction Regeneration module ($\mathrm{CIR}^{low}$), such that $w^L_{1:n^L} = \mathrm{CIR}^{low}(c'^L_{1:u'})$.
Additionally, the higher-level concepts identified by the $\mathrm{OCA}^{high}$ module from demonstration frames are also described with meaningful free-flowing commentary by the Higher-level Concept Instruction Regeneration module ($\mathrm{CIR}^{high}$). Thus, we regenerate the word vectors $W^U = w^U_{1:n^U}$ from higher-level concepts using $\mathrm{CIR}^{high}$, such that $w^U_{1:n^U} = \mathrm{CIR}^{high}(z^H_{1:v})$. | The paper introduces a solution to an important task: hierarchical concept learning (or temporal abstraction) from demonstration data. Specifically, this paper considers 1) an unsupervised setting and 2) a hierarchy of concepts, and conducts experiments on two datasets. However, there are some points in the experiment section to be discussed. | SP:bfdb68759a70c3ced66ef2952a05cbcb0dd2aca7
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study | 1 INTRODUCTION. Label smoothing (Szegedy et al., 2016) and knowledge distillation (Hinton et al., 2015) are two commonly recognized techniques in training deep neural networks and have been applied in many state-of-the-art models, such as language translation (Vaswani et al., 2017; Tan et al., 2019; Zhou et al., 2020), image classification (Xie et al., 2019; He et al., 2019) and speech recognition (Chiu et al., 2018; Pereyra et al., 2017; Chorowski & Jaitly, 2017). Recently, a large body of work has focused on exploring the underlying relationships between these two methods. For instance, Müller et al. (2019) discovered that label smoothing implicitly improves calibration but hurts the effectiveness of knowledge distillation. Yuan et al. (2019) considered knowledge distillation a dynamic form of label smoothing, as it delivers a regularization effect in training. A recent study (Lukasik et al., 2020) further noticed that label smoothing can help mitigate label noise, showing that when distilling models from noisy data, a teacher trained with label smoothing is helpful. Despite this massive and intensive research, how to use label smoothing as well as knowledge distillation in practice is still unclear, divergent, and under-explored. Moreover, it is hard to answer when and why label smoothing works well or not under a variety of discrepant circumstances. View of incompatibility between label smoothing and knowledge distillation. Recently, Müller et al.
proposed a new standpoint that teachers trained with label smoothing distill inferior students compared to teachers trained with hard labels, even though label smoothing improves the teacher's accuracy. The authors found that label smoothing tends to "erase" intra-class information contained across individual examples, which indicates that the relative information between logits is erased to some extent when the teacher is trained with label smoothing. This idea is becoming more and more dominant and has been quoted in a large number of recent works (Arani et al., 2019; Tang et al., 2020; Mghabbar & Ratnamogan, 2020; Shen et al., 2020; Khosla et al., 2020). However, this seemingly reasonable observation has many inconsistencies in practice when adopting knowledge distillation with smoothing-trained teachers. Thus, we would like to challenge whether this perspective is entirely correct. To make label smoothing and knowledge distillation less mysterious, in this paper we first systematically introduce their mechanism and correlation. (Project page: http://zhiqiangshen.com/projects/LS_and_KD/index.html.) [Figure: panels "Training w/o LS", "Training w/ LS", "Validation w/o LS", "Validation w/ LS".] • What actually determines the performance of a student in knowledge distillation? From our empirical study, we observe that once the student architecture is fixed, the dominating factor in knowledge distillation is the quality of supervision, i.e., the performance of the teacher network. A higher-accuracy teacher is particularly successful in distilling a better student, regardless of whether it is trained with or without label smoothing. This observation is partially against the conclusion of Müller et al. (2019), which stated that "a teacher with better accuracy is not necessary to distill a better student". • When will label smoothing indeed lose its effectiveness for learning deep neural networks?
Long-tailed class distributions and an increased number of classes are two scenarios in which we observed that label smoothing loses or impairs its effectiveness. We empirically verify the findings on iNaturalist 2019 (Van Horn et al., 2018), Places-LT (Liu et al., 2019), and curated ImageNet (Liu et al., 2019). 2 BACKGROUND. In this section, we first introduce the background of label smoothing and knowledge distillation through a mathematical description. Given a dataset $D = (X, Y)$ over a set of classes $K$, $X$ is the input data and $Y$ is the corresponding one-hot label, with each sample's label $y \in \{0, 1\}^K$, where the element $y_c$ is 1 for the ground-truth class and 0 otherwise. Label smoothing replaces the one-hot hard label vector $y$ with a mixture of the weighted $y$ and a uniform distribution: $y_c = \begin{cases} 1 - \alpha & \text{if } c = \text{label}, \\ \alpha/(K-1) & \text{otherwise}, \end{cases}$ (1) where $\alpha$ is a small constant coefficient for flattening the one-hot labels. Usually, label smoothing is adopted when the loss function is cross-entropy, and the network applies the softmax function to the last layer's logits $z$ to compute the output probabilities $p$, so the gradient of each training sample with respect to $z$ is $\nabla_z H(p, y) = p - y$, i.e., component-wise $\mathrm{Softmax}(z)_c - y_c$, where $H(p, y) = -\sum_{c=1}^{K} y_c \log p_c$ is the cross-entropy loss and $z_c$ is the $c$-th logit in $z$. Effects of label smoothing on the loss. To further understand the effects of label smoothing on the loss function, Fig. 3 illustrates the correction effect of smoothing on the binary cross-entropy loss ($K = 2$). We can observe that the standard logistic loss ($\alpha = 0$) vanishes for large, confident positive predictions and becomes linear for large negative predictions. Label smoothing penalizes confident predictions and introduces a finite positive minimum, as it aims to minimize the average per-class loss. Generally, larger $\alpha$ values produce larger loss values rebounding at positive predictions.
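Eq. (1) and the resulting logit gradient can be illustrated numerically. This is a minimal numpy sketch, not the paper's code; the function names are assumptions:

```python
import numpy as np

def smooth_labels(label, num_classes, alpha=0.1):
    """Eq. (1): 1 - alpha on the true class, alpha / (K - 1) elsewhere."""
    y = np.full(num_classes, alpha / (num_classes - 1))
    y[label] = 1.0 - alpha
    return y

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

# Gradient of the cross-entropy w.r.t. the logits is p - y (component-wise
# Softmax(z)_c - y_c), for both hard and smoothed targets y.
z = np.array([2.0, 0.5, -1.0])
y = smooth_labels(0, num_classes=3, alpha=0.1)
grad = softmax(z) - y
```

Since both $p$ and the smoothed $y$ sum to 1, the gradient components always sum to zero.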
This is also the underlying reason that the smoothed loss can flatten the predictions of a network. In knowledge distillation, we usually pre-train the teacher model $T_w$ on the dataset in advance. The student model $S_w$ is trained over the same set of data but utilizes labels generated by $T_w$. More specifically, we can regard this process as learning $S_w$ on a new labeled dataset $\tilde{D} = (X, T_w(X))$. Once the teacher network is trained, its parameters are frozen during the whole distillation. The student network $S_w$ is trained by minimizing the discrepancy between its output and two parts: the hard one-hot labels and the soft labels generated by the teacher network. Let $p^{T_w}_c(X) = T_w(X)[c]$ and $p^{S_w}_c(X) = S_w(X)[c]$ be the probabilities assigned to class $c$ by the teacher model $T_w$ and student model $S_w$. The distillation loss can be formulated as $\lambda H(p^{S_w}, y) + (1 - \lambda) H(p^{S_w/T}, p^{T_w/T})$, where $p^{\cdot/T}$ denotes probabilities computed from temperature-scaled logits, $T$ is the temperature scaling factor, and $\lambda$ is the trade-off coefficient balancing the two terms. 3 THE "ERASE INFORMATION" EFFECT OF LABEL SMOOTHING. This section aims to explain the information-erasing effect more thoroughly. We start by reproducing the visualization of the penultimate layer's activations using the same procedure as Müller et al. (2019). We adopt ResNet-50 trained with hard and smoothed labels on ImageNet. As shown in Fig. 1, we obtain distributions similar to those of Müller et al. (2019). Since the examples in the training set are the ones used for distillation, we mainly analyze the visualization of the training data. The core finding of Müller et al. (2019) is that if a teacher is trained with hard labels, representations of examples are distributed in broad clusters, which means that different examples from the same class can have different similarities ($D_1$ and $D_2$) to other classes. For a teacher trained with label smoothing, they observed the opposite behavior.
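The distillation objective $\lambda H(p^{S_w}, y) + (1-\lambda) H(p^{S_w/T}, p^{T_w/T})$ described above can be sketched as follows. This is an illustrative numpy version; the temperature and $\lambda$ defaults are assumptions, not the paper's settings:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def cross_entropy(target, pred):
    """H(pred, target) = -sum_c target_c * log pred_c."""
    return -np.sum(target * np.log(pred + 1e-12))

def distillation_loss(student_logits, teacher_logits, y_onehot, T=4.0, lam=0.5):
    """lam * H(p_S, y) + (1 - lam) * H(p_S at temperature T, p_T at T)."""
    hard = cross_entropy(y_onehot, softmax(student_logits))
    soft = cross_entropy(softmax(teacher_logits, T), softmax(student_logits, T))
    return lam * hard + (1 - lam) * soft
```

With $\lambda = 1$ the objective reduces to plain cross-entropy on the hard labels; with $\lambda = 0$ the student learns only from the teacher's temperature-softened distribution.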
Label smoothing encourages examples to lie in tight, equally separated clusters, so each example of one class has very similar proximities ($D_1$ is closer to $D_2$) to examples of the other classes. Our re-visualization also supports this discovery. The authors derive the conclusion that a teacher with better accuracy does not necessarily distill a better student. This seems reasonable, as the broad clusters enable different examples from the same class to provide different similarities to other classes, which contains more information for knowledge distillation. However, refocusing on two semantically similar classes, when label smoothing is applied the clusters are much tighter, because label smoothing encourages each example to be equidistant from all other classes' templates; at the same time, the tight clusters substantially push different class representations apart, i.e., the distance between clusters $D_c$ increases, which further indicates that examples of different classes obtain more distinguishable features. This phenomenon is crucial, as these difficult classes are the key to boosting classification performance. Generally, it is not necessary to measure "how similar a poodle is to a tench", since we have enough evidence to classify them, but it is critical to have information on "how different a toy poodle is from a miniature poodle". Visualizations of teacher predictions. We further visualize the mean distribution over different classes across examples, as shown in Fig. 4. We average all the probabilities after the softmax layer for examples belonging to the same category, and show the first 100 classes of ImageNet. Usually, the probabilities have a major value (the bars in Fig. 4(1)) that represents the model's prediction of the category, and other small values (i.e., the minor predictions in Fig.
4(2)) that indicate the input image is somewhat similar to those other categories; some discussion of minor predictions is given in Appendix F. The purpose of this visualization is to make clear what label smoothing really calibrates in a network and to shed light on how it affects the network's predictions. We can observe in this figure that a model trained with label smoothing generates more softened distributions, but the relations across different classes are still preserved. We conjecture that this softened supervision is also the reason why teachers with label smoothing produce larger training loss during knowledge distillation. Consequently, label smoothing decreases both the variance (verified by the following stability metric) and the mean predictive values within a class, but does not impair the relations across different classes. 3.1 A SIMPLE METRIC FOR MEASURING THE DEGREE OF ERASED INFORMATION. Different from the visualization scheme of Müller et al. (2019), which finds an orthonormal basis of a plane and only studies this problem qualitatively, we further address the "erasing" phenomenon through a statistical metric that is simple yet effective and can measure the degree of the erasing operation quantitatively. The motivation behind it is straightforward: if label smoothing erases relative information within a class, the variance of intra-class probabilities will decrease accordingly, so we can use this variance to monitor the degree of erasing. Since this metric evaluates the fluctuation of the representations, we also call it the stability metric. The definition is as follows: $S_{\mathrm{Stability}} = 1 - \frac{1}{K}\sum_{c=1}^{K}\left(\frac{1}{n_c}\sum_{i=1}^{n_c}\left\|p^{T_w}_{\{i,c\}} - \bar{p}^{T_w}_{\{c\}}\right\|^2\right)$ (2), where $i$ is the index of images, $n_c$ is the number of images in class $c$, and $\bar{p}^{T_w}_{\{c\}}$ is the mean of $p^{T_w}$ over class $c$. This metric utilizes the intra-class variance of the probabilities to measure the stability of a teacher's prediction.
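Eq. (2) can be sketched as follows. The paper provides PyTorch-like code in Appendix C; this is a hedged numpy version whose function name and signature are assumptions:

```python
import numpy as np

def stability(probs, labels, num_classes):
    """Eq. (2): one minus the average, over classes, of the mean squared
    distance of each example's probability vector to its class mean.
    probs: (N, K) teacher softmax outputs; labels: (N,) class indices."""
    total = 0.0
    for c in range(num_classes):
        p_c = probs[labels == c]
        if len(p_c) == 0:
            continue  # skip classes with no examples
        mean_c = p_c.mean(axis=0)
        total += np.mean(np.sum((p_c - mean_c) ** 2, axis=1))
    return 1.0 - total / num_classes
```

A teacher whose predictions are identical within each class scores exactly 1; larger intra-class spread (more preserved per-example information) lowers the score.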
The results on various network architectures are shown in Sec. 5, and PyTorch-like code for calculating this metric is given in Appendix C. This metric has at least two advantages: 1) It can measure the degree of erased information quantitatively and further helps discover more interesting phenomena; e.g., we observe that data augmentation methods like CutMix (Yun et al., 2019), together with longer training, erase the relative information in the logits dramatically, and this can be further reinforced by label smoothing. 2) We found that the proposed metric is highly aligned with model accuracy, so it can be used as a complement to accuracy for evaluating the quality of a teacher's supervision for knowledge distillation. | The authors re-analyze and re-confirm the relationship between label smoothing and knowledge distillation, which was first argued by Müller et al. ("When Does Label Smoothing Help?", NeurIPS 2019). This paper shows that the previous argument, "label smoothing is not helpful for knowledge distillation", does not always hold, and carefully re-visits the missing points of the previous analysis by Müller et al. Based on this analysis, label smoothing can be helpful for knowledge distillation and can be explained using the intra-class variation and between-class distance within similar classes. The authors have empirically verified the arguments of the paper with various experiments. | SP:8ed1e265fd31cf19ada7cd9e2e3cbde7eeaef578
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study | 1 INTRODUCTION. Label smoothing (Szegedy et al., 2016) and knowledge distillation (Hinton et al., 2015) are two commonly recognized techniques in training deep neural networks and have been applied in many state-of-the-art models, such as language translation (Vaswani et al., 2017; Tan et al., 2019; Zhou et al., 2020), image classification (Xie et al., 2019; He et al., 2019) and speech recognition (Chiu et al., 2018; Pereyra et al., 2017; Chorowski & Jaitly, 2017). Recently, a large body of work has focused on exploring the underlying relationships between these two methods. For instance, Müller et al. (2019) discovered that label smoothing implicitly improves calibration but hurts the effectiveness of knowledge distillation. Yuan et al. (2019) considered knowledge distillation a dynamic form of label smoothing, as it delivers a regularization effect in training. A recent study (Lukasik et al., 2020) further noticed that label smoothing can help mitigate label noise, showing that when distilling models from noisy data, a teacher trained with label smoothing is helpful. Despite this massive and intensive research, how to use label smoothing as well as knowledge distillation in practice is still unclear, divergent, and under-explored. Moreover, it is hard to answer when and why label smoothing works well or not under a variety of discrepant circumstances. View of incompatibility between label smoothing and knowledge distillation. Recently, Müller et al.
proposed a new standpoint that teachers trained with label smoothing distill inferior students compared to teachers trained with hard labels, even though label smoothing improves the teacher's accuracy. The authors found that label smoothing tends to "erase" intra-class information contained across individual examples, which indicates that the relative information between logits is erased to some extent when the teacher is trained with label smoothing. This idea is becoming more and more dominant and has been quoted in a large number of recent works (Arani et al., 2019; Tang et al., 2020; Mghabbar & Ratnamogan, 2020; Shen et al., 2020; Khosla et al., 2020). However, this seemingly reasonable observation has many inconsistencies in practice when adopting knowledge distillation with smoothing-trained teachers. Thus, we would like to challenge whether this perspective is entirely correct. To make label smoothing and knowledge distillation less mysterious, in this paper we first systematically introduce their mechanism and correlation. (Project page: http://zhiqiangshen.com/projects/LS_and_KD/index.html.) [Figure: panels "Training w/o LS", "Training w/ LS", "Validation w/o LS", "Validation w/ LS".] • What actually determines the performance of a student in knowledge distillation? From our empirical study, we observe that once the student architecture is fixed, the dominating factor in knowledge distillation is the quality of supervision, i.e., the performance of the teacher network. A higher-accuracy teacher is particularly successful in distilling a better student, regardless of whether it is trained with or without label smoothing. This observation is partially against the conclusion of Müller et al. (2019), which stated that "a teacher with better accuracy is not necessary to distill a better student". • When will label smoothing indeed lose its effectiveness for learning deep neural networks?
Long-tailed class distributions and an increased number of classes are two scenarios in which we observed that label smoothing loses or impairs its effectiveness. We empirically verify the findings on iNaturalist 2019 (Van Horn et al., 2018), Places-LT (Liu et al., 2019), and curated ImageNet (Liu et al., 2019). 2 BACKGROUND. In this section, we first introduce the background of label smoothing and knowledge distillation through a mathematical description. Given a dataset $D = (X, Y)$ over a set of classes $K$, $X$ is the input data and $Y$ is the corresponding one-hot label, with each sample's label $y \in \{0, 1\}^K$, where the element $y_c$ is 1 for the ground-truth class and 0 otherwise. Label smoothing replaces the one-hot hard label vector $y$ with a mixture of the weighted $y$ and a uniform distribution: $y_c = \begin{cases} 1 - \alpha & \text{if } c = \text{label}, \\ \alpha/(K-1) & \text{otherwise}, \end{cases}$ (1) where $\alpha$ is a small constant coefficient for flattening the one-hot labels. Usually, label smoothing is adopted when the loss function is cross-entropy, and the network applies the softmax function to the last layer's logits $z$ to compute the output probabilities $p$, so the gradient of each training sample with respect to $z$ is $\nabla_z H(p, y) = p - y$, i.e., component-wise $\mathrm{Softmax}(z)_c - y_c$, where $H(p, y) = -\sum_{c=1}^{K} y_c \log p_c$ is the cross-entropy loss and $z_c$ is the $c$-th logit in $z$. Effects of label smoothing on the loss. To further understand the effects of label smoothing on the loss function, Fig. 3 illustrates the correction effect of smoothing on the binary cross-entropy loss ($K = 2$). We can observe that the standard logistic loss ($\alpha = 0$) vanishes for large, confident positive predictions and becomes linear for large negative predictions. Label smoothing penalizes confident predictions and introduces a finite positive minimum, as it aims to minimize the average per-class loss. Generally, larger $\alpha$ values produce larger loss values rebounding at positive predictions.
This is also the underlying reason that the smoothed loss can flatten the predictions of a network. In knowledge distillation, we usually pre-train the teacher model $T_w$ on the dataset in advance. The student model $S_w$ is trained over the same set of data but utilizes labels generated by $T_w$. More specifically, we can regard this process as learning $S_w$ on a new labeled dataset $\tilde{D} = (X, T_w(X))$. Once the teacher network is trained, its parameters are frozen during the whole distillation. The student network $S_w$ is trained by minimizing the discrepancy between its output and two parts: the hard one-hot labels and the soft labels generated by the teacher network. Let $p^{T_w}_c(X) = T_w(X)[c]$ and $p^{S_w}_c(X) = S_w(X)[c]$ be the probabilities assigned to class $c$ by the teacher model $T_w$ and student model $S_w$. The distillation loss can be formulated as $\lambda H(p^{S_w}, y) + (1 - \lambda) H(p^{S_w/T}, p^{T_w/T})$, where $p^{\cdot/T}$ denotes probabilities computed from temperature-scaled logits, $T$ is the temperature scaling factor, and $\lambda$ is the trade-off coefficient balancing the two terms. 3 THE "ERASE INFORMATION" EFFECT OF LABEL SMOOTHING. This section aims to explain the information-erasing effect more thoroughly. We start by reproducing the visualization of the penultimate layer's activations using the same procedure as Müller et al. (2019). We adopt ResNet-50 trained with hard and smoothed labels on ImageNet. As shown in Fig. 1, we obtain distributions similar to those of Müller et al. (2019). Since the examples in the training set are the ones used for distillation, we mainly analyze the visualization of the training data. The core finding of Müller et al. (2019) is that if a teacher is trained with hard labels, representations of examples are distributed in broad clusters, which means that different examples from the same class can have different similarities ($D_1$ and $D_2$) to other classes. For a teacher trained with label smoothing, they observed the opposite behavior.
Label smoothing encourages examples to lie in tight, equally separated clusters, so each example of one class has very similar proximities ($D_1$ is closer to $D_2$) to examples of the other classes. Our re-visualization also supports this discovery. The authors derive the conclusion that a teacher with better accuracy does not necessarily distill a better student. This seems reasonable, as the broad clusters enable different examples from the same class to provide different similarities to other classes, which contains more information for knowledge distillation. However, refocusing on two semantically similar classes, when label smoothing is applied the clusters are much tighter, because label smoothing encourages each example to be equidistant from all other classes' templates; at the same time, the tight clusters substantially push different class representations apart, i.e., the distance between clusters $D_c$ increases, which further indicates that examples of different classes obtain more distinguishable features. This phenomenon is crucial, as these difficult classes are the key to boosting classification performance. Generally, it is not necessary to measure "how similar a poodle is to a tench", since we have enough evidence to classify them, but it is critical to have information on "how different a toy poodle is from a miniature poodle". Visualizations of teacher predictions. We further visualize the mean distribution over different classes across examples, as shown in Fig. 4. We average all the probabilities after the softmax layer for examples belonging to the same category, and show the first 100 classes of ImageNet. Usually, the probabilities have a major value (the bars in Fig. 4(1)) that represents the model's prediction of the category, and other small values (i.e., the minor predictions in Fig.
4(2)) that indicate the input image is somewhat similar to those other categories; some discussion of minor predictions is given in Appendix F. The purpose of this visualization is to make clear what label smoothing really calibrates in a network and to shed light on how it affects the network's predictions. We can observe in this figure that a model trained with label smoothing generates more softened distributions, but the relations across different classes are still preserved. We conjecture that this softened supervision is also the reason why teachers with label smoothing produce larger training loss during knowledge distillation. Consequently, label smoothing decreases both the variance (verified by the following stability metric) and the mean predictive values within a class, but does not impair the relations across different classes. 3.1 A SIMPLE METRIC FOR MEASURING THE DEGREE OF ERASED INFORMATION. Different from the visualization scheme of Müller et al. (2019), which finds an orthonormal basis of a plane and only studies this problem qualitatively, we further address the "erasing" phenomenon through a statistical metric that is simple yet effective and can measure the degree of the erasing operation quantitatively. The motivation behind it is straightforward: if label smoothing erases relative information within a class, the variance of intra-class probabilities will decrease accordingly, so we can use this variance to monitor the degree of erasing. Since this metric evaluates the fluctuation of the representations, we also call it the stability metric. The definition is as follows: $S_{\mathrm{Stability}} = 1 - \frac{1}{K}\sum_{c=1}^{K}\left(\frac{1}{n_c}\sum_{i=1}^{n_c}\left\|p^{T_w}_{\{i,c\}} - \bar{p}^{T_w}_{\{c\}}\right\|^2\right)$ (2), where $i$ is the index of images, $n_c$ is the number of images in class $c$, and $\bar{p}^{T_w}_{\{c\}}$ is the mean of $p^{T_w}$ over class $c$. This metric utilizes the intra-class variance of the probabilities to measure the stability of a teacher's prediction.
The results on various network architectures are shown in Sec. 5, and PyTorch-like code for calculating this metric is given in Appendix C. This metric has at least two advantages: 1) it can measure the degree of erased information quantitatively and further helps discover more interesting phenomena, e.g., we observe that data augmentation methods like CutMix (Yun et al., 2019), together with longer training, erase the relative information in the logits dramatically, an effect that can be further reinforced by label smoothing; 2) we found that the proposed metric is highly aligned with model accuracy, so it can be used as a complement to accuracy for evaluating the quality of a teacher's supervision for knowledge distillation. | Recent literature proposed that even though label smoothing improves the teacher model, it hurts the distillation training of student models due to information erasing. Although this idea has come to dominate the literature, this paper argues that the observation is not entirely correct. To clarify this point, the paper systematically discusses the correlation between knowledge distillation and label smoothing. Comprehensive experiments support the paper's claims, i.e., that label smoothing is compatible with knowledge distillation. The correlation between label smoothing and knowledge distillation remains an open question to date, and this paper makes a breakthrough on it. Besides its main purpose (clarifying previous ideas), the paper also provides multiple interesting empirical conclusions, e.g., that a better teacher always leads to a better student by producing more informative distillation labels, and that distillation itself provides enough regularization for training, so the hard-label classification loss is no longer needed. | SP:8ed1e265fd31cf19ada7cd9e2e3cbde7eeaef578
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study | 1 INTRODUCTION. Label smoothing (Szegedy et al., 2016) and knowledge distillation (Hinton et al., 2015) are two widely used techniques for training deep neural networks and have been applied in many state-of-the-art models, such as language translation (Vaswani et al., 2017; Tan et al., 2019; Zhou et al., 2020), image classification (Xie et al., 2019; He et al., 2019) and speech recognition (Chiu et al., 2018; Pereyra et al., 2017; Chorowski & Jaitly, 2017). Recently, a large body of work has focused on exploring the underlying relationship between these two methods. For instance, Müller et al. (2019) discovered that label smoothing can implicitly improve calibration but hurts the effectiveness of knowledge distillation. Yuan et al. (2019) considered knowledge distillation a dynamic form of label smoothing, as it delivers a regularization effect during training. A recent study (Lukasik et al., 2020) further observed that label smoothing can help mitigate label noise, showing that when distilling models from noisy data, a teacher trained with label smoothing is helpful. Despite this extensive research, how to use label smoothing together with knowledge distillation in practice is still unclear, contested, and under-explored. Moreover, it is hard to answer when and why label smoothing works well or not under a variety of different circumstances. View of incompatibility between label smoothing and knowledge distillation. Recently, Müller et al.
proposed the standpoint that teachers trained with label smoothing distill inferior students compared to teachers trained with hard labels, even though label smoothing improves the teacher's accuracy. The authors found that label smoothing tends to "erase" information contained intra-class across individual examples, meaning that the relative information between logits is erased to some extent when the teacher is trained with label smoothing. This idea is becoming increasingly dominant and has been cited by a large number of recent works (Arani et al., 2019; Tang et al., 2020; Mghabbar & Ratnamogan, 2020; Shen et al., 2020; Khosla et al., 2020). However, this seemingly reasonable observation has many inconsistencies in practice when adopting knowledge distillation with smoothing-trained teachers. We therefore ask whether this perspective is entirely correct. To make label smoothing and knowledge distillation less mysterious, in this paper we first systematically introduce the mechanism and correlation 1Project page: http://zhiqiangshen.com/projects/LS_and_KD/index.html. [Figure: penultimate-layer visualizations for training and validation, with and without label smoothing.] • What actually determines the performance of a student in knowledge distillation? From our empirical study, we observe that once the student architecture is fixed, the dominating factor in knowledge distillation is the quality of supervision, i.e., the performance of the teacher network. A higher-accuracy teacher is particularly successful at distilling a better student, regardless of whether it is trained with or without label smoothing. This observation is partially against the conclusion of Müller et al. (2019), which stated that "a teacher with better accuracy is not necessary to distill a better student". • When does label smoothing indeed lose its effectiveness for learning deep neural networks?
Long-tailed class distributions and an increased number of classes are two scenarios in which we observe that label smoothing loses or impairs its effectiveness. We empirically verify these findings on iNaturalist 2019 (Van Horn et al., 2018), Places-LT (Liu et al., 2019) and curated ImageNet (Liu et al., 2019). 2 BACKGROUND. In this section, we first introduce the background of label smoothing and knowledge distillation through a mathematical description. Given a dataset D = (X, Y) over a set of K classes, X is the input data and Y is the corresponding one-hot label, with each sample's label y ∈ {0, 1}^K, where the element y_c is 1 for the ground-truth class and 0 otherwise. Label smoothing replaces the one-hot hard label vector y with a mixture of the weighted y and a uniform distribution:

$$y_c = \begin{cases} 1-\alpha & \text{if } c = \text{label}, \\ \alpha/(K-1) & \text{otherwise}, \end{cases} \qquad (1)$$

where α is a small constant coefficient for flattening the one-hot labels. Usually, label smoothing is adopted when the loss function is cross-entropy and the network applies the softmax function to the last layer's logits z to compute the output probabilities p, so the gradient of each training sample with respect to z is

$$\nabla_{z} H(p, y) = p - y, \quad \text{i.e.,} \;\; \frac{\partial H(p,y)}{\partial z_c} = \mathrm{Softmax}(z_c) - y_c, \quad \text{where } H(p, y) = -\sum_{c=1}^{K} y_c \log p_c$$

is the cross-entropy loss and z_c is the c-th logit in z. Effects of label smoothing on the loss. To further understand the effect of label smoothing on the loss function, Fig. 3 illustrates the correction effect of smoothing on the binary cross-entropy loss (K = 2). We can observe that the standard logistic loss (α = 0) vanishes for large, confident positive predictions and becomes linear for large negative predictions. Label smoothing penalizes confident predictions and introduces a finite positive minimum, as it aims to minimize the average per-class loss. Generally, larger α values produce larger loss values rebounding at positive predictions.
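As a toy illustration of Eq. (1) and the cross-entropy it enters, the smoothed targets can be sketched in a few lines of NumPy (the function names here are ours, not the paper's):

```python
import numpy as np

def smooth_labels(y, num_classes, alpha=0.1):
    """Build smoothed target vectors per Eq. (1):
    1 - alpha on the true class, alpha/(K-1) on every other class."""
    targets = np.full((len(y), num_classes), alpha / (num_classes - 1))
    targets[np.arange(len(y)), y] = 1.0 - alpha
    return targets

def cross_entropy(logits, targets):
    """H(p, y) = -sum_c y_c log p_c, with a softmax over the logits."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(targets * log_p).sum(axis=1)
```

Note that each smoothed target row still sums to one, so the gradient p − y derived above applies unchanged.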
This is also the underlying reason why the smoothed loss flattens the predictions of a network. In knowledge distillation, we usually pre-train the teacher model T_w on the dataset in advance. The student model S_w is trained over the same set of data, but utilizes labels generated by T_w. More specifically, we can regard this process as learning S_w on a new labeled dataset D̃ = (X, T_w(X)). Once the teacher network is trained, its parameters are frozen during the whole distillation. The student network S_w is trained by minimizing the discrepancy between its output and two targets: the hard one-hot labels and the soft labels generated by the teacher network. Letting p_c^{T_w}(X) = T_w(X)[c] and p_c^{S_w}(X) = S_w(X)[c] be the probabilities assigned to class c by the teacher model T_w and the student model S_w, the distillation loss can be formulated as

$$\lambda H(p^{S_w}, y) + (1-\lambda)\, H\big(p^{S_w/T}, p^{T_w/T}\big),$$

where T is the temperature scaling factor and λ is the trade-off coefficient balancing the two terms. 3 THE "ERASE INFORMATION" EFFECT OF LABEL SMOOTHING. This section aims to explain the information-erasing effect more thoroughly. We start by reproducing the visualization of the penultimate layer's activations, using the same procedure as Müller et al. (2019). We adopt a ResNet-50 trained with hard and smoothed labels on ImageNet. As shown in Fig. 1, we obtain distributions similar to those of Müller et al. (2019). Since the examples in the training set are the ones used for distillation, we mainly analyze the visualization of the training data. The core finding of Müller et al. (2019) is that if a teacher is trained with hard labels, representations of examples are distributed in broad clusters, which means that different examples from the same class can have different similarities (D1 and D2) to other classes. For a teacher trained with label smoothing, they observed the opposite behavior.
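The distillation objective above can be sketched as a minimal NumPy version (the names and the 1e-12 clamp are our choices; a full implementation would typically also scale the soft term by T² to balance gradient magnitudes):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, onehot, T=4.0, lam=0.5):
    """lambda * H(p_S, y) + (1 - lambda) * H(p_S^T, p_T^T):
    cross-entropy to the hard labels plus cross-entropy between the
    temperature-softened student and teacher distributions."""
    p_s = softmax(student_logits)
    hard = -(onehot * np.log(p_s + 1e-12)).sum(axis=1)
    p_s_T = softmax(student_logits, T)
    p_t_T = softmax(teacher_logits, T)
    soft = -(p_t_T * np.log(p_s_T + 1e-12)).sum(axis=1)
    return lam * hard + (1 - lam) * soft
```

Per the paper's convention H(p, y) = −Σ y_c log p_c, the first argument is the prediction and the second is the target, so the soft term weights the student's log-probabilities by the teacher's softened distribution.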
Label smoothing encourages examples to lie in tight, equally separated clusters, so each example of one class has very similar proximities (D1 is close to D2) to examples of the other classes. Our re-visualization also supports this finding. The authors conclude that a teacher with better accuracy does not necessarily distill a better student. This seems reasonable, as broad clusters enable different examples from the same class to provide different similarities to other classes, which carries more information for knowledge distillation. However, if we refocus on two semantically similar classes, the clusters are much tighter when label smoothing is applied, because label smoothing encourages each example to be equidistant from the templates of all other classes; at the same time, the tight clusters substantially promote the separation of different class representations, i.e., the distance Dc between clusters increases, which further indicates that examples of different classes obtain more distinguishable features. This phenomenon is crucial, as these difficult classes are the key to boosting classification performance. Generally, it is not necessary to measure "how similar a poodle is to a tench", since we have enough evidence to classify them, but it is critical to know "how different a toy poodle is from a miniature poodle". Visualizations of teacher predictions. We further visualize the mean distribution of each class across examples, as shown in Fig. 4. We average all post-softmax probabilities of the examples belonging to the same category, and show the first 100 classes of ImageNet. Usually, the probabilities have one major value (the bars in Fig. 4 (1)) that represents the model's prediction for the category, while the other small values (i.e., the minor predictions in Fig.
4 (2)) indicate that the input image is somewhat similar to those other categories; some discussion of minor predictions is given in Appendix F. The purpose of this visualization is to make clear what label smoothing really calibrates in a network and to shed light on how it affects the network's predictions. We can observe in this figure that a model trained with label smoothing generates more softened distributions, but the relations across different classes are still preserved. We conjecture that this softened supervision is also the reason why teachers trained with label smoothing produce larger training losses during knowledge distillation. Consequently, label smoothing decreases both the variance (verified by the stability metric below) and the mean predictive values within a class, but does not impair the relations across different classes. 3.1 A SIMPLE METRIC FOR MEASURING THE DEGREE OF ERASED INFORMATION. Different from the visualization scheme of Müller et al. (2019), which finds an orthonormal basis of a plane and studies this problem only qualitatively, we further address the "erasing" phenomenon through a statistical metric that is simple yet effective and can measure the degree of erasing quantitatively. The motivation behind it is straightforward: if label smoothing erases relative information within a class, the variance of the intra-class probabilities will decrease accordingly, so we can use this variance to monitor the degree of erasing. Since the metric evaluates the fluctuation of the representations, we also call it the stability metric. It is defined as

$$S_{\text{Stability}} = 1 - \frac{1}{K}\sum_{c=1}^{K}\Big(\frac{1}{n_c}\sum_{i=1}^{n_c}\big\|p^{T_w}_{\{i,c\}} - \bar{p}^{T_w}_{\{i,c\}}\big\|^2\Big) \qquad (2)$$

where i is the index of images, n_c is the number of images in class c, and $\bar{p}^{T_w}_{\{i,c\}}$ is the mean of $p^{T_w}$ over class c. This metric uses the intra-class variance of the probabilities to measure the stability of a teacher's prediction.
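A minimal sketch of the stability metric of Eq. (2); the paper provides PyTorch-like code in Appendix C, and this NumPy version is our reading of the formula:

```python
import numpy as np

def stability(probs, labels, num_classes):
    """Eq. (2): one minus the average intra-class variance of teacher
    softmax outputs; higher values mean tighter (more 'erased') classes."""
    total = 0.0
    for c in range(num_classes):
        p_c = probs[labels == c]       # teacher predictions for class c
        mean_c = p_c.mean(axis=0)      # class-wise mean prediction p-bar
        total += ((p_c - mean_c) ** 2).sum(axis=1).mean()
    return 1.0 - total / num_classes
```

When every example of a class receives an identical prediction, the intra-class variance is zero and the metric reaches its maximum of 1.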
The results on various network architectures are shown in Sec. 5, and PyTorch-like code for calculating this metric is given in Appendix C. This metric has at least two advantages: 1) it can measure the degree of erased information quantitatively and further helps discover more interesting phenomena, e.g., we observe that data augmentation methods like CutMix (Yun et al., 2019), together with longer training, erase the relative information in the logits dramatically, an effect that can be further reinforced by label smoothing; 2) we found that the proposed metric is highly aligned with model accuracy, so it can be used as a complement to accuracy for evaluating the quality of a teacher's supervision for knowledge distillation. | This paper is mainly based on the prior work by Müller et al., which suggests that label smoothing is incompatible with knowledge distillation. First, this paper provides an explanation of this claimed incompatibility---label smoothing tends to erase relative information among different classes---and provides a way to quantitatively measure the degree of erased information. Then, this paper argues that label smoothing is actually compatible with knowledge distillation, and shows several empirical results as evidence. Lastly, this paper suggests that the performance of the teacher model is a more directly related factor for determining the performance of the student model. | SP:8ed1e265fd31cf19ada7cd9e2e3cbde7eeaef578
Self-Supervised Variational Auto-Encoders | Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoders (selfVAE), that utilizes deterministic and discrete transformations of the data. This class of models allows performing both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression, where we can trade off memory for better data quality and vice versa. We present the performance of our approach on three benchmark image datasets (Cifar10, Imagenette64, and CelebA). 1 INTRODUCTION. The framework of variational auto-encoders (VAEs) provides a principled approach to learning latent-variable models. As it utilizes a meaningful low-dimensional latent space with density estimation capabilities, it forms an attractive solution for generative modelling tasks. However, its performance in terms of test log-likelihood and quality of generated samples is often disappointing, and thus many modifications have been proposed. In general, one can obtain a tighter lower bound, and thus a more powerful and flexible model, by improving any of the following three components: the encoder (Rezende et al., 2014; van den Berg et al., 2018; Hoogeboom et al., 2020; Maaløe et al., 2016), the prior (or marginal over latents) (Chen et al., 2016; Habibian et al., 2019; Lavda et al.
, 2020; Lin & Clark, 2020; Tomczak & Welling, 2017) and the decoder (Gulrajani et al., 2016). Recent studies have shown that by employing deep hierarchical architectures and carefully designing the building blocks of the neural networks, VAEs can successfully model high-dimensional data and reach state-of-the-art test likelihoods (Zhao et al., 2017; Maaløe et al., 2019; Vahdat & Kautz, 2020). In this work, we present a novel class of VAEs, called self-supervised Variational Auto-Encoders, in which we introduce additional variables to VAEs that result from discrete and deterministic transformations of the observed images. Since the transformations are deterministic and provide a specific aspect of the images (e.g., contextual information through edge detection or downscaling), we refer to them as self-supervised representations. The introduction of the discrete and deterministic variables allows training deep hierarchical models efficiently by decomposing the task of learning a highly complex distribution into training smaller conditional distributions. In this way, the model integrates prior knowledge about the data while still being able to synthesize unconditional samples. Furthermore, the discrete and deterministic variables can be used to conditionally reconstruct data, which can be of great use in data compression and super-resolution tasks. We make the following contributions: i) we propose an extension of the VAE framework by incorporating self-supervised representations of the data; ii) we analyze the impact of modelling natural images with different data transformations as self-supervised representations; iii) this new type of generative model (the self-supervised Variational Auto-Encoder), which is able to perform both conditional and unconditional sampling, demonstrates improved quantitative performance in terms of density estimation and generative capabilities on image benchmarks. 2 BACKGROUND.
2.1 VARIATIONAL AUTO-ENCODERS. Let x ∈ X^D be a vector of observable variables, where X ⊆ R or X ⊆ Z, and let z ∈ R^M denote a vector of latent variables. Since calculating p_ϑ(x) = ∫ p_ϑ(x, z) dz is computationally intractable for non-linear stochastic dependencies, a variational family of distributions can be used for approximate inference. Then the following objective function can be derived, namely the evidence lower bound (ELBO) (Jordan et al., 1999):

$$\ln p_{\vartheta}(x) \geq \mathbb{E}_{q_{\phi}(z|x)}\big[\ln p_{\theta}(x|z) + \ln p_{\lambda}(z) - \ln q_{\phi}(z|x)\big], \qquad (1)$$

where q_φ(z|x) is the variational posterior (or encoder), p_θ(x|z) is the conditional likelihood function (or decoder) and p_λ(z) is the prior (or marginal); φ, θ and λ denote parameters. The expectation is approximated by Monte Carlo sampling while exploiting the reparameterization trick in order to obtain unbiased gradient estimators. The models are parameterized by neural networks. This generative framework is known as the Variational Auto-Encoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014). 2.2 VAES WITH BIJECTIVE PRIORS. Even though the lower bound suggests that the prior plays a crucial role in improving the variational bounds, usually a fixed distribution is used, e.g., a standard multivariate Gaussian. While relatively simple and computationally cheap, the fixed prior is known to result in over-regularized models that tend to ignore most of the latent dimensions (Burda et al., 2015; Hoffman & Johnson, 2016; Tomczak & Welling, 2017). Moreover, even with powerful encoders, VAEs may still fail to match the variational posterior to a unit Gaussian prior (Rosca et al., 2018). However, it is possible to obtain a rich, multi-modal prior distribution p(z) by using a bijective (or flow-based) model (Dinh et al., 2016). Formally, given a latent code z, a base distribution p_V(v) over latent variables v ∈ R^M, and f : R^M →
R^M, consisting of a sequence of L diffeomorphic transformations1 with f_i(v_{i−1}) = v_i, v_0 = v and v_L = z, the change-of-variables formula can be applied sequentially to express the distribution of z as a function of v:

$$\log p_{\lambda}(z) = \log p_V(v) - \sum_{i=1}^{L} \log \left|\frac{\partial f_i(v_{i-1})}{\partial v_{i-1}}\right|, \qquad (2)$$

where ∂f_i(v_{i−1})/∂v_{i−1} is the Jacobian determinant of the i-th transformation. Thus, using the bijective prior yields the following lower bound:

$$\ln p(x) \geq \mathbb{E}_{q_{\phi}(z|x)}\Big[\log p_{\theta}(x|z) - \log q_{\phi}(z|x) + \log p_V(v_0) + \sum_{i=1}^{L} \log \left|\frac{\partial f_i^{-1}(v_i)}{\partial v_i}\right|\Big]. \qquad (3)$$

In this work we use RealNVP (Dinh et al., 2016) as the prior; however, any other flow-based model could be used (Kingma & Dhariwal, 2018; Hoogeboom et al., 2020). For the experiments and an ablation study showing the impact of the bijective prior on VAEs, we refer to Appendix A.1. 3 METHOD. 3.1 MOTIVATION. The idea of self-supervised learning is to utilize the original unlabeled data to create additional context information. This can be achieved in multiple ways, e.g., by adding noise to the data (Vincent et al., 2008) or by masking data during training (Zhang et al., 2017). Self-supervised learning can also be seen as turning an unsupervised model into a supervised one by, e.g., treating the prediction of next pixels as a classification task (Hénaff et al., 2019; Oord et al., 2018). These are only a few examples of a quickly growing line of research (Liu et al., 2020). 1That is, invertible and differentiable transformations. [Figure: i) stochastic dependencies of the self-supervised VAE; ii) hierarchical selfVAE.] Here, we propose to use non-trainable transformations to obtain information about the image data. Our main hypothesis is that, since working with high-quality images is challenging, we can alleviate this problem by additionally considering partial information about them.
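The change-of-variables computation of Eqs. (2)-(3) can be illustrated with a toy flow; here each step is a hypothetical element-wise affine map rather than a RealNVP coupling layer, and the function name is ours:

```python
import numpy as np

def flow_prior_logprob(z, scales, shifts):
    """Log-density of z under a chain of element-wise affine flows
    f_i(v) = s_i * v + t_i applied to a standard-Gaussian base:
    invert the chain, then subtract the log |Jacobian| of each step."""
    v = z.copy()
    log_det = 0.0
    for s, t in zip(reversed(scales), reversed(shifts)):
        v = (v - t) / s                       # invert f_i
        log_det += np.log(np.abs(s)).sum()    # per-dimension log |df_i/dv|
    base = -0.5 * (np.log(2 * np.pi) + v ** 2).sum()  # log p_V(v)
    return base - log_det
```

With identity scales and zero shifts the flow is the identity, and the log-probability reduces to the standard-Gaussian base density, matching Eq. (2).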
Fitting a model to images of lower quality and then enhancing them to match the target distribution seems to be an easier task overall (Chang et al., 2004; Gatopoulos et al., 2020). By incorporating compressed transformations (i.e., the self-supervised representations) that still contain global information, with the premise that they are easier to approximate, the process of modelling a high-dimensional complex density is broken down into simpler tasks. In this way, the expressivity of the model grows and gradually results in richer, better generations. A positive effect of the proposed framework is that the model allows us to integrate prior knowledge through the image transformations without losing its unconditional generative functionality. Overall, we end up with a two-level VAE with three latent variables, where one is a data transformation that can be obtained in a self-supervised fashion. Figure 1 presents a schematic representation of the proposed approach with downscaling. A number of exemplary image transformations are presented in Figure 2. We notice that, even though these transformations discard a lot of information, the global structure is preserved. As a result, in practice the model should have the ability to extract a general concept of the data and add local information afterwards. In this work, we focus on downscaling (Figure 2.b, c & d) and edge detection or sketching (Fig. 2.i). 3.2 MODEL FORMULATION. In our model, we consider representations that result from deterministic and discrete transformations of an image. Formally, we introduce a transformation d : X^D → X^C that takes x and returns an image representation y, e.g., a downscaled image. Since we lose information about the original image, z can be seen as a variable that compensates for the lost details in x. Further, we propose to introduce an additional latent variable u ∈ R^N to model y and z.
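A deterministic transformation d(·) of this kind can be as simple as average pooling; the following NumPy sketch is a hypothetical choice of downscaling kernel (the paper does not specify its exact kernel here):

```python
import numpy as np

def downscale(x, factor=2):
    """Deterministic transformation d(x): average-pool an HxW(xC) image
    by `factor`, giving a self-supervised representation y = d(x)."""
    h, w = x.shape[0] // factor, x.shape[1] // factor
    x = x[: h * factor, : w * factor]          # crop to a multiple of factor
    return x.reshape(h, factor, w, factor, *x.shape[2:]).mean(axis=(1, 3))
```

Applied to a 4x4 image with factor 2, this yields a 2x2 representation where each entry is the mean of one 2x2 block, so the global structure survives while local detail is discarded.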
We can define the joint distribution of x and y as p(x, y) = p(y|x) p(x), where p(y|x) = δ(y − d(x)) due to the deterministic transformation d(·), with δ(·) the Kronecker delta. Thus, the empirical distribution is δ(y − d(x)) p_data(x). However, since we are interested in decomposing the problem of modelling the complex distribution p(x), we propose to model p(x|y) p(y) instead, and utilize variational inference of the form Q(u, z|x, y) = q(u|y) q(z|x), which yields:

$$\ln p(x, y) \geq \mathbb{E}_{Q}\big[\ln p_{\theta}(x|y, z) + \ln p_{\theta}(z|u, y) + \ln p_{\theta}(y|u) + \ln p(u) - \ln q_{\phi}(z|x) - \ln q_{\phi}(u|y)\big]. \qquad (4)$$

Intuitively, the premise of selfVAE is that the latents u will capture the global structure of the input data and the latents z will encode the information missing between y and x, guiding the model to discover the distribution of the target observations. To highlight the self-supervised part of our model, we refer to it as the self-supervised Variational Auto-Encoder (selfVAE for short). Further, we propose to choose the following distributions:

$$p(v) = \mathcal{N}(v|0, I), \qquad p_{\lambda}(u) = p(v) \prod_{i=1}^{F} \left|\det \frac{\partial f_i(v_{i-1})}{\partial v_{i-1}}\right|^{-1},$$
$$p_{\theta_1}(y|u) = \sum_{i=1}^{I} \pi_i^{(u)}\, D_{\text{logistic}}\big(\mu_i^{(u)}, s_i^{(u)}\big), \qquad p_{\theta_2}(z|y, u) = \mathcal{N}\big(z\,|\,\mu_{\theta_2}(y, u), \operatorname{diag}(\sigma_{\theta_2}(y, u))\big),$$
$$p_{\theta_3}(x|z, y) = \sum_{i=1}^{I} \pi_i^{(z, y)}\, D_{\text{logistic}}\big(\mu_i^{(z, y)}, s_i^{(z, y)}\big),$$
$$q_{\phi_1}(u|y) = \mathcal{N}\big(u\,|\,\mu_{\phi_1}(y), \operatorname{diag}(\sigma_{\phi_1}(y))\big), \qquad q_{\phi_2}(z|x) = \mathcal{N}\big(z\,|\,\mu_{\phi_2}(x), \operatorname{diag}(\sigma_{\phi_2}(x))\big),$$

where D_logistic is the discretized logistic distribution (Salimans et al., 2017), and we utilize a flow-based model for p(u). Notice that we use the discretized logistic distribution because images are represented by integer values between 0 and 255; for integer-valued random variables, continuous distributions like the Gaussian are inappropriate. | This paper focuses on the task of generating high-quality data with generative models.
To be specific, the authors propose a variant of the variational auto-encoder (VAE), named self-supervised VAE. The intuition behind this model is that by breaking the complex generation task down into simpler/smaller ones, complex models can be trained stably with guidance from the simpler-level tasks. To this end, a hierarchical generative model with multiple levels of latent variables is proposed, in which lower-level latent variables are governed by lower-level data features. The lower-level feature is generally obtained by a deterministic and discrete transformation, such as downscaling. In addition, to further improve the modeling capability, a flow-based prior is proposed to fit the data distribution. Experiments were conducted to evaluate the performance of the proposed generative model. | SP:0b8ee1b00665d1bfec7342a1eefb0caf5521bbae
Self-Supervised Variational Auto-Encoders | Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoders (selfVAE), that utilizes deterministic and discrete transformations of the data. This class of models allows performing both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and show its benefits compared to the VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression, where we can trade off memory for better data quality and vice versa. We present the performance of our approach on three benchmark image datasets (Cifar10, Imagenette64, and CelebA). 1 INTRODUCTION. The framework of variational auto-encoders (VAEs) provides a principled approach to learning latent-variable models. As it utilizes a meaningful low-dimensional latent space with density estimation capabilities, it forms an attractive solution for generative modelling tasks. However, its performance in terms of test log-likelihood and quality of generated samples is often disappointing, and thus many modifications have been proposed. In general, one can obtain a tighter lower bound, and thus a more powerful and flexible model, by improving any of the following three components: the encoder (Rezende et al., 2014; van den Berg et al., 2018; Hoogeboom et al., 2020; Maaløe et al., 2016), the prior (or marginal over latents) (Chen et al., 2016; Habibian et al., 2019; Lavda et al.
, 2020; Lin & Clark, 2020; Tomczak & Welling, 2017) and the decoder (Gulrajani et al., 2016). Recent studies have shown that by employing deep hierarchical architectures and carefully designing the building blocks of the neural networks, VAEs can successfully model high-dimensional data and reach state-of-the-art test likelihoods (Zhao et al., 2017; Maaløe et al., 2019; Vahdat & Kautz, 2020). In this work, we present a novel class of VAEs, called self-supervised Variational Auto-Encoders, in which we introduce additional variables to VAEs that result from discrete and deterministic transformations of the observed images. Since the transformations are deterministic and provide a specific aspect of the images (e.g., contextual information through edge detection or downscaling), we refer to them as self-supervised representations. The introduction of the discrete and deterministic variables allows training deep hierarchical models efficiently by decomposing the task of learning a highly complex distribution into training smaller conditional distributions. In this way, the model integrates prior knowledge about the data while still being able to synthesize unconditional samples. Furthermore, the discrete and deterministic variables can be used to conditionally reconstruct data, which can be of great use in data compression and super-resolution tasks. We make the following contributions: i) we propose an extension of the VAE framework by incorporating self-supervised representations of the data; ii) we analyze the impact of modelling natural images with different data transformations as self-supervised representations; iii) this new type of generative model (the self-supervised Variational Auto-Encoder), which is able to perform both conditional and unconditional sampling, demonstrates improved quantitative performance in terms of density estimation and generative capabilities on image benchmarks. 2 BACKGROUND.
2.1 VARIATIONAL AUTO-ENCODERS. Let x ∈ X^D be a vector of observable variables, where X ⊆ R or X ⊆ Z, and let z ∈ R^M denote a vector of latent variables. Since calculating p_ϑ(x) = ∫ p_ϑ(x, z) dz is computationally intractable for non-linear stochastic dependencies, a variational family of distributions can be used for approximate inference. Then the following objective function can be derived, namely the evidence lower bound (ELBO) (Jordan et al., 1999):

$$\ln p_{\vartheta}(x) \geq \mathbb{E}_{q_{\phi}(z|x)}\big[\ln p_{\theta}(x|z) + \ln p_{\lambda}(z) - \ln q_{\phi}(z|x)\big], \qquad (1)$$

where q_φ(z|x) is the variational posterior (or encoder), p_θ(x|z) is the conditional likelihood function (or decoder) and p_λ(z) is the prior (or marginal); φ, θ and λ denote parameters. The expectation is approximated by Monte Carlo sampling while exploiting the reparameterization trick in order to obtain unbiased gradient estimators. The models are parameterized by neural networks. This generative framework is known as the Variational Auto-Encoder (VAE) (Kingma & Welling, 2013; Rezende et al., 2014). 2.2 VAES WITH BIJECTIVE PRIORS. Even though the lower bound suggests that the prior plays a crucial role in improving the variational bounds, usually a fixed distribution is used, e.g., a standard multivariate Gaussian. While relatively simple and computationally cheap, the fixed prior is known to result in over-regularized models that tend to ignore most of the latent dimensions (Burda et al., 2015; Hoffman & Johnson, 2016; Tomczak & Welling, 2017). Moreover, even with powerful encoders, VAEs may still fail to match the variational posterior to a unit Gaussian prior (Rosca et al., 2018). However, it is possible to obtain a rich, multi-modal prior distribution p(z) by using a bijective (or flow-based) model (Dinh et al., 2016). Formally, given a latent code z, a base distribution p_V(v) over latent variables v ∈ R^M, and f : R^M →
ℝ^M consisting of a sequence of L diffeomorphic transformations¹ , where f_i ( v_{i−1} ) = v_i , v_0 = v and v_L = z , the change of variables can be used sequentially to express the distribution of z as a function of v as follows : log p ( z ) = log pV ( v ) − ∑_{i=1}^{L} log | det ∂f_i ( v_{i−1} ) / ∂v_{i−1} | , ( 2 ) where | det ∂f_i ( v_{i−1} ) / ∂v_{i−1} | is the Jacobian-determinant of the i-th transformation . Thus , using the bijective prior yields the following lower bound : ln p ( x ) ≥ E_{q ( z|x ) } [ log p_θ ( x|z ) − log q ( z|x ) + log pV ( v_0 ) + ∑_{i=1}^{L} log | det ∂f_i^{−1} ( v_i ) / ∂v_i | ] . ( 3 ) In this work , we utilize RealNVP ( Dinh et al. , 2016 ) as the prior ; however , any other flow-based model could be used ( Kingma & Dhariwal , 2018 ; Hoogeboom et al. , 2020 ) . For the experiments and an ablation study that show the impact of the bijective prior on VAEs , we refer to Appendix A.1 . 3 METHOD . 3.1 MOTIVATION . The idea of self-supervised learning is about utilizing original unlabeled data to create additional context information . It could be achieved in multiple manners , e.g. , by adding noise to data ( Vincent et al. , 2008 ) or masking data during training ( Zhang et al. , 2017 ) . Self-supervised learning could also be seen as turning an unsupervised model into a supervised one by , e.g. , treating the prediction of next pixels as a classification task ( Hénaff et al. , 2019 ; Oord et al. , 2018 ) . These are only a few examples of a quickly growing research line ( Liu et al. , 2020 ) . ¹That is , invertible and differentiable transformations . [ Figure 1 : i ) stochastic dependencies of the self-supervised VAE ; ii ) the hierarchical selfVAE ] Here , we propose to use non-trainable transformations to obtain information about image data . Our main hypothesis is that since working with high-quality images is challenging , we could alleviate this problem by additionally considering partial information about them .
Fitting a model to images of lower quality , and then enhancing them to match the target distribution , seems to be an easier task overall ( Chang et al. , 2004 ; Gatopoulos et al. , 2020 ) . By incorporating compressed transformations ( i.e. , the self-supervised representations ) that still contain global information , with the premise that they are easier to approximate , the process of modelling a high-dimensional complex density breaks down into simpler tasks . In this way , the expressivity of the model will grow and gradually result in richer , better generations . A positive effect of the proposed framework is that the model allows us to integrate prior knowledge through the image transformations , without losing its unconditional generative functionality . Overall , we end up with a two-level VAE with three latent variables , where one is a data transformation that can be obtained in a self-supervised fashion . In Figure 1 , a schematic representation of the proposed approach with downscaling is presented . A number of exemplary image transformations are presented in Figure 2 . We notice that , even though these transformations discard a lot of information , the global structure is preserved . As a result , in practice the model should have the ability to extract a general concept of the data , and add local information afterwards . In this work , we focus on downscaling ( Figure 2.b , c & d ) and edge detection or sketching ( Fig . 2.i ) . 3.2 MODEL FORMULATION . In our model , we consider representations that result from deterministic and discrete transformations of an image . Formally , we introduce a transformation d : X^D → X^C that takes x and returns an image representation y , e.g. , a downscaled image . Since we lose information about the original image , z could be seen as a variable that compensates for the lost details of x . Further , we propose to introduce an additional latent variable , u ∈ ℝ^N , to model y and z .
We can define the joint distribution of x and y as follows : p ( x , y ) = p ( y|x ) p ( x ) , where p ( y|x ) = δ ( y − d ( x ) ) due to the deterministic transformation d ( · ) ; here δ ( · ) is the Kronecker delta . Thus , the empirical distribution is δ ( y − d ( x ) ) p_data ( x ) . However , since we are interested in decomposing the problem of modeling a complex distribution p ( x ) , we propose to model p ( x|y ) p ( y ) instead , and utilize variational inference of the form Q ( u , z|x , y ) = q ( u|y ) q ( z|x ) , which yields : ln p ( x , y ) ≥ E_Q [ ln p_θ ( x|y , z ) + ln p ( z|u , y ) + ln p ( y|u ) + ln p ( u ) − ln q ( z|x ) − ln q ( u|y ) ] . ( 4 ) Intuitively , the premise for selfVAE is that the latents u will capture the global structure of the input data and the latents z will encode the missing information between y and x , guiding the model to discover the distribution of the target observations . In order to highlight the self-supervised part in our model , we refer to it as the self-supervised Variational Auto-Encoder ( or selfVAE for short ) . Further , we propose to choose the following distributions : p ( v ) = N ( v | 0 , I ) ; p ( u ) = p ( v ) ∏_{i=1}^{F} | det ∂f_i ( v_{i−1} ) / ∂v_{i−1} |^{−1} ; p_{θ1} ( y|u ) = ∑_{i=1}^{I} π_i ( u ) D_logistic ( μ_i ( u ) , s_i ( u ) ) ; q_{φ1} ( u|y ) = N ( u | μ_{φ1} ( y ) , diag ( σ_{φ1} ( y ) ) ) ; q_{φ2} ( z|x ) = N ( z | μ_{φ2} ( x ) , diag ( σ_{φ2} ( x ) ) ) ; p_{θ2} ( z|y , u ) = N ( z | μ_{θ2} ( y , u ) , diag ( σ_{θ2} ( y , u ) ) ) ; p_{θ3} ( x|z , y ) = ∑_{i=1}^{I} π_i ( z , y ) D_logistic ( μ_i ( z , y ) , s_i ( z , y ) ) , where D_logistic is the discretized logistic distribution ( Salimans et al. , 2017 ) , and we utilize a flow-based model for p ( u ) . Notice that we use the discretized logistic distribution because images are represented by values between 0 and 255 ; for integer-valued random variables , other distributions like the Gaussian are inappropriate . | $\bullet$ VAEs can ignore some dimensions of the latent code.
Enforcing the posterior distributions to consider desired factors of variation in the input can be fulfilled by either making them more structured (i.e., quantization as in VQ-VAE-2) or by introducing additional constraints. This paper tackles the problem by applying the latter, using two self-supervised tasks: edge maps and downscaled versions of inputs. | SP:0b8ee1b00665d1bfec7342a1eefb0caf5521bbae |
Self-Supervised Variational Auto-Encoders | Density estimation , compression , and data generation are crucial tasks in artificial intelligence . Variational Auto-Encoders ( VAEs ) constitute a single framework to achieve these goals . Here , we present a novel class of generative models , called self-supervised Variational Auto-Encoder ( selfVAE ) , that utilizes deterministic and discrete transformations of data . This class of models allows performing both conditional and unconditional sampling while simplifying the objective function . First , we use a single self-supervised transformation as a latent variable , where a transformation is either downscaling or edge detection . Next , we consider a hierarchical architecture , i.e. , multiple transformations , and we show its benefits compared to the VAE . The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression tasks , where we can trade off memory for better data quality , and vice versa . We present the performance of our approach on three benchmark image datasets ( Cifar10 , Imagenette64 , and CelebA ) . 1 INTRODUCTION . The framework of variational autoencoders ( VAEs ) provides a principled approach for learning latent-variable models . As it utilizes a meaningful low-dimensional latent space with density estimation capabilities , it forms an attractive solution for generative modelling tasks . However , its performance in terms of the test log-likelihood and quality of generated samples is often disappointing ; thus , many modifications have been proposed . In general , one can obtain a tighter lower bound , and , thus , a more powerful and flexible model , by advancing over the following three components : the encoder ( Rezende et al. , 2014 ; van den Berg et al. , 2018 ; Hoogeboom et al. , 2020 ; Maaløe et al. , 2016 ) , the prior ( or marginal over latents ) ( Chen et al. , 2016 ; Habibian et al. , 2019 ; Lavda et al .
, 2020 ; Lin & Clark , 2020 ; Tomczak & Welling , 2017 ) and the decoder ( Gulrajani et al. , 2016 ) . Recent studies have shown that by employing deep hierarchical architectures and by carefully designing building blocks of the neural networks , VAEs can successfully model high-dimensional data and reach state-of-the-art test likelihoods ( Zhao et al. , 2017 ; Maaløe et al. , 2019 ; Vahdat & Kautz , 2020 ) . In this work , we present a novel class of VAEs , called self-supervised Variational Auto-Encoders , where we introduce additional variables to VAEs that result from discrete and deterministic transformations of observed images . Since the transformations are deterministic , and they provide a specific aspect of images ( e.g. , contextual information through detecting edges or downscaling ) , we refer to them as self-supervised representations . The introduction of the discrete and deterministic variables allows training deep hierarchical models efficiently by decomposing the task of learning a highly complex distribution into training smaller , conditional distributions . In this way , the model allows integrating prior knowledge about the data , but still enables synthesizing unconditional samples . Furthermore , the discrete and deterministic variables could be used to conditionally reconstruct data , which could be of great use in data compression and super-resolution tasks . We make the following contributions : i ) We propose an extension of the VAE framework by incorporating self-supervised representations of the data . ii ) We analyze the impact of modelling natural images with different data transformations as self-supervised representations . iii ) This new type of generative model ( the self-supervised Variational Auto-Encoder ) , which is able to perform both conditional and unconditional sampling , demonstrates improved quantitative performance in terms of density estimation and generative capabilities on image benchmarks . 2 BACKGROUND .
2.1 VARIATIONAL AUTO-ENCODERS . Let x ∈ X^D be a vector of observable variables , where X ⊆ ℝ or X ⊆ ℤ , and let z ∈ ℝ^M denote a vector of latent variables . Since calculating p_ϑ ( x ) = ∫ p_ϑ ( x , z ) dz is computationally intractable for non-linear stochastic dependencies , a variational family of distributions could be used for approximate inference . Then , the following objective function could be derived , namely , the evidence lower bound ( ELBO ) ( Jordan et al. , 1999 ) : ln p_ϑ ( x ) ≥ E_{q_φ ( z|x ) } [ ln p_θ ( x|z ) + ln p_λ ( z ) − ln q_φ ( z|x ) ] , ( 1 ) where q_φ ( z|x ) is the variational posterior ( or the encoder ) , p_θ ( x|z ) is the conditional likelihood function ( or the decoder ) and p_λ ( z ) is the prior ( or marginal ) ; φ , θ and λ denote parameters . The expectation is approximated by Monte Carlo sampling while exploiting the reparameterization trick in order to obtain unbiased gradient estimators . The models are parameterized by neural networks . This generative framework is known as the Variational Auto-Encoder ( VAE ) ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) . 2.2 VAES WITH BIJECTIVE PRIORS . Even though the lower bound suggests that the prior plays a crucial role in improving the variational bounds , usually a fixed distribution is used , e.g. , a standard multivariate Gaussian . While being relatively simple and computationally cheap , the fixed prior is known to result in over-regularized models that tend to ignore most of the latent dimensions ( Burda et al. , 2015 ; Hoffman & Johnson , 2016 ; Tomczak & Welling , 2017 ) . Moreover , even with powerful encoders , VAEs may still fail to match the variational posterior to a unit Gaussian prior ( Rosca et al. , 2018 ) . However , it is possible to obtain a rich , multi-modal prior distribution p ( z ) by using a bijective ( or flow-based ) model ( Dinh et al. , 2016 ) . Formally , given a latent code z , a base distribution pV ( v ) over latent variables v ∈ ℝ^M , and f : ℝ^M →
ℝ^M consisting of a sequence of L diffeomorphic transformations¹ , where f_i ( v_{i−1} ) = v_i , v_0 = v and v_L = z , the change of variables can be used sequentially to express the distribution of z as a function of v as follows : log p ( z ) = log pV ( v ) − ∑_{i=1}^{L} log | det ∂f_i ( v_{i−1} ) / ∂v_{i−1} | , ( 2 ) where | det ∂f_i ( v_{i−1} ) / ∂v_{i−1} | is the Jacobian-determinant of the i-th transformation . Thus , using the bijective prior yields the following lower bound : ln p ( x ) ≥ E_{q ( z|x ) } [ log p_θ ( x|z ) − log q ( z|x ) + log pV ( v_0 ) + ∑_{i=1}^{L} log | det ∂f_i^{−1} ( v_i ) / ∂v_i | ] . ( 3 ) In this work , we utilize RealNVP ( Dinh et al. , 2016 ) as the prior ; however , any other flow-based model could be used ( Kingma & Dhariwal , 2018 ; Hoogeboom et al. , 2020 ) . For the experiments and an ablation study that show the impact of the bijective prior on VAEs , we refer to Appendix A.1 . 3 METHOD . 3.1 MOTIVATION . The idea of self-supervised learning is about utilizing original unlabeled data to create additional context information . It could be achieved in multiple manners , e.g. , by adding noise to data ( Vincent et al. , 2008 ) or masking data during training ( Zhang et al. , 2017 ) . Self-supervised learning could also be seen as turning an unsupervised model into a supervised one by , e.g. , treating the prediction of next pixels as a classification task ( Hénaff et al. , 2019 ; Oord et al. , 2018 ) . These are only a few examples of a quickly growing research line ( Liu et al. , 2020 ) . ¹That is , invertible and differentiable transformations . [ Figure 1 : i ) stochastic dependencies of the self-supervised VAE ; ii ) the hierarchical selfVAE ] Here , we propose to use non-trainable transformations to obtain information about image data . Our main hypothesis is that since working with high-quality images is challenging , we could alleviate this problem by additionally considering partial information about them .
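The change-of-variables computation in Eq ( 2 ) can be sketched numerically . The elementwise affine map below is a purely illustrative stand-in for a learned RealNVP coupling layer ( its Jacobian determinant is simply a^M ) :

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
a, b = 2.0, 0.5                  # invertible map f(v) = a*v + b, |det J| = a**M
v = rng.normal(size=M)           # sample from the base distribution p_V = N(0, I)
z = a * v + b                    # z = f(v)

def log_std_normal(x):
    # log density of a standard M-dimensional Gaussian
    return -0.5 * (x @ x) - 0.5 * len(x) * np.log(2 * np.pi)

# Eq (2): log p(z) = log p_V(v) - sum_i log |det df_i/dv|
log_p_z = log_std_normal(v) - M * np.log(abs(a))

# cross-check against the density of z ~ N(b, a^2 I) written out directly
direct = (-0.5 * np.sum(((z - b) / a) ** 2)
          - M * (0.5 * np.log(2 * np.pi) + np.log(abs(a))))
assert np.isclose(log_p_z, direct)
```

With a deep flow, the single `M * log|a|` term becomes the sum of per-layer log-Jacobian-determinants, exactly as in Eq ( 2 ).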
Fitting a model to images of lower quality , and then enhancing them to match the target distribution , seems to be an easier task overall ( Chang et al. , 2004 ; Gatopoulos et al. , 2020 ) . By incorporating compressed transformations ( i.e. , the self-supervised representations ) that still contain global information , with the premise that they are easier to approximate , the process of modelling a high-dimensional complex density breaks down into simpler tasks . In this way , the expressivity of the model will grow and gradually result in richer , better generations . A positive effect of the proposed framework is that the model allows us to integrate prior knowledge through the image transformations , without losing its unconditional generative functionality . Overall , we end up with a two-level VAE with three latent variables , where one is a data transformation that can be obtained in a self-supervised fashion . In Figure 1 , a schematic representation of the proposed approach with downscaling is presented . A number of exemplary image transformations are presented in Figure 2 . We notice that , even though these transformations discard a lot of information , the global structure is preserved . As a result , in practice the model should have the ability to extract a general concept of the data , and add local information afterwards . In this work , we focus on downscaling ( Figure 2.b , c & d ) and edge detection or sketching ( Fig . 2.i ) . 3.2 MODEL FORMULATION . In our model , we consider representations that result from deterministic and discrete transformations of an image . Formally , we introduce a transformation d : X^D → X^C that takes x and returns an image representation y , e.g. , a downscaled image . Since we lose information about the original image , z could be seen as a variable that compensates for the lost details of x . Further , we propose to introduce an additional latent variable , u ∈ ℝ^N , to model y and z .
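As a concrete illustration of such a deterministic transformation d , a k-fold average-pooling downscale can be sketched as follows ( the paper does not prescribe this exact operator ; it is one simple choice ) :

```python
import numpy as np

def downscale(x, k=2):
    # Deterministic map d: X^D -> X^C, average-pooling an HxWxC image by factor k.
    H, W, C = x.shape
    return x.reshape(H // k, k, W // k, k, C).mean(axis=(1, 3))

x = np.arange(4 * 4 * 3, dtype=float).reshape(4, 4, 3)
y = downscale(x)
assert y.shape == (2, 2, 3)           # compressed representation keeps global structure
assert np.allclose(downscale(x), y)   # deterministic: d(x) is a fixed function of x
```

Because d is deterministic, the "posterior" p ( y|x ) collapses to a point mass on d ( x ), which is exactly what licenses the Kronecker-delta factorization used in the model formulation.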
We can define the joint distribution of x and y as follows : p ( x , y ) = p ( y|x ) p ( x ) , where p ( y|x ) = δ ( y − d ( x ) ) due to the deterministic transformation d ( · ) ; here δ ( · ) is the Kronecker delta . Thus , the empirical distribution is δ ( y − d ( x ) ) p_data ( x ) . However , since we are interested in decomposing the problem of modeling a complex distribution p ( x ) , we propose to model p ( x|y ) p ( y ) instead , and utilize variational inference of the form Q ( u , z|x , y ) = q ( u|y ) q ( z|x ) , which yields : ln p ( x , y ) ≥ E_Q [ ln p_θ ( x|y , z ) + ln p ( z|u , y ) + ln p ( y|u ) + ln p ( u ) − ln q ( z|x ) − ln q ( u|y ) ] . ( 4 ) Intuitively , the premise for selfVAE is that the latents u will capture the global structure of the input data and the latents z will encode the missing information between y and x , guiding the model to discover the distribution of the target observations . In order to highlight the self-supervised part in our model , we refer to it as the self-supervised Variational Auto-Encoder ( or selfVAE for short ) . Further , we propose to choose the following distributions : p ( v ) = N ( v | 0 , I ) ; p ( u ) = p ( v ) ∏_{i=1}^{F} | det ∂f_i ( v_{i−1} ) / ∂v_{i−1} |^{−1} ; p_{θ1} ( y|u ) = ∑_{i=1}^{I} π_i ( u ) D_logistic ( μ_i ( u ) , s_i ( u ) ) ; q_{φ1} ( u|y ) = N ( u | μ_{φ1} ( y ) , diag ( σ_{φ1} ( y ) ) ) ; q_{φ2} ( z|x ) = N ( z | μ_{φ2} ( x ) , diag ( σ_{φ2} ( x ) ) ) ; p_{θ2} ( z|y , u ) = N ( z | μ_{θ2} ( y , u ) , diag ( σ_{θ2} ( y , u ) ) ) ; p_{θ3} ( x|z , y ) = ∑_{i=1}^{I} π_i ( z , y ) D_logistic ( μ_i ( z , y ) , s_i ( z , y ) ) , where D_logistic is the discretized logistic distribution ( Salimans et al. , 2017 ) , and we utilize a flow-based model for p ( u ) . Notice that we use the discretized logistic distribution because images are represented by values between 0 and 255 ; for integer-valued random variables , other distributions like the Gaussian are inappropriate . | This paper targets richer and higher-quality generation with VAE. Two techniques are adopted to achieve the goal: 1).
a bijective model to enrich data generation with a flexible prior; 2). presenting compressed variants of the input data, i.e. self-supervision as an additional condition $y$, for reconstruction. The two techniques interact through a hierarchical sampling process, $... y\sim p(y|u)\rightarrow z\sim p(z|u,y)$, thus benefiting VAE generation with a data-dependent prior and conditional generation. | SP:0b8ee1b00665d1bfec7342a1eefb0caf5521bbae |
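The discretized logistic likelihood used for the selfVAE output distributions above assigns each integer pixel value the mass of a logistic bin of width one ( following the binning of Salimans et al. , 2017 ) . A single-component sketch , with illustrative parameter values :

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def discretized_logistic_logpmf(x, mu, s):
    # log P(x) for integer pixels x in [0, 255]: mass of the bin [x-0.5, x+0.5]
    # under a logistic with location mu and scale s; edge bins absorb the tails.
    upper = np.where(x == 255, 1.0, sigmoid((x + 0.5 - mu) / s))
    lower = np.where(x == 0,   0.0, sigmoid((x - 0.5 - mu) / s))
    return np.log(np.maximum(upper - lower, 1e-12))

xs = np.arange(256)
logp = discretized_logistic_logpmf(xs, mu=127.0, s=10.0)
assert np.isclose(np.exp(logp).sum(), 1.0, atol=1e-6)  # the 256 bin masses sum to one
```

Because the bins tile [0, 255] exactly (interior bins telescope and the edge bins absorb the tails), the distribution is a proper pmf over pixel values, unlike a continuous Gaussian density applied to integers.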
On the Dynamics of Training Attention Models | 1 INTRODUCTION . Attention-based neural networks have been broadly adopted in many natural language models for machine translation ( Bahdanau et al. , 2014 ; Luong et al. , 2015 ) , sentiment classification ( Wang et al. , 2016 ) , image caption generation ( Xu et al. , 2015 ) , unsupervised representation learning ( Devlin et al. , 2019 ) , etc . In particular , attention is the key ingredient of the powerful transformers ( Vaswani et al. , 2017 ) . Despite its great successes established empirically , the working mechanism of attention has not been well understood ( see Section 2 ) . This paper sets up a simple text classification task and considers a basic neural network model with the most straightforward attention mechanism . We study the model ’ s training trajectory to understand why attention can attend to the discriminative words ( referred to as the topic words ) . More specifically , in this task , each sentence is treated as a bag of words , and its class label , or topic , is indicated by a topic word . The model we consider involves a basic attention mechanism , which creates weighting factors to combine the word embedding vectors into a “ context vector ” ; the context vector is then passed to a classifier . In this setting , we prove a closed-form relationship between the topic word embedding norm and the inner product of its key and the query , referred to as the “ score ” , during gradient-descent training . It is particularly remarkable that this relationship holds irrespective of the classifier architecture or configuration . This relationship suggests the existence of a “ synergy ” in the amplification of the topic word score and its word embedding ; that is , the growth of each quantity promotes the growth of the other . This , in turn , allows the topic word embedding to stand out rapidly in the context vector during training .
Moreover , when the model takes a fixed linear classifier , this relationship allows rigorous proofs of this “ mutual promotion ” phenomenon and of the convergence of training to the topic words . Our theoretical results and their implications are corroborated by experiments performed on a synthetic dataset and real-world datasets . Additional insights are also obtained from these experiments . For example , low-capacity classifiers tend to give stronger training signals to the attention module . The “ mutual promotion ” effect implied by the discovered relationship can also exhibit itself as “ mutual suppression ” in the early training phase . Furthermore , in the real-world datasets , where perfect delimitation of topic and non-topic words does not exist , interesting training dynamics are observed . Due to length constraints , all proofs are presented in the Appendix . 2 RELATED WORKS . Since 2019 , a series of works have been published to understand the workings and behaviour of attention . One focus of these works pertains to understanding whether an attention mechanism can provide meaningful explanations ( Michel et al. , 2019 ; Voita et al. , 2019 ; Jain & Wallace , 2019 ; Wiegreffe & Pinter , 2019 ; Serrano & Smith , 2020 ; Vashishth et al. , 2020 ) . Most of these works are empirical in nature , for example , analyzing the behaviours of a well-trained attention-based model ( Clark et al. , 2019 ) , observing the impact of altering the output weights of the attention module or pruning a few heads ( Michel et al. , 2019 ; Voita et al. , 2019 ) , or a combination of them ( Jain & Wallace , 2019 ; Vashishth et al. , 2020 ) . Apart from acquiring insights from experiments , Brunner et al . ( 2019 ) and Hahn ( 2020 ) show theoretically that self-attention blocks lack identifiability , where multiple weight configurations may give equally good end predictions . The non-uniqueness of the attention weights therefore makes the architecture lack interpretability .
As a fully connected neural network with infinite width can be seen as a Gaussian process ( Lee et al. , 2018 ) , a few works apply this perspective to understanding attention with an infinite number of heads and infinite width of the network layers ( Yang , 2019 ; Hron et al. , 2020 ) . In this paper , we restrict our study to the more realistic non-asymptotic regime . 3 PROBLEM SETUP . Learning Task To obtain insights into the training dynamics of attention models , we set up a simple topic classification task . Each input sentence contains m non-topic words and one topic word indicating its topic . Note that a topic may have multiple topic words , but a sentence is assumed to include only one of them . Assume that there are J topics that correspond to the mutually exclusive topic word sets T_1 , T_2 , · · · , T_J . Let T = ⋃_{j=1}^{J} T_j be the set of all topic words . The non-topic words are drawn from a dictionary Θ , which is assumed not to contain any topic word . The training set Ψ consists of sentence-topic pairs , where each pair ( χ , y ) is generated by ( 1 ) randomly picking a topic y ∈ { 1 , 2 , · · · , J } , and ( 2 ) picking a topic word from set T_y and combining it with m words drawn uniformly at random from Θ to generate the sentence ( or the bag of words ) χ . In this task , one aims to develop a classifier from the training set that predicts the topic y for a random sentence χ generated in this way . We will consider the case that |Θ| ≫ |T| , which implies that a topic word appears much more frequently in the sentences than a non-topic word . Attention Model For this task , we consider a simple attention mechanism similar to the one proposed by Wang et al . ( 2016 ) . Each word w is associated with two parameters : an embedding ν_w ∈ ℝ^d and a key κ_w ∈ ℝ^{d′} . Based on a global query q ∈ ℝ^{d′} , the context vector of sentence χ is computed by ν̄ ( χ ) = ∑_{w∈χ} ν_w exp ( qᵀκ_w ) / Z ( χ ) , where Z ( χ ) = ∑_{w′∈χ} exp ( qᵀκ_{w′} ) .
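The context-vector computation above can be sketched directly ; the dimensions and initialization scales below are arbitrary choices for illustration :

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_k, m = 16, 8, 5
words = ["topic"] + [f"w{i}" for i in range(m)]              # one topic + m non-topic words
nu    = {w: rng.normal(scale=0.1, size=d)   for w in words}  # embeddings nu_w
kappa = {w: rng.normal(scale=0.1, size=d_k) for w in words}  # keys kappa_w
q     = rng.normal(size=d_k)                                 # fixed global query

def context_vector(chi):
    scores  = np.array([q @ kappa[w] for w in chi])          # scores s_w = q^T kappa_w
    weights = np.exp(scores) / np.exp(scores).sum()          # exp(q^T kappa_w) / Z(chi)
    return weights @ np.array([nu[w] for w in chi])          # attention-weighted embeddings

c = context_vector(words)
assert c.shape == (d,)
```

With near-zero keys, the weights are almost uniform, i.e. the model starts out close to a word-averaging model, which is exactly the initial configuration assumed in the analysis below.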
Then ν̄ ( χ ) is fed into a classifier that predicts the sentence ’ s topic in terms of a distribution over all topics.¹ Denote the loss function by l ( χ , y ) . Our upcoming analysis implies that this attention model , although simple , may capture plenty of insight for understanding the training of more general attention models . Problem Statement Our objective is to investigate the training dynamics , under gradient descent , of this attention model . In particular , we wish to understand if there is an intrinsic mechanism that allows the attention model to discover the topic word and accelerates training . Moreover , we wish to investigate , beyond this setup , how the model is optimized when there is no clear delimitation between topic and non-topic words , as in real-world data . ¹The condition that the attention layer directly attends to the word embeddings merely serves to simplify the analysis in Section 4 , but this condition is not required for most results presented in Sections 4 and 5 . 4 THEORETICAL ANALYSIS . It is common to fix some parameters when we train a model with limited resources . Also : Lemma 1 . Assume q ≠ 0 when initialized . Fixing it does not affect the attention block ’ s capacity . Thus , our upcoming discussion focuses on the case in which the query is fixed . Doing so also allows us to establish a closed-form expression connecting the word ’ s embedding and the inner product of its key and the query . In Appendix B , extra discussions and experimental results reveal that the trainability of the query does not affect the fundamental relationship we are about to present . For a topic word t , let Ψ_t denote the training samples involving it . Then , by gradient descent , ∆ν_t = ( τ / |Ψ| ) ∑_{ ( χ , y ) ∈Ψ_t } ∇_{ν̄ ( χ ) } l ( χ , y ) · exp ( qᵀκ_t ) / Z ( χ ) , ( 1 ) ∆κ_t = ( τ / |Ψ| ) ∑_{ ( χ , y ) ∈Ψ_t } q ( ν_t − ν̄ ( χ ) ) ᵀ ∇_{ν̄ ( χ ) } l ( χ , y ) · exp ( qᵀκ_t ) / Z ( χ ) , ( 2 ) where τ denotes the learning rate .
As it will turn out , an important quantity in this setting is the inner product qᵀκ_w of the query q and the key κ_w , which we denote by s_w and refer to as the score of the word w . Denoting v_w = ‖q‖₂ ν_w , η = τ ‖q‖₂ , v̄ ( χ ) = ∑_{w∈χ} ( exp ( s_w ) / Z ( χ ) ) v_w , and h ( v̄ ( χ ) ; y ) = ∇_{ν̄ ( χ ) } l ( χ , y ) , for a topic word t , the dynamics simplifies to ∆v_t = ( η / |Ψ| ) ∑_{ ( χ , y ) ∈Ψ_t } h ( v̄ ( χ ) ; y ) exp ( s_t ) / Z ( χ ) , ( 3 ) ∆s_t = ( η / |Ψ| ) ∑_{ ( χ , y ) ∈Ψ_t } ( v_t − v̄ ( χ ) ) ᵀ h ( v̄ ( χ ) ; y ) exp ( s_t ) / Z ( χ ) . ( 4 ) In the rest of the paper , whenever we refer to the embedding of word t , we actually mean v_t , not ν_t . Our analysis assumes the word embeddings are sampled i.i.d . from a distribution with mean zero and variance σ²/d , where σ² is assumed close to zero . The word keys and the query are also sampled from zero-mean distributions with a possibly different variance . We assume that this variance is so small that the initial word scores are approximately zero . This assumption on the initial configuration corresponds to the attention model starting as a word-averaging model , and allows us to investigate how the model deviates from this initial setting with training . We also assume the derivative h ( v̄ ( χ ) ; y ) of l is Lipschitz continuous in v̄ ( χ ) throughout training . Further , the assumption in Section 3 that the number of non-topic words |Θ| is much larger than the number of topic words |T| implies that , with a sufficient number of training samples , the occurrence rate of a topic word is significantly higher than that of the non-topic ones . This then justifies the following assumption , which we will use throughout our analysis . Assumption 1 . The scores and the embeddings of the non-topic words are nearly unchanged compared to their counterparts for the topic words . Hence , our upcoming analysis will treat the scores and embeddings of the non-topic words as constants . Assumption 1 will be validated by experimental results presented in Section 5 .
By selecting a sufficiently small η , we can take the gradient-descent updates in Eq ( 3 ) and Eq ( 4 ) to their continuous-time limit and get² dv_t/dt = ( η / |Ψ| ) ∑_{ ( χ , y ) ∈Ψ_t } h ( v̄ ( χ ) ; y ) exp ( s_t ) / Z ( χ ) , ( 5 ) ds_t/dt = ( η / |Ψ| ) ∑_{ ( χ , y ) ∈Ψ_t } ( v_t − v̄ ( χ ) ) ᵀ h ( v̄ ( χ ) ; y ) exp ( s_t ) / Z ( χ ) . ( 6 ) ²Reversely , Eq ( 3 ) is a discretized approximation of Eq ( 5 ) : v_t ( t+1 ) − v_t ( t ) = ∫_t^{t+1} ( dv_t ( t′ ) /dt′ ) dt′ ≈ 1 · dv_t ( t ) /dt = ∆v_t ( t ) . The approximation becomes accurate if v_t ( t+1 ) is close to v_t ( t ) , which can be achieved by choosing a sufficiently small η . Likewise , Eq ( 4 ) is a discretized approximation of Eq ( 6 ) . We can then characterize the update of the score and the embedding of a topic word as a continuous-time dynamical system , stated in Lemma 2 . The same technique has been used to analyze the training of neural networks in other contexts ( Saxe et al. , 2014 ; Greydanus et al. , 2019 ) . Lemma 2 . For sufficiently small η and σ² , the score s_t and embedding v_t of topic word t satisfy dv_t/dt = ( η |Ψ_t| / |Ψ| ) 〈 h ( v̄ ( χ ) ; y ) exp ( s_t ) / Z ( χ ) 〉_{Ψ_t} , ( 7 ) ds_t/dt = [ ( v_t − 〈 v̄ ( χ\t ) 〉_{Ψ_t} ) ᵀ dv_t/dt ] 〈 ( exp ( s_t ) + Z ( χ\t ) ) / Z ( χ\t ) 〉_{Ψ_t}^{−1} , ( 8 ) where Z ( χ\t ) = ∑_{w∈χ\{t}} exp ( s_w ) , v̄ ( χ\t ) = ∑_{w∈χ\{t}} v_w exp ( s_w ) / Z ( χ\t ) , and 〈 · 〉_{Ψ_t} denotes taking the sample mean over the set Ψ_t . Eq ( 7 ) implies that the speed of moving v_t along the direction of 〈 h ( v̄ ( χ ) ; y ) 〉_{Ψ_t} is controlled by the attention weight exp ( s_t ) / Z ( χ ) . Eq ( 8 ) shows that s_t increases if and only if v_t has a greater projection on 〈 h ( v̄ ( χ ) ; y ) 〉_{Ψ_t} than the weighted average of the non-topic word counterparts . Consider a simplified case where 〈 h ( v̄ ( χ ) ; y ) 〉_{Ψ_t} is fixed . Since the change of v_t is much faster than that of the non-topic word counterparts , v_t will have a larger projection on 〈 h ( v̄ ( χ ) ; y ) 〉_{Ψ_t} after a few epochs of training .
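Under the idealizations used above ( non-topic scores and embeddings frozen at zero , a single topic word per sentence , and a fixed surrogate gradient field standing in for 〈 h 〉 ), Eqs ( 3 ) – ( 4 ) can be integrated with small steps . The toy run below is an assumption-laden sketch, and it numerically reproduces the conserved score–embedding relationship implied by the analysis: ½‖v_t‖² ≈ s_t + ( exp ( s_t ) − 1 ) / m .

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, eta = 8, 20, 1e-3
v = rng.normal(scale=1e-3, size=d)     # topic-word embedding v_t, near-zero init
s = 0.0                                # topic-word score s_t, zero init
g = rng.normal(size=d)
g /= np.linalg.norm(g)                 # fixed surrogate for <h(vbar; y)>

for _ in range(20000):
    Z = np.exp(s) + m                  # m non-topic words with scores frozen at 0
    w = np.exp(s) / Z                  # attention weight of the topic word
    vbar = w * v                       # context vector (non-topic embeddings ~ 0)
    dv = eta * g * w                   # Eq (3)
    ds = eta * ((v - vbar) @ g) * w    # Eq (4)
    v, s = v + dv, s + ds

lhs = s + (np.exp(s) - 1.0) / m        # score side of the score/embedding identity
rhs = 0.5 * np.linalg.norm(v) ** 2     # embedding-norm side
assert s > 0.05                        # the score grows: the topic word gets attended to
assert abs(lhs - rhs) < 1e-2           # the two sides stay locked together
```

The run illustrates the mutual promotion: as ‖v_t‖ grows along g, the score s_t rises, which raises the attention weight w and in turn accelerates the growth of ‖v_t‖, while the two quantities remain tied by the invariant throughout.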
Then s_t increases , as well as its attention weight , which in turn speeds up the extension of the embedding v_t . In fact , such an effect exists in general , as stated in the theorem below , irrespective of whether 〈 h ( v̄ ( χ ) ; y ) 〉_{Ψ_t} is fixed . Theorem 1 . In the setting of Lemma 2 , from epoch t₀ to t₁ , the topic word score s_t and its embedding v_t satisfy [ s_t ( t ) + exp ( s_t ( t ) ) 〈 1 / Z ( χ\t ) 〉_{Ψ_t} ]_{t₀}^{t₁} = [ ( 1/2 ) ‖ v_t ( t ) − 〈 v̄ ( χ\t ) 〉_{Ψ_t} ‖₂² ]_{t₀}^{t₁} . ( 9 ) Following from Lemma 2 , this theorem implies a positive relationship between the topic word score and the distance between v_t and the non-topic word embedding average 〈 v̄ ( χ\t ) 〉_{Ψ_t} . Remarkably , this result makes no reference to 〈 h ( v̄ ( χ ) ; y ) 〉_{Ψ_t} , and is hence independent of it . This implies that the identity in Eq ( 9 ) holds irrespective of the choice and setting of the classifier . Theorem 1 further implies a score and embedding norm ( “ SEN ” in short ) relationship for the topic words : Corollary 1 . In the context of Theorem 1 , by setting t₀ = 0 and t₁ = t , Eq ( 9 ) is reduced to ‖ v_t ( t ) ‖₂ = √( 2 ( s_t ( t ) + exp ( s_t ( t ) ) / m − 1 / m ) ) . ( 10 ) The corollary indicates that ‖ v_t ( t ) ‖₂ is monotonically increasing with s_t ( t ) . So , s_t increases if and only if the point v_t departs from its initial location . That is , if the norm of the topic word embedding increases , it will be attended to . This result is independent of the configuration of all other network layers . Thus , if 〈 h ( v̄ ( χ ) ; y ) 〉_{Ψ_t} has a gradient field that pushes v_t away from its original location , the topic word is expected to be attended to . This statement can be made precise , as in Theorem 2 , when the model uses a linear classifier . Theorem 2 .
Assume the model has a fixed classifier in the form c ( v̄ ( χ ) ) = softmax ( UT v̄ ( χ ) ) , where the columns of U are linearly independent , and the model is trained using gradient descent with the cross-entropy loss . As training proceeds , the model will attend to the topic word in every input sentence and have its training loss approach zero . It is notable that the theorem holds broadly for any arbitrary fixed linear classifier ( subjective to the mild linear independence constraint of its parameter U ) . Additionally , we anticipate that this result holds for a much wider family of classifiers including trainable and even nonlinear ones . But rigorous proof appears difficult to obtain in such settings , and we will corroborate this claim in an experimental study in Section 5 . To sum up , in this section , we have shown two main results : ( a ) there is a closed-form positive relationship , the SEN relationship , between the topic word score and its embedding norm , which is independent of the configuration of the classifier . ( b ) the model , equipped with a fixed linear classifier stated in Theorem 2 , can be trained to have all topic words attended to . | This paper aims to prove and illustrate that attention components are defined during training by gradients that mutually amplify the embedding and score associated with crucial features. In particular, a word embedding with a high magnitude increases the gradient following the attention score for the same word, while a high attention score increases the gradient directed at the word's embedding. In addition to a proof that treats behavior during training as a dynamical system under a large suite of assumptions, they test the analytic predictions on a synthetic dataset following the same suite of assumptions. 
They then test on a natural language data set and discuss where it diverges from the analytic and synthetic findings, concluding that the difference is a result of competition between different words associated with a label. | SP:58c220b21af74c85e23004a9bad82e4d0dd7333c |
On the Dynamics of Training Attention Models | 1 INTRODUCTION . Attention-based neural networks have been broadly adopted in many natural language models for machine translation ( Bahdanau et al. , 2014 ; Luong et al. , 2015 ) , sentiment classification ( Wang et al. , 2016 ) , image caption generation ( Xu et al. , 2015 ) , and the unsupervised representation learning ( Devlin et al. , 2019 ) , etc . Particularly in the powerful transformers ( Vaswani et al. , 2017 ) , attention is its key ingredient . Despite its great successes established empirically , the working mechanism of attention has not been well understood ( see Section 2 ) . This paper sets up a simple text classification task and considers a basic neural network model with the most straightforward attention mechanism . We study the model ’ s training trajectory to understand why attention can attend to the discriminative words ( referred to as the topic words ) . More specifically , in this task , each sentence is treated as a bag of words , and its class label , or topic , is indicated by a topic word . The model we consider involves a basic attention mechanism , which creates weighting factors to combine the word embedding vectors into a “ context vector ” ; the context vector is then passed to a classifier . In this setting , we prove a closed-form relationship between the topic word embedding norm and the inner product of its key and the query , referred to as the “ score ” , during gradient-descent training . It is particularly remarkable that this relationship holds irrespective of the classifier architecture or configuration . This relationship suggests the existence of a “ synergy ” in the amplification of the topic word score and its word embedding ; that is , the growths of the two quantities promote each other . This , in turn , allows the topic word embedding to stand out rapidly in the context vector during training . 
Moreover , when the model takes a fixed linear classifier , this relationship allows rigorous proofs of this “ mutual promotion ” phenomenon and the convergence of training to the topic words . Our theoretical results and their implications are corroborated by experiments performed on a synthetic dataset and real-world datasets . Additional insights are also obtained from these experiments . For example , low-capacity classifiers tend to give stronger training signals to the attention module . The “ mutual promotion ” effect implied by the discovered relationship can also exhibit itself as “ mutual suppression ” in the early training phase . Furthermore , in the real-world datasets , where perfect delimitation of topic and non-topic words does not exist , interesting training dynamics is observed . Due to length constraints , all proofs are presented in Appendix . 2 RELATED WORKS . Since 2019 , a series of works have been published to understand the working and behaviour of attention . One focus of these works pertains to understanding whether an attention mechanism can provide meaningful explanations ( Michel et al. , 2019 ; Voita et al. , 2019 ; Jain & Wallace , 2019 ; Wiegreffe & Pinter , 2019 ; Serrano & Smith , 2020 ; Vashishth et al. , 2020 ) . Most of these works are empirical in nature , for example , by analyzing the behaviours of a well-trained attention-based model ( Clark et al. , 2019 ) , or observing the impact of altering the output weights of the attention module or pruning a few heads ( Michel et al. , 2019 ; Voita et al. , 2019 ) , or a combination of them ( Jain & Wallace , 2019 ; Vashishth et al. , 2020 ) . Apart from acquiring insights from experiments , Brunner et al . ( 2019 ) and Hahn ( 2020 ) show theoretically that the self-attention blocks lacks identifiability , where multiple weight configurations may give equally good end predictions . The non-uniqueness of the attention weights therefore makes the architecture lack interpretability . 
As a fully connected neural network with infinite width can be seen as a Gaussian process ( Lee et al. , 2018 ) , a few works apply this perspective to understanding attention with an infinite number of heads and infinite width of the network layers ( Yang , 2019 ; Hron et al. , 2020 ) . In this paper , we restrict our study to the more realistic non-asymptotic regime . 3 PROBLEM SETUP . Learning Task To obtain insights into the training dynamics of attention models , we set up a simple topic classification task . Each input sentence contains m non-topic words and one topic word indicating its topic . Note that a topic may have multiple topic words , but a sentence is assumed to include only one of them . Assume that there are J topics that correspond to the mutually exclusive topic word sets T1 , T2 , · · · , TJ . Let T = ⋃_{j=1}^{J} Tj be the set of all topic words . The non-topic words are drawn from a dictionary Θ , which is assumed not to contain any topic word . The training set Ψ consists of sentence-topic pairs , where each pair ( χ , y ) is generated by ( 1 ) randomly picking a topic y ∈ { 1 , 2 , · · · , J } , and ( 2 ) picking a topic word from the set Ty and combining it with m words drawn uniformly at random from Θ to generate the sentence ( or the bag of words ) χ . In this task , one aims to develop a classifier from the training set that predicts the topic y for a random sentence χ generated in this way . We will consider the case that |Θ| ≫ |T| , which implies that a topic word appears much more frequently in the sentences than a non-topic word . Attention Model For this task , we consider a simple attention mechanism similar to the one proposed by Wang et al . ( 2016 ) . Each word w is associated with two parameters : an embedding νw ∈ R^d and a key κw ∈ R^{d′} . Based on a global query q ∈ R^{d′} , the context vector of sentence χ is computed by ν̄ ( χ ) = ∑_{w∈χ} νw exp ( q^T κw ) / Z ( χ ) , where Z ( χ ) = ∑_{w′∈χ} exp ( q^T κw′ ) . 
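As a minimal sketch of the context-vector computation just described — dimensions, vocabulary, and initialization scales are all made up for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_key = 8, 8                      # embedding and key dimensions (illustrative)
vocab = ["topic_A", "w1", "w2", "w3", "w4", "w5"]

# each word w carries an embedding nu_w and a key kappa_w; one global query q
nu = {w: rng.normal(0.0, 0.1, d) for w in vocab}
kappa = {w: rng.normal(0.0, 0.1, d_key) for w in vocab}
q = rng.normal(0.0, 0.1, d_key)

def context_vector(sentence):
    """nu_bar(chi) = sum_w nu_w * exp(q^T kappa_w) / Z(chi)."""
    scores = np.array([q @ kappa[w] for w in sentence])
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax attention weights
    return weights @ np.array([nu[w] for w in sentence])

chi = ["topic_A", "w1", "w2", "w3", "w4", "w5"]      # one topic word, m = 5 others
v_bar = context_vector(chi)
assert v_bar.shape == (d,)
```

The context vector is then handed to whatever classifier sits on top, as in the text.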
Then ν̄ ( χ ) is fed into a classifier that predicts the sentence ’ s topic in terms of a distribution over all topics.1 Denote the loss function by l ( χ , y ) . Our upcoming analysis implies that this attention model , although simple , may capture plenty of insight in understanding the training of more general attention models . Problem Statement Our objective is to investigate the training dynamics , under gradient descent , of this attention model . In particular , we wish to understand if there is an intrinsic mechanism that allows the attention model to discover the topic word and accelerates training . Moreover , we wish to investigate , beyond this setup , how the model is optimized when there is no clear delimitation between topic and non-topic words , as in real-world data . 1The condition that the attention layer directly attends to the word embeddings merely serves to simplify the analysis in Section 4 , but this condition is not required for most results presented in Sections 4 and 5 . More discussions are given in Appendix A in this regard . 4 THEORETICAL ANALYSIS . It is common to fix some parameters when we train a model with limited resources . Also , as Lemma 1 shows , doing so here costs nothing : Lemma 1 . Assume q ≠ 0 when initialized . Fixing it does not affect the attention block ’ s capacity . Thus , our upcoming discussion focuses on the case in which the query is fixed . Doing so also allows us to establish a closed-form expression connecting the word ’ s embedding and the inner product of its key and the query . In Appendix B , extra discussions and experimental results reveal that the trainability of the query does not affect the fundamental relationship we are about to present . For a topic word t , let Ψt denote the training samples involving it . Then , by gradient descent , ∆νt = ( τ / |Ψ| ) ∑_{ ( χ , y ) ∈Ψt } ∇ν̄ ( χ ) l ( χ , y ) exp ( q^T κt ) / Z ( χ ) , ( 1 ) ∆κt = ( τ / |Ψ| ) ∑_{ ( χ , y ) ∈Ψt } q ( νt − ν̄ ( χ ) )^T ∇ν̄ ( χ ) l ( χ , y ) exp ( q^T κt ) / Z ( χ ) , ( 2 ) where τ denotes the learning rate . 
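The chain-rule sums behind Eqs ( 1 ) and ( 2 ) could be sketched as below. This is a hypothetical implementation: `topic_word_grads`, the array layout, and the caller-supplied `grad_loss` (standing in for ∇ν̄ l) are all names introduced for the example, and only the bracketed sums are accumulated — a descent step would then scale them by the learning rate τ, with the sign convention as in the paper.

```python
import numpy as np

def topic_word_grads(nu_t, kappa_t, q, batch, grad_loss):
    """Accumulate the Eq (1)-(2) sums for topic word t over its samples Psi_t.

    batch: list of (embeds, keys, y); row 0 of embeds/keys is the topic word t.
    grad_loss(v_bar, y): returns h = d l / d v_bar for the chosen classifier.
    """
    g_nu = np.zeros_like(nu_t)
    g_kappa = np.zeros_like(kappa_t)
    for embeds, keys, y in batch:
        s = keys @ q
        w = np.exp(s) / np.exp(s).sum()             # attention weights exp(s_w)/Z(chi)
        v_bar = w @ embeds                          # context vector
        h = grad_loss(v_bar, y)
        g_nu += h * w[0]                            # Eq (1) summand
        g_kappa += q * ((nu_t - v_bar) @ h) * w[0]  # Eq (2) summand
    return g_nu / len(batch), g_kappa / len(batch)

# toy check with a squared-error surrogate gradient (illustrative only)
rng = np.random.default_rng(0)
embeds = rng.normal(0.0, 0.1, (4, 6))
keys = rng.normal(0.0, 0.1, (4, 6))
q = rng.normal(0.0, 0.1, 6)
g_nu, g_kappa = topic_word_grads(embeds[0], keys[0], q,
                                 [(embeds, keys, 0)],
                                 lambda v, y: v - np.ones(6))
assert g_nu.shape == (6,) and g_kappa.shape == (6,)
```

Note that, as Eq ( 2 ) makes explicit, the key update is always parallel to the query q — which is why the score sw = qT κw captures everything the key contributes.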
As it will turn out , an important quantity in this setting is the inner product q^T κw of the query q and the key κw , which we denote by sw and refer to as the score of the word w. Denoting vw = ||q||_2 νw , η = τ ||q||_2 , v̄ ( χ ) = ∑_{w∈χ} ( exp ( sw ) / Z ( χ ) ) vw , and h ( v̄ ( χ ) ; y ) = ∇ν̄ ( χ ) l ( χ , y ) , for a topic word t the dynamics simplify to ∆vt = ( η / |Ψ| ) ∑_{ ( χ , y ) ∈Ψt } h ( v̄ ( χ ) ; y ) exp ( st ) / Z ( χ ) , ( 3 ) ∆st = ( η / |Ψ| ) ∑_{ ( χ , y ) ∈Ψt } ( vt − v̄ ( χ ) )^T h ( v̄ ( χ ) ; y ) exp ( st ) / Z ( χ ) . ( 4 ) In the rest of the paper , whenever we refer to the embedding of word t , we actually mean vt , not νt . Our analysis assumes the word embeddings are sampled i.i.d . from a distribution with mean zero and variance σ2/d , where σ2 is assumed close to zero . The word keys and the query are also sampled from zero-mean distributions with a possibly different variance . We assume that this variance is so small that the initial word scores are approximately zero . This assumption on the initial configuration corresponds to the attention model starting as a word-averaging model , and allows us to investigate how the model deviates from this initial setting with training . We also assume the derivative h ( v̄ ( χ ) ; y ) of l is Lipschitz continuous in v̄ ( χ ) throughout training . Further , the assumption in Section 3 that the number of non-topic words |Θ| is much larger than the number of topic words |T| implies that , with a sufficient number of training samples , the occurrence rate of a topic word is significantly higher than that of the non-topic ones . This then justifies the following assumption , which we will use throughout our analysis . Assumption 1 . The scores and the embeddings of the non-topic words are nearly unchanged compared to their counterparts for the topic words . Hence , our upcoming analysis will treat the scores and embeddings of the non-topic words as constants . Assumption 1 will be validated by experimental results presented in Section 5 . 
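The initialization assumption — near-zero scores, so the model starts as a word-averaging model — is easy to verify numerically. The scales below are assumptions chosen for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 16, 6                   # embedding dim, words per sentence (illustrative)
sigma = 1e-3

emb = rng.normal(0.0, sigma / np.sqrt(d), (n, d))   # variance sigma^2 / d
keys = rng.normal(0.0, 1e-4, (n, d))                # tiny key/query variance
q = rng.normal(0.0, 1e-4, d)

scores = keys @ q              # all approximately zero at initialization
w = np.exp(scores) / np.exp(scores).sum()
v_bar = w @ emb

assert np.allclose(w, 1.0 / n, atol=1e-6)               # near-uniform attention
assert np.allclose(v_bar, emb.mean(axis=0), atol=1e-8)  # word-averaging model
```

With the scores pinned near zero, the attention weights are effectively uniform, so the context vector coincides with the plain word average — the departure from this state is exactly what the dynamics in Eqs ( 3 )-( 4 ) describe.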
By selecting a sufficiently small η , we can take the gradient-descent updates in Eq ( 3 ) and Eq ( 4 ) to its continuous-time limit and get2 dvt dt = η |Ψ| ∑ ( χ , y ) ∈Ψt h ( v̄ ( χ ) ; y ) exp ( st ) Z ( χ ) ( 5 ) dst dt = η |Ψ| ∑ ( χ , y ) ∈Ψt ( vt − v̄ ( χ ) ) T h ( v̄ ( χ ) ; y ) exp ( st ) Z ( χ ) . ( 6 ) 2Reversely , Eq ( 3 ) is a discretized approximation of Eq ( 5 ) : vt ( t+1 ) −vt ( t ) = ∫ t+1 t dvt ( t ′ ) dt′ dt ′ ≈ 1· dvt ( t ) dt = ∆vt ( t ) . The approximation becomes accurate if vt ( t + 1 ) is close to vt ( t ) , which can be achieved by choosing a sufficiently small η . Likewise , Eq ( 4 ) is a discretized approximation of Eq ( 6 ) . We can then characterize the update of the score and the embedding of a topic word as a continuoustime dynamical system stated in Lemma 2 . The same technique has been used to analyze the training of neural networks in other contexts ( Saxe et al. , 2014 ; Greydanus et al. , 2019 ) . Lemma 2 . For sufficiently small η and σ2 , the score st and embedding vt of topic word t satisfy dvt dt = η|Ψt| |Ψ| 〈 h ( v̄ ( χ ) ; y ) exp ( st ) Z ( χ ) 〉 Ψt , ( 7 ) dst dt = [ ( vt − 〈v̄ ( χ \ t ) 〉Ψt ) T dvt dt ] 〈 exp ( st ) + Z ( χ \ t ) Z ( χ \ t ) 〉−1 Ψt , ( 8 ) where Z ( χ \ t ) = ∑w∈χ\ { t } exp ( sw ) , v̄ ( χ \ t ) = ∑w∈χ\ { t } vw exp ( sw ) Z ( χ\t ) , and 〈 · 〉Ψt denotes taking sample mean over the set Ψt . Eq ( 7 ) implies the speed of moving vt along the direction of 〈h ( v̄ ( χ ) ; y ) 〉Ψt is controlled by the attention weight exp ( st ) Z ( χ ) . Eq ( 8 ) shows that vt increases if and only if vt has a greater projection on 〈h ( v̄ ( χ ) ; y ) 〉Ψt than the weighted average of the non-topic word counterparts . Consider a simplified case where 〈h ( v̄ ( χ ) ; y ) 〉Ψt is fixed . Since the change of vt is much faster than the non-topic word counterparts , vt will have a larger projection on 〈h ( v̄ ( χ ) ; y ) 〉Ψt after a few epochs of training . 
Then st increases as well as its attention weight , which in turn speeds up the extension of the embedding vt . This observation reveals a mutual enhancement effect between the score increment and the embedding elongation . In fact such an effect exists in general , as stated in the theorem below , irrespective of whether 〈h ( v̄ ( χ ) ; y ) 〉Ψt is fixed . Theorem 1 . In the setting of Lemma 2 , from epoch t0 to t1 , the topic word score st and its embedding vt satisfy [ st ( t ) + exp ( st ( t ) ) 〈 1 Z ( χ \ t ) 〉 Ψt ] t1 t0 = [ 1 2 ||vt ( t ) − 〈v̄ ( χ \ t ) 〉Ψt || 2 2 ] t1 t0 . ( 9 ) Following from Lemma 2 , this theorem implies a positive relationship between the topic word score and the distance between vt and the non-topic word embedding average 〈v̄ ( χ \ t ) 〉Ψt . Remarkably this result makes no reference to 〈h ( v̄ ( χ ) ; y ) 〉Ψt , hence independent of it . This implies the identity in Eq ( 9 ) holds irrespective of the choice and setting of the classifier . Theorem 1 further implies a score and embedding norm ( “ SEN ” in short ) relationship for the topic words : Corollary 1 . In the context of Theorem 1 , by setting t0 = 0 and t1 = t , Eq ( 9 ) is reduced to ||vt ( t ) ||2 = √ 2 ( st ( t ) + exp st ( t ) m − 1 m ) , ( 10 ) The corollary indicates that ||vt ( t ) ||2 is monotonically increasing with st ( t ) . So , st increases if and only if the point vt departs from its initial location . That is , if the norm of the topic word embedding increases , it will be attended to . This result is independent of the configuration of all other network layers . Thus , if 〈h ( v̄ ( χ ) ; y ) 〉Ψt has a gradient field that pushes vt away from its original location , the topic word is expected to be attended to . This statement can be made precise , as in Theorem 2 , when the model uses a linear classifier . Theorem 2 . 
Assume the model has a fixed classifier in the form c ( v̄ ( χ ) ) = softmax ( UT v̄ ( χ ) ) , where the columns of U are linearly independent , and the model is trained using gradient descent with the cross-entropy loss . As training proceeds , the model will attend to the topic word in every input sentence and have its training loss approach zero . It is notable that the theorem holds broadly for any arbitrary fixed linear classifier ( subjective to the mild linear independence constraint of its parameter U ) . Additionally , we anticipate that this result holds for a much wider family of classifiers including trainable and even nonlinear ones . But rigorous proof appears difficult to obtain in such settings , and we will corroborate this claim in an experimental study in Section 5 . To sum up , in this section , we have shown two main results : ( a ) there is a closed-form positive relationship , the SEN relationship , between the topic word score and its embedding norm , which is independent of the configuration of the classifier . ( b ) the model , equipped with a fixed linear classifier stated in Theorem 2 , can be trained to have all topic words attended to . | The paper investigates the dynamics of attention mechanism by configurating a controlled experiment on a simple topic classification task and training via gradient descent. Each random sentence in the training data is synthesized to include only one topic word among many. Then the authors try to find an intrinsic mechanism that triggers the attention model to discover the topic word and accelerates training via mutual promotion. They further experiment the evolution of models during optimization when no clear distinction between topic and non-topic words exist like in real data. | SP:58c220b21af74c85e23004a9bad82e4d0dd7333c |
On the Dynamics of Training Attention Models | 1 INTRODUCTION . Attention-based neural networks have been broadly adopted in many natural language models for machine translation ( Bahdanau et al. , 2014 ; Luong et al. , 2015 ) , sentiment classification ( Wang et al. , 2016 ) , image caption generation ( Xu et al. , 2015 ) , and the unsupervised representation learning ( Devlin et al. , 2019 ) , etc . Particularly in the powerful transformers ( Vaswani et al. , 2017 ) , attention is its key ingredient . Despite its great successes established empirically , the working mechanism of attention has not been well understood ( see Section 2 ) . This paper sets up a simple text classification task and considers a basic neural network model with the most straightforward attention mechanism . We study the model ’ s training trajectory to understand why attention can attend to the discriminative words ( referred to as the topic words ) . More specifically , in this task , each sentence is treated as a bag of words , and its class label , or topic , is indicated by a topic word . The model we consider involves a basic attention mechanism , which creates weighting factors to combine the word embedding vectors into a “ context vector ” ; the context vector is then passed to a classifier . In this setting , we prove a closed-form relationship between the topic word embedding norm and the inner product of its key and the query , referred to as the “ score ” , during gradient-descent training . It is particularly remarkable that this relationship holds irrespective of the classifier architecture or configuration . This relationship suggests the existence of a “ synergy ” in the amplification of the topic word score and its word embedding ; that is , the growths of the two quantities promote each other . This , in turn , allows the topic word embedding to stand out rapidly in the context vector during training . 
Moreover , when the model takes a fixed linear classifier , this relationship allows rigorous proofs of this “ mutual promotion ” phenomenon and the convergence of training to the topic words . Our theoretical results and their implications are corroborated by experiments performed on a synthetic dataset and real-world datasets . Additional insights are also obtained from these experiments . For example , low-capacity classifiers tend to give stronger training signals to the attention module . The “ mutual promotion ” effect implied by the discovered relationship can also exhibit itself as “ mutual suppression ” in the early training phase . Furthermore , in the real-world datasets , where perfect delimitation of topic and non-topic words does not exist , interesting training dynamics is observed . Due to length constraints , all proofs are presented in Appendix . 2 RELATED WORKS . Since 2019 , a series of works have been published to understand the working and behaviour of attention . One focus of these works pertains to understanding whether an attention mechanism can provide meaningful explanations ( Michel et al. , 2019 ; Voita et al. , 2019 ; Jain & Wallace , 2019 ; Wiegreffe & Pinter , 2019 ; Serrano & Smith , 2020 ; Vashishth et al. , 2020 ) . Most of these works are empirical in nature , for example , by analyzing the behaviours of a well-trained attention-based model ( Clark et al. , 2019 ) , or observing the impact of altering the output weights of the attention module or pruning a few heads ( Michel et al. , 2019 ; Voita et al. , 2019 ) , or a combination of them ( Jain & Wallace , 2019 ; Vashishth et al. , 2020 ) . Apart from acquiring insights from experiments , Brunner et al . ( 2019 ) and Hahn ( 2020 ) show theoretically that the self-attention blocks lacks identifiability , where multiple weight configurations may give equally good end predictions . The non-uniqueness of the attention weights therefore makes the architecture lack interpretability . 
As a fully connected neural network with infinite width can be seen as a Gaussian process ( Lee et al. , 2018 ) , a few works apply this perspective to understanding attention with infinite number of heads and infinite width of the network layers ( Yang , 2019 ; Hron et al. , 2020 ) . In this paper , we restrict our study to the more realist non-asymptotic regime . 3 PROBLEM SETUP . Learning Task To obtain insights into the training dynamics of attention models , we set up a simple topic classification task . Each input sentence contains m non-topic words and one topic word indicating its topic . Note that a topic may have multiple topic words , but a sentence is assumed to include only one of them . Assume that there are J topics that correspond to the mutually exclusive topic word sets T1 , T2 , · · · , TJ . Let T = ⋃J j=1 Tj be the set of all topic words . The non-topic words are drawn from a dictionary Θ , which are assumed not to contain any topic word . The training set Ψ consists of sentence-topic pairs , where each pair ( χ , y ) is generated by ( 1 ) randomly pick a topic y ∈ { 1 , 2 , · · · , J } ( 2 ) pick a topic word from set Ty and combine it with m words drawn uniformly at random from Θ to generate the sentence ( or the bag of words ) χ . In this task , one aims to develop a classifier from the training set that predicts the topic y for a random sentence χ generated in this way . We will consider the case that |Θ| > > |T| , which implies that a topic word appears much more frequently in the sentences than a non-topic word . Attention Model For this task , we consider a simple attention mechanism similar to the one proposed by Wang et al . ( 2016 ) . Each word w is associated with two parameters : an embedding νw ∈ Rd and a key κw ∈ Rd ′ . Based on a global query q ∈ Rd′ , the context vector of sentence χ is computed by ν̄ ( χ ) = ∑ w∈χ νw exp ( qTκw ) Z ( χ ) , where Z ( χ ) = ∑ w′∈χ exp ( q Tκw′ ) . 
Then ν̄ ( χ ) is fed into a classifier that predicts the sentence ’ s topic in terms of a distribution over all topics.1 Denote the loss function by l ( χ , y ) . Our upcoming analysis implies this attention model , although simple , may capture plenty of insight in understanding the training of more general attention models . Problem Statement Our objective is to investigate the training dynamics , under gradient descent , of this attention model . In particular , we wish to understand if there is an intrinsic mechanism that allows the attention model to discover the topic word and accelerates training . Moreover , we wish to investigate , beyond this setup , how the model is optimized when there is no clear delimitation between topic and non-topic words , as in real-world data . 1The condition that the attention layer directly attends to the word embeddings merely serves to simplify the analysis in Section 4 but this condition is not required for most results presented in Sections 4 and 5 . More discussions are given in Appendix A in this regard . 4 THEORETICAL ANALYSIS . It is common to fix some parameters when we train a model with limited resources . Also Lemma 1 . Assume q 6= 0 when initialized . Fixing it does not affect the attention block ’ s capacity . Thus , our upcoming discussion focuses on the case in which the query is fixed . Doing so also allows us to establish a closed-form expression connecting the word ’ s embedding and the inner product of its key and the query . In Appendix B , extra discussions and experimental results reveal that the trainability of the query does not affect the fundamental relationship we are about to present . For a topic word t , let Ψt denote the training samples involving it . Then , by gradient descent , ∆νt = τ |Ψ| ∑ ( χ , y ) ∈Ψt ∇ν̄ ( χ ) l ( χ , y ) exp ( qTκt ) Z ( χ ) ( 1 ) ∆κt = τ |Ψ| ∑ ( χ , y ) ∈Ψt q ( νt − ν̄ ( χ ) ) T ∇ν̄ ( χ ) l ( χ , y ) exp ( qTκt ) Z ( χ ) , ( 2 ) where τ denote the learning rate . 
As it will turn out , an important quantity in this setting is the inner product qT kw of query q and the key kw , which we denote by sw , and refer to it as the score of the word w. Denoting vw = ||q||2νw , η = τ ||q||2 , v̄ ( χ ) = ∑ w∈χ exp ( sw ) Z vw , and h ( v̄ ( χ ) ; y ) = ∇ν̄ ( χ ) l ( χ , y ) , for a topic word t , the dynamics simplifies to ∆vt = η |Ψ| ∑ ( χ , y ) ∈Ψt h ( v̄ ( χ ) ; y ) exp ( st ) Z ( χ ) ( 3 ) ∆st = η |Ψ| ∑ ( χ , y ) ∈Ψt ( vt − v̄ ( χ ) ) T h ( v̄ ( χ ) ; y ) exp ( st ) Z ( χ ) . ( 4 ) In the rest of the paper , whenever we refer to the embedding of word t , we actually mean vt not νt . Our analysis assumes the word embeddings are sampled i.i.d . from a distribution with mean zero and variance σ 2 d , where σ 2 is assumed close to zero . The word keys and the query are also sampled from zero mean distributions with a possibly different variance . We assume that this variance is so small that the initial word scores are approximately zero . This assumption of the initial configurations corresponds to the attention model starting as a word-averaging model , and allows us to investigate how the model deviates from this initial setting with training . We also assume the derivative h ( v̄ ( χ ) ; y ) of ` is Lipschitz continuous in v̄ ( χ ) throughout training . Further the assumption in Section 3 that the number of non-topic words |Θ| is much larger than the number of topic words |T| implies that with a sufficient number of training samples , the occurrence rate of a topic word is significantly higher than the non-topic ones . This then justifies the following assumption we will use throughout our analysis . Assumption 1 . The scores and the embeddings of the non-topic words are nearly unchanged compared to their counterparts for the topic words . Hence , our upcoming analysis will treat the scores and embeddings of the non-topic words as constants . Assumption 1 will be validated by experimental results presented in Section 5 . 
By selecting a sufficiently small η , we can take the gradient-descent updates in Eq ( 3 ) and Eq ( 4 ) to its continuous-time limit and get2 dvt dt = η |Ψ| ∑ ( χ , y ) ∈Ψt h ( v̄ ( χ ) ; y ) exp ( st ) Z ( χ ) ( 5 ) dst dt = η |Ψ| ∑ ( χ , y ) ∈Ψt ( vt − v̄ ( χ ) ) T h ( v̄ ( χ ) ; y ) exp ( st ) Z ( χ ) . ( 6 ) 2Reversely , Eq ( 3 ) is a discretized approximation of Eq ( 5 ) : vt ( t+1 ) −vt ( t ) = ∫ t+1 t dvt ( t ′ ) dt′ dt ′ ≈ 1· dvt ( t ) dt = ∆vt ( t ) . The approximation becomes accurate if vt ( t + 1 ) is close to vt ( t ) , which can be achieved by choosing a sufficiently small η . Likewise , Eq ( 4 ) is a discretized approximation of Eq ( 6 ) . We can then characterize the update of the score and the embedding of a topic word as a continuoustime dynamical system stated in Lemma 2 . The same technique has been used to analyze the training of neural networks in other contexts ( Saxe et al. , 2014 ; Greydanus et al. , 2019 ) . Lemma 2 . For sufficiently small η and σ2 , the score st and embedding vt of topic word t satisfy dvt dt = η|Ψt| |Ψ| 〈 h ( v̄ ( χ ) ; y ) exp ( st ) Z ( χ ) 〉 Ψt , ( 7 ) dst dt = [ ( vt − 〈v̄ ( χ \ t ) 〉Ψt ) T dvt dt ] 〈 exp ( st ) + Z ( χ \ t ) Z ( χ \ t ) 〉−1 Ψt , ( 8 ) where Z ( χ \ t ) = ∑w∈χ\ { t } exp ( sw ) , v̄ ( χ \ t ) = ∑w∈χ\ { t } vw exp ( sw ) Z ( χ\t ) , and 〈 · 〉Ψt denotes taking sample mean over the set Ψt . Eq ( 7 ) implies the speed of moving vt along the direction of 〈h ( v̄ ( χ ) ; y ) 〉Ψt is controlled by the attention weight exp ( st ) Z ( χ ) . Eq ( 8 ) shows that vt increases if and only if vt has a greater projection on 〈h ( v̄ ( χ ) ; y ) 〉Ψt than the weighted average of the non-topic word counterparts . Consider a simplified case where 〈h ( v̄ ( χ ) ; y ) 〉Ψt is fixed . Since the change of vt is much faster than the non-topic word counterparts , vt will have a larger projection on 〈h ( v̄ ( χ ) ; y ) 〉Ψt after a few epochs of training . 
Then st increases as well as its attention weight , which in turn speeds up the extension of the embedding vt . This observation reveals a mutual enhancement effect between the score increment and the embedding elongation . In fact such an effect exists in general , as stated in the theorem below , irrespective of whether 〈h ( v̄ ( χ ) ; y ) 〉Ψt is fixed . Theorem 1 . In the setting of Lemma 2 , from epoch t0 to t1 , the topic word score st and its embedding vt satisfy [ st ( t ) + exp ( st ( t ) ) 〈 1 Z ( χ \ t ) 〉 Ψt ] t1 t0 = [ 1 2 ||vt ( t ) − 〈v̄ ( χ \ t ) 〉Ψt || 2 2 ] t1 t0 . ( 9 ) Following from Lemma 2 , this theorem implies a positive relationship between the topic word score and the distance between vt and the non-topic word embedding average 〈v̄ ( χ \ t ) 〉Ψt . Remarkably this result makes no reference to 〈h ( v̄ ( χ ) ; y ) 〉Ψt , hence independent of it . This implies the identity in Eq ( 9 ) holds irrespective of the choice and setting of the classifier . Theorem 1 further implies a score and embedding norm ( “ SEN ” in short ) relationship for the topic words : Corollary 1 . In the context of Theorem 1 , by setting t0 = 0 and t1 = t , Eq ( 9 ) is reduced to ||vt ( t ) ||2 = √ 2 ( st ( t ) + exp st ( t ) m − 1 m ) , ( 10 ) The corollary indicates that ||vt ( t ) ||2 is monotonically increasing with st ( t ) . So , st increases if and only if the point vt departs from its initial location . That is , if the norm of the topic word embedding increases , it will be attended to . This result is independent of the configuration of all other network layers . Thus , if 〈h ( v̄ ( χ ) ; y ) 〉Ψt has a gradient field that pushes vt away from its original location , the topic word is expected to be attended to . This statement can be made precise , as in Theorem 2 , when the model uses a linear classifier . Theorem 2 . 
Assume the model has a fixed classifier in the form c ( v̄ ( χ ) ) = softmax ( UT v̄ ( χ ) ) , where the columns of U are linearly independent , and the model is trained using gradient descent with the cross-entropy loss . As training proceeds , the model will attend to the topic word in every input sentence and have its training loss approach zero . It is notable that the theorem holds broadly for any arbitrary fixed linear classifier ( subjective to the mild linear independence constraint of its parameter U ) . Additionally , we anticipate that this result holds for a much wider family of classifiers including trainable and even nonlinear ones . But rigorous proof appears difficult to obtain in such settings , and we will corroborate this claim in an experimental study in Section 5 . To sum up , in this section , we have shown two main results : ( a ) there is a closed-form positive relationship , the SEN relationship , between the topic word score and its embedding norm , which is independent of the configuration of the classifier . ( b ) the model , equipped with a fixed linear classifier stated in Theorem 2 , can be trained to have all topic words attended to . | This paper studies the dynamics of attention in a task of simplified topic modeling, over the course of training for a specific model, where the context vector is the sum over words in a sentence of their embedding weighted by the exponential of the dot-product their key embedding with a global query vector, normalized. Due to the simplification of the topic modeling problem (two null-intersect sets of words: topic vs. non-topic), they consider the embeddings of the non-topic words to be fixed over the course of training for their theoretical analysis. The applicability of the theoretical result is close to zero, and a somewhat known property (e.g. in word2vec, Mikolov et al. 2013). The experimental results include two parts. 
One is on a tiny synthetic dataset that matches the simplified topic-modeling problem and serves as an illustration. The other is on SST2 and SST5 (movie reviews and ratings, sentiment analysis), where the results are poor (obviously, as the model is simple), e.g., yielding 79.59% on SST while the SOTA is 97.4 and BERT-base is at 91.2. The analysis is interesting, but does not lead to new insights. | SP:58c220b21af74c85e23004a9bad82e4d0dd7333c |
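The SEN relationship, Eq. (10) in the paper text above, can be sanity-checked numerically. The sketch below assumes the reconstructed grouping $\|v_t\|_2 = \sqrt{2(s_t + \exp(s_t)/m - 1/m)}$ (an assumption, since the extracted equation is flattened) and verifies the claimed behavior: the norm is zero at score zero and strictly increasing in the score.

```python
import math

def sen_norm(score: float, m: int) -> float:
    """Embedding norm implied by the (reconstructed) SEN relationship,
    Eq. (10): ||v_t||_2 = sqrt(2 * (s_t + exp(s_t)/m - 1/m))."""
    return math.sqrt(2.0 * (score + math.exp(score) / m - 1.0 / m))

m = 50  # assumed number of non-topic words; any positive m behaves the same
scores = [0.0, 0.5, 1.0, 2.0, 4.0]
norms = [sen_norm(s, m) for s in scores]

# At s_t = 0 the embedding is at its initial location, so the norm is 0.
assert abs(norms[0]) < 1e-12
# The norm grows strictly with the score (monotone SEN relationship).
assert all(a < b for a, b in zip(norms, norms[1:]))
```

Monotonicity follows because the argument of the square root has derivative $1 + \exp(s_t)/m > 0$ in $s_t$, matching the corollary's statement that the score increases exactly when the embedding departs from its initial location.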
Sample-Efficient Automated Deep Reinforcement Learning | 1 INTRODUCTION . Deep reinforcement learning ( RL ) algorithms are often sensitive to the choice of internal hyperparameters ( Jaderberg et al. , 2017 ; Mahmood et al. , 2018 ) , and the hyperparameters of the neural network architecture ( Islam et al. , 2017 ; Henderson et al. , 2018 ) , hindering them from being applied out-of-the-box to new environments . Tuning hyperparameters of RL algorithms can quickly become very expensive , both in terms of high computational costs and a large number of required environment interactions . Especially in real-world applications , sample efficiency is crucial ( Lee et al. , 2019 ) . Hyperparameter optimization ( HPO ; Snoek et al. , 2012 ; Feurer & Hutter , 2019 ) approaches often treat the algorithm under optimization as a black-box , which in the setting of RL requires a full training run every time a configuration is evaluated . This leads to a suboptimal sample efficiency in terms of environment interactions . Another pitfall for HPO is the non-stationarity of the RL problem . Hyperparameter settings optimal at the beginning of the learning phase can become unfavorable or even harmful in later stages ( François-Lavet et al. , 2015 ) . This issue can be addressed through dynamic configuration , either through self adaptation ( Tokic & Palm , 2011 ; François-Lavet et al. , 2015 ; Tokic , 2010 ) or through external adaptation as in population-based training ( PBT ; Jaderberg et al. , 2017 ) . However , current dynamic configuration approaches substantially increase the number of environment interactions . Furthermore , this prior work does not consider adapting the architecture . In this work , we introduce a simple meta-optimization framework for Sample-Efficient Automated RL ( SEARL ) to address all three challenges : sample-efficient HPO , dynamic configuration , and the dynamic modification of the neural architecture . 
The foundation of our approach is a joint optimization of an off-policy RL agent and its hyperparameters using an evolutionary approach . To reduce the amount of required environment interactions , we use a shared replay memory across the population of different RL agents . This allows agents to learn better policies due to the diverse collection of experience and enables us to perform AutoRL at practically the same amount of environment interactions as training a single configuration . Further , SEARL preserves the benefits of dynamic configuration present in PBT to enable online HPO and discovers hyperparameter schedules rather than a single static configuration . Our approach uses evolvable neural networks that preserve trained network parameters while adapting their architecture . We emphasize that SEARL is simple to use and allows efficient AutoRL for any off-policy deep RL algorithm . In a case study optimizing the popular TD3 algorithm ( Fujimoto et al. , 2018 ) in the MuJoCo benchmark suite we demonstrate the benefits of our framework and provide extensive ablation and analytic experiments . We show a 10× improvement in sample efficiency of the meta-optimization compared to random search and PBT . We also demonstrate the generalization capabilities of our approach by meta-optimizing the established DQN ( Mnih et al. , 2015 ) algorithm for the Atari benchmark . We provide an open-source implementation of SEARL.1 Our contributions are : • We introduce an AutoRL framework for off-policy RL which enables : ( i ) Sample-efficient HPO while training a population of RL agents using a shared replay memory . ( ii ) Dynamic optimization of hyperparameters to adjust to different training phases ; ( iii ) Online neural architecture search in the context of gradient-based deep RL ; • We propose a fair evaluation protocol to compare AutoRL and HPO in RL , taking into account the actual cost in terms of environment interactions . 
• We demonstrate the benefits of SEARL in a case study , reducing the number of environment interactions by up to an order of magnitude . 2 RELATED WORK . Advanced experience collection : Evolutionary RL ( ERL ) introduced by Khadka & Tumer ( 2018 ) and successors PDERL ( Bodnar et al. , 2020 ) and CERL ( Khadka et al. , 2019 ) combine Actor-Critic RL algorithms with genetic algorithms to evolve a small population of agents . This line of work mutates policies to increase the diversity of collected sample trajectories . The experience is stored in a shared replay memory and used to train an Actor-Critic learner with fixed network architectures using DDPG/TD3 while periodically adding the trained actor to a separate population of evolved actors . CERL extends this approach by using a whole population of learners with varying discount rates . However , this line of work aims to increase a single configuration ’ s performance , while our work optimizes hyperparameters and the neural architecture while training multiple agents . SEARL also benefits from a diverse set of mutated actors collecting experience in a shared replay memory . Schmitt et al . ( 2019 ) mix on-policy experience with shared experiences across concurrent hyperparameter sweeps to take advantage of parallel exploration . However , this work neither tackles dynamic configuration schedules nor architecture adaptation . ApeX/IMPALA : Resource utilization in the RL setting can be improved using multiple actors in a distributed setup and decoupling the learner from the actor . Horgan et al . ( 2018 ) extends a prioritized replay memory to a distributed setting ( Ape-X ) to scale experience collection for a replay memory used by a single trainer . In IMPALA ( Espeholt et al. , 2018 ) , multiple rollout actors asynchronously send their collected trajectories to a central learner through a queue . 
To correct the policy lag that this distributed setup introduces, IMPALA leverages the proposed V-trace algorithm for the central learner. These works aim at collecting large amounts of experience to benefit the learner, but they do not explore the space of hyperparameter configurations. In contrast, the presented work aims to reduce the number of environment interactions to perform efficient AutoRL. Neural architecture search with reinforcement learning: The work of Zoph & Le (2016) on RL for neural architecture search (NAS) is an interesting counterpart to our work at the intersection of RL and NAS. Zoph & Le (2016) employ RL for NAS to search for better-performing architectures, whereas we employ NAS for RL to make use of better network architectures. AutoRL: Within the framework of AutoRL, the joint hyperparameter optimization and architecture search problem is addressed as a two-stage optimization problem in Chiang et al. (2019), first shaping the reward function and optimizing for the network architecture afterward. Similarly, Runge et al. (2019) propose to jointly optimize algorithm hyperparameters and network architectures by searching over the joint space. (Footnote 1: Please find the source code on GitHub: github.com/automl/SEARL.) However, they treat the RL training as a black box and focus neither on online optimization nor on sample efficiency. In contrast to black-box optimization, we jointly train the agent and dynamically optimize hyperparameters. Faust et al. (2019) use an evolutionary approach to optimize a parametrized reward function, based on which fixed network topologies are trained using standard RL algorithms, treating the RL algorithm together with a sampled reward function as a black box. In this work, we do not use parametrized reward functions but instead directly optimize the environment reward.
The main difference to this line of work is sample efficiency: while they train and evaluate thousands of configurations from scratch, we dynamically adapt the architecture and RL-algorithm hyperparameters online, thereby drastically reducing the total number of interactions required for the algorithm to achieve good performance on a given task. We propose an evaluation protocol taking the aspect of sample efficiency in RL into account in Section 4.3. Self-Tuning Actor-Critic: Zahavy et al. (2020) propose to meta-optimize a subset of differentiable hyperparameters in an outer loop using metagradients. This, however, does not extend to non-differentiable hyperparameters and thus does not allow for online tuning of, e.g., the network architecture. As a result, such hyperparameters cannot be meta-optimized in their framework. HOOF: Paul et al. (2019) propose sample-efficient hyperparameter tuning for policy-gradient methods by greedily maximizing the value of a set of candidate policies at each iteration. In contrast to our work, HOOF performs HPO for on-policy algorithms, which do not achieve comparable performance on continuous-control tasks while requiring more interactions, and it does not consider architecture optimization. Population-Based Training (PBT): PBT (Jaderberg et al., 2017) is a widely used dynamic and asynchronous optimization algorithm. This approach adapts a population of different hyperparameter settings online and in parallel during training, periodically replacing inferior members of the population with more promising members. Similarly to SEARL, PBT can jointly optimize the RL agent and its hyperparameters online, making it the most closely related work. Recent work has improved upon PBT by using more advanced hyperparameter selection techniques (Parker-Holder et al., 2020).
In contrast to SEARL, PBT and its follow-ups do not optimize the architecture and, more importantly, do not share experience within the population. As our experiments show, these differences let SEARL achieve speedups of up to 10x in terms of sample efficiency. 3 SAMPLE-EFFICIENT AUTORL. In this section, we introduce a Sample-Efficient framework for Automated Reinforcement Learning (SEARL) based on an evolutionary algorithm acting on hyperparameters and gradient-based training using a shared experience replay. First, we discuss relevant background describing SEARL's building blocks before giving an overview of the proposed AutoRL framework, followed by a detailed description of each individual component. 3.1 BACKGROUND. Evolvable neural networks: Using evolutionary algorithms to design neural networks, called neuroevolution (Floreano et al., 2008; Stanley et al., 2019), is a long-standing approach. Some approaches optimize only the network weights (Such et al., 2017), while others optimize architectures and weights jointly (Zhang & Mühlenbein, 1993; Stanley & Miikkulainen, 2002). To evolve the neural networks in SEARL, we encode the RL agents' neural architectures by the number of layers and the nodes per layer, similar to Miikkulainen et al. (2019). When adding a new node to a layer, existing parameters are copied, and newly added parameters are initialized with a small magnitude. This is a common technique to preserve already-trained network weights (Wei et al., 2016). Shared experience replay: Replaying collected experiences (Lin, 1992; Mnih et al., 2015) smooths the training distribution over many past rollouts. The experience replay acts as a store for experience collected by agents interacting with the environment. Deep RL algorithms can sample from this storage to calculate gradient-based updates for the neural networks.
It has been used and extended in various flavors, often to make use of diverse experience or experience collected in parallel (Horgan et al., 2018; Khadka & Tumer, 2018; Bodnar et al., 2020; Khadka et al., 2019). SEARL employs a shared experience replay, which stores the diverse trajectories of all differently configured RL agents in the population so that each individual can benefit from the collective experience during training. 3.2 FRAMEWORK. In SEARL, each individual in our population represents a deep reinforcement learning agent, consisting of a policy and a value network, together with the hyperparameters of the RL training algorithm, including the neural network architecture. The training and meta-optimization of these individuals take place in an evolutionary loop that consists of five basic phases (initialization, evaluation, selection, mutation, and training), as shown in Figure 1. During one epoch of this evolutionary loop, all properties of an individual can change through the different mutation and training operators. This happens independently for each individual and can be processed in parallel. A novel feature of our approach is that the rollouts of each individual are not only used for evaluation and selection purposes but also serve as experience for off-policy training of all agents and are stored in a shared replay memory. When changing the architecture, we follow the approach of Lamarckian evolution (Ross, 1999), where the weights of an agent updated during training are not only used in the evaluation phase but are preserved for the next generation. In the following, we describe the different phases of our evolutionary loop in detail; we refer the reader to Appendix B for detailed pseudocode of the algorithm. Initialization: SEARL uses a population popg of N individuals, each consisting of an RL agent Ai and its hyperparameter settings θi.
Thus, we can represent popg at each generation g as follows: popg = ({A1, θ1}g, {A2, θ2}g, ..., {AN, θN}g) (1) The individual's hyperparameter setting θi is composed of architecture hyperparameters, such as the number of layers or the layer sizes, and algorithm hyperparameters, such as the learning rate. To arrive at a minimum viable neural network size, we start with a reasonably small neural network architecture and enable its growth through mutation operators. Other hyperparameters are set to some initial value, as they are subsequently adapted in the evolutionary loop. In our experiments, we observed that using random initialization of hyperparameters for SEARL did not lead to large performance differences (see Appendix E). This suggests that very little domain-specific knowledge is required to use SEARL effectively. Evaluation: After initialization and after each training phase, we evaluate each individual in the population using the RL agent Ai for at least one episode or a minimum number of steps in the environment. This ensures a minimum amount of new experience from each agent and keeps the stored trajectories in the shared replay memory diverse. The evaluation can be performed in parallel since each agent acts independently. We use the mean reward of the individual's evaluation as its fitness value fi. Selection: We use tournament selection with elitism (Miller & Goldberg, 1995) for each prospective slot in the population of the new generation. For each tournament, k individuals are randomly chosen from the current population popg, and the individual with the largest fitness value fi is selected for the slot. We repeat this tournament N − 1 times to fill all slots. The size k allows us to control how greedily the selection mechanism picks candidates for the new population.
We reserve one spot in the new population for the current population's best-performing individual, thus preserving it across generations. Mutation: To explore the space of network weights and hyperparameters, we use different single-parent mutation operators. We apply one of the following operators, chosen uniformly at random, to each member: (1) mutation of the weights of the neural networks by adding Gaussian noise; (2) change of the activation function of the neural network; (3) change of the neural network size, by either adding additional nodes to a given layer or adding a new layer altogether, while reusing the trained weights and initializing new weights randomly; (4) change of algorithm hyperparameters; (5) no operation. We refer the reader to Appendix A for more details. Training: Using each individual's current hyperparameters, we train it by sampling from the shared replay memory. Each individual is trained for as many steps as frames have been generated in the evaluation phase, as is common practice in deep RL. Optionally, the training time could be reduced by using only a fraction j of the steps to adapt to computational constraints. Since the neural network size could be subject to mutation between two training phases, the target network of the RL algorithm needs to be adapted too. Furthermore, the optimizer state associated with individual network weights cannot remain the same across generations. We address these issues by creating a new target network and re-initializing the optimizer at the beginning of each training phase. Our experiments show that this re-creation and re-initialization does not harm the performance of the considered RL algorithm. Please find more details in Appendix C. Like the evaluation phase, training can be performed in parallel since every individual is trained independently.
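The evaluation–selection–mutation–training epoch described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `evaluate`, `mutate`, and `train` are placeholders the caller supplies, and a real SEARL epoch would train actual RL agents from the shared replay memory.

```python
import random

def tournament_select(pop, fitness, k):
    """Pick the fittest of k randomly chosen individuals (tournament selection)."""
    contenders = random.sample(range(len(pop)), k)
    return max(contenders, key=lambda i: fitness[i])

def evolutionary_epoch(pop, evaluate, mutate, train, shared_memory, k=3):
    # Evaluation: rollouts yield a fitness AND fresh experience for the shared buffer.
    fitness = []
    for agent in pop:
        reward, transitions = evaluate(agent)
        fitness.append(reward)
        shared_memory.extend(transitions)
    # Selection: one elite spot for the best individual; N-1 slots via tournaments.
    elite_idx = max(range(len(pop)), key=lambda i: fitness[i])
    new_pop = [pop[elite_idx]] + [pop[tournament_select(pop, fitness, k)]
                                  for _ in range(len(pop) - 1)]
    # Mutation for the non-elite members (in practice, mutate on copies so a
    # parent selected twice does not get mutated in place), then off-policy
    # training of every member from the shared replay memory.
    new_pop = [new_pop[0]] + [mutate(a) for a in new_pop[1:]]
    return [train(a, shared_memory) for a in new_pop]
```

Whether the elite is exempt from mutation is an assumption here; the paper only states that one operator (possibly a no-op) is applied to each member and that the best individual's spot is reserved.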
Shared experience replay: A shared experience replay memory collects trajectories from all evaluations and provides a diverse set of samples for each agent during the training phase. This helps to improve training speed and reduces the potential for overfitting. | Motivated by the sensitivity of RL algorithms to the choice of hyperparameters and the data-efficiency issue in training RL agents, the authors propose a population-based automated RL framework which can be applied to any off-policy RL algorithm. In the framework, they optimise hyperparameters together with neural architectures. The authors use TD3 on MuJoCo environments as a showcase to demonstrate the advantages of the proposed method. They reduced the number of environment interactions significantly compared to baselines like random search and a modified population-based training algorithm. | SP:d58e5f01c1c68e9c2fca423be935d790ef5346ee |
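The evolvable-network mutation described in the SEARL text above (when widening a layer, existing parameters are copied and new ones are initialized with a small magnitude, preserving the trained function) can be sketched with plain NumPy. The layer sizes, the tanh nonlinearity, and the `1e-3` initialization scale are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def widen_layer(W1, b1, W2, n_new, init_scale=1e-3, rng=None):
    """Add n_new hidden units to a layer: existing weights are kept verbatim,
    and new incoming/outgoing weights start near zero, so the network's
    input-output function is almost unchanged after the mutation."""
    rng = np.random.default_rng(rng)
    # New rows of incoming weights and new bias entries for the added units.
    W1_new = np.vstack([W1, init_scale * rng.standard_normal((n_new, W1.shape[1]))])
    b1_new = np.concatenate([b1, init_scale * rng.standard_normal(n_new)])
    # New columns of outgoing weights in the next layer for the added units.
    W2_new = np.hstack([W2, init_scale * rng.standard_normal((W2.shape[0], n_new))])
    return W1_new, b1_new, W2_new

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)   # layer: 4 -> 8
W2 = rng.standard_normal((2, 8))                               # layer: 8 -> 2
x = rng.standard_normal(4)

before = W2 @ np.tanh(W1 @ x + b1)
W1w, b1w, W2w = widen_layer(W1, b1, W2, n_new=3, rng=1)
after = W2w @ np.tanh(W1w @ x + b1w)

# The widened network computes (almost) the same function as before.
assert np.allclose(before, after, atol=1e-2)
```

This is the Lamarckian aspect of the framework: the mutated child inherits the parent's trained weights rather than restarting from a random initialization.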
Sample-Efficient Automated Deep Reinforcement Learning | 1 INTRODUCTION . Deep reinforcement learning ( RL ) algorithms are often sensitive to the choice of internal hyperparameters ( Jaderberg et al. , 2017 ; Mahmood et al. , 2018 ) , and the hyperparameters of the neural network architecture ( Islam et al. , 2017 ; Henderson et al. , 2018 ) , hindering them from being applied out-of-the-box to new environments . Tuning hyperparameters of RL algorithms can quickly become very expensive , both in terms of high computational costs and a large number of required environment interactions . Especially in real-world applications , sample efficiency is crucial ( Lee et al. , 2019 ) . Hyperparameter optimization ( HPO ; Snoek et al. , 2012 ; Feurer & Hutter , 2019 ) approaches often treat the algorithm under optimization as a black-box , which in the setting of RL requires a full training run every time a configuration is evaluated . This leads to a suboptimal sample efficiency in terms of environment interactions . Another pitfall for HPO is the non-stationarity of the RL problem . Hyperparameter settings optimal at the beginning of the learning phase can become unfavorable or even harmful in later stages ( François-Lavet et al. , 2015 ) . This issue can be addressed through dynamic configuration , either through self adaptation ( Tokic & Palm , 2011 ; François-Lavet et al. , 2015 ; Tokic , 2010 ) or through external adaptation as in population-based training ( PBT ; Jaderberg et al. , 2017 ) . However , current dynamic configuration approaches substantially increase the number of environment interactions . Furthermore , this prior work does not consider adapting the architecture . In this work , we introduce a simple meta-optimization framework for Sample-Efficient Automated RL ( SEARL ) to address all three challenges : sample-efficient HPO , dynamic configuration , and the dynamic modification of the neural architecture . 
The foundation of our approach is a joint optimization of an off-policy RL agent and its hyperparameters using an evolutionary approach . To reduce the amount of required environment interactions , we use a shared replay memory across the population of different RL agents . This allows agents to learn better policies due to the diverse collection of experience and enables us to perform AutoRL at practically the same amount of environment interactions as training a single configuration . Further , SEARL preserves the benefits of dynamic configuration present in PBT to enable online HPO and discovers hyperparameter schedules rather than a single static configuration . Our approach uses evolvable neural networks that preserve trained network parameters while adapting their architecture . We emphasize that SEARL is simple to use and allows efficient AutoRL for any off-policy deep RL algorithm . In a case study optimizing the popular TD3 algorithm ( Fujimoto et al. , 2018 ) in the MuJoCo benchmark suite we demonstrate the benefits of our framework and provide extensive ablation and analytic experiments . We show a 10× improvement in sample efficiency of the meta-optimization compared to random search and PBT . We also demonstrate the generalization capabilities of our approach by meta-optimizing the established DQN ( Mnih et al. , 2015 ) algorithm for the Atari benchmark . We provide an open-source implementation of SEARL.1 Our contributions are : • We introduce an AutoRL framework for off-policy RL which enables : ( i ) Sample-efficient HPO while training a population of RL agents using a shared replay memory . ( ii ) Dynamic optimization of hyperparameters to adjust to different training phases ; ( iii ) Online neural architecture search in the context of gradient-based deep RL ; • We propose a fair evaluation protocol to compare AutoRL and HPO in RL , taking into account the actual cost in terms of environment interactions . 
• We demonstrate the benefits of SEARL in a case study , reducing the number of environment interactions by up to an order of magnitude . 2 RELATED WORK . Advanced experience collection : Evolutionary RL ( ERL ) introduced by Khadka & Tumer ( 2018 ) and successors PDERL ( Bodnar et al. , 2020 ) and CERL ( Khadka et al. , 2019 ) combine Actor-Critic RL algorithms with genetic algorithms to evolve a small population of agents . This line of work mutates policies to increase the diversity of collected sample trajectories . The experience is stored in a shared replay memory and used to train an Actor-Critic learner with fixed network architectures using DDPG/TD3 while periodically adding the trained actor to a separate population of evolved actors . CERL extends this approach by using a whole population of learners with varying discount rates . However , this line of work aims to increase a single configuration ’ s performance , while our work optimizes hyperparameters and the neural architecture while training multiple agents . SEARL also benefits from a diverse set of mutated actors collecting experience in a shared replay memory . Schmitt et al . ( 2019 ) mix on-policy experience with shared experiences across concurrent hyperparameter sweeps to take advantage of parallel exploration . However , this work neither tackles dynamic configuration schedules nor architecture adaptation . ApeX/IMPALA : Resource utilization in the RL setting can be improved using multiple actors in a distributed setup and decoupling the learner from the actor . Horgan et al . ( 2018 ) extends a prioritized replay memory to a distributed setting ( Ape-X ) to scale experience collection for a replay memory used by a single trainer . In IMPALA ( Espeholt et al. , 2018 ) , multiple rollout actors asynchronously send their collected trajectories to a central learner through a queue . 
To correct the policy lag that this distributed setup introduces, IMPALA leverages the proposed V-trace algorithm for the central learner. These works aim at collecting large amounts of experience to benefit the learner, but they do not explore the space of hyperparameter configurations. In contrast, the presented work aims to reduce the number of environment interactions to perform efficient AutoRL. Neural architecture search with reinforcement learning: The work of Zoph & Le (2016) on RL for neural architecture search (NAS) is an interesting counterpart to our work at the intersection of RL and NAS. Zoph & Le (2016) employ RL for NAS to search for better-performing architectures, whereas we employ NAS for RL to make use of better network architectures. AutoRL: Within the framework of AutoRL, the joint hyperparameter optimization and architecture search problem is addressed as a two-stage optimization problem in Chiang et al. (2019), first shaping the reward function and optimizing for the network architecture afterward. Similarly, Runge et al. (2019) propose to jointly optimize algorithm hyperparameters and network architectures by searching over the joint space. (Footnote 1: Please find the source code on GitHub: github.com/automl/SEARL.) However, they treat the RL training as a black box and focus neither on online optimization nor on sample efficiency. In contrast to black-box optimization, we jointly train the agent and dynamically optimize hyperparameters. Faust et al. (2019) use an evolutionary approach to optimize a parametrized reward function, based on which fixed network topologies are trained using standard RL algorithms, treating the RL algorithm together with a sampled reward function as a black box. In this work, we do not use parametrized reward functions but instead directly optimize the environment reward.
The main difference to this line of work is sample efficiency: while they train and evaluate thousands of configurations from scratch, we dynamically adapt the architecture and RL-algorithm hyperparameters online, thereby drastically reducing the total number of interactions required for the algorithm to achieve good performance on a given task. We propose an evaluation protocol taking the aspect of sample efficiency in RL into account in Section 4.3. Self-Tuning Actor-Critic: Zahavy et al. (2020) propose to meta-optimize a subset of differentiable hyperparameters in an outer loop using metagradients. This, however, does not extend to non-differentiable hyperparameters and thus does not allow for online tuning of, e.g., the network architecture. As a result, such hyperparameters cannot be meta-optimized in their framework. HOOF: Paul et al. (2019) propose sample-efficient hyperparameter tuning for policy-gradient methods by greedily maximizing the value of a set of candidate policies at each iteration. In contrast to our work, HOOF performs HPO for on-policy algorithms, which do not achieve comparable performance on continuous-control tasks while requiring more interactions, and it does not consider architecture optimization. Population-Based Training (PBT): PBT (Jaderberg et al., 2017) is a widely used dynamic and asynchronous optimization algorithm. This approach adapts a population of different hyperparameter settings online and in parallel during training, periodically replacing inferior members of the population with more promising members. Similarly to SEARL, PBT can jointly optimize the RL agent and its hyperparameters online, making it the most closely related work. Recent work has improved upon PBT by using more advanced hyperparameter selection techniques (Parker-Holder et al., 2020).
In contrast to SEARL, PBT and its follow-ups do not optimize the architecture and, more importantly, do not share experience within the population. As our experiments show, these differences let SEARL achieve speedups of up to 10x in terms of sample efficiency. 3 SAMPLE-EFFICIENT AUTORL. In this section, we introduce a Sample-Efficient framework for Automated Reinforcement Learning (SEARL) based on an evolutionary algorithm acting on hyperparameters and gradient-based training using a shared experience replay. First, we discuss relevant background describing SEARL's building blocks before giving an overview of the proposed AutoRL framework, followed by a detailed description of each individual component. 3.1 BACKGROUND. Evolvable neural networks: Using evolutionary algorithms to design neural networks, called neuroevolution (Floreano et al., 2008; Stanley et al., 2019), is a long-standing approach. Some approaches optimize only the network weights (Such et al., 2017), while others optimize architectures and weights jointly (Zhang & Mühlenbein, 1993; Stanley & Miikkulainen, 2002). To evolve the neural networks in SEARL, we encode the RL agents' neural architectures by the number of layers and the nodes per layer, similar to Miikkulainen et al. (2019). When adding a new node to a layer, existing parameters are copied, and newly added parameters are initialized with a small magnitude. This is a common technique to preserve already-trained network weights (Wei et al., 2016). Shared experience replay: Replaying collected experiences (Lin, 1992; Mnih et al., 2015) smooths the training distribution over many past rollouts. The experience replay acts as a store for experience collected by agents interacting with the environment. Deep RL algorithms can sample from this storage to calculate gradient-based updates for the neural networks.
It has been used and extended in various flavors, often to make use of diverse experience or experience collected in parallel (Horgan et al., 2018; Khadka & Tumer, 2018; Bodnar et al., 2020; Khadka et al., 2019). SEARL employs a shared experience replay, which stores the diverse trajectories of all differently configured RL agents in the population so that each individual can benefit from the collective experience during training. 3.2 FRAMEWORK. In SEARL, each individual in our population represents a deep reinforcement learning agent, consisting of a policy and a value network, together with the hyperparameters of the RL training algorithm, including the neural network architecture. The training and meta-optimization of these individuals take place in an evolutionary loop that consists of five basic phases (initialization, evaluation, selection, mutation, and training), as shown in Figure 1. During one epoch of this evolutionary loop, all properties of an individual can change through the different mutation and training operators. This happens independently for each individual and can be processed in parallel. A novel feature of our approach is that the rollouts of each individual are not only used for evaluation and selection purposes but also serve as experience for off-policy training of all agents and are stored in a shared replay memory. When changing the architecture, we follow the approach of Lamarckian evolution (Ross, 1999), where the weights of an agent updated during training are not only used in the evaluation phase but are preserved for the next generation. In the following, we describe the different phases of our evolutionary loop in detail; we refer the reader to Appendix B for detailed pseudocode of the algorithm. Initialization: SEARL uses a population popg of N individuals, each consisting of an RL agent Ai and its hyperparameter settings θi.
Thus, we can represent popg at each generation g as follows: popg = ({A1, θ1}g, {A2, θ2}g, ..., {AN, θN}g) (1) The individual's hyperparameter setting θi is composed of architecture hyperparameters, such as the number of layers or the layer sizes, and algorithm hyperparameters, such as the learning rate. To arrive at a minimum viable neural network size, we start with a reasonably small neural network architecture and enable its growth through mutation operators. Other hyperparameters are set to some initial value, as they are subsequently adapted in the evolutionary loop. In our experiments, we observed that using random initialization of hyperparameters for SEARL did not lead to large performance differences (see Appendix E). This suggests that very little domain-specific knowledge is required to use SEARL effectively. Evaluation: After initialization and after each training phase, we evaluate each individual in the population using the RL agent Ai for at least one episode or a minimum number of steps in the environment. This ensures a minimum amount of new experience from each agent and keeps the stored trajectories in the shared replay memory diverse. The evaluation can be performed in parallel since each agent acts independently. We use the mean reward of the individual's evaluation as its fitness value fi. Selection: We use tournament selection with elitism (Miller & Goldberg, 1995) for each prospective slot in the population of the new generation. For each tournament, k individuals are randomly chosen from the current population popg, and the individual with the largest fitness value fi is selected for the slot. We repeat this tournament N − 1 times to fill all slots. The size k allows us to control how greedily the selection mechanism picks candidates for the new population.
We reserve one spot in the new population for the current population ’ s best-performing individual , thus preserving it across generations . Mutation : To explore the space of network weights and hyperparameters , we use different single-parent mutation operators . We apply one of the following operators uniformly at random to each member : ( 1 ) Mutation of the weights of the neural networks by adding Gaussian noise . ( 2 ) Change of the activation function of the neural network . ( 3 ) Change of the neural network size by either adding additional nodes to a given layer or adding a new layer altogether while reusing the trained weights and initializing new weights randomly . ( 4 ) Change of algorithm hyperparameters . ( 5 ) No operation . We refer the reader to Appendix A for more details . Training : Using each individual ’ s current hyperparameters , we train it by sampling from the shared replay memory . Each individual is trained for as many steps as frames have been generated in the evaluation phase , as is common practice in deep RL . Optionally , the training time could be reduced by using only a fraction j of the steps to adapt to computational constraints . Since the neural network size could be subject to mutation between two training phases , the target network of the RL algorithm needs to be adapted too . Furthermore , the optimizer weight-parameters connected to individual network weights cannot remain the same across generations . We address these issues by creating a new target network and re-initializing the optimizer at the beginning of each training phase . Our experiments show that this re-creation and re-initialization does not harm the performance of the considered RL algorithm . Please find more details in Appendix C. Like the evaluation phase , the training can be performed in parallel since every individual is trained independently .
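The five single-parent mutation operators described above can be sketched as a uniform random choice over operator functions . The operator bodies below are toy stand-ins for illustration , not the paper ' s actual code :

```python
import random

# Illustrative sketch of the five single-parent mutation operators.
def mutate(theta, weights, rng):
    op = rng.choice(["noise", "activation", "architecture", "hyperparam", "noop"])
    if op == "noise":
        # (1) perturb network weights with Gaussian noise
        weights = [w + rng.gauss(0.0, 0.1) for w in weights]
    elif op == "activation":
        # (2) change the activation function
        theta["activation"] = rng.choice(["relu", "tanh", "elu"])
    elif op == "architecture":
        # (3) grow the network (here: widen by appending nodes to a layer spec)
        theta["hidden_sizes"].append(32)
    elif op == "hyperparam":
        # (4) perturb an algorithm hyperparameter
        theta["lr"] *= rng.choice([0.5, 2.0])
    # (5) "noop" changes nothing
    return theta, weights

# toy usage over ten generations
rng = random.Random(0)
theta = {"activation": "relu", "hidden_sizes": [32, 32], "lr": 1e-3}
weights = [0.5, -0.2]
for _ in range(10):
    theta, weights = mutate(theta, weights, rng)
```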
Shared experience replay : A shared experience replay memory collects trajectories from all evaluations and provides a diverse set of samples for each agent during the training phase . This helps to improve training speed and reduces the potential of over-fitting . | This paper propose a population-based AutoRL framework for hyperparameter optimization of off-policy RL algorithms. The framework optimizes both the hyperparameters and also the neural architecture in a one-shot manner, e.g., search and train at the same time. A shared experience replay buffer is used across the population, which as demonstrated in the experiments, substantially increase the sample efficiency compared to PBT and random search. | SP:d58e5f01c1c68e9c2fca423be935d790ef5346ee |
Sample-Efficient Automated Deep Reinforcement Learning | 1 INTRODUCTION . Deep reinforcement learning ( RL ) algorithms are often sensitive to the choice of internal hyperparameters ( Jaderberg et al. , 2017 ; Mahmood et al. , 2018 ) , and the hyperparameters of the neural network architecture ( Islam et al. , 2017 ; Henderson et al. , 2018 ) , hindering them from being applied out-of-the-box to new environments . Tuning hyperparameters of RL algorithms can quickly become very expensive , both in terms of high computational costs and a large number of required environment interactions . Especially in real-world applications , sample efficiency is crucial ( Lee et al. , 2019 ) . Hyperparameter optimization ( HPO ; Snoek et al. , 2012 ; Feurer & Hutter , 2019 ) approaches often treat the algorithm under optimization as a black-box , which in the setting of RL requires a full training run every time a configuration is evaluated . This leads to a suboptimal sample efficiency in terms of environment interactions . Another pitfall for HPO is the non-stationarity of the RL problem . Hyperparameter settings optimal at the beginning of the learning phase can become unfavorable or even harmful in later stages ( François-Lavet et al. , 2015 ) . This issue can be addressed through dynamic configuration , either through self adaptation ( Tokic & Palm , 2011 ; François-Lavet et al. , 2015 ; Tokic , 2010 ) or through external adaptation as in population-based training ( PBT ; Jaderberg et al. , 2017 ) . However , current dynamic configuration approaches substantially increase the number of environment interactions . Furthermore , this prior work does not consider adapting the architecture . In this work , we introduce a simple meta-optimization framework for Sample-Efficient Automated RL ( SEARL ) to address all three challenges : sample-efficient HPO , dynamic configuration , and the dynamic modification of the neural architecture . 
The foundation of our approach is a joint optimization of an off-policy RL agent and its hyperparameters using an evolutionary approach . To reduce the amount of required environment interactions , we use a shared replay memory across the population of different RL agents . This allows agents to learn better policies due to the diverse collection of experience and enables us to perform AutoRL at practically the same amount of environment interactions as training a single configuration . Further , SEARL preserves the benefits of dynamic configuration present in PBT to enable online HPO and discovers hyperparameter schedules rather than a single static configuration . Our approach uses evolvable neural networks that preserve trained network parameters while adapting their architecture . We emphasize that SEARL is simple to use and allows efficient AutoRL for any off-policy deep RL algorithm . In a case study optimizing the popular TD3 algorithm ( Fujimoto et al. , 2018 ) in the MuJoCo benchmark suite we demonstrate the benefits of our framework and provide extensive ablation and analytic experiments . We show a 10× improvement in sample efficiency of the meta-optimization compared to random search and PBT . We also demonstrate the generalization capabilities of our approach by meta-optimizing the established DQN ( Mnih et al. , 2015 ) algorithm for the Atari benchmark . We provide an open-source implementation of SEARL.1 Our contributions are : • We introduce an AutoRL framework for off-policy RL which enables : ( i ) Sample-efficient HPO while training a population of RL agents using a shared replay memory . ( ii ) Dynamic optimization of hyperparameters to adjust to different training phases ; ( iii ) Online neural architecture search in the context of gradient-based deep RL ; • We propose a fair evaluation protocol to compare AutoRL and HPO in RL , taking into account the actual cost in terms of environment interactions . 
• We demonstrate the benefits of SEARL in a case study , reducing the number of environment interactions by up to an order of magnitude . 2 RELATED WORK . Advanced experience collection : Evolutionary RL ( ERL ) introduced by Khadka & Tumer ( 2018 ) and successors PDERL ( Bodnar et al. , 2020 ) and CERL ( Khadka et al. , 2019 ) combine Actor-Critic RL algorithms with genetic algorithms to evolve a small population of agents . This line of work mutates policies to increase the diversity of collected sample trajectories . The experience is stored in a shared replay memory and used to train an Actor-Critic learner with fixed network architectures using DDPG/TD3 while periodically adding the trained actor to a separate population of evolved actors . CERL extends this approach by using a whole population of learners with varying discount rates . However , this line of work aims to increase a single configuration ’ s performance , while our work optimizes hyperparameters and the neural architecture while training multiple agents . SEARL also benefits from a diverse set of mutated actors collecting experience in a shared replay memory . Schmitt et al . ( 2019 ) mix on-policy experience with shared experiences across concurrent hyperparameter sweeps to take advantage of parallel exploration . However , this work neither tackles dynamic configuration schedules nor architecture adaptation . ApeX/IMPALA : Resource utilization in the RL setting can be improved using multiple actors in a distributed setup and decoupling the learner from the actor . Horgan et al . ( 2018 ) extends a prioritized replay memory to a distributed setting ( Ape-X ) to scale experience collection for a replay memory used by a single trainer . In IMPALA ( Espeholt et al. , 2018 ) , multiple rollout actors asynchronously send their collected trajectories to a central learner through a queue . 
To correct the policy lag that this distributed setup introduces , IMPALA leverages the proposed V-trace algorithm for the central learner . These works aim at collecting large amounts of experience to benefit the learner , but they do not explore the space of hyperparameter configurations . In contrast , the presented work aims to reduce the number of environment interactions to perform efficient AutoRL . Neural architecture search with Reinforcement Learning : The work of Zoph & Le ( 2016 ) on RL for neural architecture search ( NAS ) is an interesting counterpart to our work on the intersection of RL and NAS . Zoph & Le ( 2016 ) employ RL for NAS to search for better-performing architectures , whereas we employ NAS for RL to make use of better network architectures . AutoRL : Within the framework of AutoRL , the joint hyperparameter optimization and architecture search problem is addressed as a two-stage optimization problem in Chiang et al . ( 2019 ) , first shaping the reward function and optimizing for the network architecture afterward . Similarly , Runge et al . ( 2019 ) propose to jointly optimize algorithm hyperparameters and network architectures by searching over the joint space . ( Footnote 1 : Please find the source code on GitHub : github.com/automl/SEARL ) However , they treat the RL training as a black-box and do not focus on online optimization or sample efficiency . In contrast to black-box optimization , we jointly train the agent and dynamically optimize hyperparameters . Faust et al . ( 2019 ) use an evolutionary approach to optimize a parametrized reward function based on which fixed network topologies are trained using standard RL algorithms , treating the RL algorithm together with a sampled reward function as a black-box optimizer . In this work , we do not use parametrized reward functions , but instead directly optimize the environment reward .
The main difference to this line of work is sample efficiency : While they train and evaluate thousands of configurations from scratch , we dynamically adapt the architecture and RL-algorithm hyperparameters online , thereby drastically reducing the total amount of interactions required for the algorithm to achieve good performance on a given task . We propose an evaluation protocol taking into account the aspect of sample-efficiency in RL in section 4.3 . Self-Tuning Actor-Critic : Zahavy et al . ( 2020 ) propose to meta-optimize a subset of differentiable hyperparameters in an outer loop using metagradients . This , however , does not extend to non-differentiable hyperparameters and thus does not allow for online tuning of , e.g. , the network architecture . As a result , such hyperparameters cannot be meta-optimized in their framework . HOOF : Paul et al . ( 2019 ) propose sample-efficient hyperparameter tuning for policy gradient methods by greedily maximizing the value of a set of candidate policies at each iteration . In contrast to our work , HOOF performs HPO for on-policy algorithms that do not achieve comparable performance on continuous control tasks while requiring more interactions and not considering architecture optimization . Population-Based Training ( PBT ) : PBT ( Jaderberg et al. , 2017 ) is a widely used dynamic and asynchronous optimization algorithm . This approach adapts a population of different hyperparameter settings online and in parallel during training , periodically replacing inferior members of the population with more promising members . Similarly to SEARL , PBT can jointly optimize the RL agent and its hyperparameters online , making it the most closely related work . Recent work has improved upon PBT by using more advanced hyperparameter selection techniques ( Parker-Holder et al. , 2020 ) .
In contrast to SEARL , PBT and follow-ups do not optimize the architecture and , more importantly , do not share experience within the population . As our experiments show , these advances of SEARL lead to speedups of up to 10x in terms of sample efficiency . 3 SAMPLE-EFFICIENT AUTORL . In this section , we introduce a Sample-Efficient framework for Automated Reinforcement Learning ( SEARL ) based on an evolutionary algorithm acting on hyperparameters and gradient-based training using a shared experience replay . First , we discuss relevant background that describes SEARL building blocks before giving an overview of the proposed AutoRL framework , followed by a detailed description of each individual component . 3.1 BACKGROUND . Evolvable neural network : Using evolutionary algorithms to design neural networks , called Neuroevolution ( Floreano et al. , 2008 ; Stanley et al. , 2019 ) , is a long-standing approach . Some approaches only optimize the network weights ( Such et al. , 2017 ) , while others optimize architectures and weights jointly ( Zhang & Mühlenbein , 1993 ; Stanley & Miikkulainen , 2002 ) . To evolve the neural network in SEARL , we encode the RL agents ’ neural architectures by the number of layers and the nodes per layer , similar to Miikkulainen et al . ( 2019 ) . When adding a new node to a layer , existing parameters are copied , and newly added parameters are initialized with a small magnitude . This is a common technique to preserve already trained network weights ( Wei et al. , 2016 ) . Shared experience replay : Replaying collected experiences ( Lin , 1992 ; Mnih et al. , 2015 ) smooths the training distribution over many past rollouts . The experience replay acts as a store for experience collected by agents interacting with the environment . Deep RL algorithms can sample from this storage to calculate gradient-based updates for the neural networks .
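The shared experience replay described above can be sketched as a simple bounded buffer that every agent in the population writes to and samples from . This is an illustrative stand-in , not the SEARL implementation :

```python
import random
from collections import deque

# Minimal sketch of a shared experience replay: all agents append their
# transitions, and all agents sample minibatches from the same buffer.
class SharedReplay:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted

    def add(self, transition):
        # transition: (state, action, reward, next_state, done)
        self.buffer.append(transition)

    def sample(self, batch_size, rng=random):
        # uniform sampling without replacement, capped at the buffer size
        return rng.sample(list(self.buffer), min(batch_size, len(self.buffer)))

# toy usage: four "agents" contribute 25 transitions each
buffer = SharedReplay(capacity=1000)
for agent_id in range(4):
    for t in range(25):
        buffer.add((agent_id, t, 0.0, t + 1, False))
batch = buffer.sample(16)
```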
It has been used and extended in various flavors , often to make use of diverse experience or experience collected in parallel ( Horgan et al. , 2018 ; Khadka & Tumer , 2018 ; Bodnar et al. , 2020 ; Khadka et al. , 2019 ) . SEARL employs a shared experience replay , which stores the diverse trajectories of all differently configured RL agents in the population so that each individual can benefit from the collective experiences during training . 3.2 FRAMEWORK . In SEARL , each individual in our population represents a deep reinforcement learning agent consisting of a policy and value network and the RL training algorithm hyperparameters , including the neural network architecture . The training and meta-optimization of these individuals take place in an evolutionary loop that consists of five basic phases ( initialization , evaluation , selection , mutation , and training ) , as shown in Figure 1 . During one epoch of this evolutionary loop , all individual properties can change through different mutations and training operators . This happens independently for each individual and can be processed in parallel . A novel feature of our approach is that rollouts of each individual are not only used for evaluation and selection purposes but also serve as experience for off-policy training of all agents and are stored in a shared replay memory . When changing the architecture , we follow the approach of Lamarckian evolution ( Ross , 1999 ) , where the updated weights of an agent during training are not only used in the evaluation phase but are preserved for the next generation . In the following , we describe the different phases of our evolutionary loop in detail ; we refer the reader to Appendix B for detailed pseudocode of the algorithm . Initialization : SEARL uses a population pop of N individuals , each consisting of an RL agent Ai and its hyperparameter settings θi .
Thus , we can represent popg at each generation g as follows : popg = ( { A1 , θ1 } g , { A2 , θ2 } g , ... , { AN , θN } g ) ( 1 ) The individual ’ s hyperparameter setting θi is composed of architecture hyperparameters , such as the number of layers or the layer sizes , and algorithm hyperparameters , such as the learning rate . To arrive at a minimum viable neural network size , we start with a reasonably small neural network architecture and enable its growth by using mutation operators . Other hyperparameters are set to some initial value , as they are subsequently adapted in the evolutionary loop . In our experiments we observed that using random initialization of hyperparameters for SEARL did not lead to large performance differences ; see Appendix E. This suggests that very little domain-specific knowledge is required to use SEARL effectively . Evaluation : After initialization and after each training phase , we evaluate each individual in the population using the RL agent , Ai , for at least one episode or a minimum number of steps in the environment . This ensures a minimum amount of new experience from each agent and keeps the stored trajectories in the shared replay memory diverse . The evaluation can be performed in parallel since each agent acts independently . We use the mean reward of the individual ’ s evaluation as fitness value fi . Selection : We use tournament selection with elitism ( Miller & Goldberg , 1995 ) for each prospective slot in the population of the new generation . For each tournament , k individuals are randomly chosen from the current population popg , and the individual with the largest fitness value fi is selected for the slot . We repeat this tournament N − 1 times to fill all slots . The tournament size k allows us to control how greedily the selection mechanism picks candidates for the new population .
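Tournament selection with elitism , as described above , can be sketched as follows . The names and toy fitness values are illustrative , not the paper ' s code :

```python
import random

# Sketch of tournament selection with elitism: the best individual keeps one
# reserved slot, and the remaining N-1 slots are filled by size-k tournaments.
def select(population, fitness, k, rng):
    n = len(population)
    elite = max(range(n), key=lambda i: fitness[i])
    new_pop = [population[elite]]                      # reserved elite slot
    for _ in range(n - 1):
        contenders = rng.sample(range(n), k)           # k random individuals
        winner = max(contenders, key=lambda i: fitness[i])
        new_pop.append(population[winner])             # tournament winner fills slot
    return new_pop

# toy usage with 8 individuals and tournament size k = 3
rng = random.Random(1)
population = list("abcdefgh")
fitness = [3, 9, 1, 4, 7, 2, 8, 5]
new_pop = select(population, fitness, k=3, rng=rng)
```

A larger `k` makes the selection greedier , since each tournament then almost always contains a high-fitness individual .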
We reserve one spot in the new population for the current population ’ s best-performing individual , thus preserving it across generations . Mutation : To explore the space of network weights and hyperparameters , we use different single-parent mutation operators . We apply one of the following operators uniformly at random to each member : ( 1 ) Mutation of the weights of the neural networks by adding Gaussian noise . ( 2 ) Change of the activation function of the neural network . ( 3 ) Change of the neural network size by either adding additional nodes to a given layer or adding a new layer altogether while reusing the trained weights and initializing new weights randomly . ( 4 ) Change of algorithm hyperparameters . ( 5 ) No operation . We refer the reader to Appendix A for more details . Training : Using each individual ’ s current hyperparameters , we train it by sampling from the shared replay memory . Each individual is trained for as many steps as frames have been generated in the evaluation phase , as is common practice in deep RL . Optionally , the training time could be reduced by using only a fraction j of the steps to adapt to computational constraints . Since the neural network size could be subject to mutation between two training phases , the target network of the RL algorithm needs to be adapted too . Furthermore , the optimizer weight-parameters connected to individual network weights cannot remain the same across generations . We address these issues by creating a new target network and re-initializing the optimizer at the beginning of each training phase . Our experiments show that this re-creation and re-initialization does not harm the performance of the considered RL algorithm . Please find more details in Appendix C. Like the evaluation phase , the training can be performed in parallel since every individual is trained independently .
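The node-addition mutation ( operator 3 ) reuses trained weights while growing the layer , so the agent keeps its learned behavior . A hedged numpy sketch of this idea ; the function name and the small initialization scale are assumptions :

```python
import numpy as np

# Sketch of the node-addition mutation: the trained weight block is preserved
# and the new rows are initialized with small random values, so the layer's
# learned function is largely kept (Lamarckian-style weight inheritance).
def widen_layer(W, b, new_units, rng, scale=1e-3):
    out_dim, in_dim = W.shape
    W_new = rng.normal(0.0, scale, size=(out_dim + new_units, in_dim))
    b_new = rng.normal(0.0, scale, size=out_dim + new_units)
    W_new[:out_dim] = W        # copy trained parameters into the wider layer
    b_new[:out_dim] = b
    return W_new, b_new

# toy usage: widen a 4-unit layer by 2 units
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
b = rng.normal(size=4)
W2, b2 = widen_layer(W, b, new_units=2, rng=rng)
```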
Shared experience replay : A shared experience replay memory collects trajectories from all evaluations and provides a diverse set of samples for each agent during the training phase . This helps to improve training speed and reduces the potential of over-fitting . | In this paper, the authors intend to propose an efficient automated reinforcement learning (RL) framework. To achieve this goal, they integrate three technologies, i.e., evolutionary RL for hyperparameter search, evolvable neural network for policy network design, and shared experience replay for improving data usage. The paper uses a case study on MuJoCo to demonstrate the claimed advantages over baselines. | SP:d58e5f01c1c68e9c2fca423be935d790ef5346ee |
GenQu: A Hybrid System for Learning Classical Data in Quantum States | 1 INTRODUCTION . In the past decade , machine learning and artificial intelligence powered applications have dramatically changed our daily life . Many novel algorithms and models achieve widespread practical successes in a variety of domains such as autonomous cars , healthcare , manufacturing , etc . Despite the wide adoption of ML models , training machine learning models such as DNNs requires a tremendous amount of computing resources to tune millions of hyper-parameters . Especially in the post Moore ’ s Law era , the limit of semiconductor fabrication technology cannot satisfy the rapidly increasing data volume needed for training , which restricts the development of this field ( Thompson et al. , 2020 ) . Encouraged by the recent demonstration of quantum supremacy ( Arute et al. , 2019 ) , researchers are searching for a transition from classical learning to quantum learning , with the promise of providing a quantum speedup over classical learning . The current state of quantum-based learning inspires alternative architectures to classical learning ’ s sub-fields , such as Deep Learning ( DL ) or Support Vector Machines ( SVM ) ( Garg & Ramakrishnan , 2020 ; Beer et al. , 2020 ; Potok et al. , 2018 ; Levine et al. , 2019 ) , where quantum algorithms provide improvements over their classical counterparts . For example , there are quite a number of adoptions of quantum learning algorithms , in domains such as quantum expectation maximization ( QEM ) ( Kerenidis et al. , 2019 ) , kernel methods sped up to sub-linear time ( Li et al. , 2019 ) , Quantum-SVM ( Ding et al. , 2019 ) , and NLP ( Panahi et al. , 2019 ) . Employing quantum systems to train deep learning models is a rather developed area , with a multitude of approaches to creating and mimicking aspects of classical deep learning systems ( Verdon et al. , 2019 ; Beer et al. , 2020 ; Chen et al. , 2020 ; Kerenidis et al.
, 2019 ) , with the following challenges : ( i ) such systems are held back by the low qubit count of current quantum computers . ( ii ) learning in a quantum computer becomes even more difficult due to the lack of efficient classical-to-quantum data encoding methodology ( Zoufal et al. , 2019 ; Cortese & Braje , 2019 ) . ( iii ) most of the existing studies are based on purely theoretical analysis or simulations , lacking practical usability on near-term quantum devices ( NISQ ) ( Preskill , 2018 ) . More importantly , the above challenges would persist even if the number of qubits supported in quantum machines increased significantly : when the number of qubits in the quantum system increases , the computational complexity grows exponentially ( Kaye et al. , 2007 ) , which quickly leads to tasks that become completely infeasible for simulation and near-term quantum computers . Therefore , discovering the representative power of qubits in quantum-based learning systems is extremely important , as it not only allows near-term devices to tackle more complex learning problems but also eases the complexity of the quantum state exponentially . However , work tackling the low qubit counts of current quantum machines is rather sparse : to the best of our knowledge , there is only one paper on the problem of the power of one qubit ( Ghobadi et al. , 2019 ) . Within this domain , the learning potential of qubits is under-investigated . In this paper , we propose GenQu , a general-purpose quantum-classical hybrid framework for learning classical data in quantum states . We demonstrate the power of qubits in machine learning by encoding data onto a single qubit and accomplishing tasks that are impossible for comparable data streams on classical machines , which addresses challenges ( i ) and ( ii ) .
Enabled by GenQu , we develop a deep neural network architecture for classification problems with only 2 qubits and a quantum generative architecture for learning distributions with only 1 qubit ; additionally , we evaluate GenQu with intensive experiments on both IBM-Q real quantum computers and simulators ( addressing challenge ( iii ) ) . Our major contributions include : • We propose GenQu , a hybrid and general-purpose quantum framework that works with near-term quantum computers and has the potential to fit in various learning models with a very low qubit count . • Based on GenQu , we propose three different quantum-based learning models to demonstrate the potential of learning data in quantum states . • Through experiments on both simulators and IBM-Q real quantum computers , we show that models in GenQu are able to reduce parameters by up to 95.86 % while still achieving similar accuracy in classification on the MNIST dataset reduced with Principal Component Analysis ( PCA ) ( Hoffmann , 2007 ) , and converge up to 66.67 % faster than traditional neural networks . 2 PRELIMINARIES . 2.1 THE QUANTUM BIT ( QUBIT ) . Quantum computers operate on a fundamentally different architecture compared to classical computers . Classical computers operate on binary digits ( bits ) , represented by a 1 or a 0 . Quantum computers , however , operate on quantum bits ( qubits ) . Qubits can represent a 1 or a 0 , or can be placed into a probabilistic mixture of both 1 and 0 simultaneously , namely superposition . Superposition is one of the core principles that allows quantum computers to perform certain tasks significantly faster than their traditional counterparts . When discussing a quantum framework , we make use of the 〈bra| and |ket〉 notation , where a 〈bra| indicates a horizontal quantum state vector ( 1×n ) and a |ket〉 indicates a vertical quantum state vector ( n×1 ) .
A qubit , as it is some combination of both |1〉 and |0〉 simultaneously , is described as a linear combination of |0〉 and |1〉 . This combination is described in Equation 1 . |Ψ〉 = α|0〉 + β|1〉 , |Ψ〉 = [ α β ] , |0〉 = [ 1 0 ] , |1〉 = [ 0 1 ] ( 1 ) In Equation 1 , |Ψ〉 describes the probabilistic quantum state of one qubit . The values of α and β are the probability amplitudes and encode the information regarding this qubit ’ s state . Although qubits can exist in both |1〉 and |0〉 at the same time , when they are measured for a definite output , they collapse to one of two possible values , where in the case above those values are |0〉 or |1〉 . The coefficients α and β are the square roots of the probabilities that the qubit measures as a |0〉 or a |1〉 , respectively . The definite states we are measuring the qubit against are based on how we measure the qubit , measuring as one of two possible outcomes . These two possible outcomes are two orthogonal eigenvectors , and can lie in any 3-dimensional direction . This is best visualized and understood by the Bloch sphere representation of a qubit , as illustrated in Figure 1 . A qubit can be represented by the unit Bloch sphere visualized in Figure 1 . In the case of |0〉 and |1〉 , we are measuring along the Z axis . Although the qubit could be measured against the Y or X axis , once a qubit is measured in a direction and is observed as some vector , the qubit remains in that state unless acted upon , therefore making a measurement in Z and then X fraught without further processing . A pure quantum state has data encoded and manipulated through rotations over the Bloch sphere surface .
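Equation 1 can be checked numerically : a qubit state is a normalized 2-vector whose squared amplitude magnitudes give the Z-axis measurement probabilities . A small numpy illustration :

```python
import numpy as np

# Numeric illustration of Equation 1: |psi> = alpha|0> + beta|1>,
# with |alpha|^2 and |beta|^2 the probabilities of measuring |0> and |1>.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # an equal superposition
psi = alpha * ket0 + beta * ket1

p0 = np.abs(psi[0]) ** 2   # probability of measuring |0>
p1 = np.abs(psi[1]) ** 2   # probability of measuring |1>
```

For this equal superposition , both outcomes occur with probability 0.5 , and the probabilities sum to 1 , as the normalization of the amplitudes requires .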
Relating to Equation 1 , α and β can be thought of as the state |Ψ〉 ’ s distance to the state vectors |0〉 and |1〉 , where a high α indicates being relatively close to |0〉 and vice versa . The power of quantum computing lies in the ability to sample the output repeatedly , thereby providing multiple ” answers ” for one question . 2.2 QUANTUM DATA MANIPULATION . To accomplish data transformation and data encoding , a qubit and its quantum state must be manipulated to encapsulate information onto it . Qubits are manipulated through quantum gates , which in turn manipulate the overall quantum state . These gates allow for complete manipulation over the Bloch sphere in Figure 1 , and more specifically complete manipulation of the quantum state vector , which can describe the state of a mixture of more than 1 qubit . We introduce the few gates that we make use of in this paper in Equations 2 and 3 . $R_Y(\theta) = \begin{bmatrix} \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{bmatrix} , \quad R_Z(\theta) = \begin{bmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{bmatrix}$ ( 2 ) $CR_Y(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ 0 & 0 & \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{bmatrix} , \quad CR_Z(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & e^{-i\theta/2} & 0 \\ 0 & 0 & 0 & e^{i\theta/2} \end{bmatrix}$ ( 3 ) The gates above accomplish specific tasks of quantum state manipulation . Equation 2 allows a single qubit to be manipulated to any position on the Bloch sphere ’ s surface , from any starting point on said sphere . Equation 3 accomplishes entangling two qubits with controlled rotations . Controlled rotations allow a single qubit ’ s state to be entangled with another . In the case of a controlled rotation gate , a qubit ’ s state is manipulated based on whether the control qubit measures as a |1〉 . Although we brush over this for the sake of easier reading , quantum entanglement empowers quantum computers to accomplish phenomenal tasks . These two styles of gates , single-qubit rotations and controlled-qubit rotations , allow for complete manipulation of quantum states , be it 1 or more qubits .
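The rotation gates of Equations 2 and 3 can be written out numerically and checked for unitarity . The sketch below assumes the standard textbook matrix conventions ( the controlled gate places the 2×2 rotation in the lower-right block of a 4×4 identity ) :

```python
import numpy as np

# Single-qubit rotations of Equation 2, written out numerically.
def RY(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def RZ(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

# Controlled versions of Equation 3: act with U on the target
# only when the control qubit is |1> (lower-right 2x2 block).
def controlled(U):
    C = np.eye(4, dtype=complex)
    C[2:, 2:] = U
    return C

theta = 0.7
gates = [RY(theta), RZ(theta), controlled(RY(theta)), controlled(RZ(theta))]
for G in gates:
    # every quantum gate is unitary: G^dagger G = I
    assert np.allclose(G.conj().T @ G, np.eye(G.shape[0]))
```

As a sanity check , `RY(np.pi)` maps |0〉 to |1〉 , i.e. a half-turn of the Bloch sphere about the Y axis .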
2.3 QUANTUM DEEP LEARNING Quantum Deep Learning is a relatively new approach to Quantum Machine Learning that takes quantum circuits and applies similar training techniques and learning methods of how classical neural networks work Chen et al . ( 2020 ) ; Garg & Ramakrishnan ( 2020 ) ; Beer et al . ( 2020 ) . In traditional deep learning layers are often used , where a layer is some large transformation function that takes in a set of inputs , and outputs a set of outputs , where the number of inputs does not necessarily equal the output . These functions are connected in series , sometimes in parallel , and typically trained through the use of back propagation Goodfellow et al . ( 2016 ) ; Chen et al . ( 2020 ) . This data flow through layers to some output is similar to how quantum circuits operate . Similar to how classical deep learning works , the way this data flows through time is up to the practitioner , who chooses and designs their network according to their needs . Quantum deep learning is approached through the use of layering gates sequentially . For our paper , our layers are comprised of the gates in Equations 2 and 3 . Similar to how deep learning is parameterized by connection weights , these gates are parameterized through rotations ( θ ) . At the end of the quantum circuit , a loss function is described by the practitioner , and the quantum networks parameters θ are updated iteratively such that the circuits loss is minimized Beer et al . ( 2020 ) ; Crooks ( 2019 ) . In the case of binary classification , one can make use of quantum entanglement to pool data down to one qubit channel , which then can be used as the final classification output of the network . These layers are visualized in Figures 3 and 2 . In Figure 3 we visualize the grouping of these circuits to be reminiscent of quantum traditional deep learning layers with the oracle approach . 
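The training scheme described above ( parameterized rotation gates updated to minimize a measurement-based loss ) can be illustrated with a toy one-qubit example : a single RY ( θ ) gate is trained by finite-difference gradient descent until the qubit measures as |1〉 . This is a pure numpy simulation sketch , not the circuits used in the paper :

```python
import numpy as np

# Toy "quantum deep learning" loop: one trainable RY(theta) gate,
# loss = probability of NOT measuring |1>, minimized by gradient descent
# with finite-difference gradients on the measurement probability.
def RY(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def loss(theta):
    psi = RY(theta) @ np.array([1.0, 0.0])   # circuit starts in |0>
    p1 = psi[1] ** 2                          # probability of measuring |1>
    return 1.0 - p1                           # zero when the qubit outputs |1>

theta, lr, eps = 0.1, 0.5, 1e-4
for _ in range(200):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad                        # iterative parameter update
```

After training , θ converges toward π , the RY angle that rotates |0〉 to |1〉 . On real hardware the probability would be estimated by repeated sampling rather than read off the state vector .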
These operations can be interpreted as a qubit entering through the left starting in state |0〉 , passing through gates until it has been transformed into state |φ〉 . 3 GENQU FRAMEWORK AND LEARNING MODELS 3.1 GENQU FRAMEWORK Our proposed GenQu framework is illustrated in Figure 4 . Before any operation of the framework is performed , the data must be transformed from classical to quantum states . This is done by transforming classical data into applicable quantum rotations , and is described under section 3.2 . Following this , the rotations are loaded onto a quantum computer . The quantum circuit preparation section is where the circuit relating to a specific machine learning algorithm is designed . For example , this is where a deep neural network or a convolutional neural network architecture would be set up , initialized , and prepared . This circuit is loaded onto the quantum computer after the quantum data loading section . Once the circuit is set up , it can be induced . Inducing the quantum circuit results in the quantum state transformation of the input data over the quantum machine learning model . From here , if the output of the model were a quantum state , one could end here and feed it to another quantum algorithm . However , in the case of updating learnable parameters , the relevant qubits need to be measured . We feed the qubits ’ measurements to a loss analysis section , where we update our parameters accordingly . Once the parameters have been updated , we repeat this process of circuit loading , circuit inducing , and measurement , updating parameters until a desired loss of the network is attained or a predefined number of epochs have run . | The authors identify that “classical” learning is running into limitations due to power and scale of computing systems. The authors suggest “quantum” learning might supplant classical learning and solve these fundamental challenges.
And, the authors suggest a method that is efficient in the number of qubits, which can be quite precious. | SP:783b8a3cf723a02ef4c26d1c254ad4e97efeafba |
GenQu: A Hybrid System for Learning Classical Data in Quantum States | 1 INTRODUCTION. In the past decade, machine learning and artificial intelligence powered applications have dramatically changed our daily life. Many novel algorithms and models have achieved widespread practical success in a variety of domains such as autonomous cars, healthcare, and manufacturing. Despite the wide adoption of ML models, training machine learning models such as DNNs requires a tremendous amount of computing resources to tune millions of hyper-parameters. Especially in the post-Moore's-Law era, the limits of semiconductor fabrication technology cannot keep up with the rapidly increasing data volume needed for training, which restricts the development of this field (Thompson et al., 2020). Encouraged by the recent demonstration of quantum supremacy (Arute et al., 2019), researchers are searching for a transition from classical learning to quantum learning, with the promise of a quantum speedup over classical learning. The current state of quantum-based learning inspires alternative architectures to classical learning's sub-fields, such as Deep Learning (DL) or Support Vector Machines (SVM) (Garg & Ramakrishnan, 2020; Beer et al., 2020; Potok et al., 2018; Levine et al., 2019), where the quantum algorithm provides improvements over its classical counterpart. For example, there are quite a number of adoptions of quantum learning algorithms, including quantum expectation maximization (QEM) (Kerenidis et al., 2019), kernel methods sped up to sub-linear time (Li et al., 2019), Quantum-SVM (Ding et al., 2019), and NLP (Panahi et al., 2019). Employing quantum systems to train deep learning models is rather well developed, with a multitude of approaches to creating and mimicking aspects of classical deep learning systems (Verdon et al., 2019; Beer et al., 2020; Chen et al., 2020; Kerenidis et al.
, 2019), which face the following challenges: (i) such systems are held back by the low qubit counts of current quantum computers; (ii) learning on a quantum computer becomes even more difficult due to the lack of efficient classical-to-quantum data encoding methodologies (Zoufal et al., 2019; Cortese & Braje, 2019); (iii) most of the existing studies are based on purely theoretical analysis or simulations, lacking practical usability on near-term quantum devices (NISQ) (Preskill, 2018). More importantly, the above challenges would persist even when the number of qubits supported in quantum machines is significantly increased: as the number of qubits in the quantum system increases, the computational complexity grows exponentially (Kaye et al., 2007), which quickly leads to tasks that are completely infeasible for simulation and near-term quantum computers. Therefore, discovering the representative power of qubits in quantum-based learning systems is extremely important: not only does it allow near-term devices to tackle more complex learning problems, it also eases the complexity of the quantum state exponentially. However, work tackling the low qubit counts of current quantum machines is rather sparse: to the best of our knowledge, there is only one paper on the power of one qubit (Ghobadi et al., 2019). Within this domain, the learning potential of qubits is under-investigated. In this paper, we propose GenQu, a general-purpose quantum-classical hybrid framework for learning classical data in quantum states. We demonstrate the power of qubits in machine learning by encoding data onto a single qubit and accomplishing tasks that are impossible for comparable data streams on classical machines, which addresses challenges (i) and (ii).
Enabled by GenQu, we develop a deep neural network architecture for classification problems with only 2 qubits and a quantum generative architecture for learning distributions with only 1 qubit. Additionally, we evaluate GenQu with extensive experiments on both IBM-Q real quantum computers and simulators (addressing challenge (iii)). Our major contributions include: • We propose GenQu, a hybrid and general-purpose quantum framework that works with near-term quantum computers and has the potential to fit various learning models with a very low qubit count. • Based on GenQu, we propose three different quantum-based learning models to demonstrate the potential of learning data in quantum states. • Through experiments on both simulators and IBM-Q real quantum computers, we show that models in GenQu are able to reduce parameters by up to 95.86% while still achieving similar classification accuracy on the PCA-reduced (Hoffmann, 2007) MNIST dataset, and converge up to 66.67% faster than traditional neural networks. 2 PRELIMINARIES. 2.1 THE QUANTUM BIT (QUBIT). Quantum computers operate on a fundamentally different architecture compared to classical computers. Classical computers operate on binary digits (bits), represented by a 1 or a 0. Quantum computers, however, operate on quantum bits (qubits). Qubits can represent a 1 or a 0, or can be placed into a probabilistic mixture of both 1 and 0 simultaneously, namely superposition. Superposition is one of the core principles that allows quantum computers to perform certain tasks significantly faster than their traditional counterparts. When discussing a quantum framework, we make use of the 〈bra| and |ket〉 notation, where 〈bra| indicates a horizontal quantum state vector (1×n) and |ket〉 indicates a vertical quantum state vector (n×1).
A qubit, as it is some combination of both |1〉 and |0〉 simultaneously, is described as a linear combination of |0〉 and |1〉. This combination is given in Equation 1. |Ψ〉 = α|0〉 + β|1〉, |Ψ〉 = [α, β]ᵀ, |0〉 = [1, 0]ᵀ, |1〉 = [0, 1]ᵀ (1) In Equation 1, |Ψ〉 describes the probabilistic quantum state of a single qubit. The values of α and β are the probability amplitudes and encode the information regarding this qubit's state. Although qubits can exist in both |1〉 and |0〉 at the same time, when they are measured for a definite output they collapse to one of two possible values, which in the case above are |0〉 or |1〉. The coefficients α and β are the square roots of the probabilities that the qubit measures as a |0〉 or a |1〉, respectively. The definite states we measure the qubit against depend on how we measure the qubit: the two possible measurement outcomes are two orthogonal eigenvectors, which can point in any 3-dimensional direction. This is best visualized and understood through the Bloch sphere representation of a qubit, as illustrated in Figure 1. A qubit can be represented by the unit Bloch sphere visualized in Figure 1. In the case of |0〉 and |1〉, we are measuring along the z axis. Although the qubit could be measured along the Y or X axis, once a qubit is measured in a direction and observed as some vector, the qubit remains in that state unless acted upon; therefore measuring in Z and then X is unreliable without further processing. A pure quantum state has data encoded and manipulated through rotations over the Bloch sphere surface.
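The amplitude-probability relationship of Equation 1 can be sketched in a few lines of NumPy. This is an illustrative sketch of our own (the amplitude values here are arbitrary choices, not from the paper):

```python
import numpy as np

# A single-qubit state |psi> = alpha|0> + beta|1> as a length-2 complex vector.
alpha, beta = 0.6, 0.8          # must satisfy |alpha|^2 + |beta|^2 = 1
psi = np.array([alpha, beta], dtype=complex)

# Measurement in the computational (Z) basis collapses |psi> to |0> or |1>
# with probabilities |alpha|^2 and |beta|^2, respectively.
p0 = abs(psi[0]) ** 2           # probability of measuring |0>
p1 = abs(psi[1]) ** 2           # probability of measuring |1>
assert np.isclose(p0 + p1, 1.0)

# Repeated sampling ("multiple answers for one question"): estimate p1 empirically.
rng = np.random.default_rng(0)
shots = rng.random(100_000) < p1
print(p0, p1, shots.mean())     # empirical mean approaches p1 = 0.64
```

The collapse rule is what makes the empirical frequency of |1〉 outcomes converge to |β|² as the number of shots grows.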
Relating to Equation 1, α and β can be thought of as the state |φ〉's distances to the state vectors |0〉 and |1〉, where a high α indicates being relatively close to |0〉, and vice versa. The power of quantum computing lies in the ability to sample the output repeatedly, thereby providing multiple "answers" for one question. 2.2 QUANTUM DATA MANIPULATION. To accomplish data transformation and data encoding, a qubit and its quantum state must be manipulated to encapsulate information. Qubits are manipulated through quantum gates, which in turn manipulate the overall quantum state. These gates allow for complete manipulation over the Bloch sphere in Figure 1, and more specifically complete manipulation of the quantum state vector, which can describe the state of a mixture of more than one qubit. We introduce the gates used in this paper in Equations 2 and 3. RY(θ) = [cos(θ/2), −sin(θ/2); sin(θ/2), cos(θ/2)], RZ(θ) = [e^{−iθ/2}, 0; 0, e^{iθ/2}] (2) CRY(θ) = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, cos(θ/2), −sin(θ/2); 0, 0, sin(θ/2), cos(θ/2)], CRZ(θ) = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, e^{−iθ/2}, 0; 0, 0, 0, e^{iθ/2}] (3) The gates above accomplish specific tasks of quantum state manipulation. Equation 2 allows a single qubit to be moved to any position on the Bloch sphere's surface from any starting point on that sphere. Equation 3 entangles two qubits with controlled rotations. Controlled rotations allow a single qubit's state to be entangled with another's. In the case of a controlled rotation gate, a qubit's state is manipulated based on whether the control qubit measures as a |1〉. Although we gloss over this for the sake of easier reading, quantum entanglement empowers quantum computers to accomplish phenomenal tasks. These two styles of gates, single-qubit rotations and controlled-qubit rotations, allow for complete manipulation of quantum states, be it 1 or more qubits.
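The rotation gates of Equations 2 and 3 are small unitary matrices and can be written out directly. A minimal NumPy sketch (our own construction, using the standard textbook definitions of these gates):

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis (Equation 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    """Single-qubit rotation about the Z axis (Equation 2)."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def controlled(gate):
    """Two-qubit controlled version of a single-qubit gate (Equation 3):
    the gate acts on the target only when the control qubit is |1>."""
    out = np.eye(4, dtype=complex)
    out[2:, 2:] = gate
    return out

theta = 1.2
for g in (ry(theta), rz(theta), controlled(ry(theta)), controlled(rz(theta))):
    assert np.allclose(g.conj().T @ g, np.eye(len(g)))  # every gate is unitary

# RY(theta) rotates |0> to cos(theta/2)|0> + sin(theta/2)|1>.
psi = ry(theta) @ np.array([1, 0], dtype=complex)
print(np.round(psi.real, 4))
```

The unitarity checks confirm that these gates preserve the norm of the state vector, i.e., the total measurement probability stays 1.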
2.3 QUANTUM DEEP LEARNING Quantum deep learning is a relatively new approach to quantum machine learning that takes quantum circuits and applies training techniques and learning methods similar to those of classical neural networks (Chen et al., 2020; Garg & Ramakrishnan, 2020; Beer et al., 2020). Traditional deep learning is often organized into layers, where a layer is a large transformation function that takes in a set of inputs and produces a set of outputs, and the number of inputs does not necessarily equal the number of outputs. These functions are connected in series, sometimes in parallel, and are typically trained through backpropagation (Goodfellow et al., 2016; Chen et al., 2020). This flow of data through layers to some output is similar to how quantum circuits operate. As in classical deep learning, how this data flows through time is up to the practitioner, who chooses and designs the network according to their needs. Quantum deep learning is approached by layering gates sequentially. In this paper, our layers are composed of the gates in Equations 2 and 3. Just as classical deep learning is parameterized by connection weights, these gates are parameterized by rotations (θ). At the end of the quantum circuit, a loss function is specified by the practitioner, and the quantum network's parameters θ are updated iteratively so that the circuit's loss is minimized (Beer et al., 2020; Crooks, 2019). In the case of binary classification, one can use quantum entanglement to pool data down to one qubit channel, which can then serve as the final classification output of the network. These layers are visualized in Figures 3 and 2. In Figure 3 we visualize the grouping of these circuits to be reminiscent of traditional deep learning layers with the oracle approach.
These operations can be interpreted as a qubit entering from the left in state |0〉 and passing through gates until it has been transformed into state |φ〉. 3 GENQU FRAMEWORK AND LEARNING MODELS 3.1 GENQU FRAMEWORK Our proposed GenQu framework is illustrated in Figure 4. Before any operation of the framework is performed, the data must be transformed from classical to quantum states. This is done by transforming classical data into applicable quantum rotations, as described in Section 3.2. Following this, the rotations are loaded onto a quantum computer. The quantum circuit preparation stage is where the circuit for a specific machine learning algorithm is designed. For example, this is where a deep neural network or a convolutional neural network architecture would be set up, initialized, and prepared. This circuit is loaded onto the quantum computer after the quantum data loading stage. Once the circuit is set up, it can be executed. Executing the quantum circuit applies the quantum machine learning model's state transformation to the input data. From here, if the output of the model is a quantum state, one could stop and feed it to another quantum algorithm. However, to update learnable parameters, the relevant qubits need to be measured. We feed the qubit measurements to a loss analysis stage, where we update our parameters accordingly. Once the parameters have been updated, we repeat this process of circuit loading, circuit execution, measurement, and parameter updating until a desired network loss is attained or a predefined number of epochs have run. | This paper presents GenQu, a hybrid and general-purpose quantum framework for learning classical data through quantum states. By encoding two dimensions of data per qubit,
they demonstrate the effectiveness of their framework via two classical classification tasks, where 1 and 2 qubits are used, respectively. This paper reads more like an entry-level tutorial than a technical paper. More technical contributions are needed for a full paper. | SP:783b8a3cf723a02ef4c26d1c254ad4e97efeafba |
GenQu: A Hybrid System for Learning Classical Data in Quantum States | 1 INTRODUCTION. In the past decade, machine learning and artificial intelligence powered applications have dramatically changed our daily life. Many novel algorithms and models have achieved widespread practical success in a variety of domains such as autonomous cars, healthcare, and manufacturing. Despite the wide adoption of ML models, training machine learning models such as DNNs requires a tremendous amount of computing resources to tune millions of hyper-parameters. Especially in the post-Moore's-Law era, the limits of semiconductor fabrication technology cannot keep up with the rapidly increasing data volume needed for training, which restricts the development of this field (Thompson et al., 2020). Encouraged by the recent demonstration of quantum supremacy (Arute et al., 2019), researchers are searching for a transition from classical learning to quantum learning, with the promise of a quantum speedup over classical learning. The current state of quantum-based learning inspires alternative architectures to classical learning's sub-fields, such as Deep Learning (DL) or Support Vector Machines (SVM) (Garg & Ramakrishnan, 2020; Beer et al., 2020; Potok et al., 2018; Levine et al., 2019), where the quantum algorithm provides improvements over its classical counterpart. For example, there are quite a number of adoptions of quantum learning algorithms, including quantum expectation maximization (QEM) (Kerenidis et al., 2019), kernel methods sped up to sub-linear time (Li et al., 2019), Quantum-SVM (Ding et al., 2019), and NLP (Panahi et al., 2019). Employing quantum systems to train deep learning models is rather well developed, with a multitude of approaches to creating and mimicking aspects of classical deep learning systems (Verdon et al., 2019; Beer et al., 2020; Chen et al., 2020; Kerenidis et al.
, 2019), which face the following challenges: (i) such systems are held back by the low qubit counts of current quantum computers; (ii) learning on a quantum computer becomes even more difficult due to the lack of efficient classical-to-quantum data encoding methodologies (Zoufal et al., 2019; Cortese & Braje, 2019); (iii) most of the existing studies are based on purely theoretical analysis or simulations, lacking practical usability on near-term quantum devices (NISQ) (Preskill, 2018). More importantly, the above challenges would persist even when the number of qubits supported in quantum machines is significantly increased: as the number of qubits in the quantum system increases, the computational complexity grows exponentially (Kaye et al., 2007), which quickly leads to tasks that are completely infeasible for simulation and near-term quantum computers. Therefore, discovering the representative power of qubits in quantum-based learning systems is extremely important: not only does it allow near-term devices to tackle more complex learning problems, it also eases the complexity of the quantum state exponentially. However, work tackling the low qubit counts of current quantum machines is rather sparse: to the best of our knowledge, there is only one paper on the power of one qubit (Ghobadi et al., 2019). Within this domain, the learning potential of qubits is under-investigated. In this paper, we propose GenQu, a general-purpose quantum-classical hybrid framework for learning classical data in quantum states. We demonstrate the power of qubits in machine learning by encoding data onto a single qubit and accomplishing tasks that are impossible for comparable data streams on classical machines, which addresses challenges (i) and (ii).
Enabled by GenQu, we develop a deep neural network architecture for classification problems with only 2 qubits and a quantum generative architecture for learning distributions with only 1 qubit. Additionally, we evaluate GenQu with extensive experiments on both IBM-Q real quantum computers and simulators (addressing challenge (iii)). Our major contributions include: • We propose GenQu, a hybrid and general-purpose quantum framework that works with near-term quantum computers and has the potential to fit various learning models with a very low qubit count. • Based on GenQu, we propose three different quantum-based learning models to demonstrate the potential of learning data in quantum states. • Through experiments on both simulators and IBM-Q real quantum computers, we show that models in GenQu are able to reduce parameters by up to 95.86% while still achieving similar classification accuracy on the PCA-reduced (Hoffmann, 2007) MNIST dataset, and converge up to 66.67% faster than traditional neural networks. 2 PRELIMINARIES. 2.1 THE QUANTUM BIT (QUBIT). Quantum computers operate on a fundamentally different architecture compared to classical computers. Classical computers operate on binary digits (bits), represented by a 1 or a 0. Quantum computers, however, operate on quantum bits (qubits). Qubits can represent a 1 or a 0, or can be placed into a probabilistic mixture of both 1 and 0 simultaneously, namely superposition. Superposition is one of the core principles that allows quantum computers to perform certain tasks significantly faster than their traditional counterparts. When discussing a quantum framework, we make use of the 〈bra| and |ket〉 notation, where 〈bra| indicates a horizontal quantum state vector (1×n) and |ket〉 indicates a vertical quantum state vector (n×1).
A qubit, as it is some combination of both |1〉 and |0〉 simultaneously, is described as a linear combination of |0〉 and |1〉. This combination is given in Equation 1. |Ψ〉 = α|0〉 + β|1〉, |Ψ〉 = [α, β]ᵀ, |0〉 = [1, 0]ᵀ, |1〉 = [0, 1]ᵀ (1) In Equation 1, |Ψ〉 describes the probabilistic quantum state of a single qubit. The values of α and β are the probability amplitudes and encode the information regarding this qubit's state. Although qubits can exist in both |1〉 and |0〉 at the same time, when they are measured for a definite output they collapse to one of two possible values, which in the case above are |0〉 or |1〉. The coefficients α and β are the square roots of the probabilities that the qubit measures as a |0〉 or a |1〉, respectively. The definite states we measure the qubit against depend on how we measure the qubit: the two possible measurement outcomes are two orthogonal eigenvectors, which can point in any 3-dimensional direction. This is best visualized and understood through the Bloch sphere representation of a qubit, as illustrated in Figure 1. A qubit can be represented by the unit Bloch sphere visualized in Figure 1. In the case of |0〉 and |1〉, we are measuring along the z axis. Although the qubit could be measured along the Y or X axis, once a qubit is measured in a direction and observed as some vector, the qubit remains in that state unless acted upon; therefore measuring in Z and then X is unreliable without further processing. A pure quantum state has data encoded and manipulated through rotations over the Bloch sphere surface.
Relating to Equation 1, α and β can be thought of as the state |φ〉's distances to the state vectors |0〉 and |1〉, where a high α indicates being relatively close to |0〉, and vice versa. The power of quantum computing lies in the ability to sample the output repeatedly, thereby providing multiple "answers" for one question. 2.2 QUANTUM DATA MANIPULATION. To accomplish data transformation and data encoding, a qubit and its quantum state must be manipulated to encapsulate information. Qubits are manipulated through quantum gates, which in turn manipulate the overall quantum state. These gates allow for complete manipulation over the Bloch sphere in Figure 1, and more specifically complete manipulation of the quantum state vector, which can describe the state of a mixture of more than one qubit. We introduce the gates used in this paper in Equations 2 and 3. RY(θ) = [cos(θ/2), −sin(θ/2); sin(θ/2), cos(θ/2)], RZ(θ) = [e^{−iθ/2}, 0; 0, e^{iθ/2}] (2) CRY(θ) = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, cos(θ/2), −sin(θ/2); 0, 0, sin(θ/2), cos(θ/2)], CRZ(θ) = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, e^{−iθ/2}, 0; 0, 0, 0, e^{iθ/2}] (3) The gates above accomplish specific tasks of quantum state manipulation. Equation 2 allows a single qubit to be moved to any position on the Bloch sphere's surface from any starting point on that sphere. Equation 3 entangles two qubits with controlled rotations. Controlled rotations allow a single qubit's state to be entangled with another's. In the case of a controlled rotation gate, a qubit's state is manipulated based on whether the control qubit measures as a |1〉. Although we gloss over this for the sake of easier reading, quantum entanglement empowers quantum computers to accomplish phenomenal tasks. These two styles of gates, single-qubit rotations and controlled-qubit rotations, allow for complete manipulation of quantum states, be it 1 or more qubits.
2.3 QUANTUM DEEP LEARNING Quantum deep learning is a relatively new approach to quantum machine learning that takes quantum circuits and applies training techniques and learning methods similar to those of classical neural networks (Chen et al., 2020; Garg & Ramakrishnan, 2020; Beer et al., 2020). Traditional deep learning is often organized into layers, where a layer is a large transformation function that takes in a set of inputs and produces a set of outputs, and the number of inputs does not necessarily equal the number of outputs. These functions are connected in series, sometimes in parallel, and are typically trained through backpropagation (Goodfellow et al., 2016; Chen et al., 2020). This flow of data through layers to some output is similar to how quantum circuits operate. As in classical deep learning, how this data flows through time is up to the practitioner, who chooses and designs the network according to their needs. Quantum deep learning is approached by layering gates sequentially. In this paper, our layers are composed of the gates in Equations 2 and 3. Just as classical deep learning is parameterized by connection weights, these gates are parameterized by rotations (θ). At the end of the quantum circuit, a loss function is specified by the practitioner, and the quantum network's parameters θ are updated iteratively so that the circuit's loss is minimized (Beer et al., 2020; Crooks, 2019). In the case of binary classification, one can use quantum entanglement to pool data down to one qubit channel, which can then serve as the final classification output of the network. These layers are visualized in Figures 3 and 2. In Figure 3 we visualize the grouping of these circuits to be reminiscent of traditional deep learning layers with the oracle approach.
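The train-by-measurement loop described above (parameterized rotation, measurement, loss, parameter update) can be sketched for the smallest possible "network": a single RY gate whose angle is trained so that the circuit's output probability matches a target. This is our own toy sketch, not the paper's model; the target value, initialization, and step size are arbitrary choices:

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def p1(theta):
    """Probability of measuring |1> after applying RY(theta) to |0>."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi[1] ** 2

target = 0.9                      # desired output probability (toy label)
theta, lr = 0.1, 0.5              # arbitrary initialization and step size
for _ in range(200):
    # Parameter-shift gradient of p1 w.r.t. theta:
    # d p1 / d theta = (p1(theta + pi/2) - p1(theta - pi/2)) / 2.
    grad_p = (p1(theta + np.pi / 2) - p1(theta - np.pi / 2)) / 2
    loss_grad = 2 * (p1(theta) - target) * grad_p   # squared-loss chain rule
    theta -= lr * loss_grad       # a classical optimizer updates the quantum parameter

print(round(p1(theta), 3))        # converges near the target 0.9
```

The key point mirrored from the text: the circuit is quantum, but the loss analysis and the θ update are entirely classical, which is what makes such schemes hybrid.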
These operations can be interpreted as a qubit entering from the left in state |0〉 and passing through gates until it has been transformed into state |φ〉. 3 GENQU FRAMEWORK AND LEARNING MODELS 3.1 GENQU FRAMEWORK Our proposed GenQu framework is illustrated in Figure 4. Before any operation of the framework is performed, the data must be transformed from classical to quantum states. This is done by transforming classical data into applicable quantum rotations, as described in Section 3.2. Following this, the rotations are loaded onto a quantum computer. The quantum circuit preparation stage is where the circuit for a specific machine learning algorithm is designed. For example, this is where a deep neural network or a convolutional neural network architecture would be set up, initialized, and prepared. This circuit is loaded onto the quantum computer after the quantum data loading stage. Once the circuit is set up, it can be executed. Executing the quantum circuit applies the quantum machine learning model's state transformation to the input data. From here, if the output of the model is a quantum state, one could stop and feed it to another quantum algorithm. However, to update learnable parameters, the relevant qubits need to be measured. We feed the qubit measurements to a loss analysis stage, where we update our parameters accordingly. Once the parameters have been updated, we repeat this process of circuit loading, circuit execution, measurement, and parameter updating until a desired network loss is attained or a predefined number of epochs have run. | The paper claims to introduce a new quantum machine learning framework called GenQu. However, the description of the framework is very vague (using classical computers to optimize the parameters of a fixed quantum circuit), and hardly novel.
In fact, the same basic ideas are so well-known in the community that they are described in detail as usage examples for popular quantum computing platforms such as Qiskit and IBM Q. | SP:783b8a3cf723a02ef4c26d1c254ad4e97efeafba |
Double Q-learning: New Analysis and Sharper Finite-time Bound | Double Q-learning (Hasselt, 2010) has gained significant success in practice due to its effectiveness in overcoming the overestimation issue of Q-learning. However, theoretical understanding of double Q-learning is rather limited, and the only existing finite-time analysis was recently established in Xiong et al. (2020) under a polynomial learning rate. This paper analyzes the more challenging case with a rescaled linear/constant learning rate, for which the previous method does not appear to be applicable. We develop new analytical tools that achieve an order-level better finite-time convergence rate than the previously established result. Specifically, we show that synchronous double Q-learning attains an ε-accurate global optimum with a time complexity of Ω(ln D / ((1−γ)^7 ε^2)), and the asynchronous algorithm attains a time complexity of Ω̃(L / ((1−γ)^7 ε^2)), where D is the cardinality of the state-action space, γ is the discount factor, and L is a parameter related to the sampling strategy for asynchronous double Q-learning. These results improve the order-level dependence of the convergence rate on all major parameters (ε, 1−γ, D, L) provided in Xiong et al. (2020). The new analysis in this paper presents a more direct and succinct approach for characterizing the finite-time convergence rate of double Q-learning. 1 INTRODUCTION. Double Q-learning, proposed in Hasselt (2010), is a widely used model-free reinforcement learning (RL) algorithm in practice for searching for an optimal policy (Zhang et al., 2018a;b; Hessel et al., 2018). Compared to the vanilla Q-learning proposed in Watkins & Dayan (1992), double Q-learning uses two Q-estimators whose roles are randomly selected at each iteration, respectively for estimating the maximum Q-function value and updating the Q-function.
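The two-estimator update just described can be sketched for a toy tabular MDP. This is a minimal sketch of our own (the MDP, seed, and iteration count are arbitrary choices), using the rescaled linear learning rate α_t = a/(b+(1−γ)t) that this paper analyzes:

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 3, 2, 0.5
# Hypothetical toy MDP: P[s, a] is a next-state distribution, R[s, a] a reward.
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
R = rng.random((nS, nA))

QA = np.zeros((nS, nA))
QB = np.zeros((nS, nA))
for t in range(20000):
    alpha = 3.0 / (3.0 + (1.0 - gamma) * t)    # rescaled linear learning rate
    for s in range(nS):                         # synchronous: visit all (s, a)
        for a in range(nA):
            s2 = rng.choice(nS, p=P[s, a])
            if rng.random() < 0.5:              # randomly pick which estimator to update:
                a_star = QA[s2].argmax()        # one estimator selects the action ...
                QA[s, a] += alpha * (R[s, a] + gamma * QB[s2, a_star] - QA[s, a])
            else:
                b_star = QB[s2].argmax()        # ... the other evaluates its value
                QB[s, a] += alpha * (R[s, a] + gamma * QA[s2, b_star] - QB[s, a])

# Compare against the true optimal Q from value iteration.
Q = np.zeros((nS, nA))
for _ in range(200):
    Q = R + gamma * P @ Q.max(axis=1)
print(np.abs(QA - Q).max())    # small estimation error
```

Decoupling action selection (argmax under one estimator) from action evaluation (value under the other) is precisely what removes the positive bias of the single-estimator max.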
In this way, the overestimation of the action-value function in vanilla Q-learning can be effectively mitigated, especially when the reward is random or prone to errors (Hasselt, 2010; Hasselt et al., 2016; Xiong et al., 2020). Moreover, double Q-learning has been shown to have the desired performance in both the finite state-action setting (Hasselt, 2010) and the infinite setting (Hasselt et al., 2016), where it successfully improved the performance of the deep Q-network (DQN) and thus inspired many subsequent variants (Zhang et al., 2017; Abed-alguni & Ottom, 2018). In parallel to its empirical success in practice, the theoretical convergence properties of double Q-learning have also been explored. Its asymptotic convergence was first established in Hasselt (2010). The asymptotic mean-square error for double Q-learning was studied in Weng et al. (2020c) under the assumption that the algorithm converges to a unique optimal policy. Furthermore, in Xiong et al. (2020), the finite-time convergence rate was established for double Q-learning with a polynomial learning rate α_t = 1/t^ω, ω ∈ (0, 1). Under such a choice of learning rate, they showed that double Q-learning attains an ε-accurate optimal Q-function at a time complexity approaching but never reaching Ω(1/ε^2), at the cost of an asymptotically large exponent on 1/(1−γ). However, a polynomial learning rate typically does not offer the best possible convergence rate, as it has been shown for RL algorithms that a so-called rescaled linear learning rate (of the form α_t = a/(b+ct)) and a constant learning rate achieve a better convergence rate (Bhandari et al., 2018; Wainwright, 2019a;b; Chen et al., 2020; Qu & Wierman, 2020). Therefore, a natural question arises: Can a rescaled linear learning rate or a constant learning rate improve the convergence rate of double Q-learning order-wise?
If yes, does it also improve the dependence of the convergence rate on other important parameters of the Markov decision process (MDP), such as the discount factor and the cardinality of the state and action spaces? The answer to the above question does not follow immediately from Xiong et al. (2020), because the finite-time analysis framework in Xiong et al. (2020) does not handle such learning rates in a way that yields a desirable result. This paper develops a novel analysis approach and provides affirmative answers to the above question. 1.1 OUR CONTRIBUTIONS. This paper establishes sharper finite-time bounds for double Q-learning with a rescaled linear/constant learning rate, which are order-wise better than the existing bounds in Xiong et al. (2020). We devise an analysis approach different from that in Xiong et al. (2020), which is more capable of handling variants of double Q-learning. • For synchronous double Q-learning, where all state-action pairs are visited at each iteration, we apply a rescaled linear learning rate α_t = 3/(3+(1−γ)t) and show that the algorithm can attain an ε-accurate global optimum with a time complexity of Ω(ln D / ((1−γ)^7 ε^2)), where γ is the discount factor and D = |S||A| is the cardinality of the finite state-action space. As a comparison, for the ε-dominated regime (with relatively small γ), our result attains an ε-accurate optimal Q-function with a time complexity of Ω(1/ε^2), whereas the result in Xiong et al. (2020) (see Table 1) does not exactly reach Ω(1/ε^2), and its approach to such an order (η := 1−ω → 0) comes at the additional cost of an asymptotically large exponent on 1/(1−γ). For the (1−γ)-dominated regime, our result improves on that in Xiong et al. (2020) (which has been optimized in its dependence on 1−γ in Table 1) by O((ln(1/(1−γ)))^7).
• For asynchronous double Q-learning, where only one state-action pair is visited at each iteration, we obtain a time complexity of $\tilde{\Omega}(\frac{L}{(1-\gamma)^7 \epsilon^2})$, where $L$ is a parameter related to the sampling strategy in Assumption 1. As illustrated in Table 1, our result improves upon that in Xiong et al. (2020) order-wise in terms of its dependence on $\epsilon$ and $1-\gamma$, as well as on $L$ by at least $O(L^5)$. Our analysis takes a different approach from that in Xiong et al. (2020) in order to handle the rescaled linear/constant learning rate. More specifically, to deal with a pair of nested stochastic approximation (SA) recursions, we directly establish how the error dynamics (of the outer SA) between the Q-estimator and the global optimum depend on the error propagation (of the inner SA) between the two Q-estimators. We then develop a bound on the inner SA, integrate it into the bound on the outer SA as a noise term, and establish the final convergence bound. This is a very different yet more direct approach than that in Xiong et al. (2020), the latter of which captures the blockwise convergence by constructing two complicated block-wisely decreasing bounds for the two SAs. The sharpness of the bound also requires careful selection of the rescaled learning rates and proper usage of their properties. 1.2 RELATED WORK. Theory on double Q-learning: Double Q-learning was proposed and proved to converge asymptotically in Hasselt (2010). In Weng et al. (2020c), the authors explored the properties of the mean-square error of double Q-learning both in the tabular case and with linear function approximation, under the assumption that a unique optimal policy exists and the algorithm converges. The most relevant work to this paper is Xiong et al. (2020), which established the first finite-time convergence rate for tabular double Q-learning, with a polynomial learning rate.
This paper provides sharper finite-time convergence bounds for double Q-learning, which requires a different analysis approach. Tabular Q-learning and convergence under various learning rates: Proposed in Watkins & Dayan (1992) for finite state-action spaces, Q-learning has aroused great interest in its theoretical study. Its asymptotic convergence has been established in Tsitsiklis (1994); Jaakkola et al. (1994); Borkar & Meyn (2000); Melo (2001); Lee & He (2019) by requiring the learning rates to satisfy $\sum_{t=0}^{\infty}\alpha_t = \infty$ and $\sum_{t=0}^{\infty}\alpha_t^2 < \infty$. Another line of research focuses on the finite-time analysis of Q-learning under different choices of the learning rate. Szepesvári (1998) captured the first convergence rate of Q-learning using a linear learning rate (i.e., $\alpha_t = 1/t$). Under similar learning rates, Even-Dar & Mansour (2003) provided finite-time results for both synchronous and asynchronous Q-learning, with a convergence rate that is exponentially slow as a function of $\frac{1}{1-\gamma}$. Another popular choice is the polynomial learning rate, which has been studied for synchronous Q-learning in Wainwright (2019b) and for both synchronous and asynchronous Q-learning in Even-Dar & Mansour (2003). With this learning rate, however, the convergence rate still has a gap to the lower bound of $O(\frac{1}{\sqrt{T}})$ (Azar et al., 2013). To close this gap, a more sophisticated rescaled linear learning rate was introduced for synchronous Q-learning (Wainwright, 2019b; Chen et al., 2020) and asynchronous Q-learning (Qu & Wierman, 2020), yielding a better convergence rate. Finite-time bounds for Q-learning were also given with constant stepsizes (Beck & Srikant, 2012; Chen et al., 2020; Li et al., 2020). In this paper, we focus on the rescaled linear/constant learning rate and obtain sharper finite-time bounds for double Q-learning.
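The learning-rate schedules discussed above are easy to state concretely. Below is a minimal sketch; the particular values $\omega = 0.8$, $\gamma = 0.9$, and $\alpha = 0.1$ are illustrative choices rather than constants from any of the cited analyses, while the rescaled linear form matches the $\alpha_t = \frac{3}{3+(1-\gamma)t}$ schedule used for the synchronous result in this paper.

```python
def poly_lr(t, omega=0.8):
    # Polynomial learning rate alpha_t = 1 / t^omega with omega in (0, 1),
    # the schedule analyzed in Xiong et al. (2020).
    return 1.0 / t ** omega

def rescaled_linear_lr(t, gamma=0.9):
    # Rescaled linear learning rate alpha_t = 3 / (3 + (1 - gamma) * t),
    # the schedule this paper uses for synchronous double Q-learning.
    return 3.0 / (3.0 + (1.0 - gamma) * t)

def constant_lr(t, alpha=0.1):
    # Constant stepsize, as studied in e.g. Beck & Srikant (2012).
    return alpha

# Both decaying rates behave like 1/t asymptotically, but the rescaled
# linear rate stays much larger early on when 1 - gamma is small.
for t in (1, 10, 100, 1000):
    print(t, poly_lr(t), rescaled_linear_lr(t), constant_lr(t))
```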
Q-learning with function approximation: When the state-action space is considerably large or even infinite, the Q-function is usually approximated by a class of parameterized functions. In such a case, Q-learning has been shown not to converge in general (Baird, 1995). Strong assumptions are typically needed to establish the convergence of Q-learning with linear function approximation (Bertsekas & Tsitsiklis, 1996; Melo et al., 2008; Zou et al., 2019; Chen et al., 2019; Du et al., 2019; Yang & Wang, 2019; Jia et al., 2019; Weng et al., 2020a;b) or neural network approximation (Cai et al., 2019; Xu & Gu, 2019). The convergence analysis of double Q-learning with function approximation raises new technical challenges and can be an interesting topic for future study. 2 PRELIMINARIES ON DOUBLE Q-LEARNING. We consider a Markov decision process (MDP) over a finite state space $S$ and a finite action space $A$, with the total cardinality given by $D := |S||A|$. The transition kernel of the MDP is given by $P: S \times A \times S \to [0,1]$, denoted as $P(\cdot|s,a)$. We denote the random reward function at time $t$ as $R_t: S \times A \times S \mapsto [0, R_{\max}]$, with $\mathbb{E}[R_t(s,a,s')] = R_{sa}^{s'}$. A policy $\pi := \pi(\cdot|s)$ captures the conditional probability distribution over the action space given state $s \in S$. For a policy $\pi$, we define the Q-function $Q^\pi \in \mathbb{R}^{|S|\times|A|}$ as $$Q^\pi(s,a) := \mathbb{E}\Big[\sum_{t=1}^{\infty} \gamma^t R_t(s_t, a_t, s'_t) \,\Big|\, s_1 = s, a_1 = a\Big], \qquad (1)$$ where $\gamma \in (0,1)$ is the discount factor, $a_t \sim \pi(\cdot|s_t)$, and $s'_t \sim P(\cdot|s_t, a_t)$. Both vanilla Q-learning (Watkins & Dayan, 1992) and double Q-learning (Hasselt, 2010) aim to find the optimal Q-function $Q^*$, which is the unique fixed point of the Bellman operator $\mathcal{T}$ (Bertsekas & Tsitsiklis, 1996) given by $$\mathcal{T}Q(s,a) = \mathbb{E}_{s' \sim P(\cdot|s,a)}\Big[R_{sa}^{s'} + \gamma \max_{a' \in A} Q(s', a')\Big]. \qquad (2)$$
Note that the Bellman operator $\mathcal{T}$ is $\gamma$-contractive, i.e., it satisfies $\|\mathcal{T}Q - \mathcal{T}Q'\| \le \gamma \|Q - Q'\|$ under the supremum norm $\|Q\| := \max_{s,a} |Q(s,a)|$. The idea of double Q-learning is to keep two Q-tables (i.e., Q-function estimators) $Q^A$ and $Q^B$, and randomly choose one Q-table to update at each iteration based on the Bellman operator computed from the other Q-table. We next describe the synchronous and asynchronous double Q-learning algorithms in more detail. Synchronous double Q-learning: Let $\{\beta_t\}_{t \ge 1}$ be a sequence of i.i.d. Bernoulli random variables satisfying $P(\beta_t = 0) = P(\beta_t = 1) = 0.5$. At each time $t$, $\beta_t = 0$ indicates that $Q^B$ is updated, and otherwise $Q^A$ is updated. The update at time $t \ge 1$ can be written in a compact form as $$Q^A_{t+1}(s,a) = (1 - \alpha_t \beta_t) Q^A_t(s,a) + \alpha_t \beta_t \big(R_t(s,a,s') + \gamma Q^B_t(s', a^*)\big),$$ $$Q^B_{t+1}(s,a) = (1 - \alpha_t (1-\beta_t)) Q^B_t(s,a) + \alpha_t (1-\beta_t) \big(R_t(s,a,s') + \gamma Q^A_t(s', b^*)\big), \qquad (3)$$ for all $(s,a) \in S \times A$, where $s'$ is sampled independently for each $(s,a)$ via $s' \sim P(\cdot|s,a)$, $a^* = \arg\max_{a \in A} Q^A(s', a)$, $b^* = \arg\max_{a \in A} Q^B(s', a)$, and $\alpha_t$ is the learning rate. Note that the rewards for the updates of $Q^A_{t+1}$ and $Q^B_{t+1}$ are the same copy of $R_t$. Asynchronous double Q-learning: Different from synchronous double Q-learning, at each iteration the asynchronous version samples only one state-action pair at which to update the chosen Q-estimator. That is, at time $t$, only the chosen Q-estimator and its value at the sampled state-action pair $(s_t, a_t)$ are updated. We model this by introducing the indicator function $\tau_t(s,a) = \mathbb{1}\{(s_t, a_t) = (s,a)\}$.
Then the update at time $t \ge 1$ of asynchronous double Q-learning can be written compactly as $$Q^A_{t+1}(s,a) = (1 - \alpha_t \tau_t(s,a) \beta_t) Q^A_t(s,a) + \alpha_t \tau_t(s,a) \beta_t \big(R_t + \gamma Q^B_t(s', a^*)\big),$$ $$Q^B_{t+1}(s,a) = (1 - \alpha_t \tau_t(s,a) (1-\beta_t)) Q^B_t(s,a) + \alpha_t \tau_t(s,a) (1-\beta_t) \big(R_t + \gamma Q^A_t(s', b^*)\big), \qquad (4)$$ for all $(s,a) \in S \times A$, where $R_t$ is evaluated as $R_t(s,a,s')$. In the above update rules (3) and (4), at each iteration only one of the two Q-tables is randomly chosen to be updated. The chosen Q-table generates a greedy optimal action, and the other Q-table is used for estimating the corresponding Bellman operator (i.e., evaluating the greedy action) to update the chosen table. Specifically, if $Q^A$ is chosen to be updated, we use $Q^A$ to obtain the optimal action $a^*$ and then estimate the corresponding Bellman operator using $Q^B$ to update $Q^A$. As shown in Hasselt (2010), $\mathbb{E}[Q^B(s', a^*)]$ is likely smaller than $\mathbb{E}[\max_a Q^A(s', a)]$, where the expectation is taken over the randomness of the reward for the same $(s,a,s')$ tuple. Such a two-estimator framework adopted by double Q-learning can effectively reduce the overestimation. Without loss of generality, we assume that $Q^A$ and $Q^B$ are initialized with the same value (usually both all-zero tables in practice). For both synchronous and asynchronous double Q-learning, it has been shown in Xiong et al. (2020) that either Q-estimator is uniformly bounded by $\frac{R_{\max}}{1-\gamma}$ throughout the learning process. Specifically, for either $i \in \{A, B\}$, we have $\|Q^i_t\| \le \frac{R_{\max}}{1-\gamma}$ and $\|Q^i_t - Q^*\| \le \frac{2R_{\max}}{1-\gamma} =: V_{\max}$ for all $t \ge 1$. This boundedness property will be useful in our finite-time analysis. | This paper provides a sharper analysis of the finite-time convergence rate of the double Q-learning algorithm. The authors provide bounds for the synchronous and asynchronous settings and use a more refined learning rate of $a/(b+t)$.
It is shown that with such a step-size rule, a sharper convergence rate than that of (Xiong et al., 2020) can be obtained. | SP:5aefbc73adb97f8ad3aa65edab1b38e96b9e8f3b |
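To make the synchronous update rule (3) concrete, here is a minimal runnable sketch on a small randomly generated MDP. Everything about the instance (4 states, 2 actions, $\gamma = 0.8$, deterministic mean rewards given $(s, a, s')$, 20,000 iterations, the seed) is an illustrative assumption rather than a setup from the paper; the update itself follows (3) with the rescaled linear rate $\alpha_t = \frac{3}{3+(1-\gamma)t}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random MDP: P[s, a] is a distribution over next states s',
# and R[s, a, s'] is the (deterministic) mean reward for that transition.
nS, nA, gamma = 4, 2, 0.8
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
R = rng.uniform(0.0, 1.0, size=(nS, nA, nS))

QA = np.zeros((nS, nA))  # Q^A and Q^B start from the same (all-zero) table
QB = np.zeros((nS, nA))

for t in range(1, 20001):
    alpha = 3.0 / (3.0 + (1.0 - gamma) * t)  # rescaled linear learning rate
    beta = rng.integers(0, 2)                # fair coin: beta = 1 updates Q^A
    for s in range(nS):                      # synchronous: visit every (s, a)
        for a in range(nA):
            s2 = rng.choice(nS, p=P[s, a])   # sample s' ~ P(.|s, a)
            r = R[s, a, s2]
            if beta == 1:
                a_star = QA[s2].argmax()     # greedy action chosen by Q^A ...
                QA[s, a] += alpha * (r + gamma * QB[s2, a_star] - QA[s, a])
            else:
                b_star = QB[s2].argmax()     # ... and evaluated by the other table
                QB[s, a] += alpha * (r + gamma * QA[s2, b_star] - QB[s, a])

# Reference Q*: exact value iteration using the known P and R.
Q = np.zeros((nS, nA))
for _ in range(1000):
    Q = (P * (R + gamma * Q.max(axis=1)[None, None, :])).sum(axis=2)
print("max |Q^A - Q*| =", np.abs(QA - Q).max())
print("max |Q^B - Q*| =", np.abs(QB - Q).max())
```

Exact value iteration on the mean MDP supplies the reference $Q^*$, so the printed gaps show both tables converging toward the same fixed point of the Bellman operator.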
Double Q-learning: New Analysis and Sharper Finite-time Bound | Double Q-learning (Hasselt, 2010) has gained significant success in practice due to its effectiveness in overcoming the overestimation issue of Q-learning. However, theoretical understanding of double Q-learning is rather limited, and the only existing finite-time analysis was recently established in Xiong et al. (2020) under a polynomial learning rate. This paper analyzes the more challenging case of a rescaled linear/constant learning rate, for which the previous method does not appear to be applicable. We develop new analytical tools that achieve an order-level better finite-time convergence rate than the previously established result. Specifically, we show that synchronous double Q-learning attains an $\epsilon$-accurate global optimum with a time complexity of $\Omega(\frac{\ln D}{(1-\gamma)^7 \epsilon^2})$, and the asynchronous algorithm attains a time complexity of $\tilde{\Omega}(\frac{L}{(1-\gamma)^7 \epsilon^2})$, where $D$ is the cardinality of the state-action space, $\gamma$ is the discount factor, and $L$ is a parameter related to the sampling strategy for asynchronous double Q-learning. These results improve the order-level dependence of the convergence rate on all major parameters ($\epsilon$, $1-\gamma$, $D$, $L$) provided in Xiong et al. (2020). The new analysis in this paper presents a more direct and succinct approach for characterizing the finite-time convergence rate of double Q-learning. 1 INTRODUCTION. Double Q-learning, proposed in Hasselt (2010), is a model-free reinforcement learning (RL) algorithm widely used in practice for searching for an optimal policy (Zhang et al., 2018a;b; Hessel et al., 2018). Compared to the vanilla Q-learning proposed in Watkins & Dayan (1992), double Q-learning uses two Q-estimators whose roles are randomly selected at each iteration, respectively for estimating the maximum Q-function value and updating the Q-function.
| This paper provides a new theoretical analysis of double Q-learning in the tabular case. The analysis improves over the previous result of Xiong et al., which assumes a polynomial learning rate.
This paper considers a rescaled linear learning rate, and the sample complexity has better dependency on $1/\epsilon$. The improvement comes from a better characterization of the error dynamics. | SP:5aefbc73adb97f8ad3aa65edab1b38e96b9e8f3b |
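The overestimation gap that motivates the two-estimator design, i.e. $\mathbb{E}[Q^B(s', a^*)] \le \mathbb{E}[\max_a Q^A(s', a)]$, is easy to reproduce numerically. In the sketch below all true action values are 0 and each estimator observes the truth plus i.i.d. Gaussian noise; the noise model and the action count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten actions, all with true value 0; each estimator sees the truth plus noise.
n_trials, n_actions = 100_000, 10
qa = rng.normal(0.0, 1.0, size=(n_trials, n_actions))  # estimator A
qb = rng.normal(0.0, 1.0, size=(n_trials, n_actions))  # independent estimator B

# Single-estimator target: max over A's own noisy values (biased upward).
single = qa.max(axis=1).mean()
# Double-estimator target: argmax chosen by A, value read off from B.
double = qb[np.arange(n_trials), qa.argmax(axis=1)].mean()

print("single-estimator mean:", single)  # close to E[max of 10 N(0,1)] ~ 1.54
print("double-estimator mean:", double)  # close to 0
```

Vanilla Q-learning's max operator inherits the upward bias of the first average; decoupling action selection from action evaluation removes it in this setting, which is exactly the mechanism the two-table updates (3) and (4) implement.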
Double Q-learning: New Analysis and Sharper Finite-time Bound | Double Q-learning ( Hasselt , 2010 ) has gained significant success in practice due to its effectiveness in overcoming the overestimation issue of Q-learning . However , theoretical understanding of double Q-learning is rather limited and the only existing finite-time analysis was recently established in Xiong et al . ( 2020 ) under a polynomial learning rate . This paper analyzes the more challenging case with a rescaled linear/constant learning rate for which the previous method does not appear to be applicable . We develop new analytical tools that achieve an orderlevel better finite-time convergence rate than the previously established result . Specifically , we show that synchronous double Q-learning attains an -accurate global optimum with a time complexity of Ω ( lnD ( 1−γ ) 7 2 ) , and the asynchronous algorithm attains a time complexity of Ω̃ ( L ( 1−γ ) 7 2 ) , where D is the cardinality of the state-action space , γ is the discount factor , and L is a parameter related to the sampling strategy for asynchronous double Q-learning . These results improve the order-level dependence of the convergence rate on all major parameters ( , 1 − γ , D , L ) provided in Xiong et al . ( 2020 ) . The new analysis in this paper presents a more direct and succinct approach for characterizing the finite-time convergence rate of double Q-learning . 1 INTRODUCTION . Double Q-learning proposed in Hasselt ( 2010 ) is a widely used model-free reinforcement learning ( RL ) algorithm in practice for searching for an optimal policy ( Zhang et al. , 2018a ; b ; Hessel et al. , 2018 ) . Compared to the vanilla Q-learning proposed in Watkins & Dayan ( 1992 ) , double Q-learning uses two Q-estimators with their roles randomly selected at each iteration , respectively for estimating the maximum Q-function value and updating the Q-function . 
In this way , the overestimation of the action-value function in vanilla Q-learning can be effectively mitigated , especially when the reward is random or prone to errors ( Hasselt , 2010 ; Hasselt et al. , 2016 ; Xiong et al. , 2020 ) . Moreover , double Q-learning has been shown to have the desired performance in both finite state-action setting ( Hasselt , 2010 ) and infinite setting ( Hasselt et al. , 2016 ) where it successfully improved the performance of deep Q-network ( DQN ) , and thus inspired many variants ( Zhang et al. , 2017 ; Abed-alguni & Ottom , 2018 ) subsequently . In parallel to its empirical success in practice , the theoretical convergence properties of double Qlearning has also been explored . Its asymptotic convergence was first established in Hasselt ( 2010 ) . The asymptotic mean-square error for double Q-learning was studied in Weng et al . ( 2020c ) under the assumption that the algorithm converges to a unique optimal policy . Furthermore , in Xiong et al . ( 2020 ) , the finite-time convergence rate has been established for double Q-learning with a polynomial learning rate α = 1/tω , ω ∈ ( 0 , 1 ) . Under such a choice for the learning rate , they showed that double Q-learning attains an -accurate optimal Q-function at a time complexity approaching to but never reaching Ω ( 1 2 ) at the cost of an asymptotically large exponent on 1 1−γ . However , a polynomial learning rate typically does not offer the best possible convergence rate , as having been shown for RL algorithms that a so-called rescaled linear learning rate ( with a form of αt = ab+ct ) and a constant learning rate achieve a better convergence rate ( Bhandari et al. , 2018 ; Wainwright , 2019a ; b ; Chen et al. , 2020 ; Qu & Wierman , 2020 ) . Therefore , a natural question arises as follows : Can a rescaled linear learning rate or a constant learning rate improve the convergence rate of double Q-learning order-wisely ? 
If yes , does it also improve the dependence of the convergence rate on other important parameters of the Markov decision process ( MDP ) such as the discount factor and the cardinality of the state and action spaces ? The answer to the above question does not follow immediately from Xiong et al . ( 2020 ) , because the finite-time analysis framework in Xiong et al . ( 2020 ) does not handle such learning rates to yield a desirable result . This paper develops a novel analysis approach and provides affirmative answers to the above question . 1.1 OUR CONTRIBUTIONS . This paper establishes sharper finite-time bounds for double Q-learning with a rescaled linear/constant learning rate , which are orderwisely better than the existing bounds in Xiong et al . ( 2020 ) . We devise a different analysis approach from that in Xiong et al . ( 2020 ) , which is more capable of handling variants of double Q-learning . • For synchronous double Q-learning , where all state-action pairs are visited at each iteration , we apply a rescaled linear learning rate αt = 33+ ( 1−γ ) t and show that the algorithm can attain an -accurate global optimum with a time complexity of Ω ( lnD ( 1−γ ) 7 2 ) , where γ is the discount factor and D = |S||A| is the cardinality of the finite state-action space . As a comparison , for the dominated regime ( with relatively small γ ) , our result attains an -accurate optimal Qfunction with a time complexity Ω ( 1 2 ) , whereas the result in Xiong et al . ( 2020 ) ( see Table 1 ) does not exactly reach Ω ( 1 2 ) and its approaching to such an order ( η : = 1 − ω → 0 ) is at an additional cost of an asymptotically large exponent on 11−γ . For 1− γ dominated regime , our result improves on that in Xiong et al . ( 2020 ) ( which has been optimized in the dependence on 1− γ in Table 1 ) by O ( ( ln 11−γ ) 7 ) . 
• For asynchronous double Q-learning , where only one state-action pair is visited at each iteration , we obtain a time complexity of Ω̃ ( L ( 1−γ ) 7 2 ) , where L is a parameter related to the sampling strategy in Assumption 1 . As illustrated in Table 1 , our result improves upon that in Xiong et al . ( 2020 ) order-wisely in terms of its dependence on and 1− γ as well as on L by at least O ( L5 ) . Our analysis takes a different approach from that in Xiong et al . ( 2020 ) in order to handle the rescaled linear/constant learning rate . More specifically , to deal with a pair of nested stochastic approximation ( SA ) recursions , we directly establish the dependence bound of the error dynamics ( of the outer SA ) between the Q-estimator and the global optimum on the error propagation ( of the inner SA ) between the two Q-estimators . Then we develop a bound on the inner SA , integrate it into that on the outer SA as a noise term , and establish the final convergence bound . This is a very different yet more direct approach than that in Xiong et al . ( 2020 ) , the latter of which captures the blockwise convergence by constructing two complicated block-wisely decreasing bounds for the two SAs . The sharpness of the bound also requires careful selection of the rescaled learning rates and proper usage of their properties . 1.2 RELATED WORK . Theory on double Q-learning : Double Q-learning was proposed and proved to converge asymptotically in Hasselt ( 2010 ) . In Weng et al . ( 2020c ) , the authors explored the properties of mean-square errors for double Q-learning both in the tabular case and with linear function approximation , under the assumption that a unique optimal policy exists and the algorithm can converge . The most relevant work to this paper is Xiong et al . ( 2020 ) , which established the first finite-time convergence rate for tabular double Q-learning with a polynomial learning rate . 
This paper provides sharper finite-time convergence bounds for double Q-learning, which requires a different analysis approach. Tabular Q-learning and convergence under various learning rates: Proposed in Watkins & Dayan (1992) for finite state-action spaces, Q-learning has attracted great interest in its theoretical study. Its asymptotic convergence has been established in Tsitsiklis (1994); Jaakkola et al. (1994); Borkar & Meyn (2000); Melo (2001); Lee & He (2019) by requiring the learning rates to satisfy Σ_{t=0}^∞ αt = ∞ and Σ_{t=0}^∞ αt^2 < ∞. Another line of research focuses on the finite-time analysis of Q-learning under different choices of the learning rate. Szepesvári (1998) captured the first convergence rate of Q-learning using a linear learning rate (i.e., αt = 1/t). Under similar learning rates, Even-Dar & Mansour (2003) provided finite-time results for both synchronous and asynchronous Q-learning, with a convergence rate that is exponentially slow as a function of 1/(1−γ). Another popular choice is the polynomial learning rate, which has been studied for synchronous Q-learning in Wainwright (2019b) and for both synchronous and asynchronous Q-learning in Even-Dar & Mansour (2003). With this learning rate, however, the convergence rate still has a gap to the lower bound of O(1/√T) (Azar et al., 2013). To handle this, a more sophisticated rescaled linear learning rate was introduced for synchronous Q-learning (Wainwright, 2019b; Chen et al., 2020) and asynchronous Q-learning (Qu & Wierman, 2020), yielding a better convergence rate. Finite-time bounds for Q-learning have also been given with constant stepsizes (Beck & Srikant, 2012; Chen et al., 2020; Li et al., 2020). In this paper, we focus on the rescaled linear/constant learning rate and obtain sharper finite-time bounds for double Q-learning.
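The three learning-rate families discussed above differ only in how fast αt decays; a small sketch makes the gap visible. The parameter values below are illustrative choices, not taken from any of the cited analyses:

```python
def linear_lr(t):
    # alpha_t = 1/t, as studied in Szepesvari (1998) and Even-Dar & Mansour (2003)
    return 1.0 / t

def polynomial_lr(t, omega=0.8):
    # alpha_t = 1/t^omega with omega in (1/2, 1)
    return 1.0 / t ** omega

def rescaled_linear_lr(t, gamma=0.99, c=1.0):
    # alpha_t = c / (c + (1 - gamma) * t): the 1/t decay is damped by (1 - gamma)
    return c / (c + (1.0 - gamma) * t)

# At the same iteration, the rescaled rate is far larger when gamma is close
# to 1, which is what enables the sharper finite-time bounds.
t = 1000
rates = (linear_lr(t), polynomial_lr(t), rescaled_linear_lr(t))
```

With γ = 0.99 and t = 1000, the rescaled rate is about 0.09, versus roughly 0.004 for the polynomial rate and 0.001 for the plain linear rate.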
Q-learning with function approximation: When the state-action space is considerably large or even infinite, the Q-function is usually approximated by a class of parameterized functions. In such a case, Q-learning has been shown not to converge in general (Baird, 1995). Strong assumptions are typically needed to establish the convergence of Q-learning with linear function approximation (Bertsekas & Tsitsiklis, 1996; Melo et al., 2008; Zou et al., 2019; Chen et al., 2019; Du et al., 2019; Yang & Wang, 2019; Jia et al., 2019; Weng et al., 2020a;b) or neural network approximation (Cai et al., 2019; Xu & Gu, 2019). The convergence analysis of double Q-learning with function approximation raises new technical challenges and can be an interesting topic for future study. 2 PRELIMINARIES ON DOUBLE Q-LEARNING . We consider a Markov decision process (MDP) over a finite state space S and a finite action space A, with the total cardinality given by D := |S||A|. The transition kernel of the MDP is given by P : S × A × S → [0, 1], denoted P(·|s, a). We denote the random reward function at time t as Rt : S × A × S → [0, Rmax], with E[Rt(s, a, s′)] = R^{s′}_{sa}. A policy π := π(·|s) captures the conditional probability distribution over the action space given a state s ∈ S. For a policy π, we define the Q-function Q^π ∈ R^{|S|×|A|} as

Q^π(s, a) := E[ Σ_{t=1}^∞ γ^t Rt(st, at, s′_t) | s1 = s, a1 = a ],   (1)

where γ ∈ (0, 1) is the discount factor, at ∼ π(·|st), and s′_t ∼ P(·|st, at). Both vanilla Q-learning (Watkins & Dayan, 1992) and double Q-learning (Hasselt, 2010) aim to find the optimal Q-function Q*, which is the unique fixed point of the Bellman operator T (Bertsekas & Tsitsiklis, 1996), given by

T Q(s, a) = E_{s′∼P(·|s, a)}[ R^{s′}_{sa} + γ max_{a′∈A} Q(s′, a′) ].
(2) Note that the Bellman operator T is γ-contractive: it satisfies ‖T Q − T Q′‖ ≤ γ‖Q − Q′‖ under the supremum norm ‖Q‖ := max_{s,a} |Q(s, a)|. The idea of double Q-learning is to keep two Q-tables (i.e., Q-function estimators) Q^A and Q^B, and at each iteration to randomly choose one Q-table to update based on the Bellman operator computed from the other Q-table. We next describe the synchronous and asynchronous double Q-learning algorithms in more detail. Synchronous double Q-learning: Let {βt}_{t≥1} be a sequence of i.i.d. Bernoulli random variables satisfying P(βt = 0) = P(βt = 1) = 0.5. At each time t, βt = 0 indicates that Q^B is updated, and otherwise Q^A is updated. The update at time t ≥ 1 can be written in the compact form

Q^A_{t+1}(s, a) = (1 − αt βt) Q^A_t(s, a) + αt βt ( Rt(s, a, s′) + γ Q^B_t(s′, a*) ),
Q^B_{t+1}(s, a) = (1 − αt (1 − βt)) Q^B_t(s, a) + αt (1 − βt) ( Rt(s, a, s′) + γ Q^A_t(s′, b*) ),   (3)

for all (s, a) ∈ S × A, where s′ is sampled independently for each (s, a) via s′ ∼ P(·|s, a), a* = arg max_{a∈A} Q^A(s′, a), b* = arg max_{a∈A} Q^B(s′, a), and αt is the learning rate. Note that the rewards for the updates of both Q^A_{t+1} and Q^B_{t+1} are the same copy of Rt. Asynchronous double Q-learning: Different from synchronous double Q-learning, at each iteration the asynchronous version samples only one state-action pair to update the chosen Q-estimator. That is, at time t, only the chosen Q-estimator and its value at the sampled state-action pair (st, at) are updated. We model this by introducing the indicator function τt(s, a) = 1{(st, at) = (s, a)}.
Then the update at time t ≥ 1 of asynchronous double Q-learning can be written compactly as

Q^A_{t+1}(s, a) = (1 − αt τt(s, a) βt) Q^A_t(s, a) + αt τt(s, a) βt ( Rt + γ Q^B_t(s′, a*) ),
Q^B_{t+1}(s, a) = (1 − αt τt(s, a)(1 − βt)) Q^B_t(s, a) + αt τt(s, a)(1 − βt) ( Rt + γ Q^A_t(s′, b*) ),   (4)

for all (s, a) ∈ S × A, where Rt is evaluated as Rt(s, a, s′). In the update rules (3) and (4) above, at each iteration only one of the two Q-tables is randomly chosen to be updated. The chosen Q-table generates a greedy optimal action, and the other Q-table is used for estimating the corresponding Bellman operator (i.e., for evaluating the greedy action) to update the chosen table. Specifically, if Q^A is chosen to be updated, we use Q^A to obtain the optimal action a* and then estimate the corresponding Bellman operator using Q^B to update Q^A. As shown in Hasselt (2010), E[Q^B(s′, a*)] is likely smaller than E[max_a Q^A(s′, a)], where the expectation is taken over the randomness of the reward for the same (s, a, s′) tuple. Such a two-estimator framework, adopted by double Q-learning, can effectively reduce the overestimation. Without loss of generality, we assume that Q^A and Q^B are initialized with the same value (usually both all-zero tables in practice). For both synchronous and asynchronous double Q-learning, it has been shown in Xiong et al. (2020) that either Q-estimator is uniformly bounded by Rmax/(1−γ) throughout the learning process. Specifically, for either i ∈ {A, B}, we have ‖Q^i_t‖ ≤ Rmax/(1−γ) and ‖Q^i_t − Q*‖ ≤ 2Rmax/(1−γ) := Vmax for all t ≥ 1. This boundedness property will be useful in our finite-time analysis. | This paper studies the convergence rate of double Q-learning in the tabular setting. Both the synchronous and asynchronous double Q-learning algorithms are studied and analyzed.
The technical novelty of this paper seems limited -- possibly a direct combination of [Wainwright 2019a] and [Xiong et al 2020]. Moreover, the scope of this work also seems limited -- it only considers the tabular version of double Q-learning, with either a generative model (synchronous) or i.i.d. sampling (asynchronous), which is never used in practice. This seems to make the work appealing only to theorists. Yet even in terms of theory, the results for double Q-learning are pessimistic -- the rate is much worse than that of standard Q-learning. However, practical implementations of double Q-learning have demonstrated its advantage in correcting the over-estimation bias. Without numerical results or additional theory, such a gap makes one wonder whether the rates in this paper are tight. | SP:5aefbc73adb97f8ad3aa65edab1b38e96b9e8f3b
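To make the synchronous update rule (3) concrete, here is a toy implementation on a small MDP. This is an illustrative sketch only: the MDP, the constants in the learning rate, and the iteration count are our own choices, not the paper's experimental setup.

```python
import numpy as np

def sync_double_q(P, R, gamma, T, seed=0):
    """Synchronous double Q-learning following update rule (3).

    P[s, a]     : transition distribution over next states s2
    R[s, a, s2] : deterministic reward in [0, Rmax]
    At each iteration a fair coin beta_t picks which table to update;
    every (s, a) pair is updated with an independent next-state sample.
    """
    rng = np.random.default_rng(seed)
    S, A = P.shape[0], P.shape[1]
    QA = np.zeros((S, A))
    QB = np.zeros((S, A))
    for t in range(1, T + 1):
        alpha = 3.0 / (3.0 + (1.0 - gamma) * t)  # rescaled linear rate
        beta = rng.integers(0, 2)
        for s in range(S):
            for a in range(A):
                s2 = rng.choice(S, p=P[s, a])
                r = R[s, a, s2]
                if beta == 1:
                    # QA's greedy action, evaluated by QB
                    a_star = int(np.argmax(QA[s2]))
                    QA[s, a] += alpha * (r + gamma * QB[s2, a_star] - QA[s, a])
                else:
                    # QB's greedy action, evaluated by QA
                    b_star = int(np.argmax(QB[s2]))
                    QB[s, a] += alpha * (r + gamma * QA[s2, b_star] - QB[s, a])
    return QA, QB

# Tiny 2-state, 2-action MDP with Rmax = 1.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.8, 0.2]]])
R = np.full((2, 2, 2), 0.5)
QA, QB = sync_double_q(P, R, gamma=0.9, T=2000)
# Both estimators respect the uniform bound ||Q|| <= Rmax / (1 - gamma) = 10.
assert np.abs(QA).max() <= 10.0 and np.abs(QB).max() <= 10.0
```

Since every update is a convex combination of the old entry and a target bounded by Rmax + γ‖Q‖, the iterates never leave [0, Rmax/(1−γ)], matching the boundedness property quoted from Xiong et al. (2020).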
XLA: A Robust Unsupervised Data Augmentation Framework for Cross-Lingual NLP | 1 INTRODUCTION . Self-supervised learning in the form of pretrained language models ( LM ) has been the driving force in developing state-of-the-art natural language processing ( NLP ) systems in recent years . These methods typically follow two basic steps , where a supervised task-specific fine-tuning follows a large-scale LM pretraining ( Devlin et al. , 2019 ; Radford et al. , 2019 ) . However , getting annotated data for every target task in every target language is difficult , especially for low-resource languages . Recently , the pretrain-finetune paradigm has also been extended to multi-lingual setups to train effective multi-lingual models that can be used for zero-shot cross-lingual transfer . Jointly trained deep contextualized multi-lingual LMs like mBERT ( Devlin et al. , 2019 ) and XLM-R ( Conneau et al. , 2020 ) coupled with supervised fine-tuning in the source language have been quite successful in transferring linguistic and task knowledge from one language to another without using any task label in the target language . The joint pretraining with multiple languages allows these models to generalize across languages . Despite their effectiveness , recent studies ( Pires et al. , 2019 ; K et al. , 2020 ) have also highlighted one crucial limiting factor for successful cross-lingual transfer . They all agree that the cross-lingual generalization ability of the model is limited by the ( lack of ) structural similarity between the source and target languages . For example , for transferring mBERT from English , K et al . ( 2020 ) report about 23.6 % accuracy drop in Hindi ( structurally dissimilar ) compared to 9 % drop in Spanish ( structurally similar ) in cross-lingual natural language inference ( XNLI ) . 
The difficulty level of transfer is further exacerbated if the ( dissimilar ) target language is low-resourced , as the joint pretraining step may not have seen many instances from this language in the first place . In our experiments ( §4.2 ) , in cross-lingual NER ( XNER ) , we report F1 reductions of 28.3 % in Urdu and 30.4 % in Burmese for XLM-R , which is trained on a much larger multi-lingual dataset than mBERT . One attractive way to improve cross-lingual generalization is to perform data augmentation ( Simard et al. , 1998 ) , and train the model ( e.g. , XLM-R ) on examples that are similar but different from the labeled data in the source language . Formalized by the Vicinal Risk Minimization ( VRM ) principle ( Chapelle et al. , 2001 ) , such data augmentation methods have shown impressive results recently in computer vision ( Zhang et al. , 2018 ; Berthelot et al. , 2019 ; Li et al. , 2020a ) . These methods enlarge the support of the training distribution by generating new data points from a vicinity distribution around each training example . For images , the vicinity of a training image can be defined by a set of operations like rotation and scaling , or by linear mixtures of features and labels ( Zhang et al. , 2018 ) . However , when it comes to text , such methods have rarely been successful . The main reason is that unlike images , linguistic units ( e.g. , words , phrases ) are discrete and a smooth change in their embeddings may not result in a plausible linguistic unit that has similar meanings . In NLP , the most successful data augmentation method has so far been back-translation ( Sennrich et al. , 2016 ) which generates paraphrases of an input sentence through round-trip translations . However , it requires parallel data to train effective machine translation systems , acquiring which can be more expensive for low-resource languages than annotating the target language data with task labels . 
Furthermore, back-translation is only applicable in a supervised setup and to tasks where it is possible to find the alignments between the original labeled entities and the back-translated entities, such as in question answering (Yu et al., 2018; Dong et al., 2017). In this work, we propose XLA, a robust unsupervised cross-lingual augmentation framework for improving the cross-lingual generalization of multilingual LMs. XLA augments data from the unlabeled training examples in the target language as well as from the virtual input samples (sentences) generated from the vicinity distribution of the source and target language sentences. With the augmented data, it performs simultaneous self-learning with an effective distillation strategy to learn a strongly adapted cross-lingual model from noisy (pseudo) labels for the target language task. We propose novel ways to generate virtual sentences using a multilingual masked LM (Conneau et al., 2020), and obtain reliable task labels by simultaneous multilingual co-training. This co-training employs a two-stage co-distillation process to ensure robust transfer to dissimilar and/or low-resource languages. We validate the effectiveness and robustness of XLA through extensive experiments on three different zero-resource cross-lingual transfer tasks -- XNER, XNLI, and PAWS-X -- which pose different sets of challenges. We experiment with many different language pairs (14 in total), comprising languages that are similar, dissimilar, or low-resourced. XLA yields impressive results on XNER, setting SoTA in all tested languages and outperforming the baselines by a good margin. In particular, the relative gains for XLA are higher for structurally dissimilar and/or low-resource languages, where the base model is weaker: 28.54%, 16.05%, and 9.25% absolute improvements for Urdu, Burmese, and Arabic, respectively.
For XNLI, with only 5% labeled data in the source, it achieves results comparable to the baseline that uses all the labeled data, and it surpasses the standard baseline by 2.55% on average when it uses all the labeled data in the source. We have similar findings on PAWS-X. We provide a comprehensive analysis of the factors that contribute to XLA's performance. 2 BACKGROUND . Contextual representation and cross-lingual transfer In recent years, significant progress has been made in learning contextual word representations and pretrained models. Notably, BERT (Devlin et al., 2019) pretrains a Transformer (Vaswani et al., 2017) encoder with a masked language model (MLM) objective, and uses the same model architecture to adapt to a new task. It also comes with a multilingual version, mBERT, which is trained jointly on 102 languages. RoBERTa (Liu et al., 2019) extends BERT with improved training, while XLM (Lample and Conneau, 2019) extends mBERT with a causal LM and a translation LM (using parallel data) objective. Conneau et al. (2020) train the largest multilingual language model, XLM-R, with the RoBERTa framework. Despite the absence of any explicit cross-lingual supervision, mBERT and its variants have been shown to learn cross-lingual representations that generalize well across languages. Wu and Dredze (2019) and Pires et al. (2019) evaluate the zero-shot cross-lingual transferability of mBERT on several tasks and attribute its generalization capability to shared subword units. Pires et al. (2019) also found structural similarity (e.g., word order) to be another important factor for successful cross-lingual transfer. K et al. (2020), however, show that shared subwords make a minimal contribution; instead, the structural similarity between languages is more crucial for effective transfer (more in Appendix D). Vicinal risk minimization (VRM) Data augmentation supported by the VRM principle (Chapelle et al.
, 2001) can be an effective choice for achieving better out-of-distribution adaptation. In VRM, we minimize the empirical vicinal risk defined as L_v(θ) = (1/N) Σ_{n=1}^N l(f_θ(x̃_n), ỹ_n), where f_θ denotes the model parameterized by θ, and D̃ = {(x̃_n, ỹ_n)}_{n=1}^N is an augmented dataset constructed by sampling the vicinal distribution ϑ(x̃_i, ỹ_i | x_i, y_i) around each original training sample (x_i, y_i). Defining the vicinity is, however, quite challenging, as it requires drawing samples from a distribution without hurting their labels. Earlier methods apply simple rules such as rotation and scaling of images (Simard et al., 1998). Recent work (Zhang et al., 2018; Berthelot et al., 2019) shows impressive results in image classification with simple linear interpolation of data. However, to our knowledge, none of these methods has so far been successful in NLP due to the discrete nature of text.1 LM-based supervised augmentation Recently, a number of data-augmentation methods have been proposed using contextualized LMs like BERT, e.g., Contextual Augmentation (Kobayashi, 2018), Conditional BERT (Wu et al., 2018), and AUG-BERT (Wu et al., 2018). These approaches use a constrained augmentation method that alters a pretrained LM into a label-conditional LM for a specific task, which means they update the parameters of the pretrained LM using the task labels. 3 XLA FRAMEWORK . While recent cross-lingual transfer learning efforts have relied almost exclusively on multi-lingual pretraining and zero-shot transfer of a fine-tuned source model, there is great potential for more elaborate methods that can leverage unlabeled data better. Motivated by this, we present XLA, our unsupervised data augmentation framework for zero-resource cross-lingual task adaptation. Figure 1 gives an overview of XLA.
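The empirical vicinal risk discussed in the background above has a canonical instance in mixup (Zhang et al., 2018), which defines the vicinity by linear interpolation of features and labels. A minimal sketch (in practice the interpolation weight lam would be drawn from a Beta distribution):

```python
def mixup(x1, y1, x2, y2, lam):
    """Vicinal example (x~, y~) by linear interpolation of two training
    samples, as in mixup (Zhang et al., 2018)."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# Interpolating two one-hot-labeled points yields a soft-labeled point.
x, y = mixup([1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], 0.7)
# x and y are both approximately [0.7, 0.3].
# For text this rarely works: an interpolated word embedding need not
# correspond to any real word, which is exactly the difficulty noted above.
```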
Let Ds = (Xs, Ys) and Dt = (Xt) denote the training data for a source language s and a target language t, respectively. XLA augments data from various origins at different stages of training. In the initial stage (epoch 1), it uses the augmented training samples from the target language (D′t) along with the original source data (Ds). In later stages (epochs 2-3), it uses virtual (vicinal) sentences generated from the vicinity distribution of source and target examples, ϑ(x̃_n^s | x_n^s) and ϑ(x̃_n^t | x_n^t), where x_n^s ∼ Xs and x_n^t ∼ Xt. It performs self-training on the augmented data to acquire the corresponding pseudo labels. To avoid the confirmation bias of self-training, where the model accumulates its own errors, it simultaneously trains three task models to generate virtual training data through data augmentation and filters potential label noise via multi-epoch co-teaching (Zhou and Li, 2005). In each epoch, the co-teaching process first performs co-distillation, where two peer task models are used to select "reliable" training examples to train the third model. The selected samples with pseudo labels are then added to the target task model's training data by taking the agreement of the other two models, a process we refer to as co-guessing. The co-distillation and co-guessing mechanisms ensure the robustness of XLA to out-of-domain distributions that can occur in a multilingual setup, e.g., due to a structurally dissimilar and/or low-resource target language. Algorithm 1 gives the pseudocode of the overall training method. Each of the task models in XLA is an instance of XLM-R fine-tuned on the source language task (e.g., English NER), whereas the pretrained masked LM parameterized by θmlm (i.e., before fine-tuning) is used to define the vicinity distribution ϑ(x̃_n | x_n, θmlm) around each selected example x_n. 1Considering papers that have been published (or accepted) through peer review.
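The co-distillation and co-guessing steps described above amount to confidence-based selection followed by agreement filtering between two peer models. The sketch below shows one plausible shape of this logic; the fixed confidence threshold and the toy predictors are our own simplifications (the paper uses an epoch-dependent distillation factor η_e), so treat the function bodies as hypothetical:

```python
def distil(examples, predict, threshold):
    """Keep examples whose pseudo-label confidence exceeds `threshold`.

    `predict` maps an input to a (label, confidence) pair. A stand-in for
    the DISTIL step; the real selection rule is not fixed here.
    """
    kept = []
    for x in examples:
        label, conf = predict(x)
        if conf >= threshold:
            kept.append((x, label))
    return kept

def agreement(d1, d2):
    """Co-guessing: keep a pseudo-labeled example only when both peer
    models selected it and assigned the same label."""
    labels2 = dict(d2)
    return [(x, y) for x, y in d1 if labels2.get(x) == y]

# Two hypothetical peer models that disagree on one example.
pred_a = lambda x: ("POS", 0.9) if x != "x3" else ("NEG", 0.8)
pred_b = lambda x: ("POS", 0.85)
xs = ["x1", "x2", "x3"]
d = agreement(distil(xs, pred_a, 0.7), distil(xs, pred_b, 0.7))
# d == [("x1", "POS"), ("x2", "POS")]  -- x3 is filtered out by disagreement
```

The filtering discards exactly the examples on which the peers disagree, which is how self-training errors are kept from propagating into the third model.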
There has been some concurrent work that uses pretrained LMs like BERT to craft adversarial examples (Li et al., 2020b). Although relevant, these methods have a different objective than ours, and none of them is cross- or multi-lingual.

Algorithm 1 XLA: a robust unsupervised data augmentation framework for cross-lingual NLP
Input: source (s) and target (t) language datasets Ds = (Xs, Ys), Dt = (Xt); task models θ^(1), θ^(2), θ^(3); pretrained masked LM θmlm; mask ratio P; diversification factor δ; sampling factor α; and distillation factor η
Output: models trained on augmented data
1:  θ^(1), θ^(2), θ^(3) = WARMUP(Ds, θ^(1), θ^(2), θ^(3))    ▷ warm up with confidence penalty
2:  for e ∈ [1:3] do    ▷ e denotes the epoch
3:    for k ∈ {1, 2, 3} do
4:      X_t^(k), Y_t^(k) = DISTIL(Xt, η_e, θ^(k))    ▷ infer and select target training data for augmentation
5:      for j ∈ {1, 2, 3} do
6:        if k == j then continue
7:        /* source language data augmentation */
8:        X̃s = GEN-LM(Xs, θmlm, P, δ)    ▷ vicinal example generation
9:        X_s^(k), Y_s^(k) = DISTIL(X̃s, η_e, θ^(k));  X_s^(j), Y_s^(j) = DISTIL(X̃s, η_e, θ^(j))
10:       D̃s = AGREEMENT(D_s^(k) = (X_s^(k), Y_s^(k)), D_s^(j) = (X_s^(j), Y_s^(j)))
11:       /* target language data augmentation (no vicinity) */
12:       X_t^(j), Y_t^(j) = DISTIL(Xt, η_e, θ^(j))
13:       D′t = AGREEMENT(D_t^(k) = (X_t^(k), Y_t^(k)), D_t^(j) = (X_t^(j), Y_t^(j)))    ▷ see line 4
14:       /* target language data augmentation */
15:       X̃t = GEN-LM(Xt, θmlm, P, δ)    ▷ vicinal example generation
16:       X_t^(k), Y_t^(k) = DISTIL(X̃t, η_e, θ^(k));  X_t^(j), Y_t^(j) = DISTIL(X̃t, η_e, θ^(j))
17:       D̃t = AGREEMENT(D_t^(k) = (X_t^(k), Y_t^(k)), D_t^(j) = (X_t^(j), Y_t^(j)))
18:       /* train new models on augmented data */
19:       for l ∈ {1, 2, 3} do
20:         if l ≠ j and l ≠ k then
21:           with sampling factor α, train θ^(l) on D    ▷ train progressively
22:           where D = { Ds·1(e ∈ {1, 3}) ∪ D′t·1(e ∈ {1, 3}) ∪ D̃s·1(e = 3) ∪ D̃t·1(e ∈ {2, 3}) }
23: Return {θ^(1), θ^(2), θ^(3)}

Although the data augmentation methods proposed as Contextual Augmentation (Kobayashi, 2018), Conditional BERT (Wu et al., 2018), and AUG-BERT (Wu et al., 2018) also use a pretrained masked LM, there are some fundamental differences between our method and these approaches. Unlike these approaches, our vicinity-based LM augmentation is purely unsupervised, and we do not perform any fine-tuning of the pretrained vicinity model (θmlm). The vicinity model in XLA is a disjoint pretrained entity whose weights are not trained on any task objective. This disjoint characteristic gives our framework the flexibility to replace θmlm even with a better monolingual LM for a specific target language, which in turn makes XLA extendable to stronger LMs that may come in the future. In the following, we describe the steps in Algorithm 1. | The paper presents a data augmentation framework for zero-shot cross-lingual transfer learning. The framework uses different types of data (labeled source data, unlabeled source data, automatically generated augmented data) for training a model for the target language. Experiments are conducted on three different tasks: Named Entity Recognition (NER), Natural Language Inference (NLI), and paraphrase identification (PAWS). The approach combines multiple components, namely self-training, augmented sentence generation, and a confidence penalty. | SP:937d3a7858616e03c7d95fed51082dff234198f4
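The GEN-LM step in Algorithm 1 masks a fraction P of each sentence's tokens and lets the pretrained masked LM (XLM-R in the paper) propose replacements, producing δ vicinal variants. The sketch below uses a toy synonym table as a stand-in for the masked LM's sampling distribution, so the sampler and function names are our own assumptions:

```python
import random

def gen_lm(tokens, sample_replacement, mask_ratio, n_variants, seed=0):
    """Sketch of GEN-LM: generate vicinal sentences by masking a fraction
    `mask_ratio` of positions and letting a masked LM propose fillers.

    `sample_replacement(tokens, i)` stands in for sampling from the
    pretrained masked LM's distribution at position i; `n_variants`
    plays the role of the diversification factor delta.
    """
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        k = max(1, int(mask_ratio * len(tokens)))
        positions = rng.sample(range(len(tokens)), k)
        new_tokens = list(tokens)
        for i in positions:
            new_tokens[i] = sample_replacement(tokens, i)
        variants.append(new_tokens)
    return variants

# Toy replacement model: a fixed synonym table instead of a real masked LM.
synonyms = {"movie": "film", "great": "excellent", "was": "seemed"}
sampler = lambda toks, i: synonyms.get(toks[i], toks[i])
out = gen_lm(["the", "movie", "was", "great"], sampler, 0.25, 2)
```

Because the replacements come from the LM's own vicinity distribution rather than from interpolation, the generated sentences remain discrete, plausible token sequences, which is what makes this form of VRM workable for text.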
XLA: A Robust Unsupervised Data Augmentation Framework for Cross-Lingual NLP | 1 INTRODUCTION . Self-supervised learning in the form of pretrained language models ( LM ) has been the driving force in developing state-of-the-art natural language processing ( NLP ) systems in recent years . These methods typically follow two basic steps , where a supervised task-specific fine-tuning follows a large-scale LM pretraining ( Devlin et al. , 2019 ; Radford et al. , 2019 ) . However , getting annotated data for every target task in every target language is difficult , especially for low-resource languages . Recently , the pretrain-finetune paradigm has also been extended to multi-lingual setups to train effective multi-lingual models that can be used for zero-shot cross-lingual transfer . Jointly trained deep contextualized multi-lingual LMs like mBERT ( Devlin et al. , 2019 ) and XLM-R ( Conneau et al. , 2020 ) coupled with supervised fine-tuning in the source language have been quite successful in transferring linguistic and task knowledge from one language to another without using any task label in the target language . The joint pretraining with multiple languages allows these models to generalize across languages . Despite their effectiveness , recent studies ( Pires et al. , 2019 ; K et al. , 2020 ) have also highlighted one crucial limiting factor for successful cross-lingual transfer . They all agree that the cross-lingual generalization ability of the model is limited by the ( lack of ) structural similarity between the source and target languages . For example , for transferring mBERT from English , K et al . ( 2020 ) report about 23.6 % accuracy drop in Hindi ( structurally dissimilar ) compared to 9 % drop in Spanish ( structurally similar ) in cross-lingual natural language inference ( XNLI ) . 
The difficulty level of transfer is further exacerbated if the ( dissimilar ) target language is low-resourced , as the joint pretraining step may not have seen many instances from this language in the first place . In our experiments ( §4.2 ) , in cross-lingual NER ( XNER ) , we report F1 reductions of 28.3 % in Urdu and 30.4 % in Burmese for XLM-R , which is trained on a much larger multi-lingual dataset than mBERT . One attractive way to improve cross-lingual generalization is to perform data augmentation ( Simard et al. , 1998 ) , and train the model ( e.g. , XLM-R ) on examples that are similar but different from the labeled data in the source language . Formalized by the Vicinal Risk Minimization ( VRM ) principle ( Chapelle et al. , 2001 ) , such data augmentation methods have shown impressive results recently in computer vision ( Zhang et al. , 2018 ; Berthelot et al. , 2019 ; Li et al. , 2020a ) . These methods enlarge the support of the training distribution by generating new data points from a vicinity distribution around each training example . For images , the vicinity of a training image can be defined by a set of operations like rotation and scaling , or by linear mixtures of features and labels ( Zhang et al. , 2018 ) . However , when it comes to text , such methods have rarely been successful . The main reason is that unlike images , linguistic units ( e.g. , words , phrases ) are discrete and a smooth change in their embeddings may not result in a plausible linguistic unit that has similar meanings . In NLP , the most successful data augmentation method has so far been back-translation ( Sennrich et al. , 2016 ) which generates paraphrases of an input sentence through round-trip translations . However , it requires parallel data to train effective machine translation systems , acquiring which can be more expensive for low-resource languages than annotating the target language data with task labels . 
Furthermore , back-translation is only applicable in a supervised setup and to tasks where it is possible to find the alignments between the original labeled entities and the back-translated entities , such as in question answering ( Yu et al. , 2018 ; Dong et al. , 2017 ) . In this work , we propose XLA , a robust unsupervised cross-lingual augmentation framework for improving cross-lingual generalization of multilingual LMs . XLA augments data from the unlabeled training examples in the target language as well as from the virtual input samples ( sentences ) generated from the vicinity distribution of the source and target language sentences . With the augmented data , it performs simultaneous self-learning with an effective distillation strategy to learn a strongly adapted cross-lingual model from noisy ( pseudo ) labels for the target language task . We propose novel ways to generate virtual sentences using a multilingual masked LM ( Conneau et al. , 2020 ) , and get reliable task labels by simultaneous multilingual co-training . This co-training employs a two-stage co-distillation process to ensure robust transfer to dissimilar and/or low-resource languages . We validate the effectiveness and robustness of XLA by performing extensive experiments on three different zero-resource cross-lingual transfer tasks – XNER , XNLI , and PAWS-X , which posit different sets of challenges . We have experimented with many different language pairs ( 14 in total ) comprising languages that are similar/dissimilar/low-resourced . XLA yields impressive results on XNER , setting SoTA in all tested languages outperforming the baselines by a good margin . In particular , the relative gains for XLA are higher for structurally dissimilar and/or low-resource languages , where the base model is weaker : 28.54 % , 16.05 % , and 9.25 % absolute improvements for Urdu , Burmese , and Arabic , respectively . 
For XNLI , with only 5 % labeled data in the source , it gets comparable results to the baseline that uses all the labeled data , and surpasses the standard baseline by 2.55 % on average when it uses all the labeled data in the source . We also have similar findings in PAWS-X . We provide a comprehensive analysis of the factors that contribute to XLA ’ s performance . 2 BACKGROUND . Contextual representation and cross-lingual transfer In recent years , significant progress has been made in learning contextual word representations and pretrained models . Notably , BERT ( Devlin et al. , 2019 ) pretrains a Transformer ( Vaswani et al. , 2017 ) encoder with a masked language model ( MLM ) objective , and uses the same model architecture to adapt to a new task . It also comes with a multilingual version mBERT , which is trained jointly on 102 languages . RoBERTa ( Liu et al. , 2019 ) extends BERT with improved training , while XLM ( Lample and Conneau , 2019 ) extends mBERT with a conditional LM and a translation LM ( using parallel data ) objectives . Conneau et al . ( 2020 ) train the largest multilingual language model XLM-R with RoBERTa framework . Despite any explicit cross-lingual supervision , mBERT and its variants have been shown to learn cross-lingual representations that generalize well across languages . Wu and Dredze ( 2019 ) and Pires et al . ( 2019 ) evaluate the zero-shot cross-lingual transferability of mBERT on several tasks and attribute its generalization capability to shared subword units . Pires et al . ( 2019 ) also found structural similarity ( e.g. , word order ) to be another important factor for successful cross-lingual transfer . K et al . ( 2020 ) , however , show that the shared subword has a minimal contribution ; instead , the structural similarity between languages is more crucial for effective transfer ( more in Appendix D ) . Vicinal risk minimization ( VRM ) Data augmentation supported by the VRM principle ( Chapelle et al. 
, 2001 ) can be an effective choice for achieving better out-of-distribution adaptation . In VRM , we minimize the empirical vicinal risk defined as : Lv ( θ ) = 1N ∑N n=1 l ( fθ ( x̃n ) , ỹn ) , where fθ denotes the model parameterized by θ , and D̃ = { ( x̃n , ỹn ) } Nn=1 is an augmented dataset constructed by sampling the vicinal distribution ϑ ( x̃i , ỹi|xi , yi ) around the original training sample ( xi , yi ) . Defining vicinity is however quite challenging as it requires the extraction of samples from a distribution without hurting their labels . Earlier methods apply simple rules like rotation and scaling of images ( Simard et al. , 1998 ) . Recent work ( Zhang et al. , 2018 ; Berthelot et al. , 2019 ) show impressive results in image classification with simple linear interpolation of data . However , to our knowledge , none of these methods have so far been successful in NLP due to the discrete nature of texts.1 LM-based supervised augmentation Recently , a number of data-augmentation methods have been proposed using contextualized LMs like BERT , e.g. , Contextual Augmentation ( Kobayashi , 2018 ) , Conditional BERT ( Wu et al. , 2018 ) , and AUG-BERT ( Wu et al. , 2018 ) . These approaches use a constrained augmentation method which alters a pretrained LM to a label-conditional LM for a specific task . This means these methods update the parameters of the pretrained LM using the labels . 3 XLA FRAMEWORK . While recent cross-lingual transfer learning efforts have relied almost exclusively on multi-lingual pretraining and zero-shot transfer of a fine-tuned source model , there is a great potential for more elaborate methods that can leverage the unlabeled data better . Motivated by this , we present XLA - our unsupervised data augmentation framework for zero-resource cross-lingual task adaptation . Figure 1 gives an overview of XLA . 
Let D_s = ( X_s , Y_s ) and D_t = ( X_t ) denote the training data for a source language s and a target language t , respectively . XLA augments data from various origins at different stages of training . In the initial stage ( epoch 1 ) , it uses the augmented training samples from the target language ( D′_t ) along with the original source ( D_s ) . In later stages ( epochs 2-3 ) , it uses virtual ( vicinal ) sentences generated from the vicinity distribution of source and target examples : ϑ ( x̃_n^s | x_n^s ) and ϑ ( x̃_n^t | x_n^t ) , where x_n^s ∼ X_s and x_n^t ∼ X_t . It performs self-training on the augmented data to acquire the corresponding pseudo labels . To avoid confirmation bias with self-training , where the model accumulates its own errors , it simultaneously trains three task models to generate virtual training data through data augmentation and filtering of potential label noise via multi-epoch co-teaching ( Zhou and Li , 2005 ) . In each epoch , the co-teaching process first performs co-distillation , where two peer task models are used to select “ reliable ” training examples to train the third model . The selected samples with pseudo labels are then added to the target task model ’ s training data by taking the agreement from the other two models , a process we refer to as co-guessing . The co-distillation and co-guessing mechanisms ensure robustness of XLA to out-of-domain distributions that can occur in a multilingual setup , e.g. , due to a structurally dissimilar and/or low-resource target language . Algorithm 1 gives pseudocode for the overall training method . Each of the task models in XLA is an instance of XLM-R fine-tuned on the source language task ( e.g. , English NER ) , whereas the pretrained masked LM parameterized by θ_mlm ( i.e. , before fine-tuning ) is used to define the vicinity distribution ϑ ( x̃_n | x_n , θ_mlm ) around each selected example x_n . 1Considering papers that have been published ( or accepted ) through peer review .
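The agreement-based co-guessing step described above can be sketched as follows. This is an illustrative toy, not the paper's code: the two "peer models" are stand-in threshold classifiers rather than fine-tuned XLM-R instances, and the inputs are scalars for readability.

```python
# Sketch of co-guessing: keep an unlabeled example for the third model
# only when the two peer models agree on its pseudo label.

def agreement(preds_k, preds_j, X):
    """AGREEMENT: select (x, pseudo_label) pairs where the two peers concur."""
    selected = []
    for x, yk, yj in zip(X, preds_k, preds_j):
        if yk == yj:
            selected.append((x, yk))
    return selected

X_t = [0.2, 0.9, 0.4, 0.7]              # unlabeled target-language inputs (toy)
model_k = lambda x: int(x > 0.5)        # stand-in for peer model theta^(k)
model_j = lambda x: int(x > 0.3)        # stand-in for peer model theta^(j)

preds_k = [model_k(x) for x in X_t]
preds_j = [model_j(x) for x in X_t]
pseudo_data = agreement(preds_k, preds_j, X_t)
print(pseudo_data)  # only examples where both peers give the same label survive
```

The example at 0.4 is dropped because the two peers disagree on it, which is exactly the filtering that keeps label noise out of the third model's training data.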
There has been some concurrent work that uses pretrained LMs like BERT to craft adversarial examples ( Li et al. , 2020b ) . Although relevant , these methods have a different objective than ours , and none of them is cross- or multi-lingual .
Algorithm 1 XLA : a robust unsupervised data augmentation framework for cross-lingual NLP
Input : source ( s ) and target ( t ) language datasets : D_s = ( X_s , Y_s ) , D_t = ( X_t ) ; task models : θ^(1) , θ^(2) , θ^(3) ; pretrained masked LM θ_mlm ; mask ratio P ; diversification factor δ ; sampling factor α ; distillation factor η
Output : models trained on augmented data
1 : θ^(1) , θ^(2) , θ^(3) = WARMUP ( D_s , θ^(1) , θ^(2) , θ^(3) ) ▷ warm up with confidence penalty
2 : for e ∈ [ 1 : 3 ] do ▷ e denotes epoch
3 :   for k ∈ { 1 , 2 , 3 } do
4 :     X_t^(k) , Y_t^(k) = DISTIL ( X_t , η_e , θ^(k) ) ▷ infer and select target training data for augmentation
5 :     for j ∈ { 1 , 2 , 3 } do
6 :       if k == j then continue
7 :       /* source language data augmentation */
8 :       X̃_s = GEN-LM ( X_s , θ_mlm , P , δ ) ▷ vicinal example generation
9 :       X_s^(k) , Y_s^(k) = DISTIL ( X̃_s , η_e , θ^(k) ) ; X_s^(j) , Y_s^(j) = DISTIL ( X̃_s , η_e , θ^(j) )
10 :      D̃_s = AGREEMENT ( D_s^(k) = ( X_s^(k) , Y_s^(k) ) , D_s^(j) = ( X_s^(j) , Y_s^(j) ) )
11 :      /* target language data augmentation ( no vicinity ) */
12 :      X_t^(j) , Y_t^(j) = DISTIL ( X_t , η_e , θ^(j) )
13 :      D′_t = AGREEMENT ( D_t^(k) = ( X_t^(k) , Y_t^(k) ) , D_t^(j) = ( X_t^(j) , Y_t^(j) ) ) ▷ see line 4
14 :      /* target language data augmentation */
15 :      X̃_t = GEN-LM ( X_t , θ_mlm , P , δ ) ▷ vicinal example generation
16 :      X_t^(k) , Y_t^(k) = DISTIL ( X̃_t , η_e , θ^(k) ) ; X_t^(j) , Y_t^(j) = DISTIL ( X̃_t , η_e , θ^(j) )
17 :      D̃_t = AGREEMENT ( D_t^(k) = ( X_t^(k) , Y_t^(k) ) , D_t^(j) = ( X_t^(j) , Y_t^(j) ) )
18 :      /* train new models on augmented data */
19 :      for l ∈ { 1 , 2 , 3 } do
20 :        if l ≠ j and l ≠ k then
21 :          with sampling factor α , train θ^(l) on D ▷ train progressively
22 :          where D = { D_s · 1 ( e ∈ { 1 , 3 } ) ∪ D′_t · 1 ( e ∈ { 1 , 3 } ) ∪ D̃_s · 1 ( e = 3 ) ∪ D̃_t · 1 ( e ∈ { 2 , 3 } ) }
23 : Return { θ^(1) , θ^(2) , θ^(3) }
Although the data augmentation methods proposed in Contextual Augmentation ( Kobayashi , 2018 ) , Conditional BERT ( Wu et al. , 2018 ) , and AUG-BERT ( Wu et al. , 2018 ) also use a pretrained masked LM , there are some fundamental differences between our method and these approaches . Unlike these approaches , our vicinity-based LM augmentation is purely unsupervised and we do not perform any fine-tuning of the pretrained vicinity model ( θ_mlm ) . The vicinity model in XLA is a disjoint pretrained entity whose weights are not trained on any task objective . This disjoint characteristic gives our framework the flexibility to replace θ_mlm even with a better monolingual LM for a specific target language , which in turn makes XLA extendable to utilize stronger LMs that may come in the future . In the following , we describe the steps in Algorithm 1 . | The authors present an unsupervised data augmentation framework for cross-lingual NLP. Their method, called XLA, combines self-learning with co-learning and filtering. They generate additional synthetic examples by replacing words with predictions from a pretrained multilingual masked LM (taken from XLM Conneau 2020). The key contribution in this paper is how they get reliable labels by simultaneously co-training three student models and using them to filter examples for training each other to avoid confirmation bias. | SP:937d3a7858616e03c7d95fed51082dff234198f4 |
XLA: A Robust Unsupervised Data Augmentation Framework for Cross-Lingual NLP | 1 INTRODUCTION . Self-supervised learning in the form of pretrained language models ( LM ) has been the driving force in developing state-of-the-art natural language processing ( NLP ) systems in recent years . These methods typically follow two basic steps , where a supervised task-specific fine-tuning follows a large-scale LM pretraining ( Devlin et al. , 2019 ; Radford et al. , 2019 ) . However , getting annotated data for every target task in every target language is difficult , especially for low-resource languages . Recently , the pretrain-finetune paradigm has also been extended to multi-lingual setups to train effective multi-lingual models that can be used for zero-shot cross-lingual transfer . Jointly trained deep contextualized multi-lingual LMs like mBERT ( Devlin et al. , 2019 ) and XLM-R ( Conneau et al. , 2020 ) coupled with supervised fine-tuning in the source language have been quite successful in transferring linguistic and task knowledge from one language to another without using any task label in the target language . The joint pretraining with multiple languages allows these models to generalize across languages . Despite their effectiveness , recent studies ( Pires et al. , 2019 ; K et al. , 2020 ) have also highlighted one crucial limiting factor for successful cross-lingual transfer . They all agree that the cross-lingual generalization ability of the model is limited by the ( lack of ) structural similarity between the source and target languages . For example , for transferring mBERT from English , K et al . ( 2020 ) report about 23.6 % accuracy drop in Hindi ( structurally dissimilar ) compared to 9 % drop in Spanish ( structurally similar ) in cross-lingual natural language inference ( XNLI ) . 
The difficulty level of transfer is further exacerbated if the ( dissimilar ) target language is low-resourced , as the joint pretraining step may not have seen many instances from this language in the first place . In our experiments ( §4.2 ) , in cross-lingual NER ( XNER ) , we report F1 reductions of 28.3 % in Urdu and 30.4 % in Burmese for XLM-R , which is trained on a much larger multi-lingual dataset than mBERT . One attractive way to improve cross-lingual generalization is to perform data augmentation ( Simard et al. , 1998 ) , and train the model ( e.g. , XLM-R ) on examples that are similar but different from the labeled data in the source language . Formalized by the Vicinal Risk Minimization ( VRM ) principle ( Chapelle et al. , 2001 ) , such data augmentation methods have shown impressive results recently in computer vision ( Zhang et al. , 2018 ; Berthelot et al. , 2019 ; Li et al. , 2020a ) . These methods enlarge the support of the training distribution by generating new data points from a vicinity distribution around each training example . For images , the vicinity of a training image can be defined by a set of operations like rotation and scaling , or by linear mixtures of features and labels ( Zhang et al. , 2018 ) . However , when it comes to text , such methods have rarely been successful . The main reason is that unlike images , linguistic units ( e.g. , words , phrases ) are discrete and a smooth change in their embeddings may not result in a plausible linguistic unit that has similar meanings . In NLP , the most successful data augmentation method has so far been back-translation ( Sennrich et al. , 2016 ) which generates paraphrases of an input sentence through round-trip translations . However , it requires parallel data to train effective machine translation systems , acquiring which can be more expensive for low-resource languages than annotating the target language data with task labels . 
Furthermore , back-translation is only applicable in a supervised setup and to tasks where it is possible to find the alignments between the original labeled entities and the back-translated entities , such as in question answering ( Yu et al. , 2018 ; Dong et al. , 2017 ) . In this work , we propose XLA , a robust unsupervised cross-lingual augmentation framework for improving cross-lingual generalization of multilingual LMs . XLA augments data from the unlabeled training examples in the target language as well as from the virtual input samples ( sentences ) generated from the vicinity distribution of the source and target language sentences . With the augmented data , it performs simultaneous self-learning with an effective distillation strategy to learn a strongly adapted cross-lingual model from noisy ( pseudo ) labels for the target language task . We propose novel ways to generate virtual sentences using a multilingual masked LM ( Conneau et al. , 2020 ) , and get reliable task labels by simultaneous multilingual co-training . This co-training employs a two-stage co-distillation process to ensure robust transfer to dissimilar and/or low-resource languages . We validate the effectiveness and robustness of XLA by performing extensive experiments on three different zero-resource cross-lingual transfer tasks – XNER , XNLI , and PAWS-X , which posit different sets of challenges . We have experimented with many different language pairs ( 14 in total ) comprising languages that are similar/dissimilar/low-resourced . XLA yields impressive results on XNER , setting SoTA in all tested languages outperforming the baselines by a good margin . In particular , the relative gains for XLA are higher for structurally dissimilar and/or low-resource languages , where the base model is weaker : 28.54 % , 16.05 % , and 9.25 % absolute improvements for Urdu , Burmese , and Arabic , respectively . 
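The distillation strategy mentioned above, which selects reliable pseudo-labeled examples from noisy model predictions, can be sketched as follows. This is a hypothetical illustration: the exact use of the distillation factor η in the paper's DISTIL routine is an assumption here (interpreted as "keep the top η-fraction by confidence"), and the scores are made up.

```python
def distil(scores, eta):
    """Keep the top eta-fraction of unlabeled examples by model confidence.

    `scores` maps example index -> (pseudo_label, confidence); eta plays the
    role of the distillation factor (an assumed interpretation).
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1][1], reverse=True)
    n_keep = max(1, int(eta * len(ranked)))
    return {idx: label for idx, (label, _) in ranked[:n_keep]}

# Toy pseudo labels with confidences from a hypothetical task model.
scores = {0: ("POS", 0.95), 1: ("NEG", 0.55), 2: ("POS", 0.80), 3: ("NEG", 0.99)}
selected = distil(scores, eta=0.5)
print(selected)  # the two most confident pseudo-labeled examples
```

Raising η over epochs would progressively admit less confident examples, matching the staged use of augmented data described in the framework.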
For XNLI , with only 5 % labeled data in the source , it gets comparable results to the baseline that uses all the labeled data , and surpasses the standard baseline by 2.55 % on average when it uses all the labeled data in the source . We also have similar findings in PAWS-X . We provide a comprehensive analysis of the factors that contribute to XLA ’ s performance . 2 BACKGROUND . Contextual representation and cross-lingual transfer In recent years , significant progress has been made in learning contextual word representations and pretrained models . Notably , BERT ( Devlin et al. , 2019 ) pretrains a Transformer ( Vaswani et al. , 2017 ) encoder with a masked language model ( MLM ) objective , and uses the same model architecture to adapt to a new task . It also comes with a multilingual version , mBERT , which is trained jointly on 102 languages . RoBERTa ( Liu et al. , 2019 ) extends BERT with improved training , while XLM ( Lample and Conneau , 2019 ) extends mBERT with conditional LM and translation LM ( using parallel data ) objectives . Conneau et al . ( 2020 ) train the largest multilingual language model , XLM-R , with the RoBERTa framework . Despite the absence of any explicit cross-lingual supervision , mBERT and its variants have been shown to learn cross-lingual representations that generalize well across languages . Wu and Dredze ( 2019 ) and Pires et al . ( 2019 ) evaluate the zero-shot cross-lingual transferability of mBERT on several tasks and attribute its generalization capability to shared subword units . Pires et al . ( 2019 ) also found structural similarity ( e.g. , word order ) to be another important factor for successful cross-lingual transfer . K et al . ( 2020 ) , however , show that shared subwords have a minimal contribution ; instead , the structural similarity between languages is more crucial for effective transfer ( more in Appendix D ) . Vicinal risk minimization ( VRM ) Data augmentation supported by the VRM principle ( Chapelle et al.
, 2001 ) can be an effective choice for achieving better out-of-distribution adaptation . In VRM , we minimize the empirical vicinal risk defined as : L_v ( θ ) = ( 1/N ) ∑_{n=1}^{N} l ( f_θ ( x̃_n ) , ỹ_n ) , where f_θ denotes the model parameterized by θ , and D̃ = { ( x̃_n , ỹ_n ) }_{n=1}^{N} is an augmented dataset constructed by sampling the vicinal distribution ϑ ( x̃_i , ỹ_i | x_i , y_i ) around the original training sample ( x_i , y_i ) . Defining vicinity is , however , quite challenging as it requires the extraction of samples from a distribution without hurting their labels . Earlier methods apply simple rules like rotation and scaling of images ( Simard et al. , 1998 ) . Recent work ( Zhang et al. , 2018 ; Berthelot et al. , 2019 ) shows impressive results in image classification with simple linear interpolation of data . However , to our knowledge , none of these methods have so far been successful in NLP due to the discrete nature of texts.1 LM-based supervised augmentation Recently , a number of data-augmentation methods have been proposed using contextualized LMs like BERT , e.g. , Contextual Augmentation ( Kobayashi , 2018 ) , Conditional BERT ( Wu et al. , 2018 ) , and AUG-BERT ( Wu et al. , 2018 ) . These approaches use a constrained augmentation method which alters a pretrained LM to a label-conditional LM for a specific task . This means these methods update the parameters of the pretrained LM using the labels . 3 XLA FRAMEWORK . While recent cross-lingual transfer learning efforts have relied almost exclusively on multi-lingual pretraining and zero-shot transfer of a fine-tuned source model , there is great potential for more elaborate methods that can leverage the unlabeled data better . Motivated by this , we present XLA - our unsupervised data augmentation framework for zero-resource cross-lingual task adaptation . Figure 1 gives an overview of XLA .
Let D_s = ( X_s , Y_s ) and D_t = ( X_t ) denote the training data for a source language s and a target language t , respectively . XLA augments data from various origins at different stages of training . In the initial stage ( epoch 1 ) , it uses the augmented training samples from the target language ( D′_t ) along with the original source ( D_s ) . In later stages ( epochs 2-3 ) , it uses virtual ( vicinal ) sentences generated from the vicinity distribution of source and target examples : ϑ ( x̃_n^s | x_n^s ) and ϑ ( x̃_n^t | x_n^t ) , where x_n^s ∼ X_s and x_n^t ∼ X_t . It performs self-training on the augmented data to acquire the corresponding pseudo labels . To avoid confirmation bias with self-training , where the model accumulates its own errors , it simultaneously trains three task models to generate virtual training data through data augmentation and filtering of potential label noise via multi-epoch co-teaching ( Zhou and Li , 2005 ) . In each epoch , the co-teaching process first performs co-distillation , where two peer task models are used to select “ reliable ” training examples to train the third model . The selected samples with pseudo labels are then added to the target task model ’ s training data by taking the agreement from the other two models , a process we refer to as co-guessing . The co-distillation and co-guessing mechanisms ensure robustness of XLA to out-of-domain distributions that can occur in a multilingual setup , e.g. , due to a structurally dissimilar and/or low-resource target language . Algorithm 1 gives pseudocode for the overall training method . Each of the task models in XLA is an instance of XLM-R fine-tuned on the source language task ( e.g. , English NER ) , whereas the pretrained masked LM parameterized by θ_mlm ( i.e. , before fine-tuning ) is used to define the vicinity distribution ϑ ( x̃_n | x_n , θ_mlm ) around each selected example x_n . 1Considering papers that have been published ( or accepted ) through peer review .
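The staged use of data described above (original source and pseudo-labeled target data in some epochs, vicinal data in others) can be sketched as an epoch-dependent mixture, following the indicator functions on line 22 of Algorithm 1. The dataset contents below are placeholders named after their roles.

```python
def training_mix(e, D_s, D_t_prime, D_s_tilde, D_t_tilde):
    """Training data for epoch e, following the indicators in Algorithm 1:
    D = D_s*1(e in {1,3}) u D'_t*1(e in {1,3}) u D~_s*1(e=3) u D~_t*1(e in {2,3})
    """
    D = []
    if e in (1, 3):
        D += D_s + D_t_prime
    if e == 3:
        D += D_s_tilde
    if e in (2, 3):
        D += D_t_tilde
    return D

# Placeholder datasets, named after their roles in the algorithm.
D_s, D_t_prime = ["src"], ["tgt-pseudo"]
D_s_tilde, D_t_tilde = ["src-vicinal"], ["tgt-vicinal"]

for e in (1, 2, 3):
    print(e, training_mix(e, D_s, D_t_prime, D_s_tilde, D_t_tilde))
```

Epoch 1 uses only source and pseudo-labeled target data, epoch 2 only target vicinal data, and epoch 3 everything, which is the progressive schedule the algorithm encodes.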
There has been some concurrent work that uses pretrained LMs like BERT to craft adversarial examples ( Li et al. , 2020b ) . Although relevant , these methods have a different objective than ours , and none of them is cross- or multi-lingual .
Algorithm 1 XLA : a robust unsupervised data augmentation framework for cross-lingual NLP
Input : source ( s ) and target ( t ) language datasets : D_s = ( X_s , Y_s ) , D_t = ( X_t ) ; task models : θ^(1) , θ^(2) , θ^(3) ; pretrained masked LM θ_mlm ; mask ratio P ; diversification factor δ ; sampling factor α ; distillation factor η
Output : models trained on augmented data
1 : θ^(1) , θ^(2) , θ^(3) = WARMUP ( D_s , θ^(1) , θ^(2) , θ^(3) ) ▷ warm up with confidence penalty
2 : for e ∈ [ 1 : 3 ] do ▷ e denotes epoch
3 :   for k ∈ { 1 , 2 , 3 } do
4 :     X_t^(k) , Y_t^(k) = DISTIL ( X_t , η_e , θ^(k) ) ▷ infer and select target training data for augmentation
5 :     for j ∈ { 1 , 2 , 3 } do
6 :       if k == j then continue
7 :       /* source language data augmentation */
8 :       X̃_s = GEN-LM ( X_s , θ_mlm , P , δ ) ▷ vicinal example generation
9 :       X_s^(k) , Y_s^(k) = DISTIL ( X̃_s , η_e , θ^(k) ) ; X_s^(j) , Y_s^(j) = DISTIL ( X̃_s , η_e , θ^(j) )
10 :      D̃_s = AGREEMENT ( D_s^(k) = ( X_s^(k) , Y_s^(k) ) , D_s^(j) = ( X_s^(j) , Y_s^(j) ) )
11 :      /* target language data augmentation ( no vicinity ) */
12 :      X_t^(j) , Y_t^(j) = DISTIL ( X_t , η_e , θ^(j) )
13 :      D′_t = AGREEMENT ( D_t^(k) = ( X_t^(k) , Y_t^(k) ) , D_t^(j) = ( X_t^(j) , Y_t^(j) ) ) ▷ see line 4
14 :      /* target language data augmentation */
15 :      X̃_t = GEN-LM ( X_t , θ_mlm , P , δ ) ▷ vicinal example generation
16 :      X_t^(k) , Y_t^(k) = DISTIL ( X̃_t , η_e , θ^(k) ) ; X_t^(j) , Y_t^(j) = DISTIL ( X̃_t , η_e , θ^(j) )
17 :      D̃_t = AGREEMENT ( D_t^(k) = ( X_t^(k) , Y_t^(k) ) , D_t^(j) = ( X_t^(j) , Y_t^(j) ) )
18 :      /* train new models on augmented data */
19 :      for l ∈ { 1 , 2 , 3 } do
20 :        if l ≠ j and l ≠ k then
21 :          with sampling factor α , train θ^(l) on D ▷ train progressively
22 :          where D = { D_s · 1 ( e ∈ { 1 , 3 } ) ∪ D′_t · 1 ( e ∈ { 1 , 3 } ) ∪ D̃_s · 1 ( e = 3 ) ∪ D̃_t · 1 ( e ∈ { 2 , 3 } ) }
23 : Return { θ^(1) , θ^(2) , θ^(3) }
Although the data augmentation methods proposed in Contextual Augmentation ( Kobayashi , 2018 ) , Conditional BERT ( Wu et al. , 2018 ) , and AUG-BERT ( Wu et al. , 2018 ) also use a pretrained masked LM , there are some fundamental differences between our method and these approaches . Unlike these approaches , our vicinity-based LM augmentation is purely unsupervised and we do not perform any fine-tuning of the pretrained vicinity model ( θ_mlm ) . The vicinity model in XLA is a disjoint pretrained entity whose weights are not trained on any task objective . This disjoint characteristic gives our framework the flexibility to replace θ_mlm even with a better monolingual LM for a specific target language , which in turn makes XLA extendable to utilize stronger LMs that may come in the future . In the following , we describe the steps in Algorithm 1 . | This paper proposed a new data augmentation framework for low-resource (and zero-resource) cross-lingual task adaptation by combining several methods (entropy-regularized training, self-training). The authors conducted extensive experiments on three cross-lingual tasks, demonstrating the effectiveness of XLA. In addition, the authors compared different choices in the XLA distillation stage and claimed that the gain from XLA is beyond the model ensemble effect. | SP:937d3a7858616e03c7d95fed51082dff234198f4 |
One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks | 1 INTRODUCTION . Standard practice in machine learning has long been to only address carefully circumscribed , often very related tasks . For example , we might train a single classifier to label an image as containing objects from a certain predefined set , or to label the words of a sentence with their semantic roles . Indeed , when working with relatively simple classes of functions like linear classifiers , it would be unreasonable to expect to train a classifier that handles more than such a carefully scoped task ( or related tasks in standard multitask learning ) . As techniques for learning with relatively rich classes such as neural networks have been developed , it is natural to ask whether or not such scoping of tasks is inherently necessary . Indeed , many recent works ( see Section 1.2 ) have proposed eschewing this careful scoping of tasks , and instead training a single , “ monolithic ” function spanning many tasks . Large , deep neural networks can , in principle , represent multiple classifiers in such a monolithic learned function ( Hornik , 1991 ) , giving rise to the field of multitask learning . This combined function might be learned by combining all of the training data for all of the tasks into one large batch–see Section 1.2 for some examples . Taken to an extreme , we could consider seeking to learn a universal circuit—that is , a circuit that interprets arbitrary programs in a programming language which can encode various tasks . But , the ability to represent such a monolithic combined function does not necessarily entail that such a function can be efficiently learned by existing methods . Cryptographic hardness theorems ( Kearns & Valiant , 1994 ) establish that this is not possible in general by any method , let alone the specific training methods used in practice . Nevertheless , we still can ask how ∗Work performed in part while visiting Google . 
†Work performed in part while affiliated with Stanford , and in part while interning at Google . rich a family of tasks can be learned by these standard methods . In this work , we study the extent to which backpropagation with stochastic gradient descent ( SGD ) can learn such monolithic functions on diverse , unrelated tasks . There might still be some inherent benefit to an architecture in which tasks are partitioned into sub-tasks of such small scope , and the training data is correspondingly partitioned prior to learning . For example , in the early work on multitask learning , Caruana ( 1997 ) observed that training a network to solve unrelated tasks simultaneously seemed to harm the overall performance . Similarly , the seminal work of Jacobs et al . ( 1991 ) begins by stating that “ If backpropagation is used to train a single , multilayer network to perform different subtasks on different occasions , there will generally be strong interference effects that lead to slow learning and poor generalization ” . We therefore ask if , for an unfortunate choice of tasks in our model , learning by standard methods might be fundamentally impaired . As a point of reference from neuroscience , the classical view is that distinct tasks are handled in the brain by distinct patches of the cortex . While it is a subject of debate whether modularity exists for higher level tasks ( Samuels , 2006 ) , it is accepted that there are dedicated modules for low-level tasks such as vision and audio processing . Thus , it seems that the brain produces a modular architecture , in which different tasks are handled by different regions of the cortex . Conceivably , this division into task-specific regions might be driven by fundamental considerations of learnability : A single , monolithic neural circuit might simply be too difficult to learn because the different tasks might interfere with one another . 
Others have taken neural networks trained by backpropagation as a model of learning in the cortex ( Musslick et al. , 2017 ) ; to the extent that this is reasonable , our work has some bearing on these questions as well . 1.1 OUR RESULTS . We find , perhaps surprisingly , that combining multiple tasks into one cannot fundamentally impair learning with standard training methods . We demonstrate this for a broad family of methods for combining individual tasks into a single monolithic task . For example , inputs for each individual task may come from a disjoint region ( for example , a disjoint ball ) in a common input space , and each individual task could then involve applying some arbitrary simple function ( e.g. , a separate linear classifier for each region ) . Alternatively , there may be an explicit “ task code ” attribute ( e.g. , a one-hot code ) , together with the usual input attributes and output label ( s ) , where examples with the same task code are examples for the same learning task . Complementing our results that combining multiple tasks does not impair learning , we also find that some task coding schemes do incur a sample complexity penalty . A vast variety of task coding schemes may be used . As a concrete example , when the data points for each task are well-separated into distinct clusters , and the tasks are linear classification tasks , we show that a two-layer architecture trained with SGD successfully learns the combined , monolithic function ; the required amount of data simply scales as the sum of the amount required to learn each task individually ( Theorem 2 ) . Meanwhile , if the tasks are determined by a balanced decision tree of height h on d code attributes ( as in Fig . 1 , left ) , we find that the training time and amount of data needed scale as ∼ d^h , quasipolynomial in the 2^h leaves ( distinct tasks ) when d is of similar size to h , and thus when the coding is efficient ( Theorem 3 ) .
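The one-hot task-code construction described above can be sketched concretely: two unrelated binary tasks are merged into one monolithic dataset by prepending a task code to the input attributes. The two toy tasks below are illustrative assumptions, not the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two unrelated binary tasks on 2-d inputs (toy choices for illustration).
task_fns = [
    lambda x: int(x[0] > 0.0),         # task 0: sign of the first coordinate
    lambda x: int(x[0] * x[1] > 0.0),  # task 1: sign of the product
]

def combined_dataset(n_per_task=4):
    """Monolithic dataset: each input = [one-hot task code ; attributes]."""
    rows, labels = [], []
    for t, fn in enumerate(task_fns):
        code = np.eye(len(task_fns))[t]           # one-hot task code
        for _ in range(n_per_task):
            x = rng.normal(size=2)
            rows.append(np.concatenate([code, x]))
            labels.append(fn(x))
    return np.array(rows), np.array(labels)

X, y = combined_dataset()
print(X.shape)  # (8, 4): 2 code attributes + 2 input attributes
```

A single network trained on (X, y) must jointly learn to read the task code and to apply the corresponding per-task function, which is exactly the monolithic setting studied here.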
We also prove a corresponding lower bound , which shows that this bound is in fact asymptotically tight ( Theorem 3 ) . More generally , for task codings based on decision trees using linear splits with a margin of at least γ ( when the data has unit ℓ2 norm ) , the training time and required data are asymptotically bounded by ∼ e^O ( h/γ^2 ) , which for constant γ is polynomial in the 2^h functions ( Theorem 4 ) . We generalize from these cluster-based and decision-tree based task codings to more complex codes that are actually simple programs . For instance , we show that SQL-style aggregation queries over a fixed database , written as functions of the parameters of the query , can also be learned this way . More generally , simple programming constructs ( such as in Fig . 1 , right ) , built by operations such as compositions , aggregation , concatenation , and branching on a small number of such learnable functions , are also learnable ( Theorem 5 ) . In general , we can learn a low-depth formula ( circuit with fan-out 1 ) in which each gate is not merely a switch ( as in a decision tree ) , but can be any analytic function on the inputs , including arithmetic operations . Again , our key technical contribution is that we show that all of these functions are efficiently learned by SGD . This is non-trivial since , although universal approximation theorems show that such functions can be expressed by ( sufficiently wide ) two-layer neural networks , under standard assumptions some expressible functions are not learnable ( Klivans & Sherstov , 2009 ) . We supplement the theoretical bounds with experiments on clusters , decision trees , and SQL-style aggregation showing that such functions are indeed learned in practice .
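The decision-tree task coding discussed above can be sketched as a target function: h code bits route each input to one of 2^h leaves, and the selected leaf applies its own linear classifier. The leaf weights below are arbitrary illustrative values, not drawn from the paper.

```python
import numpy as np

def tree_task(code_bits, x, leaf_weights):
    """Monolithic combined function for a balanced tree of height h over the
    code attributes: h code bits index one of 2**h leaves, and the selected
    leaf's linear classifier labels the input x."""
    leaf = 0
    for b in code_bits:              # descend: each code bit picks a child
        leaf = 2 * leaf + b
    return int(leaf_weights[leaf] @ x > 0)

# 2**2 = 4 leaves, each with its own (arbitrary, illustrative) linear task.
leaf_weights = np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [1.0, 1.0, 1.0]])

x = np.array([1.0, -1.0, 0.5])
print(tree_task([1, 0], x, leaf_weights))  # code bits [1,0] select leaf 2: sign of x[2]
print(tree_task([0, 1], x, leaf_weights))  # code bits [0,1] select leaf 1: sign of x[1]
```

This is the target the two-layer network must learn from (code, input, label) examples alone; the d^h-type bounds quoted above govern how much data and time that takes.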
We note that the learning of such combined functions could have been engineered by hand : for example , there exist efficient algorithms for learning clusterings or such decision trees , and it is easy to learn the linear classifiers given the partitioned data . Likewise , these classes of functions are all known to be learnable by other methods , given an appropriate transformation of the input features . The key point is that the two-layer neural network can jointly learn the task coding scheme and the task-specific functions without special engineering of the architecture . That is , it is unnecessary to engineer a way of partitioning the data into separate tasks prior to learning . Relatedly , the time and sample requirements of learning multiple tasks on a single network in general are insufficient to explain the modularity observed in biological neural networks if their learning dynamics are similar to SGD , i.e. , we cannot explain the presence of modularity from such general considerations . All our theoretical results are based upon a fundamental theorem that shows that analytic functions can be efficiently learnt by wide ( but finite-width ) two-layer neural networks with standard activation functions ( such as ReLU ) , using SGD from a random initialization . Specifically , we derive novel generalization bounds for multivariate analytic functions ( Theorems 1 and 8 ) by relating wide networks to kernel learning with a specific network-induced kernel ( Jacot et al. , 2018 ; Du et al. , 2019 ; Allen-Zhu et al. , 2019 ; Arora et al. , 2019a ; Lee et al. , 2019 ) , known as the neural tangent kernel ( NTK ) ( Jacot et al. , 2018 ) . We further develop a calculus of bounds showing that the sum , product , ratio , and composition of analytic functions is also learnable , with bounds constructed using the familiar product and chain rules of univariate calculus ( Corollaries 1 , 2 ) .
These above learnability results may be of independent interest ; for example , they can be used to show that natural physical laws like the gravitational force equations ( shown in Fig . 1 ) can be efficiently learnt by neural networks ( Section B.1 ) . Furthermore , our bounds imply that the NTK kernel for ReLU activation has theoretical learning guarantees that are superior to the Gaussian kernel ( Section A.2 ) , which we also demonstrate empirically with experiments on learning the gravitational force law ( Section B.2 ) . 1.2 RELATED WORK . Most related to our work are a number of works in application areas that have sought to learn a single network that can perform many different tasks . In natural language processing , Tsai et al . ( 2019 ) show that a single model can solve machine translation across more than 50 languages . Many other works in NLP similarly seek to use one model for multiple languages , or even multiple tasks ( Johnson et al. , 2017 ; Aharoni et al. , 2019 ; Bapna et al. , 2019 ; Devlin et al. , 2018 ) . Monolithic models have also been successfully trained for tasks in very different domains , such as speech and language ( Kaiser et al. , 2017 ) . Finally , there is also work on training extremely large neural networks which have the capacity to learn multiple tasks ( Shazeer et al. , 2017 ; Raffel et al. , 2019 ) . These works provide empirical clues that suggest that a single network can successfully be trained to perform a wide variety of tasks . But , they do not provide a systematic theoretical investigation of the extent of this ability as we do here . Caruana ( 1997 ) proposed multitask learning in which a single network is trained to solve multiple tasks on the same input simultaneously , as a vector of outputs . 
He observed that the average generalization error for the multiple tasks may be much better than when the tasks are trained separately, and this observation initiated an active area of machine learning research (Zhang & Yang, 2017). Multitask learning is obviously related to our monolithic architectures. The difference is that whereas in multitask learning all of the tasks are computed simultaneously and output on separate gates, here all of the tasks share a common set of outputs, and the task code inputs switch between the various tasks. Furthermore, contrary to the main focus of multitask learning, we are primarily interested in the extent to which different tasks may interfere, rather than how much similar ones may benefit. Our work is also related to studies of neural models of multitasking in cognitive science. In particular, Musslick et al. (2017) consider a similar two-layer architecture in which there is a set of task code attributes. But, as in multitask learning, they are interested in how many of these tasks can be performed simultaneously, on distinct outputs. They analyze the tradeoff between improved sample complexity and interference of the tasks with a handcrafted “gating” scheme, in which parts of the activity are zeroed out depending on the input (as opposed to the usual nonlinearities); in this model, they find that the speedup from multitask learning comes at the penalty of limiting the number of tasks that can be correctly computed as the similarity of inputs varies. Thus, in contrast to our model, where the single model computes distinct tasks sequentially, they do find that distinct tasks can interfere with each other when we seek to solve them simultaneously. | This paper posits a very interesting question about provable multi-task learning by neural nets.
The idea of encoding the objectives as task codes, writing a smooth approximation to the predictor function as a weighted sum of indicators, and then training a net to learn this smoothed function is quite interesting. But the problem with the paper is that the presentation of the details is extremely unclear, and almost nothing about the proofs can be easily followed! Let me cite a few specific issues, | SP:451837cd17de7cdc4a059f4cc0cedd10b2eb136d
One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks | 1 INTRODUCTION. Standard practice in machine learning has long been to only address carefully circumscribed, often very related tasks. For example, we might train a single classifier to label an image as containing objects from a certain predefined set, or to label the words of a sentence with their semantic roles. Indeed, when working with relatively simple classes of functions like linear classifiers, it would be unreasonable to expect to train a classifier that handles more than such a carefully scoped task (or related tasks in standard multitask learning). As techniques for learning with relatively rich classes such as neural networks have been developed, it is natural to ask whether or not such scoping of tasks is inherently necessary. Indeed, many recent works (see Section 1.2) have proposed eschewing this careful scoping of tasks, and instead training a single, “monolithic” function spanning many tasks. Large, deep neural networks can, in principle, represent multiple classifiers in such a monolithic learned function (Hornik, 1991), giving rise to the field of multitask learning. This combined function might be learned by combining all of the training data for all of the tasks into one large batch; see Section 1.2 for some examples. Taken to an extreme, we could consider seeking to learn a universal circuit, that is, a circuit that interprets arbitrary programs in a programming language that can encode various tasks. But the ability to represent such a monolithic combined function does not necessarily entail that such a function can be efficiently learned by existing methods. Cryptographic hardness theorems (Kearns & Valiant, 1994) establish that this is not possible in general by any method, let alone by the specific training methods used in practice. Nevertheless, we can still ask how rich a family of tasks can be learned by these standard methods. (∗Work performed in part while visiting Google. †Work performed in part while affiliated with Stanford, and in part while interning at Google.) In this work, we study the extent to which backpropagation with stochastic gradient descent (SGD) can learn such monolithic functions on diverse, unrelated tasks. There might still be some inherent benefit to an architecture in which tasks are partitioned into sub-tasks of such small scope, and the training data is correspondingly partitioned prior to learning. For example, in early work on multitask learning, Caruana (1997) observed that training a network to solve unrelated tasks simultaneously seemed to harm overall performance. Similarly, the seminal work of Jacobs et al. (1991) begins by stating that “If backpropagation is used to train a single, multilayer network to perform different subtasks on different occasions, there will generally be strong interference effects that lead to slow learning and poor generalization”. We therefore ask if, for an unfortunate choice of tasks in our model, learning by standard methods might be fundamentally impaired. As a point of reference from neuroscience, the classical view is that distinct tasks are handled in the brain by distinct patches of the cortex. While it is a subject of debate whether modularity exists for higher-level tasks (Samuels, 2006), it is accepted that there are dedicated modules for low-level tasks such as vision and audio processing. Thus, it seems that the brain produces a modular architecture, in which different tasks are handled by different regions of the cortex. Conceivably, this division into task-specific regions might be driven by fundamental considerations of learnability: a single, monolithic neural circuit might simply be too difficult to learn because the different tasks might interfere with one another.
Others have taken neural networks trained by backpropagation as a model of learning in the cortex (Musslick et al., 2017); to the extent that this is reasonable, our work has some bearing on these questions as well. 1.1 OUR RESULTS. We find, perhaps surprisingly, that combining multiple tasks into one cannot fundamentally impair learning with standard training methods. We demonstrate this for a broad family of methods for combining individual tasks into a single monolithic task. For example, inputs for each individual task may come from a disjoint region (for example, a disjoint ball) in a common input space, and each individual task could then involve applying some arbitrary simple function (e.g., a separate linear classifier for each region). Alternatively, there may be an explicit “task code” attribute (e.g., a one-hot code), together with the usual input attributes and output label(s), where examples with the same task code are examples for the same learning task. Complementing our results that combining multiple tasks does not impair learning, we also find that some task coding schemes do incur a sample complexity penalty. A vast variety of task coding schemes may be used. As a concrete example, when the data points for each task are well separated into distinct clusters, and the tasks are linear classification tasks, we show that a two-layer architecture trained with SGD successfully learns the combined, monolithic function; the required amount of data simply scales as the sum of the amount required to learn each task individually (Theorem 2). Meanwhile, if the tasks are determined by a balanced decision tree of height h on d code attributes (as in Fig. 1, left), we find that the training time and amount of data needed scale as ~d^h, which is quasipolynomial in the 2^h leaves (distinct tasks) when d is of similar size to h, and thus when the coding is efficient (Theorem 3).
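To make the task-code formulation concrete, here is a minimal self-contained sketch (our own toy construction, not the paper's experimental setup): four hypothetical linear classification tasks are merged into one dataset by prepending a one-hot task code to each input, and a wide two-layer ReLU network in the standard NTK parameterization is trained on the combined data with plain minibatch SGD on the squared loss:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_per, d = 4, 200, 2
W_task = rng.normal(size=(n_tasks, d))      # one hypothetical linear labeler per task

# Monolithic dataset: each example is [one-hot task code ; input attributes].
codes = np.repeat(np.eye(n_tasks), n_per, axis=0)
x = rng.normal(size=(n_tasks * n_per, d))
task = np.repeat(np.arange(n_tasks), n_per)
y = np.where(np.einsum('ij,ij->i', x, W_task[task]) > 0, 1.0, -1.0)
X = np.hstack([codes, x])                   # shape (800, 6)

# Wide two-layer ReLU net, both layers trained with minibatch SGD.
m = 512
W1 = rng.normal(size=(X.shape[1], m)) / np.sqrt(X.shape[1])
a = rng.choice([-1.0, 1.0], size=m)

def forward(Z):
    return np.maximum(Z @ W1, 0.0) @ a / np.sqrt(m)

lr = 0.5
for _ in range(3000):
    i = rng.integers(0, len(y), size=64)
    h = np.maximum(X[i] @ W1, 0.0)
    err = (h @ a / np.sqrt(m)) - y[i]       # squared-loss residual
    a -= lr * h.T @ err / (len(i) * np.sqrt(m))
    W1 -= lr * X[i].T @ ((err[:, None] * a / np.sqrt(m)) * (h > 0)) / len(i)

train_acc = np.mean(np.sign(forward(X)) == y)
```

The point of the sketch is that the network discovers the routing induced by the code attributes on its own, without any explicit partitioning of the data into tasks.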
We also prove a corresponding lower bound, which shows that this bound is in fact asymptotically tight (Theorem 3). More generally, for task codings based on decision trees using linear splits with a margin of at least γ (when the data has unit ℓ2 norm), the training time and required data are asymptotically bounded by ~e^{O(h/γ²)}, which for constant γ is polynomial in the 2^h functions (Theorem 4). We generalize from these cluster-based and decision-tree-based task codings to more complex codes that are actually simple programs. For instance, we show that SQL-style aggregation queries over a fixed database, written as functions of the parameters of the query, can also be learned this way. More generally, simple programming constructs (such as in Fig. 1, right), built by operations such as composition, aggregation, concatenation, and branching on a small number of such learnable functions, are also learnable (Theorem 5). In general, we can learn a low-depth formula (a circuit with fan-out 1) in which each gate is not merely a switch (as in a decision tree), but can be any analytic function of its inputs, including arithmetic operations. Again, our key technical contribution is that we show that all of these functions are efficiently learned by SGD. This is non-trivial since, although universal approximation theorems show that such functions can be expressed by (sufficiently wide) two-layer neural networks, under standard assumptions some expressible functions are not learnable (Klivans & Sherstov, 2009). We supplement the theoretical bounds with experiments on clusters, decision trees, and SQL-style aggregation, showing that such functions are indeed learned in practice.
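The decision-tree task coding can be sketched as follows (an illustrative construction of ours, using one code bit per tree level): the code routes to one of 2^h leaves of a balanced tree, and each leaf applies its own linear classifier to the input attributes:

```python
import numpy as np

rng = np.random.default_rng(0)
h, d_in = 3, 5                       # tree height; input dimension
n_leaves = 2 ** h                    # 8 distinct tasks, one per leaf
W_leaf = rng.normal(size=(n_leaves, d_in))

def leaf_index(code):
    """Route a {0,1}-code through a balanced tree: bit i picks the branch at depth i."""
    idx = 0
    for bit in code:
        idx = 2 * idx + int(bit)
    return idx

def g(code, x):
    """Monolithic target: the code selects which leaf's linear classifier labels x."""
    return 1 if x @ W_leaf[leaf_index(code)] > 0 else -1

label = g([1, 0, 1], rng.normal(size=d_in))   # the task at leaf 5 of 8
```

The ~d^h scaling of Theorem 3 can then be read, roughly, as counting the degree-h monomials in the d code attributes that are needed to express this routing.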
| This paper sets out to show that multiple tasks can be encoded in a neural network that does not have an explicit modular construction for each task, which is in contrast with the work of [Bakker and Heskes, JMLR 2003] and [Jacobs, Jordan, Nowlan and Hinton, Neural Computation 1991]. The premise of the paper is that task coding as indicators can be approximated via the derivative of an approximation (erf + Taylor truncation) of the step function. This approximation is analytic and, with the individual tasks being analytic, makes the entire multiple-task function g(c; x) approximable by neural networks due to the universal function approximation power of neural networks. | SP:451837cd17de7cdc4a059f4cc0cedd10b2eb136d
One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks | [paper text identical to the preceding entry] | This paper takes an interesting theoretical dive into the learnability of multiple tasks combined as one task, where the tasks are constructed with the special structures of a cluster, a decision tree, or a simple program.
This paper provides a sample complexity analysis, showing that wide two-layer neural networks with standard activation functions and SGD optimization are able to capture the data regularity between input and output, modulated by the three kinds of task code considered. Experimental results show that the networks are flexible enough to fit complex data generated in this way. | SP:451837cd17de7cdc4a059f4cc0cedd10b2eb136d
Increasing the Coverage and Balance of Robustness Benchmarks by Using Non-Overlapping Corruptions | 1 INTRODUCTION. Neural networks perform poorly when they deal with images drawn from a different distribution than their training samples. Indeed, neural networks are sensitive to adversarial examples (Szegedy et al., 2014), background changes (Xiao et al., 2020), and common corruptions (Hendrycks & Dietterich, 2019). Common corruptions are perturbations that change the appearance of images without changing their semantic content. For instance, neural networks are sensitive to noise (Koziarski & Cyganek, 2017), blur (Vasiljevic et al., 2016), and lighting condition variations (Temel et al., 2017). Contrary to adversarial examples (Szegedy et al., 2014), common corruptions are not artificial perturbations especially crafted to fool neural networks. They appear naturally in industrial applications without any human interference, and can significantly reduce the performance of neural networks. A neural network is robust to a corruption c when its performance on samples corrupted with c is close to its performance on clean samples. Some methods have recently been proposed to make neural networks more robust to common corruptions (Geirhos et al., 2019; Hendrycks* et al., 2020; Rusak et al., 2020). To determine whether these approaches are effective, a method is required to measure the robustness of neural networks to common corruptions. The most commonly used method consists in evaluating the performance of neural networks on images distorted by various kinds of common corruptions (Hendrycks & Dietterich, 2019; Karahan et al., 2016; Geirhos et al., 2019; Temel et al., 2017). In this study, we call the group of perturbations used to make the robustness estimation a corruption benchmark.
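The robustness notion above can be made operational in a few lines. The sketch below uses a toy thresholding "classifier" and additive Gaussian noise as a stand-in for a common corruption; all names, shapes, and parameters here are illustrative, not from the paper:

```python
import numpy as np

def gaussian_noise(images, sigma=0.1, seed=0):
    """A stand-in common corruption: additive Gaussian noise, clipped to [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.clip(images + rng.normal(0.0, sigma, images.shape), 0.0, 1.0)

def accuracy(predict, images, labels):
    return float(np.mean(predict(images) == labels))

# Toy stand-in for a classifier: threshold on the mean pixel intensity.
predict = lambda imgs: (imgs.mean(axis=(1, 2)) > 0.5).astype(int)

# Synthetic 8x8 "images": class 1 is brighter than class 0.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
images = np.clip(0.3 + 0.4 * labels[:, None, None]
                 + rng.normal(0.0, 0.05, size=(200, 8, 8)), 0.0, 1.0)

clean_acc = accuracy(predict, images, labels)
corrupted_acc = accuracy(predict, gaussian_noise(images), labels)
# The classifier is robust to this corruption if corrupted_acc stays close to clean_acc.
```

A corruption benchmark, in this view, is simply a collection of such corruption functions, with robustness reported as the gap (or ratio) between clean and corrupted accuracy for each one.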
We also use this term to refer to a set of test images that have been corrupted with these various corruptions. We identify two important factors that should be taken into account when building a corruption benchmark: the balance and the coverage. In this paper, we consider that a corruption c is covered by a benchmark when increasing the robustness of a network to all the corruptions of this benchmark also increases the robustness of the network to c. For instance, a benchmark that contains a camera shake blur corruption covers the defocus blur corruption, because the robustnesses towards these two corruptions are correlated (Vasiljevic et al., 2016). The coverage of a benchmark is defined as the number of corruptions covered by this benchmark. The wider the range of common corruptions a benchmark covers, the more complete a view it gives of the robustness of a neural network. At the same time, we consider a benchmark balanced when it gives the same importance to the robustness to every corruption it contains. For instance, according to a balanced benchmark, being robust to noise is as important as being robust to brightness variations. We argue that most existing corruption benchmarks are unbalanced: they give too much importance to the robustness to some corruptions compared to others. The coverage and balance of corruption benchmarks are related to the notion of corruption overlappings. We say that two corruptions overlap when the robustnesses of neural networks towards these corruptions are correlated. The contribution of this paper is fourfold: 1. We propose the first method to estimate to what extent two corruptions overlap. 2. We show that building corruption benchmarks with non-overlapping corruptions makes them more balanced and able to cover a wider range of corruptions. 3. We propose a method to build benchmarks that contain only non-overlapping corruptions. 4.
We use this method to build from ImageNet a benchmark of Non-Overlapping Corruptions called ImageNet-NOC, to estimate the robustness of image classifiers to common corruptions. We show that ImageNet-NOC is balanced and covers corruptions that are not covered by ImageNet-C, a reference corruption benchmark (Hendrycks & Dietterich, 2019). 2 BACKGROUND AND RELATED WORKS. 2.1 ESTIMATING THE ROBUSTNESS OF NETWORKS WITH OUT-OF-DISTRIBUTION SAMPLES. Studying the performance of neural networks on samples that lie outside the training distribution is a widely studied domain. Being able to handle out-of-distribution (o.o.d.) samples is essential to guarantee that neural networks are reliable in real-world applications. Several benchmarks and methods have been proposed to study this field. For instance, ImageNet-A (Hendrycks et al., 2019) is a simple benchmark for ImageNet classifiers that contains samples drawn from a different source than the one used to build ImageNet. Adversarial examples are samples that have been slightly modified to fool neural networks (Szegedy et al., 2014). Making sure that models are robust to these kinds of o.o.d. samples is essential in terms of security. Artistic renditions (Hendrycks et al., 2020) or sketches (Haohan et al., 2019) can also be useful to determine whether neural networks understand the abstract concepts we want them to learn. Methods to study how classifiers are affected by background changes have also been proposed recently (Beery et al., 2018; Xiao et al., 2020). Another important aspect of the robustness of neural networks to o.o.d. samples is the robustness to common corruptions. This aspect of robustness is generally estimated by gathering several commonly encountered corruptions and testing the performance of neural networks on images corrupted with them.
Diverse selections of common corruptions have been proposed to make robustness estimations (Karahan et al., 2016; Laugros et al., 2019; Geirhos et al., 2019). In particular, ImageNet-C is a popular benchmark used to measure the robustness of ImageNet classifiers (Hendrycks & Dietterich, 2019). Common corruption benchmarks have also been proposed in the context of object detection (Michaelis et al., 2019), scene classification (Tadros et al., 2019), or eye-tracking (Che et al., 2020). It is worth noting that some transformations lying between adversarial attacks and common corruptions have recently been proposed to measure the robustness of image classifiers (Kang et al., 2019; Dunn et al., 2019; Liu et al., 2019). 2.2 CORRUPTION OVERLAPPINGS IN BENCHMARKS. It has been noticed that fine-tuning a model with camera shake blur helps it deal with defocus blur, and conversely (Vasiljevic et al., 2016). The robustnesses to diverse kinds of noises have also been shown to be closely related (Laugros et al., 2019). Even for two corruptions that do not look similar to the human eye, increasing the robustness of a model to one of them can imply increasing the robustness to the other (Kang et al., 2019). In general, it has been shown that the robustnesses to corruptions that distort the high-frequency content of images are correlated (Yin et al., 2019). In the context of adversarial examples, it is known that the robustness towards one adversarial attack can be correlated with the robustness to another attack (Tramer & Boneh, 2019). It is therefore generally recommended to evaluate adversarial robustness with attacks that are clearly different from each other (Carlini et al., 2019). The experiments carried out in this paper suggest that this recommendation should also be followed in the context of common corruption robustness estimation. 3 CORRUPTION OVERLAPPING.
3.1 THE CORRUPTION OVERLAPPING SCORE. We consider that two corruptions overlap when the robustness to one of these corruptions is correlated with the robustness to the other. In this section, we propose a methodology to estimate to what extent two corruptions overlap. The Robustness Score. To determine whether two corruptions overlap, we first need to introduce a metric called the robustness score. This score gives an estimation of the robustness of a model m to a corruption c. It is computed with the following formula: $R_c^m = A_c / A_{clean}$, where $A_{clean}$ is the accuracy of m on an uncorrupted test set and $A_c$ is the accuracy of m on the same test set corrupted with c. The higher $R_c^m$ is, the more robust m is. Note that using this metric requires monitoring $A_{clean}$ and making sure it is relatively high; otherwise, for example, an untrained model for which $A_c$ equals $A_{clean}$ would be considered robust. In this study, this metric is used only in the methodology we propose to estimate the overlapping between two corruptions. The Corruption Overlapping Score. We consider two neural networks $m_1$ and $m_2$ and two corruptions $c_1$ and $c_2$. $m_1$ and $m_2$ are identical and trained with exactly the same settings, except that their training sets are respectively augmented with the corruptions $c_1$ and $c_2$. A standard model is trained the same way but only with non-corrupted samples. We propose a method to measure to what extent $c_1$ and $c_2$ overlap. The idea is to see whether a data augmentation with $c_1$ makes a model more robust to $c_2$, and conversely. To determine this, $m_1$, $m_2$, and a test set are used to compute the following expression:

$(R_{c_1}^{m_2} - R_{c_1}^{standard}) + (R_{c_2}^{m_1} - R_{c_2}^{standard})$ (1)

The first term of (1) measures whether a model that fits exactly $c_2$ is more robust to $c_1$ than the standard model. Symmetrically, the second term measures whether a model that fits exactly $c_1$ is more robust to $c_2$ than the standard model.
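As a minimal sketch, the robustness scores and expression (1) can be computed directly from measured accuracies. The accuracy values below are made-up placeholders, not numbers from the paper:

```python
# Robustness score R_c^m = A_c / A_clean, and the unnormalized overlap
# expression (1). All accuracy values are illustrative.

def robustness_score(acc_corrupted, acc_clean):
    return acc_corrupted / acc_clean

# Accuracies of each model (m1 augmented with c1, m2 with c2, standard)
# on the clean test set and on the test sets corrupted with c1 or c2.
acc = {
    "m1": {"clean": 0.90, "c1": 0.85, "c2": 0.60},
    "m2": {"clean": 0.90, "c1": 0.55, "c2": 0.86},
    "standard": {"clean": 0.92, "c1": 0.40, "c2": 0.45},
}

def R(model, corruption):
    return robustness_score(acc[model][corruption], acc[model]["clean"])

# Expression (1): robustness gained on c1 by training with c2, plus
# robustness gained on c2 by training with c1.
expr1 = (R("m2", "c1") - R("standard", "c1")) + (R("m1", "c2") - R("standard", "c2"))
print(round(expr1, 3))  # 0.354
```

A larger value of this expression means that augmenting with one corruption transfers more robustness to the other.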
The more a data augmentation with $c_1$ makes a model more robust to $c_2$, and reciprocally, the more we can suppose that the robustnesses to $c_1$ and $c_2$ are correlated in practice. In other words, expression (1) gives an estimation of the overlapping between $c_1$ and $c_2$. For convenience, we would like a corruption overlapping score equal to 1 when $c_1 = c_2$, and equal to 0 when the robustnesses to $c_1$ and $c_2$ are not correlated at all. We propose a new expression that respects both conditions:

$O_{c_1,c_2} = \max\left\{0,\ \frac{1}{2}\left(\frac{R_{c_2}^{m_1} - R_{c_2}^{standard}}{R_{c_2}^{m_2} - R_{c_2}^{standard}} + \frac{R_{c_1}^{m_2} - R_{c_1}^{standard}}{R_{c_1}^{m_1} - R_{c_1}^{standard}}\right)\right\}$ (2)

Expression (2) is a normalized version of (1). It measures the overlapping between two corruptions while respecting the conditions mentioned above. Indeed, if a data augmentation with $c_1$ does not increase the robustness to $c_2$ at all, and conversely, then the ratios in (2) are null or negative, so the whole overlapping score is clipped to zero. In other words, when $c_1$ and $c_2$ do not overlap at all, the overlapping score is equal to 0. Besides, when $c_1 = c_2$, $R_{c_2}^{m_1} = R_{c_2}^{m_2}$ and $R_{c_1}^{m_2} = R_{c_1}^{m_1}$, so both ratios of (2) are equal to 1; hence $O_{c_1,c_2} = 1$ when $c_1$ and $c_2$ completely overlap. How to compute an overlapping score. To get the overlapping score between $c_1$ and $c_2$, we follow the method illustrated in Figure 1. This method has six steps and requires a training set, a test set, and three untrained models sharing the same architecture ($m_1$, $m_2$, and standard). Step (1) consists of using the corruptions $c_1$ and $c_2$ to get two training sets, each corrupted with one corruption. The obtained corrupted sets are then used to train the models $m_1$ and $m_2$ in step (2). The standard model is also trained during this step, but only with non-corrupted samples. In step (3), similarly to step (1), we use $c_1$ and $c_2$ to get two corrupted versions of the test set.
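Equation (2) translates directly into code. The accuracy values below are hypothetical inputs used only to exercise the formula:

```python
# Corruption overlapping score O_{c1,c2} from equation (2): the mean of two
# normalized robustness-gain ratios, clipped from below at 0 by the max.
# The accuracy values are illustrative placeholders.

def robustness(acc_corrupted, acc_clean):
    return acc_corrupted / acc_clean

def overlapping_score(acc):
    """acc: nested dict model -> {"clean", "c1", "c2"} accuracies."""
    R = {m: {c: robustness(a[c], a["clean"]) for c in ("c1", "c2")}
         for m, a in acc.items()}
    ratio_c2 = ((R["m1"]["c2"] - R["standard"]["c2"]) /
                (R["m2"]["c2"] - R["standard"]["c2"]))
    ratio_c1 = ((R["m2"]["c1"] - R["standard"]["c1"]) /
                (R["m1"]["c1"] - R["standard"]["c1"]))
    return max(0.0, 0.5 * (ratio_c2 + ratio_c1))

acc = {
    "m1": {"clean": 0.90, "c1": 0.85, "c2": 0.60},
    "m2": {"clean": 0.90, "c1": 0.55, "c2": 0.86},
    "standard": {"clean": 0.92, "c1": 0.40, "c2": 0.45},
}
print(round(overlapping_score(acc), 3))  # 0.363
```

Each ratio compares the robustness gain transferred from the other corruption against the gain obtained by training on the corruption itself, so identical corruptions yield 1 and unrelated ones yield 0.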
The accuracies of the three models on the three test sets are computed in step (4). The scores obtained are used in step (5) to get the robustness scores of each model for the corruptions $c_1$ and $c_2$. These results are used to compute the overlapping score between $c_1$ and $c_2$ in step (6). | This paper points out that ImageNet-C, the de facto standard for measuring robustness to natural corruptions for ImageNet classification models, contains correlated corruptions, so a mean robustness score over the ImageNet-C corruptions is biased in favor of certain classes of corruptions. It proposes two metrics: robustness score and overlapping score, and uses them to specify desirable characteristics of a robustness benchmark: coverage and balance. It specifies an algorithm for selecting a set of corruptions that optimize for these characteristics, and introduces a new benchmark (ImageNet-NOC) built using this algorithm and evaluates some models that performed well on ImageNet-C on ImageNet-NOC to see if they still perform well. | SP:c5862fb1ef5d251216f75033ad7fbad6fa446323
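The six-step procedure can be sketched end to end with toy stand-ins. The "models" here are 1-D nearest-centroid classifiers on synthetic data and the "corruptions" are simple pixel-value transforms; everything numeric is made up to keep the pipeline runnable, whereas the paper works with ImageNet classifiers:

```python
# Toy end-to-end run of the six-step overlapping-score procedure.
import random

random.seed(0)

def make_data(n):
    # Two classes with 1-D features around 0 and 1.
    return [(random.gauss(y, 0.15), y) for y in [0, 1] * n]

def corrupt(c, data):
    return [(c(x), y) for x, y in data]

def train(data):
    """Toy 'model': per-class mean feature (nearest-centroid classifier)."""
    return {label: sum(x for x, y in data if y == label) /
                   sum(1 for _, y in data if y == label)
            for label in {y for _, y in data}}

def accuracy(model, data):
    predict = lambda x: min(model, key=lambda label: abs(model[label] - x))
    return sum(predict(x) == y for x, y in data) / len(data)

def overlapping_score(train_set, test_set, c1, c2):
    # Steps (1)-(2): corrupted training sets; train m1, m2 (augmented) and standard.
    m1 = train(train_set + corrupt(c1, train_set))
    m2 = train(train_set + corrupt(c2, train_set))
    std = train(train_set)
    # Steps (3)-(4): accuracies on the clean and corrupted test sets.
    A = {name: {"clean": accuracy(m, test_set),
                "c1": accuracy(m, corrupt(c1, test_set)),
                "c2": accuracy(m, corrupt(c2, test_set))}
         for name, m in (("m1", m1), ("m2", m2), ("std", std))}
    # Step (5): robustness scores R_c^m = A_c / A_clean.
    R = {m: {c: A[m][c] / A[m]["clean"] for c in ("c1", "c2")} for m in A}
    # Step (6): overlapping score, equation (2); only clipped from below.
    return max(0.0, 0.5 * (
        (R["m1"]["c2"] - R["std"]["c2"]) / (R["m2"]["c2"] - R["std"]["c2"]) +
        (R["m2"]["c1"] - R["std"]["c1"]) / (R["m1"]["c1"] - R["std"]["c1"])))

shift = lambda x: x + 0.6        # hypothetical corruption c1: brightness shift
wash = lambda x: 0.5 * x + 0.5   # hypothetical corruption c2: contrast wash-out
score = overlapping_score(make_data(100), make_data(100), shift, wash)
print(score)
```

Because both toy corruptions push class-0 features toward the decision boundary from the same side, augmenting with one partially protects against the other, so the score lands strictly between 0 and complete overlap.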
Increasing the Coverage and Balance of Robustness Benchmarks by Using Non-Overlapping Corruptions | (paper text identical to the row above) | The paper considers the problem of measuring the robustness of image classification models to common image perturbations. Datasets of corrupted images, such as ImageNet-C, have been created for this purpose. However, these datasets have been created from an ad-hoc, heuristic selection of perturbations. The present paper proposes a systematic approach to select types of perturbations in a way that spans a large variety of perturbations and assigns similar importance to each perturbation. Similarity of perturbations is measured based on how much training on one perturbation confers robustness against another perturbation. The paper provides an algorithm for selecting perturbations to include, and uses the algorithm to create a variant of ImageNet-C with improved coverage and balance. | SP:c5862fb1ef5d251216f75033ad7fbad6fa446323
Increasing the Coverage and Balance of Robustness Benchmarks by Using Non-Overlapping Corruptions | 1 INTRODUCTION . Neural Networks perform poorly when they deal with images that are drawn from a different distribution than their training samples . Indeed , neural networks are sensitive to adversarial examples ( Szegedy et al. , 2014 ) , background changes ( Xiao et al. , 2020 ) , and common corruptions ( Hendrycks & Dietterich , 2019 ) . Common corruptions are perturbations that change the appearance of images without changing their semantic content . For instance , neural networks are sensitive to noises ( Koziarski & Cyganek , 2017 ) , blurs ( Vasiljevic et al. , 2016 ) or lighting condition variations ( Temel et al. , 2017 ) . Contrary to adversarial examples ( Szegedy et al. , 2014 ) , common corruptions are not artificial perturbations especially crafted to fool neural networks . They naturally appear in industrial applications without any human interfering , and can significantly reduce the performances of neural networks . A neural network is robust to a corruption c , when its performances on samples corrupted with c are close to its performances on clean samples . Some methods have been recently proposed to make neural networks more robust to common corruptions ( Geirhos et al. , 2019 ; Hendrycks * et al. , 2020 ; Rusak et al. , 2020 ) . To determine whether these approaches are effective , it is required to have a method to measure the neural network robustness to common corruptions . The most commonly used method consists in evaluating the performances of neural networks on images distorted by various kinds of common corruptions : ( Hendrycks & Dietterich , 2019 ; Karahan et al. , 2016 ; Geirhos et al. , 2019 ; Temel et al. , 2017 ) . In this study , we call the group of perturbations used to make the robustness estimation a corruption benchmark . 
We also use this term to refer to a set of test images that have been corrupted with these various corruptions . We identify two important factors that should be taken into account when building a corruption benchmark : the balance and the coverage . In this paper , we consider that a corruption c is covered by a benchmark , when increasing the robustness of a network to all the corruptions of this benchmark , also increases the robustness of the network to c. For instance , a benchmark that contains a camera shake blur corruption covers the defocus blur corruption , because the robustnesses towards these two corruptions are correlated ( Vasiljevic et al. , 2016 ) . The coverage of a benchmark is defined as the number of corruptions covered by this benchmark . The more a benchmark covers a wide range of common corruptions , the more it gives a complete view of the robustness of a neural network . At the same time , we consider a benchmark as balanced when it gives the same importance to the robustness to every corruption it contains . For instance , according to a balanced benchmark , being robust to noises is as important as being robust to brightness variations . We argue that most of the existing corruption benchmarks are unbalanced : they give too much importance to the robustness to some corruptions compared to others . The coverage and balance of corruption benchmarks are related to the notion of corruption overlappings . We say that two corruptions overlap when the robustnesses of neural networks towards these corruptions are correlated . The contribution of this paper is fourfold : 1 . We propose the first method to estimate to what extent two corruptions overlap . 2 . We show that building corruption benchmarks with non-overlapping corruptions make them more balanced and able to cover a wider range of corruptions . 3 . We propose a method to build benchmarks that contain only non-overlapping corruptions . 4 . 
We use this method to build from ImageNet , a benchmark of Non-Overlapping Corruptions called ImagNet-NOC , to estimate the robustness of image classifiers to common corruptions . We show that ImagNet-NOC is balanced and covers corruptions that are not covered by ImageNet-C : a reference corruption benchmark ( Hendrycks & Dietterich , 2019 ) . 2 BACKGROUND AND RELATED WORKS . 2.1 ESTIMATING THE ROBUSTNESS OF NETWORKS WITH OUT-OF-DISTRIBUTION SAMPLES . Studying the performances of neural networks on samples that lie outside training distributions , is a widely studied domain . Being able to understand out-of-distribution ( o.o.d ) samples is essential to guarantee that neural networks are reliable in real-world applications . Several benchmarks and methods have been proposed to study this field . For instance , ImageNet-A ( Dan Hendrycks & Song , 2019 ) is a simple benchmark for ImageNet classifiers that contains samples drawn from a different source than the one used to build ImageNet . Adversarial examples , are samples that have been slightly modified to fool neural networks ( Szegedy et al. , 2014 ) . Making sure that models are robust to these kinds of o.o.d samples is essential in terms of security . Artistic renditions ( Hendrycks et al. , 2020 ) or sketches ( Haohan et al. , 2019 ) , can also be useful to determine if neural networks understand the abstract concepts we want them to learn . Methods to study how classifiers are affected by background changes have also been recently proposed ( Beery et al. , 2018 ; Xiao et al. , 2020 ) . Another important aspect of the robustness of neural networks to o.o.d samples , is the robustness to common corruptions . This aspect of the robustness is generally estimated by gathering several commonly encountered corruptions , and by testing the performances of neural networks on images corrupted with these corruptions . 
Diverse selections of common corruptions have been proposed to make a robustness estimation ( Karahan et al. , 2016 ; Laugros et al. , 2019 ; Geirhos et al. , 2019 ) . In particular , ImageNet-C is a popular benchmark used to measure the robustness of ImageNet classifiers ( Hendrycks & Dietterich , 2019 ) . Different common corruption benchmarks have also been proposed in the context of object detection ( Michaelis et al. , 2019 ) , scene classification ( Tadros et al. , 2019 ) or , eye-tracking ( Che et al. , 2020 ) . It is worth noting that some transformations that are in between adversarial attacks and common corruptions have been recently proposed to measure the robustness of image classifiers ( Kang et al. , 2019 ; Dunn et al. , 2019 ; Liu et al. , 2019 ) . 2.2 CORRUPTION OVERLAPPINGS IN BENCHMARKS . It has been noticed that fine-tuning a model with camera shake blur helps it to deal with defocus blur and conversely ( Vasiljevic et al. , 2016 ) . The robustnesses to diverse kinds of noises have also been shown to be closely related ( Laugros et al. , 2019 ) . Even for two corruptions that do not look similar to the human eye , increasing the robustness of a model to one of these corruptions , can imply increasing the robustness to the other corruption ( Kang et al. , 2019 ) . In general , it has been shown that the robustnesses to the corruptions that distort the high-frequency content of images are correlated ( Yin et al. , 2019 ) . In the context of adversarial examples , it is known that the robustness towards one adversarial attack can be correlated with the robustness to another attack ( Tramer & Boneh , 2019 ) . So , it is generally recommended to evaluate the adversarial robustness with attacks that are clearly different from each other ( Carlini et al. , 2019 ) . The experiments carried out in this paper suggest that this recommendation should also be followed in the context of common corruption robustness estimation . 3 CORRUPTION OVERLAPPING . 
3.1 THE CORRUPTION OVERLAPPING SCORE . We consider that two corruptions overlap when the robustness to one of these corruptions is correlated with the robustness to the other corruption . In this section , we propose a methodology to estimate to what extent two corruptions overlap . The Robustness Score . To determine whether two corruptions overlap , we first need to introduce a metric called the robustness score . This score gives an estimation of the robustness of a model m to a corruption c. It is computed with the following formula : Rmc = Ac Aclean . Aclean is the accuracy of m on an uncorrupted test set and Ac is the accuracy of m on the same test set corrupted with c. The higher Rmc is , the more robust m is . Please note that using this metric requires to monitor Aclean and make sure it is relatively high . Otherwise , an untrained model for which Ac equals Aclean , would be considered as robust for example . In this study , this metric is used only in the methodology we propose to estimate the overlapping between two corruptions . The Corruption Overlapping Score . We consider two neural networks m1 and m2 and two corruptions c1 and c2 . m1 and m2 are identical , and trained with exactly the same settings except that their training sets are respectively augmented with the corruptions c1 and c2 . A standard model is trained the same way but only with non-corrupted samples . We propose a method to measure to what extent c1 and c2 overlap . The idea of the method is to see if a data augmentation with c1 makes a model more robust to c2 and conversely . To determine this , m1 , m2 , and a test set are used to compute the following expression : ( Rm2c1 −Rstandardc1 ) + ( Rm1c2 −Rstandardc2 ) ( 1 ) The first term of ( 1 ) measures whether a model that fits exactly c2 is more robust to c1 than the standard model . Symmetrically , the second term measures whether a model that fits exactly c1 is more robust than the standard model to c2 . 
The more making a model fit c1 implies being more robust to c2 and reciprocally , and the more we can suppose that the robustnesses to c1 and c2 are correlated in practice . In other words , the expression ( 1 ) gives an estimation of the overlapping between c1 and c2 . To be more convenient , we would like to build a corruption overlapping score equal to 1 when c1 = c2 , and equal to 0 when the robustnesses to c1 and c2 are not correlated at all . We propose a new expression that respects both conditions : Oc1 , c2 = max { 0 , 1 2 ∗ ( Rm1c2 −Rstandardc2 Rm2c2 −Rstandardc2 + Rm2c1 −Rstandardc1 Rm1c1 −Rstandardc1 ) } ( 2 ) The expression ( 2 ) is a normalized version of ( 1 ) . It measures the overlapping between two corruptions while respecting the conditions mentioned above . Indeed , if a data augmentation with c1 does not increase the robustness to c2 at all and conversely , then the ratios in ( 2 ) are null or negative , so the whole overlapping score is maximized to zero . In other words , when c1 and c2 do not overlap at all , the overlapping score is equal to 0 . Besides , when c1 = c2 , Rm1c2 = R m2 c2 and R m2 c1 = R m1 c1 , so both ratios of ( 2 ) are equal to 1 . Then , Oc1 , c2 = 1 when c1 and c2 completely overlap . How to compute an overlapping score . To get the overlapping score between c1 and c2 , we follow the method illustrated in Figure 1 . This method has six steps , and requires to have a training set , a test set and three untrained models that share the same architecture ( m1 , m2 and standard ) . The step ( 1 ) , consists in using the corruptions c1 and c2 to get two training sets , each corrupted with one corruption . Then , the obtained corrupted sets are used to train the models m1 and m2 in step ( 2 ) . The standard model is also trained during this step but only with non-corrupted samples . In step ( 3 ) , similarly to step ( 1 ) , we use c1 and c2 to get two corrupted versions of the test set . 
The accuracies of the three models on the three test sets are computed in step (4). The scores obtained are used in step (5) to get the robustness scores of each model for the corruptions c1 and c2. These results are then used to compute the overlapping score between c1 and c2 in step (6). | This paper proposes a new dataset for estimating robustness to distribution shift, in particular corruption robustness. They accomplish this by proposing an alternative to ImageNet-C, ImageNet-NOC, which uses different corruptions. They consider corruptions not in ImageNet-C, and they argue that their dataset is superior because they have more "balance and coverage." They select corruptions that are "decorrelated" in a specific sense. | SP:c5862fb1ef5d251216f75033ad7fbad6fa446323 |
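The six-step procedure above can be sketched end to end, assuming hypothetical `train(dataset) -> model`, `accuracy(model, dataset)`, and per-example corruption callables (all stand-ins, not the paper's code):

```python
def overlap_pipeline(train, accuracy, corrupt1, corrupt2, train_set, test_set):
    # Steps (1)-(2): corrupt the training set with c1 and c2, train m1 and m2,
    # and train the standard model on clean data only.
    m1 = train([corrupt1(x) for x in train_set])
    m2 = train([corrupt2(x) for x in train_set])
    std = train(train_set)
    # Step (3): corrupted versions of the test set.
    test_c1 = [corrupt1(x) for x in test_set]
    test_c2 = [corrupt2(x) for x in test_set]
    # Steps (4)-(5): accuracies, then robustness scores R^m_c = A_c / A_clean.
    scores = {}
    for name, model in (("m1", m1), ("m2", m2), ("std", std)):
        a_clean = accuracy(model, test_set)
        scores[name, "c1"] = accuracy(model, test_c1) / a_clean
        scores[name, "c2"] = accuracy(model, test_c2) / a_clean
    # Step (6): equation (2).
    r1 = (scores["m1", "c2"] - scores["std", "c2"]) / (scores["m2", "c2"] - scores["std", "c2"])
    r2 = (scores["m2", "c1"] - scores["std", "c1"]) / (scores["m1", "c1"] - scores["std", "c1"])
    return max(0.0, 0.5 * (r1 + r2))
```

With c1 = c2, m1 and m2 end up identical, both ratios are 1, and the pipeline returns the expected score of 1.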
Learning Disconnected Manifolds: Avoiding The No Gan's Land by Latent Rejection | 1 INTRODUCTION. GANs (Goodfellow et al., 2014) are an effective way to learn complex and high-dimensional distributions, leading to state-of-the-art models for image synthesis in both unconditional (Karras et al., 2019) and conditional settings (Brock et al., 2019). However, it is well known that a single generator with a unimodal latent variable cannot recover a distribution composed of disconnected sub-manifolds (Khayatkhoei et al., 2018). This leads to a common problem for practitioners: the necessary existence of very low-quality samples when covering different modes. This is formalized by Tanielian et al. (2020), who refer to this area as the no GAN's land and provide impossibility theorems on the learning of disconnected manifolds with standard formulations of GANs. Fitting a disconnected target distribution requires an additional mechanism inserting disconnectedness into the modeled distribution. A first solution is to add expressivity to the model: Khayatkhoei et al. (2018) propose to train a mixture of generators, while Gurumurthy et al. (2017) make use of a multi-modal latent distribution. A second solution is to improve the quality of a trained generative model by avoiding its poorest samples (Tao et al., 2018; Azadi et al., 2019; Turner et al., 2019; Grover et al., 2019; Tanaka, 2019). This second line of research relies heavily on a variety of Monte-Carlo algorithms, such as Rejection Sampling or the Metropolis-Hastings algorithm. These methods aim at sampling from a target distribution while having access only to samples generated from a proposal distribution. This idea was successfully applied to GANs, using the previously learned generative distribution as a proposal distribution.
However, one of the main drawbacks is that Monte-Carlo algorithms are only guaranteed to sample from the target distribution under strong assumptions. First, we need access to the density ratios between the proposal and target distributions, or equivalently to a perfect discriminator (Azadi et al., 2019). Second, the support of the proposal distribution must fully cover that of the target distribution, which means no mode collapse. This is known to be very demanding in high dimension, since the intersection of the supports of the proposal and target distributions is likely to be negligible (Arjovsky and Bottou, 2017, Lemma 3). In this setting, an optimal discriminator would give null acceptance probabilities for almost any generated point, leading to lower performance. To tackle the aforementioned issue, we propose a novel method aiming at reducing the Wasserstein distance between the previously trained generative model and the target distribution. This is done via the adversarial training of a third network that learns importance weights in the latent space. The goal is to learn the redistribution of mass of the modeled distribution that best fits the target distribution. To better understand our approach, we first consider a simple 2D motivational example where the real data lie on four disconnected manifolds. To approximate this, the generator splits the latent space into four distinct areas and maps data points located at the frontiers (areas in orange in Figure 1b) out of the true manifold (see Figure 1a). Our method consequently aims at learning latent importance weights that can identify these frontiers and simply avoid them. This is highlighted in Figure 1d, where the importance weighter has identified these four frontiers. When sampling from the new latent distribution, we can now perfectly fit the mixture of four Gaussians (see Figure 1c).
Our contributions are the following: • We discuss works improving the sampling quality of GANs and identify their limitations. • We propose a novel approach that directly modifies the latent space distribution. It provides a principled way to reduce the Wasserstein distance to the target distribution. • We thoroughly compare our method with a large set of previous approaches on a variety of datasets and distributions. We empirically show that our solution significantly reduces the computational cost of inference while demonstrating equal efficiency. Notation. Before moving to the related work section, we briefly present the notation needed in the paper. The goal of the generator is to generate data points that are "similar" to samples collected from some target probability measure µ⋆. The measure µ⋆ is defined on a potentially high-dimensional space R^D, equipped with the Euclidean norm ‖ · ‖. To approach µ⋆, we use a parametric family of generative distributions, where each distribution is the push-forward measure of a latent distribution Z under a continuous function modeled by a neural network. In most practical applications, the random variable Z, defined on a low-dimensional space R^d, follows either a multivariate Gaussian distribution or a uniform distribution. The generator is a parameterized class of functions from R^d to R^D, say G = { Gθ : θ ∈ Θ }, where Θ ⊆ R^p is the set of parameters describing the model. Each function Gθ takes input from Z and outputs "fake" observations with distribution µθ = Gθ♯Z. On the other hand, the discriminator is described by a family of functions from R^D to R, say D = { Dα : α ∈ Λ }, with Λ ⊆ R^Q. Finally, for any given distribution µ, we denote by Sµ its support. 2 RELATED WORK. 2.1 DISCONNECTED MANIFOLD LEARNING: HOW TO TRAIN AND EVALUATE GANS. Goodfellow et al.
(2014) already stated that when training vanilla GANs, the generator could ignore modes of the target distribution: this is mode collapse. A significant step towards understanding this phenomenon was made by Arjovsky and Bottou (2017), who explained that the standard formulation of GANs leads to vanishing or unstable gradients. The authors proposed the Wasserstein GAN (WGAN) architecture (Arjovsky et al., 2017) where, in particular, discriminative functions are restricted to the class of 1-Lipschitz functions. WGANs aim at solving:

sup_{α∈Λ} inf_{θ∈Θ} E_{x∼µ⋆} Dα(x) − E_{z∼Z} Dα(Gθ(z))    (1)

The broader drawback of standard GANs is that, since any modeled distribution is the push-forward of a unimodal distribution by a continuous transformation, it consequently has a connected support. This means that when the generator covers multiple disconnected modes of the target distribution, it necessarily generates samples out of the real data manifold (Khayatkhoei et al., 2018). Consequently, any thorough evaluation of GANs should simultaneously assess both the quality and the variety of the generated samples. Sajjadi et al. (2018) argue that a single-digit metric such as the Inception Score (Salimans et al., 2016) or the Fréchet Inception Distance (Heusel et al., 2017) is thus not adequate to compare generative models. To solve this issue, the authors propose a Precision/Recall metric that aims at measuring both mode dropping and mode inventing. In the Improved Precision/Recall (Kynkäänniemi et al., 2019), the precision refers to the portion of generated points that belong to the target manifold, while the recall measures how much of the target distribution can be reconstructed by the model distribution. Building on this metric, Tanielian et al. (2020) highlighted the trade-off property of GANs by deriving upper bounds on the precision of standard GANs.
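A minimal Monte-Carlo estimate of the critic value in equation (1) can be sketched as follows, assuming the critic `D` and generator `G` are plain callables (toy stand-ins, not the paper's networks):

```python
import numpy as np

def wgan_critic_value(D, G, real_batch, z_batch):
    # Empirical version of equation (1): the 1-Lipschitz critic D is pushed up
    # on real samples and down on generated ones; the generator then minimizes
    # this same quantity over its parameters theta.
    real_term = np.mean([D(x) for x in real_batch])
    fake_term = np.mean([D(G(z)) for z in z_batch])
    return real_term - fake_term
```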
To solve this problem, a common direction of research consists in over-parameterizing the generative model. Khayatkhoei et al. (2018) enforce diversity by using a mixture of generators, while Gurumurthy et al. (2017) suggest that a mixture of Gaussians in the latent space is efficient to learn diverse and limited data. 2.2 IMPROVING THE QUALITY OF TRAINED GENERATORS. To better fit disconnected manifolds with standard GAN architectures, another line of research consists in inserting disconnectedness into a previously learned generative distribution µθ. Tanielian et al. (2020) proposed a heuristic to remove the no GAN's land (i.e., samples mapped out of the true manifold): rejecting data points with a high Jacobian Frobenius norm. Another possibility would be to use one of the different Monte-Carlo methods (Robert and Casella, 2013) and apply it to GANs. Building on well-known inference theory, Azadi et al. (2019) suggest the use of rejection sampling to improve the quality of the proposal distribution µθ. One can compute density ratios using either a classifier trained from scratch or the discriminator obtained at the end of training. Consequently, in this Discriminator Rejection Sampling (DRS), any generated data point x ∼ µθ is accepted with the following acceptance probability Pa:

Pa(x) = µ⋆(x) / (M µθ(x)), where M = max_{x∈Sµθ} µ⋆(x) / µθ(x),    (2)

where µ⋆ and µθ here refer to the density functions. Similarly, Turner et al. (2019) use the same density ratios and derive MH-GAN, an adaptation of the Metropolis-Hastings algorithm (Hastings, 1970) that improves the sampling from µθ. Finally, Grover et al. (2019) use these density ratios r as importance weights and define an importance-resampled generative model whose density is now defined by µ̂θ(x) ∝ µθ(x) × r(x).
In order to perform discrete sampling from µ̂θ, the authors rely on the Sampling-Importance-Resampling (SIR) algorithm (Rubin, 1988; Liu and Chen, 1998). This defines a new distribution µ̂θ^SIR:

µ̂θ^SIR(x_i) = r(x_i) / Σ_{j=1}^{n} r(x_j), where x_1, ..., x_n are drawn i.i.d. from µθ.

Note that these algorithms rely on the same density ratios and an acceptance-rejection scheme. In Rejection Sampling, the acceptance rate is uncontrollable but sampling from µ⋆ is assured. With SIR and MH, the acceptance rate is controllable but sampling from µ⋆ is no longer guaranteed. 3 ADVERSARIAL LEARNING OF LATENT IMPORTANCE WEIGHTS. 3.1 OUR APPROACH. Similar to previous works, our method consists in improving the performance of a given generative model post-training. Given a trained WGAN (Gθ, Dα), we now propose to learn importance weights in the latent space. To do so, we use a feed-forward neural network from R^d to R^+, say Ω = { wϕ : ϕ ∈ Φ }. The neural network wϕ is trained using an adversarial process with the discriminator Dα, whilst keeping the weights of Gθ frozen. We now want to solve the following:

sup_{α∈Λ} inf_{ϕ∈Φ} E_{x∼µ⋆} Dα(x) − E_{z∼Z} ( wϕ(z) × Dα(Gθ(z)) )    (3)

Note that our formulation can also be plugged on top of many different objective functions. Interestingly, the use of the predictor wϕ defines a new latent space distribution whose density γ̂ is defined by γ̂(z) ∝ wϕ(z) × γ(z). Consequently, the newly defined modeled distribution µ̂θ is defined as the push-forward µ̂θ = Gθ♯γ̂. The proposed method can be seen as minimizing the Wasserstein distance to the target distribution over an enlarged class of generative distributions. The network wϕ thus learns how to redistribute the mass of µθ such that µ̂θ is closer to µ⋆ in terms of Wasserstein distance. However, as in the field of counterfactual estimation, a naive optimization of importance weights by gradient descent can lead to trivial solutions.
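The two acceptance-rejection schemes discussed above can be sketched as follows; `ratio` stands for an estimate of the density ratio µ⋆/µθ (in DRS it comes from the discriminator), and the helper names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def drs_accept(ratio: float, M: float) -> bool:
    # Discriminator Rejection Sampling: accept x ~ mu_theta with
    # probability r(x)/M, where M upper-bounds the density ratio (equation (2)).
    return rng.random() < ratio / M

def sir_resample(samples, ratios, k):
    # Sampling-Importance-Resampling: keep k of the n proposals, drawn with
    # probabilities proportional to their density ratios (self-normalized).
    probs = np.asarray(ratios, dtype=float)
    probs = probs / probs.sum()
    idx = rng.choice(len(samples), size=k, p=probs)
    return [samples[i] for i in idx]
```

This makes the trade-off noted in the text concrete: `drs_accept` may reject indefinitely (uncontrollable acceptance rate), while `sir_resample` always returns k samples but only approximates µ⋆.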
First, if, for example, the Wasserstein critic Dα outputs negative values for all generated samples, the network wϕ could simply learn to avoid the dataset and output 0 everywhere. To avoid this issue, we follow Swaminathan and Joachims (2015c) and scale the output of the discriminator such that the reward is always positive. A second problem comes from the fact that equation 3 can now be minimized not only by putting large importance weights wϕ(z) on the examples with high likelihoods Dα(Gθ(z)), but also by maximizing the sum of the weights: this is propensity overfitting (Swaminathan and Joachims, 2015a). To stabilize the optimisation process, we consequently introduce two important regularization techniques: Self-normalization. Similarly to Swaminathan and Joachims (2015a), we advocate the use of a normalization of the importance weights. To be more precise, we enforce the expectation of the importance weights to be close to 1 by adding a penalty term. By doing so, we prevent propensity overfitting, since the sum of the importance weights in the batch is bounded. Soft-clipping. To avoid cases where small areas of z have really high wϕ(z) values, which would lead to mode collapse, we enforce a soft clipping on the weights (Bottou et al., 2013; Grover et al., 2019). Note that this constraint on wϕ(z) could also be implemented with a bounded activation function on the final layer, such as a re-scaled sigmoid or tanh activation. Finally, we thus get the following objective function:

sup_{ϕ∈Φ} E_{z∼Z} [ wϕ(z) ( Dα(Gθ(z)) − ∇ ) ]  (discriminator reward)  − λ1 ( E_{z∼Z} wϕ(z) − 1 )^2  (self-normalization)  − λ2 E_{z∼Z} max( 0, wϕ(z) − m )^2  (soft-clipping),    (4)

where ∇ = min_{z∼Z} Dα(Gθ(z)). λ1, λ2, and m are hyper-parameters (values displayed in the Appendix). | The paper proposes a method for an improvement of generative adversarial models via post-processing of their latent variable distribution.
To be more precise, the method proposes to train an additional neural network that outputs an importance weight for each point of the latent space, thus reweighting the final distribution in the space of images. For the optimization of this network, the authors use the dual form of the Wasserstein distance, where they multiply the initial latent density by the output of the network. To fix the ill-behaved objective, the authors add two regularization terms to it. The proposed objective is then validated on 3 MNIST-like datasets quantitatively and on CelebA qualitatively. | SP:42059920072ac2c09c17ad97e79303e5bee38534 |
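Using batch statistics in place of the expectations, the regularized objective of equation (4) above can be sketched as below; the batch minimum of the critic scores stands in for ∇, and all names are illustrative:

```python
import numpy as np

def latent_weight_objective(w, d_scores, lam1=1.0, lam2=1.0, m=2.0):
    # w: importance weights w_phi(z) on a batch of latent samples;
    # d_scores: critic outputs D_alpha(G_theta(z)) on the same batch.
    reward = np.mean(w * (d_scores - d_scores.min()))        # shifted, non-negative reward
    self_norm = lam1 * (np.mean(w) - 1.0) ** 2               # keeps E[w] close to 1
    soft_clip = lam2 * np.mean(np.maximum(0.0, w - m) ** 2)  # penalizes very large weights
    return reward - self_norm - soft_clip                    # maximized w.r.t. phi
```

The two penalty terms implement exactly the safeguards described in the text: self-normalization blocks propensity overfitting, and soft-clipping prevents a few latent regions from absorbing all the mass.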
Learning Disconnected Manifolds: Avoiding The No Gan's Land by Latent Rejection | 1 INTRODUCTION . GANs ( Goodfellow et al. , 2014 ) are an effective way to learn complex and high-dimensional distributions , leading to state-of-the-art models for image synthesis in both unconditional ( Karras et al. , 2019 ) and conditional settings ( Brock et al. , 2019 ) . However , it is well-known that a single generator with a unimodal latent variable can not recover a distribution composed of disconnected sub-manifolds ( Khayatkhoei et al. , 2018 ) . This leads to a common problem for practitioners : the necessary existence of very-low quality samples when covering different modes . This is formalized by Tanielian et al . ( 2020 ) which refers to this area as the no GAN ’ s land and provides impossibility theorems on the learning of disconnected manifolds with standard formulations of GANs . Fitting a disconnected target distribution requires an additional mechanism inserting disconnectedness in the modeled distribution . A first solution is to add some expressivity to the model : Khayatkhoei et al . ( 2018 ) propose to train a mixture of generators while Gurumurthy et al . ( 2017 ) make use of a multi-modal latent distribution . A second solution is to improve the quality of a trained generative model by avoiding its poorest samples ( Tao et al. , 2018 ; Azadi et al. , 2019 ; Turner et al. , 2019 ; Grover et al. , 2019 ; Tanaka , 2019 ) . This second line of research relies heavily on a variety of Monte-Carlo algorithms , such as Rejection Sampling or the Metropolis-Hastings . These methods aim at sampling from a target distribution , while having only access to samples generated from a proposal distribution . This idea was successfully applied to GANs , using the previously learned generative distribution as a proposal distribution . 
However , one of the main drawback is that Monte-Carlo algorithms only guarantee to sample from the target distribution under strong assumptions . First , we need access to the density ratios between the proposal and target distributions or equivalently to a perfect discriminator ( Azadi et al. , 2019 ) . Second , the support of the proposal distribution must fully cover the one of the target distribution , which means no mode collapse . This is known to be very demanding in high dimension since the intersection of supports between the proposal and target distribution is likely to be negligible ( Arjovsky and Bottou , 2017 , Lemma 3 ) . In this setting , an optimal discriminator would give null acceptance probabilities for almost any generated points , leading to a lower performance . To tackle the aforementioned issue , we propose a novel method aiming at reducing the Wasserstein distance between the previously trained generative model and the target distribution . This is done via the adversarial training of a third network that learns importance weights in the latent space . The goal is to learn the redistribution of mass of the modeled distribution that best fits the target distribution . To better understand our approach , we first consider a simple 2D motivational example where the real data lies on four disconnected manifolds . To approximate this , the generator splits the latent space into four distinct areas and maps data points located in the frontiers , areas in orange in Figure 1b , out of the true manifold ( see Figure 1a ) . Our method consequently aims at learning latent importance weights that can identify these frontiers and simply avoid them . This is highlighted in Figure 1d where the importance weighter has identified these four frontiers . When sampling from the new latent distribution , we can now perfectly fit the mixture of four gaussians ( see Figure 1c ) . 
Our contributions are the following: • We discuss works improving the sampling quality of GANs and identify their limitations. • We propose a novel approach that directly modifies the latent space distribution. It provides a principled way to reduce the Wasserstein distance to the target distribution. • We thoroughly compare our method with a large set of previous approaches on a variety of datasets and distributions. We empirically show that our solution significantly reduces the computational cost of inference while demonstrating equal efficiency. Notation. Before moving to the related work section, we briefly present the notation needed in the paper. The goal of the generator is to generate data points that are "similar" to samples collected from some target probability measure µ⋆. The measure µ⋆ is defined on a potentially high-dimensional space R^D, equipped with the Euclidean norm ‖ · ‖. To approach µ⋆, we use a parametric family of generative distributions, where each distribution is the push-forward measure of a latent distribution Z under a continuous function modeled by a neural network. In most practical applications, the random variable Z, defined on a low-dimensional space R^d, follows either a multivariate Gaussian distribution or a uniform distribution. The generator is a parameterized class of functions from R^d to R^D, say G = { Gθ : θ ∈ Θ }, where Θ ⊆ R^p is the set of parameters describing the model. Each function Gθ takes input from Z and outputs "fake" observations with distribution µθ = Gθ♯Z. On the other hand, the discriminator is described by a family of functions from R^D to R, say D = { Dα : α ∈ Λ }, with Λ ⊆ R^Q. Finally, for any given distribution µ, we denote by Sµ its support. 2 RELATED WORK. 2.1 DISCONNECTED MANIFOLD LEARNING: HOW TO TRAIN AND EVALUATE GANS. Goodfellow et al.
(2014) already stated that when training vanilla GANs, the generator could ignore modes of the target distribution: this is mode collapse. A significant step towards understanding this phenomenon was made by Arjovsky and Bottou (2017), who explained that the standard formulation of GANs leads to vanishing or unstable gradients. The authors proposed the Wasserstein GAN (WGAN) architecture (Arjovsky et al., 2017) where, in particular, discriminative functions are restricted to the class of 1-Lipschitz functions. WGANs aim at solving:

sup_{α∈Λ} inf_{θ∈Θ} E_{x∼µ⋆} Dα(x) − E_{z∼Z} Dα(Gθ(z))    (1)

The broader drawback of standard GANs is that, since any modeled distribution is the push-forward of a unimodal distribution by a continuous transformation, it consequently has a connected support. This means that when the generator covers multiple disconnected modes of the target distribution, it necessarily generates samples out of the real data manifold (Khayatkhoei et al., 2018). Consequently, any thorough evaluation of GANs should simultaneously assess both the quality and the variety of the generated samples. Sajjadi et al. (2018) argue that a single-digit metric such as the Inception Score (Salimans et al., 2016) or the Fréchet Inception Distance (Heusel et al., 2017) is thus not adequate to compare generative models. To solve this issue, the authors propose a Precision/Recall metric that aims at measuring both mode dropping and mode inventing. In the Improved Precision/Recall (Kynkäänniemi et al., 2019), the precision refers to the portion of generated points that belong to the target manifold, while the recall measures how much of the target distribution can be reconstructed by the model distribution. Building on this metric, Tanielian et al. (2020) highlighted the trade-off property of GANs by deriving upper bounds on the precision of standard GANs.
To solve this problem, a common direction of research consists in over-parameterizing the generative model. Khayatkhoei et al. (2018) enforce diversity by using a mixture of generators, while Gurumurthy et al. (2017) suggest that a mixture of Gaussians in the latent space is efficient to learn diverse and limited data. 2.2 IMPROVING THE QUALITY OF TRAINED GENERATORS. To better fit disconnected manifolds with standard GAN architectures, another line of research consists in inserting disconnectedness into a previously learned generative distribution µθ. Tanielian et al. (2020) proposed a heuristic to remove the no GAN's land (i.e., samples mapped out of the true manifold): rejecting data points with a high Jacobian Frobenius norm. Another possibility would be to use one of the different Monte-Carlo methods (Robert and Casella, 2013) and apply it to GANs. Building on well-known inference theory, Azadi et al. (2019) suggest the use of rejection sampling to improve the quality of the proposal distribution µθ. One can compute density ratios using either a classifier trained from scratch or the discriminator obtained at the end of training. Consequently, in this Discriminator Rejection Sampling (DRS), any generated data point x ∼ µθ is accepted with the following acceptance probability Pa:

Pa(x) = µ⋆(x) / (M µθ(x)), where M = max_{x∈Sµθ} µ⋆(x) / µθ(x),    (2)

where µ⋆ and µθ here refer to the density functions. Similarly, Turner et al. (2019) use the same density ratios and derive MH-GAN, an adaptation of the Metropolis-Hastings algorithm (Hastings, 1970) that improves the sampling from µθ. Finally, Grover et al. (2019) use these density ratios r as importance weights and define an importance-resampled generative model whose density is now defined by µ̂θ(x) ∝ µθ(x) × r(x).
In order to perform discrete sampling from µ̂θ, the authors rely on the Sampling-Importance-Resampling (SIR) algorithm (Rubin, 1988; Liu and Chen, 1998). This defines a new distribution µ̂θ^SIR:

µ̂θ^SIR(x_i) = r(x_i) / Σ_{j=1}^{n} r(x_j), where x_1, ..., x_n are drawn i.i.d. from µθ.

Note that these algorithms rely on the same density ratios and an acceptance-rejection scheme. In Rejection Sampling, the acceptance rate is uncontrollable but sampling from µ⋆ is assured. With SIR and MH, the acceptance rate is controllable but sampling from µ⋆ is no longer guaranteed. 3 ADVERSARIAL LEARNING OF LATENT IMPORTANCE WEIGHTS. 3.1 OUR APPROACH. Similar to previous works, our method consists in improving the performance of a given generative model post-training. Given a trained WGAN (Gθ, Dα), we now propose to learn importance weights in the latent space. To do so, we use a feed-forward neural network from R^d to R^+, say Ω = { wϕ : ϕ ∈ Φ }. The neural network wϕ is trained using an adversarial process with the discriminator Dα, whilst keeping the weights of Gθ frozen. We now want to solve the following:

sup_{α∈Λ} inf_{ϕ∈Φ} E_{x∼µ⋆} Dα(x) − E_{z∼Z} ( wϕ(z) × Dα(Gθ(z)) )    (3)

Note that our formulation can also be plugged on top of many different objective functions. Interestingly, the use of the predictor wϕ defines a new latent space distribution whose density γ̂ is defined by γ̂(z) ∝ wϕ(z) × γ(z). Consequently, the newly defined modeled distribution µ̂θ is defined as the push-forward µ̂θ = Gθ♯γ̂. The proposed method can be seen as minimizing the Wasserstein distance to the target distribution over an enlarged class of generative distributions. The network wϕ thus learns how to redistribute the mass of µθ such that µ̂θ is closer to µ⋆ in terms of Wasserstein distance. However, as in the field of counterfactual estimation, a naive optimization of importance weights by gradient descent can lead to trivial solutions.
First, if, for example, the Wasserstein critic Dα outputs negative values for all generated samples, the network wϕ could simply learn to avoid the dataset and output 0 everywhere. To avoid this issue, we follow Swaminathan and Joachims (2015c) and scale the output of the discriminator such that the reward is always positive. A second problem comes from the fact that equation 3 can now be minimized not only by putting large importance weights wϕ(z) on the examples with high likelihoods Dα(Gθ(z)), but also by maximizing the sum of the weights: this is propensity overfitting (Swaminathan and Joachims, 2015a). To stabilize the optimisation process, we consequently introduce two important regularization techniques: Self-normalization. Similarly to Swaminathan and Joachims (2015a), we advocate the use of a normalization of the importance weights. To be more precise, we enforce the expectation of the importance weights to be close to 1 by adding a penalty term. By doing so, we prevent propensity overfitting, since the sum of the importance weights in the batch is bounded. Soft-clipping. To avoid cases where small areas of z have really high wϕ(z) values, which would lead to mode collapse, we enforce a soft clipping on the weights (Bottou et al., 2013; Grover et al., 2019). Note that this constraint on wϕ(z) could also be implemented with a bounded activation function on the final layer, such as a re-scaled sigmoid or tanh activation. Finally, we thus get the following objective function:

sup_{ϕ∈Φ} E_{z∼Z} [ wϕ(z) ( Dα(Gθ(z)) − ∇ ) ]  (discriminator reward)  − λ1 ( E_{z∼Z} wϕ(z) − 1 )^2  (self-normalization)  − λ2 E_{z∼Z} max( 0, wϕ(z) − m )^2  (soft-clipping),    (4)

where ∇ = min_{z∼Z} Dα(Gθ(z)). λ1, λ2, and m are hyper-parameters (values displayed in the Appendix).
| This work aims at improving the sample quality of generative models through better sampling, which is a relevant problem and has brought about a line of work [1,2,3,4,5], to name a few. By leveraging the idea of importance sampling, the authors train an additional network. The latter uses the information contained in the learned discriminator to assign importance weights to the latent points, thus defining a new distribution in the latent space. Subsequently, rejection sampling on the newly defined latent distribution is applied to obtain inputs for a generator network. By treating the problem in the latent space, the paper introduces latentRS method that compares favourably to several existing methods in terms of computational complexity for generating a sample. The authors propose one more method, latentGA, following the path in the latent space that maximizes the learned importance weights. The paper also discusses the limitations of the previously proposed methods and presents their empirical comparison on several datasets and metrics. | SP:42059920072ac2c09c17ad97e79303e5bee38534 |
Learning Disconnected Manifolds: Avoiding The No Gan's Land by Latent Rejection | 1 INTRODUCTION . GANs ( Goodfellow et al. , 2014 ) are an effective way to learn complex and high-dimensional distributions , leading to state-of-the-art models for image synthesis in both unconditional ( Karras et al. , 2019 ) and conditional settings ( Brock et al. , 2019 ) . However , it is well-known that a single generator with a unimodal latent variable can not recover a distribution composed of disconnected sub-manifolds ( Khayatkhoei et al. , 2018 ) . This leads to a common problem for practitioners : the necessary existence of very-low quality samples when covering different modes . This is formalized by Tanielian et al . ( 2020 ) which refers to this area as the no GAN ’ s land and provides impossibility theorems on the learning of disconnected manifolds with standard formulations of GANs . Fitting a disconnected target distribution requires an additional mechanism inserting disconnectedness in the modeled distribution . A first solution is to add some expressivity to the model : Khayatkhoei et al . ( 2018 ) propose to train a mixture of generators while Gurumurthy et al . ( 2017 ) make use of a multi-modal latent distribution . A second solution is to improve the quality of a trained generative model by avoiding its poorest samples ( Tao et al. , 2018 ; Azadi et al. , 2019 ; Turner et al. , 2019 ; Grover et al. , 2019 ; Tanaka , 2019 ) . This second line of research relies heavily on a variety of Monte-Carlo algorithms , such as Rejection Sampling or the Metropolis-Hastings . These methods aim at sampling from a target distribution , while having only access to samples generated from a proposal distribution . This idea was successfully applied to GANs , using the previously learned generative distribution as a proposal distribution . 
However , one of the main drawback is that Monte-Carlo algorithms only guarantee to sample from the target distribution under strong assumptions . First , we need access to the density ratios between the proposal and target distributions or equivalently to a perfect discriminator ( Azadi et al. , 2019 ) . Second , the support of the proposal distribution must fully cover the one of the target distribution , which means no mode collapse . This is known to be very demanding in high dimension since the intersection of supports between the proposal and target distribution is likely to be negligible ( Arjovsky and Bottou , 2017 , Lemma 3 ) . In this setting , an optimal discriminator would give null acceptance probabilities for almost any generated points , leading to a lower performance . To tackle the aforementioned issue , we propose a novel method aiming at reducing the Wasserstein distance between the previously trained generative model and the target distribution . This is done via the adversarial training of a third network that learns importance weights in the latent space . The goal is to learn the redistribution of mass of the modeled distribution that best fits the target distribution . To better understand our approach , we first consider a simple 2D motivational example where the real data lies on four disconnected manifolds . To approximate this , the generator splits the latent space into four distinct areas and maps data points located in the frontiers , areas in orange in Figure 1b , out of the true manifold ( see Figure 1a ) . Our method consequently aims at learning latent importance weights that can identify these frontiers and simply avoid them . This is highlighted in Figure 1d where the importance weighter has identified these four frontiers . When sampling from the new latent distribution , we can now perfectly fit the mixture of four gaussians ( see Figure 1c ) . 
Our contributions are the following : • We discuss works improving the sampling quality of GANs and identify their limitations . • We propose a novel approach that directly modifies the latent space distribution . It provides a principled way to reduce the Wasserstein distance to the target distribution . • We thoroughly compare our method with a large set of previous approaches on a variety of datasets and distributions . We empirically show that our solution significantly reduces the computational cost of inference while demonstrating equal efficiency . Notation . Before moving to the related work section , we shortly present the notation needed in the paper . The goal of the generator is to generate data points that are “ similar ” to samples collected from some target probability measure µ⋆ . The measure µ⋆ is defined on a potentially high-dimensional space R^D , equipped with the euclidean norm ‖ · ‖ . To approach µ⋆ , we use a parametric family of generative distributions where each distribution is the push-forward measure of a latent distribution Z through a continuous function modeled by a neural network . In most practical applications , the random variable Z , defined on a low-dimensional space R^d , follows either a multivariate Gaussian distribution or a uniform distribution . The generator is a parameterized class of functions from R^d to R^D , say G = { Gθ : θ ∈ Θ } , where Θ ⊆ R^p is the set of parameters describing the model . Each function Gθ takes input from Z and outputs “ fake ” observations with distribution µθ = Gθ♯Z . On the other hand , the discriminator is described by a family of functions from R^D to R , say D = { Dα : α ∈ Λ } , Λ ⊆ R^Q , where each Dα is a parameterized discriminative function . Finally , for any given distribution µ , we denote by Sµ its support . 2 RELATED WORK . 2.1 DISCONNECTED MANIFOLD LEARNING : HOW TO TRAIN AND EVALUATE GANS . Goodfellow et al .
( 2014 ) already stated that when training vanilla GANs , the generator could ignore modes of the target distribution : this is mode collapse . A significant step towards understanding this phenomenon was made by Arjovsky and Bottou ( 2017 ) , who explained that the standard formulation of GANs leads to vanishing or unstable gradients . The authors proposed the Wasserstein GANs ( WGANs ) architecture ( Arjovsky et al. , 2017 ) where , in particular , discriminative functions are restricted to the class of 1-Lipschitz functions . WGANs aim at solving : sup_{α∈Λ} inf_{θ∈Θ} E_{x∼µ⋆} [ Dα ( x ) ] − E_{z∼Z} [ Dα ( Gθ ( z ) ) ] ( 1 ) The broader drawback of standard GANs is that , since any modeled distribution is the push-forward of a unimodal distribution by a continuous transformation , it consequently has a connected support . This means that when the generator covers multiple disconnected modes of the target distribution , it necessarily generates samples out of the real data manifold ( Khayatkhoei et al. , 2018 ) . Consequently , any thorough evaluation of GANs should assess simultaneously both the quality and the variety of the generated samples . Sajjadi et al . ( 2018 ) argue that a single-digit metric such as the Inception Score ( Salimans et al. , 2016 ) or the Fréchet Inception Distance ( Heusel et al. , 2017 ) is thus not adequate to compare generative models . To solve this issue , the authors propose a Precision/Recall metric that aims at measuring both mode dropping and mode inventing . In the Improved Precision/Recall ( Kynkäänniemi et al. , 2019 ) , the precision refers to the portion of generated points that belongs to the target manifold , while the recall measures how much of the target distribution can be re-constructed by the model distribution . Building on this metric , Tanielian et al . ( 2020 ) highlighted the trade-off property of GANs by deriving upper bounds on the precision of standard GANs .
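The WGAN objective of equation 1 can be estimated by Monte-Carlo on a batch. The sketch below uses toy stand-ins for the critic and the generator; their names and functional forms are illustrative assumptions, not the paper's networks.

```python
def critic(x):
    # stand-in 1-Lipschitz critic: identity clipped to [-1, 1]
    return max(-1.0, min(1.0, x))

def generator(z):
    # stand-in generator: a simple linear map of the latent variable
    return 0.5 * z

def wgan_value(reals, latents):
    # Monte-Carlo estimate of E_{x~mu*}[D(x)] - E_{z~Z}[D(G(z))] from eq (1)
    e_real = sum(critic(x) for x in reals) / len(reals)
    e_fake = sum(critic(generator(z)) for z in latents) / len(latents)
    return e_real - e_fake
```

In training, the critic parameters ascend this quantity while the generator parameters descend it.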
To solve this problem , a common direction of research consists in over-parameterizing the generative model . Khayatkhoei et al . ( 2018 ) enforce diversity by using a mixture of generators while Gurumurthy et al . ( 2017 ) suggest that a mixture of Gaussians in the latent space is efficient to learn diverse and limited data . 2.2 IMPROVING THE QUALITY OF TRAINED GENERATORS . To better fit disconnected manifolds with standard GANs architectures , another line of research consists in inserting disconnectedness into a previously learned generative distribution µθ . Tanielian et al . ( 2020 ) proposed a heuristic to remove the no GAN ’ s land ( i.e . samples mapped out of the true manifold ) : rejecting data points with a high Jacobian Frobenius norm . Another possibility would be to use one of the different Monte-Carlo methods ( Robert and Casella , 2013 ) and apply it to GANs . Building on the well-known inference theory , Azadi et al . ( 2019 ) suggest the use of rejection sampling to improve the quality of the proposal distribution µθ . One can compute density ratios using either a classifier trained from scratch or the discriminator obtained at the end of the training . Consequently , in this Discriminator Rejection Sampling ( DRS ) , any generated data point x ∼ µθ is accepted with the following acceptance probability Pa : Pa ( x ) = µ⋆ ( x ) / ( M µθ ( x ) ) where M = max_{x∈Sµθ} µ⋆ ( x ) / µθ ( x ) , ( 2 ) where µ⋆ and µθ here refer to the density functions . Similarly , Turner et al . ( 2019 ) use the same density ratios and derive MH-GAN , an adaptation of the Metropolis-Hastings algorithm ( Hastings , 1970 ) , that improves the sampling from µθ . Finally , Grover et al . ( 2019 ) use these density ratios r as importance weights and define an importance resampled generative model whose density is now defined by µ̂θ ( x ) ∝ µθ ( x ) × r ( x ) .
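The DRS acceptance rule of equation 2 can be sketched as follows, assuming the density ratio r(x) = µ⋆(x)/µθ(x) and its maximum M are available (in practice they are estimated from a discriminator; here a toy analytic ratio stands in for that estimate):

```python
import random

def drs_accept_prob(ratio, max_ratio):
    # eq (2): P_a(x) = mu*(x) / (M mu_theta(x)) = r(x) / M,
    # with r the density ratio and M its maximum over the proposal support
    return ratio / max_ratio

def rejection_sample(proposal, ratio, max_ratio, n, rng):
    # keep drawing from the proposal until n points are accepted
    accepted = []
    while len(accepted) < n:
        x = proposal(rng)
        if rng.random() < drs_accept_prob(ratio(x), max_ratio):
            accepted.append(x)
    return accepted

# Toy check: proposal U[0, 1], target density 2x on [0, 1],
# so r(x) = 2x and M = 2; the target mean is 2/3.
rng = random.Random(0)
xs = rejection_sample(lambda r: r.random(), lambda x: 2.0 * x, 2.0, 2000, rng)
mean = sum(xs) / len(xs)
```

As the text notes, this samples exactly from the target but offers no control over the acceptance rate.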
In order to perform discrete sampling from µ̂θ , the authors rely on the Sampling-Importance-Resampling ( SIR ) algorithm ( Rubin , 1988 ; Liu and Chen , 1998 ) . This defines a new distribution µ̂θ^SIR : µ̂θ^SIR ( xi ) = r ( xi ) / ∑_{j=1}^{n} r ( xj ) where x1 , . . . , xn ∼ µθ . Note that these algorithms rely on the same density ratios and an acceptance-rejection scheme . In Rejection Sampling , the acceptance rate is uncontrollable but sampling from µ⋆ is assured . With SIR and MH , the acceptance rate is controllable but sampling from µ⋆ is no longer guaranteed . 3 ADVERSARIAL LEARNING OF LATENT IMPORTANCE WEIGHTS . 3.1 OUR APPROACH . Similar to previous works , our method consists in improving the performance of a given generative model , post-training . Given a trained WGAN ( Gθ , Dα ) , we now propose to learn importance weights in the latent space . To do so , we use a feed-forward neural network from R^d to R+ , say Ω = { wϕ : ϕ ∈ Φ } . The neural network wϕ is trained using an adversarial process with the discriminator Dα , whilst keeping the weights of Gθ frozen . We now want to solve the following : sup_{α∈Λ} inf_{ϕ∈Φ} E_{x∼µ⋆} [ Dα ( x ) ] − E_{z∼Z} [ wϕ ( z ) × Dα ( Gθ ( z ) ) ] ( 3 ) Note that our formulation can also be plugged on top of many different objective functions . Interestingly , the use of the predictor wϕ defines a new latent space distribution whose density γ̂ is defined by γ̂ ( z ) ∝ wϕ ( z ) × γ ( z ) . Consequently , the newly defined modeled distribution µ̂θ is the push-forward µ̂θ = Gθ♯γ̂ . The proposed method can be seen as minimizing the Wasserstein distance to the target distribution over an increased class of generative distributions . The network wϕ thus learns how to redistribute the mass of µθ such that µ̂θ is closer to µ⋆ in terms of Wasserstein distance . However , as in the field of counterfactual estimation , a naive optimization of importance weights by gradient descent can lead to trivial solutions .
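The SIR step above amounts to a weighted discrete resampling of a batch drawn from the proposal. The sketch below is a generic implementation of that resampling, not the paper's code:

```python
import random

def sir_resample(xs, weights, n, rng):
    # SIR: draw n points from {x_i} with probabilities r(x_i) / sum_j r(x_j)
    total = sum(weights)
    probs = [w / total for w in weights]
    # build the cumulative distribution for inverse-CDF sampling
    cum, acc = [], 0.0
    for p in probs:
        acc += p
        cum.append(acc)
    out = []
    for _ in range(n):
        u = rng.random()
        idx = next(i for i, c in enumerate(cum) if u <= c)
        out.append(xs[idx])
    return out
```

When one weight dominates, the resampled batch concentrates on that point, which is why the text pairs SIR with a controllable but no-longer-exact sampling guarantee.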
First , if , for example , the Wasserstein critic Dα outputs negative values for all generated samples , the network wϕ could simply learn to avoid the dataset and output 0 everywhere . To avoid this issue , we follow Swaminathan and Joachims ( 2015c ) and scale the output of the discriminator such that the reward is always positive . A second problem comes from the fact that equation 3 can now be minimized not only by putting large importance weights wϕ ( z ) on the examples with high likelihoods Dα ( Gθ ( z ) ) , but also by maximizing the sum of the weights : this is the propensity overfitting ( Swaminathan and Joachims , 2015a ) . To stabilize the optimisation process , we consequently introduce two important regularization techniques : Self-normalization . Similarly to Swaminathan and Joachims ( 2015a ) , we advocate the use of a normalization of the importance weights . To be more precise , we enforce the expectation of the importance weights to be close to 1 by adding a penalty term . By doing so , we prohibit the propensity overfitting since the sum of the importance weights in the batch is bounded . Soft-clipping . To avoid cases where small areas of z have very high wϕ ( z ) values , which would lead to mode collapse , we enforce a soft-clipping on the weights ( Bottou et al. , 2013 ; Grover et al. , 2019 ) . Note that this constraint on wϕ ( z ) could also be implemented with a bounded activation function on the final layer , such as a re-scaled sigmoid or tanh activation . Finally , we thus get the following objective function : sup_{ϕ∈Φ} E_{z∼Z} [ wϕ ( z ) ( Dα ( Gθ ( z ) ) − ∇ ) ] ( discriminator reward ) − λ1 ( E_{z∼Z} wϕ ( z ) − 1 )² ( self-normalization ) − λ2 E_{z∼Z} max ( 0 , wϕ ( z ) − m )² ( soft-clipping ) , ( 4 ) where ∇ = min_{z∼Z} Dα ( Gθ ( z ) ) is the offset that keeps the reward positive . λ1 , λ2 , and m are hyper-parameters ( values displayed in Appendix ) . | The paper proposes a new algorithm for improved sampling of GANs .
Since GANs are continuous functions that act on a connected latent space , they will have trouble learning distributions whose support is disconnected ( e.g. , clustered data ) . The proposed method tries to fix this issue and is motivated by rejection sampling . However , instead of using density-based algorithms for rejecting samples , the authors take a fixed pre-trained generative model and train a neural network that learns to reject samples from the latent space . | SP:42059920072ac2c09c17ad97e79303e5bee38534
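Returning to the objective in equation 4 above, a batch estimate of its three terms (discriminator reward, self-normalization penalty, soft-clipping penalty) can be sketched as follows; the function name and the scalar inputs are illustrative assumptions:

```python
def latent_objective(ws, rewards, lam1, lam2, m):
    # Monte-Carlo estimate of eq (4) on a batch of latent draws.
    # ws[i] = w_phi(z_i); rewards[i] = D_alpha(G_theta(z_i)) minus the
    # offset that makes the reward positive.
    n = len(ws)
    reward_term = sum(w * r for w, r in zip(ws, rewards)) / n
    mean_w = sum(ws) / n
    self_norm = (mean_w - 1.0) ** 2                       # keeps E[w] near 1
    clip = sum(max(0.0, w - m) ** 2 for w in ws) / n      # soft-clipping
    return reward_term - lam1 * self_norm - lam2 * clip
```

Maximizing this over the weights (with the generator frozen) rewards high-quality latent regions while the two penalties block propensity overfitting and weight blow-up.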
Revisiting Explicit Regularization in Neural Networks for Reliable Predictive Probability | 1 INTRODUCTION . As deep learning models have become pervasive in real-world decision-systems , the importance of producing a reliable predictive probability is increasing . In this paper , we call predictive probability reliable if it is well-calibrated and precisely represents uncertainty about its predictions . The calibrated behavior refers to the ability to match its predictive probability of an event to the longterm frequency of the event occurrence ( Dawid , 1982 ) . The reliable predictive probability benefits many downstream tasks such as anomaly detection ( Malinin & Gales , 2019 ) , classification with rejection ( Lakshminarayanan et al. , 2017 ) , and exploration in reinforcement learning ( Gal & Ghahramani , 2016 ) . More importantly , deep learning systems with more reliable predictive probability can provide better feedback for explaining what is going on , situations when its prediction becomes uncertain , and unexpected anomalies to users . Unfortunately , neural networks are prone to be overconfident and lack uncertainty representation ability , and this problem has become a fundamental concern in the deep learning community . Bayesian methods have innate abilities to produce reliable predictive probability . Specifically , they express the probability distribution over parameters , in which uncertainty in the parameter space is automatically determined by data ( MacKay , 1992 ; Neal , 1993 ) . Then , uncertainty in prediction can be represented by means of providing rich information about aggregated predictions from different parameter configurations such as entropy and mutual information . From this perspective , deterministic neural networks selecting a single parameter configuration that can not provide such rich information naturally lack the uncertainty representation ability . 
However , the automatic determination of parameter uncertainty in the light of data , i.e. , posterior inference , comes with prohibitive computational costs . Therefore , the mainstream approach for improving the predictive probability quality has been an efficient adoption of the Bayesian principle into neural networks ( Gal & Ghahramani , 2016 ; Ritter et al. , 2018 ; Teye et al. , 2018 ; Joo et al. , 2020a ) . Recent works ( Lakshminarayanan et al. , 2017 ; Müller et al. , 2019 ; Thulasidasan et al. , 2019 ) has discovered the hidden gems of label smoothing ( Szegedy et al. , 2016 ) , mixup ( Zhang et al. , 2018 ) , and adversarial training ( Goodfellow et al. , 2015 ) , which improve the calibration performance and the uncertainty representation ability . These findings present a new possibility of improving the reliability of the predictive probability without changing the deterministic nature of neural networks . This direction is appealing because it can be applied in a plug-and-play fashion to the existing building blocks . This means that they can inherit the scalability , computational efficiency , and surprising generalization performance of the deterministic neural networks , for which Bayesian neural networks often struggle ( Wu et al. , 2019 ; Osawa et al. , 2019 ; Joo et al. , 2020a ) . Motivated by these observations , we investigate a general direction from the regularization perspective to mitigate the unreliable predictive probability problem , rather than proposing new constructive heuristics or discovering hidden properties of specific methods . Our main contribution is twofold . First , we present a new direction for alleviating the unreliable predictive behavior , which is readily applicable , computationally efficient , and scalable to large-scale models compared to Bayesian neural networks or ensemble methods . 
Second , our findings provide a novel view of the role of explicit regularization in deep learning , which improves the reliability of the predictive probability . 2 ANALYZING THE CAUSE OF UNRELIABLE PREDICTIVE PROBABILITY . 2.1 BACKGROUND . We consider a classification problem with i.i.d . training samples D = { ( x^(i) , y^(i) ) }_{i=1}^{N} drawn from an unknown distribution Px,y whose corresponding tuple of random variables is ( x , y ) . We denote X as an input space and Y as a set of categories { 1 , 2 , · · · , K } . Let fW : X → Z be a neural network with parameters W , where Z = R^K is a logit space . On top of the logit space , the softmax σ : R^K → ∆^{K−1} normalizes the exponential of the logits : φ^W_k ( x ) = exp ( f^W_k ( x ) ) / ∑_i exp ( f^W_i ( x ) ) ( 1 ) where we let φ^W_k ( x ) = σ_k ( fW ( x ) ) for brevity . σ_k ( fW ( x ) ) is often interpreted as the predictive probability that the label of x belongs to class k ( Bridle , 1990 ) . The probabilistic interpretation of neural network outputs gives the natural minimization objective for classification – the cross-entropy between the predictive probability and the one-hot encoded label : ℓCE ( y , φ^W ( x ) ) = − ∑_k 1_y ( k ) log φ^W_k ( x ) , where 1_A ( ω ) is an indicator function taking one if ω ∈ A and zero otherwise . By minimizing the cross-entropy ( or equivalently maximizing the log-likelihood ) with stochastic gradient descent ( SGD ) ( Robbins & Monro , 1951 ) or its variants , modern neural networks achieve surprising generalization performance . As the demand for neural networks in real-world decision-making is emerging , reliable predictive probability has been of interest in the machine learning community . One important quality of predictive probability is calibrated behavior .
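The softmax of equation 1 and the cross-entropy objective can be sketched directly. The max-subtraction inside the softmax is a standard numerical-stability trick, an implementation detail not stated in the text:

```python
import math

def softmax(logits):
    # eq (1): phi_k = exp(f_k) / sum_i exp(f_i); subtracting the max
    # leaves the result unchanged but avoids overflow
    mx = max(logits)
    exps = [math.exp(v - mx) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(label, logits):
    # l_CE(y, phi) = -sum_k 1_y(k) log phi_k = -log phi_y for a one-hot label
    return -math.log(softmax(logits)[label])
```

For equal logits the softmax is uniform, and the cross-entropy equals log K, its maximum-entropy value.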
Specifically , based on the notion of calibration in the classical forecasting problem ( Dawid , 1982 ) , the perfectly calibrated model can be defined as follows : p ( y = k | φ^W ( x ) = p ) = p_k , ∀ p ∈ ∆^{K−1} , k ∈ { 1 , 2 , · · · , K } ( 2 ) Here note that a calibrated model need not be one producing φ^W_k ( x ) = p ( y = k | x ) . In practice , the expected calibration error ( ECE ) ( Naeini et al. , 2015 ) is widely used as a calibration performance measure . ECE on a dataset DT can be computed by binning predictions into M groups based on their confidences1 and then averaging their calibration scores by : ∑_{i=1}^{M} ( |Gi| / |DT| ) |acc ( Gi ) − conf ( Gi ) | ( 3 ) where Gi = { x : i/M < max_k φ^W_k ( x ) ≤ ( i + 1 ) /M , x ∈ DT } ; acc ( Gi ) and conf ( Gi ) are the average accuracy and confidence of predictions in group Gi , respectively . 1Throughout this paper , the confidence ( or predictive confidence ) at x refers to max_k φ^W_k ( x ) , which is different from the confidence in the statistics literature . Another metric for evaluating the reliability of predictive probability is based on predictive entropy , which evaluates how well the model is aware of its ignorance . To this end , predictive entropy is measured on samples that the model is ignorant of , such as misclassified or out-of-distribution ( OOD ) samples . For a classifier having reliable predictive probability , we expect high uncertainty of φ^W ( x ) on such samples , i.e. , the answer “ I don ’ t know . ” However , several recent findings show that the resulting neural network produces unreliable predictive probability , which makes interpreting the softmax output as the “ predictive probability ” implausible ( Gal & Ghahramani , 2016 ) . For example , Figure 1 illustrates the unreliable predictive behavior of ResNet ( He et al.
, 2016 ) : the network produces outputs with high confidence on misclassified examples ( Figure 1 ( upper ) ) and provides low predictive entropy on out-of-distribution samples , albeit the samples belong to none of the classes it learned ( Figure 1 ( lower ) ) . One simple yet effective solution for improving the predictive probability ’ s quality is temperature scaling ( Guo et al. , 2017 ) , which adjusts the smoothness of the softmax so that the resulting predictive probability maximizes the log-likelihood on an unseen dataset D′ : max_τ ∑_{ ( x , y ) ∈D′ } log [ exp ( f^W_y ( x ) /τ ) / ∑_j exp ( f^W_j ( x ) /τ ) ] ( 4 ) where W is a fixed pretrained weight and τ is a temperature controlling the smoothness of the softmax output . This simple method makes the softmax output a more reliable predictive probability . For instance , the predictive confidence matches its actual accuracy well , and the predictive entropy on out-of-distribution samples significantly increases ( Figure 1 ) . 2.2 A CLOSER LOOK AT THE LOG-LIKELIHOOD ON UNSEEN SAMPLES . Motivated by the success of the temperature scaling , we aim to find the relationship between the log-likelihood and the calibration performance . To this end , let s be a binary random variable indicating whether a model correctly classifies a sample . Then , we can derive the following upper bound by using the law of total expectation : E_{x,y} [ log φ^W_y ( x ) ] = E_x [ p_{s|x} ( s = 1 ) E_{y|s=1,x} [ log φ^W_y ( x ) ] + p_{s|x} ( s = 0 ) E_{y|s=0,x} [ log φ^W_y ( x ) ] ] ≤ E_x [ p_{s|x} ( s = 1 ) log φ^W_{mx} ( x ) + p_{s|x} ( s = 0 ) log ( 1 − φ^W_{mx} ( x ) ) ] ( 5 ) where mx is the predictive class such that mx = argmax_k f^W_k ( x ) , p_{s|x} ( s = 1 ) = p ( y = mx | x ) , and the inequality comes from the fact that E_{y|s=0,x} [ log φ^W_y ( x ) ] ≤ max_{k≠mx} log φ^W_k ( x ) ≤ log ( 1 − φ^W_{mx} ( x ) ) . The upper bound can be thought of as a probabilistic measure of the calibration performance .
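The temperature-scaling fit of equation 4 is a one-dimensional optimization over τ; the sketch below replaces the usual gradient-based solver with a simple grid search, an assumption made here for brevity:

```python
import math

def nll(logits_batch, labels, tau):
    # negative log-likelihood of the temperature-scaled softmax (eq 4)
    total = 0.0
    for logits, y in zip(logits_batch, labels):
        scaled = [v / tau for v in logits]
        mx = max(scaled)
        logz = mx + math.log(sum(math.exp(v - mx) for v in scaled))
        total -= scaled[y] - logz  # -log softmax_y
    return total

def fit_temperature(logits_batch, labels, grid):
    # one scalar parameter, so a grid search stands in for LBFGS
    return min(grid, key=lambda t: nll(logits_batch, labels, t))

# toy overconfident model: large margin but only 50% accurate,
# so a large tau (softer predictions) fits the held-out set better
logits = [[4.0, 0.0], [4.0, 0.0]]
labels = [0, 1]
best_tau = fit_temperature(logits, labels, [0.5, 1.0, 2.0, 4.0, 8.0, 100.0])
```

Because the scaling is monotone in each class score, τ changes confidence but never changes the predicted class.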
Specifically , suppose a neural network produces an answer with probability φ^W_{mx} ( x ) and refuses to answer otherwise . Then , the upper bound becomes the divergence between the model ’ s ability to correctly predict the sample and the model ’ s willingness to answer . When we consider the inequality in equation 5 applied to the empirical mean on the test dataset , this inequality clearly explains why the temperature scaling is helpful ; it increases a lower bound of the calibration error by bringing the τ-modified confidence closer to its accuracy . More importantly , the inequality in equation 5 explains the impacts of the cross-entropy minimization on the behavior of the neural network . Specifically , suppose E_{ ( x , y ) ∼D } [ log φ^W_y ( x ) ] → 0 for some dataset D . Then , it can be shown that p_{s|x} ( s = 1 ) → 1 and φ^W_{mx} ( x ) → 1 for all ( x , y ) ∈ D . This holds because we have min_k φ^W_k ( x ) ≥ 1/K for any x and the minima of p log q + ( 1 − p ) log ( 1 − q ) are ( p , q ) → ( 1 , 1 ) or ( p , q ) → ( 0 , 0 ) . Therefore , for high-capacity models that can make the log-likelihood on the training dataset Dtr close to zero , e.g. , deep neural networks , the behavior on the training set will converge to a configuration that corrects all samples with perfect confidence . Figure 2 illustrates this phenomenon in ResNet trained on CIFAR-100 : E_{x∼Dtr} [ log φ^W_{mx} ] → 0 ( a ) and E_{ ( x , y ) ∼Dtr } [ p_{s|x} ( s = 1 ) ] → 1 ( b ) as the training continues . Why are these convergences problematic ? We observe that properties of a function which depends only on x , evaluated on two different datasets , are very close to each other if the datasets are drawn from the same distribution , unlike values that depend on the external randomness y|x ; that is , |E_{x∼Dtr} [ g ( x ) ] − E_{x′∼Dval} [ g ( x′ ) ] | ≪ |E_{ ( x , y ) ∼Dtr} [ h ( x , y ) ] − E_{ ( x′ , y′ ) ∼Dval} [ h ( x′ , y′ ) ] | for some functions g ( · ) and h ( · ) .
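The expected calibration error of equation 3, used throughout to quantify this miscalibration, can be computed with a direct binning implementation:

```python
def ece(confidences, correct, n_bins=10):
    # eq (3): sum_i |G_i|/|D_T| * |acc(G_i) - conf(G_i)|, binning
    # predictions by their confidence max_k phi_k(x)
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        idx = min(int(c * n_bins), n_bins - 1)  # confidence 1.0 -> last bin
        bins[idx].append((c, ok))
    total = len(confidences)
    err = 0.0
    for g in bins:
        if not g:
            continue
        conf = sum(c for c, _ in g) / len(g)
        acc = sum(1.0 for _, ok in g if ok) / len(g)
        err += len(g) / total * abs(acc - conf)
    return err
```

An overconfident model (confidence 0.95, accuracy 0.5) thus incurs a per-bin gap of 0.45, matching the failure mode described above.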
For example , the empirical mean of the maximum log-probability log φWmx ( x ) ( Figure 2 ( a ) ) and L2 norm of fW ( Figure 2 ( c ) ) 2 on training samples are significantly similar to those values on unseen samples compared to the log-likelihood ( Figure 2 ( a ) ) and the accuracy ( Figure 2 ( b ) ) . We conjecture that the log-likelihood maximization with a high-capacity neural network naturally results in a high calibration error on unseen samples due to this discrepancy ; that is , it produces perfect confidence on unseen samples as it does on the training samples where it can not produce the perfect accuracy . Fortunately , this result also indicates that restricting the predictive confidence φWmx ( x ) on training samples will directly impact the predictive confidence on unseen samples and therefore can reduce the calibration error . For this reason , we explore various ways to restrict confidence and show their efficacy on a wide range of tasks for the rest of the paper . | There has been an ongoing debate about the role and importance of explicit and implicit regularization in deep learning. This paper attempts to address this issue by arguing that explicit regularization is required for the generalization of predictive probabilities, which may not be observed under the 0-1 loss. The paper provides some discussion and numerical evidence to support the claim. | SP:b2ed74153a3a56f3ad5511eeb3990e068d412192 |
Revisiting Explicit Regularization in Neural Networks for Reliable Predictive Probability | 1 INTRODUCTION . As deep learning models have become pervasive in real-world decision-systems , the importance of producing a reliable predictive probability is increasing . In this paper , we call predictive probability reliable if it is well-calibrated and precisely represents uncertainty about its predictions . The calibrated behavior refers to the ability to match its predictive probability of an event to the longterm frequency of the event occurrence ( Dawid , 1982 ) . The reliable predictive probability benefits many downstream tasks such as anomaly detection ( Malinin & Gales , 2019 ) , classification with rejection ( Lakshminarayanan et al. , 2017 ) , and exploration in reinforcement learning ( Gal & Ghahramani , 2016 ) . More importantly , deep learning systems with more reliable predictive probability can provide better feedback for explaining what is going on , situations when its prediction becomes uncertain , and unexpected anomalies to users . Unfortunately , neural networks are prone to be overconfident and lack uncertainty representation ability , and this problem has become a fundamental concern in the deep learning community . Bayesian methods have innate abilities to produce reliable predictive probability . Specifically , they express the probability distribution over parameters , in which uncertainty in the parameter space is automatically determined by data ( MacKay , 1992 ; Neal , 1993 ) . Then , uncertainty in prediction can be represented by means of providing rich information about aggregated predictions from different parameter configurations such as entropy and mutual information . From this perspective , deterministic neural networks selecting a single parameter configuration that can not provide such rich information naturally lack the uncertainty representation ability . 
However , the automatic determination of parameter uncertainty in the light of data , i.e. , posterior inference , comes with prohibitive computational costs . Therefore , the mainstream approach for improving the predictive probability quality has been an efficient adoption of the Bayesian principle into neural networks ( Gal & Ghahramani , 2016 ; Ritter et al. , 2018 ; Teye et al. , 2018 ; Joo et al. , 2020a ) . Recent works ( Lakshminarayanan et al. , 2017 ; Müller et al. , 2019 ; Thulasidasan et al. , 2019 ) has discovered the hidden gems of label smoothing ( Szegedy et al. , 2016 ) , mixup ( Zhang et al. , 2018 ) , and adversarial training ( Goodfellow et al. , 2015 ) , which improve the calibration performance and the uncertainty representation ability . These findings present a new possibility of improving the reliability of the predictive probability without changing the deterministic nature of neural networks . This direction is appealing because it can be applied in a plug-and-play fashion to the existing building blocks . This means that they can inherit the scalability , computational efficiency , and surprising generalization performance of the deterministic neural networks , for which Bayesian neural networks often struggle ( Wu et al. , 2019 ; Osawa et al. , 2019 ; Joo et al. , 2020a ) . Motivated by these observations , we investigate a general direction from the regularization perspective to mitigate the unreliable predictive probability problem , rather than proposing new constructive heuristics or discovering hidden properties of specific methods . Our main contribution is twofold . First , we present a new direction for alleviating the unreliable predictive behavior , which is readily applicable , computationally efficient , and scalable to large-scale models compared to Bayesian neural networks or ensemble methods . 
Second , our findings provide a novel view of the role of explicit regularization in deep learning , which improves the reliability of the predictive probability . 2 ANALYZING THE CAUSE OF UNRELIABLE PREDICTIVE PROBABILITY . 2.1 BACKGROUND . We consider a classification problem with i.i.d . training samples D = { ( x^(i) , y^(i) ) }_{i=1}^{N} drawn from an unknown distribution Px,y whose corresponding tuple of random variables is ( x , y ) . We denote X as an input space and Y as a set of categories { 1 , 2 , · · · , K } . Let fW : X → Z be a neural network with parameters W , where Z = R^K is a logit space . On top of the logit space , the softmax σ : R^K → ∆^{K−1} normalizes the exponential of the logits : φ^W_k ( x ) = exp ( f^W_k ( x ) ) / ∑_i exp ( f^W_i ( x ) ) ( 1 ) where we let φ^W_k ( x ) = σ_k ( fW ( x ) ) for brevity . σ_k ( fW ( x ) ) is often interpreted as the predictive probability that the label of x belongs to class k ( Bridle , 1990 ) . The probabilistic interpretation of neural network outputs gives the natural minimization objective for classification – the cross-entropy between the predictive probability and the one-hot encoded label : ℓCE ( y , φ^W ( x ) ) = − ∑_k 1_y ( k ) log φ^W_k ( x ) , where 1_A ( ω ) is an indicator function taking one if ω ∈ A and zero otherwise . By minimizing the cross-entropy ( or equivalently maximizing the log-likelihood ) with stochastic gradient descent ( SGD ) ( Robbins & Monro , 1951 ) or its variants , modern neural networks achieve surprising generalization performance . As the demand for neural networks in real-world decision-making is emerging , reliable predictive probability has been of interest in the machine learning community . One important quality of predictive probability is calibrated behavior .
Specifically , based on the notion of calibration in the classical forecasting problem ( Dawid , 1982 ) , the perfectly calibrated model can be defined as follows : p ( y = k | φ^W ( x ) = p ) = p_k , ∀ p ∈ ∆^{K−1} , k ∈ { 1 , 2 , · · · , K } ( 2 ) Here note that a calibrated model need not be one producing φ^W_k ( x ) = p ( y = k | x ) . In practice , the expected calibration error ( ECE ) ( Naeini et al. , 2015 ) is widely used as a calibration performance measure . ECE on a dataset DT can be computed by binning predictions into M groups based on their confidences1 and then averaging their calibration scores by : ∑_{i=1}^{M} ( |Gi| / |DT| ) |acc ( Gi ) − conf ( Gi ) | ( 3 ) where Gi = { x : i/M < max_k φ^W_k ( x ) ≤ ( i + 1 ) /M , x ∈ DT } ; acc ( Gi ) and conf ( Gi ) are the average accuracy and confidence of predictions in group Gi , respectively . 1Throughout this paper , the confidence ( or predictive confidence ) at x refers to max_k φ^W_k ( x ) , which is different from the confidence in the statistics literature . Another metric for evaluating the reliability of predictive probability is based on predictive entropy , which evaluates how well the model is aware of its ignorance . To this end , predictive entropy is measured on samples that the model is ignorant of , such as misclassified or out-of-distribution ( OOD ) samples . For a classifier having reliable predictive probability , we expect high uncertainty of φ^W ( x ) on such samples , i.e. , the answer “ I don ’ t know . ” However , several recent findings show that the resulting neural network produces unreliable predictive probability , which makes interpreting the softmax output as the “ predictive probability ” implausible ( Gal & Ghahramani , 2016 ) . For example , Figure 1 illustrates the unreliable predictive behavior of ResNet ( He et al.
, 2016 ) : the network produces outputs with high confidence on misclassified examples ( Figure 1 ( upper ) ) and provides low predictive entropy on out-of-distribution samples , albeit the samples belong to none of the classes it learned ( Figure 1 ( lower ) ) . One simple yet effective solution for improving the predictive probability ’ s quality is temperature scaling ( Guo et al. , 2017 ) , which adjusts the smoothness of the softmax so that the resulting predictive probability maximizes the log-likelihood on an unseen dataset D′ : max_τ ∑_{ ( x , y ) ∈D′ } log [ exp ( f^W_y ( x ) /τ ) / ∑_j exp ( f^W_j ( x ) /τ ) ] ( 4 ) where W is a fixed pretrained weight and τ is a temperature controlling the smoothness of the softmax output . This simple method makes the softmax output a more reliable predictive probability . For instance , the predictive confidence matches its actual accuracy well , and the predictive entropy on out-of-distribution samples significantly increases ( Figure 1 ) . 2.2 A CLOSER LOOK AT THE LOG-LIKELIHOOD ON UNSEEN SAMPLES . Motivated by the success of the temperature scaling , we aim to find the relationship between the log-likelihood and the calibration performance . To this end , let s be a binary random variable indicating whether a model correctly classifies a sample . Then , we can derive the following upper bound by using the law of total expectation : E_{x,y} [ log φ^W_y ( x ) ] = E_x [ p_{s|x} ( s = 1 ) E_{y|s=1,x} [ log φ^W_y ( x ) ] + p_{s|x} ( s = 0 ) E_{y|s=0,x} [ log φ^W_y ( x ) ] ] ≤ E_x [ p_{s|x} ( s = 1 ) log φ^W_{mx} ( x ) + p_{s|x} ( s = 0 ) log ( 1 − φ^W_{mx} ( x ) ) ] ( 5 ) where mx is the predictive class such that mx = argmax_k f^W_k ( x ) , p_{s|x} ( s = 1 ) = p ( y = mx | x ) , and the inequality comes from the fact that E_{y|s=0,x} [ log φ^W_y ( x ) ] ≤ max_{k≠mx} log φ^W_k ( x ) ≤ log ( 1 − φ^W_{mx} ( x ) ) . The upper bound can be thought of as a probabilistic measure of the calibration performance .
Specifically, suppose a neural network produces an answer with probability $\phi^W_{m_x}(x)$ and refuses to answer otherwise. Then, the upper bound becomes the divergence between the model's ability to correctly predict the sample and the model's willingness to answer. When the inequality in equation 5 is applied to the empirical mean on the test dataset, it clearly explains why temperature scaling is helpful: it increases a lower bound of the calibration error by bringing the confidence, modified by $\tau$, closer to the accuracy. More importantly, the inequality in equation 5 explains the impact of cross-entropy minimization on the behavior of the neural network. Specifically, suppose $\mathbb{E}_{(x,y) \sim D}[\log \phi^W_y(x)] \to 0$ for some dataset $D$. Then, it can be shown that $p_{s|x}(s{=}1) \to 1$ and $\phi^W_{m_x}(x) \to 1$ for all $(x,y) \in D$. This holds because $\max_k \phi^W_k(x) \ge 1/K$ for any $x$, and $p \log q + (1 - p)\log(1 - q)$ approaches its maximum of zero only as $(p,q) \to (1,1)$ or $(p,q) \to (0,0)$. Therefore, for high-capacity models that can make the log-likelihood on the training dataset $D_{tr}$ close to zero, e.g., deep neural networks, the behavior on the training set will converge to a configuration that classifies all samples correctly with perfect confidence. Figure 2 illustrates this phenomenon for ResNet trained on CIFAR-100: $\mathbb{E}_{x \sim D_{tr}}[\log \phi^W_{m_x}(x)] \to 0$ (a) and $\mathbb{E}_{(x,y) \sim D_{tr}}[p_{s|x}(s{=}1)] \to 1$ (b) as training continues. Why are these convergences problematic? We observe that quantities that depend only on $x$, evaluated on two different datasets drawn from the same distribution, are very close to each other, unlike quantities that depend on the external randomness $y|x$; that is, $|\mathbb{E}_{x \sim D_{tr}}[g(x)] - \mathbb{E}_{x' \sim D_{val}}[g(x')]| \ll |\mathbb{E}_{(x,y) \sim D_{tr}}[h(x,y)] - \mathbb{E}_{(x',y') \sim D_{val}}[h(x',y')]|$ for some functions $g(\cdot)$ and $h(\cdot)$.
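The temperature-scaling fit of equation 4 can be sketched with a simple grid search; the synthetic "overconfident" held-out set below is illustrative (none of these numbers come from the paper):

```python
import numpy as np

def log_likelihood(logits, labels, tau):
    """Mean log-likelihood of equation 4 at temperature tau."""
    z = logits / tau
    z = z - z.max(axis=1, keepdims=True)                  # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, taus=np.linspace(0.5, 10.0, 191)):
    """Grid search for the tau that maximizes held-out log-likelihood."""
    return max(taus, key=lambda t: log_likelihood(logits, labels, t))

# Toy held-out set: the model is right ~70% of the time but always assigns
# near-perfect confidence (logit margin 8), i.e. it is badly miscalibrated.
rng = np.random.default_rng(0)
n, k = 1000, 3
labels = rng.integers(0, k, size=n)
pred = labels.copy()
flip = rng.random(n) < 0.3
pred[flip] = (pred[flip] + 1) % k        # ~30% confidently wrong
logits = np.zeros((n, k))
logits[np.arange(n), pred] = 8.0
tau = fit_temperature(logits, labels)    # tau well above 1: confidence
                                         # is pulled down toward ~0.7
```

Since the per-sample optimum of $p \log q + (1-p)\log(1-q)$ over $q$ is $q = p$, the fitted temperature drives the confidence toward the accuracy, as the bound in equation 5 suggests.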
For example, the empirical means of the maximum log-probability $\log \phi^W_{m_x}(x)$ (Figure 2(a)) and of the $L_2$ norm of $f^W$ (Figure 2(c)) on training samples are far more similar to their values on unseen samples than the log-likelihood (Figure 2(a)) and the accuracy (Figure 2(b)) are. We conjecture that log-likelihood maximization with a high-capacity neural network naturally results in a high calibration error on unseen samples due to this discrepancy; that is, the network produces perfect confidence on unseen samples, as it does on the training samples, even though it cannot achieve perfect accuracy there. Fortunately, this result also indicates that restricting the predictive confidence $\phi^W_{m_x}(x)$ on training samples will directly impact the predictive confidence on unseen samples and can therefore reduce the calibration error. For this reason, we explore various ways to restrict confidence and show their efficacy on a wide range of tasks in the rest of the paper. | The main contribution of this paper is to propose new regularization methods in deep neural networks that produce well-calibrated probability scores. The authors argue that regularization is better than post-processing, such as temperature scaling, because temperature scaling would require a separate dataset for calibration. In addition, regularization is added to the loss, so it does not alter other components of the neural network. There are two forms of regularization that the authors propose: (1) regularizing in the function space, and (2) in the probability space. Interestingly, they show that both regularization methods yield well-calibrated scores, which cannot be attributed to minimizing the norm of the weights alone. | SP:b2ed74153a3a56f3ad5511eeb3990e068d412192
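As a generic illustration of confidence restriction (this is not a reproduction of the paper's specific regularizers), one can add an entropy bonus to the cross-entropy loss, which directly discourages $\phi^W_{m_x}(x) \to 1$ on training samples:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def penalized_loss(logits, labels, lam=0.1):
    """Cross-entropy plus a generic confidence penalty: an entropy bonus
    that keeps the softmax from collapsing to a one-hot output."""
    p = softmax(logits)
    n = len(labels)
    ce = -np.log(p[np.arange(n), labels]).mean()
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    return ce - lam * entropy   # maximizing entropy restricts confidence
```

With `lam = 0` this reduces to the plain cross-entropy; a positive `lam` rewards spreading probability mass, so the training optimum no longer sits at perfect confidence.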
Revisiting Explicit Regularization in Neural Networks for Reliable Predictive Probability | 1 INTRODUCTION. As deep learning models have become pervasive in real-world decision systems, the importance of producing a reliable predictive probability is increasing. In this paper, we call a predictive probability reliable if it is well calibrated and precisely represents uncertainty about its predictions. Calibrated behavior refers to the ability to match the predictive probability of an event to the long-term frequency of the event's occurrence (Dawid, 1982). Reliable predictive probability benefits many downstream tasks, such as anomaly detection (Malinin & Gales, 2019), classification with rejection (Lakshminarayanan et al., 2017), and exploration in reinforcement learning (Gal & Ghahramani, 2016). More importantly, deep learning systems with more reliable predictive probability can give users better feedback, explaining what is going on, flagging situations in which predictions become uncertain, and reporting unexpected anomalies. Unfortunately, neural networks are prone to overconfidence and lack the ability to represent uncertainty, and this problem has become a fundamental concern in the deep learning community. Bayesian methods have an innate ability to produce reliable predictive probability. Specifically, they express a probability distribution over parameters, in which uncertainty in the parameter space is automatically determined by the data (MacKay, 1992; Neal, 1993). Uncertainty in a prediction can then be represented by aggregating predictions from different parameter configurations and providing rich information about them, such as entropy and mutual information. From this perspective, deterministic neural networks, which select a single parameter configuration and cannot provide such rich information, naturally lack the ability to represent uncertainty.
However, the automatic determination of parameter uncertainty in the light of data, i.e., posterior inference, comes with prohibitive computational costs. Therefore, the mainstream approach to improving the quality of the predictive probability has been the efficient adoption of the Bayesian principle into neural networks (Gal & Ghahramani, 2016; Ritter et al., 2018; Teye et al., 2018; Joo et al., 2020a). Recent works (Lakshminarayanan et al., 2017; Müller et al., 2019; Thulasidasan et al., 2019) have discovered the hidden gems of label smoothing (Szegedy et al., 2016), mixup (Zhang et al., 2018), and adversarial training (Goodfellow et al., 2015), which improve calibration performance and the ability to represent uncertainty. These findings present a new possibility of improving the reliability of the predictive probability without changing the deterministic nature of neural networks. This direction is appealing because it can be applied in a plug-and-play fashion to existing building blocks. This means that such methods inherit the scalability, computational efficiency, and surprising generalization performance of deterministic neural networks, with which Bayesian neural networks often struggle (Wu et al., 2019; Osawa et al., 2019; Joo et al., 2020a). Motivated by these observations, we investigate a general direction from the regularization perspective to mitigate the unreliable-predictive-probability problem, rather than proposing new constructive heuristics or discovering hidden properties of specific methods. Our main contribution is twofold. First, we present a new direction for alleviating unreliable predictive behavior that is readily applicable, computationally efficient, and scalable to large-scale models compared to Bayesian neural networks or ensemble methods.
Second, our findings provide a novel view of the role of explicit regularization in deep learning, namely that it improves the reliability of the predictive probability. 2 ANALYZING THE CAUSE OF UNRELIABLE PREDICTIVE PROBABILITY. 2.1 BACKGROUND. We consider a classification problem with i.i.d. training samples $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$ drawn from an unknown distribution $P_{x,y}$ whose corresponding tuple of random variables is $(x, y)$. We denote by $X$ the input space and by $Y$ the set of categories $\{1, 2, \cdots, K\}$. Let $f^W : X \to Z$ be a neural network with parameters $W$, where $Z = \mathbb{R}^K$ is the logit space. On top of the logit space, the softmax $\sigma : \mathbb{R}^K \to \Delta^{K-1}$ normalizes the exponential of the logits: $\phi^W_k(x) = \frac{\exp(f^W_k(x))}{\sum_i \exp(f^W_i(x))}$ (1), where we let $\phi^W_k(x) = \sigma_k(f^W(x))$ for brevity. $\sigma_k(f^W(x))$ is often interpreted as the predictive probability that the label of $x$ belongs to class $k$ (Bridle, 1990). The probabilistic interpretation of neural network outputs gives the natural minimization objective for classification, the cross-entropy between the predictive probability and the one-hot encoded label: $\ell_{CE}(y, \phi^W(x)) = -\sum_k \mathbb{1}_y(k) \log \phi^W_k(x)$, where $\mathbb{1}_A(\omega)$ is an indicator function taking one if $\omega \in A$ and zero otherwise. By minimizing the cross-entropy (or equivalently, maximizing the log-likelihood) with stochastic gradient descent (SGD) (Robbins & Monro, 1951) or its variants, modern neural networks achieve surprising generalization performance. As the demand for neural networks in real-world decision-making grows, reliable predictive probability has been of interest in the machine learning community. One important quality of predictive probability is calibrated behavior.
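The softmax of equation 1 and the cross-entropy objective can be sketched directly; the toy logits are illustrative:

```python
import numpy as np

def softmax(logits):
    """Equation 1: phi_k(x) = exp(f_k(x)) / sum_i exp(f_i(x)),
    mapping logits in R^K onto the simplex Delta^{K-1}."""
    z = logits - logits.max()   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(label, probs):
    """l_CE(y, phi(x)) = -sum_k 1_y(k) log phi_k(x) = -log phi_y(x)."""
    return -np.log(probs[label])

f = np.array([2.0, 1.0, 0.0])            # logits f^W(x)
phi = softmax(f)
assert abs(phi.sum() - 1.0) < 1e-12      # output lands on the simplex
loss = cross_entropy(0, phi)             # minimized by pushing phi_0 -> 1
```

Driving this loss to zero forces the predicted-class probability to one, which is exactly the overconfidence mechanism analyzed in section 2.2.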
Specifically, based on the notion of calibration in the classical forecasting problem (Dawid, 1982), a perfectly calibrated model can be defined as follows: $p(y = k \mid \phi^W(x) = p) = p_k, \quad \forall p \in \Delta^{K-1},\ k \in \{1, 2, \cdots, K\}$ (2). Note that a calibrated model need not be one producing $\phi^W_k(x) = p(y = k \mid x)$. In practice, the expected calibration error (ECE) (Naeini et al., 2015) is widely used as a measure of calibration performance. ECE on a dataset $D_T$ can be computed by binning predictions into $M$ groups based on their confidences¹ and then averaging their calibration scores: $\sum_{i=1}^{M} \frac{|G_i|}{|D_T|} \left| \mathrm{acc}(G_i) - \mathrm{conf}(G_i) \right|$ (3), where $G_i = \{ x : i/M < \max_k \phi^W_k(x) \le (i+1)/M,\ x \in D_T \}$; $\mathrm{acc}(G_i)$ and $\mathrm{conf}(G_i)$ are the average accuracy and confidence of the predictions in group $G_i$, respectively. ¹Throughout this paper, the confidence (or predictive confidence) at $x$ refers to $\max_k \phi^W_k(x)$, which is different from the confidence in the statistics literature. Another metric for evaluating the reliability of predictive probability is based on predictive entropy, which evaluates how well the model is aware of its own ignorance. To this end, predictive entropy is measured on samples that the model is ignorant of, such as misclassified or out-of-distribution (OOD) samples. For a classifier with reliable predictive probability, we expect high uncertainty of $\phi^W(x)$ on such samples, i.e., the answer "I don't know." However, several recent findings show that the resulting neural network produces unreliable predictive probability, which makes interpreting the softmax output as the "predictive probability" implausible (Gal & Ghahramani, 2016). For example, Figure 1 illustrates the unreliable predictive behavior of ResNet (He et al.
, 2016): the network produces outputs with high confidence on misclassified examples (Figure 1, upper) and provides low predictive entropy on out-of-distribution samples, even though the samples belong to none of the classes it learned (Figure 1, lower). One simple yet effective solution for improving the quality of the predictive probability is temperature scaling (Guo et al., 2017), which adjusts the smoothness of the softmax so that the resulting predictive probability maximizes the log-likelihood on an unseen dataset $D'$: $\max_\tau \sum_{(x,y) \in D'} \log \frac{\exp(f^W_y(x)/\tau)}{\sum_j \exp(f^W_j(x)/\tau)}$ (4), where $W$ is a fixed pretrained weight and $\tau$ is a temperature controlling the smoothness of the softmax output. This simple method makes the softmax output a more reliable predictive probability. For instance, the predictive confidence matches the actual accuracy well, and the predictive entropy on out-of-distribution samples increases significantly (Figure 1). 2.2 A CLOSER LOOK AT THE LOG-LIKELIHOOD ON UNSEEN SAMPLES. Motivated by the success of temperature scaling, we aim to find the relationship between the log-likelihood and the calibration performance. To this end, let $s$ be a binary random variable indicating whether the model correctly classifies a sample. Then, we can derive the following upper bound by using the law of total expectation: $\mathbb{E}_{x,y}[\log \phi^W_y(x)] = \mathbb{E}_x\big[ p_{s|x}(s{=}1)\, \mathbb{E}_{y|s=1,x}[\log \phi^W_y(x)] + p_{s|x}(s{=}0)\, \mathbb{E}_{y|s=0,x}[\log \phi^W_y(x)] \big] \le \mathbb{E}_x\big[ p_{s|x}(s{=}1) \log \phi^W_{m_x}(x) + p_{s|x}(s{=}0) \log(1 - \phi^W_{m_x}(x)) \big]$ (5), where $m_x$ is the predicted class, $m_x = \arg\max_k f^W_k(x)$, $p_{s|x}(s{=}1) = p(y = \arg\max_k \phi^W_k(x) \mid x)$, and the inequality comes from the fact that $\mathbb{E}_{y|s=0,x}[\log \phi^W_y(x)] \le \max_{k \ne m_x} \log \phi^W_k(x) \le \log(1 - \phi^W_{m_x}(x))$. The upper bound can be thought of as a probabilistic measure of the calibration performance.
Specifically, suppose a neural network produces an answer with probability $\phi^W_{m_x}(x)$ and refuses to answer otherwise. Then, the upper bound becomes the divergence between the model's ability to correctly predict the sample and the model's willingness to answer. When the inequality in equation 5 is applied to the empirical mean on the test dataset, it clearly explains why temperature scaling is helpful: it increases a lower bound of the calibration error by bringing the confidence, modified by $\tau$, closer to the accuracy. More importantly, the inequality in equation 5 explains the impact of cross-entropy minimization on the behavior of the neural network. Specifically, suppose $\mathbb{E}_{(x,y) \sim D}[\log \phi^W_y(x)] \to 0$ for some dataset $D$. Then, it can be shown that $p_{s|x}(s{=}1) \to 1$ and $\phi^W_{m_x}(x) \to 1$ for all $(x,y) \in D$. This holds because $\max_k \phi^W_k(x) \ge 1/K$ for any $x$, and $p \log q + (1 - p)\log(1 - q)$ approaches its maximum of zero only as $(p,q) \to (1,1)$ or $(p,q) \to (0,0)$. Therefore, for high-capacity models that can make the log-likelihood on the training dataset $D_{tr}$ close to zero, e.g., deep neural networks, the behavior on the training set will converge to a configuration that classifies all samples correctly with perfect confidence. Figure 2 illustrates this phenomenon for ResNet trained on CIFAR-100: $\mathbb{E}_{x \sim D_{tr}}[\log \phi^W_{m_x}(x)] \to 0$ (a) and $\mathbb{E}_{(x,y) \sim D_{tr}}[p_{s|x}(s{=}1)] \to 1$ (b) as training continues. Why are these convergences problematic? We observe that quantities that depend only on $x$, evaluated on two different datasets drawn from the same distribution, are very close to each other, unlike quantities that depend on the external randomness $y|x$; that is, $|\mathbb{E}_{x \sim D_{tr}}[g(x)] - \mathbb{E}_{x' \sim D_{val}}[g(x')]| \ll |\mathbb{E}_{(x,y) \sim D_{tr}}[h(x,y)] - \mathbb{E}_{(x',y') \sim D_{val}}[h(x',y')]|$ for some functions $g(\cdot)$ and $h(\cdot)$.
For example, the empirical means of the maximum log-probability $\log \phi^W_{m_x}(x)$ (Figure 2(a)) and of the $L_2$ norm of $f^W$ (Figure 2(c)) on training samples are far more similar to their values on unseen samples than the log-likelihood (Figure 2(a)) and the accuracy (Figure 2(b)) are. We conjecture that log-likelihood maximization with a high-capacity neural network naturally results in a high calibration error on unseen samples due to this discrepancy; that is, the network produces perfect confidence on unseen samples, as it does on the training samples, even though it cannot achieve perfect accuracy there. Fortunately, this result also indicates that restricting the predictive confidence $\phi^W_{m_x}(x)$ on training samples will directly impact the predictive confidence on unseen samples and can therefore reduce the calibration error. For this reason, we explore various ways to restrict confidence and show their efficacy on a wide range of tasks in the rest of the paper. | The paper studies how explicit regularization affects the reliability of predictive probability (calibration) in classification tasks. An analysis of the log-likelihood is presented which motivates the use of explicit regularization. Next, two regularization terms are proposed to improve the predictive probability, and experiments on CIFAR-10/100 show that these regularization terms improve both the accuracy and the calibration error. | SP:b2ed74153a3a56f3ad5511eeb3990e068d412192
Sample weighting as an explanation for mode collapse in generative adversarial networks | 1 INTRODUCTION. Generative adversarial networks have come a long way since their introduction (Goodfellow et al., 2014) and are currently state of the art for some tasks, such as generating images. A combination of deep learning developments, GAN-specific advances, and vast improvements in data sets and computational resources has enabled GANs to generate high-resolution images that require some effort to distinguish from real photos (Zhang et al., 2018; Brock et al., 2018; Karras et al., 2018). GANs use two competing networks: a generator $G$ that maps input noise to samples mimicking real data, and a discriminator $D$ that outputs estimated probabilities of samples being real rather than generated by $G$. We summarize their cost functions, $J^D$ and $J^G$, for the minimax and non-saturating formulations introduced in Goodfellow et al. (2014). We denote samples from the real data and noise distributions by $x$ and $z$ and omit the proper expectation-value formalism: $J^D_{MM}(x,z) = J^D_{NS}(x,z) = -\log(D_p(x)) - \log(1 - D_p(G(z)))$, $J^G_{MM}(z) = \log(1 - D_p(G(z)))$, $J^G_{NS}(z) = -\log(D_p(G(z)))$ (1). For clarity, we use subscripts to distinguish between the discriminator's pre-activation logit output $D_l$ and the probability representation $D_p$: $D_p \equiv (1 + \exp(-D_l))^{-1}$ (2). Both formulations have the same cost function for $D$, representing the cross-entropy between probability estimates and ground truth. In the minimax formulation (MM-GAN), $G$ is simply trained to maximize $D$'s cost. Ideally, $G$ matches its outputs to the real data distribution while also achieving meaningful generalization, but many failure modes are observed in practice. NS-GAN uses a modified cost for $G$ that is non-saturating when $D$ distinguishes real and generated data with very high confidence, such that $G$'s gradients do not vanish.
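The cost functions of equations 1 and 2 are short enough to write out directly; the evaluation points are illustrative:

```python
import numpy as np

def sigmoid(dl):
    """Equation 2: D_p = (1 + exp(-D_l))^{-1}."""
    return 1.0 / (1.0 + np.exp(-dl))

def d_cost(dl_real, dl_fake):
    """Discriminator cost, shared by MM-GAN and NS-GAN (equation 1)."""
    return -np.log(sigmoid(dl_real)) - np.log(1.0 - sigmoid(dl_fake))

def g_cost_mm(dl_fake):
    """Minimax generator cost: log(1 - D_p(G(z)))."""
    return np.log(1.0 - sigmoid(dl_fake))

def g_cost_ns(dl_fake):
    """Non-saturating generator cost: -log(D_p(G(z)))."""
    return -np.log(sigmoid(dl_fake))

# When D is confident a sample is fake (D_l << 0), the MM generator cost
# flattens out (saturates) while the NS cost keeps growing:
mm_flat = g_cost_mm(-10.0)   # ~ -4.5e-05: almost no gradient signal
ns_steep = g_cost_ns(-10.0)  # ~ 10.0: strong gradient signal
```

This numerically previews the saturation effect discussed in section 2.1.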
(Supplementary: C) Various publications establish what the different cost functions optimize in terms of the Jensen-Shannon and reverse Kullback-Leibler divergences between real and generated data: $J^G_{MM} \Leftrightarrow 2 \cdot D_{JS}$ (Goodfellow et al., 2014); $J^G_{MM} + J^G_{NS} \Leftrightarrow D_{RKL}$ (Huszár, 2016); $J^G_{NS} \Leftrightarrow D_{RKL} - 2 \cdot D_{JS}$ (Arjovsky & Bottou, 2017) (3). Huszár (2015) and Arjovsky & Bottou (2017) have suggested NS-GAN's divergence as an explanation for the ubiquitous mode-dropping and mode-collapsing problems with GANs (Metz et al., 2016; Salimans et al., 2016; Srivastava et al., 2017). While MM-GAN seems promising in terms of its Jensen-Shannon divergence, the formulation has largely been ignored because the saturating cost causes training to break down. A variety of other GAN formulations have been introduced, such as WGAN-GP (Arjovsky et al., 2017; Gulrajani et al., 2017), LS-GAN (Mao et al., 2016), and Hinge-GAN (Miyato et al., 2018). Lucic et al. (2018) find that different cost formulations tend to get similar results given sufficient parameter tuning, including various forms of regularization. Despite the questionable form of NS-GAN in terms of divergences, it is widely used and can produce very impressive results, such as in the improved StyleGAN (Karras et al., 2019). 2 THEORY. 2.1 MM-GAN SATURATION. The parameters of a network are typically trained with some form of gradient descent on a cost function. We find the expressions for $D$'s and $G$'s gradients with respect to their parameters, $\phi$ and $\theta$: (Supplementary: F) $\nabla_\phi J^D_{MM,NS} = +\frac{\partial D_l(G(z),\phi)}{\partial \phi} \cdot D_p(G(z),\phi) - \frac{\partial D_l(x,\phi)}{\partial \phi} \cdot (1 - D_p(x,\phi))$, $\nabla_\theta J^G_{MM} = -\frac{\partial D_l(G(z,\theta))}{\partial \theta} \cdot D_p(G(z,\theta))$, $\nabla_\theta J^G_{NS} = -\frac{\partial D_l(G(z,\theta))}{\partial \theta} \cdot (1 - D_p(G(z,\theta)))$ (4). We emphasize the two kinds of scaling factors for these gradients in red and blue; they are plotted in figure 2.
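The generator scaling factors in equation 4 can be verified numerically: differentiating the two generator costs with respect to the logit $D_l$ yields $-D_p$ for MM-GAN and $-(1 - D_p)$ for NS-GAN. A finite-difference check (illustrative helper names):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def grad_fd(f, x, h=1e-6):
    """Central finite difference, used to check the eq 4 scaling factors."""
    return (f(x + h) - f(x - h)) / (2 * h)

dl = -2.0                                               # logit D_l(G(z))
dp = sigmoid(dl)                                        # D_p(G(z))
g_mm = grad_fd(lambda t: np.log(1 - sigmoid(t)), dl)    # dJ_MM/dD_l
g_ns = grad_fd(lambda t: -np.log(sigmoid(t)), dl)       # dJ_NS/dD_l
# Both gradients point the same way; the magnitudes differ by the factors
# D_p (minimax) versus 1 - D_p (non-saturating):
assert abs(g_mm - (-dp)) < 1e-6
assert abs(g_ns - (-(1 - dp))) < 1e-6
```

This also makes the parallel-gradients claim of section 2.2 concrete: for a single sample, the two gradients differ only by the positive scalar $(1 - D_p)/D_p$.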
The discriminator's scaling factors decrease as it minimizes its cost, approaching 0 towards the optima for both the real and the generated data terms. The minimax formulation $J^G = -J^D$ is suitable for adversarial training in terms of the generator's optimum, but the unchanged scaling factor means that $G$'s gradients increase towards and decrease away from its optimum. The saturation effect described in Goodfellow et al. (2014) is that $\lim_{D_p(G(z)) \to 0} \nabla_\theta J^G_{MM} = 0$, such that $G$ stops training when $D$ is highly confident that its samples are fake. More generally, the scaling factor makes $J^G$ concave with respect to $D_l$, which interacts poorly with common optimization methods (see section 2.4). As $\nabla_\theta J^G_{MM}$ and $\nabla_\theta J^G_{NS}$ are the same aside from their scaling factors, the different behaviors of the two formulations must follow from these. NS-GAN's scaling factor avoids saturation, but gives rise to a different, more subtle mode-dropping tendency (see section 2.3). 2.2 NON-SATURATION AND SAMPLE WEIGHTING. As can be seen from eq 4, the NS-GAN and MM-GAN gradients are parallel for a single sample, but with different magnitudes. Stochastic gradient descent estimates the gradient of the cost over the entire input distribution by using a number of samples (a minibatch). We can express the NS-GAN minibatch gradient in terms of the MM-GAN gradient: $\nabla_\theta J^{batch}_{G_{NS}} = \sum_{i=0}^{N-1} \nabla_\theta J^{sample}_{G_{MM}}(z_i) \left[ \frac{1 - D_p(G(z_i))}{D_p(G(z_i))} \right]$ (5). Due to the bracketed factor, NS-GAN rescales the contribution from each sample relative to MM-GAN, implicitly emphasizing samples with smaller values of $D_p$. Seeing as saturation is caused by the gradient's vanishing magnitude, this additional effect on the gradient's direction is questionable. The exact ratio of the minibatch gradient magnitudes for NS-GAN and MM-GAN depends on $\partial D_l(G(z)) / \partial \theta$ for each sample and has no convenient expression.
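The reweighting in equation 5 can be checked on a toy minibatch by treating each per-sample gradient as a scalar (i.e., taking $\partial D_l/\partial\theta = 1$ for every sample, an illustrative simplification):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Per-sample generator gradients with respect to the logit, from eq 4:
#   MM: -D_p(G(z_i)),   NS: -(1 - D_p(G(z_i)))
dl = np.array([-3.0, -1.0, 2.0])    # logits D_l(G(z_i)) of a toy minibatch
dp = sigmoid(dl)
g_mm = -dp
g_ns = -(1 - dp)
# Equation 5: the NS batch gradient equals the MM sample gradients each
# rescaled by (1 - D_p) / D_p, which emphasizes small-D_p samples.
weights = (1 - dp) / dp
assert np.allclose(g_ns, g_mm * weights)
```

The first sample (most confidently "fake", smallest $D_p$) gets the largest weight, which is exactly the implicit emphasis the section describes.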
We can approximate it by replacing $D_p(G(z_i))$ in eq 5 with its mean over the minibatch, $\overline{D_p} = \frac{1}{N} \sum_{i=0}^{N-1} D_p(G(z_i))$. This allows us to formulate a form of non-saturation for MM-GAN that mimics NS-GAN: $\nabla_\theta J^{batch}_{G_{MM\text{-}nsat}} = \frac{1 - \overline{D_p}}{\overline{D_p}} \sum_{i=0}^{N-1} \nabla_\theta J^{sample}_{G_{MM}}(z_i)$ (6). We refer to the formulation with this generator gradient as MM-nsat. The relative weights of samples in each batch are as for MM-GAN, while the gradient magnitude approximates that of NS-GAN. Note, however, that the relative weights of samples may be disturbed across batches, such as when the minibatch size is small and $\overline{D_p}$ fluctuates. Despite the different theoretical motivation, MM-nsat is very closely related to importance-weighted NS-GAN (Hu et al., 2017). (Supplementary: D) 2.3 SAMPLE WEIGHTING AND MODE DROPPING. If we use $r(x)$ and $g(x)$ to denote the density of real and generated samples at a point $x$ in data space, the optimal discriminator (i.e., the function that minimizes $J^D_{MM,NS}$) is given by: $D^{opt}_p(x) = \frac{r(x)}{r(x) + g(x)}$ (Goodfellow et al., 2014) (7). This expression for the optimal discriminator assumes idealized conditions: that $D$ is optimized for a fixed $G$, with unlimited capacity and without having to estimate the underlying real and generated data distributions by finite sampling. While $D^{opt}$ is not realized in practice (Sinn & Rawat, 2018), we use it to form some intuitions about $D$'s behavior. Suppose we have real data with two disjoint, equiprobable modes, and that one of these modes, $O$, is overrepresented in the generated data. For convergence, $G$ would need to shift probability mass from $O$ to the underrepresented mode, $U$. However, the minibatches used to update $G$ will have more samples from $O$, simply because they are generated more often. For a strong discriminator, these samples will also tend towards smaller values of $D_p(G(z))$, due to equation 7.
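The MM-nsat gradient of equation 6 can be sketched on the same kind of scalar toy minibatch (illustrative values; note that in this scalar setting, where every sample has the same $\partial D_l/\partial\theta$, MM-nsat and NS-GAN coincide exactly, whereas with real per-sample gradient directions they differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

dl = np.array([-3.0, -1.0, 2.0])   # toy minibatch logits D_l(G(z_i))
dp = sigmoid(dl)
g_mm_samples = -dp                  # per-sample MM gradients (eq 4)
# Equation 6: scale the summed MM gradient by (1 - mean D_p) / mean D_p,
# approximating NS-GAN's magnitude while keeping MM-GAN's relative weights.
dp_bar = dp.mean()
g_mm_nsat = (1 - dp_bar) / dp_bar * g_mm_samples.sum()
g_ns = (-(1 - dp)).sum()            # NS batch gradient, for comparison
```

Here `g_mm_nsat` reproduces the NS-GAN batch magnitude while each sample's relative contribution stays proportional to $D_p$, as in MM-GAN.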
This in turn causes them to be weighted differently by NS-GAN and MM-GAN, because $D_p(G(z))$ appears in their scaling factors (eq 4). We refer to the first effect as over- and underrepresentation (of generated samples relative to real samples) and to the second effect as up- and downweighting (governed by the scaling factor). The fundamental problem with NS-GAN can be seen by looking at how these two effects interact. The NS-GAN generator has a scaling factor $1 - D_p(G(z))$, which combines overrepresentation with upweighting and underrepresentation with downweighting, allowing an overrepresented mode $O$ to dominate $U$'s contributions to the parameter updates. If parameter updates overwhelmingly based on gradients from $O$ have an adverse effect on $G$'s ability to generate samples from $U$, this mode may become increasingly underrepresented, making $O$ yet more dominant. MM-GAN's scaling factor has the reverse behavior, pairing overrepresentation with downweighting and underrepresentation with upweighting, such that the two effects combine in a stabilizing rather than destabilizing way. For the extreme example of a nearly dropped or newly discovered mode, $r(x) \gg g(x)$, such that we expect $D_p(x) \approx 1$ for a strong discriminator. NS-GAN's sample weighting disregards gradients from such samples, whereas MM-GAN's sample weighting emphasizes them. This difference between MM-GAN and NS-GAN in terms of scaling factors reflects the different divergences they have been shown to optimize (eq 3). Huszár (2015) and Arjovsky et al. (2017) both connect the mode-dropping tendencies of NS-GAN to its reverse Kullback-Leibler divergence, which strongly penalizes $G$ for generating samples outside of the real data modes, but not for dropping modes.
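The interaction of representation and weighting can be made concrete with a two-mode toy calculation: the densities below are invented for illustration, and the total pull of each mode on the generator update is modeled as (sampling frequency) × (scaling factor):

```python
import numpy as np

# Two disjoint, equiprobable real modes; O is overrepresented in generated
# data and U is nearly dropped. A strong D approximates eq 7:
# D_p(x) = r(x) / (r(x) + g(x)).
r_O, g_O = 0.5, 0.9        # densities at mode O (overrepresented)
r_U, g_U = 0.5, 0.1        # densities at mode U (underrepresented)
dp_O = r_O / (r_O + g_O)   # ~0.36: D fairly unsure about O samples
dp_U = r_U / (r_U + g_U)   # ~0.83: D thinks U samples look real
# NS-GAN weight 1 - D_p: overrepresentation pairs with upweighting,
# so mode O dominates the update by a wide margin:
ns_O, ns_U = g_O * (1 - dp_O), g_U * (1 - dp_U)
# MM-GAN weight D_p: overrepresentation pairs with downweighting,
# so mode U is emphasized relative to its representation:
mm_O, mm_U = g_O * dp_O, g_U * dp_U
```

With these numbers the NS-GAN ratio `ns_O / ns_U` is roughly an order of magnitude larger than the MM-GAN ratio `mm_O / mm_U`, illustrating the destabilizing versus stabilizing pairing of the two effects.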
A variety of attempts to address mode dropping, mode collapse, and mode hopping in GANs seem unaware that this is reasonable behavior for NS-GAN given its divergence: mode dropping minimizes $D_{RKL}$ and maximizes $D_{JS}$. 2.4 MM-GAN INTERACTION WITH ADAM. GANs are generally trained with the Adam optimizer (Kingma & Ba, 2014), as recommended by Radford et al. (2015) when introducing the DCGAN architecture. The parameter update step for Adam is given by: $\Delta\theta_t = -\alpha \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$, $m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$, $\hat{m}_t = \frac{m_t}{1 - \beta_1^t}$, $v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$, $\hat{v}_t = \frac{v_t}{1 - \beta_2^t}$ (8). Here, $g_t$ is the gradient at timestep $t$, and $\hat{m}_t$ and $\hat{v}_t$ are the first- and second-order exponential moving averages of the parameter-wise gradients, bias-corrected to account for zero initialization. There are four hyperparameters: $\epsilon$ is a small constant primarily for numerical stability, $\alpha$ is the learning rate, and $\beta_1$ and $\beta_2$ determine the effective memories of the moving averages. The fraction $\frac{\hat{m}_t}{\sqrt{\hat{v}_t}}$ resembles unit normalization of a vector, and for a constant gradient $g_t = g$ (such that the moving averages are trivial), update steps depend only on the sign of $g$ if $|g| \gg \epsilon$: $\Delta\theta_t = -\alpha \frac{\mathrm{sgn}(g)}{1 + \epsilon/|g|}$ (9). However, this normalization does not necessarily address MM-GAN's saturation problem, due to the training dynamics. $D^{opt}_l = \pm\infty$ where the real and generated data do not overlap (eq 7), such that if $D$ can cleanly separate real from generated samples, we expect it to further decrease its loss by inflating its outputs. Supposing that $D$ approaches this optimum linearly, i.e., $D^t_l(G(z)) = at$, we get: $D^t_p(G(z)) \approx \exp(at)$ (10). $D_p$ also appears as the MM-GAN scaling factor (eq 4), such that $G$ will be optimized with gradients of the form $g_t = g_0 \exp(at)$.
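The interaction between an exponentially shrinking MM-GAN gradient and Adam's second-moment memory can be simulated directly (parameter values are illustrative, not from the paper):

```python
import numpy as np

def adam_steps(a, g0=1.0, alpha=1e-3, b1=0.9, b2=0.999, eps=1e-8, T=20000):
    """Adam (eq 8) driven by g_t = g0 * exp(a*t), modeling the MM-GAN
    generator gradient when D inflates its logits linearly (eq 10)."""
    m = v = 0.0
    steps = []
    for t in range(1, T + 1):
        g = g0 * np.exp(a * t)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        mhat = m / (1 - b1 ** t)
        vhat = v / (1 - b2 ** t)
        steps.append(alpha * mhat / (np.sqrt(vhat) + eps))
    return np.array(steps)

# Per eq 11, with beta2 = 0.999 updates vanish exponentially when
# a < 0.5 * log(beta2) ~ -0.0005, despite Adam's gradient normalization:
fast = adam_steps(a=-0.01)     # well past the threshold: steps die out
slow = adam_steps(a=-0.0001)   # inside the threshold: steps stay ~alpha
```

The decay rate of `fast` matches the exponent $(a - \tfrac{1}{2}\log\beta_2)t$ of equation 11, because the second-moment average $v_t$ decays only as $\beta_2^t$ once the gradient shrinks faster than it.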
For reasonable values of $a$, $\beta_1$, and $\beta_2$, the update step size for each of $G$'s parameters can be approximated: (Supplementary: G) $\Delta\theta_t \approx g_0 C \exp\left( \left( a - \tfrac{1}{2}\log(\beta_2) \right) t \right)$ (11). Given the commonly used $\beta_2 = 0.999$, parameter updates will vanish exponentially if $a < -0.0005$, provided that $|D_l|$ increases fast enough and conforms reasonably well to our simplified model. If $D$ learns to distinguish the real and generated data manifolds before they meaningfully intersect, this interaction between $D$, $G$, and Adam threatens to freeze parameter updates altogether. (Supplementary: L) | This work proposes that many common issues with GAN methods stem from the weighting of the samples in the generator's objective function. The authors focus on a study of the original GAN objective proposed in Goodfellow et al., where the generator's objective is the negative of the discriminator's objective. The GAN community quickly observed that when the discriminator outperforms the generator under this objective, the saturating nature of the sigmoid function causes the gradients of the generator's objective to vanish. For this reason, a new objective (NS-GAN) was proposed which modifies the generator's objective to alleviate this gradient-vanishing issue. The authors argue that this modified objective is to blame for a number of common issues with GAN methods, most notably the mode-dropping issue. The authors present theory that backs up these claims, and they propose a new generator training objective which reweights the gradients of the generator objective to have the same average magnitude as NS-GAN but the same relative magnitudes as the original GAN objective. | SP:ed625d9db2079f50e38537787c0e5c9e2ea79419
Sample weighting as an explanation for mode collapse in generative adversarial networks | 1 INTRODUCTION. Generative adversarial networks have come a long way since their introduction (Goodfellow et al., 2014) and are currently state of the art for some tasks, such as generating images. A combination of deep learning developments, GAN-specific advances, and vast improvements in data sets and computational resources has enabled GANs to generate high-resolution images that require some effort to distinguish from real photos (Zhang et al., 2018; Brock et al., 2018; Karras et al., 2018). GANs use two competing networks: a generator $G$ that maps input noise to samples mimicking real data, and a discriminator $D$ that outputs estimated probabilities of samples being real rather than generated by $G$. We summarize their cost functions, $J^D$ and $J^G$, for the minimax and non-saturating formulations introduced in Goodfellow et al. (2014). We denote samples from the real data and noise distributions by $x$ and $z$ and omit the proper expectation-value formalism: $J^D_{MM}(x,z) = J^D_{NS}(x,z) = -\log(D_p(x)) - \log(1 - D_p(G(z)))$, $J^G_{MM}(z) = \log(1 - D_p(G(z)))$, $J^G_{NS}(z) = -\log(D_p(G(z)))$ (1). For clarity, we use subscripts to distinguish between the discriminator's pre-activation logit output $D_l$ and the probability representation $D_p$: $D_p \equiv (1 + \exp(-D_l))^{-1}$ (2). Both formulations have the same cost function for $D$, representing the cross-entropy between probability estimates and ground truth. In the minimax formulation (MM-GAN), $G$ is simply trained to maximize $D$'s cost. Ideally, $G$ matches its outputs to the real data distribution while also achieving meaningful generalization, but many failure modes are observed in practice. NS-GAN uses a modified cost for $G$ that is non-saturating when $D$ distinguishes real and generated data with very high confidence, such that $G$'s gradients do not vanish.
(Supplementary: C) Various publications establish what the different cost functions optimize in terms of the Jensen-Shannon and reverse Kullback-Leibler divergences between real and generated data: $J^G_{MM} \Leftrightarrow 2 \cdot D_{JS}$ (Goodfellow et al., 2014); $J^G_{MM} + J^G_{NS} \Leftrightarrow D_{RKL}$ (Huszár, 2016); $J^G_{NS} \Leftrightarrow D_{RKL} - 2 \cdot D_{JS}$ (Arjovsky & Bottou, 2017) (3). Huszár (2015) and Arjovsky & Bottou (2017) have suggested NS-GAN's divergence as an explanation for the ubiquitous mode-dropping and mode-collapsing problems with GANs (Metz et al., 2016; Salimans et al., 2016; Srivastava et al., 2017). While MM-GAN seems promising in terms of its Jensen-Shannon divergence, the formulation has largely been ignored because the saturating cost causes training to break down. A variety of other GAN formulations have been introduced, such as WGAN-GP (Arjovsky et al., 2017; Gulrajani et al., 2017), LS-GAN (Mao et al., 2016), and Hinge-GAN (Miyato et al., 2018). Lucic et al. (2018) find that different cost formulations tend to get similar results given sufficient parameter tuning, including various forms of regularization. Despite the questionable form of NS-GAN in terms of divergences, it is widely used and can produce very impressive results, such as in the improved StyleGAN (Karras et al., 2019). 2 THEORY. 2.1 MM-GAN SATURATION. The parameters of a network are typically trained with some form of gradient descent on a cost function. We find the expressions for $D$'s and $G$'s gradients with respect to their parameters, $\phi$ and $\theta$: (Supplementary: F) $\nabla_\phi J^D_{MM,NS} = +\frac{\partial D_l(G(z),\phi)}{\partial \phi} \cdot D_p(G(z),\phi) - \frac{\partial D_l(x,\phi)}{\partial \phi} \cdot (1 - D_p(x,\phi))$, $\nabla_\theta J^G_{MM} = -\frac{\partial D_l(G(z,\theta))}{\partial \theta} \cdot D_p(G(z,\theta))$, $\nabla_\theta J^G_{NS} = -\frac{\partial D_l(G(z,\theta))}{\partial \theta} \cdot (1 - D_p(G(z,\theta)))$ (4). We emphasize the two kinds of scaling factors for these gradients in red and blue; they are plotted in figure 2.
The discriminator's scaling factors decrease as it minimizes its cost, approaching 0 towards the optima for both the real and the generated data term. The minimax formulation J_G = −J_D is suitable for adversarial training in terms of the generator's optimum, but the unchanged scaling factor means that G's gradients increase towards and decrease away from its optimum. The saturation effect described in Goodfellow et al. (2014) is that lim_{D_p(G(z))→0} ∇_θ J_G^MM = 0, such that G stops training when D is highly confident that its samples are fake. More generally, the scaling factor makes J_G concave with respect to D_l, which interacts poorly with common optimization methods (see section 2.4). As ∇_θ J_G^MM and ∇_θ J_G^NS are the same aside from their scaling factors, the different behaviors of the two formulations must follow from these. NS-GAN's scaling factor avoids saturation, but gives rise to a different, more subtle mode dropping tendency (see section 2.3). 2.2 NON-SATURATION AND SAMPLE WEIGHTING. As can be seen from eq 4, the NS-GAN and MM-GAN gradients are parallel for a single sample, but with different magnitudes. Stochastic gradient descent estimates the gradient of the cost over the entire input distribution by using a number of samples (a minibatch). We can express the NS-GAN minibatch gradient in terms of the MM-GAN gradient: ∇_θ J_G^NS,batch = Σ_{i=0}^{N−1} ∇_θ J_G^MM,sample(z_i) · [(1 − D_p(G(z_i))) / D_p(G(z_i))] (5). Due to the bracketed factor, NS-GAN rescales the contribution from each sample relative to MM-GAN, implicitly emphasizing samples with smaller values of D_p. Seeing as saturation is caused by the gradient's vanishing magnitude, this additional effect on the gradient's direction is questionable. The exact ratio of the minibatch gradient magnitudes for NS-GAN and MM-GAN depends on ∂D_l(G(z))/∂θ for each sample and has no convenient expression. 
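Since the per-sample NS and MM gradients differ only by the factor (1 − D_p)/D_p, eq 5 can be checked numerically. A small sketch (NumPy; the variable names and random values are mine, only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 8, 3
ddl_dtheta = rng.normal(size=(n, dim))   # per-sample dDl(G(z_i))/dtheta
dp = rng.uniform(0.05, 0.95, size=n)     # per-sample Dp(G(z_i))

# Eq (4): per-sample generator gradients for the two formulations.
grad_mm = -ddl_dtheta * dp[:, None]
grad_ns = -ddl_dtheta * (1.0 - dp)[:, None]

# Eq (5): the NS minibatch gradient as reweighted MM sample gradients.
weights = (1.0 - dp) / dp
ns_from_mm = (grad_mm * weights[:, None]).sum(axis=0)

assert np.allclose(ns_from_mm, grad_ns.sum(axis=0))
```

Samples the discriminator calls fake (D_p < 0.5) receive weights greater than 1, which is the implicit emphasis described above.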
We can approximate it by replacing D_p(G(z_i)) in eq 5 with its mean over the minibatch, D̄_p = (1/N) Σ_{i=0}^{N−1} D_p(G(z_i)). This allows us to formulate a form of non-saturation for MM-GAN that mimics NS-GAN: ∇_θ J_G^MM-nsat,batch = ((1 − D̄_p)/D̄_p) Σ_{i=0}^{N−1} ∇_θ J_G^MM,sample(z_i) (6). We refer to the formulation with this generator gradient as MM-nsat. The relative weights of samples in each batch are as for MM-GAN, while the gradient magnitude approximates that of NS-GAN. Note, however, that the relative weights of samples may be disturbed across batches, such as when the minibatch size is small and D̄_p fluctuates. Despite different theoretical motivation, MM-nsat is very closely related to importance weighted NS-GAN (Hu et al., 2017). (Supplementary: D) 2.3 SAMPLE WEIGHTING AND MODE DROPPING. If we use r(x) and g(x) to denote the density of real and generated samples at a point x in data space, the optimal discriminator (i.e. the function that minimizes J_D^MM,NS) is given by: D_p^opt(x) = r(x)/(r(x) + g(x)) (Goodfellow et al., 2014) (7). This expression for the optimal discriminator assumes idealized conditions: that D is optimized for a fixed G, with unlimited capacity and without having to estimate the underlying real and generated data distributions by finite sampling. While D^opt is not realized in practice (Sinn & Rawat, 2018), we use it to form some intuitions about D's behaviors. Suppose we have real data with two disjoint, equiprobable modes, and that one of these modes, O, is overrepresented in the generated data. For convergence, G would need to shift probability mass from O to the underrepresented mode, U. However, the minibatches used to update G will have more samples from O, simply because they are generated more often. For a strong discriminator, these samples will also tend towards smaller values of D_p(G(z)), due to equation 7. 
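The MM-nsat gradient of eq 6 replaces NS-GAN's per-sample weights with a single batch-level rescaling. A hedged sketch (NumPy; not the authors' code):

```python
import numpy as np

def mm_nsat_batch_grad(mm_sample_grads, dp_fake):
    """Eq (6): sum the MM-GAN sample gradients, then rescale the whole batch
    by (1 - mean(Dp)) / mean(Dp), so the magnitude mimics NS-GAN while the
    relative sample weights stay those of MM-GAN."""
    dp_bar = dp_fake.mean()
    return ((1.0 - dp_bar) / dp_bar) * mm_sample_grads.sum(axis=0)
```

As a sanity check, when every sample in the batch has the same D_p, the per-sample and batch-level rescalings coincide, so MM-nsat reproduces the NS-GAN minibatch gradient exactly.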
This in turn causes them to be weighted differently by NS-GAN and MM-GAN, because D_p(G(z)) appears in their scaling factors (eq 4). We refer to the first effect as over- and underrepresentation (of generated samples relative to real samples) and the second effect as up- and downweighting (governed by the scaling factor). The fundamental problem with NS-GAN can be seen by looking at how these two effects interact. The NS-GAN generator has a scaling factor 1 − D_p(G(z)), which combines overrepresentation with upweighting and underrepresentation with downweighting, allowing an overrepresented mode O to dominate U's contributions to the parameter updates. If parameter updates overwhelmingly based on gradients from O have an adverse effect on G's ability to generate samples from U, this mode may become increasingly underrepresented, making O yet more dominant. MM-GAN's scaling factor has the reverse behavior, pairing overrepresentation with downweighting and underrepresentation with upweighting, such that the two effects combine in a stabilizing rather than destabilizing way. For the extreme example of a nearly dropped or newly discovered mode, r(x) ≫ g(x), such that we expect D_p(x) ≈ 1 for a strong discriminator. NS-GAN's sample weighting disregards gradients from such samples, whereas MM-GAN's sample weighting emphasizes them. This difference between MM-GAN and NS-GAN in terms of scaling factors reflects the different divergences they have been shown to optimize (eq 3). Huszár (2015) and Arjovsky et al. (2017) both connect the mode dropping tendencies of NS-GAN to its reverse Kullback-Leibler divergence, which strongly penalizes G for generating samples outside of the real data modes, but not for dropping modes. 
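The interaction of representation and weighting can be illustrated with the idealized discriminator of eq 7. A toy sketch (the mode densities are made-up numbers, only for illustration):

```python
import numpy as np

# Two equiprobable real modes [O, U]; the generator overrepresents O.
r = np.array([0.5, 0.5])   # real density at each mode
g = np.array([0.9, 0.1])   # generated density at each mode

dp_opt = r / (r + g)       # eq (7): optimal discriminator output
ns_weight = 1.0 - dp_opt   # NS-GAN generator scaling factor (eq 4)
mm_weight = dp_opt         # MM-GAN generator scaling factor (eq 4)
```

NS-GAN gives the overrepresented mode the larger per-sample weight on top of it already contributing more samples per minibatch; MM-GAN reverses this pairing.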
A variety of attempts to address mode dropping, mode collapse and mode hopping in GANs seem unaware that this is reasonable behavior for NS-GAN given its divergence: mode dropping minimizes D_RKL and maximizes D_JS. 2.4 MM-GAN INTERACTION WITH ADAM GANs are generally trained with the Adam optimizer (Kingma & Ba, 2014), as was recommended by Radford et al. (2015) while introducing the DCGAN architecture. The parameter update step for Adam is given by: Δθ_t = −α·m̂_t/(√v̂_t + ε); m_t = β_1·m_{t−1} + (1 − β_1)·g_t, m̂_t = m_t/(1 − β_1^t); v_t = β_2·v_{t−1} + (1 − β_2)·g_t², v̂_t = v_t/(1 − β_2^t) (8). Here, g_t is the gradient at timestep t, and m̂_t and v̂_t are the first and second order exponential moving averages of the parameterwise gradients, bias-corrected to account for zero-initialization. There are four hyperparameters: ε is a small constant primarily for numerical stability, α is the learning rate and β_1 and β_2 determine the effective memories of the moving averages. The fraction m̂_t/√v̂_t resembles unit normalization of a vector, and for a constant gradient g_t = g (such that the moving averages are trivial) update steps depend only on the sign of g, if |g| ≫ ε: Δθ_t = −α·sgn(g)/(1 + ε/|g|) (9). However, this normalization does not necessarily address MM-GAN's saturation problem, due to the training dynamics. D_l^opt = ±∞ where the real and generated data do not overlap (eq 7), such that if D can cleanly separate real from generated samples, we expect it to further decrease its loss by inflating its outputs. Supposing that D approaches this optimum linearly, i.e. D_l^t(G(z)) = at, we get: D_p^t(G(z)) ≈ exp(at) (10). D_p also appears as the MM-GAN scaling factor (eq 4), such that G will be optimized with gradients of the form g_t = g_0·exp(at). 
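The Adam update of eq 8 can be sketched as a single step (plain NumPy, with the commonly used default hyperparameters; this is a generic Adam sketch, not the authors' training code):

```python
import numpy as np

def adam_step(theta, g, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (eq 8): bias-corrected exponential moving averages
    of the gradient (m) and its square (v), then a normalized step."""
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g**2
    m_hat = m / (1.0 - beta1**t)
    v_hat = v / (1.0 - beta2**t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

For a constant gradient with |g| ≫ ε, even the very first step moves θ by almost exactly −α·sgn(g), matching eq 9.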
For reasonable values of a, β_1 and β_2, the update step size for each of G's parameters can be approximated: (Supplementary: G) Δθ_t ≈ g_0·C·exp((a − (1/2)·log(β_2))·t) (11). Given the commonly used β_2 = 0.999, parameter updates will vanish exponentially if a < −0.0005, given that |D_l| increases fast enough and conforms reasonably well to our simplified model. If D learns to distinguish the real and generated data manifolds before they meaningfully intersect, this interaction between D, G and Adam threatens to freeze parameter updates altogether. (Supplementary: L) | This paper reexamines the original (MM) and the non-saturating (NS) GAN objective. The authors show that the gradients of the respective objectives differ only by a scaling factor depending on the discriminator's output for generated samples. While the scaling factor for the MM gradient is responsible for the well-known vanishing gradient when the discriminator is optimal, the scaling factor of the NS gradient counteracts this saturation effect. However, on the other hand, the NS scaling factor introduces a mode dropping effect and the inability of the learning dynamics to discover new modes. The authors additionally show that the NS minibatch gradient is the weighted sum of the single-sample MM gradients with the respective scaling factors as weights. These scaling factors avoid saturation but alter the direction of the resulting minibatch gradient. To counteract the change of the gradient direction, the authors propose to summarize the sample scaling factors into one scalar for the batch gradient, which preserves the non-saturating behaviour of the NS objective and the gradient direction of the MM objective. The new GAN objective is called MM-nsat. Additionally the authors discuss the non-saturating effect of the Adam β_2 parameter for the MM-GAN generator. | SP:ed625d9db2079f50e38537787c0e5c9e2ea79419 
Sample weighting as an explanation for mode collapse in generative adversarial networks | 1 INTRODUCTION. Generative adversarial networks have come a long way since their introduction (Goodfellow et al., 2014) and are currently state of the art for some tasks, such as generating images. A combination of deep learning developments, GAN-specific advances and vast improvements in data sets and computational resources have enabled GANs to generate high-resolution images that require some effort to distinguish from real photos (Zhang et al., 2018; Brock et al., 2018; Karras et al., 2018). GANs use two competing networks: a generator G that maps input noise to samples mimicking real data, and a discriminator D that outputs estimated probabilities of samples being real rather than generated by G. We summarize their cost functions, J_D and J_G, for the minimax and non-saturating formulations introduced in Goodfellow et al. (2014). We denote samples from the real data and noise distributions by x and z and omit the proper expectation-value formalism: J_D^MM(x, z) = J_D^NS(x, z) = −log(D_p(x)) − log(1 − D_p(G(z))); J_G^MM(z) = log(1 − D_p(G(z))); J_G^NS(z) = −log(D_p(G(z))) (1). For clarity, we use subscripts to distinguish between the discriminator's pre-activation logit output D_l and the probability representation D_p: D_p ≡ (1 + exp(−D_l))^−1 (2). Both formulations have the same cost function for D, representing the cross entropy between probability estimates and ground truth. In the minimax formulation (MM-GAN), G is simply trained to maximize D's cost. Ideally, G matches its outputs to the real data distribution while also achieving meaningful generalization, but many failure modes are observed in practice. NS-GAN uses a modified cost for G that is non-saturating when D distinguishes real and generated data with very high confidence, such that G's gradients do not vanish. 
(Supplementary: C) Various publications establish what the different cost functions optimize in terms of the Jensen-Shannon and reverse Kullback-Leibler divergences between real and generated data: J_G^MM ⇔ 2·D_JS (Goodfellow et al., 2014); J_G^MM + J_G^NS ⇔ D_RKL (Huszár, 2016); J_G^NS ⇔ D_RKL − 2·D_JS (Arjovsky & Bottou, 2017) (3). Huszár (2015) and Arjovsky & Bottou (2017) have suggested NS-GAN's divergence as an explanation for the ubiquitous mode dropping and mode collapsing problems with GANs (Metz et al., 2016; Salimans et al., 2016; Srivastava et al., 2017). While MM-GAN seems promising in terms of its Jensen-Shannon divergence, the formulation has largely been ignored because the saturating cost causes training to break down. A variety of other GAN formulations have been introduced, such as WGAN-GP (Arjovsky et al., 2017; Gulrajani et al., 2017), LS-GAN (Mao et al., 2016) and Hinge-GAN (Miyato et al., 2018). Lucic et al. (2018) find that different cost formulations tend to get similar results given sufficient parameter tuning, including various forms of regularization. Despite the questionable form of NS-GAN in terms of divergences, it is widely used and can produce very impressive results, such as in the improved StyleGAN (Karras et al., 2019). 2 THEORY 2.1 MM-GAN SATURATION The parameters of a network are typically trained with some form of gradient descent on a cost function. We find the expressions for D's and G's gradients with respect to their parameters, φ and θ: (Supplementary: F) ∇_φ J_D^MM,NS = +(∂D_l(G(z), φ)/∂φ)·D_p(G(z), φ) − (∂D_l(x, φ)/∂φ)·(1 − D_p(x, φ)); ∇_θ J_G^MM = −(∂D_l(G(z, θ))/∂θ)·D_p(G(z, θ)); ∇_θ J_G^NS = −(∂D_l(G(z, θ))/∂θ)·(1 − D_p(G(z, θ))) (4). We emphasize the two kinds of scaling factors for these gradients, D_p and (1 − D_p), in red and blue: they are plotted in figure 2. 
The discriminator's scaling factors decrease as it minimizes its cost, approaching 0 towards the optima for both the real and the generated data term. The minimax formulation J_G = −J_D is suitable for adversarial training in terms of the generator's optimum, but the unchanged scaling factor means that G's gradients increase towards and decrease away from its optimum. The saturation effect described in Goodfellow et al. (2014) is that lim_{D_p(G(z))→0} ∇_θ J_G^MM = 0, such that G stops training when D is highly confident that its samples are fake. More generally, the scaling factor makes J_G concave with respect to D_l, which interacts poorly with common optimization methods (see section 2.4). As ∇_θ J_G^MM and ∇_θ J_G^NS are the same aside from their scaling factors, the different behaviors of the two formulations must follow from these. NS-GAN's scaling factor avoids saturation, but gives rise to a different, more subtle mode dropping tendency (see section 2.3). 2.2 NON-SATURATION AND SAMPLE WEIGHTING. As can be seen from eq 4, the NS-GAN and MM-GAN gradients are parallel for a single sample, but with different magnitudes. Stochastic gradient descent estimates the gradient of the cost over the entire input distribution by using a number of samples (a minibatch). We can express the NS-GAN minibatch gradient in terms of the MM-GAN gradient: ∇_θ J_G^NS,batch = Σ_{i=0}^{N−1} ∇_θ J_G^MM,sample(z_i) · [(1 − D_p(G(z_i))) / D_p(G(z_i))] (5). Due to the bracketed factor, NS-GAN rescales the contribution from each sample relative to MM-GAN, implicitly emphasizing samples with smaller values of D_p. Seeing as saturation is caused by the gradient's vanishing magnitude, this additional effect on the gradient's direction is questionable. The exact ratio of the minibatch gradient magnitudes for NS-GAN and MM-GAN depends on ∂D_l(G(z))/∂θ for each sample and has no convenient expression. 
We can approximate it by replacing D_p(G(z_i)) in eq 5 with its mean over the minibatch, D̄_p = (1/N) Σ_{i=0}^{N−1} D_p(G(z_i)). This allows us to formulate a form of non-saturation for MM-GAN that mimics NS-GAN: ∇_θ J_G^MM-nsat,batch = ((1 − D̄_p)/D̄_p) Σ_{i=0}^{N−1} ∇_θ J_G^MM,sample(z_i) (6). We refer to the formulation with this generator gradient as MM-nsat. The relative weights of samples in each batch are as for MM-GAN, while the gradient magnitude approximates that of NS-GAN. Note, however, that the relative weights of samples may be disturbed across batches, such as when the minibatch size is small and D̄_p fluctuates. Despite different theoretical motivation, MM-nsat is very closely related to importance weighted NS-GAN (Hu et al., 2017). (Supplementary: D) 2.3 SAMPLE WEIGHTING AND MODE DROPPING. If we use r(x) and g(x) to denote the density of real and generated samples at a point x in data space, the optimal discriminator (i.e. the function that minimizes J_D^MM,NS) is given by: D_p^opt(x) = r(x)/(r(x) + g(x)) (Goodfellow et al., 2014) (7). This expression for the optimal discriminator assumes idealized conditions: that D is optimized for a fixed G, with unlimited capacity and without having to estimate the underlying real and generated data distributions by finite sampling. While D^opt is not realized in practice (Sinn & Rawat, 2018), we use it to form some intuitions about D's behaviors. Suppose we have real data with two disjoint, equiprobable modes, and that one of these modes, O, is overrepresented in the generated data. For convergence, G would need to shift probability mass from O to the underrepresented mode, U. However, the minibatches used to update G will have more samples from O, simply because they are generated more often. For a strong discriminator, these samples will also tend towards smaller values of D_p(G(z)), due to equation 7. 
This in turn causes them to be weighted differently by NS-GAN and MM-GAN, because D_p(G(z)) appears in their scaling factors (eq 4). We refer to the first effect as over- and underrepresentation (of generated samples relative to real samples) and the second effect as up- and downweighting (governed by the scaling factor). The fundamental problem with NS-GAN can be seen by looking at how these two effects interact. The NS-GAN generator has a scaling factor 1 − D_p(G(z)), which combines overrepresentation with upweighting and underrepresentation with downweighting, allowing an overrepresented mode O to dominate U's contributions to the parameter updates. If parameter updates overwhelmingly based on gradients from O have an adverse effect on G's ability to generate samples from U, this mode may become increasingly underrepresented, making O yet more dominant. MM-GAN's scaling factor has the reverse behavior, pairing overrepresentation with downweighting and underrepresentation with upweighting, such that the two effects combine in a stabilizing rather than destabilizing way. For the extreme example of a nearly dropped or newly discovered mode, r(x) ≫ g(x), such that we expect D_p(x) ≈ 1 for a strong discriminator. NS-GAN's sample weighting disregards gradients from such samples, whereas MM-GAN's sample weighting emphasizes them. This difference between MM-GAN and NS-GAN in terms of scaling factors reflects the different divergences they have been shown to optimize (eq 3). Huszár (2015) and Arjovsky et al. (2017) both connect the mode dropping tendencies of NS-GAN to its reverse Kullback-Leibler divergence, which strongly penalizes G for generating samples outside of the real data modes, but not for dropping modes. 
A variety of attempts to address mode dropping, mode collapse and mode hopping in GANs seem unaware that this is reasonable behavior for NS-GAN given its divergence: mode dropping minimizes D_RKL and maximizes D_JS. 2.4 MM-GAN INTERACTION WITH ADAM GANs are generally trained with the Adam optimizer (Kingma & Ba, 2014), as was recommended by Radford et al. (2015) while introducing the DCGAN architecture. The parameter update step for Adam is given by: Δθ_t = −α·m̂_t/(√v̂_t + ε); m_t = β_1·m_{t−1} + (1 − β_1)·g_t, m̂_t = m_t/(1 − β_1^t); v_t = β_2·v_{t−1} + (1 − β_2)·g_t², v̂_t = v_t/(1 − β_2^t) (8). Here, g_t is the gradient at timestep t, and m̂_t and v̂_t are the first and second order exponential moving averages of the parameterwise gradients, bias-corrected to account for zero-initialization. There are four hyperparameters: ε is a small constant primarily for numerical stability, α is the learning rate and β_1 and β_2 determine the effective memories of the moving averages. The fraction m̂_t/√v̂_t resembles unit normalization of a vector, and for a constant gradient g_t = g (such that the moving averages are trivial) update steps depend only on the sign of g, if |g| ≫ ε: Δθ_t = −α·sgn(g)/(1 + ε/|g|) (9). However, this normalization does not necessarily address MM-GAN's saturation problem, due to the training dynamics. D_l^opt = ±∞ where the real and generated data do not overlap (eq 7), such that if D can cleanly separate real from generated samples, we expect it to further decrease its loss by inflating its outputs. Supposing that D approaches this optimum linearly, i.e. D_l^t(G(z)) = at, we get: D_p^t(G(z)) ≈ exp(at) (10). D_p also appears as the MM-GAN scaling factor (eq 4), such that G will be optimized with gradients of the form g_t = g_0·exp(at). 
For reasonable values of a, β_1 and β_2, the update step size for each of G's parameters can be approximated: (Supplementary: G) Δθ_t ≈ g_0·C·exp((a − (1/2)·log(β_2))·t) (11). Given the commonly used β_2 = 0.999, parameter updates will vanish exponentially if a < −0.0005, given that |D_l| increases fast enough and conforms reasonably well to our simplified model. If D learns to distinguish the real and generated data manifolds before they meaningfully intersect, this interaction between D, G and Adam threatens to freeze parameter updates altogether. (Supplementary: L) | This paper proposes an explanation for mode collapse in the original GAN with the −log D objective for the generator (dubbed the non-saturating GAN or NS-GAN for short). The paper takes the approach of comparing the gradient of the generator objective for the original GAN with cross-entropy loss (dubbed the minimax GAN or MM-GAN for short) and the −log D variant. The key observation is that the difference between the gradient of the generator objective of MM-GAN and NS-GAN is that MM-GAN has a factor of D_p(G(z, θ)), whereas NS-GAN has a factor of 1 − D_p(G(z, θ)), where D_p(G(z, θ)) is the output of the discriminator on a sample generated from z. Hence, the terms in the MM-GAN gradient that appear fake to the discriminator are downweighted, whereas they are upweighted in the NS-GAN gradient. Because the samples from modes that are already overrepresented are likely declared fake by the discriminator, the contribution to the generator gradient is dominated by these samples in NS-GAN. | SP:ed625d9db2079f50e38537787c0e5c9e2ea79419 
SkipW: Resource Adaptable RNN with Strict Upper Computational Limit | 1 INTRODUCTION. Since Recurrent Neural Networks (RNNs) were introduced (Williams et al., 1986), they have become one of the reference methods to process sequences. A typical architecture is the Long Short-Term Memory neural network (LSTM), which allowed improvements in natural language processing such as large-vocabulary speech recognition (Sak et al., 2014; Li & Wu, 2015). Used with CNNs, they have also reached state of the art in automatic image captioning (Vinyals et al., 2015). Deep learning models are now brought closer to the user rather than running in a distant cloud, helping to reduce latency and network congestion, and improving data security and privacy. However, smartphones and user devices impose additional constraints such as limited computation or energy. Handling these constraints has become an active research topic (Zhang et al., 2017; 2018; Howard et al., 2019; Wu et al., 2019; Cai et al., 2020). User devices can also host multiple processes running at the same time and starting or stopping abruptly, modifying the constraints affecting the processes. Few works have considered models that can be modified at run time to adapt to an evolving computational limit (Yu et al., 2019; Yu & Huang, 2019; Guerra et al., 2020; Jin et al., 2020). However, none of these focus on sequences, and therefore none address the problem of adapting the model in the middle of a sequence. In this context, this paper introduces Skip-Window (SkipW), a flexible recurrent neural network architecture: its computational cost can be dynamically adapted during a sequence analysis to meet real-time constraint changes. The proposed architecture can be combined with any RNN cell and allows the computational resources used to be strictly limited so as to avoid exceeding a given budget. 
Furthermore, empirical experiments on four data sets (Adding Task, MNIST, IMDB and HAR-2D-POSE) demonstrate that this subsampling architecture is interesting in itself. Skip-Window matches or exceeds the accuracy of existing approaches for a given computational cost. In addition, measurements on specific processors highlight that SkipW produces real computational and energy savings. 2 RELATED WORK. Typically, RNNs maintain a "state", a vector of variables, over time. This state is supposed to accumulate relevant information and is updated recursively. Each input of the sequence is typically a) processed by some deep layers and b) then combined with the previous state through some other deep layers to compute the new state. Hence, the RNN can be seen as a function taking a sequence of inputs x = (x_1, ..., x_T) and recursively computing a set of states s = (s_1, ..., s_T). Each state s_t is computed from s_{t−1} and x_t by a cell S of the RNN. As neural networks are increasingly run on limited hardware, recent research has focused on controlling their computational cost. 2.1 FLEXIBLE NEURAL NETWORKS. A few architectures have recently been designed to adapt the computational complexity of a Deep Neural Network (DNN) without reloading the whole model. This can be achieved by removing/adding neurons (Yu et al., 2019; Yu & Huang, 2019) or by modifying the quantization of the weights (Guerra et al., 2020; Jin et al., 2020). An efficient embedding of a mixture of Convolutional Neural Networks (CNNs) also allows several models to be added or removed at the same time, hence changing the computational cost (Ruiz & Verbeek, 2019). 2.1.1 THRRNN. For RNNs specifically, ThrRNN (Lambert et al., 2020) aims to control computation time by not processing some inputs. This is controlled by an update gate u_t. 
The tradeoff between the average accuracy and the average number of updates can be modified during inference by changing a single parameter thr. ThrRNN can wrap any RNN cell S: u_t = f_binarize(ũ_t, thr) = 0 if ũ_t < thr, 1 otherwise (1); Δũ_t = σ(W·s_t + b) (2); ũ_{t+1} = u_t·Δũ_t + (1 − u_t)·(ũ_t + min(Δũ_t, 1 − ũ_t)) (3); s_t = u_t·S(s_{t−1}, x_t) + (1 − u_t)·s_{t−1} (4). When an input is processed, an update gate computes the quantity Δũ_t that determines how many inputs will be skipped. In practice the Δũ_t are accumulated in ũ_t until ũ_t ≥ thr. 2.2 RECURRENT NEURAL NETWORK WITH LOW COMPUTATIONAL COMPLEXITY. Several architectures have been proposed to limit or reduce the computational cost of RNNs, but this cost cannot be adapted at inference. A first class of architectures dynamically reduces computation based on the input. SkipRNN (Campos et al., 2018) predates and is similar to ThrRNN, except that the binarization function does not change. A similar mechanism has been proposed by Zhang et al. (2019). Other architectures directly select the next input to process (Yeung et al., 2016; Yu et al., 2017; Hansen et al., 2019; Song et al., 2018). Early exit has also been investigated by Dennis et al. (2019). Tao et al. (2019) also use x_t as input to an update gate. So do Seo et al. (2018); Jernite et al. (2017); Li et al. (2020). However, they do not skip any input but perform partial state updates. A second class of architectures focuses on reducing the overall cost of the RNN. FastRNN is an RNN augmented with a residual connection with two extra scalar parameters, and FastGRNN is an improved FastRNN: the residual connection is extended to a gate and the RNN matrices are low rank, sparse and quantized (Kusupati et al., 2018). Other architectures reduce the RNN length. Chan et al. (2016) train an encoder to reduce the input length. Yeung et al. (2016); Shan et al. (2018); Chen et al. 
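Equations 1-4 can be sketched as a single wrapped step. A minimal sketch (NumPy; `cell`, `gate_w` and `gate_b` are placeholder names of mine, and the wrapped "cell" below is a trivial stand-in, not a real RNN):

```python
import numpy as np

def thr_rnn_step(s_prev, x_t, u_tilde, thr, cell, gate_w, gate_b):
    """One ThrRNN step (eqs 1-4): run the wrapped cell only when the
    accumulated score u_tilde crosses thr; otherwise copy the state and
    keep accumulating."""
    u = 1.0 if u_tilde >= thr else 0.0                       # eq (1)
    s = cell(s_prev, x_t) if u else s_prev                   # eq (4)
    delta = 1.0 / (1.0 + np.exp(-(gate_w @ s + gate_b)))     # eq (2)
    u_next = u * delta + (1 - u) * (u_tilde + min(delta, 1 - u_tilde))  # eq (3)
    return s, u_next, u

# Toy usage: a "cell" that adds the input, a gate that always outputs 0.5.
cell = lambda s, x: s + x
s, u_tilde = np.zeros(2), 1.0
used = []
for x in np.ones((5, 2)):
    s, u_tilde, u = thr_rnn_step(s, x, u_tilde, thr=0.6,
                                 cell=cell, gate_w=np.zeros(2), gate_b=0.0)
    used.append(u)
```

With a constant Δũ_t of 0.5 and thr = 0.6, the gate fires on alternating steps, so only 3 of the 5 inputs reach the cell.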
(2018) propose various mechanisms to summarize subsequences of windows of inputs. 2.3 RECURRENT NEURAL NETWORK WITH HIERARCHICAL-DEPENDENT COMPLEXITY. A class of architectures focuses on the concept of hierarchy levels to reduce the complexity. These methods are mainly used in the context of multi-layer RNNs where each layer is supposed to model a different level in the hierarchy (e.g. for a corpus the levels could be documents, paragraphs, sentences, words, letters). These approaches are based on the assumption that a hierarchical separation exists within a sequence of inputs, which might not always be the case. In Koutnik et al. (2014), the hidden state is partitioned into different modules; each module has its own clock period, meaning that they will be updated at different times. Skipping updates of part of the hidden state decreases the computational cost. In Koutnik et al. (2014), the update periods are chosen arbitrarily, for example using an exponential series. For stacked RNNs, Chung et al. (2017); Chang et al. (2017) conditionally update each layer based on a feature-level criterion, or by dilating a skip connection. Layers close to the inputs would model lower feature levels and be updated more frequently. Further layers would model higher-level features. In Chung et al. (2017), a layer modeling sentences would be updated only when a word is processed entirely (in a model fed character by character), from the layer modeling words. Before the end of a word is reached, the state of the former layer is copied across input steps. 2.4 RELATIONSHIP TO OUR WORK. ThrRNN is the closest model to SkipW. Both are flexible RNNs and skip some inputs. However, ThrRNN optimizes computational cost on average over sequences. 
This induces two variabilities: a) inter-sequence variability: the model will not use the same number of updates for every sequence; and b) intra-sequence variability: the number of updates will not be uniform across time steps; updates may be concentrated in a certain part of the sequence. These two variabilities can cause the model to exceed its computational budget and, therefore, to either shut down or delay the output. SkipW does not have this problem as it strictly enforces a computational constraint over each window of inputs. Other strategies for flexible models are not straightforward to apply to RNNs. They require specialized training algorithms. They have never been applied to models processing inputs of an RNN or to make an RNN flexible, and it is not clear how they would need to be modified. Furthermore, these models adapt between independent inputs whereas, for sequences, adaptation is necessary between time steps. RNN architectures with low complexity are orthogonal to our approach. They do not offer flexibility. They could be combined with and benefit from our approach. However, SkipRNN (which we are based on) and related methods have one big advantage over others: by skipping inputs, they also skip any modification of an input, such as processing by an expensive CNN for images. As SkipW makes decisions over a window of inputs, it has some superficial similarity to methods summarizing windows or to hierarchical RNNs. However, SkipW a) does not summarize windows and b) does not even look at these inputs before deciding what to skip. 3 SKIP-WINDOWS ARCHITECTURE. Skip-Windows (SkipW) is a wrapper for an RNN cell S. It uses a conditional computation mechanism to skip some updates. Rather than at each input x_t, update gates are computed at the beginning of windows of inputs, that is, every L time steps (Figure 1). In other words, before any new L-size window of inputs, an L-size vector ũ_W is computed. 
ũ_W[i] can be seen as the importance of input i in the window. Then, the architecture includes a selectK mechanism. This function takes as input the vector ũ_W and outputs the vector ũ_W^K, setting L − K elements to a value that ensures the associated inputs are not processed (0 in Figure 2). Therefore, it ensures that at most K out of every L inputs will be processed. In other words, it forces the RNN cell to skip (L − K) out of every L inputs. This ensures a strict upper bound on the computational cost of the model for a sequence and for each window, therefore alleviating the inter-sequence variability and intra-sequence variability issues. Similarly to other works, the binary state update L-size vector, u_W, is then obtained by binarizing the remaining values as in equation 1, for example by setting all values below a threshold to a value that ensures the associated inputs are not processed (0 in Figure 2). An example of the Skip-Window cell implementation is represented in Figure 2. In this case, selectK is implemented as a topK function. This enforces the strict constraint on the number of updates. The topK operation keeps unchanged the K highest values in ũ_{W,t} and sets the (L − K) others to 0. The corresponding architecture can be characterized as follows: s_t = u_t·S(s_{t−1}, x_t) + (1 − u_t)·s_{t−1} (5); ũ_{W,t+1} = γ·σ(W_w·(s_{t−1}, t) + b_w) + (1 − γ)·ũ_{W,t} (6); γ = 1 if i == 0, 0 otherwise (7); i = t mod L (8); ũ_{W,t}^K = topK(ũ_{W,t}) (9); u_t = f_binarize(ũ_{W,t}^K[i], thr) = 0 if ũ_{W,t}^K[i] < thr, 1 otherwise (10), where W_w is a weight matrix of size (N + 1) × L, N is the number of hidden units as defined by the RNN cell S, b_w is an L-vector bias, σ is the sigmoid function and mod is the modulo operation. 
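The topK instance of selectK (eq 9) followed by binarization (eq 10) is easy to sketch. A hedged NumPy version (function names are mine, not the paper's):

```python
import numpy as np

def select_k_top(u_window, k):
    """Eq (9), topK variant of selectK: keep the k largest scores in the
    window and zero the other L - k, so their inputs are never processed."""
    out = np.zeros_like(u_window)
    if k > 0:
        keep = np.argsort(u_window)[-k:]
        out[keep] = u_window[keep]
    return out

def binarize(u_k, thr):
    """Eq (10): process input i only if its surviving score reaches thr."""
    return (u_k >= thr).astype(float)

u = np.array([0.9, 0.1, 0.4, 0.7])   # importance scores for a window, L = 4
updates = binarize(select_k_top(u, k=2), thr=0.5)
```

At most k of the L gates can be 1, which is exactly the strict per-window upper bound on computation that SkipW enforces.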
Instead of a topK function for selectK, it is also possible to use a stochastic sampling mechanism that randomly selects (without replacement) K out of the L elements of ũ_W, where the probability of selecting the element of index i is proportional to ũ_W[i]. Some selectK alternatives are discussed and evaluated in Appendix H. Including the time step t in equation 6 is also optional; it can be replaced by any value ensuring the state is not static when no update is made in a window, for example the number of inputs since the last update or the number of windows already processed. Training the model. The model is trained to minimize a two-part loss, similarly to Campos et al. (2018). The first term measures the accuracy on the task, and the second one penalizes the inputs used:

$$L_{budget} = \lambda \sum_{t=1}^{T} u_t, \tag{11}$$

where λ is the cost associated with the use of a single input. More experimental details are provided in Appendix B. Error gradients. The model is differentiable except for the f_binarize and topK functions. To train the model using standard backpropagation, the straight-through estimator is used for f_binarize, as done in Campos et al. (2018). Other alternatives might involve reinforcement learning, such as REINFORCE (Williams, 1992), or, in the case of the topK function, the use of a differentiable topK as proposed in Xie et al. (2020). Early experiments using a differentiable topK (Xie et al., 2020) showed worse results than the straight-through estimator. This suggests that the straight-through estimator may be an interesting approximation for a topK operation. Adapting computational cost at inference. During inference, the tradeoff between model performance and computational cost can be adapted using two factors: K in equation 9 and thr in equation 10. These two parameters can be modified together or one at a time. Increasing/lowering the thr parameter in [0, 1] encourages the model to process fewer/more inputs.
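The stochastic selectK alternative mentioned at the start of this passage can be sketched in a few lines. This is a minimal illustration under one plausible reading of "probability proportional to ũ_W[i]" (iterated weighted draws without replacement), not the authors' code:

```python
import random

def stochastic_select_k(u_tilde, k, rng):
    """selectK alternative: sample K of the L indices without replacement,
    with P(pick i) proportional to u_tilde[i]; zero out the L - K others."""
    remaining = list(range(len(u_tilde)))
    weights = [u_tilde[i] for i in remaining]
    chosen = set()
    for _ in range(k):
        pos = rng.choices(range(len(remaining)), weights=weights, k=1)[0]
        chosen.add(remaining.pop(pos))
        weights.pop(pos)
    return [u if i in chosen else 0.0 for i, u in enumerate(u_tilde)]

rng = random.Random(0)
masked = stochastic_select_k([0.9, 0.1, 0.8, 0.2], k=2, rng=rng)
assert sum(1 for v in masked if v > 0) == 2  # exactly K entries survive
```

Like the deterministic topK, this preserves the strict K-per-window bound; it only changes which K inputs get the chance to be processed.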
Changing K in {0, . . . , L} forces the model to process at most K/L of the window. Choice of the window size hyper-parameter. By the nature of the model, the task can influence the choice of L. It can be hand-tuned or found using typical hyper-parameter search methods such as grid search. Choosing a small L allows the model to make update decisions for the near future only, but offers fewer choices of operating points. Similarly, a bigger L requires the model to predict its update decisions over a bigger time span, but offers more flexibility. At the extreme, when L = 1, each window consists of a single input. | This submission presents an extension of SkipRNN, Skip-Window, which splits input sequences into windows of length L from which only K samples can be used. This guarantees that the computational budget is never exceeded. Skip-Window implements this inductive bias by predicting L updating probabilities in parallel at the beginning of each window. L needs to be set prior to training, whereas K can be modified at test time. The model is evaluated on two tasks, namely a synthetic adding task and human activity recognition. The authors report latency and energy consumption on small platforms, showing the impact of this research direction in real applications. | SP:099cb12ac8ffe1e09ba4ff99a263194e7372c137 |
SkipW: Resource Adaptable RNN with Strict Upper Computational Limit | 1 INTRODUCTION. Since Recurrent Neural Networks (RNNs) were introduced by Williams et al. (1986), they have become one of the reference methods for processing sequences. A typical architecture is the Long Short-Term Memory network (LSTM), which enabled improvements in natural language processing such as large-vocabulary speech recognition (Sak et al., 2014; Li & Wu, 2015). Used with CNNs, they have also reached the state of the art in automatic image captioning (Vinyals et al., 2015). Deep learning models are now brought closer to the user rather than running in a distant cloud, helping to reduce latency and network congestion and to improve data security and privacy. However, smartphones and user devices impose additional constraints, such as limited computation or energy. Handling these constraints has become an active research topic (Zhang et al., 2017; 2018; Howard et al., 2019; Wu et al., 2019; Cai et al., 2020). User devices can also host multiple processes running at the same time and starting or stopping abruptly, modifying the constraints affecting each process. Few works have considered models that can be modified at run time to adapt to an evolving computational limit (Yu et al., 2019; Yu & Huang, 2019; Guerra et al., 2020; Jin et al., 2020). However, none of these focuses on sequences, and therefore none addresses the problem of adapting the model in the middle of a sequence. In this context, this paper introduces Skip-Window (SkipW), a flexible recurrent neural network architecture: its computational cost can be dynamically adapted during sequence analysis to meet changes in real-time constraints. The proposed architecture can be combined with any RNN cell and makes it possible to strictly limit the computational resources used, so as not to exceed a given budget.
Furthermore, empirical experiments on four data sets (Adding Task, MNIST, IMDB and HAR-2D-POSE) demonstrate that this subsampling architecture is interesting in itself: Skip-Window matches or exceeds the accuracy of existing approaches for a given computational cost. In addition, measurements on specific processors highlight that SkipW produces real computational and energy savings. 2 RELATED WORK. Typically, RNNs maintain a "state", a vector of variables, over time. This state is supposed to accumulate relevant information and is updated recursively. Each input of the sequence is typically a) processed by some deep layers and b) then combined with the previous state through some other deep layers to compute the new state. Hence, the RNN can be seen as a function taking a sequence of inputs x = (x_1, . . . , x_T) and recursively computing a set of states s = (s_1, . . . , s_T). Each state s_t is computed from s_{t−1} and x_t by a cell S of the RNN. As neural networks are increasingly run on limited hardware, recent research has focused on controlling their computational cost. 2.1 FLEXIBLE NEURAL NETWORKS. A few architectures have recently been designed to adapt the computational complexity of a Deep Neural Network (DNN) without reloading the whole model. This can be achieved by removing/adding neurons (Yu et al., 2019; Yu & Huang, 2019) or by modifying the quantization of the weights (Guerra et al., 2020; Jin et al., 2020). An efficient embedding of a mixture of Convolutional Neural Networks (CNNs) also makes it possible to add or remove several models at the same time, hence changing the computational cost (Ruiz & Verbeek, 2019). 2.1.1 THRRNN. For RNNs specifically, ThrRNN (Lambert et al., 2020) aims to control computation time by not processing some inputs. This is controlled by an update gate u_t.
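The gated-update idea behind this family of models can be sketched generically: a binary gate u_t decides at each step whether the wrapped cell runs (s_t = S(s_{t−1}, x_t)) or the state is simply copied (s_t = s_{t−1}), skipping the cell's cost. This toy illustration uses a hypothetical deterministic gate rather than a learned one like ThrRNN's:

```python
def gated_rnn_run(cell, gate, xs, s0):
    """Conditional computation: the cell update runs only when the gate fires.
    When u = 0 the state is copied, so the cell's cost is skipped entirely."""
    s, used = s0, []
    for t, x in enumerate(xs):
        u = gate(s, t)                 # binary update decision
        s = cell(s, x) if u else s     # s_t = u*S(s, x) + (1 - u)*s
        used.append(u)
    return s, used

# Toy components: additive cell, gate that fires every other step.
cell = lambda s, x: s + x
gate = lambda s, t: 1 if t % 2 == 0 else 0
state, used = gated_rnn_run(cell, gate, [1.0] * 6, s0=0.0)
assert state == 3.0 and used == [1, 0, 1, 0, 1, 0]
```

The different approaches surveyed below differ mainly in how the gate is computed and whether its total firing rate can be bounded.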
The tradeoff between the average accuracy and the average number of updates can be modified during inference by changing a single parameter thr. ThrRNN can wrap any RNN cell S:

$$u_t = f_{\mathrm{binarize}}(\tilde{u}_t, thr) = \begin{cases} 0 & \text{if } \tilde{u}_t < thr \\ 1 & \text{otherwise} \end{cases} \tag{1}$$
$$\Delta\tilde{u}_t = \sigma(W s_t + b) \tag{2}$$
$$\tilde{u}_{t+1} = u_t \Delta\tilde{u}_t + (1 - u_t)\left(\tilde{u}_t + \min(\Delta\tilde{u}_t, 1 - \tilde{u}_t)\right) \tag{3}$$
$$s_t = u_t S(s_{t-1}, x_t) + (1 - u_t) s_{t-1}. \tag{4}$$

When an input is processed, an update gate computes the quantity Δũ_t, which determines how many inputs will be skipped. In practice, the Δũ_t are accumulated in ũ_t until ũ_t ≥ thr. 2.2 RECURRENT NEURAL NETWORKS WITH LOW COMPUTATIONAL COMPLEXITY. Several architectures have been proposed to limit or reduce the computational cost of RNNs, but this cost cannot be adapted at inference. A first class of architectures dynamically reduces computation based on the input. SkipRNN (Campos et al., 2018) predates and is similar to ThrRNN, except that the binarization function does not change. A similar mechanism has been proposed by Zhang et al. (2019). Other architectures directly select the next input to process (Yeung et al., 2016; Yu et al., 2017; Hansen et al., 2019; Song et al., 2018). Early exit has also been investigated by Dennis et al. (2019). Tao et al. (2019) also use x_t as input to an update gate, as do Seo et al. (2018), Jernite et al. (2017), and Li et al. (2020). However, they do not skip any input but perform partial state updates. A second class of architectures focuses on reducing the overall cost of the RNN. FastRNN is an RNN augmented with a residual connection with two extra scalar parameters, and FastGRNN is an improved FastRNN: the residual connection is extended to a gate, and the RNN matrices are low-rank, sparse and quantized (Kusupati et al., 2018). Other architectures reduce the RNN length. Chan et al. (2016) train an encoder to reduce the input length. Yeung et al. (2016); Shan et al. (2018); Chen et al.
(2018) propose various mechanisms to summarize subsequences or windows of inputs. 2.3 RECURRENT NEURAL NETWORKS WITH HIERARCHY-DEPENDENT COMPLEXITY. A class of architectures relies on the concept of hierarchy levels to reduce complexity. These methods are mainly used in the context of multi-layer RNNs, where each layer is supposed to model a different level in the hierarchy (e.g., for a corpus, the levels could be documents, paragraphs, sentences, words, letters). These approaches are based on the assumption that a hierarchical separation exists within a sequence of inputs, which might not always be the case. In Koutnik et al. (2014), the hidden state is partitioned into different modules, and each module has its own clock period, meaning that the modules are updated at different times. Skipping updates of part of the hidden state decreases the computational cost. In Koutnik et al. (2014), the update periods are chosen arbitrarily, for example using an exponential series. For stacked RNNs, Chung et al. (2017) and Chang et al. (2017) conditionally update each layer based on a feature-level criterion or by dilating a skip connection. Layers close to the inputs model lower feature levels and are updated more frequently; further layers model higher-level features. In Chung et al. (2017), a layer modeling sentences would be updated, from the layer modeling words, only when a word is processed entirely (in a model fed character by character). Before the end of a word is reached, the state of the former layer is copied across input steps. 2.4 RELATIONSHIP TO OUR WORK. ThrRNN is the closest model to SkipW. Both are flexible RNNs and skip some inputs. However, ThrRNN optimizes the computational cost only on average over sequences.
This induces two variabilities: a) inter-sequence variability: the model will not use the same number of updates for every sequence; and b) intra-sequence variability: the number of updates will not be uniform across time steps, and updates may be concentrated in a certain part of the sequence. These two variabilities can cause the model to exceed its computational budget and, therefore, to either shut down or delay the output. SkipW does not have this problem, as it strictly enforces a computational constraint over each window of inputs. Other strategies for flexible models are not straightforward to apply to RNNs: they require specialized training algorithms, they have never been applied to the inputs of an RNN or used to make an RNN flexible, and it is not clear how they would need to be modified. Furthermore, these models adapt between independent inputs, whereas, for sequences, adaptation is necessary between time steps. RNN architectures with low complexity are orthogonal to our approach: they do not offer flexibility, but they could be combined with and benefit from our approach. However, SkipRNN (on which our approach is based) and related methods have one big advantage over others: by skipping inputs, they also skip any processing applied to an input, such as processing by an expensive CNN for images. As SkipW makes decisions over a window of inputs, it has some superficial similarity to methods summarizing windows or to hierarchical RNNs. However, SkipW a) does not summarize windows and b) does not even look at these inputs before deciding what to skip. 3 SKIP-WINDOW ARCHITECTURE. Skip-Window (SkipW) is a wrapper for an RNN cell S. It uses a conditional computation mechanism to skip some updates. Rather than at each input x_t, update gates are computed at the beginning of windows of inputs, that is, every L time steps (Figure 1). In other words, before any new L-size window of inputs, an L-size vector ũ_W is computed.
ũ_W[i] can be seen as the importance of input i in the window. The architecture then includes a selectK mechanism. This function takes as input the vector ũ_W and outputs the vector ũ_W^K, setting L − K elements to a value that ensures the associated inputs are not processed (0 in Figure 2). Therefore, it ensures that at most K out of every L inputs will be processed. In other words, it forces the RNN cell to skip (L − K) out of every L inputs. This guarantees a strict upper bound on the computational cost of the model for a sequence and for each window, thereby alleviating the inter-sequence and intra-sequence variability issues. Similarly to other works, the binary state-update L-size vector u_W is then obtained by binarizing the remaining values as in equation 1, for example by setting all values below a threshold to a value that ensures the associated inputs are not processed (0 in Figure 2). An example of the Skip-Window cell implementation is represented in Figure 2. In this case, selectK is implemented as a topK function, which enforces the strict constraint on the number of updates. The topK operation keeps the K highest values in ũ_{W,t} unchanged and sets the (L − K) others to 0. The corresponding architecture can be characterized as follows:

$$s_t = u_t \cdot S(s_{t-1}, x_t) + (1 - u_t) \cdot s_{t-1} \tag{5}$$
$$\tilde{u}_{W,t+1} = \gamma \cdot \sigma(W_w(s_{t-1}, t) + b_w) + (1 - \gamma) \cdot \tilde{u}_{W,t} \tag{6}$$
$$\gamma = \begin{cases} 1 & \text{if } i = 0 \\ 0 & \text{otherwise} \end{cases} \tag{7}$$
$$i = t \bmod L \tag{8}$$
$$\tilde{u}^{K}_{W,t} = \mathrm{topK}(\tilde{u}_{W,t}) \tag{9}$$
$$u_t = f_{\mathrm{binarize}}(\tilde{u}^{K}_{W,t}[i], thr) = \begin{cases} 0 & \text{if } \tilde{u}^{K}_{W,t}[i] < thr \\ 1 & \text{otherwise} \end{cases} \tag{10}$$

where W_w is a weight matrix of size (N + 1) × L, N is the number of hidden units as defined by the RNN cell S, b_w is an L-vector bias, σ is the sigmoid function, and mod is the modulo operation.
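To make the window mechanism concrete, here is a toy, pure-Python sketch of equations 5–10 with a scalar state. Everything specific here is an assumption for illustration: the wrapped "cell" is simply S(s, x) = s + x, and a made-up gate sigmoid(s + 0.1·i) stands in for the learned σ(W_w(s_{t−1}, t) + b_w); this is not the paper's implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def top_k_mask(u_tilde, k):
    # selectK as topK (eq. 9): keep the K largest scores, zero out the rest.
    kept = set(sorted(range(len(u_tilde)), key=lambda i: u_tilde[i], reverse=True)[:k])
    return [u if i in kept else 0.0 for i, u in enumerate(u_tilde)]

def skip_window_run(xs, L, K, thr=0.5):
    """Toy scalar Skip-Window: S(s, x) = s + x, hypothetical gate in place of eq. 6."""
    s, updates = 0.0, []
    for start in range(0, len(xs), L):
        window = xs[start:start + L]
        # Importance scores computed once, at the window start (gamma = 1).
        u_tilde = [sigmoid(s + 0.1 * i) for i in range(len(window))]
        u_k = top_k_mask(u_tilde, K)                 # eq. 9
        for i, x in enumerate(window):
            u = 1 if u_k[i] >= thr else 0            # eq. 10
            s = u * (s + x) + (1 - u) * s            # eq. 5
            updates.append(u)
    return s, updates

state, updates = skip_window_run([1.0] * 8, L=4, K=2)
# The strict bound holds: at most K = 2 of every L = 4 inputs are processed.
assert all(sum(updates[w:w + 4]) <= 2 for w in range(0, 8, 4))
```

Note how the bound is structural: zeroed entries of the mask can never pass the binarization threshold, so each window costs at most K cell updates regardless of the gate's values.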
Instead of a topK function for selectK, it is also possible to use a stochastic sampling mechanism that randomly selects (without replacement) K out of the L elements of ũ_W, where the probability of selecting the element of index i is proportional to ũ_W[i]. Some selectK alternatives are discussed and evaluated in Appendix H. Including the time step t in equation 6 is also optional; it can be replaced by any value ensuring the state is not static when no update is made in a window, for example the number of inputs since the last update or the number of windows already processed. Training the model. The model is trained to minimize a two-part loss, similarly to Campos et al. (2018). The first term measures the accuracy on the task, and the second one penalizes the inputs used:

$$L_{budget} = \lambda \sum_{t=1}^{T} u_t, \tag{11}$$

where λ is the cost associated with the use of a single input. More experimental details are provided in Appendix B. Error gradients. The model is differentiable except for the f_binarize and topK functions. To train the model using standard backpropagation, the straight-through estimator is used for f_binarize, as done in Campos et al. (2018). Other alternatives might involve reinforcement learning, such as REINFORCE (Williams, 1992), or, in the case of the topK function, the use of a differentiable topK as proposed in Xie et al. (2020). Early experiments using a differentiable topK (Xie et al., 2020) showed worse results than the straight-through estimator. This suggests that the straight-through estimator may be an interesting approximation for a topK operation. Adapting computational cost at inference. During inference, the tradeoff between model performance and computational cost can be adapted using two factors: K in equation 9 and thr in equation 10. These two parameters can be modified together or one at a time. Increasing/lowering the thr parameter in [0, 1] encourages the model to process fewer/more inputs.
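The stochastic selectK alternative mentioned at the start of this passage can be sketched in a few lines. This is a minimal illustration under one plausible reading of "probability proportional to ũ_W[i]" (iterated weighted draws without replacement), not the authors' code:

```python
import random

def stochastic_select_k(u_tilde, k, rng):
    """selectK alternative: sample K of the L indices without replacement,
    with P(pick i) proportional to u_tilde[i]; zero out the L - K others."""
    remaining = list(range(len(u_tilde)))
    weights = [u_tilde[i] for i in remaining]
    chosen = set()
    for _ in range(k):
        pos = rng.choices(range(len(remaining)), weights=weights, k=1)[0]
        chosen.add(remaining.pop(pos))
        weights.pop(pos)
    return [u if i in chosen else 0.0 for i, u in enumerate(u_tilde)]

rng = random.Random(0)
masked = stochastic_select_k([0.9, 0.1, 0.8, 0.2], k=2, rng=rng)
assert sum(1 for v in masked if v > 0) == 2  # exactly K entries survive
```

Like the deterministic topK, this preserves the strict K-per-window bound; it only changes which K inputs get the chance to be processed.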
Changing K in {0, . . . , L} forces the model to process at most K/L of the window. Choice of the window size hyper-parameter. By the nature of the model, the task can influence the choice of L. It can be hand-tuned or found using typical hyper-parameter search methods such as grid search. Choosing a small L allows the model to make update decisions for the near future only, but offers fewer choices of operating points. Similarly, a bigger L requires the model to predict its update decisions over a bigger time span, but offers more flexibility. At the extreme, when L = 1, each window consists of a single input. | The paper proposes Skip-Window or SkipW, an abstraction encapsulating RNN cells to actively skip updates, similar to some earlier works like Skip-RNN, Skim-RNN, and ThrRNN. The novelty of the method comes from having control over the total number of updates, bounding the overall computational budget, compared to previous methods, which didn't provide deterministic upper bounds and varied depending on the inputs. The idea is very simple and straightforward and can be looked at as a logical extension of the Skip-RNN line of work combined with a windowed approach on time series, as used in ShaRNN (Dennis et al., NeurIPS 2019). The entire time series is divided into windows of length L (a tunable parameter), and each window has a precomputed (from the final hidden state of the previous window) per-time-step (update inside the window) importance vector, which can be used as an indicator to update or not to update, following the binarization done in previous methods. The strict sparsification of this per-window importance vector to have only K non-zeros per window reduces compute to an upper-bound ratio of K/L. The method further uses another threshold term over the sparsified importance vector to control finer budget requirements if needed. The experimentation is done on two tasks: HAR-2D-Pose (with 32 time steps) and the adding task (with 50 time steps).
The evaluation shows that Skip-Window achieves good performance/accuracy compared to previous flexible RNNs, with a reduction in the total number of updates. Finally, the impressive part of the paper is the real-world evaluation on a Jetson Nano with a more complex workflow involving pose estimation from images for HAR-2D-Pose. | SP:099cb12ac8ffe1e09ba4ff99a263194e7372c137 |
SkipW: Resource Adaptable RNN with Strict Upper Computational Limit | 1 INTRODUCTION. Since Recurrent Neural Networks (RNNs) were introduced by Williams et al. (1986), they have become one of the reference methods for processing sequences. A typical architecture is the Long Short-Term Memory network (LSTM), which enabled improvements in natural language processing such as large-vocabulary speech recognition (Sak et al., 2014; Li & Wu, 2015). Used with CNNs, they have also reached the state of the art in automatic image captioning (Vinyals et al., 2015). Deep learning models are now brought closer to the user rather than running in a distant cloud, helping to reduce latency and network congestion and to improve data security and privacy. However, smartphones and user devices impose additional constraints, such as limited computation or energy. Handling these constraints has become an active research topic (Zhang et al., 2017; 2018; Howard et al., 2019; Wu et al., 2019; Cai et al., 2020). User devices can also host multiple processes running at the same time and starting or stopping abruptly, modifying the constraints affecting each process. Few works have considered models that can be modified at run time to adapt to an evolving computational limit (Yu et al., 2019; Yu & Huang, 2019; Guerra et al., 2020; Jin et al., 2020). However, none of these focuses on sequences, and therefore none addresses the problem of adapting the model in the middle of a sequence. In this context, this paper introduces Skip-Window (SkipW), a flexible recurrent neural network architecture: its computational cost can be dynamically adapted during sequence analysis to meet changes in real-time constraints. The proposed architecture can be combined with any RNN cell and makes it possible to strictly limit the computational resources used, so as not to exceed a given budget.
Furthermore, empirical experiments on four data sets (Adding Task, MNIST, IMDB and HAR-2D-POSE) demonstrate that this subsampling architecture is interesting in itself: Skip-Window matches or exceeds the accuracy of existing approaches for a given computational cost. In addition, measurements on specific processors highlight that SkipW produces real computational and energy savings. 2 RELATED WORK. Typically, RNNs maintain a "state", a vector of variables, over time. This state is supposed to accumulate relevant information and is updated recursively. Each input of the sequence is typically a) processed by some deep layers and b) then combined with the previous state through some other deep layers to compute the new state. Hence, the RNN can be seen as a function taking a sequence of inputs x = (x_1, . . . , x_T) and recursively computing a set of states s = (s_1, . . . , s_T). Each state s_t is computed from s_{t−1} and x_t by a cell S of the RNN. As neural networks are increasingly run on limited hardware, recent research has focused on controlling their computational cost. 2.1 FLEXIBLE NEURAL NETWORKS. A few architectures have recently been designed to adapt the computational complexity of a Deep Neural Network (DNN) without reloading the whole model. This can be achieved by removing/adding neurons (Yu et al., 2019; Yu & Huang, 2019) or by modifying the quantization of the weights (Guerra et al., 2020; Jin et al., 2020). An efficient embedding of a mixture of Convolutional Neural Networks (CNNs) also makes it possible to add or remove several models at the same time, hence changing the computational cost (Ruiz & Verbeek, 2019). 2.1.1 THRRNN. For RNNs specifically, ThrRNN (Lambert et al., 2020) aims to control computation time by not processing some inputs. This is controlled by an update gate u_t.
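The gated-update idea behind this family of models can be sketched generically: a binary gate u_t decides at each step whether the wrapped cell runs (s_t = S(s_{t−1}, x_t)) or the state is simply copied (s_t = s_{t−1}), skipping the cell's cost. This toy illustration uses a hypothetical deterministic gate rather than a learned one like ThrRNN's:

```python
def gated_rnn_run(cell, gate, xs, s0):
    """Conditional computation: the cell update runs only when the gate fires.
    When u = 0 the state is copied, so the cell's cost is skipped entirely."""
    s, used = s0, []
    for t, x in enumerate(xs):
        u = gate(s, t)                 # binary update decision
        s = cell(s, x) if u else s     # s_t = u*S(s, x) + (1 - u)*s
        used.append(u)
    return s, used

# Toy components: additive cell, gate that fires every other step.
cell = lambda s, x: s + x
gate = lambda s, t: 1 if t % 2 == 0 else 0
state, used = gated_rnn_run(cell, gate, [1.0] * 6, s0=0.0)
assert state == 3.0 and used == [1, 0, 1, 0, 1, 0]
```

The different approaches surveyed below differ mainly in how the gate is computed and whether its total firing rate can be bounded.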
The tradeoff between the average accuracy and the average number of updates can be modified during inference by changing a single parameter thr. ThrRNN can wrap any RNN cell S:

$$u_t = f_{\mathrm{binarize}}(\tilde{u}_t, thr) = \begin{cases} 0 & \text{if } \tilde{u}_t < thr \\ 1 & \text{otherwise} \end{cases} \tag{1}$$
$$\Delta\tilde{u}_t = \sigma(W s_t + b) \tag{2}$$
$$\tilde{u}_{t+1} = u_t \Delta\tilde{u}_t + (1 - u_t)\left(\tilde{u}_t + \min(\Delta\tilde{u}_t, 1 - \tilde{u}_t)\right) \tag{3}$$
$$s_t = u_t S(s_{t-1}, x_t) + (1 - u_t) s_{t-1}. \tag{4}$$

When an input is processed, an update gate computes the quantity Δũ_t, which determines how many inputs will be skipped. In practice, the Δũ_t are accumulated in ũ_t until ũ_t ≥ thr. 2.2 RECURRENT NEURAL NETWORKS WITH LOW COMPUTATIONAL COMPLEXITY. Several architectures have been proposed to limit or reduce the computational cost of RNNs, but this cost cannot be adapted at inference. A first class of architectures dynamically reduces computation based on the input. SkipRNN (Campos et al., 2018) predates and is similar to ThrRNN, except that the binarization function does not change. A similar mechanism has been proposed by Zhang et al. (2019). Other architectures directly select the next input to process (Yeung et al., 2016; Yu et al., 2017; Hansen et al., 2019; Song et al., 2018). Early exit has also been investigated by Dennis et al. (2019). Tao et al. (2019) also use x_t as input to an update gate, as do Seo et al. (2018), Jernite et al. (2017), and Li et al. (2020). However, they do not skip any input but perform partial state updates. A second class of architectures focuses on reducing the overall cost of the RNN. FastRNN is an RNN augmented with a residual connection with two extra scalar parameters, and FastGRNN is an improved FastRNN: the residual connection is extended to a gate, and the RNN matrices are low-rank, sparse and quantized (Kusupati et al., 2018). Other architectures reduce the RNN length. Chan et al. (2016) train an encoder to reduce the input length. Yeung et al. (2016); Shan et al. (2018); Chen et al.
(2018) propose various mechanisms to summarize subsequences or windows of inputs. 2.3 RECURRENT NEURAL NETWORKS WITH HIERARCHY-DEPENDENT COMPLEXITY. A class of architectures relies on the concept of hierarchy levels to reduce complexity. These methods are mainly used in the context of multi-layer RNNs, where each layer is supposed to model a different level in the hierarchy (e.g., for a corpus, the levels could be documents, paragraphs, sentences, words, letters). These approaches are based on the assumption that a hierarchical separation exists within a sequence of inputs, which might not always be the case. In Koutnik et al. (2014), the hidden state is partitioned into different modules, and each module has its own clock period, meaning that the modules are updated at different times. Skipping updates of part of the hidden state decreases the computational cost. In Koutnik et al. (2014), the update periods are chosen arbitrarily, for example using an exponential series. For stacked RNNs, Chung et al. (2017) and Chang et al. (2017) conditionally update each layer based on a feature-level criterion or by dilating a skip connection. Layers close to the inputs model lower feature levels and are updated more frequently; further layers model higher-level features. In Chung et al. (2017), a layer modeling sentences would be updated, from the layer modeling words, only when a word is processed entirely (in a model fed character by character). Before the end of a word is reached, the state of the former layer is copied across input steps. 2.4 RELATIONSHIP TO OUR WORK. ThrRNN is the closest model to SkipW. Both are flexible RNNs and skip some inputs. However, ThrRNN optimizes the computational cost only on average over sequences.
This induces two variabilities: a) inter-sequence variability: the model will not use the same number of updates for every sequence; and b) intra-sequence variability: the number of updates will not be uniform across time steps, and updates may be concentrated in a certain part of the sequence. These two variabilities can cause the model to exceed its computational budget and, therefore, to either shut down or delay the output. SkipW does not have this problem, as it strictly enforces a computational constraint over each window of inputs. Other strategies for flexible models are not straightforward to apply to RNNs: they require specialized training algorithms, they have never been applied to the inputs of an RNN or used to make an RNN flexible, and it is not clear how they would need to be modified. Furthermore, these models adapt between independent inputs, whereas, for sequences, adaptation is necessary between time steps. RNN architectures with low complexity are orthogonal to our approach: they do not offer flexibility, but they could be combined with and benefit from our approach. However, SkipRNN (on which our approach is based) and related methods have one big advantage over others: by skipping inputs, they also skip any processing applied to an input, such as processing by an expensive CNN for images. As SkipW makes decisions over a window of inputs, it has some superficial similarity to methods summarizing windows or to hierarchical RNNs. However, SkipW a) does not summarize windows and b) does not even look at these inputs before deciding what to skip. 3 SKIP-WINDOW ARCHITECTURE. Skip-Window (SkipW) is a wrapper for an RNN cell S. It uses a conditional computation mechanism to skip some updates. Rather than at each input x_t, update gates are computed at the beginning of windows of inputs, that is, every L time steps (Figure 1). In other words, before any new L-size window of inputs, an L-size vector ũ_W is computed.
ũ_W[i] can be seen as the importance of input i in the window. The architecture then includes a selectK mechanism. This function takes as input the vector ũ_W and outputs the vector ũ_W^K, setting L − K elements to a value that ensures the associated inputs are not processed (0 in Figure 2). Therefore, it ensures that at most K out of every L inputs will be processed. In other words, it forces the RNN cell to skip (L − K) out of every L inputs. This guarantees a strict upper bound on the computational cost of the model for a sequence and for each window, thereby alleviating the inter-sequence and intra-sequence variability issues. Similarly to other works, the binary state-update L-size vector u_W is then obtained by binarizing the remaining values as in equation 1, for example by setting all values below a threshold to a value that ensures the associated inputs are not processed (0 in Figure 2). An example of the Skip-Window cell implementation is represented in Figure 2. In this case, selectK is implemented as a topK function, which enforces the strict constraint on the number of updates. The topK operation keeps the K highest values in ũ_{W,t} unchanged and sets the (L − K) others to 0. The corresponding architecture can be characterized as follows:

$$s_t = u_t \cdot S(s_{t-1}, x_t) + (1 - u_t) \cdot s_{t-1} \tag{5}$$
$$\tilde{u}_{W,t+1} = \gamma \cdot \sigma(W_w(s_{t-1}, t) + b_w) + (1 - \gamma) \cdot \tilde{u}_{W,t} \tag{6}$$
$$\gamma = \begin{cases} 1 & \text{if } i = 0 \\ 0 & \text{otherwise} \end{cases} \tag{7}$$
$$i = t \bmod L \tag{8}$$
$$\tilde{u}^{K}_{W,t} = \mathrm{topK}(\tilde{u}_{W,t}) \tag{9}$$
$$u_t = f_{\mathrm{binarize}}(\tilde{u}^{K}_{W,t}[i], thr) = \begin{cases} 0 & \text{if } \tilde{u}^{K}_{W,t}[i] < thr \\ 1 & \text{otherwise} \end{cases} \tag{10}$$

where W_w is a weight matrix of size (N + 1) × L, N is the number of hidden units as defined by the RNN cell S, b_w is an L-vector bias, σ is the sigmoid function, and mod is the modulo operation.
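To make the window mechanism concrete, here is a toy, pure-Python sketch of equations 5–10 with a scalar state. Everything specific here is an assumption for illustration: the wrapped "cell" is simply S(s, x) = s + x, and a made-up gate sigmoid(s + 0.1·i) stands in for the learned σ(W_w(s_{t−1}, t) + b_w); this is not the paper's implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def top_k_mask(u_tilde, k):
    # selectK as topK (eq. 9): keep the K largest scores, zero out the rest.
    kept = set(sorted(range(len(u_tilde)), key=lambda i: u_tilde[i], reverse=True)[:k])
    return [u if i in kept else 0.0 for i, u in enumerate(u_tilde)]

def skip_window_run(xs, L, K, thr=0.5):
    """Toy scalar Skip-Window: S(s, x) = s + x, hypothetical gate in place of eq. 6."""
    s, updates = 0.0, []
    for start in range(0, len(xs), L):
        window = xs[start:start + L]
        # Importance scores computed once, at the window start (gamma = 1).
        u_tilde = [sigmoid(s + 0.1 * i) for i in range(len(window))]
        u_k = top_k_mask(u_tilde, K)                 # eq. 9
        for i, x in enumerate(window):
            u = 1 if u_k[i] >= thr else 0            # eq. 10
            s = u * (s + x) + (1 - u) * s            # eq. 5
            updates.append(u)
    return s, updates

state, updates = skip_window_run([1.0] * 8, L=4, K=2)
# The strict bound holds: at most K = 2 of every L = 4 inputs are processed.
assert all(sum(updates[w:w + 4]) <= 2 for w in range(0, 8, 4))
```

Note how the bound is structural: zeroed entries of the mask can never pass the binarization threshold, so each window costs at most K cell updates regardless of the gate's values.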
Instead of a topK function for selectK, it is also possible to use a stochastic sampling mechanism that randomly selects (without replacement) K out of the L elements of ũ_W, where the probability of selecting the element of index i is proportional to ũ_W[i]. Some selectK alternatives are discussed and evaluated in Appendix H. Including the time step t in equation 6 is also optional; it can be replaced by any value ensuring the state is not static when no update is made in a window, for example the number of inputs since the last update or the number of windows already processed. Training the model. The model is trained to minimize a two-part loss, similarly to Campos et al. (2018). The first term measures the accuracy on the task, and the second one penalizes the inputs used:

$$L_{budget} = \lambda \sum_{t=1}^{T} u_t, \tag{11}$$

where λ is the cost associated with the use of a single input. More experimental details are provided in Appendix B. Error gradients. The model is differentiable except for the f_binarize and topK functions. To train the model using standard backpropagation, the straight-through estimator is used for f_binarize, as done in Campos et al. (2018). Other alternatives might involve reinforcement learning, such as REINFORCE (Williams, 1992), or, in the case of the topK function, the use of a differentiable topK as proposed in Xie et al. (2020). Early experiments using a differentiable topK (Xie et al., 2020) showed worse results than the straight-through estimator. This suggests that the straight-through estimator may be an interesting approximation for a topK operation. Adapting computational cost at inference. During inference, the tradeoff between model performance and computational cost can be adapted using two factors: K in equation 9 and thr in equation 10. These two parameters can be modified together or one at a time. Increasing/lowering the thr parameter in [0, 1] encourages the model to process fewer/more inputs.
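The stochastic selectK alternative mentioned at the start of this passage can be sketched in a few lines. This is a minimal illustration under one plausible reading of "probability proportional to ũ_W[i]" (iterated weighted draws without replacement), not the authors' code:

```python
import random

def stochastic_select_k(u_tilde, k, rng):
    """selectK alternative: sample K of the L indices without replacement,
    with P(pick i) proportional to u_tilde[i]; zero out the L - K others."""
    remaining = list(range(len(u_tilde)))
    weights = [u_tilde[i] for i in remaining]
    chosen = set()
    for _ in range(k):
        pos = rng.choices(range(len(remaining)), weights=weights, k=1)[0]
        chosen.add(remaining.pop(pos))
        weights.pop(pos)
    return [u if i in chosen else 0.0 for i, u in enumerate(u_tilde)]

rng = random.Random(0)
masked = stochastic_select_k([0.9, 0.1, 0.8, 0.2], k=2, rng=rng)
assert sum(1 for v in masked if v > 0) == 2  # exactly K entries survive
```

Like the deterministic topK, this preserves the strict K-per-window bound; it only changes which K inputs get the chance to be processed.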
Changing K in { 0 . . . L } forces the model to process at most K of the L inputs in each window . Choice of the window size hyper-parameter . By the nature of the model , the task can influence the choice of L. It can be hand-tuned or found using typical hyper-parameter search methods such as grid search . Choosing a small L allows the model to make update decisions for the near future only but offers fewer operating points . Similarly , a bigger L requires the model to predict its update decisions over a larger time span but offers more flexibility . At the extreme , when L = 1 , each window consists of a single input . | This work introduces Skip-Window (SkipW), an approach that allows RNNs to have improved computational efficiency at the cost of accuracy. SkipW adds a procedure to existing RNN cells that allows them to process fewer inputs while remaining within a strict computational budget. This work demonstrates the benefits of SkipW through experiments on multiple datasets. | SP:099cb12ac8ffe1e09ba4ff99a263194e7372c137
Iterative Image Inpainting with Structural Similarity Mask for Anomaly Detection | 1 INTRODUCTION . Anomaly detection ( AD ) is the task of identifying rarely occurring events or items that differ from the majority of the data . It has many real-world applications , such as medical diagnosis ( Baur et al. , 2018 ; Zimmerer et al. , 2019a ) , defect detection in factories ( Matsubara et al. , 2018 ; Bergmann et al. , 2019 ) , early detection of plant disease ( Wang et al. , 2019 ) , and X-ray security screening in public spaces ( Griffin et al. , 2018 ) . Because manual inspection by humans is slow , expensive , and error-prone , automating visual inspection is a popular application of artificial intelligence . In transferring knowledge from humans to machines , there is a lack of anomalous samples due to their low event rate and the difficulty of annotating and categorizing the various anomalous defects beforehand . Therefore , AD methods typically take unsupervised approaches that try to learn compact features of the data from normal samples and detect anomalies by thresholding an anomaly score that measures the deviation from the learned features . To handle high-dimensional images and learn their features , it is popular to use deep neural networks ( Goodfellow et al. , 2016 ) . In this work , we focus on reconstruction-based unsupervised AD , which attempts to reconstruct only the normal dataset and classifies data as normal or anomalous by thresholding reconstruction errors ( An & Cho , 2015 ) . The architectures are based on deep neural networks such as deep autoencoders ( Hinton & Salakhutdinov , 2006 ) , variational autoencoders ( VAEs ) ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) , or autoencoders with generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) .
These models compress high-dimensional information onto the data manifold in a lower-dimensional latent space by reconstructing the input data under certain constraints on the latent space , such as a prior distribution or an information bottleneck ( Alemi et al. , 2016 ) . An issue with the reconstruction-based AD approach is that autoencoders fail to model small details and yield blurry image reconstructions . This is especially the case for high-frequency textures , such as carpet , leather , and tile ( Bergmann et al. , 2019 ) . Dehaene et al . ( 2020 ) also pointed out that there is no guarantee of the generalization of their behavior to out-of-sample data , and that local defects added to normal images could deteriorate whole images . From the viewpoint of the signal-to-noise ratio ( SNR ) , blurry reconstruction makes anomaly signals ( reconstruction errors in anomalous pixels ) unclear and increases normal noise ( reconstruction errors in normal pixels ) . Since the SNR determines the feasibility of AD by thresholding a sample-wise reconstruction error , a low SNR makes AD challenging . We point out an additional issue : the gap between the function optimized at training and the function evaluated at testing . Rethinking our goal in unsupervised AD , we conclude that it is not merely to minimize reconstruction errors but to maximize the SNR . Although models are trained to minimize a sample-wise reconstruction error , they are expected to have a large deviation on anomalous pixels and a small deviation on normal pixels at testing . In this paper , we propose I3AD ( Iterative Image Inpainting for Anomaly Detection ) . As shown in Figure 1 , our method utilizes an inpainting model that encodes only unmasked regions and reconstructs masked regions , instead of a vanilla autoencoder . Once the reconstruction errors are computed , they are recycled as an inpainting mask for the next iteration .
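The SNR perspective above can be made concrete. In the sketch below, the sample-wise score is the sum of per-pixel errors, and the SNR is taken as the mean error on anomalous pixels over the mean error on normal pixels; this particular ratio is our illustrative choice, not a definition from the paper:

```python
import numpy as np

def sample_score(errors):
    """Sample-wise anomaly score: sum of per-pixel reconstruction errors."""
    return np.sum(errors)

def snr(errors, anomaly_mask):
    """Illustrative SNR: mean error on anomalous pixels divided by mean
    error on normal pixels (our definition for this sketch)."""
    signal = errors[anomaly_mask == 1].mean()
    noise = errors[anomaly_mask == 0].mean()
    return signal / noise

errors = np.array([0.1, 0.1, 2.0, 0.1])   # one defective pixel
mask = np.array([0, 0, 1, 0])
ratio = snr(errors, mask)     # -> 20.0
score = sample_score(errors)  # -> 2.3
```

A blurry reconstruction both lowers the anomalous-pixel error and raises the normal-pixel error, shrinking this ratio and making the sample-wise threshold less reliable.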
We show that the iterative update enhances reconstruction quality and satisfies the expected objective of maximizing the expected SNR at testing . Experiments and analysis on the MVTecAD dataset show that our I3AD outperforms existing methods on nine categories , with an average +11.6 % improvement on the texture categories . 2 METHODOLOGY . 2.1 HIGH-LEVEL IDEA . We consider unsupervised AD using autoencoders . Here , we implicitly assume that anomalies show up in partial regions and that pixels in the surrounding regions obey the distribution of the normal dataset . Therefore , the sample-wise anomaly score based on reconstruction errors is a summation over two types of pixels : ( 1 ) pixels of normal regions ( normal noise ) and ( 2 ) pixels of anomalous regions ( anomaly signals ) . We expect an ideal model to have zero errors on normal regions and distinguishable per-pixel scores on anomalous regions , leading to a high SNR . Inheriting the vanilla autoencoder architecture does not help resolve the low-SNR issue mentioned by Bergmann et al . ( 2019 ) and Dehaene et al . ( 2020 ) . Indeed , autoencoders are forced to encode whole images , including the anomalous pixels of local defects , and attempt to decode whole images , including the normal background pixels of fine structures . They do not learn how to encode unseen anomalous pixels , so anomalous information can affect the decoding of the whole image . One approach to resolving this issue is a combination of a per-pixel identity function and a conditional autoencoder . Compared to vanilla autoencoders , conditional autoencoders can encode only normal regions and decode only anomalous regions , while the per-pixel identity function copies the remaining unreconstructed regions . This model architecture is the same as an image inpainting model . Indeed , deep inpainting models are designed as conditional autoencoders that encode unmasked regions and fill in masked regions under a certain mask matrix ( Yu et al. , 2019 ) .
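The combination of a per-pixel identity function with a conditional (inpainting) autoencoder described above can be sketched as follows; the toy mean-fill "generator" merely stands in for a trained inpainting network:

```python
import numpy as np

def inpaint_combine(x, mask, generator):
    """Per-pixel identity on unmasked (M = 1) pixels plus generator output
    on masked (M = 0) pixels: out = M * x + (1 - M) * G(M * x, M)."""
    x_hat = mask * x                 # the generator only sees unmasked pixels
    g_out = generator(x_hat, mask)
    return mask * x + (1 - mask) * g_out

def mean_fill(x_hat, mask):
    """Toy stand-in generator: fill every pixel with the mean of the
    visible (unmasked) pixels."""
    visible_mean = x_hat[mask == 1].mean()
    return np.full_like(x_hat, visible_mean)

x = np.array([[1.0, 2.0], [3.0, 4.0]])
m = np.array([[1.0, 1.0], [1.0, 0.0]])   # bottom-right pixel is masked
out = inpaint_combine(x, m, mean_fill)
# out -> [[1., 2.], [3., 2.]]
```

Because unmasked pixels pass through the identity untouched, reconstruction errors on normal regions come only from the pixels the generator was asked to fill.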
However , the image inpainting method for AD falls into a tautology trap : we do not know a perfect inpainting mask matrix in advance , and detecting anomalous regions is the main goal to achieve . The key ideas to disentangle this tautology are mask generation from the anomaly score and iterative updating of the mask matrix . Updating the inpainting mask matrix dynamically controls the balance of encoded and decoded information with a pixel-wise confidence level of the anomaly scores . The generator gradually receives more information about potentially normal pixels and focuses on the suspected pixels during iterations . Furthermore , this process not only reduces background noise but also improves the SNR directly . 2.2 ITERATIVE IMAGE INPAINTING FOR ANOMALY DETECTION . Following the above discussion , we construct our I3AD method from an inpainting generator and a mask generation module . We explain the mask generation module in detail in the next subsection . Our model overview is depicted in Figure 2 . We construct an inpainting generator using conditional generative adversarial networks ( cGANs ) ( Isola et al. , 2017 ) and train its networks on a general image inpainting task over a normal dataset . We feed normal images partially hidden by randomly generated masks into the generator networks and train them to decode the masked pixels from the unmasked pixels and the corresponding Boolean mask matrix . A discriminator network distinguishes generated images from normal images . The generator is rewarded for fooling the discriminator , while the discriminator is rewarded for detecting the generated images . This training can be considered a two-player min-max game in which the generator and the discriminator compete .
As a result , the inpainting model tries to find the optimal point of the loss function below :

$$\min_G \max_D \; \mathbb{E}_{x \sim P_{data}(x)}[\log D(x)] + \mathbb{E}_{x \sim P_{data}(x),\, M \sim P(M)}[\log(1 - D(G(\hat{x} = M \odot x , M)))] ,$$

where x and x̂ are real samples from the data distribution P_data ( x ) and their masked counterparts , ⊙ is the element-wise product , M is the corresponding Boolean mask matrix generated from the random distribution P ( M ) , G ( x̂ , M ) is an image inpainting network that takes an incomplete image and a mask matrix , and D ( x ) denotes a binary classifier deciding whether an image is generated or real . We borrow and customize the Spectral-Normalized Markovian GAN ( SN-PatchGAN ) architecture following Yu et al . ( 2019 ) . The network consists of two parts : a coarse-to-fine generator network with an attention module and gated convolutions , and a spectral-normalized Markovian ( patch ) discriminator network . Our I3AD is expected to handle more finely structured masks than the usual free-form masks . Therefore , to better handle such irregular masks , we apply the self-attention module ( Zhang et al. , 2018 ) instead of the contextual attention module originally designed for large rectangular masks as described in Yu et al . ( 2018 ; 2019 ) . To stabilize the training of the GAN , we adopted spectral normalization ( Miyato et al. , 2018 ) for the discriminator 's layers . As an approximation of the min-max objective ( Miyato et al. , 2018 ) , we also derived the loss functions for the generator $\mathcal{L}_G$ and the discriminator $\mathcal{L}_D$ :

$$\mathcal{L}_G = -\mathbb{E}_{x \sim P_{data}(x),\, M \sim P(M)}[D^{sn}(G(\hat{x} = M \odot x , M))]$$

$$\mathcal{L}_D = \mathbb{E}_{x \sim P_{data}(x)}[\mathrm{ReLU}(1 - D^{sn}(x))] + \mathbb{E}_{x \sim P_{data}(x),\, M \sim P(M)}[\mathrm{ReLU}(1 + D^{sn}(G(\hat{x} = M \odot x , M)))] ,$$

where $D^{sn}(x)$ denotes the spectral-normalized discriminator and ReLU is the rectified linear unit activation function , defined by ReLU ( x ) = max ( 0 , x ) . For the generator network , we use a spatially discounted l1 reconstruction loss ( Yu et al.
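The hinge-style losses above translate directly into code. This sketch evaluates $\mathcal{L}_G$ and $\mathcal{L}_D$ on toy discriminator scores rather than real network outputs:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def generator_loss(d_fake):
    """L_G = -E[D_sn(G(x_hat, M))], averaged over a batch of
    discriminator scores on inpainted images."""
    return -np.mean(d_fake)

def discriminator_loss(d_real, d_fake):
    """Hinge loss: L_D = E[ReLU(1 - D_sn(x))] + E[ReLU(1 + D_sn(G(...)))]."""
    return np.mean(relu(1.0 - d_real)) + np.mean(relu(1.0 + d_fake))

d_real = np.array([1.5, 0.5])    # scores on real images
d_fake = np.array([-1.2, 0.2])   # scores on inpainted images
lg = generator_loss(d_fake)      # -> 0.5
ld = discriminator_loss(d_real, d_fake)  # -> 0.85
```

The hinge terms only penalize real scores below +1 and fake scores above −1, which is what gives SN-GAN-style training its stability margin.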
, 2018 ) . At the test step , we fix all trainable parameters of the I3AD generator . The I3AD generator receives test images , which may be normal or anomalous , together with adaptive mask matrices . The mask matrices are constructed from the pixel-wise reconstruction errors between the original images and the images generated at the previous iteration step . Since the mask matrices are dynamically updated and shrunk during iterations , the I3AD generator generates the masked regions intensively , leveraging the gradually increasing information from the surrounding unmasked regions . 2.3 INPAINTING MASK OF STRUCTURAL SIMILARITY ( SSIM MASK ) . In anomaly segmentation tasks , the structural similarity ( SSIM ) index ( Wang et al. , 2004 ) sharply measures small anomalous changes ( Bergmann et al. , 2018 ) . Details of the SSIM calculation are in Appendix A . We propose the structural similarity mask ( SSIM-Mask ) to mask anomalous pixels during test iterations . The SSIM-Mask $M_i$ is a binary mask obtained by thresholding the pixel-wise SSIM index between the input image $x_0$ and the reconstructed image $\tilde{x}_i$ at the $i$-th iteration step , defined as

$$M_i = \begin{cases} 1 & \text{if } a_i(x) \ge u \\ 0 & \text{otherwise} \end{cases} , \qquad a_i(x) = \mathrm{SSIM}(x_0 , \tilde{x}_i) , \qquad \tilde{x}_i = G(\hat{x}_{i-1} , M_{i-1}) ,$$

where u denotes the threshold level for the binary classification . After N iterations , we use the N-th SSIM anomaly score $a_N(x)$ for AD evaluation . 2.4 MASK INITIALIZATION . We have no mask information at the first iteration step during testing , so we use four checkerboard matrices to initialize the masks . Figure 6 shows examples of the initialized masks . The generator encodes the pixels of test images in the white regions and decodes the pixels in the black boxes . These black regions are mutually exclusive between the four masks and cover the target images collectively . Therefore , we combine the four generated images into a single whole reconstruction . 2.5 ITERATION STEPS AND STOP CRITERIA . We expect no masked region for normal images and some locally masked regions for anomalous images .
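A sketch of the SSIM-Mask thresholding and of one possible checkerboard initialization follows. The exact 2×2-periodic pattern is our assumption (Figure 6 is not reproduced here); only the stated properties matter: four masks whose decoded regions are mutually exclusive and jointly cover the image.

```python
import numpy as np

def ssim_mask(ssim_map, u):
    """SSIM-Mask: M = 1 (keep/encode) where the pixel-wise SSIM index is
    at least u, M = 0 (mask/decode) on suspected anomalous pixels."""
    return (ssim_map >= u).astype(float)

def checkerboard_masks(h, w):
    """Four initial masks whose decoded (M = 0) regions are mutually
    exclusive and jointly cover the whole image (assumed pattern)."""
    masks = []
    for dy in (0, 1):
        for dx in (0, 1):
            m = np.ones((h, w))
            m[dy::2, dx::2] = 0.0   # this quarter of the pixels is decoded
            masks.append(m)
    return masks

masks = checkerboard_masks(4, 4)
coverage = sum(1.0 - m for m in masks)   # each pixel decoded exactly once
m1 = ssim_mask(np.array([0.9, 0.3, 0.6]), u=0.5)
# m1 -> [1., 0., 1.]
```

Combining the four generated images then yields one full reconstruction in which every pixel has been decoded exactly once.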
Therefore , applying iterative inpainting to some samples of the training dataset can be used to estimate a number of iteration steps sufficient to remove the masks on normal pixels . Since I3AD decodes only the masked regions , Mi+1 will almost always be a subset of Mi . Therefore , we can set an early-stop criterion based on whether the difference between Mi and Mi+1 is small relative to the masked regions of Mi . | This paper presents a method for contrastive anomaly detection (AD) using an iterative masked conditional autoencoder inpainting approach. An autoencoder network is trained using an adversarial approach to reconstruct a randomly masked part of the input image. At test time a mask is derived from the generated anomaly map (using the SSIM index between the input and reconstructed image) and used to mask the input, and the process is repeated N times. The method is shown to produce SOTA results on the MVTec AD benchmark. | SP:3f2ca182ccafb5084013ee07613aaaa3bbbee930
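The early-stop criterion suggested above might look like the sketch below; the tolerance `eps` and the exact change-to-masked-region ratio are our assumptions:

```python
import numpy as np

def should_stop(mask_prev, mask_next, eps=0.05):
    """Stop iterating when the change between consecutive masks is small
    relative to the currently masked (M = 0) region of mask_prev."""
    masked_prev = np.sum(mask_prev == 0)
    if masked_prev == 0:             # nothing left to inpaint
        return True
    changed = np.sum(mask_prev != mask_next)
    return changed / masked_prev < eps

m_i = np.array([1.0, 0.0, 0.0, 0.0, 1.0])
stop1 = should_stop(m_i, m_i)                                     # unchanged mask
stop2 = should_stop(m_i, np.array([1.0, 1.0, 1.0, 0.0, 1.0]))     # still shrinking
```

Because the masked region only shrinks across iterations, this criterion converges for normal images once no suspected pixels remain.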
Iterative Image Inpainting with Structural Similarity Mask for Anomaly Detection | 1 INTRODUCTION . Anomaly detection ( AD ) is the task of identifying rarely occurring events or items that differ from the majority of the data . It has many real-world applications , such as medical diagnosis ( Baur et al. , 2018 ; Zimmerer et al. , 2019a ) , defect detection in factories ( Matsubara et al. , 2018 ; Bergmann et al. , 2019 ) , early detection of plant disease ( Wang et al. , 2019 ) , and X-ray security screening in public spaces ( Griffin et al. , 2018 ) . Because manual inspection by humans is slow , expensive , and error-prone , automating visual inspection is a popular application of artificial intelligence . In transferring knowledge from humans to machines , there is a lack of anomalous samples due to their low event rate and the difficulty of annotating and categorizing the various anomalous defects beforehand . Therefore , AD methods typically take unsupervised approaches that try to learn compact features of the data from normal samples and detect anomalies by thresholding an anomaly score that measures the deviation from the learned features . To handle high-dimensional images and learn their features , it is popular to use deep neural networks ( Goodfellow et al. , 2016 ) . In this work , we focus on reconstruction-based unsupervised AD , which attempts to reconstruct only the normal dataset and classifies data as normal or anomalous by thresholding reconstruction errors ( An & Cho , 2015 ) . The architectures are based on deep neural networks such as deep autoencoders ( Hinton & Salakhutdinov , 2006 ) , variational autoencoders ( VAEs ) ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) , or autoencoders with generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) .
These models compress high-dimensional information onto the data manifold in a lower-dimensional latent space by reconstructing the input data under certain constraints on the latent space , such as a prior distribution or an information bottleneck ( Alemi et al. , 2016 ) . An issue with the reconstruction-based AD approach is that autoencoders fail to model small details and yield blurry image reconstructions . This is especially the case for high-frequency textures , such as carpet , leather , and tile ( Bergmann et al. , 2019 ) . Dehaene et al . ( 2020 ) also pointed out that there is no guarantee of the generalization of their behavior to out-of-sample data , and that local defects added to normal images could deteriorate whole images . From the viewpoint of the signal-to-noise ratio ( SNR ) , blurry reconstruction makes anomaly signals ( reconstruction errors in anomalous pixels ) unclear and increases normal noise ( reconstruction errors in normal pixels ) . Since the SNR determines the feasibility of AD by thresholding a sample-wise reconstruction error , a low SNR makes AD challenging . We point out an additional issue : the gap between the function optimized at training and the function evaluated at testing . Rethinking our goal in unsupervised AD , we conclude that it is not merely to minimize reconstruction errors but to maximize the SNR . Although models are trained to minimize a sample-wise reconstruction error , they are expected to have a large deviation on anomalous pixels and a small deviation on normal pixels at testing . In this paper , we propose I3AD ( Iterative Image Inpainting for Anomaly Detection ) . As shown in Figure 1 , our method utilizes an inpainting model that encodes only unmasked regions and reconstructs masked regions , instead of a vanilla autoencoder . Once the reconstruction errors are computed , they are recycled as an inpainting mask for the next iteration .
We show that the iterative update enhances reconstruction quality and satisfies the expected objective of maximizing the expected SNR at testing . Experiments and analysis on the MVTecAD dataset show that our I3AD outperforms existing methods on nine categories , with an average +11.6 % improvement on the texture categories . 2 METHODOLOGY . 2.1 HIGH-LEVEL IDEA . We consider unsupervised AD using autoencoders . Here , we implicitly assume that anomalies show up in partial regions and that pixels in the surrounding regions obey the distribution of the normal dataset . Therefore , the sample-wise anomaly score based on reconstruction errors is a summation over two types of pixels : ( 1 ) pixels of normal regions ( normal noise ) and ( 2 ) pixels of anomalous regions ( anomaly signals ) . We expect an ideal model to have zero errors on normal regions and distinguishable per-pixel scores on anomalous regions , leading to a high SNR . Inheriting the vanilla autoencoder architecture does not help resolve the low-SNR issue mentioned by Bergmann et al . ( 2019 ) and Dehaene et al . ( 2020 ) . Indeed , autoencoders are forced to encode whole images , including the anomalous pixels of local defects , and attempt to decode whole images , including the normal background pixels of fine structures . They do not learn how to encode unseen anomalous pixels , so anomalous information can affect the decoding of the whole image . One approach to resolving this issue is a combination of a per-pixel identity function and a conditional autoencoder . Compared to vanilla autoencoders , conditional autoencoders can encode only normal regions and decode only anomalous regions , while the per-pixel identity function copies the remaining unreconstructed regions . This model architecture is the same as an image inpainting model . Indeed , deep inpainting models are designed as conditional autoencoders that encode unmasked regions and fill in masked regions under a certain mask matrix ( Yu et al. , 2019 ) .
However , the image inpainting method for AD falls into a tautology trap : we do not know a perfect inpainting mask matrix in advance , and detecting anomalous regions is the main goal to achieve . The key ideas to disentangle this tautology are mask generation from the anomaly score and iterative updating of the mask matrix . Updating the inpainting mask matrix dynamically controls the balance of encoded and decoded information with a pixel-wise confidence level of the anomaly scores . The generator gradually receives more information about potentially normal pixels and focuses on the suspected pixels during iterations . Furthermore , this process not only reduces background noise but also improves the SNR directly . 2.2 ITERATIVE IMAGE INPAINTING FOR ANOMALY DETECTION . Following the above discussion , we construct our I3AD method from an inpainting generator and a mask generation module . We explain the mask generation module in detail in the next subsection . Our model overview is depicted in Figure 2 . We construct an inpainting generator using conditional generative adversarial networks ( cGANs ) ( Isola et al. , 2017 ) and train its networks on a general image inpainting task over a normal dataset . We feed normal images partially hidden by randomly generated masks into the generator networks and train them to decode the masked pixels from the unmasked pixels and the corresponding Boolean mask matrix . A discriminator network distinguishes generated images from normal images . The generator is rewarded for fooling the discriminator , while the discriminator is rewarded for detecting the generated images . This training can be considered a two-player min-max game in which the generator and the discriminator compete .
As a result , the inpainting model tries to find the optimal point of the loss function below :

$$\min_G \max_D \; \mathbb{E}_{x \sim P_{data}(x)}[\log D(x)] + \mathbb{E}_{x \sim P_{data}(x),\, M \sim P(M)}[\log(1 - D(G(\hat{x} = M \odot x , M)))] ,$$

where x and x̂ are real samples from the data distribution P_data ( x ) and their masked counterparts , ⊙ is the element-wise product , M is the corresponding Boolean mask matrix generated from the random distribution P ( M ) , G ( x̂ , M ) is an image inpainting network that takes an incomplete image and a mask matrix , and D ( x ) denotes a binary classifier deciding whether an image is generated or real . We borrow and customize the Spectral-Normalized Markovian GAN ( SN-PatchGAN ) architecture following Yu et al . ( 2019 ) . The network consists of two parts : a coarse-to-fine generator network with an attention module and gated convolutions , and a spectral-normalized Markovian ( patch ) discriminator network . Our I3AD is expected to handle more finely structured masks than the usual free-form masks . Therefore , to better handle such irregular masks , we apply the self-attention module ( Zhang et al. , 2018 ) instead of the contextual attention module originally designed for large rectangular masks as described in Yu et al . ( 2018 ; 2019 ) . To stabilize the training of the GAN , we adopted spectral normalization ( Miyato et al. , 2018 ) for the discriminator 's layers . As an approximation of the min-max objective ( Miyato et al. , 2018 ) , we also derived the loss functions for the generator $\mathcal{L}_G$ and the discriminator $\mathcal{L}_D$ :

$$\mathcal{L}_G = -\mathbb{E}_{x \sim P_{data}(x),\, M \sim P(M)}[D^{sn}(G(\hat{x} = M \odot x , M))]$$

$$\mathcal{L}_D = \mathbb{E}_{x \sim P_{data}(x)}[\mathrm{ReLU}(1 - D^{sn}(x))] + \mathbb{E}_{x \sim P_{data}(x),\, M \sim P(M)}[\mathrm{ReLU}(1 + D^{sn}(G(\hat{x} = M \odot x , M)))] ,$$

where $D^{sn}(x)$ denotes the spectral-normalized discriminator and ReLU is the rectified linear unit activation function , defined by ReLU ( x ) = max ( 0 , x ) . For the generator network , we use a spatially discounted l1 reconstruction loss ( Yu et al.
, 2018 ) . At the test step , we fix all trainable parameters of the I3AD generator . The I3AD generator receives test images , which may be normal or anomalous , together with adaptive mask matrices . The mask matrices are constructed from the pixel-wise reconstruction errors between the original images and the images generated at the previous iteration step . Since the mask matrices are dynamically updated and shrunk during iterations , the I3AD generator generates the masked regions intensively , leveraging the gradually increasing information from the surrounding unmasked regions . 2.3 INPAINTING MASK OF STRUCTURAL SIMILARITY ( SSIM MASK ) . In anomaly segmentation tasks , the structural similarity ( SSIM ) index ( Wang et al. , 2004 ) sharply measures small anomalous changes ( Bergmann et al. , 2018 ) . Details of the SSIM calculation are in Appendix A . We propose the structural similarity mask ( SSIM-Mask ) to mask anomalous pixels during test iterations . The SSIM-Mask $M_i$ is a binary mask obtained by thresholding the pixel-wise SSIM index between the input image $x_0$ and the reconstructed image $\tilde{x}_i$ at the $i$-th iteration step , defined as

$$M_i = \begin{cases} 1 & \text{if } a_i(x) \ge u \\ 0 & \text{otherwise} \end{cases} , \qquad a_i(x) = \mathrm{SSIM}(x_0 , \tilde{x}_i) , \qquad \tilde{x}_i = G(\hat{x}_{i-1} , M_{i-1}) ,$$

where u denotes the threshold level for the binary classification . After N iterations , we use the N-th SSIM anomaly score $a_N(x)$ for AD evaluation . 2.4 MASK INITIALIZATION . We have no mask information at the first iteration step during testing , so we use four checkerboard matrices to initialize the masks . Figure 6 shows examples of the initialized masks . The generator encodes the pixels of test images in the white regions and decodes the pixels in the black boxes . These black regions are mutually exclusive between the four masks and cover the target images collectively . Therefore , we combine the four generated images into a single whole reconstruction . 2.5 ITERATION STEPS AND STOP CRITERIA . We expect no masked region for normal images and some locally masked regions for anomalous images .
Therefore , applying iterative inpainting to some samples of the training dataset can be used to estimate a number of iteration steps sufficient to remove the masks on normal pixels . Since I3AD decodes only the masked regions , Mi+1 will almost always be a subset of Mi . Therefore , we can set an early-stop criterion based on whether the difference between Mi and Mi+1 is small relative to the masked regions of Mi . | This work proposed a novel learning strategy for unsupervised anomaly detection. In particular, the authors propose to use an iterative mask generation process based on image inpainting and reduction of a structural similarity metric (SSIM) between the input image and its reconstructed version. For evaluation purposes, the authors resort to the public MVTec benchmark, showing better results than the baselines. Please find my comments below: | SP:3f2ca182ccafb5084013ee07613aaaa3bbbee930
Iterative Image Inpainting with Structural Similarity Mask for Anomaly Detection | 1 INTRODUCTION . Anomaly detection ( AD ) is the task of identifying rarely occurring events or items that differ from the majority of the data . It has many real-world applications , such as medical diagnosis ( Baur et al. , 2018 ; Zimmerer et al. , 2019a ) , defect detection in factories ( Matsubara et al. , 2018 ; Bergmann et al. , 2019 ) , early detection of plant disease ( Wang et al. , 2019 ) , and X-ray security screening in public spaces ( Griffin et al. , 2018 ) . Because manual inspection by humans is slow , expensive , and error-prone , automating visual inspection is a popular application of artificial intelligence . In transferring knowledge from humans to machines , there is a lack of anomalous samples due to their low event rate and the difficulty of annotating and categorizing the various anomalous defects beforehand . Therefore , AD methods typically take unsupervised approaches that try to learn compact features of the data from normal samples and detect anomalies by thresholding an anomaly score that measures the deviation from the learned features . To handle high-dimensional images and learn their features , it is popular to use deep neural networks ( Goodfellow et al. , 2016 ) . In this work , we focus on reconstruction-based unsupervised AD , which attempts to reconstruct only the normal dataset and classifies data as normal or anomalous by thresholding reconstruction errors ( An & Cho , 2015 ) . The architectures are based on deep neural networks such as deep autoencoders ( Hinton & Salakhutdinov , 2006 ) , variational autoencoders ( VAEs ) ( Kingma & Welling , 2013 ; Rezende et al. , 2014 ) , or autoencoders with generative adversarial networks ( GANs ) ( Goodfellow et al. , 2014 ) .
These models compress high-dimensional information onto the data manifold in a lower-dimensional latent space by reconstructing the input data under certain constraints on the latent space , such as a prior distribution or an information bottleneck ( Alemi et al. , 2016 ) . An issue with the reconstruction-based AD approach is that autoencoders fail to model small details and yield blurry image reconstructions . This is especially the case for high-frequency textures , such as carpet , leather , and tile ( Bergmann et al. , 2019 ) . Dehaene et al . ( 2020 ) also pointed out that there is no guarantee of the generalization of their behavior to out-of-sample data , and that local defects added to normal images could deteriorate whole images . From the viewpoint of the signal-to-noise ratio ( SNR ) , blurry reconstruction makes anomaly signals ( reconstruction errors in anomalous pixels ) unclear and increases normal noise ( reconstruction errors in normal pixels ) . Since the SNR determines the feasibility of AD by thresholding a sample-wise reconstruction error , a low SNR makes AD challenging . We point out an additional issue : the gap between the function optimized at training and the function evaluated at testing . Rethinking our goal in unsupervised AD , we conclude that it is not merely to minimize reconstruction errors but to maximize the SNR . Although models are trained to minimize a sample-wise reconstruction error , they are expected to have a large deviation on anomalous pixels and a small deviation on normal pixels at testing . In this paper , we propose I3AD ( Iterative Image Inpainting for Anomaly Detection ) . As shown in Figure 1 , our method utilizes an inpainting model that encodes only unmasked regions and reconstructs masked regions , instead of a vanilla autoencoder . Once the reconstruction errors are computed , they are recycled as an inpainting mask for the next iteration .
We show that the iterative update enhances reconstruction quality and satisfies the expected objective of maximizing the expected SNR at testing . Experiments and analysis on the MVTecAD dataset show that our I3AD outperforms existing methods on nine categories , with an average +11.6 % improvement on the texture categories . 2 METHODOLOGY . 2.1 HIGH-LEVEL IDEA . We consider unsupervised AD using autoencoders . Here , we implicitly assume that anomalies show up in partial regions and that pixels in the surrounding regions obey the distribution of the normal dataset . Therefore , the sample-wise anomaly score based on reconstruction errors is a summation over two types of pixels : ( 1 ) pixels of normal regions ( normal noise ) and ( 2 ) pixels of anomalous regions ( anomaly signals ) . We expect an ideal model to have zero errors on normal regions and distinguishable per-pixel scores on anomalous regions , leading to a high SNR . Inheriting the vanilla autoencoder architecture does not help resolve the low-SNR issue mentioned by Bergmann et al . ( 2019 ) and Dehaene et al . ( 2020 ) . Indeed , autoencoders are forced to encode whole images , including the anomalous pixels of local defects , and attempt to decode whole images , including the normal background pixels of fine structures . They do not learn how to encode unseen anomalous pixels , so anomalous information can affect the decoding of the whole image . One approach to resolving this issue is a combination of a per-pixel identity function and a conditional autoencoder . Compared to vanilla autoencoders , conditional autoencoders can encode only normal regions and decode only anomalous regions , while the per-pixel identity function copies the remaining unreconstructed regions . This model architecture is the same as an image inpainting model . Indeed , deep inpainting models are designed as conditional autoencoders that encode unmasked regions and fill in masked regions under a certain mask matrix ( Yu et al. , 2019 ) .
However , the image inpainting method for AD falls into a tautology trap : we do not know a perfect inpainting mask matrix in advance , and detecting anomalous regions is the main goal to achieve . The key ideas to disentangle this tautology are mask generation from the anomaly score and iterative updating of the mask matrix . Updating the inpainting mask matrix dynamically controls the balance of encoded and decoded information with a pixel-wise confidence level of the anomaly scores . The generator gradually receives more information about potentially normal pixels and focuses on the suspected pixels during iterations . Furthermore , this process not only reduces background noise but also improves the SNR directly . 2.2 ITERATIVE IMAGE INPAINTING FOR ANOMALY DETECTION . Following the above discussion , we construct our I3AD method from an inpainting generator and a mask generation module . We explain the mask generation module in detail in the next subsection . Our model overview is depicted in Figure 2 . We construct an inpainting generator using conditional generative adversarial networks ( cGANs ) ( Isola et al. , 2017 ) and train its networks on a general image inpainting task over a normal dataset . We feed normal images partially hidden by randomly generated masks into the generator networks and train them to decode the masked pixels from the unmasked pixels and the corresponding Boolean mask matrix . A discriminator network distinguishes generated images from normal images . The generator is rewarded for fooling the discriminator , while the discriminator is rewarded for detecting the generated images . This training can be considered a two-player min-max game in which the generator and the discriminator compete .
As a result, the inpainting model seeks the optimal point of the loss function below:

min_G max_D E_{x∼P_data(x)} [ log D(x) ] + E_{x∼P_data(x), M∼P(M)} [ log ( 1 − D( G( x̂ = M ⊙ x, M ) ) ) ],

where x and x̂ are real samples from the data distribution P_data(x) and their masked versions, ⊙ is the element-wise product, and M is the corresponding Boolean mask matrix drawn from the random distribution P(M). G(x̂, M) is an image inpainting network that takes an incomplete image and its mask matrix; D(x) is a binary classifier deciding whether an image is generated or real. We borrow and customize the Spectral-Normalized Markovian GAN (SN-PatchGAN) architecture following Yu et al. (2019). The network consists of two parts: a coarse-to-fine generator with an attention module and gated convolutions, and a spectral-normalized Markovian (patch) discriminator. Our I3AD is expected to handle more finely structured masks than the usual free-form masks. Therefore, to better handle such irregular masks, we apply the self-attention module (Zhang et al., 2018) instead of the contextual attention module originally designed for large rectangular masks (Yu et al., 2018; 2019). To stabilize GAN training, we adopt spectral normalization (Miyato et al., 2018) for the discriminator's layers. As an approximation of the min-max objective (Miyato et al., 2018), we derive loss functions for the generator L_G and discriminator L_D:

L_G = − E_{x∼P_data(x), M∼P(M)} [ D_sn( G( x̂ = M ⊙ x, M ) ) ]
L_D = E_{x∼P_data(x)} [ ReLU( 1 − D_sn(x) ) ] + E_{x∼P_data(x), M∼P(M)} [ ReLU( 1 + D_sn( G( x̂ = M ⊙ x, M ) ) ) ],

where D_sn(x) denotes the spectral-normalized discriminator and ReLU is the rectified linear unit activation, ReLU(x) = max(0, x). For the generator network, we also use a spatially discounted l1 reconstruction loss (Yu et al., 2018). At the test step, we fix all trainable parameters of the I3AD generator. The I3AD generator receives test images, which may be normal or anomalous, together with adaptive mask matrices. Mask matrices are constructed from the pixel-wise reconstruction errors between the original images and the images generated at the previous iteration step. Since the mask matrices are dynamically updated and shrunk during the iterations, the I3AD generator inpaints the masked regions intensively, leveraging the gradually increasing information from the surrounding unmasked regions. 2.3 INPAINTING MASK OF STRUCTURAL SIMILARITY (SSIM MASK). In anomaly segmentation tasks, the structural similarity (SSIM) index (Wang et al., 2004) sharply measures small anomalous changes (Bergmann et al., 2018). Details of the SSIM calculation are given in Appendix A. We propose the structural similarity mask (SSIM-Mask) to mask anomalous pixels during the test iterations. The SSIM-Mask M_i is a binary mask obtained by thresholding the pixel-wise SSIM index between the input image x_0 and the reconstructed image x̃_i at the i-th iteration step:

M_i = 1 if a_i(x) ≥ u, and 0 otherwise, with a_i(x) = SSIM(x_0, x̃_i) and x̃_i = G(x̂_{i−1}, M_{i−1}),

where u denotes the threshold level for the binary classification. After N iterations, we use the N-th SSIM anomaly score a_N(x) for AD evaluation. 2.4 MASK INITIALIZATION. We have no mask information at the first iteration step during testing, so we initialize with four checkerboard matrices. Figure 6 shows examples of initialized masks. The generator encodes pixels of the test image in the white regions and decodes pixels in the black boxes. These black regions are mutually exclusive between the four masks and cover the target image collectively. Therefore, we combine the four generated images into a single whole reconstruction. 2.5 ITERATION STEPS AND STOP CRITERIA. We expect no masked region for normal images and some local masked regions for anomalous images.
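The SSIM-Mask thresholding and the checkerboard initialization above can be sketched as follows. The per-pixel SSIM map is assumed to be already computed (e.g. as in Appendix A), and the block size of the checkerboard is an illustrative choice, not taken from the paper.

```python
import numpy as np

def ssim_mask(ssim_map, u):
    """M_i from the definition above: 1 where a_i(x) >= u (treated as normal
    and copied through), 0 where the pixel is suspected anomalous and will be
    inpainted at the next iteration."""
    return (ssim_map >= u).astype(float)

def checkerboard_masks(h, w, block=2):
    """Four Boolean masks whose masked (0, 'black') regions are mutually
    exclusive and jointly cover the image, as in the initialization above."""
    r = np.arange(h)[:, None] // block
    c = np.arange(w)[None, :] // block
    phase = (r % 2) * 2 + (c % 2)              # values 0..3, tiling the image
    return [np.where(phase == k, 0.0, 1.0) for k in range(4)]

masks = checkerboard_masks(8, 8)
covered = sum(1.0 - m for m in masks)          # each pixel masked exactly once
```

The coverage property (`covered` equal to 1 everywhere) is what allows the four generated images to be combined into a single whole reconstruction.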
Therefore, applying iterative inpainting to some samples of the training dataset can estimate a number of iteration steps sufficient to remove masks on normal pixels. Since I3AD decodes only masked regions, the masked region of M_{i+1} will be almost entirely a subset of that of M_i. Therefore, we can set an early-stopping criterion on whether the difference between M_i and M_{i+1} is small relative to the masked region of M_i. | This paper presents an inpainting-based method for anomaly localization in images. At training time, a conditional GAN-based generative modeling approach is adopted. At test time, a mask matrix is adaptively estimated by thresholding the structural similarity index measure (SSIM) between the original and reconstructed images. The idea is very intuitive and experiments demonstrate improved performance (especially on textures) over two recent baseline methods. | SP:3f2ca182ccafb5084013ee07613aaaa3bbbee930
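Putting the pieces of Sections 2.2 through 2.5 together, the test-time procedure might look like the sketch below. This is not the paper's implementation: a trivial mean-filling stub replaces the trained SN-PatchGAN generator, an absolute-difference similarity stands in for the per-pixel SSIM, and the threshold, tolerance, and image sizes are illustrative.

```python
import numpy as np

def mean_fill_generator(x_hat, M):
    """Stub for the trained inpainting generator: fill masked pixels (M == 0)
    with the mean of the unmasked pixels."""
    out = x_hat.copy()
    if (M == 0).any() and (M == 1).any():
        out[M == 0] = x_hat[M == 1].mean()
    return out

def similarity(x0, x_tilde):
    """Stand-in for the per-pixel SSIM index: 1 = identical, 0 = different."""
    return 1.0 - np.abs(x0 - x_tilde)

def init_reconstruction(x0, G):
    """Mask initialization: four mutually exclusive checkerboard masks whose
    inpainted regions are combined into one full reconstruction."""
    h, w = x0.shape
    r, c = np.arange(h)[:, None], np.arange(w)[None, :]
    phase = (r % 2) * 2 + (c % 2)
    x_tilde = np.zeros_like(x0)
    for k in range(4):
        M = np.where(phase == k, 0.0, 1.0)
        x_tilde[M == 0] = G(M * x0, M)[M == 0]
    return x_tilde

def iterative_inpainting(x0, G, sim, u=0.6, max_steps=10, tol=0.05):
    x_tilde = init_reconstruction(x0, G)
    M = (sim(x0, x_tilde) >= u).astype(float)       # first SSIM-style mask
    for _ in range(max_steps):
        x_tilde = G(M * x0, M)                      # inpaint masked pixels only
        M_next = (sim(x0, x_tilde) >= u).astype(float)
        prev, new = (M == 0).sum(), (M_next == 0).sum()
        M = M_next
        if prev == 0 or abs(prev - new) <= tol * prev:
            break                                   # masked region stopped shrinking
    return sim(x0, G(M * x0, M)), M

x0 = np.full((8, 8), 0.5)
x0[3:5, 3:5] = 1.0                                  # a small local "defect"
score, M = iterative_inpainting(x0, mean_fill_generator, similarity)
```

Even with the stub generator, the defect pixels remain masked and receive a clearly lower similarity score than the background, which is the behavior the iteration is designed to produce.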
Robust Reinforcement Learning using Adversarial Populations | 1 INTRODUCTION. Developing controllers that work effectively across a wide range of potential deployment environments is one of the core challenges in engineering. The complexity of the physical world means that the models used to design controllers are often inaccurate. Optimization-based control design approaches, such as reinforcement learning (RL), have no notion of model inaccuracy and can lead to controllers that fail catastrophically under mismatch. In this work, we aim to demonstrate an effective method for training reinforcement learning policies that are robust to model inaccuracy by designing controllers that are effective in the presence of worst-case adversarial noise in the dynamics. An easily automated approach to inducing robustness is to formulate the problem as a zero-sum game and learn an adversary that perturbs the transition dynamics (Tessler et al., 2019; Kamalaruban et al., 2020; Pinto et al., 2017). If a global Nash equilibrium of this problem is found, then that equilibrium provides a lower bound on the performance of the policy under some bounded set of perturbations. Besides the benefit of removing user design once the perturbation mechanism is specified, this approach is maximally conservative, which is useful for safety-critical applications. However, the literature on learning an adversary predominantly uses a single, stochastic adversary. This raises a puzzling question: the zero-sum game does not necessarily have any pure Nash equilibria (see Appendix C in Tessler et al. (2019)), yet the existing robust RL literature mostly attempts to solve for pure Nash equilibria. That is, the most general form of the minimax problem searches over distributions of adversary and agent policies; however, this problem is approximated in the literature by a search for a single agent-adversary pair.
We contend that this reduction to a single-adversary approach can sometimes fail to improve robustness under standard parametrizations of the adversary policy. The following example provides some intuition for why using a single adversary can decrease robustness. Consider a robot trying to learn to walk eastwards while an adversary outputs a force representing wind coming from the north or the south. For a fixed, deterministic adversary, the agent knows that the wind will come from either the south or the north and can simply apply a counteracting force at each state. Once the adversary is removed, the robot will still apply the compensatory forces and possibly become unstable. Stochastic Gaussian policies (ubiquitous in continuous control) offer little improvement: they cannot represent multi-modal perturbations. Under these standard policy parametrizations, we cannot use an adversary to endow the agent with a prior that a strong wind could persistently blow either north or south. This leaves the agent exploitable to this class of perturbations. The use of a single adversary in the robustness literature is in contrast to the multi-player game literature, where large sets of adversaries are used to ensure that an agent cannot easily be exploited (Vinyals et al., 2019; Czarnecki et al., 2020; Brown & Sandholm, 2019). Drawing inspiration from this literature, we introduce RAP (Robustness via Adversary Populations): a randomly initialized population of adversaries that we sample from at each rollout and train alongside the agent. Returning to our example of a robot perturbed by wind, if the robot learns to cancel the north wind effectively, that opens a niche for an adversary to exploit by applying forces in another direction. With a population, we can endow the robot with the prior that a strong wind could come from either direction and that it must walk carefully to avoid being toppled over.
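The core of RAP, sampling one adversary from the population at each rollout, might be sketched as follows. Everything here is a stub for illustration: a hypothetical 1-D environment with the additive-action perturbation described later in this paper, hand-coded "north"/"south" adversaries instead of learned ones, and no training updates (the paper trains the agent and adversaries with RL).

```python
import random

class AdditiveAdversaryEnv:
    """Stub 1-D environment where the adversary's action is added to the
    agent's action, scaled by a strength hyperparameter alpha."""
    def __init__(self, alpha=0.5):
        self.alpha, self.s = alpha, 0.0

    def reset(self):
        self.s = 0.0
        return self.s

    def step(self, a_agent, a_adv):
        self.s += a_agent + self.alpha * a_adv
        return self.s, -abs(self.s)            # reward: stay near the origin

def rollout(env, agent, adversary, horizon=5):
    s, ret = env.reset(), 0.0
    for _ in range(horizon):
        s, r = env.step(agent(s), adversary(s))
        ret += r
    return ret

# Population of fixed adversaries pushing in different directions
# ("north" / "south" / no wind), mimicking multi-modal perturbations.
population = [lambda s: +1.0, lambda s: -1.0, lambda s: 0.0]
agent = lambda s: -0.5 * s                     # stub stabilizing agent

random.seed(0)
env = AdditiveAdversaryEnv()
returns = [rollout(env, agent, random.choice(population)) for _ in range(6)]
```

Because a different adversary is drawn per rollout, the agent cannot specialize against any single perturbation direction, which is the intuition behind the population.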
Our contributions are as follows:
• Using a set of continuous robotics control tasks, we provide evidence that a single adversary does not have a consistent positive impact on the robustness of an RL policy, while the use of an adversary population provides improved robustness across all considered examples.
• We investigate the source of the robustness and show that the single-adversary policy is exploitable by new adversaries, whereas policies trained with RAP are robust to new adversaries.
• We demonstrate that adversary populations provide robustness comparable to domain randomization while avoiding potential failure modes of domain randomization.
2 RELATED WORK. This work builds upon robust control (Zhou & Doyle, 1998), a branch of control theory focused on finding optimal controllers under worst-case perturbations of the system dynamics. The Robust Markov Decision Process (R-MDP) formulation extends this worst-case model uncertainty to uncertainty sets on the transition dynamics of an MDP and demonstrates that computationally tractable solutions exist for small, tabular MDPs (Nilim & El Ghaoui, 2005; Lim et al., 2013). For larger or continuous MDPs, one successful approach has been to use function approximation to compute approximate solutions to the R-MDP problem (Tamar et al., 2014). One prominent variant of the R-MDP literature interprets the perturbations as an adversary and attempts to learn the distribution of the perturbation under a minimax objective. Two variants of this idea that tie in closely to our work are Robust Adversarial Reinforcement Learning (RARL) (Pinto et al., 2017) and Noisy Robust Markov Decision Processes (NR-MDP) (Tessler et al., 2019), which differ in how they parametrize the adversaries: RARL picks out specific robot joints that the adversary acts on, while NR-MDP adds the adversary action to the agent action.
Both of these works attempt to find an equilibrium of the minimax objective using a single adversary; in contrast, our work uses a large set of adversaries and shows improved robustness relative to a single adversary. A strong alternative to the minimax objective, domain randomization, asks a designer to explicitly define a distribution over environments that the agent should be robust to. For example, Peng et al. (2018) vary simulator parameters to train a robot to robustly push a puck to a target location in the real world; Antonova et al. (2017) add noise to friction and actions to transfer an object-pivoting policy directly from simulation to a Baxter robot. Additionally, domain randomization has been successfully used to build accurate object detectors solely from simulated data (Tobin et al., 2017) and to zero-shot transfer a quadcopter flight policy from simulation (Sadeghi & Levine, 2016). The use of population-based training is a standard technique in multi-agent settings. AlphaStar, the grandmaster-level StarCraft bot, uses a population of "exploiter" agents that fine-tune against the bot to prevent it from developing exploitable strategies (Vinyals et al., 2019). Czarnecki et al. (2020) establish a set of sufficient geometric conditions on games under which the use of multiple adversaries will ensure gradual improvement in the strength of the agent policy; they empirically demonstrate that learning in games can often fail to converge without populations. Finally, Active Domain Randomization (Mehta et al., 2019) is very close to our approach: they use a population of adversaries to select domain randomization parameters, whereas we use a population of adversaries to directly perturb the agent actions. However, they explicitly induce diversity using a repulsive term and use a discriminator to generate the reward. 3 BACKGROUND.
In this work we use the framework of a multi-agent, finite-horizon, discounted Markov Decision Process (MDP) (Puterman, 1990) defined by a tuple ⟨A_agent × A_adversary, S, T, r, γ⟩. Here A_agent is the set of actions for the agent, A_adversary is the set of actions for the adversary, S is a set of states, T : A_agent × A_adversary × S → Δ(S) is a transition function, r : A_agent × A_adversary × S → R is a reward function, and γ is a discount factor. S is shared between the adversaries, as they share a state space with the agent. The goal for a given MDP is to find a policy π_θ, parametrized by θ, that maximizes the expected cumulative discounted reward

J_θ = E[ ∑_{t=0}^{T} γ^t r(s_t, a_t) | π_θ ].

The conditional in this expression is shorthand to indicate that the actions in the MDP are sampled via a_t ∼ π_θ(s_t, a_{t−1}). We denote the agent policy parametrized by weights θ as π_θ and the policy of adversary i as π̄_{φ_i}. Actions sampled from the adversary policy π̄_{φ_i} are written ā^i_t. We use ξ to denote the parametrization of the system dynamics (e.g. different values of friction, mass, wind, etc.) and write the system dynamics for a given state and action as s_{t+1} ∼ f_ξ(s_t, a_t). 3.1 BASELINES. Here we outline prior work and the approaches that will be compared with RAP. Our baselines consist of a single adversary and domain randomization. 3.1.1 SINGLE MINIMAX ADVERSARY. Our adversary formulation uses the Noisy Action Robust MDP (Tessler et al., 2019), in which the adversary adds its actions onto the agent actions. The objective is

max_θ E[ ∑_{t=0}^{T} γ^t r(s_t, a_t + α ā_t) | π_θ, π̄_φ ],  min_φ E[ ∑_{t=0}^{T} γ^t r(s_t, a_t + α ā_t) | π_θ, π̄_φ ],  (1)

where α is a hyperparameter controlling the adversary strength. This is a game in which the adversary and agent play simultaneously. We note an important restriction inherent to this adversarial model.
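The inner quantity of Eq. (1) is the usual discounted return evaluated under the perturbed action a_t + α ā_t. As a sketch (the reward function, action sequences, and discount below are hypothetical, used only to show the computation):

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """J = sum_t gamma^t r_t, the discounted return inside Eq. (1)."""
    return sum(g * r for g, r in zip(gamma ** np.arange(len(rewards)), rewards))

# Stub rollout under the noisy-action model: the reward is evaluated at the
# agent action perturbed by the scaled adversary action.
alpha = 0.25
agent_actions = np.array([1.0, 1.0, 1.0])
adv_actions = np.array([-1.0, 1.0, -1.0])
perturbed = agent_actions + alpha * adv_actions
rewards = -np.abs(perturbed - 1.0)     # hypothetical reward: act close to 1
J = discounted_return(rewards, gamma=0.9)
```

The agent maximizes this quantity over θ while the adversary minimizes it over φ, which is the simultaneous game stated in Eq. (1).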
Since the adversary is only able to attack the agent through the actions, there is a restricted class of dynamical systems that it can represent; this set may not necessarily align with the set of dynamical systems in which the agent will be tested. This restriction is caused by the choice of adversarial perturbation and could be alleviated by using different adversarial parametrizations, e.g. perturbing the transition function directly. 3.1.2 DYNAMICS RANDOMIZATION. Domain randomization is the setting in which the user specifies a set of environments to which the agent should be robust. This allows the user to directly encode knowledge about the likely deviations between training and testing domains. For example, the user may believe that friction is hard to measure precisely and want to ensure that the agent is robust to variations in friction; they then specify that the agent will be trained with a wide range of possible friction values. We use ξ to denote a vector that parametrizes the set of training environments (e.g. friction, masses, system dynamics, etc.). We denote the domain from which ξ is drawn as Ξ and use P(Ξ) to denote a probability distribution over ξ. The domain randomization objective is

max_θ E_{ξ∼P(Ξ)} [ E_{s_{t+1}∼f_ξ(s_t, a_t)} [ ∑_{t=0}^{T} γ^t r(s_t, a_t) | π_θ ] ],  with s_{t+1} ∼ f_ξ(s_t, a_t) and a_t ∼ π_θ(s_t).  (2)

Here the goal is to find an agent that performs well on average across the distribution of training environments. Most commonly, and in this work, the parameters ξ are sampled uniformly over Ξ. | This paper proposes to improve robustness in reinforcement learning via a population of diverse adversaries, whereas previous works mainly focus on using a single adversary, which can leave the trained policy highly exploitable by that adversary. Specifically, at each iteration it randomly selects an adversary from the population for rollouts, and it is trained by PPO.
Experiments are conducted on 3 MuJoCo environments in comparison with vanilla PPO and domain randomization. | SP:bd8c89f5faf1695ca9f25e7e112cdb795db83864
Robust Reinforcement Learning using Adversarial Populations | (paper text identical to the previous entry) | The authors present a scheme that can be used to train agents to be robust against a population of adversarial policies, in which adversaries can perturb actions via an additive perturbation.
Motivated by the observation that agents trained against a single policy may overfit to that policy and hence will lack robustness to new/unseen policies, the authors seek to show that their method generalizes well to unseen policies at test time. Their experiments consider several simulated environments, in which they show generally good performance against several baselines. | SP:bd8c89f5faf1695ca9f25e7e112cdb795db83864 |
Robust Reinforcement Learning using Adversarial Populations | 1 INTRODUCTION . Developing controllers that work effectively across a wide range of potential deployment environments is one of the core challenges in engineering . The complexity of the physical world means that the models used to design controllers are often inaccurate . Optimization based control design approaches , such as reinforcement learning ( RL ) , have no notion of model inaccuracy and can lead to controllers that fail catastrophically under mismatch . In this work , we aim to demonstrate an effective method for training reinforcement learning policies that are robust to model inaccuracy by designing controllers that are effective in the presence of worst-case adversarial noise in the dynamics . An easily automated approach to inducing robustness is to formulate the problem as a zero-sum game and learn an adversary that perturbs the transition dynamics ( Tessler et al. , 2019 ; Kamalaruban et al. , 2020 ; Pinto et al. , 2017 ) . If a global Nash equilibrium of this problem is found , then that equilibrium provides a lower bound on the performance of the policy under some bounded set of perturbations . Besides the benefit of removing user design once the perturbation mechanism is specified , this approach is maximally conservative , which is useful for safety critical applications . However , the literature on learning an adversary predominantly uses a single , stochastic adversary . This raises a puzzling question : the zero-sum game does not necessarily have any pure Nash equilibria ( see Appendix C in Tessler et al . ( 2019 ) ) but the existing robust RL literature mostly appears to attempt to solve for pure Nash equilibria . That is , the most general form of the minimax problem searches over distributions of adversary and agent policies , however , this problem is approximated in the literature by a search for a single agent-adversary pair . 
We contend that this reduction to a single adversary approach can sometimes fail to result in improved robustness under standard parametrizations of the adversary policy . The following example provides some intuition for why using a single adversary can decrease robustness . Consider a robot trying to learn to walk east-wards while an adversary outputs a force representing wind coming from the north or the south . For a fixed , deterministic adversary the agent knows that the wind will come from either south or north and can simply apply a counteracting force at each state . Once the adversary is removed , the robot will still apply the compensatory forces and possibly become unstable . Stochastic Gaussian policies ( ubiquitous in continuous control ) offer little improvement : they can not represent multi-modal perturbations . Under these standard policy parametrizations , we can not use an adversary to endow the agent with a prior that a strong wind could persistently blow either north or south . This leaves the agent exploitable to this class of perturbations . The use of a single adversary in the robustness literature is in contrast to the multi-player game literature . In multi-player games , large sets of adversaries are used to ensure that an agent can not easily be exploited ( Vinyals et al. , 2019 ; Czarnecki et al. , 2020 ; Brown & Sandholm , 2019 ) . Drawing inspiration from this literature , we introduce RAP ( Robustness via Adversary Populations ) : a randomly initialized population of adversaries that we sample from at each rollout and train alongside the agent . Returning to our example of a robot perturbed by wind , if the robot learns to cancel the north wind effectively , then that opens a niche for an adversary to exploit by applying forces in another direction . With a population , we can endow the robot with the prior that a strong wind could come from either direction and that it must walk carefully to avoid being toppled over . 
Our contributions are as follows:
• Using a set of continuous robotics control tasks, we provide evidence that a single adversary does not have a consistent positive impact on the robustness of an RL policy, while the use of an adversary population provides improved robustness across all considered examples.
• We investigate the source of the robustness and show that the single-adversary policy is exploitable by new adversaries, whereas policies trained with RAP are robust to new adversaries.
• We demonstrate that adversary populations provide comparable robustness to domain randomization while avoiding potential failure modes of domain randomization.

2 RELATED WORK. This work builds upon robust control (Zhou & Doyle, 1998), a branch of control theory focused on finding optimal controllers under worst-case perturbations of the system dynamics. The Robust Markov Decision Process (R-MDP) formulation extends this worst-case model uncertainty to uncertainty sets on the transition dynamics of an MDP and demonstrates that computationally tractable solutions exist for small, tabular MDPs (Nilim & El Ghaoui, 2005; Lim et al., 2013). For larger or continuous MDPs, one successful approach has been to use function approximation to compute approximate solutions to the R-MDP problem (Tamar et al., 2014). One prominent variant of the R-MDP literature is to interpret the perturbations as an adversary and attempt to learn the distribution of the perturbation under a minimax objective. Two variants of this idea that tie in closely to our work are Robust Adversarial Reinforcement Learning (RARL) (Pinto et al., 2017) and Noisy Robust Markov Decision Processes (NR-MDP) (Tessler et al., 2019), which differ in how they parametrize the adversaries: RARL picks out specific robot joints that the adversary acts on, while NR-MDP adds the adversary action to the agent action.
Both of these works attempt to find an equilibrium of the minimax objective using a single adversary ; in contrast our work uses a large set of adversaries and shows improved robustness relative to a single adversary . A strong alternative to the minimax objective , domain randomization , asks a designer to explicitly define a distribution over environments that the agent should be robust to . For example , ( Peng et al. , 2018 ) varies simulator parameters to train a robot to robustly push a puck to a target location in the real world ; ( Antonova et al. , 2017 ) adds noise to friction and actions to transfer an object pivoting policy directly from simulation to a Baxter robot . Additionally , domain randomization has been successfully used to build accurate object detectors solely from simulated data ( Tobin et al. , 2017 ) and to zero-shot transfer a quadcopter flight policy from simulation ( Sadeghi & Levine , 2016 ) . The use of population based training is a standard technique in multi-agent settings . Alphastar , the grandmaster-level Starcraft bot , uses a population of `` exploiter '' agents that fine-tune against the bot to prevent it from developing exploitable strategies ( Vinyals et al. , 2019 ) . ( Czarnecki et al. , 2020 ) establishes a set of sufficient geometric conditions on games under which the use of multiple adversaries will ensure gradual improvement in the strength of the agent policy . They empirically demonstrate that learning in games can often fail to converge without populations . Finally , Active Domain Randomization ( Mehta et al. , 2019 ) is a very close approach to ours , as they use a population of adversaries to select domain randomization parameters whereas we use a population of adversaries to directly perturb the agent actions . However , they explicitly induce diversity using a repulsive term and use a discriminator to generate the reward . 3 BACKGROUND . 
In this work we use the framework of a multi-agent, finite-horizon, discounted Markov Decision Process (MDP) (Puterman, 1990) defined by a tuple ⟨A_agent × A_adversary, S, T, r, γ⟩. Here A_agent is the set of actions for the agent, A_adversary is the set of actions for the adversary, S is a set of states, T : A_agent × A_adversary × S → Δ(S) is a transition function, r : A_agent × A_adversary × S → R is a reward function, and γ is a discount factor. S is shared between the adversaries as they share a state space with the agent. The goal for a given MDP is to find a policy π_θ parametrized by θ that maximizes the expected cumulative discounted reward

J_θ = E[ Σ_{t=0}^{T} γ^t r(s_t, a_t) | π_θ ].

The conditional in this expression is a shorthand to indicate that the actions in the MDP are sampled via a_t ∼ π_θ(s_t, a_{t−1}). We denote the agent policy parametrized by weights θ as π_θ and the policy of adversary i as π̄_{φ_i}. Actions sampled from the adversary policy π̄_{φ_i} will be written as ā^i_t. We use ξ to denote the parametrization of the system dynamics (e.g., different values of friction, mass, wind, etc.) and the system dynamics for a given state and action as s_{t+1} ∼ f_ξ(s_t, a_t).

3.1 BASELINES. Here we outline prior work and the approaches that will be compared with RAP. Our baselines consist of a single adversary and domain randomization.

3.1.1 SINGLE MINIMAX ADVERSARY. Our adversary formulation uses the Noisy Action Robust MDP (Tessler et al., 2019) in which the adversary adds its actions onto the agent actions. The objective is

max_θ E[ Σ_{t=0}^{T} γ^t r(s_t, a_t + α ā_t) | π_θ, π̄_φ ],  min_φ E[ Σ_{t=0}^{T} γ^t r(s_t, a_t + α ā_t) | π_θ, π̄_φ ]   (1)

where α is a hyperparameter controlling the adversary strength. This is a game in which the adversary and agent play simultaneously. We note an important restriction inherent to this adversarial model.
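As a toy illustration of the NR-MDP action perturbation in objective (1) above — the environment executes the agent action plus an α-scaled adversary action — here is a minimal sketch under assumed one-dimensional dynamics; `perturbed_step` and the toy `env_step` are hypothetical, not from the paper's code:

```python
import numpy as np

def perturbed_step(env_step, state, agent_action, adversary_action, alpha=0.1):
    """Noisy Action Robust MDP: the environment executes the agent action
    plus an alpha-scaled adversary action, i.e. r(s_t, a_t + alpha * a_bar_t)."""
    combined = agent_action + alpha * adversary_action
    return env_step(state, combined)

# toy dynamics: reward is highest when the executed action is zero
env_step = lambda s, a: (s + a, -float(np.abs(a)))
next_state, reward = perturbed_step(env_step, 0.0, 0.5, -5.0, alpha=0.1)
# executed action = 0.5 + 0.1 * (-5.0) = 0.0
```

The agent ascends and the adversary descends the same return, so training alternates gradient steps on the two policies over such perturbed rollouts.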
Since the adversary is only able to attack the agent through the actions, there is a restricted class of dynamical systems that it can represent; this set of dynamical systems may not necessarily align with the set of dynamical systems that the agent may be tested in. This is a restriction caused by the choice of adversarial perturbation and could be alleviated by using different adversarial parametrizations, e.g., perturbing the transition function directly.

3.1.2 DYNAMICS RANDOMIZATION. Domain randomization is the setting in which the user specifies a set of environments which the agent should be robust to. This allows the user to directly encode knowledge about the likely deviations between training and testing domains. For example, the user may believe that friction is hard to measure precisely and wants to ensure that their agent is robust to variations in friction; they then specify that the agent will be trained with a wide range of possible friction values. We use ξ to denote some vector that parametrizes the set of training environments (e.g., friction, masses, system dynamics, etc.). We denote the domain over which ξ is drawn as Ξ and use P(Ξ) to denote some probability distribution over ξ. The domain randomization objective is

max_θ E_{ξ∼P(Ξ)} [ E_{s_{t+1}∼f_ξ(s_t, a_t)} [ Σ_{t=0}^{T} γ^t r(s_t, a_t) | π_θ ] ],  s_{t+1} ∼ f_ξ(s_t, a_t),  a_t ∼ π_θ(s_t)   (2)

Here the goal is to find an agent that performs well on average across the distribution of training environments. Most commonly, and in this work, the parameters ξ are sampled uniformly over Ξ. | This paper proposes an algorithm to improve the robustness of reinforcement learning. The algorithm, RAP, combines ideas from domain randomization and adversarial training. Specifically, during learning, it trains an ensemble of adversaries to attack the learner, with the hope that the learner can be robust to various situations.
The experimental results show the proposed algorithm indeed outperforms the respective baselines (single-adversary training and domain randomization) in its ability to generalize to other test domains. | SP:bd8c89f5faf1695ca9f25e7e112cdb795db83864 |
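The uniform sampling of ξ over Ξ in the dynamics-randomization objective (2) above amounts to redrawing the simulator parameters before each training episode. A minimal sketch, assuming a simple dictionary of parameter ranges (`sample_env_params` is a hypothetical helper, not any particular simulator's API):

```python
import random

def sample_env_params(ranges, rng):
    """Domain randomization: draw each physical parameter xi uniformly
    over its user-specified range before every training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

ranges = {"friction": (0.5, 1.5), "mass": (0.8, 1.2)}
rng = random.Random(0)
xi = sample_env_params(ranges, rng)  # e.g. resample once per episode reset
```

The burden here is on the user to choose `ranges` well: too narrow and the agent is brittle at test time, too wide and training may fail, which is exactly the failure mode the adversarial formulation avoids.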
Meta-Learning with Implicit Processes | 1 INTRODUCTION . Few-shot learning ( also known as meta-learning ) is a defining characteristic of human intelligence . Its goal is to leverage the experiences from previous tasks to form a model ( represented by metaparameters ) that can rapidly adapt to a new task using only a limited quantity of its training data . A number of meta-learning algorithms ( Finn et al. , 2018 ; Jerfel et al. , 2019 ; Ravi & Beatson , 2018 ; Rusu et al. , 2019 ; Yoon et al. , 2018 ) have recently adopted a probabilistic perspective to characterize the uncertainty in the predictions via a Bayesian treatment of the meta-parameters . Though they can consequently represent different tasks with different values of meta-parameters , it is not clear how or whether they are naturally amenable to ( a ) the characterization of a principled similarity/distance measure between tasks ( e.g. , for identifying outlier tasks that can potentially hurt training for the new task , procuring the most valuable/similar tasks/datasets to the new task , detecting task distribution shift , among others ) , ( b ) active task selection given a limited budget of expensive task queries ( see Appendix A.2.3 for an example of a real-world use case ) , and ( c ) synthetic task/dataset generation in privacy-aware applications without revealing the real data or for augmenting a limited number of previous tasks to improve generalization performance . To tackle the above challenge , this paper presents a novel implicit process-based meta-learning ( IPML ) algorithm ( Sec . 3 ) that , in contrast to existing works , explicitly represents each task as a continuous latent vector and models its probabilistic belief within the highly expressive IP1 framework ( Sec . 2 ) . Unfortunately , meta-training in IPML is computationally challenging due to its need to perform intractable exact IP inference in task adaptation.2 To resolve this , we propose a novel 1An IP ( Ma et al. 
, 2019 ) is a stochastic process such that every finite collection of random variables has an implicitly defined joint prior distribution . Some typical examples of IP include Gaussian processes , Bayesian neural networks , neural processes ( Garnelo et al. , 2018 ) , among others . An IP is formally defined in Def . 1 . 2The work of Ma et al . ( 2019 ) uses the well-studied Gaussian process as the variational family to perform variational inference in general applications of IP , which sacrifices the flexibility and expressivity of IP by constraining the distributions of the function outputs to be Gaussian . Such a straightforward application of IP to meta-learning has not yielded satisfactory results in our experiments ( see Appendix A.4 ) . expectation-maximization ( EM ) algorithm to perform meta-training ( Sec . 3.1 ) : In the E step , we perform task adaptation using the stochastic gradient Hamiltonian Monte Carlo sampling method ( Chen et al. , 2014 ) to draw samples from IP posterior beliefs for all meta-training tasks , which eliminates the need to learn a latent encoder ( Garnelo et al. , 2018 ) . In the M step , we optimize the meta-learning objective w.r.t . the meta-parameters using these samples . Our delicate design of the neural network architecture for meta-training in IPML allows competitive meta-learning performance to be achieved ( Sec . 3.2 ) . Our IPML algorithm offers the benefits of being amenable to ( a ) the characterization of a principled distance measure between tasks using maximum mean discrepancy ( Gretton et al. , 2012 ) , ( b ) active task selection without needing the assumption of known task contexts in ( Kaddour et al. , 2020 ) , and ( c ) synthetic task generation by modeling task-dependent input distributions ( Sec . 3.3 ) . 2 BACKGROUND AND NOTATIONS . For simplicity , the inputs ( outputs ) for all tasks are assumed to belong to the same input ( output ) space . 
Consider meta-learning on probabilistic regression tasks:³ each task is generated from a task distribution and associated with a dataset (X, y_X), where the set X and the vector y_X ≜ (y_x)ᵀ_{x∈X} denote, respectively, the input vectors and the corresponding noisy outputs

y_x ≜ f(x) + ε(x)   (1)

which are outputs of an unknown underlying function f corrupted by i.i.d. Gaussian noise ε(x) ∼ N(0, σ²) with variance σ². Let f be distributed by an implicit process (IP), as follows:

Definition 1 (Implicit process for meta-learning). Let the collection of random variables f(·) denote an IP parameterized by meta-parameters θ; that is, every finite collection {f(x)}_{x∈X} has a joint prior distribution p(f_X ≜ (f(x))ᵀ_{x∈X}) implicitly defined by the following generative model:

z ∼ p(z),  f(x) ≜ g_θ(x, z)   (2)

for all x ∈ X, where z is a latent task vector to be explained below and the generator g_θ can be an arbitrary model (e.g., a deep neural network) parameterized by meta-parameters θ.

Definition 1 defines valid stochastic processes if z is finite-dimensional (Ma et al., 2019). Though, in reality, a task may follow an unknown distribution, we assume the existence of an unknown function that maps each task to a latent task vector z satisfying the desired known distribution p(z), like in (Kaddour et al., 2020).⁴ Using p(y_X | f_X) = N(f_X, σ²I) from (1) and the IP prior belief p(f_X) from Def. 1, we can derive the marginal likelihood p(y_X) by marginalizing out f_X.

Remark 1. Two sources of uncertainty exist in p(y_X): aleatoric uncertainty in p(y_X | f_X) reflects the noise (i.e., modeled in (1)) inherent in the dataset, while epistemic uncertainty in the IP prior belief p(f_X) reflects the model uncertainty arising from the latent task prior belief p(z) in (2).⁵

Let the sets T and T* denote the meta-training and meta-testing tasks, respectively. Following the convention in (Finn et al.
, 2018; Gordon et al., 2019; Ravi & Beatson, 2018; Yoon et al., 2018), for each meta-training task t ∈ T, we consider a support-query (or train-test) split of its dataset (X_t, y_{X_t}) into the support set (or training dataset) (X_t^s, y_{X_t^s}) and query set (or test/evaluation dataset) (X_t^q, y_{X_t^q}), where X_t = X_t^s ∪ X_t^q and X_t^s ∩ X_t^q = ∅. Specifically, for an N-way K-shot classification problem, the support set has K examples per class and N classes in total. Meta-learning can be defined as an optimization problem (Finn et al., 2017; 2018) and its goal is to learn meta-parameters θ that maximize the following objective defined over all meta-training tasks:

J_meta ≜ log ∏_{t∈T} p(y_{X_t^q} | y_{X_t^s}) = ∑_{t∈T} log ∫ p(y_{X_t^q} | f_{X_t^q}) p(f_{X_t^q} | y_{X_t^s}) df_{X_t^q} .   (3)

Task adaptation p(f_{X_t^q} | y_{X_t^s}) is performed via IP inference after observing the support set:

p(f_{X_t^q} | y_{X_t^s}) = ∫_z p(f_{X_t^q} | z) p(z | y_{X_t^s}) dz .   (4)

³ We defer the discussion of meta-learning on probabilistic classification tasks using the robust-max likelihood (Hernández-Lobato et al., 2011) to Appendix A.1.
⁴ p(z) is often assumed to be a simple distribution like the multivariate Gaussian N(0, I) (Garnelo et al., 2018).
⁵ Our work here considers a point estimate of meta-parameters θ instead of a Bayesian treatment of θ (Finn et al., 2018; Yoon et al., 2018). This allows us to interpret the epistemic uncertainty in p(f_X) via p(z) directly.

The objective J_meta (3) is the "test" likelihood on the query set, which reflects the idea of "learning to learn" by assessing the effectiveness of "learning on the support set" through the query set. An alternative interpretation views p(f_{X_t^q} | y_{X_t^s}) as an "informative prior" after observing the support set. The objective J_meta (3) is also known as the Bayesian held-out likelihood (Gordon et al., 2019).
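The disjoint support-query split described above can be sketched as follows; `support_query_split` is a hypothetical helper assuming each example arrives as an `(input, label)` pair:

```python
import random

def support_query_split(dataset, k_shot, rng):
    """Split a task's examples into disjoint support and query sets:
    k_shot examples per class go to the support set, the rest to the
    query set, so their union is the full task dataset and their
    intersection is empty."""
    by_class = {}
    for x, y in dataset:
        by_class.setdefault(y, []).append((x, y))
    support, query = [], []
    for examples in by_class.values():
        rng.shuffle(examples)
        support += examples[:k_shot]
        query += examples[k_shot:]
    return support, query

# 2-way 1-shot toy task with three examples per class
data = [(i, i % 2) for i in range(6)]
support, query = support_query_split(data, k_shot=1, rng=random.Random(0))
```

The meta-objective is then evaluated on `query` after adapting on `support`, matching the "learning on the support set, assessed through the query set" reading of (3).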
In a meta-testing task, adaptation is also performed via IP inference after observing its support set and is evaluated on its query set. Similar to a GP or any stochastic process, the input vectors of the dataset are assumed to be known/fixed beforehand. We will relax this assumption by allowing them to be unknown when our IPML algorithm is exploited for synthetic task generation (Sec. 3.3).

3 IMPLICIT PROCESS-BASED META-LEARNING (IPML).
3.1 EXPECTATION MAXIMIZATION (EM) ALGORITHM FOR IPML.

Recall that task adaptation requires evaluating p(f_{X_t^q} | y_{X_t^s}) (4). From Def. 1, if the generator g_θ (2) can be an arbitrary model (e.g., a deep neural network), then p(f_{X_t^q} | y_{X_t^s}) and p(f_{X_t^q}) cannot be evaluated in closed form and have to be approximated by samples. Inspired by the Monte Carlo EM algorithm (Wei & Tanner, 1990), which utilizes posterior samples to obtain a maximum likelihood estimate of some hyperparameters, we propose an EM algorithm for IPML: the E step uses the stochastic gradient Hamiltonian Monte Carlo (SGHMC) sampling method to draw samples from p(f_{X_t^q} | y_{X_t^s}) (4), while the M step maximizes the meta-learning objective J_meta (3) w.r.t. meta-parameters θ.

Expectation (E) step. Note that since f_{X_t^q} = (g_θ(x, z))ᵀ_{x∈X_t^q} (2), no uncertainty exists in p(f_{X_t^q} | z) in (4). So, p(f_{X_t^q} | y_{X_t^s}) can be evaluated using the same generator g_θ (2) and the latent task posterior belief p(z | y_{X_t^s}), as follows:

Remark 2. Drawing samples from p(f_{X_t^q} | y_{X_t^s}) is thus equivalent to first drawing samples of z from p(z | y_{X_t^s}) and then passing them as inputs to the generator g_θ to obtain samples of f_{X_t^q}. Hence, given a task t, adaptation p(f_{X_t^q} | y_{X_t^s}) (4) essentially reduces to a task identification problem by performing IP inference to obtain the latent task posterior belief p(z | y_{X_t^s}).
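Remark 2's reduction — sample latent task vectors z, then push them through the generator — can be sketched with a hypothetical one-hidden-layer MLP standing in for g_θ. For illustration, the z sample below comes from the prior N(0, I); in IPML it would come from the SGHMC posterior p(z | y_{X_t^s}):

```python
import numpy as np

def generator(x, z, theta):
    """Hypothetical generator g_theta(x, z): a one-hidden-layer MLP on the
    concatenation [x, z]; any deterministic network would do."""
    W1, b1, W2, b2 = theta
    h = np.tanh(np.concatenate([x, z]) @ W1 + b1)
    return float(h @ W2 + b2)

def sample_ip_function(xs, dim_z, theta, rng):
    """Draw one function from the implicit process: sample a latent task
    vector z once, then evaluate f(x) = g_theta(x, z) at every input x."""
    z = rng.standard_normal(dim_z)  # stand-in for a posterior sample of z
    return np.array([generator(x, z, theta) for x in xs])

rng = np.random.default_rng(0)
dim_x, dim_z, hidden = 1, 2, 8
theta = (rng.standard_normal((dim_x + dim_z, hidden)),
         np.zeros(hidden),
         rng.standard_normal(hidden),
         0.0)
xs = [np.array([0.0]), np.array([0.5]), np.array([1.0])]
f_X = sample_ip_function(xs, dim_z, theta, rng)  # one sampled task function
```

Each fresh z yields a different function f, which is exactly how the epistemic uncertainty of Remark 1 manifests.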
This is a direct consequence of epistemic uncertainty arising from p(z | y_{X_t^s}) and p(z) (Remark 1). In general, p(z | y_{X_t^s}) also cannot be evaluated in closed form. Instead of using variational inference (VI) and approximating p(z | y_{X_t^s}) with a potentially restrictive variational distribution (Garnelo et al., 2018; Kaddour et al., 2020; Ma et al., 2019), we draw samples from p(z | y_{X_t^s}) using SGHMC (Chen et al., 2014). SGHMC introduces an auxiliary random vector r and samples from a joint distribution p(z, r | y_{X_t^s}) following the Hamiltonian dynamics (Brooks et al., 2011; Neal, 1993):

p(z, r | y_{X_t^s}) ∝ exp(−U(z) − 0.5 rᵀ M⁻¹ r)

where the negative log-probability U(z) ≜ −log p(z | y_{X_t^s}) resembles the potential energy and r resembles the momentum. SGHMC updates z and r as follows:

Δz = α M⁻¹ r,  Δr = −α ∇_z U(z) − α C M⁻¹ r + N(0, 2α(C − B))

where α, C, M, and B are the step size, friction term, mass matrix, and Fisher information matrix, respectively.⁶ Note that

∇_z U(z) = −∇_z log p(z | y_{X_t^s}) = −∇_z log p(z, y_{X_t^s}) = −∇_z [ log p(y_{X_t^s} | f_{X_t^s} = (g_θ(x, z))ᵀ_{x∈X_t^s}) + log p(z) ]

can be evaluated tractably.

Maximization (M) step. We optimize J_meta (3) w.r.t. θ using samples of z. The original objective

J_meta = ∑_{t∈T} log ( E_{p(z | y_{X_t^s})} [ p(y_{X_t^q} | f_{X_t^q} = (g_θ(x, z))ᵀ_{x∈X_t^q}) ] )

is not amenable to stochastic optimization with data minibatches, which is usually not an issue in a few-shot learning setting. When a huge number of data points and samples of z are considered, we can resort to optimizing the lower bound J_s-meta of J_meta by applying Jensen's inequality:

J_meta ≥ J_s-meta ≜ ∑_{t∈T} E_{p(f_{X_t^q} | y_{X_t^s})} [ log p(y_{X_t^q} | f_{X_t^q}) ] = ∑_{t∈T} E_{p(z | y_{X_t^s})} [ log p(y_{X_t^q} | f_{X_t^q}) ] .

⁶ The sampler hyperparameters α, C, M, and B are set according to the auto-tuning method of Springenberg et al.
(2016), which has been verified to work well in our experiments; more details are given in Appendix A.2.1. | This paper proposes an efficient meta-learning approach using implicit processes. Specifically, the authors represent each task as a continuous latent vector and use an expectation-maximization algorithm to perform meta-learning. The E step performs task adaptation using the stochastic gradient Hamiltonian Monte Carlo sampling method, while the M step optimizes the meta-learning objective using these samples. Their framework can measure a principled distance between tasks via maximum mean discrepancy (MMD) and generate synthetic tasks via task-dependent input distributions. Finally, the authors validate their proposed framework on several benchmark and real-world datasets. The novelty and originality of this paper are good, as it proposes new ideas and methods. In addition, the paper is well-organized and clearly written. We can quickly get to know what problem they are trying to solve, how they solve it, and what their results are. | SP:3a95eb3f0187add9fb6cc59398f744250daf1434 |
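The SGHMC updates in Sec. 3.1 above can be sketched on a toy target. Below, a minimal sampler (assuming M = I and B = 0 for simplicity) draws z from N(0, 1), whose potential is U(z) = z²/2 so that ∇_z U(z) = z; `sghmc_step` is an illustrative helper, not the authors' implementation:

```python
import numpy as np

def sghmc_step(z, r, grad_U, alpha, C, rng):
    """One SGHMC update with M = I and B = 0:
    delta z =  alpha * r
    delta r = -alpha * grad_U(z) - alpha * C * r + N(0, 2 * alpha * C)."""
    z = z + alpha * r
    noise = rng.standard_normal(z.shape) * np.sqrt(2.0 * alpha * C)
    r = r - alpha * grad_U(z) - alpha * C * r + noise
    return z, r

# target: z ~ N(0, 1), so U(z) = z^2 / 2 and grad_U(z) = z
grad_U = lambda z: z
rng = np.random.default_rng(0)
z, r = np.zeros(1), np.zeros(1)
samples = []
for i in range(20000):
    z, r = sghmc_step(z, r, grad_U, alpha=0.05, C=1.0, rng=rng)
    if i > 2000:  # discard burn-in
        samples.append(z[0])
mean, std = float(np.mean(samples)), float(np.std(samples))
```

In IPML the same update would be run per task with ∇_z U(z) = −∇_z [log p(y_{X_t^s} | f_{X_t^s}) + log p(z)], backpropagated through the generator g_θ.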
Meta-Learning with Implicit Processes | 1 INTRODUCTION . Few-shot learning ( also known as meta-learning ) is a defining characteristic of human intelligence . Its goal is to leverage the experiences from previous tasks to form a model ( represented by metaparameters ) that can rapidly adapt to a new task using only a limited quantity of its training data . A number of meta-learning algorithms ( Finn et al. , 2018 ; Jerfel et al. , 2019 ; Ravi & Beatson , 2018 ; Rusu et al. , 2019 ; Yoon et al. , 2018 ) have recently adopted a probabilistic perspective to characterize the uncertainty in the predictions via a Bayesian treatment of the meta-parameters . Though they can consequently represent different tasks with different values of meta-parameters , it is not clear how or whether they are naturally amenable to ( a ) the characterization of a principled similarity/distance measure between tasks ( e.g. , for identifying outlier tasks that can potentially hurt training for the new task , procuring the most valuable/similar tasks/datasets to the new task , detecting task distribution shift , among others ) , ( b ) active task selection given a limited budget of expensive task queries ( see Appendix A.2.3 for an example of a real-world use case ) , and ( c ) synthetic task/dataset generation in privacy-aware applications without revealing the real data or for augmenting a limited number of previous tasks to improve generalization performance . To tackle the above challenge , this paper presents a novel implicit process-based meta-learning ( IPML ) algorithm ( Sec . 3 ) that , in contrast to existing works , explicitly represents each task as a continuous latent vector and models its probabilistic belief within the highly expressive IP1 framework ( Sec . 2 ) . Unfortunately , meta-training in IPML is computationally challenging due to its need to perform intractable exact IP inference in task adaptation.2 To resolve this , we propose a novel 1An IP ( Ma et al. 
, 2019 ) is a stochastic process such that every finite collection of random variables has an implicitly defined joint prior distribution . Some typical examples of IP include Gaussian processes , Bayesian neural networks , neural processes ( Garnelo et al. , 2018 ) , among others . An IP is formally defined in Def . 1 . 2The work of Ma et al . ( 2019 ) uses the well-studied Gaussian process as the variational family to perform variational inference in general applications of IP , which sacrifices the flexibility and expressivity of IP by constraining the distributions of the function outputs to be Gaussian . Such a straightforward application of IP to meta-learning has not yielded satisfactory results in our experiments ( see Appendix A.4 ) . expectation-maximization ( EM ) algorithm to perform meta-training ( Sec . 3.1 ) : In the E step , we perform task adaptation using the stochastic gradient Hamiltonian Monte Carlo sampling method ( Chen et al. , 2014 ) to draw samples from IP posterior beliefs for all meta-training tasks , which eliminates the need to learn a latent encoder ( Garnelo et al. , 2018 ) . In the M step , we optimize the meta-learning objective w.r.t . the meta-parameters using these samples . Our delicate design of the neural network architecture for meta-training in IPML allows competitive meta-learning performance to be achieved ( Sec . 3.2 ) . Our IPML algorithm offers the benefits of being amenable to ( a ) the characterization of a principled distance measure between tasks using maximum mean discrepancy ( Gretton et al. , 2012 ) , ( b ) active task selection without needing the assumption of known task contexts in ( Kaddour et al. , 2020 ) , and ( c ) synthetic task generation by modeling task-dependent input distributions ( Sec . 3.3 ) . 2 BACKGROUND AND NOTATIONS . For simplicity , the inputs ( outputs ) for all tasks are assumed to belong to the same input ( output ) space . 
Consider meta-learning on probabilistic regression tasks:³ each task is generated from a task distribution and associated with a dataset (X, y_X), where the set X and the vector y_X ≜ (y_x)ᵀ_{x∈X} denote, respectively, the input vectors and the corresponding noisy outputs

y_x ≜ f(x) + ε(x)   (1)

which are outputs of an unknown underlying function f corrupted by i.i.d. Gaussian noise ε(x) ∼ N(0, σ²) with variance σ². Let f be distributed by an implicit process (IP), as follows:

Definition 1 (Implicit process for meta-learning). Let the collection of random variables f(·) denote an IP parameterized by meta-parameters θ; that is, every finite collection {f(x)}_{x∈X} has a joint prior distribution p(f_X ≜ (f(x))ᵀ_{x∈X}) implicitly defined by the following generative model:

z ∼ p(z),  f(x) ≜ g_θ(x, z)   (2)

for all x ∈ X, where z is a latent task vector to be explained below and the generator g_θ can be an arbitrary model (e.g., a deep neural network) parameterized by meta-parameters θ.

Definition 1 defines valid stochastic processes if z is finite-dimensional (Ma et al., 2019). Though, in reality, a task may follow an unknown distribution, we assume the existence of an unknown function that maps each task to a latent task vector z satisfying the desired known distribution p(z), like in (Kaddour et al., 2020).⁴ Using p(y_X | f_X) = N(f_X, σ²I) from (1) and the IP prior belief p(f_X) from Def. 1, we can derive the marginal likelihood p(y_X) by marginalizing out f_X.

Remark 1. Two sources of uncertainty exist in p(y_X): aleatoric uncertainty in p(y_X | f_X) reflects the noise (i.e., modeled in (1)) inherent in the dataset, while epistemic uncertainty in the IP prior belief p(f_X) reflects the model uncertainty arising from the latent task prior belief p(z) in (2).⁵

Let the sets T and T* denote the meta-training and meta-testing tasks, respectively. Following the convention in (Finn et al.
, 2018; Gordon et al., 2019; Ravi & Beatson, 2018; Yoon et al., 2018), for each meta-training task t ∈ T, we consider a support-query (or train-test) split of its dataset (X_t, y_{X_t}) into the support set (or training dataset) (X_t^s, y_{X_t^s}) and query set (or test/evaluation dataset) (X_t^q, y_{X_t^q}), where X_t = X_t^s ∪ X_t^q and X_t^s ∩ X_t^q = ∅. Specifically, for an N-way K-shot classification problem, the support set has K examples per class and N classes in total. Meta-learning can be defined as an optimization problem (Finn et al., 2017; 2018) and its goal is to learn meta-parameters θ that maximize the following objective defined over all meta-training tasks:

J_meta ≜ log ∏_{t∈T} p(y_{X_t^q} | y_{X_t^s}) = ∑_{t∈T} log ∫ p(y_{X_t^q} | f_{X_t^q}) p(f_{X_t^q} | y_{X_t^s}) df_{X_t^q} .   (3)

Task adaptation p(f_{X_t^q} | y_{X_t^s}) is performed via IP inference after observing the support set:

p(f_{X_t^q} | y_{X_t^s}) = ∫_z p(f_{X_t^q} | z) p(z | y_{X_t^s}) dz .   (4)

³ We defer the discussion of meta-learning on probabilistic classification tasks using the robust-max likelihood (Hernández-Lobato et al., 2011) to Appendix A.1.
⁴ p(z) is often assumed to be a simple distribution like the multivariate Gaussian N(0, I) (Garnelo et al., 2018).
⁵ Our work here considers a point estimate of meta-parameters θ instead of a Bayesian treatment of θ (Finn et al., 2018; Yoon et al., 2018). This allows us to interpret the epistemic uncertainty in p(f_X) via p(z) directly.

The objective J_meta (3) is the "test" likelihood on the query set, which reflects the idea of "learning to learn" by assessing the effectiveness of "learning on the support set" through the query set. An alternative interpretation views p(f_{X_t^q} | y_{X_t^s}) as an "informative prior" after observing the support set. The objective J_meta (3) is also known as the Bayesian held-out likelihood (Gordon et al., 2019).
In a meta-testing task, adaptation is also performed via IP inference after observing its support set and is evaluated on its query set. Similar to a GP or any stochastic process, the input vectors of the dataset are assumed to be known/fixed beforehand. We will relax this assumption by allowing them to be unknown when our IPML algorithm is exploited for synthetic task generation (Sec. 3.3).

3 IMPLICIT PROCESS-BASED META-LEARNING (IPML).
3.1 EXPECTATION MAXIMIZATION (EM) ALGORITHM FOR IPML.

Recall that task adaptation requires evaluating p(f_{X_t^q} | y_{X_t^s}) (4). From Def. 1, if the generator g_θ (2) can be an arbitrary model (e.g., a deep neural network), then p(f_{X_t^q} | y_{X_t^s}) and p(f_{X_t^q}) cannot be evaluated in closed form and have to be approximated by samples. Inspired by the Monte Carlo EM algorithm (Wei & Tanner, 1990), which utilizes posterior samples to obtain a maximum likelihood estimate of some hyperparameters, we propose an EM algorithm for IPML: the E step uses the stochastic gradient Hamiltonian Monte Carlo (SGHMC) sampling method to draw samples from p(f_{X_t^q} | y_{X_t^s}) (4), while the M step maximizes the meta-learning objective J_meta (3) w.r.t. meta-parameters θ.

Expectation (E) step. Note that since f_{X_t^q} = (g_θ(x, z))ᵀ_{x∈X_t^q} (2), no uncertainty exists in p(f_{X_t^q} | z) in (4). So, p(f_{X_t^q} | y_{X_t^s}) can be evaluated using the same generator g_θ (2) and the latent task posterior belief p(z | y_{X_t^s}), as follows:

Remark 2. Drawing samples from p(f_{X_t^q} | y_{X_t^s}) is thus equivalent to first drawing samples of z from p(z | y_{X_t^s}) and then passing them as inputs to the generator g_θ to obtain samples of f_{X_t^q}. Hence, given a task t, adaptation p(f_{X_t^q} | y_{X_t^s}) (4) essentially reduces to a task identification problem by performing IP inference to obtain the latent task posterior belief p(z | y_{X_t^s}).
This is a direct consequence of epistemic uncertainty arising from p(z | y_{X_t^s}) and p(z) (Remark 1). In general, p(z | y_{X_t^s}) also cannot be evaluated in closed form. Instead of using variational inference (VI) and approximating p(z | y_{X_t^s}) with a potentially restrictive variational distribution (Garnelo et al., 2018; Kaddour et al., 2020; Ma et al., 2019), we draw samples from p(z | y_{X_t^s}) using SGHMC (Chen et al., 2014). SGHMC introduces an auxiliary random vector r and samples from a joint distribution p(z, r | y_{X_t^s}) following the Hamiltonian dynamics (Brooks et al., 2011; Neal, 1993):

p(z, r | y_{X_t^s}) ∝ exp(−U(z) − 0.5 rᵀ M⁻¹ r)

where the negative log-probability U(z) ≜ −log p(z | y_{X_t^s}) resembles the potential energy and r resembles the momentum. SGHMC updates z and r as follows:

Δz = α M⁻¹ r,  Δr = −α ∇_z U(z) − α C M⁻¹ r + N(0, 2α(C − B))

where α, C, M, and B are the step size, friction term, mass matrix, and Fisher information matrix, respectively.⁶ Note that

∇_z U(z) = −∇_z log p(z | y_{X_t^s}) = −∇_z log p(z, y_{X_t^s}) = −∇_z [ log p(y_{X_t^s} | f_{X_t^s} = (g_θ(x, z))ᵀ_{x∈X_t^s}) + log p(z) ]

can be evaluated tractably.

Maximization (M) step. We optimize J_meta (3) w.r.t. θ using samples of z. The original objective

J_meta = ∑_{t∈T} log ( E_{p(z | y_{X_t^s})} [ p(y_{X_t^q} | f_{X_t^q} = (g_θ(x, z))ᵀ_{x∈X_t^q}) ] )

is not amenable to stochastic optimization with data minibatches, which is usually not an issue in a few-shot learning setting. When a huge number of data points and samples of z are considered, we can resort to optimizing the lower bound J_s-meta of J_meta by applying Jensen's inequality:

J_meta ≥ J_s-meta ≜ ∑_{t∈T} E_{p(f_{X_t^q} | y_{X_t^s})} [ log p(y_{X_t^q} | f_{X_t^q}) ] = ∑_{t∈T} E_{p(z | y_{X_t^s})} [ log p(y_{X_t^q} | f_{X_t^q}) ] .

⁶ The sampler hyperparameters α, C, M, and B are set according to the auto-tuning method of Springenberg et al.
( 2016 ) which has been verified to work well in our experiments ; more details are given in Appendix A.2.1 . | The paper proposes a meta-learning method based on implicit process (IP) framework in which each task is represented by a latent vector. The IP setup for meta-learning seems identical to that of Neural processes [1]. In that, the key challenge for adaptation to a task based on a context/support set $(X_c, Y_c)$ is inferring the distribution $p(z|X_c, Y_c)$ over latent vectors. While [1] use amortized (variational) inference with a variational Gaussian distribution to approximate the true $p(z|X_c, Y_c)$, the paper proposes to use stochastic gradient Hamiltonian Monte Carlo (SG-HMC) for sampling latent state vectors from $p(z|X_c, Y_c)$. | SP:3a95eb3f0187add9fb6cc59398f744250daf1434 |
Meta-Learning with Implicit Processes | 1 INTRODUCTION. Few-shot learning (also known as meta-learning) is a defining characteristic of human intelligence. Its goal is to leverage the experiences from previous tasks to form a model (represented by meta-parameters) that can rapidly adapt to a new task using only a limited quantity of its training data. A number of meta-learning algorithms (Finn et al., 2018; Jerfel et al., 2019; Ravi & Beatson, 2018; Rusu et al., 2019; Yoon et al., 2018) have recently adopted a probabilistic perspective to characterize the uncertainty in the predictions via a Bayesian treatment of the meta-parameters. Though they can consequently represent different tasks with different values of meta-parameters, it is not clear how or whether they are naturally amenable to (a) the characterization of a principled similarity/distance measure between tasks (e.g., for identifying outlier tasks that can potentially hurt training for the new task, procuring the most valuable/similar tasks/datasets for the new task, detecting task distribution shift, among others), (b) active task selection given a limited budget of expensive task queries (see Appendix A.2.3 for an example of a real-world use case), and (c) synthetic task/dataset generation in privacy-aware applications without revealing the real data, or for augmenting a limited number of previous tasks to improve generalization performance. To tackle the above challenge, this paper presents a novel implicit process-based meta-learning (IPML) algorithm (Sec. 3) that, in contrast to existing works, explicitly represents each task as a continuous latent vector and models its probabilistic belief within the highly expressive IP¹ framework (Sec. 2). Unfortunately, meta-training in IPML is computationally challenging due to its need to perform intractable exact IP inference in task adaptation.² To resolve this, we propose a novel ¹An IP (Ma et al.
, 2019) is a stochastic process such that every finite collection of random variables has an implicitly defined joint prior distribution. Some typical examples of IPs include Gaussian processes, Bayesian neural networks, and neural processes (Garnelo et al., 2018), among others. An IP is formally defined in Def. 1. ²The work of Ma et al. (2019) uses the well-studied Gaussian process as the variational family to perform variational inference in general applications of IP, which sacrifices the flexibility and expressivity of IP by constraining the distributions of the function outputs to be Gaussian. Such a straightforward application of IP to meta-learning has not yielded satisfactory results in our experiments (see Appendix A.4). expectation-maximization (EM) algorithm to perform meta-training (Sec. 3.1): In the E step, we perform task adaptation using the stochastic gradient Hamiltonian Monte Carlo sampling method (Chen et al., 2014) to draw samples from the IP posterior beliefs for all meta-training tasks, which eliminates the need to learn a latent encoder (Garnelo et al., 2018). In the M step, we optimize the meta-learning objective w.r.t. the meta-parameters using these samples. Our delicate design of the neural network architecture for meta-training in IPML allows competitive meta-learning performance to be achieved (Sec. 3.2). Our IPML algorithm offers the benefits of being amenable to (a) the characterization of a principled distance measure between tasks using maximum mean discrepancy (Gretton et al., 2012), (b) active task selection without needing the assumption of known task contexts in (Kaddour et al., 2020), and (c) synthetic task generation by modeling task-dependent input distributions (Sec. 3.3). 2 BACKGROUND AND NOTATIONS. For simplicity, the inputs (outputs) for all tasks are assumed to belong to the same input (output) space.
Consider meta-learning on probabilistic regression tasks:³ each task is generated from a task distribution and associated with a dataset (X, y_X), where the set X and the vector y_X ≜ (y_x)^⊤_{x∈X} denote, respectively, the input vectors and the corresponding noisy outputs y_x ≜ f(x) + ε(x) (1), which are outputs of an unknown underlying function f corrupted by i.i.d. Gaussian noise ε(x) ∼ N(0, σ²) with variance σ². Let f be distributed by an implicit process (IP), as follows: Definition 1 (Implicit process for meta-learning). Let the collection of random variables f(·) denote an IP parameterized by meta-parameters θ, that is, every finite collection {f(x)}_{x∈X} has a joint prior distribution p(f_X ≜ (f(x))^⊤_{x∈X}) implicitly defined by the following generative model: z ∼ p(z), f(x) ≜ g_θ(x, z) (2) for all x ∈ X, where z is a latent task vector to be explained below and generator g_θ can be an arbitrary model (e.g., a deep neural network) parameterized by meta-parameters θ. Definition 1 defines valid stochastic processes if z is finite-dimensional (Ma et al., 2019). Though, in reality, a task may follow an unknown distribution, we assume the existence of an unknown function that maps each task to a latent task vector z satisfying the desired known distribution p(z), as in (Kaddour et al., 2020).⁴ Using p(y_X | f_X) = N(f_X, σ²I) (1) and the IP prior belief p(f_X) from Def. 1, we can derive the marginal likelihood p(y_X) by marginalizing out f_X. Remark 1. Two sources of uncertainty exist in p(y_X): aleatoric uncertainty in p(y_X | f_X) reflects the noise (i.e., modeled in (1)) inherent in the dataset, while epistemic uncertainty in the IP prior belief p(f_X) reflects the model uncertainty arising from the latent task prior belief p(z) in (2).⁵ Let the sets T and T⋆ denote the meta-training and meta-testing tasks, respectively. Following the convention in (Finn et al.
, 2018; Gordon et al., 2019; Ravi & Beatson, 2018; Yoon et al., 2018), for each meta-training task t ∈ T, we consider a support-query (or train-test) split of its dataset (X_t, y_{X_t}) into the support set (or training dataset) (X_t^s, y_{X_t^s}) and query set (or test/evaluation dataset) (X_t^q, y_{X_t^q}), where X_t = X_t^s ∪ X_t^q and X_t^s ∩ X_t^q = ∅. Specifically, for an N-way K-shot classification problem, the support set has K examples per class and N classes in total. Meta-learning can be defined as an optimization problem (Finn et al., 2017; 2018) and its goal is to learn meta-parameters θ that maximize the following objective defined over all meta-training tasks: J_meta ≜ log ∏_{t∈T} p(y_{X_t^q} | y_{X_t^s}) = Σ_{t∈T} log ∫ p(y_{X_t^q} | f_{X_t^q}) p(f_{X_t^q} | y_{X_t^s}) df_{X_t^q}. (3) Task adaptation p(f_{X_t^q} | y_{X_t^s}) is performed via IP inference after observing the support set: p(f_{X_t^q} | y_{X_t^s}) = ∫ p(f_{X_t^q} | z) p(z | y_{X_t^s}) dz. (4) ³We defer the discussion of meta-learning on probabilistic classification tasks using the robust-max likelihood (Hernández-Lobato et al., 2011) to Appendix A.1. ⁴p(z) is often assumed to be a simple distribution like the multivariate Gaussian N(0, I) (Garnelo et al., 2018). ⁵Our work here considers a point estimate of the meta-parameters θ instead of a Bayesian treatment of θ (Finn et al., 2018; Yoon et al., 2018). This allows us to interpret the epistemic uncertainty in p(f_X) via p(z) directly. The objective J_meta (3) is the "test" likelihood on the query set, which reflects the idea of "learning to learn" by assessing the effectiveness of "learning on the support set" through the query set. An alternative interpretation views p(f_{X_t^q} | y_{X_t^s}) as an "informative prior" after observing the support set. The objective J_meta (3) is also known as the Bayesian held-out likelihood (Gordon et al., 2019).
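The generative model of Def. 1, the noise model of Eq. (1), and the support-query split above can be combined into a small runnable sketch. Everything below is illustrative: `g_theta` is a toy MLP standing in for an arbitrary generator, and all helper names and dimensions are assumptions, not the paper's implementation.

```python
import numpy as np

def g_theta(x, z, W1, b1, W2, b2):
    """Toy generator g_theta(x, z): a one-hidden-layer MLP applied to [x; z].
    (W1, b1, W2, b2) stand in for the meta-parameters theta."""
    h = np.tanh(np.concatenate([x, z]) @ W1 + b1)
    return h @ W2 + b2

def sample_task(X, dim_z, theta, sigma, rng):
    """Def. 1 + Eq. (1): draw z ~ N(0, I), set f(x) = g_theta(x, z),
    and return noisy outputs y_x = f(x) + eps(x) with eps ~ N(0, sigma^2)."""
    z = rng.standard_normal(dim_z)
    fX = np.array([g_theta(x, z, *theta) for x in X])
    return fX + sigma * rng.standard_normal(fX.shape), z

def support_query_split(X, y, k, rng):
    """Disjoint support/query split: X_t = X_s ∪ X_q with X_s ∩ X_q = ∅."""
    idx = rng.permutation(len(X))
    return (X[idx[:k]], y[idx[:k]]), (X[idx[k:]], y[idx[k:]])

rng = np.random.default_rng(0)
dim_x, dim_z, dim_h = 2, 3, 8
theta = (rng.standard_normal((dim_x + dim_z, dim_h)), np.zeros(dim_h),
         rng.standard_normal((dim_h, 1)), np.zeros(1))
X = rng.standard_normal((10, dim_x))           # 10 input vectors for one task
yX, z = sample_task(X, dim_z, theta, sigma=0.1, rng=rng)
(Xs, ys), (Xq, yq) = support_query_split(X, yX, k=3, rng=rng)
```

Drawing a fresh z for each task is what makes different tasks correspond to different functions under the same meta-parameters θ.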
In a meta-testing task, adaptation is also performed via IP inference after observing its support set and evaluated on its query set. Similar to a GP or any stochastic process, the input vectors of the dataset are assumed to be known/fixed beforehand. We will relax this assumption by allowing them to be unknown when our IPML algorithm is exploited for synthetic task generation (Sec. 3.3). 3 IMPLICIT PROCESS-BASED META-LEARNING (IPML). 3.1 EXPECTATION MAXIMIZATION (EM) ALGORITHM FOR IPML. Recall that task adaptation requires evaluating p(f_{X_t^q} | y_{X_t^s}) (4). From Def. 1, if generator g_θ (2) can be an arbitrary model (e.g., a deep neural network), then p(f_{X_t^q} | y_{X_t^s}) and p(f_{X_t^q}) cannot be evaluated in closed form and have to be approximated by samples. Inspired by the Monte Carlo EM algorithm (Wei & Tanner, 1990), which utilizes posterior samples to obtain a maximum likelihood estimate of some hyperparameters, we propose an EM algorithm for IPML: the E step uses the stochastic gradient Hamiltonian Monte Carlo (SGHMC) sampling method to draw samples from p(f_{X_t^q} | y_{X_t^s}) (4), while the M step maximizes the meta-learning objective J_meta (3) w.r.t. meta-parameters θ. Expectation (E) step. Note that since f_{X_t^q} = (g_θ(x, z))^⊤_{x∈X_t^q} (2), no uncertainty exists in p(f_{X_t^q} | z) in (4). So, p(f_{X_t^q} | y_{X_t^s}) can be evaluated using the same generator g_θ (2) and the latent task posterior belief p(z | y_{X_t^s}), as follows: Remark 2. Drawing samples from p(f_{X_t^q} | y_{X_t^s}) is thus equivalent to first drawing samples of z from p(z | y_{X_t^s}) and then passing them as inputs to generator g_θ to obtain samples of f_{X_t^q}. Hence, given a task t, adaptation p(f_{X_t^q} | y_{X_t^s}) (4) essentially reduces to a task identification problem: performing IP inference to obtain the latent task posterior belief p(z | y_{X_t^s}).
This is a direct consequence of epistemic uncertainty arising from p(z | y_{X_t^s}) and p(z) (Remark 1). In general, p(z | y_{X_t^s}) also cannot be evaluated in closed form. Instead of using variational inference (VI) and approximating p(z | y_{X_t^s}) with a potentially restrictive variational distribution (Garnelo et al., 2018; Kaddour et al., 2020; Ma et al., 2019), we draw samples from p(z | y_{X_t^s}) using SGHMC (Chen et al., 2014). SGHMC introduces an auxiliary random vector r and samples from a joint distribution p(z, r | y_{X_t^s}) following the Hamiltonian dynamics (Brooks et al., 2011; Neal, 1993): p(z, r | y_{X_t^s}) ∝ exp(−U(z) − 0.5 r^⊤ M^{−1} r), where the negative log-probability U(z) ≜ −log p(z | y_{X_t^s}) resembles the potential energy and r resembles the momentum. SGHMC updates z and r as follows: Δz = α M^{−1} r, Δr = −α ∇_z U(z) − α C M^{−1} r + N(0, 2α(C − B)), where α, C, M, and B are the step size, friction term, mass matrix, and Fisher information matrix, respectively.⁶ Note that ∇_z U(z) = −∇_z log p(z | y_{X_t^s}) = −∇_z log p(z, y_{X_t^s}) = −∇_z [log p(y_{X_t^s} | f_{X_t^s} = (g_θ(x, z))^⊤_{x∈X_t^s}) + log p(z)] can be evaluated tractably. Maximization (M) step. We optimize J_meta (3) w.r.t. θ using samples of z. The original objective J_meta = Σ_{t∈T} log(E_{p(z | y_{X_t^s})}[p(y_{X_t^q} | f_{X_t^q} = (g_θ(x, z))^⊤_{x∈X_t^q})]) is not amenable to stochastic optimization with data minibatches, which is usually not an issue in a few-shot learning setting. When a huge number of data points and samples of z are considered, we can resort to optimizing the lower bound J_s-meta of J_meta by applying Jensen's inequality: J_meta ≥ J_s-meta ≜ Σ_{t∈T} E_{p(f_{X_t^q} | y_{X_t^s})}[log p(y_{X_t^q} | f_{X_t^q})] = Σ_{t∈T} E_{p(z | y_{X_t^s})}[log p(y_{X_t^q} | f_{X_t^q})]. ⁶The sampler hyperparameters α, C, M, and B are set according to the auto-tuning method of Springenberg et al.
( 2016 ) which has been verified to work well in our experiments ; more details are given in Appendix A.2.1 . | This paper proposes Implicit Process Meta-Learning (IPML) where each task is represented as a continuous latent vector $\mathbf{z}$, and corresponding data points are described as function values evaluated at an implicit process conditioned on the task latent vector $\mathbf{z}$. To conduct the intractable inference, a stochastic gradient Hamiltonian Monte Carlo (SGHMC) algorithm is employed. A VAE-like network called X-Net is trained simultaneously to generate synthetic tasks from the task latent vectors. The experimental results demonstrate that the proposed algorithm shows decent performances on few-shot classification tasks, and the task latent vectors indeed represent a meaningful space of the tasks on which measuring distances between tasks and detecting outlier tasks are possible. | SP:3a95eb3f0187add9fb6cc59398f744250daf1434 |
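The SGHMC updates in the E step above translate directly into code. The toy below is a sketch under stated assumptions (numpy, identity mass matrix M = I, constant step size, illustrative helper names; not the paper's implementation) that samples from a standard-normal posterior, for which U(z) = ½‖z‖² and hence ∇_z U(z) = z:

```python
import numpy as np

def sghmc_sample(grad_U, z0, n_steps, alpha=1e-2, C=1.0, B=0.0, rng=None):
    """SGHMC updates with mass matrix M = I:
       z <- z + alpha * r
       r <- r - alpha * grad_U(z) - alpha * C * r + N(0, 2 * alpha * (C - B))
    grad_U is the gradient of the potential U(z) = -log p(z | y)."""
    rng = rng or np.random.default_rng(0)
    z = z0.copy()
    r = rng.standard_normal(z.shape)     # auxiliary momentum vector
    samples = []
    for _ in range(n_steps):
        z = z + alpha * r
        noise = rng.normal(0.0, np.sqrt(2 * alpha * (C - B)), size=z.shape)
        r = r - alpha * grad_U(z) - alpha * C * r + noise
        samples.append(z.copy())
    return np.array(samples)

# Toy target: p(z | y) = N(0, I), so grad_U(z) = z.
samples = sghmc_sample(lambda z: z, z0=np.zeros(2), n_steps=5000)
```

In IPML the only problem-specific ingredient is `grad_U`, which by the identity above decomposes into the tractable gradients of the likelihood term log p(y | f) and the prior term log p(z).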
Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent Space Distribution Matching in WAE | 1 INTRODUCTION . The main goal of generative modeling is to learn a good approximation of the underlying data distribution from finite data samples , while facilitating an efficient way to draw samples . Popular algorithms such as variational autoencoders ( VAE , Kingma & Welling ( 2013 ) ; Rezende et al . ( 2014 ) ) and generative adversarial networks ( GAN , Goodfellow et al . ( 2014 ) ) are theoretically-grounded models designed to meet this goal . However , they come with some challenges . For instance , VAEs suffer from the posterior collapse problem ( Chen et al. , 2016 ; Zhao et al. , 2017 ; Van Den Oord et al. , 2017 ) , and a mismatch between the posterior and prior distribution ( Kingma et al. , 2016 ; Tomczak & Welling , 2018 ; Dai & Wipf , 2019 ; Bauer & Mnih , 2019 ) . GANs are known to have the mode collapse problem ( Che et al. , 2016 ; Dumoulin et al. , 2016 ; Donahue et al. , 2016 ) and optimization instability ( Arjovsky & Bottou , 2017 ) due to their saddle point problem formulation . Wasserstein autoencoder ( WAE ) Tolstikhin et al . ( 2017 ) proposes a general theoretical framework that can potentially avoid some of these challenges . They show that the divergence between two distributions is equivalent to the minimum reconstruction error , under the constraint that the marginal distribution of the latent space is identical to a prior distribution . The core challenge of this framework is to match the latent space distribution to a prior distribution that is easy to sample from . Tolstikhin et al . ( 2017 ) investigate GANs and maximum mean discrepancy ( MMD , Gretton et al . ( 2012 ) ) for this task and empirically find that the GAN-based approach yields better performance despite its instability . Existing research has tried to address this challenge ( Kolouri et al. , 2018 ; Knop et al. , 2018 ) ( see section 2 for a discussion ) . 
This paper aims to design a generative model to address the latent space distribution matching problem of WAEs . To do so , we make a simple observation that allows us to use the contrastive learning framework . Contrastive learning achieves state-of-the-art results in self-supervised representation learning tasks ( He et al. , 2020 ; Chen et al. , 2020 ) by forcing the latent representations to be 1 ) augmentation invariant ; 2 ) distinct for different data samples . It has been shown that the contrastive learning objective corresponding to the latter goal pushes the learned representations to achieve maximum entropy over the unit hyper-sphere ( Wang & Isola , 2020 ) . We observe that applying this contrastive loss term to the latent representation of an AE therefore matches it to the uniform distribution over the unit hyper-sphere . Due to the use of the contrastive learning approach , we call our algorithm Momentum Contrastive Autoencoder ( MoCA ) . Our contributions are as follows : 1. we address the fundamental algorithmic challenge of Wasserstein auto-encoders ( WAE ) , viz the latent space distribution matching problem , which involves matching the marginal distribution of the latent space to a prior distribution . We achieve this by making the observation that the contrastive term in the recent contrastive learning framework implicitly achieves this precise goal . This is also our novelty . 2. we show that our proposal of using the contrastive learning framework to optimize the WAE loss achieves faster convergence and more stable optimization compared with existing popular algorithms for WAE . 3. we perform a thorough ablation analysis of the impact of the hyper-parameters introduced by the contrastive learning framework , on the performance and behavior of WAE . 2 RELATED WORK . There has been a considerable amount of research on autoencoder based generative modeling . In this paper we focus on Wasserstein autoencoders ( WAE ) . 
Nonetheless, we discuss other autoencoder methods for completeness, and then focus on prior work that aims at achieving the WAE objective. AE-based generative models: One of the earliest models in this category is the de-noising autoencoder (Vincent et al., 2008). Bengio et al. (2013b) show that training an autoencoder to de-noise a corrupted input leads to the learning of a Markov chain whose stationary distribution is the original data distribution it is trained on. However, this results in inefficient sampling and mode-mixing problems (Bengio et al., 2013b; Alain & Bengio, 2014). Variational autoencoders (VAE) (Kingma & Welling, 2013; Rezende et al., 2014) overcome these challenges by maximizing a variational lower bound of the data likelihood, which involves a KL term minimizing the divergence between the latent's posterior distribution and a prior distribution. This allows for efficient approximate likelihood estimation as well as posterior inference through ancestral sampling once the model is trained. Despite these advantages, follow-up works have identified a few important drawbacks of VAEs. The VAE objective is at risk of posterior collapse – learning a latent space distribution which is independent of the input distribution if the KL term dominates the reconstruction term (Chen et al., 2016; Zhao et al., 2017; Van Den Oord et al., 2017). The poor sample quality of VAEs has been attributed to a mismatch between the prior (which is used for drawing samples) and the posterior (Kingma et al., 2016; Tomczak & Welling, 2018; Dai & Wipf, 2019; Bauer & Mnih, 2019). Dai & Wipf (2019) claim that this happens due to a mismatch between the AE latent space dimension and the intrinsic dimensionality of the data manifold (which is typically unknown), and propose a two-stage VAE to remedy this problem. VQ-VAE (Oord et al.
, 2017) takes a different approach and proposes a discrete latent space as an inductive bias in VAE. Ghosh et al. (2019) observe that VAEs can be interpreted as deterministic autoencoders with noise injected in the latent space as a form of regularization. Based on this observation, they introduce deterministic autoencoders and empirically investigate various other regularizations. Similar to our work, the recently proposed DC-VAE (Parmar et al., 2021) also uses a contrastive loss for generative modeling. However, the objective resulting from their version of instance discrimination estimates the log-likelihood function rather than the WAE objective (which estimates the Wasserstein distance). Also, they integrate the GAN loss into the instance-discrimination version of the VAE loss. There has been research on AEs with a hyperspherical latent space, which we use in our paper. Davidson et al. (2018) propose to replace the Gaussian prior used in VAE with the von Mises-Fisher (vMF) distribution, which is analogous to the Gaussian distribution but on the unit hypersphere. WAE: Tolstikhin et al. (2017) make the observation that the optimal transport problem can be equivalently framed as an autoencoder objective under the constraint that the latent space distribution matches a prior distribution. They experiment with two alternatives to satisfy this constraint in the form of a penalty – MMD (Gretton et al., 2012) and GAN (Goodfellow et al., 2014) loss – and they find that the latter works better in practice. Training an autoencoder with an adversarial loss was also proposed earlier in adversarial autoencoders (Makhzani et al., 2015). There has been research on making use of sliced distances to achieve the WAE objective. For instance, Kolouri et al. (2018) observe that the Wasserstein distance for one-dimensional distributions has a closed-form solution.
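The one-dimensional closed form just mentioned, and the sliced approximation built on it, can be sketched as follows. This is illustrative numpy code under the stated assumptions (equal-size samples, Monte Carlo projections), not Kolouri et al.'s implementation:

```python
import numpy as np

def wasserstein_1d(a, b, p=1):
    """Closed form for the p-Wasserstein distance between two equal-size
    one-dimensional empirical samples: sort both and compare order statistics."""
    a, b = np.sort(a), np.sort(b)
    return float(np.mean(np.abs(a - b) ** p) ** (1.0 / p))

def sliced_wasserstein(X, Y, n_proj=100, rng=None):
    """Average the closed-form 1-D distance over random one-dimensional
    projections, approximating the distance between high-dimensional samples."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)           # random direction on the unit sphere
        total += wasserstein_1d(X @ v, Y @ v)
    return total / n_proj

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 3))
Y = X + np.array([1.0, 0.0, 0.0])        # Y is X shifted one unit along axis 0
```

Because each projection reduces to sorting, the sliced distance avoids solving any high-dimensional optimal transport problem.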
Motivated by this, they propose to use the sliced-Wasserstein distance, which involves a large number of projections of the high-dimensional distribution onto one-dimensional spaces, allowing the original Wasserstein distance to be approximated by the average of one-dimensional Wasserstein distances. A similar idea using the sliced-Cramér distance is introduced in Knop et al. (2018). Patrini et al. (2020), on the other hand, propose a more general framework which allows for matching the posterior of the autoencoder to any arbitrary prior of choice (which is a challenging task) through the use of the Sinkhorn algorithm (Cuturi, 2013). However, it requires differentiating through the Sinkhorn iterations and unrolling them for backpropagation (which is computationally expensive); though their choice of the Sinkhorn algorithm for latent space distribution matching allows their approach to be general. 3 MOMENTUM CONTRASTIVE AUTOENCODER. We present the proposed algorithm in this section. We begin by restating the WAE theorem that connects the autoencoder loss with the Wasserstein distance between two distributions. Let X ∼ P_X be a random variable sampled from the real data distribution on X, Z ∼ Q(Z|X) be its latent representation in Z ⊆ R^d, and X̂ = g(Z) be its reconstruction by a deterministic decoder/generator g: Z → X. Note that the encoder Q(Z|X) can also be deterministic in the WAE framework, and we let f(X) be equal in distribution to Q(Z|X) for some deterministic f: X → Z. Theorem 1. (Bousquet et al., 2017; Tolstikhin et al., 2017) Let P_Z be a prior distribution on Z, let P_g = g#P_Z be the push-forward of P_Z under g (i.e., the distribution of X̂ = g(Z) when Z ∼ P_Z), and let Q_Z = f#P_X be the push-forward of P_X under f.
Then,
W_c(P_X, P_g) = inf_{Q: Q_Z = P_Z} E_{X∼P_X, Z∼Q(Z|X)}[c(X, g(Z))] = inf_{f: f#P_X = P_Z} E_{X∼P_X}[c(X, g(f(X)))] (1)
where W_c denotes the Wasserstein distance for some measurable cost function c. The above theorem states that the Wasserstein distance between the true (P_X) and generated (P_g) data distributions can be equivalently computed by finding the minimum (w.r.t. f) reconstruction loss, under the constraint that the marginal distribution Q_Z of the latent variable matches the prior distribution P_Z. Thus the Wasserstein distance itself can be minimized by jointly minimizing the reconstruction loss w.r.t. both f (encoder) and g (decoder/generator), as long as the constraint is met. In this work, we parameterize the encoder network f: X → R^d such that the latent variable Z = f(X) has unit ℓ2 norm. Our goal is then to match the distribution of this Z to the uniform distribution over the unit hyper-sphere S^d = {z ∈ R^d : ‖z‖₂ = 1}. To do so, we study the so-called "negative sampling" component of the contrastive loss used in self-supervised learning,
L_neg(f; τ, K) = E_{x∼P_X, {x_i⁻}_{i=1}^K ∼ P_X}[log (1/K) Σ_{j=1}^K e^{f(x)^⊤ f(x_j⁻)/τ}] (2)
Here, f: X → S^d is a neural network whose output has unit ℓ2 norm, τ is the temperature hyperparameter, and K is the number of samples (another hyper-parameter). Theorem 1 of Wang & Isola (2020) shows that for any fixed τ, when K → ∞,
lim_{K→∞} (L_neg(f; τ, K) − log K) = E_{x∼P_X}[log E_{x⁻∼P_X}[e^{f(x)^⊤ f(x⁻)/τ}]] (3)
Crucially, this limit is minimized exactly when the push-forward f#P_X (i.e., the distribution of the latent random variable Z = f(X) when X ∼ P_X) is uniform on S^d. Moreover, even the Monte Carlo approximation of Eq.
2 (with mini-batch size B and some K such that B ≤ K < ∞),
L^MC_neg(f; τ, K, B) = (1/B) Σ_{i=1}^B log (1/K) Σ_{j=1}^K e^{f(x_i)^⊤ f(x_j)/τ} (4)
is a consistent estimator (up to a constant) of the entropy of f#P_X, called the redistribution estimate (Ahmad & Lin, 1976). Algorithm 1: PyTorch-like pseudocode of the Momentum Contrastive Autoencoder algorithm.

```python
# Enc_q, Enc_k: encoder networks for query and key. Their outputs are L2-normalized
# Dec: decoder network
# Q: dictionary as a queue of K randomly initialized keys (d x K)
# m: momentum
# lambda: regularization coefficient for entropy maximization
# tau: logit temperature
for x in data_loader:                 # load a minibatch x with B samples
    z_q = Enc_q(x)                    # queries: B x d
    z_k = Enc_k(x).detach()           # keys: B x d, no gradient through keys
    x_rec = Dec(z_q)                  # reconstructed input
    # positive logits: B x 1
    l_pos = bmm(z_q.view(B, 1, d), z_k.view(B, d, 1))
    # negative logits: B x K
    l_neg = mm(z_q.view(B, d), Q.view(d, K))
    # logits: B x (1 + K)
    logits = cat([l_pos, l_neg], dim=1)
    # compute loss
    labels = zeros(B)                 # positive elements are in the 0-th index
    L_con = CrossEntropyLoss(logits / tau, labels)  # contrastive loss maximizing entropy of z_q
    L_rec = ((x_rec - x) ** 2).sum() / B            # reconstruction loss
    L = L_rec + lambda * L_con        # momentum contrastive autoencoder loss
    # update Enc_q and Dec networks
    L.backward()
    update(Enc_q.params)
    update(Dec.params)
    # update Enc_k
    Enc_k.params = m * Enc_k.params + (1 - m) * Enc_q.params
    # update dictionary
    enqueue(Q, z_k)                   # enqueue the current minibatch
    dequeue(Q)                        # dequeue the earliest minibatch
```

bmm: batch matrix multiplication; mm: matrix multiplication; cat: concatenation. enqueue appends Q with the keys z_k ∈ R^{B×d} from the current batch; dequeue removes the oldest B keys from Q.
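The entropy-maximizing behavior of this Monte Carlo estimate can be checked numerically. In the sketch below (numpy, hypothetical helper names; the batch serves as its own negatives, i.e. B = K), a batch of roughly uniform unit vectors attains a lower loss than a collapsed batch:

```python
import numpy as np

def l_mc_neg(Z, tau=0.5):
    """Monte Carlo negative-sampling loss with the batch as its own negatives:
    (1/B) sum_i log (1/K) sum_j exp(f(x_i)^T f(x_j) / tau), here with B = K.
    Z: (K, d) array of unit-norm latent vectors."""
    sims = Z @ Z.T / tau                     # pairwise f(x_i)^T f(x_j) / tau
    return float(np.mean(np.log(np.mean(np.exp(sims), axis=1))))

def normalize(Z):
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Normalized Gaussian vectors are (exactly) uniform on the unit hyper-sphere.
Z_uniform = normalize(rng.standard_normal((512, 8)))
# A collapsed batch: all latents point in nearly the same direction.
Z_collapsed = normalize(np.ones((512, 8)) + 0.01 * rng.standard_normal((512, 8)))
```

Minimizing this loss therefore pushes the encoder away from the collapsed configuration and toward the uniform one, which is exactly the latent prior the WAE constraint asks for.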
This follows if we notice that k(x_i; τ, K) := (1/K) Σ_{j=1}^K e^{f(x_i)^⊤ f(x_j)/τ} is the un-normalized kernel density estimate of f(x_i) using the i.i.d. samples {x_j}_{j=1}^K, so −L^MC_neg(f; τ, K, B) = −(1/B) Σ_{i=1}^B log k(x_i; τ, K) (Wang & Isola, 2020). So minimizing L_neg (and importantly L^MC_neg) maximizes the entropy of f#P_X. Tolstikhin et al. (2017) attempted to enforce the constraint that f#P_X and P_Z were matching distributions by regularizing the reconstruction loss with the MMD or a GAN-based estimate of the divergence between f#P_X and P_Z. By letting P_Z be the uniform distribution over the unit hyper-sphere S^d, the insights above allow us to instead minimize the much simpler regularized loss
L(f, g; λ, τ, B, K) = (1/B) Σ_{i=1}^B ‖x_i − g(f(x_i))‖²₂ + λ L^MC_neg(f; τ, K, B) (5)
Training: For simplicity, we will now use the notation Enc(·) and Dec(·) to respectively denote the encoder and decoder network of the autoencoder. Further, the d-dimensional output of Enc(·) is ℓ2-normalized, i.e., ‖Enc(x)‖₂ = 1 ∀x. Based on the theory above, we aim to minimize the loss L(Enc, Dec; λ, τ, B, K), where λ is the regularization coefficient, τ is the temperature hyperparameter, B is the mini-batch size, and K ≥ B is the number of samples used to estimate L_neg. In practice, we propose to use the momentum contrast (MoCo, He et al. (2020)) framework to implement L_neg. Let Enc_t be parameterized by θ_t at step t of training. Then, we let Enc′_t be the same encoder parameterized by the exponential moving average θ̃_t = (1 − m) Σ_{i=1}^t m^{t−i} θ_i. Letting x_1, . . .
, x_K be the K most recent training examples, and letting t(j) = t − ⌊j/B⌋ be the time at which x_j appeared in a training mini-batch, we replace L^MC_neg at time step t with
L_MoCo = (1/B) Σ_{i=1}^B log (1/K) Σ_{j=1}^K exp(Enc_t(x_i)^⊤ Enc′_{t(j)}(x_j) / τ) − (1/B) Σ_{i=1}^B Enc_t(x_i)^⊤ Enc′_t(x_i) / τ (6)
This approach allows us to use the latent vectors of inputs outside the current mini-batch without re-computing them, offering substantial computational advantages over other contrastive learning frameworks such as SimCLR (Chen et al., 2020). Forcing the parameters of Enc′ to evolve according to an exponential moving average is necessary for training stability, as is the second term encouraging the similarity of Enc_t(x_i) and Enc′_t(x_i) (so-called "positive samples" in the terminology of contrastive learning). Note that we do not use any data augmentations in our algorithm, but this similarity term is still non-trivial since the networks Enc_t and Enc′_t are not identical. Pseudo-code of our final algorithm, which we call Momentum Contrastive Autoencoder (MoCA), is shown in Algorithm 1 (pseudo-code style adapted from He et al. (2020)). Finally, in all our experiments, inspired by Grill et al. (2020), we set the exponential moving average parameter m for updating the Enc′ network at the t-th iteration as m = 1 − (1 − m₀) · (cos(πt/T) + 1)/2, where T is the total number of training iterations and m₀ is the base momentum hyper-parameter. Inference: Once the model is trained, the marginal distribution of the latent space (i.e., the push-forward Enc#P_X) should be close to a uniform distribution over the unit hyper-sphere. We can therefore draw samples from the learned distribution as follows: we first sample z ∼ N(0, I) from the standard multivariate normal distribution in R^d and then generate a sample x_g := Dec(z/‖z‖₂).
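Both the cosine momentum schedule and the inference-time sampler described above are easy to write down directly; the numpy sketch below uses hypothetical helper names and is not the authors' code:

```python
import numpy as np

def momentum_schedule(t, T, m0=0.996):
    """Cosine EMA momentum from the text: m = 1 - (1 - m0)*(cos(pi*t/T) + 1)/2.
    Starts at m0 (t = 0) and anneals toward 1 as t -> T."""
    return 1.0 - (1.0 - m0) * (np.cos(np.pi * t / T) + 1.0) / 2.0

def sample_latent(d, rng):
    """Inference-time latent sampling: z ~ N(0, I) projected onto the unit
    hyper-sphere, matching the uniform prior the encoder was trained toward."""
    z = rng.standard_normal(d)
    return z / np.linalg.norm(z)

T = 1000
m_start = momentum_schedule(0, T)   # equals m0
m_end = momentum_schedule(T, T)     # equals 1 (key encoder frozen at the end)
z = sample_latent(64, np.random.default_rng(0))
# A trained decoder would then produce a sample via x_g = Dec(z).
```

Normalizing a standard Gaussian vector is the standard way to obtain exactly uniform samples on the hyper-sphere, since the Gaussian is rotationally invariant.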
| In this paper, the authors propose to use contrastive learning for matching in latent space in the Wasserstein autoencoder (WAE). In addition, they employ techniques such as momentum contrast in contrastive learning. Experimental results show that the proposed method, MoCA, is more stable and converges faster than existing methods. It is also capable of generating high-resolution images such as CelebA-HQ. | SP:98c84435bfea0ef2beb3b63b51a0a464b5ec620a |
That Escalated Quickly: Compounding Complexity by Editing Levels at the Frontier of Agent Capabilities | 1 INTRODUCTION. Reinforcement Learning (RL, Sutton & Barto (1998)) considers the problem of an agent learning from experience in an environment to maximize the total (discounted) reward. The past decade has seen a surge of interest in RL, with high-profile successes in games (Vinyals et al., 2019; Berner et al., 2019; Silver et al., 2016; Mnih et al., 2013; Hu & Foerster, 2020) and robotics (OpenAI et al., 2019; Andrychowicz et al., 2020). As such, there is tremendous excitement that RL may be a path towards generally capable agents (Silver et al., 2021). Despite these successes, deploying RL agents in the real world remains a challenge (Dulac-Arnold et al., 2019). Notably, strong training performance in simulation may not result in policies that are robust to the many sources of variation in the real world. Addressing this challenge on the agent side has become an active area of research (Zhang et al., 2021a; Agarwal et al., 2021a; Raileanu & Fergus, 2021), but in this paper we instead focus on the impact of the training environment itself, which often has a significant effect on the agent's ability to generalize (Co-Reyes et al., 2020). For example, in locomotion tasks, Reda et al. (2020) found that the initial state distribution, survival bonus, reward structure, and control frequency had a significant impact on the performance of an agent. Indeed, curricula over environments can also influence the generalization performance of the agent (Jiang et al., 2021b). Throughout this paper we consider distributions of environments, referring to each individual sample as a level. Given a parameterized environment, the simplest approach one can consider is Domain Randomization (DR, Jakobi, 1997; Tobin et al., 2017; Sadeghi & Levine, 2017; Risi & Togelius, 2020; Peng et al.
, 2017 ) , whereby an agent trains on individual levels uniformly sampled from an underlying environment distribution . It has been shown that training an agent with a DR-type approach can produce agents capable of complex real-world skills ( OpenAI et al. , 2019 ) . However , the performance of DR is only as good as the sampling distribution available—thus it can be ineffective when the probability of sampling useful levels is too low . Recently , Unsupervised Environment Design ( UED , Dennis et al. , 2020 ) has emerged as a formalism for methods to design effective curricula . Given a parameterized environment , UED methods frame learning as a game between a teacher which generates a curriculum of levels , and a student seeking to maximize some notion of return . UED is a generalization of several other approaches . Indeed , DR can be considered as a UED algorithm whereby the teacher generates environments uniformly at random from the environment distribution . Other approaches to UED consider learning a teacher agent ( or generator ) , with a variety of adversarial objectives proposed ( Dennis et al. , 2020 ; Gur et al. , 2021 ) . However , training a teacher is a challenging optimization problem , suffering from both nonstationarity and sparse reward , as the teacher ’ s feedback only comes after evaluation by a changing student policy . Recent work showed it can be more effective to simply curate levels produced by DR ( Jiang et al. , 2021b ; a ; Matiisen et al. , 2020 ) , producing a curriculum of increasingly complex randomly generated levels . Despite their promise , these methods can only be as effective as the best of the random levels they sample , which can be a limitation in high dimensional design spaces . Finally , another series of promising works seeks to evolve populations of environments ( Wang et al. , 2019 ; 2020 ; Dharna et al.
, 2020 ) , but these methods heavily rely on handcrafted heuristics and use up to 20x more compute since they also train a population of agents . In this paper , we seek a general method which harnesses the benefits of all three of these approaches . We posit the following : Rather than generate levels from scratch , it may be more effective to edit previously curated levels . Our primary contribution is to propose a new method which we call Adversarially Compounding Complexity by Editing Levels , or ACCEL . ACCEL is an evolutionary process , with levels constantly changing to remain at the frontier of the student agent ’ s capabilities ( see Figure 2 ) . As such , levels generated by ACCEL begin simple but quickly become more complex . This benefits the beginning of training ( Berthouze & Lungarella , 2004 ) , as the student begins learning much faster , while also facilitating the construction of complex structures ( see Figure 1 ) . We believe ACCEL provides the best of both worlds : an evolutionary approach that can generate increasingly complex environments , combined with a regret-based curator which provides theoretical robustness guarantees in equilibrium . We evaluate ACCEL on a series of challenging procedurally generated grid world environments , where ACCEL demonstrates the ability to rapidly increase complexity while maintaining performance . Finally , we show ACCEL makes it possible to train agents capable of transfer to mazes an order of magnitude larger than training levels , achieving over double the success rate of the next best baseline . 2 BACKGROUND . 2.1 FROM MDPS TO UNDERSPECIFIED POMDPS . A Markov Decision Process ( MDP ) is defined as a tuple $\langle S, A, T, R, \gamma \rangle$ where $S$ and $A$ stand for the sets of states and actions respectively and $T : S \times A \to \Delta(S)$ is a transition function representing the probability that the system/agent transitions from a state $s_t \in S$ to $s_{t+1} \in S$ given action $a_t \in A$ .
Each transition also induces an associated reward $r_t$ generated by a reward function $R : S \to \mathbb{R}$ , and $\gamma$ is a discount factor . When provided with an MDP , the goal of Reinforcement Learning ( RL , Sutton & Barto , 1998 ) is to learn a policy $\pi$ that maximizes expected discounted reward , i.e. $\mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r_t\right]$ . Despite the generality of the MDP framework , it is often an unrealistic model for real world environments . First , it assumes full observability of the state , which is often impossible in practice . This is addressed in partially observable MDPs , or POMDPs , which include an observation function $I : S \to O$ which maps the true state ( which is unknown to the agent ) to a ( potentially noisy ) set of observations $O$ . Secondly , the traditional MDP framework assumes a single reward and transition function , which are fixed throughout learning . Instead , in the real world , agents may experience variations not seen during training , which makes it crucial that policies are capable of robust transfer . To address both of these issues , we use the recently introduced Underspecified POMDP , or UPOMDP , given by $\mathcal{M} = \langle A, O, \Theta, S_M, T_M, I_M, R_M, \gamma \rangle$ . This definition is identical to a POMDP with the addition of $\Theta$ to represent the free parameters of the environment , similar to the context in a Contextual MDP ( Modi et al. , 2017 ) . These parameters can be distinct at every time step and incorporated into the transition function $T_M : S \times A \times \Theta \to \Delta(S)$ . Following Jiang et al . ( 2021a ) we define a level $\mathcal{M}_\theta$ as an environment resulting from a fixed $\theta$ . We define the value of $\pi$ in $\mathcal{M}_\theta$ to be $V^\theta(\pi) = \mathbb{E}\left[\sum_{t=0}^{T} \gamma^t r_t\right]$ where $r_t$ are the rewards achieved by $\pi$ in $\mathcal{M}_\theta$ . UPOMDPs benefit from their generality , since $\Theta$ can represent possible dynamics ( for example in sim2real ( Peng et al. , 2017 ; OpenAI et al. , 2019 ; Andrychowicz et al. , 2020 ) ) , changes in observations , different reward functions or differing game maps in procedurally generated environments .
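The value $V^\theta(\pi)$ is the expectation of a discounted return; as a minimal sketch (the function and example reward sequences below are our own illustration, not from the paper), the quantity inside that expectation can be computed per episode as:

```python
def discounted_return(rewards, gamma=0.99):
    """Sum_t gamma^t * r_t for one episode's reward sequence in a level."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# V^theta(pi) is the expectation of this quantity over episodes generated
# by the policy pi in the level M_theta; averaging over sampled episodes
# gives a Monte Carlo estimate of it.
```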
2.2 METHODS FOR UNSUPERVISED ENVIRONMENT DESIGN . The goal of Unsupervised Environment Design ( UED , Dennis et al. , 2020 ) is to generate a series of levels that form a curriculum for a student agent , such that the student agent is capable of transfer , by maximizing some utility function $U_t(\pi, \theta)$ . In the case of DR , the utility function is simply : $U^U_t(\pi, \theta) = C$ ( 1 ) for any constant $C$ . When learning a teacher , recent approaches proposed to use objectives seeking to maximize regret , defined as the difference between the expected return of the current policy and the optimal policy , i.e. : $U^R_t(\pi, \theta) = \arg\max_{\pi^* \in \Pi} \{ \textsc{Regret}^\theta(\pi, \pi^*) \} = \arg\max_{\pi^* \in \Pi} \{ V^\theta(\pi^*) - V^\theta(\pi) \}$ ( 2 ) Unlike other objectives , which may promote unsolvable environments , regret-based objectives have been shown to promote the simplest possible environments that the agent cannot currently solve ( Dennis et al. , 2020 ) in a range of settings . However , since we do not have access to $\pi^*$ , a key challenge in UED algorithms utilizing objectives inspired by Equation 2 is to approximate the regret . Recently , the Prioritized Level Replay ( PLR , Jiang et al. , 2021b ; a ) algorithm introduced an additional teacher agent in the form of a curator , forming a “ dual curriculum game ” . The curator maintains a buffer of previously experienced levels and selects levels to be replayed by the student policy using objectives approximating regret . One of the objectives used by PLR is Positive Value Loss , given by : $\frac{1}{T} \sum_{t=0}^{T} \max\left( \sum_{k=t}^{T} (\gamma\lambda)^{k-t} \delta_k , 0 \right)$ ( 3 ) where $\lambda$ and $\gamma$ are the Generalized Advantage Estimation ( GAE , Schulman et al . ( 2016 ) ) and MDP discount factors respectively , and $\delta_t$ is the TD-error at timestep $t$ . Since Positive Value Loss approximates regret , if the student trains solely on curated levels ( i.e. does not take gradient steps on levels from the generator ) , then PLR achieves robustness guarantees in equilibrium .
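As a hedged sketch of the Positive Value Loss in Equation (3) (the function name and the plain-Python loop are ours; PLR's actual implementation operates on batched advantage estimates):

```python
def positive_value_loss(deltas, gamma=0.99, lam=0.95):
    """Equation (3): (1/T) * sum_t max( sum_{k=t}^{T} (gamma*lam)^(k-t) * delta_k, 0 ),
    where deltas[k] is the TD-error at timestep k of one trajectory."""
    T = len(deltas)
    total = 0.0
    for t in range(T):
        gae_sum = sum((gamma * lam) ** (k - t) * deltas[k] for k in range(t, T))
        total += max(gae_sum, 0.0)  # keep only the positive part
    return total / T
```

Trajectories scoring high under this proxy come from levels the curator treats as high-regret and prioritizes for replay.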
More formally , if $\Pi$ is the strategy set of the student and $\Theta$ is the strategy set of the teacher ( in this case the curator ) , then ( by Corollary 1 of Jiang et al . ( 2021a ) ) , in equilibrium the resulting student policy $\pi$ converges to a minimax regret policy , i.e. : $\pi = \arg\min_{\pi_A \in \Pi} \left\{ \max_{\theta, \pi_B \in \Theta, \Pi} \left\{ \textsc{Regret}^\theta(\pi_A, \pi_B) \right\} \right\}$ ( 4 ) Empirically , PLR has also been shown to produce policies with strong generalization capabilities , yet its main weakness is that it still relies on randomly sampling useful levels . Next , we introduce our new approach which seeks to leverage the curator to produce batches of high regret levels . To see the impact of PLR on a simple example , we include a visualization in Figure 19 in the Appendix . | This paper introduces Adversarially Compounding Complexity by Editing Levels (ACCEL). ACCEL is an Unsupervised Environment Design (UED) algorithm, a method of generating a curriculum of environments so as to train agents that generalize well to either a training distribution of environments or off-distribution environments. ACCEL bears similarity to a recent addition in the UED literature, Robust PLR, but importantly uses an editor to modify previously seen environments. Edited levels are used if they satisfy a criterion based on the PLR score, and hence this can be thought of as a sort of evolutionary algorithm. After introducing ACCEL, the paper presents a series of experiments in the Lava grid and Minigrid environments, benchmarking ACCEL against a suite of UED and simpler baselines. It demonstrates emergent complexity generated in algorithms' curricula (with metrics such as number of lava tiles/blocks, shortest path length) as well as generalization to held-out levels. | SP:8197f4e8e9cb5c37a297dca06bf3955da0cdcc93 |
Evaluating the Robustness of Time Series Anomaly and Intrusion Detection Methods against Adversarial Attacks | Time series anomaly and intrusion detection are extensively studied in statistics , economics , and computer science . Over the years , numerous deep learning-based methods have been proposed for time series anomaly and intrusion detection . Many of these methods demonstrate state-of-the-art performance on benchmark datasets , giving the false impression that these systems are robust and deployable in practical and industrial scenarios . In this paper , we demonstrate that state-of-the-art anomaly and intrusion detection methods can be easily fooled by adding adversarial perturbations to the sensor data . We use different scoring metrics such as prediction errors , anomaly , and classification scores over several public and private datasets belonging to aerospace applications , automobiles , server machines , and cyber-physical systems . We evaluate state-of-the-art deep neural network ( DNN ) and graph neural network ( GNN ) methods , which claim to be robust against anomalies and intrusions , and find their performance can drop to as low as 0 % under adversarial attacks from Fast Gradient Sign Method ( FGSM ) and Projected Gradient Descent ( PGD ) methods . To the best of our knowledge , we are the first to demonstrate the vulnerabilities of anomaly and intrusion detection systems against adversarial attacks . Our code is available here : https://anonymous.4open.science/r/ICLR298 1 INTRODUCTION . Machine learning and deep learning have profoundly impacted numerous fields of research and society over the last decade ( LeCun et al. , 2015 ; Goodfellow et al. , 2016 ) . Medical imaging ( Litjens et al. , 2017 ) , speech recognition ( Kumar et al. , 2018 ) , and smart manufacturing systems ( Wang et al. , 2018 ) are a few of these areas .
With the proliferation of smart sensors , massive advances in data collection and storage , and the ease with which data analytics and predictive modeling can be applied , multivariate time series data obtained from collections of sensors can be analyzed to identify regular patterns that can be interpreted and exploited . Numerous researchers have been interested in time series anomaly and intrusion detection ( Pang et al. , 2021 ; Khraisat et al. , 2019 ) . For instance , time series anomaly detection methods are used in the aerospace industry for satellite health monitoring , while intrusion detection methods are employed in the automobile industry for in-vehicle controller area networks . These deep neural network-based solutions outperform the competition on a variety of benchmark datasets . However , as deep learning became more prevalent , researchers began to investigate the vulnerability of deep networks , particularly to adversarial attacks . In the context of image recognition , an adversarial attack entails modifying an original image in such a way that the modifications are nearly imperceptible to the human eye ( Yuan et al. , 2019 ) . The modified image is referred to as an adversarial image , as it will be classified incorrectly by the neural network , whereas the original image will be classified correctly . One of the most well-known real-world attacks involves manipulating the image of a traffic sign in such a way that it is misinterpreted by an autonomous vehicle ( Eykholt et al. , 2018 ) . The most common type of attack is gradient-based , in which the attacker modifies the image in the direction of the gradient of the loss function relative to the input image , thereby increasing the rate of misclassification ( Yuan et al. , 2019 ; Goodfellow et al. , 2014 ; Madry et al. , 2017 ) . 
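A minimal sketch of this gradient-based idea on time series input (the toy linear predictor, its weights, and the analytic gradient below are our own illustrative assumptions, not any of the detectors studied in the paper):

```python
import numpy as np

def fgsm_attack(x, grad_loss, eps):
    """FGSM: one step of size eps in the sign of the loss gradient w.r.t. x."""
    return x + eps * np.sign(grad_loss(x))

# Toy autoregressive predictor: next value approximated by w @ x, with
# squared prediction error 0.5 * (w @ x - y)^2 as the anomaly score.
w = np.array([0.5, -0.3, 0.8])
y = 1.0
grad = lambda x: (w @ x - y) * w  # gradient of the squared error w.r.t. the input

x = np.array([1.0, 2.0, 0.5])
x_adv = fgsm_attack(x, grad, eps=0.1)
# |x_adv - x| <= eps per coordinate (a small perturbation), yet the
# prediction error the detector thresholds on has increased.
```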
While adversarial attacks have been extensively studied in the context of image recognition , they have not been extensively investigated for anomaly and intrusion detection systems . This is surprising given the increasing popularity of deep learning models for classifying time series ( Ma et al. , 2018 ; Zheng et al. , 2017 ; Wang et al. , 2017 ) . Additionally , adversarial attacks are a possibility in a large number of applications that require the use of time series data . For instance , Figure 1 ( top ) depicts the original and perturbed time series for the Korean Aerospace Research Institute ( KARI ) ’ s KOMPSAT-5 satellite . The prediction error ( see Figure 1 , bottom ) is generated by the “ Convolutional LSTM with Mixtures of Probabilistic Principal Component Analyzers ” ( CLMPPCA ) method , which is currently deployed at KARI , to predict anomalies . While CLMPPCA accurately predicts the anomaly for the original time series , adding small perturbations in the form of FGSM and PGD attacks causes entire input samples to be classified as anomalies . This attack can have a severe impact on the satellite health monitoring system . In this work , we present , transfer , and adapt adversarial attacks that have been demonstrated to work well on images to time series data ( containing anomalies and intrusions ) . Additionally , we present an experimental study utilizing benchmark datasets from the aerospace and automobile industries and server machines , demonstrating that state-of-the-art anomaly and intrusion detection methods are vulnerable to adversarial attacks . We highlight specific real-world use cases to emphasize the critical nature of such attacks in real-world scenarios . Our findings indicate that deep networks for time series data , like their computer vision counterparts , are vulnerable to adversarial attacks .
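The PGD variant used alongside FGSM above iterates smaller signed-gradient steps and projects back into an epsilon-ball around the clean input; a hedged sketch under the L-infinity norm (the step size, iteration count, and toy gradient oracle below are our own choices, not the paper's attack configuration):

```python
import numpy as np

def pgd_attack(x, grad_loss, eps, alpha=0.01, steps=40):
    """Iterated signed-gradient ascent, projected back onto the
    L-infinity ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

# Toy linear predictor with squared error, as a stand-in gradient oracle.
w = np.array([1.0, -2.0])
grad = lambda x: (w @ x) * w

x_clean = np.array([0.5, 0.5])
x_adv = pgd_attack(x_clean, grad, eps=0.1, alpha=0.05, steps=10)
# ||x_adv - x_clean||_inf <= eps by construction, yet the loss has grown.
```

The clip against `x - eps` and `x + eps` is exactly the projection onto the L-infinity constraint described in the text.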
As a result , this paper emphasizes the importance of protecting against such attacks , particularly when anomaly and intrusion detection systems are used in sensitive industries such as aerospace and automobiles . Finally , we discuss some mechanisms for avoiding these attacks while strengthening the models ’ resistance to adversarial examples . Aim , Scope and Contribution . In this work , we do not propose any novel adversarial attack method . However , we demonstrate the threat of existing attacks such as FGSM and PGD on state-of-the-art anomaly and intrusion detection methods . In comparison to the computer vision domain , where adversarial attacks have been extensively studied and investigated , the literature on novelty detection , and particularly on anomaly detection , is noticeably devoid of such studies . The purpose of this paper is to bring attention to this issue . Additionally , we hope to encourage researchers to consider robustness to adversarial attacks when evaluating future detectors . The paper ’ s scope is limited to SOTA anomaly detectors and intrusion detection systems ( Note : as intrusion detection is a vast domain , we consider only one sub-domain , i.e. , intrusion detection in Controller Area Networks ) . Finally , to demonstrate that the current generation of detectors is unprepared against adversarial attacks , we demonstrate these attacks successfully on a deployed system in the aerospace industry . 2 RELATED WORK . 2.1 BACKGROUND AND NOTATIONS . When performing a supervised learning task , we define $D = \{ (s_i, y_i) \mid i = 1, \dots, N \}$ to represent a data set containing $N$ data samples . Each data sample is composed of an $m$-dimensional multivariate time series $s_i$ and a single target value $y_i$ for classification . We will observe this formulation in intrusion detection scenarios ( see Section 4.2 ) .
For unsupervised learning , each data sample is again composed of an $m$-dimensional multivariate time series $s_i$ ; however , $y_i$ is an $n$-dimensional multivariate time series obtained from an autoregressive model predicting the future . In most cases $n = m$ ; however , they can be different as well . Moreover , we define any deep learning method as $F(\cdot) : \mathbb{R}^N \to \hat{y}$ and the loss function ( e.g. , cross entropy or mean squared error ) as $\mathcal{L}_f(\cdot, \cdot)$ . Finally , generating an adversarial instance $s_i^{adv}$ can be described as an optimization problem given a trained deep learning model $F$ and an original input time series $s_i$ , as follows : $\min \| s_i - s_i^{adv} \|$ s.t. $F(s_i) = \hat{y}_i$ , $F(s_i^{adv}) = \hat{y}_i^{adv}$ and $\hat{y}_i \neq \hat{y}_i^{adv}$ . ( 1 ) 2.2 ADVERSARIAL ATTACKS . In 2014 , Szegedy et al . ( 2013 ) introduced adversarial examples against deep neural networks for image recognition tasks for the first time . Following these inspiring discoveries , an enormous amount of research has been devoted to generating , understanding , and preventing adversarial attacks on deep neural networks ( Eykholt et al. , 2018 ; Goodfellow et al. , 2014 ; Madry et al. , 2017 ) . Adversarial attacks can be broadly classified into two types : white-box and black-box attacks . As white-box attacks presume access to the model ’ s design and parameters , they can attack the model effectively and efficiently using gradient information . By contrast , black-box attacks do not require access to the output probabilities or even the label , making them more practical in real-world situations . However , black-box attacks frequently take hundreds , if not millions , of model queries to calculate a single adversarial case . The majority of adversarial attack techniques have been proposed for use in image recognition . For instance , a Fast Gradient Sign Method attack was developed by Goodfellow et al . ( 2014 ) as a substitute for expensive optimization techniques ( Szegedy et al. , 2013 ) . Madry et al .
( 2017 ) proposed Projected Gradient Descent ( PGD ) in response to the success of FGSM . PGD seeks to find the perturbation that maximizes a model ’ s loss on a particular input over a specified number of iterations while keeping the perturbation ’ s size below a specified value called epsilon ( $\epsilon$ ) . This constraint is typically expressed as the perturbation ’ s $L_2$ or $L_\infty$ norm . It is added to ensure that the content of the adversarial example is identical to that of the unperturbed sample — or even to ensure that the adversarial example is imperceptibly different from the unperturbed sample . Carlini-Wagner is another well-known attack ( Carlini & Wagner , 2017 ) . However , it is primarily intended for $L_2$ norm-based attacks , whereas this study focuses exclusively on $L_\infty$ norm-based attacks . Adversarial Attacks on Time Series . Limited efforts have been made to extend similar attacks to time series data . Surprisingly , the community has ignored adversarial attack approaches for time series anomaly and intrusion detection tasks . However , a few adversarial attack approaches have been proposed recently for the time series classification task , which are tangentially related to our work . For instance , in their work on adopting a soft K Nearest Neighbors ( KNN ) classifier with Dynamic Time Warping ( DTW ) , Oregi et al . ( 2018 ) demonstrated that adversarial examples could trick the proposed nearest neighbors classifier on a single simulated synthetic control dataset from the UCR archive ( Dau et al. , 2019 ) . Given that the KNN classifier is no longer considered the state-of-the-art classifier for time series data ( Bagnall et al. , 2017 ) , Fawaz et al . ( 2019 ) extend this work by examining the effect of adversarial attacks on the more recent and commonly used ResNet classifier ( He et al. , 2016 ) . Fawaz et al . ( 2019 ) , on the other hand , focused mainly on univariate datasets from the UCR repository . As a result , Harford et al .
( 2020 ) investigate the influence of adversarial attacks on multivariate time series classification using the multivariate datasets from the UEA repository ( Bagnall et al. , 2018 ) . However , Harford et al . ( 2020 ) only consider basic methods such as 1-Nearest Neighbor Dynamic Time Warping ( Seto et al. , 2015 ) ( 1-NN DTW ) and a Fully Convolutional Network ( FCN ) . Karim et al . ( 2020 ) and Harford et al . ( 2020 ) attacked models using Gradient Adversarial Transformation Networks ( GATNs ) . However , they examined just transfer attacks , a relatively weak sort of black-box attack . Only Siddiqui et al . ( 2019 ) demonstrated the effectiveness of gradient-based adversarial attacks on time series classification and regression networks . However , they considered a very simple baseline for the attack , containing only three convolutional , two max-pooling , and one dense layer . Our study differs from previous research in that we focus on time series anomaly and intrusion detection rather than the broader classification problem . More precisely , we explore autoregressive models that have been mostly overlooked in prior works . Additionally , rather than targeting generic deep neural networks , KNN with DTW , or ResNet , we investigate state-of-the-art anomaly or intrusion detection methods . For instance , when it comes to anomaly detection , we focus on the most contemporary and commonly used techniques , such as MSCRED ( Zhang et al. , 2019 ) , CLMPPCA ( Tariq et al. , 2019 ) , and MTAD-GAT ( Zhao et al. , 2020 ) . Similarly , for controller area network intrusion detection , we explore two well-known methods : CAN-ADF ( Tariq et al. , 2020a ) and CANTransfer ( Tariq et al. , 2020b ) . Section 4 will cover these methods in further depth . | The paper tackles the problem of adversarial attacks against time-series-based ML applications devoted to intrusion detection.
The paper is relatively simple: they use existing adversarial ML strategies (white-box attacks) to thwart a similar ML system. The main contribution is the fact that few efforts investigated adversarial attacks against time-series based ML methods, and – specifically – no paper considered “anomaly and intrusion detection” scenarios. Overall, the presentation of the paper is adequate. The quality of the English text is fair. Figures and Tables are appropriate. The topic addressed by the manuscript is relevant and within ICLR’s scope. The references are not appropriate. The contribution is not very significant.
STRENGTHS:
+ It is truly the first
+ Evaluation on multiple datasets
WEAKNESSES:
- Poor treatment of previous work
- Unimpressive results
- Poor threat model
- Inadequate problem definition | SP:4964917854c4f203cd0a464df3ab989d448b4236 |
Stochastic Projective Splitting: Solving Saddle-Point Problems with Multiple Regularizers | We present a new , stochastic variant of the projective splitting ( PS ) family of algorithms for monotone inclusion problems . It can solve min-max and noncooperative game formulations arising in applications such as robust ML without the convergence issues associated with gradient descent-ascent , the current de facto standard approach in ML applications . Our proposal is the first version of PS able to use stochastic gradient oracles . It can solve min-max games while handling multiple constraints and nonsmooth regularizers via projection and proximal operators . Unlike other stochastic splitting methods that can solve such problems , our method does not rely on a product-space reformulation of the original problem . We prove almost-sure convergence of the iterates to the solution and a convergence rate for the expected residual . By working with monotone inclusions rather than variational inequalities , our analysis avoids the drawbacks of measuring convergence through the restricted gap function . We close with numerical experiments on a distributionally robust sparse logistic regression problem . 1 INTRODUCTION . The most prominent application of optimization in ML is empirical risk minimization . However , inspired by the success of GANs ( Goodfellow et al. , 2014 ) , ML practitioners have developed more complicated min-max and adversarial optimization formulations ( Yu et al. , 2021 ; Kuhn et al. , 2019 ; Shafieezadeh-Abadeh et al. , 2015 ; Sinha et al. , 2018 ; Lin et al. , 2020 ; Namkoong & Duchi , 2016 ; Huang et al. , 2017 ; Wadsworth et al. , 2018 ; Zhang et al. , 2018 ; Edwards & Storkey , 2015 ; Celis & Keswani , 2019 ) . Solving these multi-player games leads to issues not seen when minimizing a single-player loss function .
The competitive nature of a game leads to rotational dynamics that can cause intuitive gradient-based methods to fail to converge ( Gidel et al. , 2019 ; Daskalakis et al. , 2018 ; Hsieh et al. , 2020 ) . A mathematical framework underlying both convex optimization and saddle-point problems is the monotone inclusion problem ; see Ryu & Boyd ( 2016 ) for an introduction . Methods developed for monotone inclusions will converge for convex-concave games , as they are explicitly designed to handle such problems ’ governing dynamics . In recent years , monotone inclusion methods and theory have started to receive attention in the ML community ( Diakonikolas , 2020 ; Liu et al. , 2021 ; Ryu et al. , 2020 ; Pathak & Wainwright , 2020 ) , with a focus on monotone variational inequalities , which form a special case of monotone inclusions ( Antonakopoulos et al. , 2019 ; Gidel et al. , 2019 ; Daskalakis et al. , 2018 ; Hsieh et al. , 2020 ; Mertikopoulos et al. , 2019 ) . The most prevalent methods for solving min-max games in ML are variants of gradient descent-ascent ( GDA ) . This method alternates between a gradient-descent step for the minimizing player and a gradient-ascent step for the maximizing player . Unfortunately , GDA requires additional assumptions to converge on convex-concave games , and it even fails for some simple 2D bilinear games ( Gidel et al. , 2019 , Prop . 1 ) . While there have been several approaches to modify either GDA ( Chavdarova et al. , 2021 ; Grnarova et al. , 2021 ; Balduzzi et al. , 2018 ) or the underlying game objective ( Mescheder et al. , 2018 ; Nagarajan & Kolter , 2017 ; Mescheder et al. , 2017 ) to ensure convergence , this paper instead develops a method for solving monotone inclusions that can naturally handle game dynamics . Our approach builds upon the recently proposed projective splitting ( PS ) method with forward steps ( Johnstone & Eckstein , 2020b ) .
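The bilinear failure mode is easy to reproduce; the following sketch (our own, in the spirit of the 2D example cited from Gidel et al. (2019)) runs simultaneous GDA on $f(x, y) = xy$, whose unique equilibrium is $(0, 0)$:

```python
# Simultaneous gradient descent-ascent on f(x, y) = x * y:
# x takes a descent step on f, y takes an ascent step.
# Each update rotates the iterate outward: the squared norm is
# multiplied by exactly (1 + eta^2) per step, so GDA spirals away
# from the equilibrium (0, 0) instead of converging.
eta = 0.1
x, y = 1.0, 1.0
for _ in range(100):
    x, y = x - eta * y, y + eta * x

print(x * x + y * y)  # grows as 2 * (1 + eta**2) ** 100, far above the initial 2.0
```

No stepsize cures this: the $(1 + \eta^2)$ growth factor exceeds 1 for every $\eta > 0$, which is the rotational dynamic the text refers to.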
PS is designed specifically for solving monotone inclusions , and thus does not fall prey to the convergence issues that plague GDA , at least for convex-concave games . PS is within the general class of projective splitting methods invented by Eckstein & Svaiter ( 2008 ) and developed further in Eckstein & Svaiter ( 2009 ) ; Alotaibi et al . ( 2014 ) ; Combettes & Eckstein ( 2018 ) ; Eckstein ( 2017 ) ; Johnstone & Eckstein ( 2019 ; 2021 ; 2020a ) . These methods work by creating a separating hyperplane between the current iterate and the solution and then moving closer to the solution by projecting the current iterate onto this hyperplane ( see Section 3 for an overview ) . Other than being able to natively handle game dynamics , the primary advantage of PS is that it fully splits problems involving an arbitrary number of regularizers and constraints . “ Full splitting ” means that the method can handle multiple regularizers and constraints through their respective individual proximal and projection operators , along with the smooth terms via gradients . What makes this useful is that many of the regularizers used in ML have proximal operators that are relatively easy to compute ; see for example Parikh & Boyd ( 2013 ) . Despite these advantages , the preexisting PS framework has a significant drawback : it requires deterministic gradient oracles . This feature makes it impractical for application to large datasets for which stochastic oracles may be the only feasible option . Contributions The primary contribution of this work is a new projective splitting algorithm that allows for a stochastic gradient oracle . We call the method stochastic projective splitting ( SPS ) . Our method “ fully splits ” the monotone inclusion problem Find $z \in \mathbb{R}^d$ s.t.
$0 \in \sum_{i=1}^{n} A_i(z) + B(z)$ , ( 1 ) where $B$ is monotone and $L$-Lipschitz and each $A_i$ is maximal monotone and typically set-valued , usually arising from a constraint or a nonsmooth regularizer in the underlying optimization problem or game ; see for example Ryu & Boyd ( 2016 ) for definitions . For some example ML applications of ( 1 ) , see Section 2 and Appendix A . Here , an algorithm that “ fully splits ” ( 1 ) means one whose computational steps each involve only the individual operators $A_1, \dots, A_n, B$ . Ours is the first method that can accomplish full splitting without a product-space reformulation that recasts ( 1 ) as a two-operator problem on a higher-dimensional space , a tactic whose disadvantages are discussed in Appendix F.7 . Our method interrogates the Lipschitz operator $B$ through a stochastic oracle . Previous methods splitting ( 1 ) have either required a deterministic oracle for $B$ , or have made far more restrictive assumptions on the noise or the operators ( Briceño-Arias & Combettes , 2011 ; Combettes & Pesquet , 2012 ; Malitsky & Tam , 2020 ; Bot et al. , 2019 ; Van Dung & Vu , 2021 ) than we will require below . However , the stochastic methods of Alacaoglu et al . ( 2021 ) and Böhm et al . ( 2020 ) , when combined with a product-space reformulation , can solve ( 1 ) when all the $A_i$ are subdifferentials of convex functions ; see Section 6 . When moving away from a deterministic gradient oracle in projective splitting , a key difficulty is that the generated hyperplanes do not guarantee separation between the solution and the current point . We solve this issue by relaxing the projection : we only update each iterate in the direction of the noisy projection and scale its movement by a decreasing stepsize that allows for control of the stochastic error .
Using the framework of stochastic quasi-Fejér monotonicity ( Combettes & Pesquet , 2015 ) , we prove almost-sure convergence of the final iterate and do not require averaging of the iterates ( Theorem 1 , Section 5 ) . We also provide a non-asymptotic convergence rate for the approximation residual ( Theorem 2 , Section 5 ) . A special case of SPS is the recently-developed Double Stepsize Extragradient Method ( DSEG ) ( Hsieh et al. , 2020 ) . When $n = 0$ and therefore only $B$ is present in ( 1 ) , DSEG and SPS coincide . Thus , our method extends DSEG to allow for regularizers and constraints . Our analysis also provides a new interpretation for DSEG as a special case of projective splitting . Our non-asymptotic convergence rate for SPS also applies to DSEG under no additional assumptions . By contrast , the original convergence rate analysis for DSEG requires either strong monotonicity or an error bound . We close with numerical experiments on a distributionally robust sparse logistic regression problem . This is a nonsmooth convex-concave min-max problem which can be converted to ( 1 ) with $n = 2$ set-valued operators . On this problem class , SPS compares well to possible alternative splitting methods . Non-monotone problems The work of Hsieh et al . ( 2020 ) included a local convergence analysis for DSEG applied to locally monotone problems . For min-max problems , if the objective is locally convex-concave at a solution and DSEG is initialized in close proximity , then for small enough stepsizes it converges to the solution with high probability . It is possible to extend this result to SPS , along with our convergence rate analysis . This result is beyond the scope of this work , but Appendix J provides a proof sketch . 2 BACKGROUND ON MONOTONE INCLUSIONS . Since they are so important to SPS , this section provides some background material regarding monotone inclusions , along with their connections to convex optimization , games , and ML .
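For contrast with GDA, the extragradient template that DSEG (and hence SPS with $n = 0$) builds on does converge on the same bilinear game. A deterministic, single-stepsize sketch (our simplification: no stochastic oracle and none of DSEG's two distinct stepsizes):

```python
# Extragradient on the bilinear game f(x, y) = x * y, i.e. on the
# monotone operator B(x, y) = (y, -x): take an exploratory half-step,
# then update using the operator evaluated at the half-step point.
# The squared norm contracts by (1 - eta^2 + eta^4) per iteration for eta < 1.
eta = 0.1

def B(x, y):
    return y, -x

x, y = 1.0, 1.0
for _ in range(200):
    gx, gy = B(x, y)
    xh, yh = x - eta * gx, y - eta * gy  # exploratory half-step
    hx, hy = B(xh, yh)
    x, y = x - eta * hx, y - eta * hy    # update with the half-step gradient

print(x * x + y * y)  # shrinks toward 0, unlike GDA on the same game
```

The extra operator evaluation at the half-step point is what cancels the outward rotation that makes plain GDA diverge.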
Appendix G discusses their connections to variational inequalities. For a more thorough treatment, we refer to Bauschke & Combettes (2017). See Appendix A for a longer discussion of the applications of monotone inclusions to ML along with several examples. Fundamentals Let f : R^d → R ∪ {∞} be closed, convex, and proper (CCP). Recall that its subdifferential ∂f is given by ∂f(x) := {g : f(y) ≥ f(x) + g^T(y − x) for all y}. The map ∂f has the property u ∈ ∂f(x), v ∈ ∂f(y) =⇒ (u − v)^T(x − y) ≥ 0, and any point-to-set map having this property is called a monotone operator. A monotone operator T is called maximal if no additional points can be included in the image T(x) of any x ∈ R^d without violating the above property (Bauschke & Combettes, 2017, Def. 20.20). Subgradient maps of CCP functions are maximal (Bauschke & Combettes, 2017, Thm. 20.25). A minimizer of f is any x* such that 0 ∈ ∂f(x*). This is perhaps the simplest example of a monotone inclusion, the problem of finding x such that 0 ∈ T(x), where T is a monotone operator. If f is smooth, then ∂f(x) = {∇f(x)} for all x, and the monotone inclusion 0 ∈ ∂f(x) is equivalent to the first-order optimality condition 0 = ∇f(x). Under certain regularity conditions (Bauschke & Combettes, 2017, Cor. 16.5), minimizing a sum of CCP functions f_1, ..., f_n is equivalent to solving the monotone inclusion formed from the sum of their subdifferentials: x* ∈ argmin_{x ∈ R^d} ∑_{i=1}^{n} f_i(x) ⇐⇒ 0 ∈ ∑_{i=1}^{n} ∂f_i(x*). (2) As throughout this paper for all set addition operations, the summation on the right-hand side of (2) is the Minkowski sum ∑_{i=1}^{n} S_i = {∑_{i=1}^{n} s_i | s_i ∈ S_i for all i ∈ {1, ..., n}}. A constraint x ∈ C for some convex set C may be imposed by setting one of the f_i to be the indicator function ι_C, defined by ι_C(x) = 0 for x ∈ C and ι_C(x) = +∞ for x ∉ C.
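The defining monotonicity inequality is easy to check numerically. The following sketch is an illustration we add here (not code from the paper): it samples random point pairs and verifies that a subgradient map of the convex function f(x) = ‖x‖₁ satisfies (u − v)^T(x − y) ≥ 0.

```python
import numpy as np

def subgrad_l1(x):
    # One valid element of the subdifferential of f(x) = ||x||_1:
    # sign(x) coordinatewise, choosing the subgradient 0 at x = 0.
    return np.sign(x)

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=4), rng.normal(size=4)
    u, v = subgrad_l1(x), subgrad_l1(y)
    # Monotonicity of the subgradient map: (u - v)^T (x - y) >= 0.
    assert (u - v) @ (x - y) >= 0.0
print("monotonicity verified on 1000 random pairs")
```

A finite random check of course cannot certify maximality or enumerate the full subdifferential; it only illustrates the defining inequality.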
Indicator functions of closed convex sets are CCP ( Bauschke & Combettes , 2017 , Ex . 1.25 ) , and the subgradient map of ιC is also referred to as the normal cone map NC of C ( Bauschke & Combettes , 2017 , Def . 6.37 ) . Multiple constraints may be imposed by including multiple indicator functions in ( 2 ) . ML applications The form ( 2 ) can be used to model ML problems with multiple constraints and/or nonsmooth regularizers , including sparse and overlapping group lasso ( Jacob et al. , 2009 ) , sparse and low-rank matrix estimation problems ( Richard et al. , 2012 ) , and rare feature selection ( Yan & Bien , 2020 ) ; see Pedregosa & Gidel ( 2018 ) for an overview . Games Consider a two-player noncooperative game in which each player tries to selfishly minimize its own loss , with each loss depending on the actions of both players . Typically , the goal is to find a Nash equilibrium , in which neither player can improve its loss by changing strategy : x∗ ∈ arg min x∈Θ F ( x , y∗ ) and y∗ ∈ arg min y∈Ω G ( x∗ , y ) . ( 3 ) Assuming that the admissible strategy sets Θ ⊆ Rdx and Ω ⊆ Rdy are closed and convex and that F and G are differentiable , then writing the first-order necessary conditions for each optimization problem in ( 3 ) yields 0 ∈ [ ∇xF ( x∗ , y∗ ) ∇yG ( x∗ , y∗ ) ] + ( NΘ ( x ∗ ) ×NΩ ( y∗ ) ) . ( 4 ) IfG = −F , then ( 3 ) is a min-max game . If F is also convex in x and concave in y , thenB : ( x , y ) 7→ ( ∇xF ( x , y ) , −∇yF ( x , y ) ) > is monotone1 on Rdx+dy ( Rockafellar , 1970 ) . In many applications , B is also Lipschitz continuous . In this situation , ( 4 ) is a monotone inclusion involving two operators B and NΘ×Ω , with B being Lipschitz . Using the simultaneous version of GDA on ( 3 ) is equivalent to applying the forward-backward method ( FB ) ( Bauschke & Combettes , 2017 , Thm . 26.14 ) to ( 4 ) . However , convergence of FB requires that the operator B be cocoercive ( Bauschke & Combettes , 2017 , Def . 
4.10 ) , and not merely Lipschitz ( Bauschke & Combettes , 2017 , Thm . 26.14 ) . Thus , simultaneous GDA fails to converge for ( 3 ) without additional assumptions ; see Gidel et al . ( 2019 , Prop . 1 ) for a simple counterexample . Regularizers and further constraints may be imposed by adding more operators to ( 4 ) . For example , if one wished to apply a ( nonsmooth ) convex regularizer r : Rdx → R ∪ { +∞ } to the x variables and a similar regularizer d : Rdy → R ∪ { +∞ } to the y variables , one would add the operator A2 : ( x , y ) 7→ ∂r ( x ) × ∂d ( y ) to the right-hand side of ( 4 ) . ML applications of games Distributionally robust supervised learning ( DRSL ) is an emerging framework for improving the stability and reliability of ML models in the face of distributional shifts ( Yu et al. , 2021 ; Kuhn et al. , 2019 ; Shafieezadeh-Abadeh et al. , 2015 ; Sinha et al. , 2018 ; Lin et al. , 2020 ; Namkoong & Duchi , 2016 ) . Common approaches to DRSL formulate the problem as a min-max game between a learner selecting the model parameters and an adversary selecting a worst-case distribution subject to some ambiguity set around the observed empirical distribution . This min-max problem is often further reduced to either a finite-dimensional saddlepoint problem or a convex optimization problem . DRSL is a source of games with multiple constraints/regularizers . One such formulation , based on Yu et al . ( 2021 ) , is discussed in the experiments below . The work in Namkoong & Duchi ( 2016 ) uses an ambiguity set based on f -divergences , while Sinha et al . ( 2018 ) introduce a Lagrangian relaxation of the Wasserstein ball . When applied to models utilizing multiple regularizers ( Jacob et al. , 2009 ; Richard et al. , 2012 ; Yan & Bien , 2020 ) , both of these approaches lead to min-max problems with multiple regularizers . Other applications of games in ML , although typically nonconvex , include generative adversarial networks ( GANs ) ( Goodfellow et al. 
, 2014; Arjovsky et al., 2017; Loizou et al., 2020; 2021; Mishchenko et al., 2020), fair classification (Wadsworth et al., 2018; Zhang et al., 2018; Edwards & Storkey, 2015; Celis & Keswani, 2019), and adversarial privacy (Huang et al., 2017). Resolvents, proximal operators, and projections A fundamental computational primitive for solving monotone inclusions is the resolvent. The resolvent of a monotone operator A is defined to be J_A := (I + A)^{−1}, where I is the identity operator and the inverse of any operator T is simply T^{−1} : x ↦ {y : T(y) ∋ x}. If A is maximal monotone, then for any ρ > 0, J_{ρA} is single-valued, nonexpansive, and has domain equal to R^d (Bauschke & Combettes, 2017, Thm. 21.1 and Prop. 23.8). Resolvents generalize proximal operators of convex functions: the proximal operator of a CCP function f is prox_{ρf}(t) := argmin_{x ∈ R^d} {ρf(x) + (1/2)‖x − t‖²}. It is easily proved that prox_{ρf} = J_{ρ∂f}. Like proximal operators, resolvents generalize projection onto convex sets: if f = ι_C, then J_{ρN_C} = prox_{ρf} = proj_C for any ρ > 0. In many ML applications, proximal operators, and hence resolvents, are relatively straightforward to compute. For examples, see Parikh & Boyd (2013, Sec. 6). Operator splitting methods Operator splitting methods attempt to solve monotone inclusions such as (1) by a sequence of operations that each involve only one of the operators A_1, ..., A_n, B. Such methods are often presented in the context of convex optimization problems like (2), but typically apply more generally to monotone inclusions such as (1). In the specific context of (1), each iteration of such a method ideally handles each A_i via its resolvent and the Lipschitz operator B by explicit (not stochastic) evaluation.
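The identity prox_{ρf} = J_{ρ∂f}, and its specialization to projections, can be sanity-checked numerically. The hedged sketch below is our own illustration (the function names are ours): the closed-form proximal operator of f = |·|, i.e. soft-thresholding, is compared against a brute-force grid evaluation of the defining argmin, which by the identity above also evaluates the resolvent J_{ρ∂f}.

```python
import numpy as np

def prox_abs(t, rho):
    # Closed-form prox of f(x) = |x|: soft-thresholding by rho.
    return np.sign(t) * max(abs(t) - rho, 0.0)

def prox_abs_bruteforce(t, rho):
    # Directly minimize rho*|x| + 0.5*(x - t)^2 on a fine grid; this also
    # equals the resolvent J_{rho*∂f}(t), since prox_{rho f} = J_{rho ∂f}.
    xs = np.linspace(-10.0, 10.0, 200001)
    return xs[np.argmin(rho * np.abs(xs) + 0.5 * (xs - t) ** 2)]

for t in (-3.0, -0.2, 0.0, 0.7, 2.5):
    assert abs(prox_abs(t, 1.0) - prox_abs_bruteforce(t, 1.0)) < 1e-3

def proj_interval(t, lo, hi):
    # With f the indicator of C = [lo, hi], the prox/resolvent is the
    # Euclidean projection onto C.
    return min(max(t, lo), hi)

print(prox_abs(3.0, 1.0), proj_interval(5.0, 0.0, 1.0))  # prints: 2.0 1.0
```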
Handling each A_i via its resolvent and evaluating B explicitly is a feasible approach if the original problem can be decomposed in such a way that the resolvents of each A_i are relatively inexpensive to compute, and full evaluations of B are possible. Although not discussed here, more general formulations in which matrices couple the arguments of the operators can broaden the applicability of operator splitting methods. (Footnote 1: Sufficient conditions for the monotonicity of (4) in the case where G ≠ −F are discussed in, e.g., Scutari et al. (2014); Briceño-Arias & Combettes (2013).) | The paper focuses on the stochastic variant of the projective splitting (PS) algorithm. With a specific focus on monotone inclusion problems, the authors propose a novel separable algorithm characterized by the ability to handle multiple constraints and non-smooth regularizers. Compared with similar approaches on variational inequalities, which are a special case of monotone inclusions, this paper uses a more direct error metric than the restricted gap function. Moreover, although with a slower convergence rate, this paper is the first to discuss the general discontinuous monotone inclusion case. | SP:e417981b6a5065733cf298169044570548654483
Omni-Scale CNNs: a simple and effective kernel size configuration for time series classification | 1 INTRODUCTION . One of the most challenging problems for Time Series Classification ( TSC ) tasks is how to tell models in what time scales 1 to extract features . Time series ( TS ) data is a series of data points ordered by time or other meaningful sequences such as frequency . Due to the variety of information sources ( e.g. , medical sensors , economic indicators , and logs ) and record settings ( e.g. , sampling rate , record length , and bandwidth ) , TS data is naturally composed of various types of signals on various time scales ( Hills et al. , 2014 ; Schäfer , 2015 ; Dau et al. , 2018 ) . Thus , in what time scales can a model “ see ” from the TS input data has been a key for the performance of TS classification . Traditional machine learning methods have taken huge efforts to capture important time scales , and the computational resource consumption increase exponentially with the length of TS increase . For example , for shapelet methods ( Hills et al. , 2014 ; Lines et al. , 2012 ) , whose discriminatory feature is obtained via finding sub-sequences from TS that can be representative of class membership , the time scale capture work is finding the proper sub-sequences length . To obtain the proper length , even for a dataset with length 512 , ( Hills et al. , 2014 ) has to try 71 different sub-sequence lengths . For other methods , such as ( Berndt & Clifford , 1994 ; Schäfer , 2015 ; Lucas et al. , 2019 ) , despite the time scale capture might be called by different names such as finding warping size or window length . They all need searching works to identify those important time scales . More recent deep learning based methods also showed that they had to pay a lot of attention to this time scale problem . MCNN ( Cui et al. , 2016 ) searches the kernel size to find the best RF of a 1D-CNN for every dataset . Tapnet ( Zhang et al. 
, 2020) additionally considers the dilation steps. Chen & Shi (2021) also take the number of layers into consideration. (Footnote 1: It has different names for different methods. Generally, it refers to the length of the time-series subsequence used for feature extraction.) These are all important factors for the RF of CNNs and for TSC performance. Although a number of researchers have searched for the best RF of 1D-CNNs for TSC, there is still no agreed answer to 1) what size of RF is best, and 2) how many different RFs should be used. Models need to be equipped with different sizes and different numbers of RFs for a specific dataset. Using the same setup for every dataset can lead to a significant performance drop on some datasets. For example, as shown by the statistics on the University of California Riverside (UCR) 85 "bake off" datasets in Figure 1a, the accuracy on most datasets can vary by more than 5% just by changing the RF sizes of the model while keeping the rest of the configuration the same. As also shown in Figure 1b, no RF can consistently perform the best over different datasets. To avoid this complicated and resource-consuming search, we propose the Omni-Scale block (OS-block), where the kernel choices for 1D-CNNs are automatically set through a simple and universal rule that can cover the RF of all scales. The rule is inspired by Goldbach's conjecture, where any positive even number can be written as the sum of two prime numbers. Therefore, the OS-block uses a set of prime numbers as the kernel sizes, except for the last layer, whose kernel sizes are 1 and 2. In this way, a 1D-CNN with these kernel sizes can cover the RF of all scales by transforming the TS through different combinations of these prime-size kernels. What's more, the OS-block is easy to apply to various TS datasets by selecting the maximum prime number according to the length of the TS.
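The kernel-size rule can be checked directly. For a stack of stride-1 convolutions with no pooling, the RF of one path through kernels of sizes k1, ..., kL is k1 + ... + kL − (L − 1), a standard receptive-field computation. The sketch below (our illustration, not the authors' code) enumerates all three-layer paths through prime kernel sets and checks which RF sizes are covered:

```python
from itertools import product

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def covered_rf_sizes(pk):
    # Three stride-1 layers: the first two use {1, 2, 3, 5, ..., pk}, the
    # last uses {1, 2}. RF of one path = k1 + k2 + k3 - 2.
    layer12 = [1, 2] + primes_up_to(pk)
    return {a + b + c - 2 for a, b, c in product(layer12, layer12, [1, 2])}

S = covered_rf_sizes(29)
assert set(range(1, 41)).issubset(S)    # every RF size from 1 to 40 is reachable

# Coverage eventually breaks as sums approach 2*pk, so pk must be chosen
# with some margin relative to the series length l.
m = 1
while m in S:
    m += 1
print("contiguous coverage up to RF size", m - 1)
```

With pk = 29 the contiguous coverage reaches RF size 42, consistent with choosing pk as the smallest prime whose induced set covers 1 through l.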
In experiments, we show consistent state-of-the-art performance on four TSC benchmarks. These benchmarks contain datasets from different domains, i.e., healthcare, human activity recognition, speech recognition, and spectrum analysis. Despite the dynamic patterns of these datasets, 1D-CNNs with our OS-block robustly outperform previous baselines with unified training hyperparameters for all datasets, such as learning rate, batch size, and number of iterations. We also conducted a comprehensive study to show that our OS-block, a solution requiring no time-scale search, always matches the performance of the best RF size on different datasets. 2 MOTIVATIONS. Two phenomena of 1D-CNNs inspire the design of the OS-block. In this section, we introduce the two phenomena with examples in Figure 2; more discussion can be found in Section 4.6. Firstly, we found that, although the RF size is important, 1D-CNNs are not sensitive to the specific kernel-size configuration used to compose that RF size. An example is given in the right image of Figure 2. Secondly, the performance of 1D-CNNs is mainly determined by the best RF size they have. To be specific, suppose we have multiple single-RF-size models which are of similar model size and layer number, but each of which has a unique RF size. Let us denote the set of those RF sizes as S. When testing those models on a dataset, we obtain a set of accuracy results A. Then, suppose we have a multi-kernel model which has multiple RF sizes and whose set of sizes is also S. Then the accuracy of the multiple-RF-sizes model will be similar to the highest value of A. (Footnote 2: A detailed discussion of how to calculate RF sizes for 1D-CNNs with multiple kernels in each layer can be found in Section 3.2.) An example is given in the left image of Figure 2.
Specifically, when testing single-RF-size models on the Google Speech Commands dataset, a model's performance is positively correlated with its RF size. For example, the light blue line, whose set of RF sizes is {99}, outperforms the light green line {39} and the light red line {9}. For those multiple-RF-sizes models which have more than one element in their set of RF sizes, performance is determined by the best (also the largest, because of the positive correlation) RF size they have. Having additional worse (smaller) RF sizes does not have much influence on performance. The second phenomenon means that, instead of searching for the best time scales, if the model covers all RF sizes, its performance will be similar to that of a model with the best RF size. However, there are many designs that can cover all RF sizes. Which one should be preferred? Based on the first phenomenon, from the performance perspective, we could choose any design we want. However, as we will show in Section 3.3, those candidate designs do not share the same characteristics, such as model size or expandability to long TS data. Therefore, the design of the OS-block that we propose aims at covering all RF sizes in an efficient manner. 3 METHOD. The section is organized as follows: Firstly, we give the problem definition in Section 3.1. Then, we explain how to construct the Omni-Scale block (OS-block), which covers all receptive field sizes, in Section 3.2. Section 3.3 explains why the OS-block can cover RFs of all sizes in an efficient manner. In Section 3.4, we introduce how to apply the OS-block to TSC tasks. 3.1 PROBLEM DEFINITION. TS data is denoted as X = [x1, x2, ..., xm], where m is the number of variates. For univariate TS data, m = 1, and for m > 1, the TS is multivariate. Each variate is a vector of length l.
A TS dataset with n data-label pairs can be denoted as D = {(X1, y1), (X2, y2), ..., (Xn, yn)}, where (X*, y*) denotes that the TS data X* belongs to the class y*. The task of TSC is to predict the class label y* when given a TS X*. 3.2 ARCHITECTURE OF OS-BLOCK. The architecture of the OS-block is shown in Figure 3. It is a three-layer multi-kernel structure, and each kernel performs a same-padding convolution with the input. For the kernel-size configuration, we use P(i) to denote the kernel-size set of the i-th layer: P(i) = {1, 2, 3, 5, ..., pk} for i ∈ {1, 2}, and P(3) = {1, 2}, (1) where {1, 2, 3, 5, 7, ..., pk} is the set consisting of 1 and the prime numbers up to pk. The value of pk is the smallest prime number that can cover all sizes of RF in a range. Here, the range we refer to is all meaningful scales. For example, since the TS length is l, we do not need to cover RFs that are larger than l or smaller than 1. Therefore, pk is the smallest prime number that can cover RF sizes from 1 to l. If we have prior knowledge, such as knowing that there are cycles in the TS, or knowing the length range of the hidden representative pattern, we can change the RF size range of the OS-block by simply changing the prime number list. An example is given in the left image of Figure 3, which uses the prime number list in the blue block to cover the RF size range from 10 to 26. RF sizes of the OS-block: The RF is defined as the size of the region in the input that produces the feature. Because each layer of the OS-block has more than one convolution kernel, there are several different paths from the input signal to the final output feature (Araujo et al., 2019; Luo et al., 2016), and each path has an RF size.
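A single multi-kernel layer of this kind can be sketched in a few lines of NumPy. This is our own illustration with random placeholder weights and hypothetical helper names; the actual OS-block stacks three such layers with learned filters and multi-channel inputs.

```python
import numpy as np

def same_pad_conv1d(x, w):
    # 'Same'-padding 1D cross-correlation: output has the same length as x.
    k = len(w)
    xp = np.pad(x, ((k - 1) // 2, k // 2))
    return np.array([xp[i:i + k] @ w for i in range(len(x))])

def os_layer(x, kernel_sizes, rng):
    # One OS-block layer: one parallel convolution per kernel size, with the
    # results stacked as output channels.
    return np.stack([same_pad_conv1d(x, rng.normal(size=k)) for k in kernel_sizes])

rng = np.random.default_rng(0)
x = rng.normal(size=32)            # a univariate series of length l = 32
out = os_layer(x, [1, 2, 3, 5, 7], rng)
print(out.shape)                   # (5, 32): one same-length channel per kernel
```

Because every branch uses same padding and stride 1, each kernel size contributes a channel of the original length, so branches of different scales can be stacked directly.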
For the 3-layer OS-block, which has no pooling layer and whose stride size is 1, the set of RF sizes S is the set of RF sizes over all paths, and it can be described as: S = {p(1) + p(2) + p(3) − 2 | p(i) ∈ P(i), i ∈ {1, 2, 3}}. (2) Because the P(i) are prime-number lists for i ∈ {1, 2}, the set {p(1) + p(2) | p(i) ∈ P(i), i ∈ {1, 2}} is the set of all even numbers E. (Footnote 3: This is according to Goldbach's conjecture. Specifically, the conjecture states that any positive even number can be composed of two prime numbers. For example, 8 = 5 + 3, 12 = 7 + 5, and more examples can be found in the left image of Figure 3. Although the conjecture is yet unproven in theory, its correctness has been validated up to 4 × 10^14 (Richstein, 2001), which is larger than the length of all available TS data.) Thus, we have S = {e + p(3) − 2 | p(3) ∈ P(3), e ∈ E}. (3) With Equation 3 and Equation 1, we have S = {e | e ∈ E} ∪ {e − 1 | e ∈ E} ≡ N+, (4) where N+ is the set of all integers in the range. Specifically, S ≡ N+ because every integer must be either odd or even, while E is the even-number set and {e − 1 | e ∈ E} is the odd-number set. Therefore, with a proper selection of pk, we can cover any integer RF size in a range. It should be noticed that, although there might be many options to cover all RF sizes, we use Goldbach's conjecture to make sure that we can cover all scales. [Figure 3 panels: OS-block with residual connection; OS-block on each variate for multivariate time series data; OS-block with ensemble learning.] | The paper is on the receptive field of CNNs for 1D time series classification. It proposes an elegant decomposition of receptive fields based on the Goldbach conjecture that any number can be represented by a sum of primes.
The paper thus uses prime-number kernel sizes in the CNN layers; by stacking multiple layers, these kernels' RFs sum, allowing the proposed network to cover all RF sizes. Experiments on several datasets show that the method gives good results. | SP:cd1397f08a6712e350b8a41ac9a6682e6fab4baf
Adversarial Distributions Against Out-of-Distribution Detectors | Out-of-distribution ( OOD ) detection is the task of determining whether an input lies outside the training data distribution . As an outlier may deviate from the training distribution in unexpected ways , an ideal OOD detector should be able to detect all types of outliers . However , current evaluation protocols test a detector over OOD datasets that cover only a small fraction of all possible outliers , leading to overly optimistic views of OOD detector performance . In this paper , we propose a novel evaluation framework for OOD detection that tests a detector over a larger , unexplored space of outliers . In our framework , a detector is evaluated with samples from its adversarial distribution , which generates diverse outlier samples that are likely to be misclassified as in-distribution by the detector . Using adversarial distributions , we investigate OOD detectors with reported near-perfect performance on standard benchmarks like CIFAR-10 vs SVHN . Our methods discover a wide range of samples that are obviously outlier but recognized as indistribution by the detectors , indicating that current state-of-the-art detectors are not as perfect as they seem on existing benchmarks . 1 INTRODUCTION . Identifying whether an input datum lies outside the training data distribution is one of the canonical problems in machine learning . Over its long history , the problem has been called by multiple names , including novelty detection ( Markou & Singh , 2003 ) , outlier detection ( Hawkins , 1980 ) , oneclass classification ( Japkowicz et al. , 1995 ) , and more recently , out-of-distribution ( OOD ) detection ( Hendrycks & Gimpel , 2016 ) . Investigation of the problem has resulted in a number of real-world applications , for example , medical diagnosis ( Li et al. , 2019 ) and inspection of defective parts and products ( Bergmann et al. , 2019 ) . 
The relevance of OOD detection is growing beyond these applications , as an OOD detector is considered an essential component of a trustworthy machine learning system . For example , without a reliable OOD detector , an image classifier trained to classify cats and dogs may incorrectly classify a human as belonging to one of these two classes ( Hendrycks et al. , 2019b ) . In order to advance the development of reliable OOD detection algorithms , a more comprehensive evaluation protocol is needed . Current evaluation protocols adopted by the community provide a distorted view of a detector ’ s performance for two reasons . First , OOD detectors are tested over a small fraction of possible outliers . Detectors are typically evaluated using test OOD datasets chosen by a researcher . Since the test OOD datasets do not cover the entire space of outliers , there may exist untested outliers that the detector fails to classify correctly , even though the detector perfectly detects the chosen test OOD points as shown in Figure 1 . To assess a detector ’ s performance in a more comprehensive and systematic way , a method is needed to test a detector over a larger , unexplored space beyond what is covered by the test OOD datasets . Second , the current evaluation protocol neglects the worst-case behavior of an OOD detector . The average performance metric is often not sufficient to build trust on a detector , because in safetycritical applications even a single mistake can result in fatal consequences . A detector should be tested adversarially , through an active search for its worst-case failure mode , i.e. , an outlier that is classified with maximal confidence as an inlier . Such failure cases may reveal weaknesses of the tested detector , and provide informative clues for building a better detection algorithm . In this paper , we propose a novel evaluation protocol of OOD detectors that addresses the above-mentioned limitations of current evaluation methods . 
We first formulate the notion of adversarial search against an OOD detector , a search problem of finding an outlier that a detector classifies as an inlier with the greatest confidence . A detector may have more than one significant failure mode , and it is more informative to find a set of diverse failure cases instead of the most critical one . To that end , we propose the adversarial distribution against an OOD detector , which generates outlier samples that are likely to be misclassified by the given detector . Measuring OOD detection performance against samples from a detector ’ s adversarial distribution gives an accurate and finer-grained assessment on its performance . To ensure that samples from an adversarial distribution are indeed outliers , the distribution needs to be supported on a zero-inlier space , a set without any overlap to the inlier distribution . Meanwhile , the zero-inlier space should be large so that we may observe meaningful failure mode of OOD detectors within the space . However , finding such a space is generally challenging , as the true boundary between inliers and outliers is unknown . We circumvent this challenge by building a generative model over known outliers . Our construction of zero-inlier spaces contain diverse samples . We implement 11 previously proposed OOD detectors and investigate their behavior using their adversarial distributions . Among the tested detectors , 8 detectors report near-perfect OOD detection performance on a popular benchmark , CIFAR-10 ( in ) vs SVHN ( out ) , effectively being indistinguishable with respect to their performance . Our investigation reveals that the 8 detectors in fact have diverging degrees of detection quality outside the SVHN test set . Our methods also lead to several interesting insights that suggest techniques for improving OOD detection . 
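At its simplest, the worst-case search described above can be sketched as a black-box minimization of the detector score over a pool of known outliers. This is a toy illustration we add here; the paper's actual framework samples from a learned adversarial distribution rather than exhaustively scoring a fixed pool.

```python
import numpy as np

def adversarial_search(detector, outliers):
    # Black-box worst-case search: return the outlier that the detector
    # classifies as in-distribution with the greatest confidence, i.e. the
    # candidate with the lowest OOD score.
    scores = np.array([detector(x) for x in outliers])
    i = int(np.argmin(scores))
    return outliers[i], float(scores[i])

# Toy setup: inliers concentrate near the origin, the detector scores by
# distance, and every candidate lies on a ring outside the inlier support.
rng = np.random.default_rng(0)
detector = lambda x: float(np.linalg.norm(x))      # larger = more OOD
theta = rng.uniform(0.0, 2.0 * np.pi, size=500)
r = rng.uniform(2.0, 6.0, size=500)
outliers = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
x_worst, s_worst = adversarial_search(detector, outliers)
assert all(detector(x) >= s_worst for x in outliers)   # true pool minimum found
```

Because only function values of the detector are used, the sketch respects the black-box access model assumed later in the paper.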
Our main contributions can be summarized as follows : • We propose the adversarial search and adversarial distributions , which can be used to evaluate OOD detection algorithms beyond a pre-defined test OOD dataset ; • We provide practical techniques to define the space of outliers that contains samples outside the test OOD dataset ; • By examining the state-of-the-art OOD detectors , we show that OOD detectors with seemingly equivalent performance differ significantly in their failure modes ; Related Work Testing the worst-case performance of an algorithm is highly related to evaluating adversarial robustness of the detector . Detection of adversarially perturbed outliers is investigated in previous literature where perturbation is assumed to be restricted in a small norm-ball Hein et al . ( 2019 ) ; Meinke & Hein ( 2020 ) ; Bitterwolf et al . ( 2020 ) . The idea of using an autoencoder during an adversarial attack is investigated in Tu et al . ( 2019 ) . The idea of generating outliers are investigated in the context of improving OOD detection Chen et al . ( 2020 ) or adversarial attack Song et al . ( 2018 ) . In Section 2 , we provide essential preliminaries . Section 3 formulates adversarial search and adversarial distributions , and Section 4 introduces techniques to construct a zero-inlier space . Our main experimental results are presented in Section 5 , with deeper discussions provided in Section 6 . Section 7 concludes the paper . 2 BACKGROUND : OUT-OF-DISTRIBUTION DETECTION . 2.1 DEFINITION . We consider a probability distribution of interest Pin , samples from which we consider as inliers . Each sample is represented as a D-dimensional real-valued vector . The probability density function of Pin is denoted as pin ( x ) . We write the support of Pin as Sin = { x|pin ( x ) > 0 } ⊂ X ⊂ RD , where X is the data space , the set of all possible values for data , which we assume to be compact . Then , we define OOD-ness as follows : Definition 1 . 
A vector x is out-of-distribution (OOD), or an outlier, with respect to Pin if x does not belong to the support of Pin, i.e., x ∉ Sin. Conversely, x is in-distribution when x ∈ Sin. A distribution Q having support SQ is OOD with respect to Pin when SQ ∩ Sin = ∅. Another popular definition of OOD-ness characterizes a vector x as OOD when the vector belongs to a density sub-level set, x ∈ {x | pin(x) ≤ η} (Steinwart et al., 2005). However, the density sub-level set does not provide a consistent characterization of OOD-ness. A vector classified as OOD in one coordinate system may not be classified as OOD in a different one, because a probability density function can be arbitrarily distorted via an invertible coordinate transform, as pointed out in Lan & Dinh (2020). On the contrary, our definition of OOD using the density support provides an invariant characterization of outliers with respect to such transforms. An OOD detector f(x) : R^D → R is a function which outputs a larger value for an input more likely to be an outlier. A test vector x* is classified as OOD if f(x*) > ηf for a threshold ηf. A detector score refers to the function value f(x). We shall assume f(x) is bounded. In this paper, we consider a black-box setting, where any information other than the function value of f(x), such as its gradient, is not accessible. An OOD detector is normally trained using an in-distribution dataset Din, a set of iid samples from Pin. However, some OOD detectors utilize additional datasets other than Din. 2.2 EVALUATION OF OOD DETECTORS. The currently accepted evaluation protocol for OOD detectors relies on one or multiple test OOD datasets Dout, which contain a finite number of samples considered OOD with respect to Pin according to human prior knowledge. When Pin is a distribution of images, test OOD sets are often chosen from separately published image datasets whose contents differ from those of Pin.
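Given such a test OOD dataset, the detector's scores on Din versus Dout are compared as a binary classification; the common threshold-free summary, AUROC, equals the probability that a randomly drawn outlier receives a higher detector score than a randomly drawn inlier (ties counted half). A minimal sketch (our illustration, not the paper's code):

```python
import numpy as np

def auroc(scores_in, scores_out):
    # AUROC = P(score_out > score_in) + 0.5 * P(score_out == score_in),
    # computed here by comparing all inlier/outlier score pairs.
    s_in = np.asarray(scores_in, dtype=float)[:, None]
    s_out = np.asarray(scores_out, dtype=float)[None, :]
    return float(np.mean((s_out > s_in) + 0.5 * (s_out == s_in)))

assert auroc([0.1, 0.2, 0.3], [0.8, 0.9]) == 1.0   # perfect separation
assert auroc([0.8, 0.9], [0.1, 0.2]) == 0.0        # 'worse than chance'
rng = np.random.default_rng(0)
a = auroc(rng.normal(size=2000), rng.normal(size=2000))
assert 0.45 < a < 0.55                             # uninformative scores: ~0.5
```

The pairwise form is O(n·m) and serves only as a reference implementation; rank-based routines compute the same quantity in O((n+m) log(n+m)).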
For example , when an image dataset of animals is selected as in-distribution , a set of digit images can be used as a test OOD dataset . Given a test OOD dataset , an OOD detector f ( x ) performs the binary classification against indistribution dataset , and the quality of the classification is considered as an indicator for how good f ( x ) is as an OOD detector . The classification result is summarized using metrics such as the area under the receiver operating characteristic curve ( AUROC or AUC ) . AUC is a preferred metric in a number of literature , as it does not require the specification of the threshold ηf . AUC score of 1.0 indicates the perfect classification , and AUC of 0.5 implies the random guess . The research community has been focusing on a few representative in-distribution and OOD dataset pairs , such as CIFAR-10 ( in ) vs SVHN ( out ) and Fashion-MNIST ( in ) vs MNIST ( out ) . These dataset pairs become popular after the reports showing that OOD detectors built upon deep generative models , such as PixelCNN++ ( Salimans et al. , 2017 ) or Glow ( Kingma & Dhariwal , 2018 ) , fail to detect outliers ( Hendrycks et al. , 2019a ; Nalisnick et al. , 2019 ) . In fact , the generative-modelbased detectors score AUC lower than 0.5 , meaning that SVHN images are more strongly perceived as CIFAR-10 than the actual CIFAR-10 images by the detectors . This observation spurred intense research efforts , and now there are multiple OOD detectors achieving AUC scores higher than 0.9 or even higher than 0.99 on CIFAR-10 vs SVHN as listed in Section 5 . Given the near-perfect detection score , we question whether the detectors are indeed good OOD detectors beyond the tested examples . | The paper proposes a novel evaluation framework for out-of-distribution (OOD) detection under worst-case scenarios. 
While existing benchmarks use real samples from datasets outside the training distribution, the authors propose instead to learn an adversarial outlier distribution against OOD detectors using an autoencoder model, with an auxiliary binary classifier that filters out inlier samples. Empirical experiments on CIFAR-10 (inlier) and SVHN/CelebA (outlier) datasets show that standard OOD benchmarks tend to produce overoptimistic results, and that prior methods with similar scores on standard benchmarks have diverging performance outside the predefined OOD test sets. | SP:f12be73fab934b3d9c1917d05faad062d64d05e7 |
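A minimal sketch of the two ingredients described in the record above : the thresholded OOD decision rule f ( x* ) > η_f , and the threshold-free AUC metric . The quadratic toy detector is a hypothetical stand-in for a real scoring function , not one of the detectors discussed .

```python
import numpy as np

def classify_ood(f, xs, eta_f):
    """Flag each input as OOD (True) when its detector score exceeds the threshold."""
    return np.array([f(x) > eta_f for x in xs])

def auroc(scores_in, scores_out):
    """AUROC: probability that a random outlier outscores a random inlier
    (ties count half). 1.0 = perfect separation, 0.5 = random guessing."""
    s_in = np.asarray(scores_in, dtype=float)[:, None]
    s_out = np.asarray(scores_out, dtype=float)[None, :]
    return float((s_out > s_in).mean() + 0.5 * (s_out == s_in).mean())

# Hypothetical detector: distance-from-origin score, larger = more outlier-like.
def toy_detector(x):
    return 0.5 * float(np.dot(x, x))

inliers = np.zeros((3, 2))           # near the mode of P_in
outliers = 5.0 * np.ones((2, 2))     # far from the support S_in
xs = np.vstack([inliers, outliers])
labels = classify_ood(toy_detector, xs, eta_f=1.0)
auc = auroc([toy_detector(x) for x in inliers], [toy_detector(x) for x in outliers])
```

Note that auroc never touches η_f , which is why AUC is preferred when the threshold is hard to set .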
FROB: Few-shot ROBust Model for Classification with Out-of-Distribution Detection | 1 INTRODUCTION . In real-world settings , it is crucial to robustly perform classification and OoD detection with high levels of confidence . The problem of detecting whether a sample is in-distribution , from the training distribution , or OoD is critical for adversarial attacks . This is crucial nowadays in many applications in safety , security , and defence . However , deep neural networks produce overconfident predictions and do not distinguish in- and out-of-data-distribution . Adversarial examples , when small modifications of the input appear , can change the classifier decision . It is an important property of a classifier to address such limitations with high level of confidence , and provide robustness guarantees for neural networks . In parallel , OoD detection is a challenging aim since classifiers set high confidence to OoD samples away from the training data . The state-of-art models are overconfident in their predictions , and do not distinguish in- and OoD . The setting that our proposed Few-shot ROBust ( FROB ) model addresses is robust few-shot Out-of-Distribution ( OoD ) detection and few-shot Outlier Exposure ( OE ) . To address rarity and the limited samples in the few-shot setting , we aim at reducing the number of the few-shots of the OoD samples , while maintaining accurate and robust performance . Diverse data are available today in large quantities . Deep learning magnifies the difficulty of distinguishing OoD from in-distribution . It is possible to use such data to improve OoD detection by training detectors with auxiliary outlier sets ( Hendrycks et al. , 2019 ) . OE enables detectors to generalize to detect unseen OoD samples with improved robustness and performance . Models trained with different outliers can detect unmodelled data and improve OoD detection by learning cues for whether inputs are unmodelled . 
By exposing models to different OoD , the complement of the support of the normal class distribution is modelled and the detection of new types of anomalies is enabled . OE improves the calibration of deep neural network classifiers in the setting where a fraction of the data is OoD , addressing the problem of classifiers being overconfident when applied to OoD ( Bitterwolf et al. , 2020 ) . Aiming at solving the few-shot robustness problem with classification and OoD detection , the contribution of our FROB methodology is the development of an integrated robust framework for self-supervised few-shot negative data augmentation on the distribution confidence boundary , combined with few-shot OE , for improved OoD detection . The combination of the generated boundary in a self-supervised learning way and the imposition of low confidence at this learned boundary is the main contribution of FROB , which greatly and decisively improves robustness for few-shot OoD detection . To address the rarity of relevant outliers during training using OoD samples , we propose to use even few-shots to improve the OoD detection performance . FROB achieves significantly better robustness and resilience to few-shot OoD detection , while maintaining competitive in-distribution accuracy . FROB achieves generalization to unseen anomalies , with applicability to new , in the wild , test sets that do not correlate to the training sets . FROB ’ s evaluation on different sets , CIFAR-10 , SVHN , CIFAR-100 , and low-frequency noise , using cross-dataset and One-Class Classification ( OCC ) evaluations , shows that our self-supervised model with few-shot OE on the confidence boundary and few-shot adaptation improves the few-shot OoD detection performance and outperforms benchmarks . The robustness performance analysis of FROB to the number of few-shots and to outlier variation shows that it is robust to few-shots and outperforms baselines . 
2 OUR PROPOSED FEW-SHOT ROBUSTNESS ( FROB ) METHODOLOGY . We propose FROB for few-shot OoD detection and classification using discriminative and generative models . We devise a methodology for improved robustness and reliable confidence prediction , to force low confidence close to and away from the data . To improve robustness , FROB generates strong adversarial samples on the boundary close to the normal class . It finds the boundary of the normal class , and it combines the self-supervised learning few-shot boundary with our robustness loss . Flowchart of FROB . Fig . 1 shows the flowchart of FROB , which uses a discriminative model for classification and OoD detection . FROB also uses a generator for the OoD samples and the learned boundary . It generates low-confidence samples and performs active negative training with the generated OoD samples on the boundary . It performs self-supervised negative sampling of confidence boundary samples via the generation of strong and specifically adversarial OoD . It trains classifiers and generators to robustly classify samples on and outside the boundary as less confident . Our proposed loss . We denote the normal class data by x , where x_i are the labeled data with class labels y_i . Our proposed loss of the discriminative model , which is minimized during training , is $$ \arg\min_f \; -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp(f_{y_i}(x_i))}{\sum_{k=1}^{K} \exp(f_k(x_i))} \; - \; \lambda \, \frac{1}{M} \sum_{m=1}^{M} \log \left( 1 - \frac{\exp(f(Z_m))}{\sum_{k=1}^{K} \exp(f_k(Z_m))} \right) \quad (1) $$ where f(·) is the Convolutional Neural Network ( CNN ) discriminative model for multi-class classification with K classes . Our loss has 2 terms and a hyper-parameter . The 2 losses operate on different samples for positive and negative training , respectively . The first loss is the cross-entropy between y_i and the predictions , softmax(f(x_i)) ; the CNN is followed by the normalized exponential to obtain the probability over the classes . Our robustness loss forces f(·) to accurately detect outliers , in addition to classification . It operates on the few-shot OE samples , Z . It is weighted by the hyper-parameter λ , and k is a class index . For the in-distribution data , N is the batch size and i is the batch data sampling index . For the OoD data , M is the batch size and m is the batch data sampling index . FROB then trains a generator to generate low-confidence samples on the normal class boundary . Our algorithm includes these learned low-confidence samples in the training to improve the performance in the few-shot setting . Instead of using a large OE set , which constitutes an ad hoc choice of outliers to model the complement of the support of the normal class distribution , FROB performs learned negative data augmentation and self-supervised learning to model the boundary of the support of the normal class distribution . We train a CNN deep neural network generator and denote it by O(z) , where O refers to OoD samples and z are latent space samples from a standard Gaussian distribution . Our proposed optimization , maximizing dispersion subject to being on the boundary , is given by $$ \arg\min_O \; \frac{1}{N-1} \sum_{j=1, \, z_j \neq z}^{N} \frac{\|z - z_j\|_2}{\|O(z) - O(z_j)\|_2} \; + \; \mu \max_{l=1,\dots,K} \frac{\exp(f_l(O(z)) - f_l(x))}{\sum_{k=1}^{K} \exp(f_k(O(z)) - f_k(x))} \; + \; \nu \min_{j=1,\dots,Q} \|O(z) - x_j\|_2 \quad (2) $$ where using ( 2 ) , we penalize the probability that O(z) has higher confidence than the normal class . We hence make O(z) have lower probability than x ( Jolicoeur-Martineau , 2019 ; Ren et al. , 2021 ) . FROB includes the learned low-confidence samples in the training by performing ( 1 ) with the self-generated few-shot boundary , O(z) , in addition to Z . Our self-supervised learning mechanism to calibrate confidence in unforeseen scenarios is ( 2 ) followed by ( 1 ) . FROB performs boundary data augmentation in a learnable , self-supervised manner .
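The classification-plus-negative-training loss of Eq. ( 1 ) can be sketched in numpy as below . One reading assumption : the un-indexed exp ( f ( Z_m ) ) in the OE term is taken as the maximum-class probability of the outlier sample , which the text does not pin down .

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def frob_loss(logits_in, y, logits_oe, lam):
    """Cross-entropy on in-distribution batches plus a lambda-weighted term
    that pushes the confidence on OE/boundary samples Z toward zero."""
    p_in = softmax(logits_in)
    ce = -np.log(p_in[np.arange(len(y)), y] + 1e-12).mean()
    p_oe = softmax(logits_oe).max(axis=1)   # assumed reading of exp(f(Z_m))
    neg = -np.log(1.0 - p_oe + 1e-12).mean()
    return ce + lam * neg
```

Confident predictions on outliers inflate the second term , so the minimizer learns to be uncertain on and beyond the boundary .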
It introduces self-generated boundary samples , and sets them as OoD to better perform few-shot OoD detection . This learned boundary has strong and adversarial anomalies close to the distribution support and near high-probability normal class samples . FROB introduces optimal , relevant , and useful anomalies to more accurately detect few-shots of OoD ( Wang et al. , 2020a ; b ) . It detects OoD robustly , by generating strong adversarial OoD samples and helpful task-specific anomalies . A property of our nested optimization , where the inner optimization is O(z) in ( 2 ) and the outer one is cross-entropy with negative training in ( 1 ) , is that if an optimum is reached for the inner one , an optimum will also be reached for the outer one . FROB addresses the few-shots problem by performing negative data augmentation in a well-sampled manner on the support boundary of the normal class . It performs OoD sample description and characterization , not allowing space between the normal class and our self-generated anomalies . FROB addresses the question of what OoD samples to introduce to our model for negative training , to robustly detect few-shots of data . FROB introduces self-supervised learning and learned data augmentation using the Deep Tightest-Possible Data Description algorithm of ( 2 ) followed by ( 1 ) , and our self-generated confidence boundary in ( 2 ) is robust to mode collapse ( Dionelis et al. , 2020b ; a ) . By performing scattering , FROB achieves diversity using the ratio of distances in the latent and data spaces rather than maximum entropy ( von Kügelgen et al. , 2021 ; Dieng et al. , 2019 ) . Our framework uses data space point-set distances ( Dionelis et al. , 2020b ; a ; Jalal et al. , 2017 ; Jordan et al. , 2019 ) . Inference . The Anomaly Score ( AS ) of FROB for any queried test sample , x̃ , during inference is $$ AS(f, \tilde{x}) = \max_{l=1,\dots,K} \frac{\exp(f_l(\tilde{x}))}{\sum_{k=1}^{K} \exp(f_k(\tilde{x}))} \quad (3) $$ where if the AS is smaller than a threshold τ , i.e . AS < τ , then x̃ is OoD . Otherwise , x̃ is in-distribution . 3 RELATED WORK ON CLASSIFICATION WITH OOD DETECTION . Outlier Exposure . The OE method trains detectors with outliers to improve OoD performance in detecting unseen anomalies ( Hendrycks et al. , 2019 ) . Using auxiliary sets , disjoint from train and test data , models learn better representations for OoD detection . Confidence Enhancing Data Augmentation ( CEDA ) , Adversarial Confidence Enhancing Training ( ACET ) , and Guaranteed OoD Detection ( GOOD ) tackle the problem of classifiers being overconfident at OoD samples ( Bitterwolf et al. , 2020 ; Hein et al. , 2019 ) . Their aim is to force low confidence in an ℓ∞-norm ball around each OoD sample , where the prediction confidence is $\max_{k=1,\dots,K} p_k(x)$ for the output K-class softmax ( Sensoy et al. , 2018 ; Hariharan & Girshick , 2017 ; Jeong & Kim , 2020 ) . CEDA employs point-wise robustness ( Bastani et al. , 2016 ; Rosenfeld et al. , 2020 ) . GOOD finds worst-case OoD detection guarantees . The models are trained on OE sets , using the 80 Million Tiny Images reduced by the normal class . Disjoint distributions are used for positive and negative training , but the OoD samples for OE are chosen in an ad hoc way . In contrast , FROB performs learned negative data augmentation on the boundary of the normal class to streamline and redesign few-shot OE ( and zero-shot OE ) . Human prior . GOOD defines the normal class , then filters it out from the 80 Million Tiny Images . This filtering-out process of normality from the OE set is human-dependent . This modified dataset is set as anomalies . Next , GOOD learns the normal class and sets low confidence to these OoD . This process is data-dependent , not automatic , and feature-dependent ( Dionelis et al. , 2021 ; Sohn et al. , 2021 ) .
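Returning to the inference rule of Eq. ( 3 ) above , the anomaly score is just the maximum softmax probability of the classifier , thresholded at τ ; a minimal sketch :

```python
import numpy as np

def anomaly_score(logits):
    """Eq. (3): maximum softmax probability of the classifier output f(x)."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(p.max())

def is_ood(logits, tau):
    """A queried sample is flagged OoD when its anomaly score falls below tau."""
    return anomaly_score(logits) < tau
```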
In contrast , FROB eliminates the need for feature extraction and human intervention which is the aim of Deep Learning , as these do not scale . This filtering-out process is not practical and can not be used in real-world scenarios as anomalies are not confined in finite closed sets ( Sensoy et al. , 2020 ) . FROB avoids feature- , application- , and dataset-dependent processes . Our self-supervised boundary data augmentation obviates memorization , scalability , and data diversity problems arising from memory replay and prioritized experience replay ( Zaheer et al. , 2020 ; Pourreza et al. , 2021 ) . Learned OoD samples . The Confidence-Calibrated Classifier ( CCC ) uses a GAN to create samples out of , but close to the normal class ( Lee et al. , 2018a ) . FROB substantially differs from CCC , as CCC finds a threshold and not the boundary . CCC uses the OE set , U ( y ) , where the labels follow a Uniform distribution , to compute this threshold . This is limiting as the threshold depends on U ( y ) , which is an ad hoc choice of outliers . In contrast , FROB finds the confidence boundary and does not use U ( y ) to find this boundary . FROB streamlines OE and few-shot outliers . Our boundary is not a function of U ( y ) , as U ( y ) is not necessary ( Sohn et al. , 2021 ) . For negative training , CCC defines a closeness metric ( KL divergence ) , and then penalizes this metric ( Zaheer et al. , 2020 ; Asokan & Seelamantula , 2020 ; Dionelis et al. , 2021 ) . CCC suffers from mode collapse as it does not perform scattering for diversity . The models in Lee et al . ( 2018a ) ; Vernekar et al . ( 2019a ; b ) and Wang et al . ( 2018 ) perform confidence-aware classification . Self-Supervised outlier Detection ( SSD ) creates OoD samples in the Mahalanobis metric ( Sehwag et al. , 2021 ) . It is not a classifier , as it performs OoD detection with OE . FROB achieves fast inference with ( 3 ) , in contrast to Tack et al . 
( 2020 ) which is slow during inference ( Goldberger et al. , 2005 ) . Tack et al . ( 2020 ) does not address issues arising from detecting with nearest neighbors while using a different composite loss for training . | The paper addresses an important issue of Out-of-Distribution detection in a few-shot setting. The authors propose to generate negative samples in an adversarial way to increase the OoD performance. Additionally they augment the loss function with additional term for OoD samples. They perform experiments of 2 benchmark datasets: CIFAR-10 and SVHN. | SP:11561980998d0ff0b9a327c512fa1c918173d476 |
Learning Synthetic Environments and Reward Networks for Reinforcement Learning | We introduce Synthetic Environments ( SEs ) and Reward Networks ( RNs ) , represented by neural networks , as proxy environment models for training Reinforcement Learning ( RL ) agents . We show that an agent , after being trained exclusively on the SE , is able to solve the corresponding real environment . While an SE acts as a full proxy to a real environment by learning about its state dynamics and rewards , an RN is a partial proxy that learns to augment or replace rewards . We use bi-level optimization to evolve SEs and RNs : the inner loop trains the RL agent , and the outer loop trains the parameters of the SE / RN via an evolution strategy . We evaluate our proposed new concept on a broad range of RL algorithms and classic control environments . In a one-to-one comparison , learning an SE proxy requires more interactions with the real environment than training agents only on the real environment . However , once such an SE has been learned , we do not need any interactions with the real environment to train new agents . Moreover , the learned SE proxies allow us to train agents with fewer interactions while maintaining the original task performance . Our empirical results suggest that SEs achieve this result by learning informed representations that bias the agents towards relevant states . Moreover , we find that these proxies are robust against hyperparameter variation and can also transfer to unseen agents . 1 INTRODUCTION . Generating synthetic data addresses the question of what data is required to achieve a rich learning experience in machine learning . Next to increasing the amount of available data , synthetic data can enable higher training efficiency that opens up new applications for Neural Architecture Search ( Such et al. , 2020 ) , may improve algorithm analysis or facilitate custom datasets ( Jhang et al. , 2020 ) . 
In this paper , we consider learning neural synthetic data generators for Reinforcement Learning ( RL ) . We investigate the question of whether we can learn a synthetic Markov Decision Process of a real ( target ) environment which is capable of producing synthetic data to allow effective and more efficient agent training , that is , to achieve similar or higher performance more quickly compared to when training purely on the real environment . When learning to produce both states and rewards , we refer to these neural network proxies as synthetic environments ( SEs ) . Additionally , we investigate the same question for learning reward proxies that do not learn about the state dynamics and which we refer to as a Reward Networks ( RNs ) . We depict our procedure in Figure 1which resembles a bi-level optimization scheme consisting of an outer and inner loop . The inner loop trains the agent on an SE or RN . Since our method is agnostic to both domain and agent , we can interchangeably adopt standard RL algorithms in the inner loop . In the outer loop , we assess the agent ’ s performance by evaluating it on the real environment ; we then take the collected reward as a score to update the SE ’ s or RN ’ s neural parameters used in the inner loop . In this way , the SE/RN is gradually updated such that an agent being trained on it , scores higher on a real environment . For the outer loop we use Evolution Strategies ( Rechenberg , 1973 ; Salimans et al. , 2017 ) with a population of SE/RN parameters . After discussing related work ( Section 2 ) , we make the following contributions : • We introduce synthetic environments ( Section 3 ) , a novel concept that focuses on the environment instead of agent learning using a bi-level optimization scheme , which is guided purely by the agent performance . This concept goes beyond the usual learning of a onetime internal environment model inside an agent ( such as in Dyna ( Sutton , 1990 ) ) . 
• As a sub-problem of SEs , we investigate reward networks ( Section 4 ) , contrasting several types of potential-based reward shaping ( Ng et al. , 1999 ) variants . • We show that it is possible to learn SEs ( Section 5 ) and RNs ( Section 6 ) that , when used for agent training , yield agents that successfully solve the Gym tasks ( Brockman et al. , 2016 ) CartPole and Acrobot ( SEs ) , as well as Cliff Walking , CartPole , MountainCarContinuous , and HalfCheetah ( RNs ) . • We show that SEs and RNs are efficient and robust in training agents , require fewer training steps compared to training on the real environment and are able to train unseen agents • We report empirical evidence showing that SEs and RNs achieve their efficiency gains for training new agents through condensed and informed state and reward representations . Overall , we find it noteworthy that it is actually possible to learn such proxies , and we believe our research will improve their understanding . Since these learned proxies can train agents quickly and transfer to unseen agents , we believe this work might open up possible avenues of research in RL . Possible future applications include cheap-to-run environments for AutoML ( Hutter et al. , 2019 ) or for robotics when training on targets is expensive , as well as agent and task analysis , or efficient RL agent pre-training . Our PyTorch ( Paszke et al. , 2019 ) code and models are made available publicly.1 2 RELATED WORK . Synthetic Environments In the context of RL , learning synthetic environments is related to model-based RL ( MBRL ) where the dynamics model can be viewed as an SE . In MBRL , one jointly learns both a dynamics model and a policy as in Dyna ( Sutton , 1990 ) or an existing dynamics model is used with planning methods to learn a policy ( Silver et al. , 2017 ; Moerland et al. , 2020 ) . 
Our work does not involve planning , nor does it use supervised learning for the dynamics model , and it does not mix synthetic and real data during policy learning . Instead , we use purely synthetic data to train agents , similar to World Models ( Ha & Schmidhuber , 2018 ) , but use the cumulative rewards from the real environment in a bi-level optimization for learning our model jointly with our agent . For a more extensive discussion on the differences between our work and model-based RL , we refer the reader to Appendix D. Analogous to learning SEs is Procedural Content Generation ( Togelius et al. , 2011 ) and Curriculum Learning , which concerns automatically selecting ( Matiisen et al. , 2020 ) or generating the content of training environments ( Volz et al. , 2018 ; Shaker et al. , 2016 ; Wang et al. , 2019 ; Cobbe et al. , 2020 ) , with the closest work being Generative Playing Networks ( GPNs ) ( Bontrager & Togelius , 2020 ) . GPNs learn an environment generator that creates increasingly difficult SEs according to what a critic ’ s value function estimates as challenging . While GPNs are a method to generate environment curricula , our approach studies learning an SE for effective and efficient agent training by compressing the relevant information into a single model of the environment . We also use a more generally applicable objective that does not rely on actor-critic formulations but purely on the achieved cumulative reward . ( Footnote 1 : https://github.com/automl/learning environments )
While we similarly use a bi-level optimization to learn a synthetic data generator , our approach is different in central aspects : we use Evolution Strategies to avoid the need for explicitly computing Hessians , we do not use noise vectors as input to our SEs , and we target sequential decision-making problems instead of supervised learning . Reward Networks Reward shaping concerns the question of how to enhance the reward signal to allow agents to be trained more effectively or efficiently . Common learned reward shaping approaches are curiosity or count-based exploration ( Pathak et al. , 2017 ; Burda et al. , 2019 ; Singh et al. , 2010 ; Bellemare et al. , 2016 ; Tang et al. , 2017 ) . Others achieve reward shaping with prior knowledge through expert demonstrations ( Judah et al. , 2014 ; Brys et al. , 2015 ; Ibarz et al. , 2018 ) . In contrast to our work , these contributions all apply a single-level optimization . When using a bilevel optimization , the reward shaping function is usually learned in the outer loop while the policy using the learned rewards is optimized in the inner loop . Here , one way is to meta-learn the parameterization of reward functions ( Faust et al. , 2019 ; Hu et al. , 2020 ; Jaderberg et al. , 2019 ) . Another way is to learn a neural network that resembles the reward function . While learning full synthetic environments is entirely novel , there exists prior work on learning reward shaping networks . The most related works are ( Zheng et al. , 2018 ) for single tasks , ( Zou et al. , 2019 ) for entire task distributions , or ( Zheng et al. , 2020 ) that additionally take into account the entire lifetime of an agent to learn a “ statefulness across episodes ” -reward function . Despite the similarities , some noteworthy differences exist . Importantly , the approaches in ( Zheng et al. , 2018 ; Zou et al. , 2019 ) are not agent-agnostic , making it less straightforward to exchange agents as in our work . 
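Potential-based shaping ( Ng et al. , 1999 ) , the family that several of the reward-shaping variants above build on , adds F ( s , s′ ) = γΦ ( s′ ) − Φ ( s ) to every reward without changing the optimal policy . A minimal sketch ; Φ is a caller-supplied placeholder , and zeroing the terminal potential is a common convention we assume here :

```python
def shaped_reward(r, s, s_next, phi, gamma=0.99, done=False):
    """Add the potential-based shaping term F(s, s') = gamma*phi(s') - phi(s).

    phi: state-potential function (any heuristic estimate of state value).
    done: at episode termination the next-state potential is taken as zero.
    """
    phi_next = 0.0 if done else phi(s_next)
    return r + gamma * phi_next - phi(s)
```

Because the shaping terms telescope along any trajectory , every policy's return shifts by the same amount from a given start state , which is why optimality is preserved .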
Moreover , the transferability of learned shaped rewards is studied only limitedly ( Zheng et al. , 2018 ) , not at all ( Zou et al. , 2019 ) , or only for grid-world-like environments ( Zheng et al. , 2020 ) . 3 LEARNING SYNTHETIC ENVIRONMENTS . Problem Statement . Let ( S , A , P , R ) be a Markov Decision Process ( MDP ) with the set of states S , the set of actions A , the state transition probabilities P , and the immediate rewards R when transitioning from state s ∈ S to the next state s′ ∈ S through action a ∈ A . The MDPs we consider are either human-designed environments E_real or learned synthetic environments E_syn,ψ ( SE ) represented by a neural network with parameters ψ . Interfacing with the environments is identical in both cases , i.e . s′ , r = E ( s , a ) . The crucial difference is that for SEs , the state dynamics and rewards are learned . The main objective of an RL agent acting on an MDP E_real is to find a policy π_θ parameterized by θ that maximizes the cumulative expected reward F ( θ ; E_real ) . We consider the following bi-level optimization problem : find the parameters ψ* such that the agent policy π_θ parameterized with θ that results from training on E_syn,ψ* achieves the highest reward on a target environment E_real . Formally , that is : $$ \psi^* = \arg\max_{\psi} F(\theta^*(\psi); E_{real}) \quad \text{s.t.} \quad \theta^*(\psi) = \arg\max_{\theta} F(\theta; E_{syn,\psi}) \quad (1) $$ We use standard RL algorithms for optimizing the agents on the SE in the inner loop . Although gradient-based optimization methods can be applied in the outer loop , we chose Natural Evolution Strategies ( NES ) ( Wierstra et al. , 2008 ) to allow the optimization to be independent of the choice of the agent in the inner loop and to avoid potentially expensive , unstable meta-gradients ( Metz et al. , 2019 ) . Additional advantages of NES are that it is better suited for long episodes , sparse or delayed rewards , and parallelization ( Salimans et al. , 2017 ) . Algorithm . We now explain our method .
The overall scheme is adopted from ( Salimans et al. , 2017 ) and depicted in Algorithm 1 . It consists of an Evolutionary Strategy in the outer loop to learn the SE and an inner loop which trains RL agents . The performances of the trained agents are then used in the outer loop to update the SE . We instantiate the population search distribution as a multivariate Gaussian with mean 0 and a fixed covariance σ²I . The main difference to ( Salimans et al. , 2017 ) is that , while they maintain a population over agent parameter vectors , our population consists of SE parameter vectors . Moreover , our approach involves two optimizations ( the agent and the SE parameters ) instead of one ( agent parameters ) .

Algorithm 1 : Learning Synthetic Environments with NES
1 : Input : initial SE parameters ψ , real environment E_real , NES noise std . dev . σ , number of episodes n_e , population size n_p , NES step size α
2 : repeat
3 :   foreach member of population i = 1 , 2 , ... , n_p do
4 :     ε_i ∼ N ( 0 , σ²I )
5 :     ψ_i = ψ + ε_i
6 :     θ_i = TrainAgent ( θ_i , E_syn,ψ_i , n_e )
7 :     F_ψ,i = EvaluateAgent ( θ_i , E_real )
8 :   Update SE : ψ ← ψ + α · ( 1 / ( n_p σ ) ) Σ_i F_ψ,i ε_i
9 : until n_o steps

Our algorithm first stochastically perturbs each population member i , which results in ψ_i ( Line 5 ) . Then , a new randomly initialized agent is trained on the SE parameterized by ψ_i for n_e episodes ( L6 ) . The trained agent with fixed parameters is then tested across 10 episodes on the real environment ( L7 ) , yielding the average cumulative reward , which we use as a score F_ψ,i . Finally , we update ψ with a stochastic gradient estimate based on all member scores ( L8 ) . We use a parallelized version of the algorithm and an early-stopping heuristic to stop in fewer than n_e episodes when progress plateaus ( more in Appendix A.1 ) .
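Algorithm 1 can be sketched compactly in numpy . Here train_agent and evaluate_agent are placeholders for the inner-loop RL training and the real-environment rollout , and we subtract the population-mean score as a standard variance-reduction baseline , which the algorithm as printed does not include :

```python
import numpy as np

def nes_learn_se(psi, train_agent, evaluate_agent, sigma=0.1, n_pop=8,
                 alpha=0.01, n_steps=100, seed=0):
    """NES outer loop over synthetic-environment parameters psi (Algorithm 1).

    train_agent(psi_i): trains a fresh agent on the SE with parameters psi_i
    and returns its policy parameters theta_i (inner loop, lines 5-6).
    evaluate_agent(theta_i): average return on the real environment (line 7).
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        eps = rng.standard_normal((n_pop, psi.size))        # line 4
        scores = np.empty(n_pop)
        for i in range(n_pop):
            theta_i = train_agent(psi + sigma * eps[i])     # lines 5-6
            scores[i] = evaluate_agent(theta_i)             # line 7
        scores -= scores.mean()                             # baseline (variance reduction)
        psi = psi + alpha / (n_pop * sigma) * eps.T @ scores  # line 8
    return psi
```

On a smooth toy objective this drives the SE parameters toward whatever the real-environment score rewards , without any gradients through the inner loop .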
| This paper aims to learn proxy environments (synthetic environments or SEs) and reward functions (reward networks or RNs), parameterized as neural networks, such that these proxy models provide beneficial transitions to make it more sample-efficient to learn a policy for a fixed target environment (referred to as the real environment). SEs replace the observed state and rewards during training, while reward networks provide a synthetic reward that is a function of the real reward, current state, and next state. The proposed method formulates this problem as a bi-level optimization, where the inner loop consists of standard RL under the proxy model, and the outer loop consists of NES with the aim of optimizing either the performance on the true, target environment (SEs) or a the number of training steps needed to reach a certain return threshold on the real environment. | SP:67e9394a528b943091462ecae83f3202d40bce57 |
Do What Nature Did To Us: Evolving Plastic Recurrent Neural Networks For Generalized Tasks | While artificial neural networks ( ANNs ) have been widely adopted in machine learning , the gaps between ANNs and biological neural networks ( BNNs ) are receiving increasing concern . In this paper , we propose a framework named as Evolutionary Plastic Recurrent Neural Networks ( EPRNN ) . Inspired by BNN , EPRNN composes Evolution Strategies , Plasticity Rules , and Recursion-based Learning in one meta-learning framework for generalization to different tasks . More specifically , EPRNN incorporates nested loops for meta-learning — an outer loop searches for optimal initial parameters of the neural network and learning rules ; an inner loop adapts to specific tasks . In the inner loop of EPRNN , we effectively attain both long-term and short-term memory by forging plasticity with recursion-based learning mechanisms , both of which are believed to be responsible for memristance in BNNs . The inner-loop setting closely simulates BNNs , which neither use gradient-based optimization nor require the exact forms of learning objectives . To evaluate the performance of EPRNN , we carry out extensive experiments in two groups of tasks : Sequence Predicting , and Wheeled Robot Navigating . The experiment results demonstrate the unique advantage of EPRNN compared to state-of-the-arts based on plasticity and recursion while yielding comparably good performance against deep learning-based approaches in the tasks . The experiment results suggest the potential of EPRNN to generalize to a variety of tasks and encourage more efforts in plasticity and recursion-based learning mechanisms . 1 INTRODUCTION . ANNs have achieved great success in handling machine learning tasks . Despite being initially inspired by Biological Neural Networks ( BNNs ) , there are apparent gaps between ANNs and BNNs . Mainstream ANNs use gradient-based optimizers to minimize learning objectives . 
Shreds of evidence show that BNNs learn through plasticity ( Gerstner et al. , 1993 ) without explicit learning objectives , among which Hebb ’ s rule ( Hebb , 1949 ) is most well known . Though gradient descent methods are the most efficient optimizers for ANNs , their side effects are also noticed , including the problems of catastrophic forgetting , over-consumption of data , and the requirement for manual efforts in designing objective functions . Those challenges are becoming an essential impedance to the further development of machine intelligence . Recent studies show the learning mechanisms of BNNs , such as plasticity ( Soltoggio et al. , 2008 ; Najarro & Risi , 2020 ) and model-based learning ( Santoro et al. , 2016 ; Mishra et al. , 2018 ) , under appropriate meta-parameter optimization , can be effective alternative for task generalization in ANNs . Unlike gradient-based methods , these mechanisms simulate the learning behaviors of BNNs and don ’ t require any explicit-form learning objectives . More recently , authors in ( Miconi et al. , 2019 ) proposed a plastic recurrent neural network for lifelong learning of ANNs , where implements Hebbian plasticity with differentiable objectives and gradient-based optimization . Though the above studies have investigated learning of ANNs using the two mechanisms derived from BNNs with gradient-based methods ( Miconi et al. , 2019 ) optionally , in this work , we aim at further verify the path of discovering those rules evolutionarily , simulating that of BNNs . Backgrounds . Though learning in BNNs has not been fully understood , some of the learning mechanisms and rules , such as plasticity ( Gerstner et al. , 1993 ) and recursion ( Pollen , 2003 ) , have been observed in brains and adopted by ANNs . Typically , Model-based learning employs recurrent neural networks ( RNN ) , LSTM ( Hochreiter & Schmidhuber , 1997 ) , and self-attention ( Mishra et al. , 2018 ; Chen et al. , 2021 ) layers as learners . 
Learning is based on memories within the feed-forward pass . The information is updated in the hidden states instead of the parameters . Model-based learners are found to be sample efficient in generalized supervised tasks ( Santoro et al. , 2016 ) , zero-shot generalization in language ( Brown et al. , 2020 ) , and reinforcement learning ( Mishra et al. , 2018 ; Chen et al. , 2021 ) when compared with various types of gradient descent methods . Among model-based learners , though self-attention-based learners such as Transformers achieve state-of-the-art performance , their O ( T^2 ) inference cost ( where T is the sequence length ) restricts them to relatively short sequences . On the other hand , recurrent learners such as RNN and LSTM have O ( T ) inference costs but suffer from poor asymptotic performance . That is , as sequences get longer , performance no longer improves or even deteriorates , partly due to the limitation of memory space . For instance , a recurrent neural network of hidden size n has a memory of O ( n ) . In contrast , its parameters scale with O ( n^2 ) , making parameter updating a more powerful learning mechanism than recursion alone . In addition to recursion-based learning , evolving plasticity ( Soltoggio et al. , 2008 ; 2018 ; Lindsey & Litwin-Kumar , 2020 ; Yaman et al. , 2021 ) has been proposed to reproduce natural evolution and plasticity in simulation , as shown in Figure 1 . Implementing plasticity is not straightforward ; unlike gradient-based learning methods , plastic rules are not universal but have to be optimized beforehand , which is not possible without a further outer-loop optimizer over the inner-loop learning . Evolutionary algorithms ( Zhang et al. , 2011 ; Salimans et al. , 2017a ) are typically applied in the outer loop to search for meta-parameters shaping the learning rules , which can be regarded as information carried by genomes during evolution .
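The memory-versus-parameters argument above can be made concrete with a small count; the layer sizes below are arbitrary illustrative choices:

```python
# Illustration of the scaling argument above: a vanilla RNN with hidden
# size n stores O(n) numbers in its hidden state (its "memory"), while its
# recurrent weight matrix alone already holds n^2 trainable parameters.
def rnn_state_and_param_counts(n_hidden, n_input):
    state = n_hidden                             # hidden state h_t
    params = n_hidden * n_hidden                 # recurrent weights
    params += n_hidden * n_input + n_hidden      # input weights + bias
    return state, params

print(rnn_state_and_param_counts(128, 16))   # (128, 18560)
```

For hidden size 128, the state carries 128 numbers while the parameters number over 18,000, which is why parameter-updating rules such as plasticity offer a larger learning substrate than recursion alone.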
Those optimized plasticity rules are then applied in the inner loop to further tune the NN ’ s parameters for better adaptation to the environment . Another line of work tries to bring gradient-based learning algorithms to plasticity rule optimization ( Miconi et al. , 2018 ; 2019 ) . It is found that evolution can be more efficient in long-horizon reinforcement learning cases ( Salimans et al. , 2017b ; Stanley , 2019 ) . Our Works . Inspired by the previous works ( Cabessa & Siegelmann , 2014 ; Miconi et al. , 2018 ; 2019 ) that improve recurrent neural networks using plastic rules to increase learning capacity , we propose a novel meta-learning framework named Evolutionary Plastic Recurrent Neural Networks ( EPRNN ) for task generalization . Specifically , this work makes the following contributions . • We study the potential of learning plasticity and recursion rules through natural evolution for task generalization . We show that recursion and plasticity-based rules can surpass gradient-based methods as inner-loop learners . • We present investigations and analyses of the learned rules and parameters , showing that the learning framework discovers plasticity rules that effectively update the connection weights according to the learning tasks . The differences between the transformation of hidden states and parameters are also shown , verifying the efficacy of combining recursion with plasticity . The most relevant works to our study are ( Miconi et al. , 2018 ; 2019 ; Lindsey & Litwin-Kumar , 2020 ; Yaman et al. , 2021 ) . Compared to ( Miconi et al. , 2018 ) , which leverages gradient oracles to efficiently search for plastic rules , the proposed EPRNN works well in gradient-free settings . Compared to ( Lindsey & Litwin-Kumar , 2020 ; Yaman et al. , 2021 ) , which use evolutionary strategies to learn plastic rules , EPRNN also incorporates an RNN-based inner loop for recursion-based learning . The work of ( Miconi et al. , 2019 ) also uses recursion ( i.e. , an RNN ) and differentiable plasticity in nested loops to train self-modifying neural networks ; in contrast , EPRNN replaces the outer loop with evolutionary strategies to generalize to tasks with non-differentiable objectives . Though EPRNN is not as competitive as gradient-based methods , which can optimize advanced neural networks with large datasets , our work still demonstrates the potential of using plasticity and recursion for meta-learning through natural evolution . 2 RELATED WORKS . Meta-Learning . Meta-learning aims at building learning machines that gain experience using task-specific data over a distribution of tasks . Inspired by human and animal brains that are born with both embedded skills and the capability of acquiring new skills , meta-learning implements two nested learning loops : the outer learning loop optimizes the meta-parameters , which typically involve initial parameters ( Finn et al. , 2017 ; Song et al. , 2019 ) , learning rules ( Zoph & Le , 2017 ; Najarro & Risi , 2020 ; Pedersen & Risi , 2021 ) , model structures ( Soltoggio et al. , 2008 ; Li & Malik , 2016 ) , or all three ( Real et al. , 2020 ) , over a distribution of tasks ; the inner learning loop adapts the model to specific tasks by utilizing those meta-parameters . According to the inner-loop optimizer , we roughly classify the methods into model-based and parameter-updating methods . The model-based methods do not update the parameters in the inner loop , where only hidden states are updated ( e.g. , recursion ) ; the parameter-updating methods modify the connection weights in the inner loop ( e.g. , gradient descent ( MAML ) , plasticity ) . From this point of view , our method can be classified into both groups . A brief review of the typical meta-learning paradigms is presented in Table 1 . Plasticity-based Learning .
The earliest proposal of a learning mechanism for BNNs is Hebb ’ s rule ( Hebb , 1949 ) , the most prominent part of which is “ neurons that fire together wire together ” . It was further refined by Spike-Timing-Dependent Plasticity ( STDP ) ( Gerstner et al. , 1993 ) , which indicates that the learning signal depends on the temporal patterns of the presynaptic and postsynaptic spikes . Learning can also appear in inhibitory connections , also known as anti-Hebbian learning ( Barlow , 1989 ) . Relationships between STDP and memory have also been investigated ( Linares-Barranco & Serrano-Gotarredona , 2009 ) . Since many of those rules are related to spiking neural networks ( Ghosh-Dastidar & Adeli , 2009 ) , to apply them to ANNs , simplified rules are proposed instead ( Soltoggio et al. , 2008 ) : given the pre-synaptic neuron state X and post-synaptic neuron state Y , the connections between X and Y are updated by δW = m [ A · XY + B · X + C · Y + D ] , ( 1 ) where m is the output from neuron modulators that adjust the learning rate of plasticity . Most of the existing rules are sub-classes of Equation 1 . For instance , some works neglect the neural modulator m ( Najarro & Risi , 2020 ; Miconi et al. , 2018 ) , while others set B , C , and D to 0 ( Miconi et al. , 2018 ; 2019 ) . The learned rules A , B , C , D inevitably depend on the initial parameters W ; however , learning plastic rules that do not depend on the initial parameters has also been investigated ( Najarro & Risi , 2020 ; Yaman et al. , 2021 ) . 3 ALGORITHMS . Problem Settings . We consider an agent ( learner ) that depends on meta-parameters θ . It has the capability of adapting itself to a distribution of tasks Tj ∈ T by interacting with the environment Tj through observations it and actions at . In K-shot learning , the agent is allowed to first observe samples of length K ( this stage can be referred to as meta-training-training , see Beaulieu et al .
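The generalized rule in Equation 1 can be sketched in a few lines. This is a minimal sketch, assuming scalar coefficients A, B, C, D and a scalar modulator m for clarity; in EPRNN these are per-connection quantities learned by evolution:

```python
import numpy as np

# Generalized plasticity rule (Equation 1):
# delta_W = m * (A * X*Y + B * X + C * Y + D), applied per connection.
def hebbian_delta(x, y, A, B, C, D, m):
    # x: pre-synaptic states, shape (n,); y: post-synaptic states, shape (k,)
    xy = np.outer(y, x)                      # correlation term A * X Y
    x_term = np.outer(np.ones_like(y), x)    # pre-synaptic term B * X
    y_term = np.outer(y, np.ones_like(x))    # post-synaptic term C * Y
    return m * (A * xy + B * x_term + C * y_term + D)   # shape (k, n)

x = np.array([1.0, 0.0])
y = np.array([0.5])
dW = hebbian_delta(x, y, A=1.0, B=0.0, C=0.0, D=0.0, m=0.1)
print(dW)   # pure Hebbian case: 0.1 * y x^T = [[0.05, 0.0]]
```

Setting m to a constant recovers the modulator-free variants, and zeroing B, C, D recovers the pure Hebbian sub-class mentioned above.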
( 2020 ) ) , then its fitness is calculated in meta-training-testing rollouts . In Generalized Supervised Learning tasks ( GSL ) , the observations typically include features ( xt ) and labels ( yt ) in the meta-training-training stage ( it = { xt , yt } ) , and the labels are left out for prediction in the meta-training-testing stage ( Santoro et al. , 2016 ; Garnelo et al. , 2018 ) . In Generalized Reinforcement Learning tasks ( GRL ) , the observations typically include states ( st ) , actions ( at−1 ) , and feedbacks ( rt−1 ) ( it = { st , at−1 , rt−1 } ; sometimes rt−1 cannot be observed ) ( Mishra et al. , 2018 ) . The goal of meta-training is to optimize θ such that the agent achieves higher fitness in meta-training-testing . In meta-testing , similarly , the learned parameters are given meta-testing-training and meta-testing-testing in order , and the performance in meta-testing-testing is evaluated . Plastic Recurrent Neural Networks ( PRNN ) . Given a sequence of observations i1 , ... , it , ... , we first consider a recurrent neural network ( RNN ) that propagates forward and yields a sequence of outputs at as follows : ht+1 = σ ( Wt · ht + Wi · it + b ) , ( 2 ) at = f ( ht+1 ) , ( 3 ) where ht is the hidden state at step t. In PRNN , we keep Wi stationary but set Wt to be plastic , so we add a subscript t to mark the different Wt at different steps . Regarding ht as pre-synaptic neuron states and ht+1 as post-synaptic neuron states , by applying Equation 1 , we update Wt with : Wt+1 = Wt + δWt , ( 4 ) δWt = WA ⊙ ( ĥt+1 · hᵀt ) + WB ⊙ ( mt · hᵀt ) + WC ⊙ ( ĥt+1 · 1ᵀ ) + WD · mt , ( 5 ) ĥt+1 = mt ⊙ ht+1 , ( 6 ) where we use ⊙ and · to represent “ element-wise multiplication ” and “ matrix multiplication ” respectively . h and 1 are column vectors . WA , WB , WC , WD are collections of the plastic rules A , B , C , D from Equation 1 , each having the same shape as Wt . mt is the neural modulator that adjusts the learning rate of plasticity .
We calculate mt by applying a neuron modulating layer denoted by : mt = σ ( W(m)h · ht + W(m)i · it + b(m) ) . ( 7 ) A sketch of PRNN is presented in Figure 2 . The main difference between PRNN and a naive RNN is that PRNN updates both the hidden states and the connection weights during the forward pass . Evolving PRNN . Given a task Tj ∈ T , by continuously applying Equations 2 to 7 over meta-training-training and meta-training-testing , the fitness eventually depends on the initial parameters , the learning rules , and the sampled task T , which is denoted as : Fit ( θ , T ) = Fitness ( iK+1 , aK+1 , iK+2 , aK+2 , ... ) , ( 8 ) Wi , W0 , WA , WB , WC , WD , W(m)h , W(m)i , b , b(m) ∈ θ . ( 9 ) Following Evolution Strategies ( ES ) ( Salimans et al. , 2017a ) , in the kth outer-loop iteration , we sample different tasks from T and meta-parameters θk,i ( i ∈ [ 1 , n ] ) from the neighbourhood of θk . We evaluate the fitness of the sampled meta-parameters and update the meta-parameters by applying : θk+1 = θk + α ( 1/n ) ∑ni=1 Fit ( θk,i , Tk ) ( θk,i − θk ) . ( 10 ) Why Recurrent Neural Networks ? As stated in Equation 1 , plasticity in feed-forward-only NNs allows NNs to gain experience from single-frame observations only . In cases of non-sequential GSL , plasticity has a chance to tune the connection weights to the specific task by relying on a single frame of data ( it = { xt , yt } ) , since the information of the feature and the supervision is complete . However , in general cases , learning can hardly be effective without summarizing sequences of observations . For instance , a human driver gets used to a new car through continuous interaction and observation . Moreover , in GRL , there is a time lag between the observation of states and feedbacks . Recursion helps to summarize historical observations to give the correct update for the connection weights .
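The inner-loop step (Equations 2 to 7) and the ES meta-update (Equation 10) can be sketched together. This is a minimal numpy sketch, not the authors' implementation: dense arrays stand in for the networks, and the sigmoid nonlinearity and the broadcasting of the WD term are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One PRNN inner-loop step: the forward pass updates both the hidden state h
# and the plastic recurrent weights W (Equations 2, 4-7).
def prnn_step(h, W, i_t, p):
    h_new = sigmoid(W @ h + p["Wi"] @ i_t + p["b"])           # Eq. 2
    m = sigmoid(p["Wmh"] @ h + p["Wmi"] @ i_t + p["bm"])      # Eq. 7
    h_mod = m * h_new                                         # Eq. 6
    dW = (p["WA"] * np.outer(h_mod, h)                        # Eq. 5
          + p["WB"] * np.outer(m, h)
          + p["WC"] * h_mod[:, None]
          + p["WD"] * m[:, None])
    return h_new, W + dW                                      # Eq. 4

# ES meta-update (Equation 10) over a flat meta-parameter vector.
def es_update(theta, fitness, alpha=0.01, sigma=0.1, n=8, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    step = np.zeros_like(theta)
    for _ in range(n):
        eps = sigma * rng.standard_normal(theta.shape)
        step += fitness(theta + eps) * eps   # Fit(theta_k,i) (theta_k,i - theta_k)
    return theta + alpha * step / n
```

In EPRNN the fitness passed to the outer loop is the return of full meta-training-training plus meta-training-testing rollouts, so no gradient of the inner loop is ever needed.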
Although there are obviously more sophisticated neural structures than the naive RNN , such as GRU and LSTM , we believe it is more desirable to start from the simplest recurrent structure to study the potential of combining recursion and plasticity . | In this work, the authors use evolutionary strategies to train recurrent neural networks with Hebbian plasticity rules. They test the system on two tasks: sequence prediction and a simple RL task that involves robot navigation. The approach is compared against previous work that uses plasticity but without recurrent connections, and against other approaches such as LSTMs. For the problems presented in this paper, the proposed approach outperforms most methods used in the comparison. | SP:b09a4981cd0ca1f72068fb57104f3142c81bb92d
Continuous Deep Q-Learning in Optimal Control Problems: Normalized Advantage Functions Analysis | 1 INTRODUCTION . The standard reinforcement learning ( RL ) setup consists of an agent interacting with an environment ( Sutton & Barto , 2018 ) . At each step of the interaction , the agent determines an action based on its policy and its current state , gets a reward , and makes a transition to the next state . The aim of the agent is to learn the policy that maximizes the sum of rewards . Q-learning ( Watkins & Dayan , 1992 ) is one of the most widespread algorithms for solving RL problems . According to this algorithm , the optimal action-value function ( Q-function ) is found as a solution of the Bellman optimality equation . After the learning , the agent can act optimally using the learned Q-function . Initially , the Q-learning algorithm was applied to solving RL problems with finite state and action spaces . In this case , the Q-function can be represented by a finite table . For the case of a large or continuous state space , Q-learning has recently been extended to the Deep Q-learning algorithm ( Mnih et al. , 2015 ) , which allows searching for the approximate Q-function in the class of neural networks by means of stochastic gradient descent . Deep Q-learning and its modifications have shown their efficiency for a range of challenging tasks ( Wang et al. , 2016 ; van Hasselt et al. , 2016 ; Schaul et al. , 2016 ; Hessel et al. , 2018 ) ; note , however , that this algorithm cannot be directly applied to solving RL problems with continuous action spaces . The reason is that Deep Q-learning involves maximizing an approximate Q-function over the action variable at each step of the learning , which is a complex problem for continuous action spaces . Among various approaches to overcome this problem ( Lillicrap et al. , 2016 ; Haarnoja et al. , 2017 ; Kalashnikov et al. , 2018 ; Lim et al. , 2019 ; Ryu et al. , 2020 ; Lutter et al.
, 2021 ) , we focus on the idea of the normalized advantage functions ( NAF ) algorithm ( Gu et al. , 2016 ) . This idea consists in approximating the Q-function by functions quadratic with respect to the action variable . It allows obtaining the maximum quickly and precisely and solving some challenging control problems ( Gu et al. , 2017 ; Dong et al. , 2018 ; Ikemoto & Ushio , 2021 ) , but , on the other hand , it brings up the question of the classes of RL problems in which this approximation is acceptable . The presented paper describes one possible answer to this question . Note that the class of LQR problems ( Bradtke et al. , 1994 ) has Q-functions quadratic with respect to the action variable . However , this class , being quite special , is not suitable for describing complex controlled processes . In this paper , we consider a wider ( in some sense ) class of RL problems . We consider RL problems which are obtained by the discretization of certain optimal control problems ( Bardi & Dolcetta , 1997 ) . The rationale for considering such RL problems is that a lot of RL problems with continuous action spaces arise from control problems for mechanical or robotic systems ( Lillicrap et al. , 2016 ; Gu et al. , 2016 ; Haarnoja et al. , 2017 ; Gu et al. , 2017 ; Kalashnikov et al. , 2018 ) , whose dynamics are , in fact , described by ordinary differential equations . For the considered class of problems , based on the idea of NAF , we present a new family of quadratic functions and prove that , first , this family is sufficiently rich to approximately solve the Bellman optimality equation ( Theorem 1 ) , and second , any sufficiently accurate solution of the Bellman optimality equation allows one to approximately obtain the optimal policy in the corresponding optimal control problem ( Theorem 2 ) . Moreover , we prove that it is impossible to get the same results for the original family of functions from Gu et al .
( 2016 ) ( Theorem 3 ) . From the obtained theoretical statements , we get some additional knowledge about the Q-function approximation by our family of quadratic functions and also provide several ways to use this knowledge in order to improve NAF . The experimental results confirm the efficiency of our improvements . 2 BACKGROUND . The standard reinforcement learning ( RL ) setup consists of an agent interacting with an environment ( Sutton & Barto , 2018 ) . This interaction is described by a Markov Decision Process ( MDP ) , which is a tuple ( S , U , P , R , ρ0 , γ ) , where S is a state space , U is an action space , P ( s′|s , u ) is a transition distribution , R ( s , u ) is a reward function , ρ0 ( s ) is an initial state distribution , and γ ∈ [ 0 , 1 ] is a discount factor . An aim of the agent is to learn its optimal policy µ∗ ( s ) that maximizes the value J ( µ ) = E [ ∞∑ i=0 γiR ( si , ui ) | s0 ∼ ρ0 ( s0 ) , ui = µ ( si ) , si+1 ∼ P ( si+1|si , ui ) , i = 0 , 1 , 2 , . . . ] . In the general statement of reinforcement learning problems , a policy of the agent can be stochastic , however , within this paper , we assume that the policy is deterministic . One of the most effective algorithms for solving RL problems is Q-learning ( Watkins & Dayan , 1992 ) . According to this algorithm , the agent explores the environment and looks for the optimal action-value function ( Q-function ) Q∗ ( s , u ) = sup µ E [ ∞∑ i=0 γiR ( si , ui ) | s0 = s , u0 = u , si+1 ∼ P ( si+1|si , ui ) , ui+1 = µ ( si+1 ) , i = 0 , 1 , 2 , . . . ] . as a solution of the Bellman optimality equation Q∗ ( s , u ) = E [ R ( s , u ) + γmax u′∈U Q∗ ( s ′ , u′ ) | s′ ∼ P ( s′|s , u ) ] . In other words , the agent solves the following minimization problem : sup s∈S , u∈U ∣∣∣Q ( s , u ) − E [ R ( s , u ) + γmax u′∈U Q ( s′ , u′ ) | s′ ∼ P ( s′|s , u ) ] ∣∣∣→ inf Q . 
( 1 ) If the agent knows the function Q∗ ( s , u ) , it can act optimally by the greedy policy µ∗ ( s ) ∈ argmaxu∈U Q∗ ( s , u ) . Initially , the Q-learning algorithm was applied to solving RL problems with finite state and action spaces . In this case , the Q-function is represented by a finite table and problem ( 1 ) is finite-dimensional . For the case of a large or continuous state space , Q-learning has recently been extended to the Deep Q-learning algorithm ( Mnih et al. , 2015 ) , which allows searching for the approximate Q-function in the class of neural networks Q ( x , u|θQ ) , where θQ is the parameter vector of the neural network . During the learning , the experiences ( si , ui , ri , si+1 ) are stored in the buffer D and simultaneously the parameter vector θQ is updated by means of stochastic gradient descent minimizing the loss function L ( θQ ) = E [ ( Q ( s , u|θQ ) − y )^2 | ( s , u , r , s′ ) ∼ U ( D ) ] , y = r + γ max u′∈U Q ( s′ , u′|θQ ) , ( 2 ) where U ( D ) is the uniform distribution on D. Deep Q-learning and its modifications are effective for a range of challenging tasks ( Wang et al. , 2016 ; van Hasselt et al. , 2016 ; Schaul et al. , 2016 ; Hessel et al. , 2018 ) ; note , however , that this algorithm cannot be directly applied to solving RL problems with continuous action spaces . The reason is that Deep Q-learning involves the maximization in ( 2 ) at each step of the learning , which is a complex problem for continuous U . Among various approaches to overcome this problem ( Lillicrap et al. , 2016 ; Haarnoja et al. , 2017 ; Kalashnikov et al. , 2018 ; Lim et al. , 2019 ; Ryu et al. , 2020 ; Lutter et al. , 2021 ) , we focus on the idea of the normalized advantage functions ( NAF ) algorithm ( Gu et al. , 2016 ) .
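To make the continuous-action difficulty concrete, the sketch below contrasts the explicit max in the Deep Q-learning target ( 2 ) with the closed-form maximum available when Q is quadratic in the action, as in NAF ( Gu et al. , 2016 ); plain functions and arrays stand in for the neural networks:

```python
import numpy as np

# Discrete actions: the DQN target requires an explicit max over U.
def dqn_target(r, gamma, q_next_values):
    return r + gamma * np.max(q_next_values)

# Continuous actions with a NAF-style quadratic Q:
# Q(s, u) = V(s) - 0.5 (u - mu(s))^T P(s) (u - mu(s)).
# Since P(s) is positive definite, max_u Q = V(s), attained at u = mu(s),
# so no numerical maximization over u is needed.
def naf_q(u, V, mu, P):
    d = u - mu
    return V - 0.5 * d @ P @ d

V, mu, P = 2.0, np.array([0.3, -0.1]), np.eye(2)
print(naf_q(mu, V, mu, P))   # maximum: equals V(s) = 2.0
```

For a discrete U the max is a cheap array reduction, but for continuous U it would require an inner optimization at every training step; the quadratic form replaces that inner optimization with a table lookup of V and mu.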
This idea consists in the approximation of the Q-function by the following functions , quadratic with respect to u : Q ( s , u|θQ ) = V ( s|θV ) + A ( s , u|θA ) , A ( s , u|θA ) = − ( 1/2 ) ( u − µ ( s|θµ ) )ᵀ P ( s|θP ) ( u − µ ( s|θµ ) ) , ( 3 ) where V ( s|θV ) , µ ( s|θµ ) , and P ( s|θP ) are neural networks with parameters θV , θµ , and θP , respectively ; P ( s|θP ) is a positive-definite square matrix for each s and θP ; θA = { θµ , θP } and θQ = { θA , θV } . Under the condition µ ( s|θµ ) ∈ U , ( 4 ) it allows obtaining the maximum and argmaximum values directly from the values of V ( s|θV ) and µ ( s|θµ ) : max u∈U Q ( s , u|θQ ) = V ( s|θV ) , Argmax u∈U Q ( s , u|θQ ) = µ ( s|θµ ) , ( 5 ) but , on the other hand , it brings up the question of the classes of RL problems in which quadratic approximations are acceptable . Below , we describe one such class . 3 PROBLEM STATEMENT . In this section , we consider a certain class of optimal control problems and show that discrete approximations of these problems can be formalized as RL problems . Consider the following optimal control problem : it is required to maximize the functional J ( u ( · ) ) = σ ( x ( T ) ) − ∫_0^T ( q ( t , x ( t ) ) + u ( t )ᵀ r ( t , x ( t ) ) u ( t ) ) dt , ( 6 ) over all u ( · ) , where x ( · ) is the solution ( Filippov , 1988 , §1 ) of the differential equation ( d/dt ) x ( t ) = f ( t , x ( t ) ) + g ( t , x ( t ) ) u ( t ) , t ∈ [ 0 , T ] , ( 7 ) under the initial condition x ( 0 ) = z .
( 8 ) Here t is the time variable , T > 0 is the terminal instant of time , x ( t ) ∈ Rn is the current state vector , u ( t ) ∈ U is the current control action vector forming the measurable function u ( · ) , U ⊂ Rm is the nonempty compact set , z ∈ Rn is the fixed initial state vector , f ( t , x ) ∈ Rn , g ( t , x ) ∈ Rn×m , q ( t , x ) ∈ R , r ( t , x ) ∈ Rm×m , ( t , x ) ∈ [ 0 , T ] × Rn are continuous with respect to t and continuously differentiable with respect to x functions , r ( t , x ) is the positive-definite matrix for each ( t , x ) ∈ [ 0 , T ] ×Rn , and σ ( x ) ∈ R , x ∈ Rn is the continuous function . We assume that there exists a constant cfg > 0 such that ∥f ( t , x ) + g ( t , x ) u∥ ≤ ( 1 + ∥x∥ ) cfg , ( t , x ) ∈ [ 0 , T ] × Rn , u ∈ U . ( 9 ) Note that , under these conditions , for each function u ( · ) , there exists a unique solution x ( · ) of equation ( 7 ) under the initial condition ( 8 ) ( Filippov , 1988 , §1 ) . Define the value function in optimal control problem ( 6 ) , ( 7 ) by V∗ ( t∗ , x∗ ) = sup u ( · ) ( σ ( x ( T ) ) − ∫ T t∗ ( q ( t , x ( t ) ) +u ( t ) T r ( t , x ( t ) ) u ( t ) ) dt ) , ( t∗ , x∗ ) ∈ [ 0 , T ] ×Rn , ( 10 ) where , for each u ( · ) , x ( · ) is the solution of equation ( 7 ) on the interval [ t∗ , T ] under the initial condition x ( t∗ ) = x∗ . Define the sets S = { ( t , x ) ∈ [ 0 , T ] × Rn : ∥x∥ ≤ ( 1 + ∥z∥ ) ecfgt − 1 } , S ( t ) = { x ∈ Rn : ( t , x ) ∈ S } . ( 11 ) Let k ∈ N , ∆tk = T/k , and ti = i∆tk , i ∈ 0 , k. Consider the corresponding discrete optimal control problem : it is required to maximize the function Jk ( u0 , u1 , . . . uk−1 ) = σ ( xk ) −∆tk k−1∑ i=0 ( q ( ti , xi ) + u T i r ( ti , xi ) ui ) , ( 12 ) over all ui ∈ U , i ∈ 0 , k − 1 , where ( x0 , x1 , . . . , xk ) is defined by x0 = z , xi+1 = xi + ( f ( ti , xi ) + g ( ti , xi ) ui ) ∆tk , i ∈ 0 , k − 1 . ( 13 ) Let us show that problem ( 12 ) , ( 13 ) can be formalized as the RL problem . 
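The discrete problem ( 12 )-( 13 ) amounts to an explicit Euler rollout of the dynamics plus a Riemann sum for the return, which can be sketched directly; the linear dynamics and quadratic costs below are illustrative stand-ins, not taken from the paper:

```python
import numpy as np

# Euler rollout of the controlled dynamics (13) and the discrete return (12):
# J_k = sigma(x_k) - dt * sum_i (q(t_i, x_i) + u_i^T r(t_i, x_i) u_i).
def discrete_return(z, controls, T, f, g, q, r, sigma):
    k = len(controls)
    dt = T / k
    x = np.array(z, dtype=float)
    J = 0.0
    for i, u in enumerate(controls):
        t = i * dt
        J -= (q(t, x) + u @ r(t, x) @ u) * dt        # running cost, Eq. (12)
        x = x + (f(t, x) + g(t, x) @ u) * dt          # Euler step, Eq. (13)
    return J + sigma(x)                               # terminal reward

# Toy instance: drift-free scalar dynamics with quadratic state costs.
f = lambda t, x: np.zeros_like(x)
g = lambda t, x: np.eye(len(x))
q = lambda t, x: float(x @ x)
r = lambda t, x: np.eye(1)
sigma = lambda x: -float(x @ x)
J = discrete_return(z=[1.0], controls=[np.array([0.0])] * 10, T=1.0,
                    f=f, g=g, q=q, r=r, sigma=sigma)
print(J)
```

With zero control the state stays at x = 1, so the return is the accumulated running cost of -1 plus the terminal reward of -1; refining k recovers the continuous-time functional ( 6 ) in the limit.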
First , we define the state and action spaces , the initial state distribution , and the discount factor as follows : S = ∪_{i=0}^{k} ( { ti } × S ( ti ) ) ∪ { sT } , U = U , ρ0 ( s0 ) = δ ( s0 = ( 0 , z ) ) , γ = 1 . ( 14 ) Here sT is a fictional terminal state and δ is the Dirac delta distribution . Next , for every i ∈ 0 , k − 1 , x ∈ S ( ti ) , and u ∈ U , we define the transition distribution and the reward function by P ( s′|s = ( ti , x ) , u ) = δ ( s′ = ( ti+1 , x′ ) ) , R ( s = ( ti , x ) , u ) = − ( q ( ti , x ) + uᵀ r ( ti , x ) u ) ∆tk , ( 15 ) where x′ = x + ( f ( ti , x ) + g ( ti , x ) u ) ∆tk . Taking into account ( 9 ) and ( 11 ) , one can prove the inclusion ( ti+1 , x′ ) ∈ S. Hence , the transition distribution P is well-defined . For i = k , we set P ( s′|s = ( tk , x ) , u ) = δ ( s′ = sT ) , R ( s = ( tk , x ) , u ) = σ ( x ) , x ∈ S ( tk ) , u ∈ U . ( 16 ) In order to make the dynamical processes ( 13 ) formally infinite , we put P ( s′|sT , u ) = δ ( s′ = sT ) , R ( sT , u ) = 0 , u ∈ U . ( 17 ) Thus , we define the MDP which describes the RL problem corresponding to problem ( 12 ) , ( 13 ) . Next , we show that such RL problems are suitable for quadratic approximations of the Q-function . | -The authors prove that a discrete approximation of a class of optimal control problems can be recast as an RL MDP. -The authors prove that the original NAF formulation of the Q-function cannot approximately solve the MDP defined above, and hence propose a new quadratic formulation of the Q-value. They apply their new formulation via Bounded, Reward-based, and Gradient-based BNAF. -The authors evaluate their proposed agent over 4 optimal control environments. | SP:767e7f52dbf4778ed9a304c587178b1781ce5a42