Add Batch c22a1ef9-8a71-4ff7-83ec-2d3ac191febf
This view is limited to 50 files because it contains too many changes. See raw diff
- agradientflowframeworkforanalyzingnetworkpruning/0fd7671f-6c6a-4be3-87d9-d4badb32bdde_content_list.json +3 -0
- agradientflowframeworkforanalyzingnetworkpruning/0fd7671f-6c6a-4be3-87d9-d4badb32bdde_model.json +3 -0
- agradientflowframeworkforanalyzingnetworkpruning/0fd7671f-6c6a-4be3-87d9-d4badb32bdde_origin.pdf +3 -0
- agradientflowframeworkforanalyzingnetworkpruning/full.md +731 -0
- agradientflowframeworkforanalyzingnetworkpruning/images.zip +3 -0
- agradientflowframeworkforanalyzingnetworkpruning/layout.json +3 -0
- apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/77c37bb7-b3f6-4adf-8e03-75d22ddc1ca9_content_list.json +3 -0
- apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/77c37bb7-b3f6-4adf-8e03-75d22ddc1ca9_model.json +3 -0
- apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/77c37bb7-b3f6-4adf-8e03-75d22ddc1ca9_origin.pdf +3 -0
- apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/full.md +290 -0
- apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/images.zip +3 -0
- apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/layout.json +3 -0
- areneuralrankersstilloutperformedbygradientboosteddecisiontrees/47e9890f-c34e-479a-af17-493072162936_content_list.json +3 -0
- areneuralrankersstilloutperformedbygradientboosteddecisiontrees/47e9890f-c34e-479a-af17-493072162936_model.json +3 -0
- areneuralrankersstilloutperformedbygradientboosteddecisiontrees/47e9890f-c34e-479a-af17-493072162936_origin.pdf +3 -0
- areneuralrankersstilloutperformedbygradientboosteddecisiontrees/full.md +391 -0
- areneuralrankersstilloutperformedbygradientboosteddecisiontrees/images.zip +3 -0
- areneuralrankersstilloutperformedbygradientboosteddecisiontrees/layout.json +3 -0
- asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/03151e5d-dbcd-4f79-89ec-5aacfe80c593_content_list.json +3 -0
- asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/03151e5d-dbcd-4f79-89ec-5aacfe80c593_model.json +3 -0
- asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/03151e5d-dbcd-4f79-89ec-5aacfe80c593_origin.pdf +3 -0
- asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/full.md +370 -0
- asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/images.zip +3 -0
- asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/layout.json +3 -0
- autoregressiveentityretrieval/3a88b05e-473a-4de5-8481-47a0e538d6d4_content_list.json +3 -0
- autoregressiveentityretrieval/3a88b05e-473a-4de5-8481-47a0e538d6d4_model.json +3 -0
- autoregressiveentityretrieval/3a88b05e-473a-4de5-8481-47a0e538d6d4_origin.pdf +3 -0
- autoregressiveentityretrieval/full.md +355 -0
- autoregressiveentityretrieval/images.zip +3 -0
- autoregressiveentityretrieval/layout.json +3 -0
- behavioralcloningfromnoisydemonstrations/71732701-928a-470e-8bdc-6e5162239933_content_list.json +3 -0
- behavioralcloningfromnoisydemonstrations/71732701-928a-470e-8bdc-6e5162239933_model.json +3 -0
- behavioralcloningfromnoisydemonstrations/71732701-928a-470e-8bdc-6e5162239933_origin.pdf +3 -0
- behavioralcloningfromnoisydemonstrations/full.md +314 -0
- behavioralcloningfromnoisydemonstrations/images.zip +3 -0
- behavioralcloningfromnoisydemonstrations/layout.json +3 -0
- benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/18ea42be-5b92-4b13-a3b3-551b4d5fdd6b_content_list.json +3 -0
- benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/18ea42be-5b92-4b13-a3b3-551b4d5fdd6b_model.json +3 -0
- benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/18ea42be-5b92-4b13-a3b3-551b4d5fdd6b_origin.pdf +3 -0
- benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/full.md +714 -0
- benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/images.zip +3 -0
- benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/layout.json +3 -0
- beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/27f95abb-7e15-43ae-913b-746319f62b32_content_list.json +3 -0
- beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/27f95abb-7e15-43ae-913b-746319f62b32_model.json +3 -0
- beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/27f95abb-7e15-43ae-913b-746319f62b32_origin.pdf +3 -0
- beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/full.md +338 -0
- beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/images.zip +3 -0
- beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/layout.json +3 -0
- bustlebottomupprogramsynthesisthroughlearningguidedexploration/8e440b64-4a3f-43e4-a021-feefad71e32b_content_list.json +3 -0
- bustlebottomupprogramsynthesisthroughlearningguidedexploration/8e440b64-4a3f-43e4-a021-feefad71e32b_model.json +3 -0
agradientflowframeworkforanalyzingnetworkpruning/0fd7671f-6c6a-4be3-87d9-d4badb32bdde_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ff188359dde418a6b73f01a34ea212e1c766298c9377ca9dec403c79a26e915
+size 162779
agradientflowframeworkforanalyzingnetworkpruning/0fd7671f-6c6a-4be3-87d9-d4badb32bdde_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6155f6578a4c99642a672c6cc0e0fc69af86d18b2f5618c001c0e30011c3b95b
+size 198010
agradientflowframeworkforanalyzingnetworkpruning/0fd7671f-6c6a-4be3-87d9-d4badb32bdde_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66e04a5d0aa5e08de6a750b43001a18611ed74ec2aaabc17e26b5b974bbbf7a0
+size 7769109
agradientflowframeworkforanalyzingnetworkpruning/full.md
ADDED
@@ -0,0 +1,731 @@
# A GRADIENT FLOW FRAMEWORK FOR ANALYZING NETWORK PRUNING

Ekdeep Singh Lubana & Robert P. Dick
EECS Department
University of Michigan
Ann Arbor, MI 48105, USA
{eslubana, dickrp}@umich.edu

# ABSTRACT

Recent network pruning methods focus on pruning models early-on in training. To estimate the impact of removing a parameter, these methods use importance measures that were originally designed to prune trained models. Despite lacking justification for their use early-on in training, such measures result in surprisingly low accuracy loss. To better explain this behavior, we develop a general framework that uses gradient flow to unify state-of-the-art importance measures through the norm of model parameters. We use this framework to determine the relationship between pruning measures and evolution of model parameters, establishing several results related to pruning models early-on in training: (i) magnitude-based pruning removes parameters that contribute least to reduction in loss, resulting in models that converge faster than magnitude-agnostic methods; (ii) loss-preservation based pruning preserves first-order model evolution dynamics and its use is therefore justified for pruning minimally trained models; and (iii) gradient-norm based pruning affects second-order model evolution dynamics, such that increasing gradient norm via pruning can produce poorly performing models. We validate our claims on several VGG-13, MobileNet-V1, and ResNet-56 models trained on CIFAR-10/CIFAR-100.

# 1 INTRODUCTION
The use of Deep Neural Networks (DNNs) in intelligent edge systems has been enabled by extensive research on model compression. "Pruning" techniques are commonly used to remove "unimportant" filters to either preserve or promote specific, desirable model properties. Most pruning methods were originally designed to compress trained models, with the goal of reducing inference costs only. For example, Li et al. (2017); He et al. (2018) proposed to remove filters with small $\ell_1/\ell_2$ norm, thus ensuring minimal change in model output. Molchanov et al. (2017; 2019); Theis et al. (2018) proposed to preserve the loss of a model, generally using Taylor expansions around a filter's parameters to estimate change in loss as a function of its removal.

Recent works focus on pruning models at initialization (Lee et al. (2019; 2020)) or after minimal training (You et al. (2020)), thus enabling reduction in both inference and training costs. To estimate the impact of removing a parameter, these methods use the same importance measures as designed for pruning trained models. Since such measures focus on preserving model outputs or loss, Wang et al. (2020) argue they are not well-motivated for pruning models early-on in training. However, in this paper, we demonstrate that if the relationship between importance measures used for pruning trained models and the evolution of model parameters is established, their use early-on in training can be better justified.

In particular, we employ gradient flow<sup>1</sup> to develop a general framework that relates state-of-the-art importance measures used in network pruning through the norm of model parameters. This framework establishes the relationship between regularly used importance measures and the evolution of a model's parameters, thus demonstrating why measures designed to prune trained models also perform well early-on in training. More generally, our framework enables better understanding of what properties make a parameter dispensable according to a particular importance measure. Our findings follow. (i) Magnitude-based pruning measures remove parameters that contribute least to reduction in loss. This enables magnitude-based pruned models to achieve faster convergence than magnitude-agnostic measures. (ii) Loss-preservation based measures remove parameters with the least tendency to change, thus preserving first-order model evolution dynamics. This shows the use of loss-preservation is justified for pruning models early-on in training as well. (iii) Gradient-norm based pruning is linearly related to second-order model evolution dynamics. Increasing gradient norm via pruning for even slightly trained models can permanently damage earlier layers, producing poorly performing architectures. This behavior is a result of aggressively pruning filters that maximally increase model loss. We validate our claims on several VGG-13, MobileNet-V1, and ResNet-56 models trained on CIFAR-10 and CIFAR-100.

# 2 RELATED WORK
Several pruning frameworks define importance measures to estimate the impact of removing a parameter. Most popular importance measures are based on parameter magnitude (Li et al. (2017); He et al. (2018); Liu et al. (2017)) or loss preservation (Molchanov et al. (2019; 2017); Theis et al. (2018); Gao et al. (2019)). Recent works show that using these measures, models pruned at initialization (Lee et al. (2019); Wang et al. (2020); Hayou et al. (2021)) or after minimal training (You et al. (2020)) achieve final performance similar to the original networks. Since measures for pruning trained models are motivated by output or loss preservation, Wang et al. (2020) argue they may not be well suited for pruning models early-on in training. They thus propose GraSP, a measure that promotes preservation of parameters that increase the gradient norm.

Despite its success, the foundations of network pruning are not well understood. Recent work has shown that good "subnetworks" that achieve similar performance to the original network exist within both trained (Ye et al. (2020)) and untrained models (Frankle & Carbin (2019); Malach et al. (2020); Pensia et al. (2020)). These works thus prove networks can be pruned without loss in performance, but do not indicate how a network should be pruned, i.e., which importance measures are preferable. In fact, Liu et al. (2019) show reinitializing pruned models before retraining rarely affects their performance, indicating the consequential differences among importance measures are in the properties of architectures they produce. Since different importance measures perform differently (see Appendix E), analyzing popular measures to determine which model properties they tend to preserve can reveal which measures lead to better-performing architectures.

From an implementation standpoint, pruning approaches can be placed in two categories. The first, structured pruning (Li et al. (2017); He et al. (2018); Liu et al. (2017); Molchanov et al. (2019; 2017); Gao et al. (2019)), removes entire filters, thus preserving structural regularity and directly improving execution efficiency on commodity hardware platforms. The second, unstructured pruning (Han et al. (2016b); LeCun et al. (1990); Hassibi & Stork (1993)), is more fine-grained, operating at the level of individual parameters instead of filters. Unstructured pruning has recently been used to reduce computational complexity as well, but requires specially designed hardware (Han et al. (2016a)) or software (Elsen et al. (2020)) capable of accelerating sparse operations. By clarifying benefits and pitfalls of popular importance measures, our work aims to ensure practitioners are better able to make informed choices for reducing DNN training/inference expenditure via network pruning. Thus, while results in this paper are applicable in both structured and unstructured settings, our experimental evaluation primarily focuses on structured pruning early-on in training. Results on unstructured pruning are relegated to Appendix H.

# 3 PRELIMINARIES: CLASSES OF STANDARD IMPORTANCE MEASURES
In this section, we review the most successful classes of importance measures for network pruning. These measures will be our focus in subsequent sections. We use bold symbols to denote vectors and italicize scalar variables. Consider a model that is parameterized as $\Theta(t)$ at time $t$. We denote the gradient of the loss with respect to model parameters at time $t$ as $\mathbf{g}(\Theta(t))$, the Hessian as $\mathbf{H}(\Theta(t))$, and the model loss as $L(\Theta(t))$. A general model parameter is denoted as $\theta(t)$. The importance of a set of parameters $\Theta_p(t)$ is denoted as $I(\Theta_p(t))$.

Magnitude-based measures: Both $\ell_1$ norm (Li et al. (2017)) and $\ell_2$ norm (He et al. (2018)) have been successfully used as magnitude-focused importance measures and generally perform equally well. Due to its differentiability, $\ell_2$ norm can be analyzed using gradient flow and will be our focus in the following sections.

$$
I\left(\boldsymbol{\Theta}_{p}(t)\right) = \left\| \boldsymbol{\Theta}_{p}(t) \right\|_{2}^{2} = \sum_{\theta_{i} \in \Theta_{p}} \left(\theta_{i}(t)\right)^{2}. \tag{1}
$$
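As a concrete illustration, Equation 1 amounts to a one-line reduction over a parameter group. The sketch below is illustrative only (the function name and toy filters are not from the paper's code), assuming each filter is flattened into a vector:

```python
import numpy as np

def magnitude_importance(theta_p):
    """Squared l2-norm importance of a parameter group (Equation 1)."""
    theta_p = np.asarray(theta_p, dtype=float)
    return float(np.sum(theta_p ** 2))

# Rank two toy filters: the smaller-norm filter scores lower and would
# be pruned first under magnitude-based pruning.
scores = {"filter_a": magnitude_importance([0.3, -0.4]),
          "filter_b": magnitude_importance([1.0, 2.0])}
print(min(scores, key=scores.get))  # → filter_a
```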
Loss-preservation based measures: These measures generally use a first-order Taylor decomposition to determine the impact removing a set of parameters has on model loss. Most recent methods (Molchanov et al. (2019; 2017); Ding et al. (2019); Theis et al. (2018)) for pruning trained models are variants of this method, often using additional heuristics to improve their performance.

$$
L(\boldsymbol{\Theta}(t) - \boldsymbol{\Theta}_{p}(t)) - L(\boldsymbol{\Theta}(t)) \approx -\boldsymbol{\Theta}_{p}^{T}(t)\, \mathbf{g}(\boldsymbol{\Theta}(t)). \tag{2}
$$

The equation above implies that the loss of a pruned model is higher (lower) than the original model if parameters with a negative (positive) value for $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ are removed. Thus, for preserving model loss, the following importance score should be used.

$$
I(\boldsymbol{\Theta}_{p}(t)) = \left| \boldsymbol{\Theta}_{p}^{T}(t)\, \mathbf{g}(\boldsymbol{\Theta}(t)) \right|. \tag{3}
$$
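A minimal sketch of Equation 3 follows; the function name and toy values are hypothetical, and in practice `grad_p` would come from backpropagation on a mini-batch:

```python
import numpy as np

def loss_preservation_importance(theta_p, grad_p):
    """|Theta_p^T g| (Equation 3): first-order estimate of how much
    removing the group Theta_p would change the model loss."""
    return float(abs(np.dot(theta_p, grad_p)))

# A group whose parameter-gradient products cancel scores low: pruning it
# leaves the loss unchanged to first order.
low  = loss_preservation_importance([1.0, 2.0], [0.5, -0.25])  # 0.5 - 0.5 = 0
high = loss_preservation_importance([1.0, 2.0], [0.5, 0.25])   # 0.5 + 0.5 = 1
```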
Increase in gradient-norm based measures: Wang et al. (2020) argue loss-preservation based methods are not well-motivated for pruning models early-on in training. They thus propose GraSP, an importance measure that prunes parameters whose removal increases the gradient norm and can enable fast convergence for a pruned model.

$$
\left\| \mathbf{g}(\boldsymbol{\Theta}(t) - \boldsymbol{\Theta}_{p}(t)) \right\|_{2}^{2} - \left\| \mathbf{g}(\boldsymbol{\Theta}(t)) \right\|_{2}^{2} \approx -2\, \boldsymbol{\Theta}_{p}^{T}(t)\, \mathbf{H}(\boldsymbol{\Theta}(t))\, \mathbf{g}(\boldsymbol{\Theta}(t)). \tag{4}
$$

The above equation implies that the gradient norm of a pruned model is higher than the original model if parameters with a negative value for $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ are removed. This results in the following importance score.

$$
I(\boldsymbol{\Theta}_{p}(t)) = \boldsymbol{\Theta}_{p}^{T}(t)\, \mathbf{H}(\boldsymbol{\Theta}(t))\, \mathbf{g}(\boldsymbol{\Theta}(t)). \tag{5}
$$
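Equation 5 can be sketched with an explicit Hessian on a toy problem. This is only feasible at toy scale; real GraSP implementations avoid materializing $\mathbf{H}$ by using a Hessian-vector product. Names and values below are illustrative:

```python
import numpy as np

def grasp_importance(theta_p, hessian, grad):
    """Theta_p^T H g (Equation 5). GraSP removes the lowest-scoring
    (most negative) groups, which increases the pruned gradient norm."""
    theta_p = np.asarray(theta_p, dtype=float)
    return float(theta_p @ np.asarray(hessian) @ np.asarray(grad))

H = np.array([[2.0, 0.0],      # toy explicit Hessian; at scale, compute
              [0.0, 1.0]])     # the Hessian-vector product H @ g instead
g = np.array([1.0, 2.0])
score = grasp_importance([1.0, 1.0], H, g)   # Hg = [2, 2]; score = 4
```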
As mentioned before, these importance measures were introduced for pruning trained models (except for GraSP), but are also used for pruning models early-on in training. In the following sections, we revisit the original goals for these measures, establish their relationship with evolution of model parameters over time, and provide clear justifications for their use early-on in training.

# 4 GRADIENT FLOW AND NETWORK PRUNING
Gradient flow, or gradient descent with infinitesimal learning rate, is a continuous-time version of gradient descent. The evolution over time of model parameters, gradient, and loss under gradient flow can be described as follows.

$$
\text{(Parameters over time)} \quad \frac{\partial \boldsymbol{\Theta}(t)}{\partial t} = -\mathbf{g}(\boldsymbol{\Theta}(t));
$$

$$
\text{(Gradient over time)} \quad \frac{\partial \mathbf{g}(\boldsymbol{\Theta}(t))}{\partial t} = -\mathbf{H}(\boldsymbol{\Theta}(t))\, \mathbf{g}(\boldsymbol{\Theta}(t)); \tag{6}
$$

$$
\text{(Loss over time)} \quad \frac{\partial L(t)}{\partial t} = -\left\| \mathbf{g}(\boldsymbol{\Theta}(t)) \right\|_{2}^{2}.
$$
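The parameter ODE in Equation 6 can be checked numerically on a toy quadratic loss by Euler-integrating with a small step, which approximates the flow. The loss, step size, and horizon below are illustrative choices, not from the paper:

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so g = A theta.
A = np.diag([2.0, 1.0])
grad = lambda th: A @ th

theta, dt = np.array([1.0, 1.0]), 1e-4
for _ in range(10_000):                 # integrate the flow to t = 1
    theta = theta - dt * grad(theta)

# Closed form under gradient flow: theta_i(t) = theta_i(0) * exp(-A_ii * t).
print(np.allclose(theta, [np.exp(-2.0), np.exp(-1.0)], atol=1e-3))  # → True
```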
Recall that standard importance measures based on loss-preservation (Equation 3) or increase in gradient-norm (Equation 5) are derived using a first-order Taylor series approximation, making them exactly valid under the continuous scenario of gradient flow. This indicates analyzing the evolution of model parameters via gradient flow can provide useful insights into the relationships between different importance measures. To this end, we use gradient flow to develop a general framework that relates different classes of importance measures through the norm of model parameters. As we develop this framework, we explain the reasons why importance measures defined for pruning trained models are also highly effective when used for pruning early-on in training. In Appendix B, we further extend this framework to models trained using stochastic gradient descent, showing that up to a first-order approximation, the observations developed here are valid under SGD training too.

Figure 1: Train/test accuracy curves for pruned ResNet-56 models on CIFAR-10 (left) and CIFAR-100 (right) over 25 rounds. Models are pruned using magnitude-based pruning (Magnitude), the proposed extension to loss preservation (Proposed), and loss-preservation based pruning (Loss-pres.). Magnitude-based pruning converges fastest, followed by the proposed measure. Curves for other models and number of rounds are shown in Appendix F.
# 4.1 GRADIENT FLOW AND MAGNITUDE-BASED PRUNING

We first analyze the evolution of the $\ell_2$ norm of model parameters, a magnitude-based pruning measure, as a function of time under gradient flow. For a model initialized as $\Theta(0)$, we denote parameters at time $T$ as $\Theta(T)$. Under this notation, the distance from initialization can be related to model loss as follows:

$$
\begin{aligned}
\left\| \boldsymbol{\Theta}(T) - \boldsymbol{\Theta}(0) \right\|_{2}^{2} &= \left\| \int_{0}^{T} \frac{\partial \boldsymbol{\Theta}(t)}{\partial t}\, dt \right\|_{2}^{2} = \left\| \int_{0}^{T} \mathbf{g}(\boldsymbol{\Theta}(t))\, dt \right\|_{2}^{2} \stackrel{(\mathrm{a})}{\leq} T \int_{0}^{T} \left\| \mathbf{g}(\boldsymbol{\Theta}(t)) \right\|_{2}^{2}\, dt \\
&\stackrel{(\mathrm{b})}{=} T \int_{0}^{T} -\frac{\partial L(t)}{\partial t}\, dt = T \left( L(0) - L(T) \right) \\
\Rightarrow\ & \frac{1}{T} \left\| \boldsymbol{\Theta}(T) - \boldsymbol{\Theta}(0) \right\|_{2}^{2} \leq L(0) - L(T), \tag{7}
\end{aligned}
$$
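The final inequality of Equation 7 can be sanity-checked numerically on a toy quadratic loss under (Euler-approximated) gradient flow; the loss and integration settings are illustrative assumptions:

```python
import numpy as np

A = np.diag([2.0, 1.0])                  # toy quadratic: L = 0.5 th^T A th
grad = lambda th: A @ th
loss = lambda th: 0.5 * th @ A @ th

theta0 = np.array([1.0, 1.0])
theta, dt, steps = theta0.copy(), 1e-4, 10_000
for _ in range(steps):                   # Euler-integrate the flow to T = 1
    theta = theta - dt * grad(theta)
T = dt * steps

lhs = float(np.sum((theta - theta0) ** 2)) / T  # (1/T)||Theta(T)-Theta(0)||^2
rhs = float(loss(theta0) - loss(theta))         # L(0) - L(T)
print(lhs <= rhs)  # → True
```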
where (a) follows from the application of triangle inequality for norms and the Cauchy-Schwarz inequality and (b) follows from Equation 6. The inequality above is a modification of a previously known result by Nagarajan & Kolter (2017), who show that change in model parameters, as measured by distance from initialization, is bounded by the ratio of loss at initialization to the norm of the gradient at time $T$. Frequently used initialization techniques are zero-centered with small variance (Glorot & Bengio (2010); He et al. (2015)). This reduces the impact of initialization, making $\|\Theta(T) - \Theta(0)\|_2^2 \approx \|\Theta(T)\|_2^2$. In fact, calculating the importance of filters in a model using $\ell_2$ norm versus using distance from initialization, we find the two measures have an average correlation of 0.994 across training epochs, and generally prune identical parameters. We now use Equation 7 to relate magnitude-based pruning with model loss at any given time. Specifically, reorganizing Equation 7 yields

$$
L(T) \leq L(0) - \frac{1}{T} \left\| \boldsymbol{\Theta}(T) - \boldsymbol{\Theta}(0) \right\|_{2}^{2} \approx L(0) - \frac{1}{T} \left\| \boldsymbol{\Theta}(T) \right\|_{2}^{2}. \tag{8}
$$
Based on this analysis, we make the following observation.

**Observation 1:** The larger the magnitude of parameters at a particular instant, the smaller the model loss at that instant will be. If these large-magnitude parameters are preserved while pruning (instead of smaller ones), the pruned model's loss decreases faster.

Equation 8 shows that the norm of model parameters bounds model loss at any given time. Thus, by preserving large-magnitude parameters, magnitude-based pruning enables faster reduction in loss than magnitude-agnostic techniques. For example, we later show that loss-preservation based pruning removes the most slowly changing parameters, regardless of their magnitude. Thus, magnitude-based pruned models will generally converge faster than loss-preservation based pruned models.
|
| 113 |
+
|
| 114 |
+
To verify this analysis also holds under the SGD training used in practice, we train several VGG-13, MobileNet-V1, and ResNet-56 models on CIFAR-10 and CIFAR-100. Our pruning setup starts with
|
| 115 |
+
|
| 116 |
+
Table 1: Accuracy of pruned models on CIFAR-10/CIFAR-100 (over 3 seeds; base model accuracies are reported in parentheses; std. dev. $< 0.3$ for all experiments; best results are in bold, second best are underlined). Magnitude-based pruning (Mag.) and the proposed extension to loss preservation (Proposed: $\sum_{\theta_i \in \Theta_p} |\theta_i(t)| |\theta_i(t)g(\theta_i(t))|$ ) consistently outperform plain loss-preservation based pruning (Loss). With more rounds, the proposed measure outperforms Magnitude-based pruning too.
|
| 117 |
+
|
| 118 |
+
<table><tr><td>Pruning Rounds</td><td></td><td colspan="3">1 round</td><td colspan="3">5 rounds</td><td colspan="3">25 rounds</td></tr><tr><td>CIFAR-10</td><td>% pruned</td><td>Mag.</td><td>Loss</td><td>Proposed</td><td>Mag.</td><td>Loss</td><td>Proposed</td><td>Mag.</td><td>Loss</td><td>Proposed</td></tr><tr><td>VGG-13 (93.1)</td><td>75%</td><td>92.05</td><td>92.01</td><td>92.13</td><td>92.32</td><td>91.43</td><td>92.29</td><td>92.06</td><td>91.53</td><td>92.45</td></tr><tr><td>MobileNet (92.3)</td><td>75%</td><td>91.71</td><td>91.17</td><td>91.73</td><td>91.76</td><td>90.99</td><td>91.89</td><td>91.52</td><td>90.96</td><td>91.77</td></tr><tr><td>ResNet-56 (93.1)</td><td>60%</td><td>91.41</td><td>91.09</td><td>91.54</td><td>91.80</td><td>91.39</td><td>91.88</td><td>91.95</td><td>91.47</td><td>92.17</td></tr><tr><td>CIFAR-100</td><td>% pruned</td><td>Mag.</td><td>Loss</td><td>Proposed</td><td>Mag.</td><td>Loss</td><td>Proposed</td><td>Mag.</td><td>Loss</td><td>Proposed</td></tr><tr><td>VGG-13 (69.6)</td><td>65%</td><td>67.89</td><td>68.61</td><td>69.01</td><td>69.31</td><td>67.88</td><td>68.25</td><td>68.31</td><td>68.11</td><td>68.93</td></tr><tr><td>MobileNet (69.1)</td><td>65%</td><td>68.01</td><td>67.16</td><td>68.52</td><td>68.33</td><td>67.21</td><td>68.41</td><td>67.58</td><td>67.31</td><td>68.35</td></tr><tr><td>ResNet-56 (71.0)</td><td>50%</td><td>66.92</td><td>66.88</td><td>67.10</td><td>68.04</td><td>67.11</td><td>67.70</td><td>68.75</td><td>67.74</td><td>68.62</td></tr></table>
|
| 119 |
+
|
| 120 |
+
randomly initialized models and uses a prune-and-train framework, where each round of pruning involves an epoch of training followed by pruning. A target amount of pruning (e.g., $75\%$ filters) is divided evenly over a given number of pruning rounds. Throughout training, we use a small temperature value $(= 5)$ to ensure smooth changes to model parameters. We provide results for 1, 5, and 25 rounds for all models and datasets. The results after 1 round, where pruning is single-shot, demonstrate that our claims are general and not an artifact of allowing the model to compensate for its lost parameters; the results after 5 and 25 rounds, where pruning is distributed over a number of rounds, demonstrate that our claims also hold when models compensate for lost parameters.
|
| 121 |
+
|
| 122 |
+
The results are shown in Table 1. Magnitude-based pruning consistently performs better than loss-preservation based pruning. Furthermore, train/test convergence for magnitude-based pruned models is faster than that for loss-preservation based pruned models, as shown in Figure 1. These results validate our claim that magnitude-based pruning results in faster converging, better-performing models when pruning early-on in training.
Extending existing importance measures: A fundamental understanding of existing importance measures can be exploited to extend them and to design new measures for pruning models early-on in training. For example, we modify loss-preservation based pruning to remove small-magnitude parameters that do not affect model loss, using the following importance measure: $\sum_{\theta_i \in \Theta_p} |\theta_i(t)| |\theta_i(t)g(\theta_i(t))|$ . Through $|\theta_i(t)g(\theta_i(t))|$ , this measure preserves loss and thus retains training progress up to the current instant; through $|\theta_i(t)|$ , it is biased towards removing small-magnitude parameters and should produce a model that converges faster than with loss-preservation alone. These expected properties are demonstrated in Table 1 and Figure 1: the proposed measure consistently outperforms loss-preservation based pruning and often outperforms magnitude-based pruning, especially when more rounds are used (see Table 1); train/test convergence rates for models pruned using this measure are better than those for loss-preservation based pruning, and competitive with those of magnitude-based pruning (see Figure 1).
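As an illustrative sketch only, the proposed measure can be computed per parameter from current values and gradients; the toy values below are hypothetical, and the paper's experiments score filters rather than individual weights:

```python
import numpy as np

def proposed_importance(theta, grad):
    # |theta| * |theta * grad|: the loss-preservation term |theta * grad|
    # retains training progress up to the current instant, while the extra
    # |theta| factor biases pruning towards small-magnitude parameters.
    return np.abs(theta) * np.abs(theta * grad)

# hypothetical parameter values and their loss gradients
theta = np.array([0.01, -0.8, 0.5, -0.02])
grad = np.array([0.30, 0.1, 0.0, 0.40])

scores = proposed_importance(theta, grad)
prune_idx = np.argsort(scores)[:2]  # remove the two least-important parameters
print(sorted(prune_idx.tolist()))   # → [0, 2]
```

Parameter 2 scores zero despite its large magnitude (its gradient term vanishes) and parameter 0 scores low on both factors, so both rank lowest under the combined measure.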
# 4.2 GRADIENT FLOW AND LOSS-PRESERVATION BASED PRUNING
Loss-preservation based pruning methods use first-order Taylor decomposition to determine the impact of pruning on model loss (see Equation 3). We now show that magnitude-based and loss-preservation based pruning measures are related by an order of time-derivative.
$$
\frac {\partial \left\| \boldsymbol {\Theta} (t) \right\| _ {2} ^ {2}}{\partial t} = 2 \boldsymbol {\Theta} ^ {T} (t) \frac {\partial \boldsymbol {\Theta} (t)}{\partial t} \stackrel {\mathrm {(a)}} {=} - 2 \boldsymbol {\Theta} ^ {T} (t) \mathbf {g} (\boldsymbol {\Theta} (t)), \tag {9}
$$
where (a) follows from Equation 6. This implies that
$$
\left| \frac {\partial \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2}}{\partial t} \right| = 2 \left| \boldsymbol {\Theta} ^ {T} (t) \mathbf {g} (\boldsymbol {\Theta} (t)) \right|. \tag {10}
$$
Based on Equation 10, the following observations can be made.

(a) VGG-13. (b) MobileNet-V1. (c) ResNet-56.

Figure 2: Correlation between $|\sigma \Delta \sigma|$ and loss-preservation based importance (see Equation 3) at every 10th epoch. Also plotted is the distance between pruning masks (target ratio: $20\%$ filters), as used by You et al. (2020) to decide when to prune a model. As the distance between pruning masks over consecutive epochs reduces, $|\sigma \Delta \sigma|$ becomes more correlated with loss-preservation importance. Results for a similar experiment on Tiny-ImageNet are shown in Appendix I and in an unstructured pruning setting are shown in Section H.1.

Observation 2: Up to a constant, the magnitude of the time-derivative of the norm of model parameters (the score for magnitude-based pruning) is equal to the importance measure used for loss-preservation (Equation 3). Further, loss-preservation corresponds to removing the slowest-changing parameters.
Equation 10 implies that loss-preservation based pruning preserves more than model loss: it also preserves the first-order dynamics of a model's evolution. This result demonstrates that loss-preservation based importance is in fact a well-motivated measure for pruning models early-on in training, and explains why loss-preservation based pruning has been successfully used for pruning both trained (Molchanov et al. (2017; 2019)) and untrained (Lee et al. (2019)) models. We also draw another important conclusion pertaining to pruning of trained models: parameters that continue to change are more important to retain for preserving model loss, while parameters that have the least tendency to change are dispensable. We note that the relationship in Equation 10 was also observed by Tanaka et al. (2020), but they do not study its implications in detail.
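Equation 9 can be sanity-checked numerically with one small Euler step of gradient flow on a toy quadratic loss; the matrix `A` below is an arbitrary stand-in for a real loss Hessian:

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so g(theta) = A theta.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = M @ M.T                        # arbitrary PSD "Hessian"
theta = rng.normal(size=5)

g = A @ theta                      # gradient of the loss
eta = 1e-6                         # tiny step approximates gradient flow

# One Euler step theta -> theta - eta * g; measure the change in ||theta||^2.
delta_norm = np.sum((theta - eta * g) ** 2) - np.sum(theta ** 2)
analytic = -2.0 * theta @ g        # Equation 9: d||theta||^2/dt = -2 theta^T g
print(delta_norm / eta, analytic)  # the two nearly coincide for small eta
```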
Observation 3: Due to their closely related nature, when used with additional heuristics, magnitude-based importance measures preserve loss.
Pruning methods often use additional heuristics with existing importance measures to improve their efficacy. E.g., You et al. (2020) recently showed that models become amenable to pruning early-on in training. To determine how much to train before pruning, they create a binary mask to represent filters that have low importance and are thus marked for pruning. A filter's importance is defined using the magnitude of its corresponding BatchNorm-scale parameter. If change in this binary mask is minimal over a predefined number of consecutive epochs, training is stopped and the model is pruned.
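A minimal sketch of this heuristic, assuming filter importance is simply the magnitude of the BatchNorm-scale parameter; the scale values and pruning ratio below are made up for illustration:

```python
import numpy as np

def prune_mask(bn_scales, prune_frac):
    # Mark the lowest-magnitude BatchNorm scales for pruning.
    k = int(prune_frac * len(bn_scales))
    mask = np.zeros(len(bn_scales), dtype=bool)
    mask[np.argsort(np.abs(bn_scales))[:k]] = True
    return mask

def mask_distance(m1, m2):
    # Hamming distance between consecutive-epoch masks; training is stopped
    # once this stays below a threshold over consecutive epochs.
    return int(np.sum(m1 != m2))

# hypothetical BatchNorm scales at two consecutive epochs
epoch_t = np.array([0.05, 0.9, 0.4, 0.02, 0.7])
epoch_t1 = np.array([0.04, 0.8, 0.5, 0.03, 0.6])

d = mask_distance(prune_mask(epoch_t, 0.4), prune_mask(epoch_t1, 0.4))
print(d)  # → 0: the mask is stable, so the model is deemed ready to prune
```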
Due to the related nature of magnitude-based and loss-preservation based measures, as shown in Equation 10, we expect that the use of additional heuristics can unknowingly extend an importance measure to preserve model properties it was not intentionally designed to target. To demonstrate this concretely, we analyze the implications of the "train until minimal change" heuristic by You et al., as explained above. In particular, consider a certain BatchNorm-scale parameter $\sigma$ that has a small magnitude and is thus marked for pruning. Its magnitude must continue to remain small to ensure the binary mask does not change. This implies the change in $\sigma$ over an epoch, or $\Delta \sigma$ , must be small. These two constraints can be combined into a single measure: $|\sigma \Delta \sigma|$ . The continuous-time, per-iteration version of this product is $|\sigma \frac{\partial \sigma}{\partial t}|$ . As shown in Equation 9 and Equation 10, $|\sigma \frac{\partial \sigma}{\partial t}|$ is the mathematical equivalent of loss-preservation based importance (Equation 3). Therefore, when applied per iteration, removing parameters with small magnitude, under the additional heuristic of minimal change, converts magnitude-based pruning into a loss-preservation strategy. If parameters do not change much over several iterations, this argument further extends to the change over an epoch. To confirm this, we train VGG-13, MobileNet-V1, and ResNet-56 models on CIFAR-100, record the distance between pruning masks for consecutive epochs, and calculate the correlation between loss-preservation based importance and $|\sigma \Delta \sigma|$ . As shown in Figure 2, as the mask distance reduces,

(a) Initialization. (b) 40 epochs. (c) 160 epochs.

Figure 3: $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ versus $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ for ResNet-56 models trained on CIFAR-100. Plots are shown for filters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training. The correlation is averaged over 3 seeds and plots are for 1 seed. As shown, the measures are highly correlated throughout model training, indicating gradient-norm increase may severely affect model loss if a partially or completely trained model is pruned using $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ . Plots for other models are shown in Appendix G. Results of this experiment in an unstructured pruning setting are shown in Section H.1.

the correlation between the importance measures increases. This confirms that due to the intimate relationship between magnitude-based pruning and loss-preservation based pruning, when used with additional heuristics, magnitude-based pruning extends to preserve model loss as well.
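The per-iteration step of this argument can be checked directly: under a plain gradient step, $\Delta\sigma = -\eta g_\sigma$, so $|\sigma \Delta \sigma|$ is a constant multiple of the loss-preservation score and the two correlate perfectly. A toy check with random, hypothetical scales and gradients:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = rng.normal(size=100)     # hypothetical BatchNorm-scale parameters
g_sigma = rng.normal(size=100)   # their loss gradients
eta = 0.1

delta_sigma = -eta * g_sigma                   # one plain gradient step
heuristic_score = np.abs(sigma * delta_sigma)  # |sigma * Delta sigma|
loss_pres_score = np.abs(sigma * g_sigma)      # loss-preservation importance

corr = np.corrcoef(heuristic_score, loss_pres_score)[0, 1]
print(corr)  # → 1.0 (up to floating point), since the scores differ by eta
```

Over a full epoch of real SGD the relation is only approximate, which is why the measured correlation in Figure 2 grows as masks stabilize rather than being identically one.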
# 4.3 GRADIENT FLOW AND GRADIENT-NORM BASED PRUNING
Having demonstrated the relationship between the first-order time derivative of the norm of model parameters and loss-preservation based pruning, we now consider the second-order time derivative of the norm of model parameters. We demonstrate its connection with GraSP (Wang et al. (2020)), a method designed for use early-on in training to increase gradient-norm using pruning.
$$
\begin{array}{l} \frac {\partial^ {2} \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2}}{\partial t ^ {2}} = \frac {\partial}{\partial t} \left(\frac {\partial \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2}}{\partial t}\right) \stackrel {\mathrm {(a)}} {=} - 2 \frac {\partial (\boldsymbol {\Theta} ^ {T} (t) \mathbf {g} (\boldsymbol {\Theta} (t)))}{\partial t} \\ = - 2 \left(\mathbf {g} ^ {T} (\boldsymbol {\Theta} (t)) \frac {\partial \boldsymbol {\Theta} (t)}{\partial t} + \boldsymbol {\Theta} ^ {T} (t) \frac {\partial \mathbf {g} (\boldsymbol {\Theta} (t))}{\partial t}\right) \tag {11} \\ \stackrel {\mathrm {(b)}} {=} 2 \left(\| \mathbf {g} (\boldsymbol {\Theta} (t)) \| _ {2} ^ {2} + \boldsymbol {\Theta} ^ {T} (t) \mathbf {H} (\boldsymbol {\Theta} (t)) \mathbf {g} (\boldsymbol {\Theta} (t))\right), \end{array}
$$
where (a) follows from Equation 9 and (b) follows from Equation 6. This implies that
$$
\frac {\partial^ {2} \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2}}{\partial t ^ {2}} = 2 \left(\| \mathbf {g} (\boldsymbol {\Theta} (t)) \| _ {2} ^ {2} + \boldsymbol {\Theta} ^ {T} (t) \mathbf {H} (\boldsymbol {\Theta} (t)) \mathbf {g} (\boldsymbol {\Theta} (t))\right). \tag {12}
$$
Equation 12 shows the second-order time derivative of the norm of model parameters is linearly related to the importance measure for increase in gradient-norm (Equation 5). In particular, parameters with a negative $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ reduce the "acceleration" at which a model approaches its final solution; removing them can thus speed up optimization.
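Equation 12 can be verified numerically on a toy quadratic loss, where the Hessian is constant and both sides are computable; the matrix `A` below is an arbitrary stand-in:

```python
import numpy as np

# Quadratic loss L = 0.5 * theta^T A theta gives g = A theta and H = A, so
# Equation 12 predicts d^2||theta||^2/dt^2 = 2(||g||^2 + theta^T H g).
rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6))
A = M @ M.T
theta0 = rng.normal(size=6)
eta = 1e-6

grad = lambda th: A @ th
theta1 = theta0 - eta * grad(theta0)   # two tiny Euler steps of gradient flow
theta2 = theta1 - eta * grad(theta1)
n0, n1, n2 = (np.sum(t ** 2) for t in (theta0, theta1, theta2))

second_diff = (n0 - 2 * n1 + n2) / eta ** 2   # finite-difference "acceleration"
analytic = 2 * (grad(theta0) @ grad(theta0) + theta0 @ A @ grad(theta0))
print(second_diff, analytic)                  # nearly equal for small eta
```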
We now use the above analysis to demonstrate the limitations of increasing gradient-norm via pruning.
Observation 4: Increasing gradient-norm via pruning removes parameters that maximally increase model loss.
Extending the "acceleration" analogy, recall that if an object has increasingly negative velocity, its acceleration will be negative as well. As shown before in Equation 9, the velocity of norm of model parameters is a constant multiple of the first-order Taylor decomposition for model loss around current parameter values. Thus, the parameter with the most negative value for $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ is likely to also have a large, negative value for $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ . From Equation 2, we observe that removing parameters with negative $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ increases model loss with respect to the original
Table 2: Accuracy of models pruned using different variants of gradient-norm based pruning on CIFAR-10/CIFAR-100 (over 3 seeds; base model accuracies are reported in parentheses; std. dev. $< 0.3$ for all models, except for GraSP $(\mathrm{T} = 1)$ with 5/25 rounds (std. dev. $< 2.1$ ); best results are in bold, second best are underlined). Gradient-norm preservation ( $|\text{GraSP}|$ $(\mathrm{T} = 1)$ ) outperforms both GraSP with large temperature $(\mathrm{T} = 200)$ and without temperature $(\mathrm{T} = 1)$ .
<table><tr><td>Pruning Rounds</td><td></td><td colspan="3">1 round</td><td colspan="3">5 rounds</td><td colspan="3">25 rounds</td></tr><tr><td>CIFAR-10</td><td>pruned (%)</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td></tr><tr><td>VGG-13 (93.1)</td><td>75%</td><td>91.62</td><td>91.32</td><td>91.92</td><td>90.46</td><td>83.77</td><td>91.83</td><td>89.57</td><td>77.12</td><td>91.75</td></tr><tr><td>MobileNet (92.3)</td><td>75%</td><td>89.46</td><td>90.03</td><td>91.03</td><td>86.88</td><td>80.95</td><td>90.89</td><td>80.37</td><td>76.06</td><td>91.01</td></tr><tr><td>ResNet-56 (93.1)</td><td>60%</td><td>91.01</td><td>90.81</td><td>91.21</td><td>90.44</td><td>80.15</td><td>91.25</td><td>86.23</td><td>10.00</td><td>91.63</td></tr><tr><td>CIFAR-100</td><td>pruned (%)</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td></tr></table>
model. This indicates $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ may in fact increase the gradient norm by removing parameters that maximally increase model loss.
To test this claim, we plot $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ alongside $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ for filters of VGG-13, MobileNet-V1, and ResNet-56 models trained on CIFAR-100 at various points in training and consistently find them to be highly correlated (see Figure 3). This strong correlation confirms that using $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ as an importance measure for network pruning increases gradient norm by removing parameters that maximally increase model loss.
Wang et al. (2020) remark in their work that it is possible that GraSP may increase the gradient-norm by increasing model loss. We provide evidence and rationale illuminating this remark: we show why preserving loss and increasing gradient-norm are antithetical. To avoid this behavior, Wang et al. propose to use a large temperature value (200) before calculating the Hessian-gradient product. We now demonstrate the pitfalls of this solution and propose a more robust approach.
Observation 5: Preserving gradient-norm maintains second-order model evolution dynamics and results in better-performing models than increasing gradient-norm.
Equation 12 shows the measure $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ affects a model's second-order evolution dynamics. The analysis in Observation 4 shows $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ and $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ are highly correlated. These results together imply that preserving gradient-norm $\left(|\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))|\right)$ should preserve both second-order model evolution dynamics and model loss. Since the correlation with loss-preservation based importance is not perfect, this strategy is only an approximation of loss-preservation. However, it is capable of avoiding increase in model loss due to pruning.
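To make the contrast concrete, the two selection rules can be sketched on per-parameter scores; the Hessian, gradient, and parameter values here are hypothetical stand-ins (the actual methods operate on filters and use temperature-scaled outputs):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
M = rng.normal(size=(n, n))
H = M @ M.T                      # hypothetical loss Hessian
theta = rng.normal(size=n)       # hypothetical parameters
g = rng.normal(size=n)           # hypothetical loss gradient

score = theta * (H @ g)          # per-parameter Theta_p^T H g
k = n // 2

# GraSP-style: remove the most negative scores to increase gradient-norm.
grasp_prune = np.argsort(score)[:k]
# Gradient-norm preservation: remove the smallest |score| so that both the
# gradient-norm and the second-order evolution dynamics change least.
preserve_prune = np.argsort(np.abs(score))[:k]
print(sorted(grasp_prune.tolist()), sorted(preserve_prune.tolist()))
```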
To demonstrate the efficacy of gradient-norm preservation and the effects of increase in model loss on GraSP, we prune VGG-13, MobileNet-V1, and ResNet-56 models trained on CIFAR-10 and CIFAR-100. The results are shown in Table 2 and lead to the following conclusions. (i) When a single round of pruning is performed, the accuracy of GraSP is essentially independent of temperature. This implies that using temperature may be unnecessary if the model is close to initialization. (ii) In a few epochs of training, when reduction in loss can be attributed to training of earlier layers (Raghu et al. (2017)), GraSP without large temperature chooses to prune earlier layers aggressively (see Section G.2 for layer-wise pruning ratios). This permanently damages the model and thus the accuracy of low-temperature GraSP decreases substantially with increasing training epochs. (iii) At high temperatures, the performance of pruned models is more robust to the number of rounds and pruning of earlier layers is reduced. However, reduction in accuracy is still significant. (iv) These behaviors are mitigated by using the proposed alternative: gradient-norm preservation $\left(|\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))|\right)$ . Since this measure preserves the gradient-norm and is correlated to loss-preservation, it focuses on ensuring the model's second-order training dynamics and loss remain the same, consistently resulting in the best performance.
# 5 CONCLUSION
In this paper, we revisit importance measures designed for pruning trained models to better justify their use early-on in training. Developing a general framework that relates these measures through the norm of model parameters, we analyze what properties of a parameter make it more dispensable according to each measure. This enables us to show that from the lens of model evolution, use of magnitude- and loss-preservation based measures is well-justified early-on in training. More specifically, by preserving parameters that enable fast convergence, magnitude-based pruning generally outperforms magnitude-agnostic methods. By removing parameters that have the least tendency to change, loss-preservation based pruning preserves first-order model evolution dynamics and is well-justified for pruning models early-on in training. We also explore implications of the intimate relationship between magnitude-based pruning and loss-preservation based pruning, demonstrating that one can evolve into the other as training proceeds. Finally, we analyze gradient-norm based pruning and show that it is linearly related to second-order model evolution dynamics. Due to this relationship, we find that increasing gradient-norm via pruning corresponds to removing parameters that maximally increase model loss. Since such parameters are concentrated in the initial layers early-on in training, this method can permanently damage a model's initial layers and undermine its ability to learn from later training. To mitigate this problem, we show the more robust approach is to prune parameters that preserve gradient norm, thus preserving a model's second-order evolution dynamics while pruning. In conclusion, our work shows the use of an importance measure for pruning models early-on in training is difficult to justify unless the measure's relationship with the evolution of a model's parameters over time is established. 
More generally, we believe new importance measures that specifically focus on pruning early-on should be directly motivated by a model's evolution dynamics.
# 6 ACKNOWLEDGEMENTS
We thank Puja Trivedi, Daniel Kunin, Hidenori Tanaka, and Chaoqi Wang for several helpful discussions during the course of this project. We also thank Puja Trivedi and Karan Desai for providing useful comments in the drafting of this paper.
# REFERENCES

X. Ding, G. Ding, X. Zhou, Y. Guo, J. Han, and J. Liu. Global Sparse Momentum SGD for Pruning Very Deep Neural Networks. In Proc. Adv. in Neural Information Processing Systems, 2019.

E. Elsen, M. Dukhan, T. Gale, and K. Simonyan. Fast Sparse ConvNets. In Proc. Int. Conf. on Computer Vision and Pattern Recognition, 2020.

J. Frankle and M. Carbin. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In Proc. Int. Conf. on Learning Representations, 2019.

W. Gao, Y. Liu, C. Wang, and S. Oh. Rate Distortion For Model Compression: From Theory To Practice. In Proc. Int. Conf. on Machine Learning, 2019.

X. Glorot and Y. Bengio. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proc. Int. Conf. on Artificial Intelligence and Statistics, 2010.

S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. Horowitz, and W. J. Dally. EIE: Efficient Inference Engine on Compressed Deep Neural Network. In Proc. Int. Symp. on Computer Architecture, 2016a.

S. Han, J. Pool, J. Tran, and W. J. Dally. Learning Both Weights and Connections For Efficient Neural Networks. In Proc. Adv. in Neural Information Processing Systems, 2016b.

B. Hassibi and D. G. Stork. Second Order Derivatives For Network Pruning: Optimal Brain Surgeon. In Proc. Adv. in Neural Information Processing Systems, 1993.

S. Hayou, J. Ton, A. Doucet, and Y. W. Teh. Robust Pruning at Initialization. In Proc. Int. Conf. on Learning Representations, 2021.

K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proc. Int. Conf. on Computer Vision, 2015.

Y. He, G. Kang, X. Dong, Y. Fu, and Y. Yang. Soft Filter Pruning For Accelerating Deep Convolutional Neural Networks. In Proc. Int. Joint Conf. on Artificial Intelligence, 2018.

G. Hinton, O. Vinyals, and J. Dean. Distilling the Knowledge in a Neural Network. In NeurIPS Deep Learning and Representation Learning Workshop, 2015.

Y. LeCun, J. S. Denker, and S. A. Solla. Optimal Brain Damage. In Proc. Adv. in Neural Information Processing Systems, 1990.

N. Lee, T. Ajanthan, and P. Torr. SNIP: Single-Shot Network Pruning Based on Connection Sensitivity. In Proc. Int. Conf. on Learning Representations, 2019.

N. Lee, T. Ajanthan, S. Gould, and P. Torr. A Signal Propagation Perspective For Pruning Neural Networks at Initialization. In Proc. Int. Conf. on Learning Representations, 2020.

H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf. Pruning Filters For Efficient ConvNets. In Proc. Int. Conf. on Learning Representations, 2017.

Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang. Learning Efficient Convolutional Networks Through Network Slimming. In Proc. Int. Conf. on Computer Vision, 2017.

Z. Liu, M. Sun, T. Zhou, G. Huang, and T. Darrell. Rethinking the Value of Network Pruning. In Proc. Int. Conf. on Learning Representations, 2019.

E. Malach, G. Yehudai, S. Shalev-Shwartz, and O. Shamir. Proving the Lottery Ticket Hypothesis: Pruning is All You Need. In Proc. Int. Conf. on Machine Learning, 2020.

P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz. Pruning Convolutional Neural Networks For Resource Efficient Inference. In Proc. Int. Conf. on Learning Representations, 2017.

P. Molchanov, A. Mallya, S. Tyree, I. Frosio, and J. Kautz. Importance Estimation For Neural Network Pruning. In Proc. Int. Conf. on Computer Vision and Pattern Recognition, 2019.

V. Nagarajan and Z. Kolter. Generalization in Deep Networks: The Role of Distance from Initialization. In Workshop on Deep Learning: Bridging Theory and Practice, 2017.

A. Pensia, S. Rajput, A. Nagle, H. Vishwakarma, and D. Papailiopoulos. Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient. In Proc. Adv. in Neural Information Processing Systems, 2020.

M. Raghu, J. Gilmer, J. Yosinski, and J. Sohl-Dickstein. SVCCA: Singular Vector Canonical Correlation Analysis For Deep Learning Dynamics and Interpretability. In Proc. Adv. in Neural Information Processing Systems, 2017.

H. Tanaka, D. Kunin, D. Yamins, and S. Ganguli. Pruning Neural Networks Without Any Data by Iteratively Conserving Synaptic Flow. In Proc. Adv. in Neural Information Processing Systems, 2020.

L. Theis, I. Korshunova, A. Tejani, and F. Huszár. Faster Gaze Prediction with Dense Networks and Fisher Pruning. In Proc. Euro. Conf. on Computer Vision, 2018.

C. Wang, G. Zhang, and R. Grosse. Picking Winning Tickets Before Training by Preserving Gradient Flow. In Proc. Int. Conf. on Learning Representations, 2020.

M. Ye, C. Gong, L. Nie, D. Zhou, A. Klivans, and Q. Liu. Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection. In Proc. Int. Conf. on Machine Learning, 2020.

H. You, C. Li, P. Xu, Y. Fu, Y. Wang, X. Chen, R. G. Baraniuk, Z. Wang, and Y. Lin. Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks. In Proc. Int. Conf. on Learning Representations, 2020.

# A ORGANIZATION

The appendix is organized as follows:

- Appendix B: Extends our gradient flow framework towards SGD-based training.
- Appendix C: Details the setup for training base models.
- Appendix D: Details the setup for training pruned models.
- Appendix E: Random/Uniform pruning results on CIFAR-100.
- Appendix F: Train/Test curves for magnitude-based and loss-preservation based pruned models.
- Appendix G: More results on gradient-norm based pruning.
  - Section G.1: Scatter plots demonstrating correlation between $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ and $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ .
  - Section G.2: Layer-wise pruning ratios for models pruned using gradient-norm based pruning variants.
  - Section G.3: Train/Test curves for gradient-norm based variants used in the main paper.
- Appendix H: Provides empirical verification for Observations 2-4 in an unstructured pruning setup.
- Appendix I: Provides empirical verification for Observations 2-4 on Tiny-ImageNet.

# B EXTENDING THE FRAMEWORK TO SGD TRAINING
The main paper focuses on using gradient flow to establish the relationship between frequently used importance measures for network pruning and evolution of model parameters. In practice, however, deep neural networks are trained using SGD (stochastic gradient descent) or its variants (e.g., SGD with momentum). This section serves to demonstrate that up to a first order approximation, our observations based upon gradient flow are theoretically valid for SGD training too. Our analysis is based on the fact that when an importance measure is employed, most pruning frameworks stop model training and use several mini-batches of training data to determine an approximation for gradient/Hessian. This enables us to use an expectation operation over mini-batches of data, making a theoretical analysis tractable.
# B.1 NOTATIONS AND SGD UPDATES
Notations: We first establish new notations for this section. For completeness, notations used in the main paper are recapped as well:

- $\Theta(t)$ : model parameters at time $t$ .
- $\eta$ : the learning rate at which stochastic gradient descent trains a model.
- $\mathbf{d}(\Theta(t); X_i)$ and $\mathbf{H}(\Theta(t); X_i)$ : estimates of the gradient and the Hessian of loss with respect to model parameters at time $t$ , as calculated using the $i^{th}$ mini-batch $(X_i)$ .
- $\mathbf{g}(\Theta(t))$ and $\mathbf{H}(\Theta(t))$ : the full-batch gradient and Hessian of loss with respect to model parameters at time $t$ .
SGD Update: Under stochastic gradient descent, a random mini-batch of data is sampled amongst all mini-batches without replacement. For example, if mini-batch $X_{i}$ is sampled, a gradient approximation based on its data elements will be used to update model parameters at the current iteration. Therefore, the model parameters evolve as follows:
$$
\boldsymbol {\Theta} (t) \rightarrow \boldsymbol {\Theta} (t) - \eta \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}). \tag {13}
$$
# B.2 CHANGE IN NORM OF MODEL PARAMETERS: $\Delta \| \Theta (t)\| _2^2$
We first use SGD to analyze change in norm of model parameters, denoted as $\Delta \| \Theta (t)\| _2^2$ . Assume mini-batch $X_{i}$ is used to update model parameters at the current iteration.
$$
\begin{array}{l} \Delta \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2} = \| \boldsymbol {\Theta} (t) - \eta \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \| _ {2} ^ {2} - \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2} \\ = \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2} - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) + \eta^ {2} \| \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \| _ {2} ^ {2} - \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2} \tag {14} \\ = - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) + \mathcal {O} (\eta^ {2}) \\ \approx - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}), \end{array}
$$
where we ignore terms with higher-order products of the learning rate $\eta$ ; the smaller the learning rate, the better this approximation.
As mentioned before, to compute importance estimates for network pruning, most pruning frameworks stop the training process and compute estimates for gradient/Hessian using several mini-batches of data. To account for this, we can take expectation over the mini-batch sampling process. For example, if input data is denoted as $X$ , under expectation, Equation 14 changes as follows:
$$
\begin{array}{l} E _ {X _ {i} \sim X} \left[ \Delta \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2} \right] = E _ {X _ {i} \sim X} \left[ - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \right] \\ = - 2 \eta \boldsymbol {\Theta} (t) ^ {T} E _ {X _ {i} \sim X} [ \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) ] \\ = - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {g} (\boldsymbol {\Theta} (t)). \tag {15} \\ \Longrightarrow E _ {X _ {i} \sim X} \left[ \frac {\Delta \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2}}{\eta} \right] = - 2 \boldsymbol {\Theta} (t) ^ {T} \mathbf {g} (\boldsymbol {\Theta} (t)). \end{array}
$$
As the main paper shows (see Equation 9), under gradient flow, the norm of model parameters evolves as follows:
$$
\frac {\partial \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2}}{\partial t} = - 2 \boldsymbol {\Theta} (t) ^ {T} \mathbf {g} (\boldsymbol {\Theta} (t)). \tag {16}
$$
Therefore, comparing Equation 15 and Equation 16, we see that, up to a first-order approximation, the expected rate at which the norm of model parameters evolves under SGD is exactly the same as that under gradient flow. In the main paper, we use this relationship to derive Observations 2 and 3. Thus, the analysis in this section indicates that Observations 2 and 3 are valid approximations under SGD training as well.
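The first-order claim in Equation 15 can be checked on a toy least-squares problem, where per-minibatch gradients average exactly to the full-batch gradient; the data, batch split, and step size below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(64, 5))     # toy regression inputs
y = rng.normal(size=64)          # toy targets
theta = rng.normal(size=5)
eta = 1e-6

def batch_grad(Xb, yb, th):
    # gradient of the mean squared error on one mini-batch
    return 2 * Xb.T @ (Xb @ th - yb) / len(yb)

deltas = []
for idx in np.split(np.arange(64), 8):        # 8 equal-size mini-batches
    d = batch_grad(X[idx], y[idx], theta)
    deltas.append(np.sum((theta - eta * d) ** 2) - np.sum(theta ** 2))

lhs = np.mean(deltas) / eta                   # E[Delta ||theta||^2 / eta]
g_full = batch_grad(X, y, theta)
print(lhs, -2 * theta @ g_full)               # nearly equal, matching Eq. 15
```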
# B.3 CHANGE IN CHANGE OF NORM OF MODEL PARAMETERS: $\Delta^2\| \Theta (t)\| _2^2$
We now determine the expected rate at which change in norm of model parameters itself changes under SGD. To this end, we will calculate $\Delta^2\|\Theta(t)\|_2^2 = \Delta(\Delta\|\Theta(t)\|_2^2)$ .
Equation 14 shows that when mini-batch $X_{i}$ is used to take an optimization step, up to a first-order approximation, the change in norm of model parameters $(\Delta \| \Theta (t)\| _2^2)$ is as follows:
$$
\Delta \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2} = - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}). \tag {17}
$$
After the optimization step based on mini-batch $X_{i}$ has been completed, the model parameters change from $\Theta (t)$ to $\Theta (t) - \eta \mathbf{d}(\Theta (t);X_i)$ . Assume that a new mini-batch $X_{j}$ is sampled for the next optimization step, i.e., the next gradient estimate is $\mathbf{d}\left(\Theta (t) - \eta \mathbf{d}\left(\Theta (t);X_i\right);X_j\right)$ . To relate this new gradient estimate to the previous one, we use a first-order Taylor approximation of the model gradient based on mini-batch $X_{j}$ :
$$
\mathbf {d} (\boldsymbol {\Theta} (t) - \eta \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}); X _ {j}) = \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) - \eta \mathbf {H} (\boldsymbol {\Theta} (t); X _ {j}) \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}). \tag {18}
$$
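The expansion in Equation 18 can be sanity-checked numerically: for a quadratic per-mini-batch loss $L_i(\theta) = \tfrac{1}{2}\theta^T A_i \theta$, the gradient is $\mathbf{d}(\theta;X_i) = A_i\theta$ and the Hessian is $A_i$, so the first-order expansion holds exactly. The matrices below are illustrative stand-ins for two mini-batch Hessians, not taken from the paper.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def scale(c, v):
    return [c * x for x in v]

eta = 0.01
A_i = [[2.0, 0.5], [0.5, 1.0]]    # Hessian for mini-batch X_i (illustrative)
A_j = [[1.5, -0.3], [-0.3, 2.5]]  # Hessian for mini-batch X_j (illustrative)
theta = [1.0, -2.0]

d_i = matvec(A_i, theta)                  # d(theta; X_i)
theta_next = sub(theta, scale(eta, d_i))  # theta - eta * d(theta; X_i)

# Left side: gradient at the updated parameters, estimated on X_j.
lhs = matvec(A_j, theta_next)
# Right side: d(theta; X_j) - eta * H(theta; X_j) d(theta; X_i), as in Eq. 18.
rhs = sub(matvec(A_j, theta), scale(eta, matvec(A_j, d_i)))

assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

For non-quadratic losses the same identity holds up to $\mathcal{O}(\eta^2)$ error, which is all the derivation that follows requires.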
Note that in the above equation, the Hessian estimate is based on mini-batch $X_{j}$ only. Using this relationship, the change in norm of model parameters can be approximated as follows:
$$
\begin{array}{l} \Delta \left\| \boldsymbol {\Theta} (t) - \eta \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \right\| _ {2} ^ {2} = - 2 \eta (\boldsymbol {\Theta} (t) - \eta \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i})) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t) - \eta \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}); X _ {j}) \\ = - 2 \eta (\boldsymbol {\Theta} (t) - \eta \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i})) ^ {T} (\mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) - \eta \mathbf {H} (\boldsymbol {\Theta} (t); X _ {j}) \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i})) \\ = - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) + 2 \eta^ {2} \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t); X _ {j}) \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \\ + 2 \eta^ {2} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) + \mathcal {O} (\eta^ {3}) \\ \approx - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) + 2 \eta^ {2} \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t); X _ {j}) \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \\ + 2 \eta^ {2} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}), \tag {19} \\ \end{array}
$$
where we ignore terms of order $\mathcal{O}(\eta^3)$ . The above result can now be used to calculate $\Delta^2\|\Theta(t)\|_2^2$ , our desired quantity.
$$
\begin{array}{l} \Delta^ {2} \left\| \boldsymbol {\Theta} (t) \right\| _ {2} ^ {2} = \Delta \left\| \boldsymbol {\Theta} (t) - \eta \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \right\| _ {2} ^ {2} - \Delta \left\| \boldsymbol {\Theta} (t) \right\| _ {2} ^ {2} \\ = \left[ - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) + 2 \eta^ {2} \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t); X _ {j}) \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \right. \\ + 2 \eta^ {2} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) ] - [ - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) ] \\ = - 2 \eta \boldsymbol {\Theta} (t) ^ {T} (\mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) - \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i})) + 2 \eta^ {2} \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t); X _ {j}) \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \\ + 2 \eta^ {2} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}). \tag {20} \\ \end{array}
$$
We again use the fact that importance estimates for network pruning are computed by stopping model training and averaging gradient/Hessian estimates over several mini-batches of data. We denote this through an expectation over input data $X$ , where the random variables are the mini-batches $X_{i}$ and $X_{j}$ . As mini-batches are sampled independently, the expectation of a product of functions of the two factorizes: i.e., $E_{X_i,X_j\sim X}[f(X_i)g(X_j)] = E_{X_i\sim X}[f(X_i)]E_{X_j\sim X}[g(X_j)]$ .
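This factorization can be checked exactly when mini-batches are drawn uniformly from a finite pool, by enumerating all ordered pairs. The per-batch statistics and the functions `f`, `g` below are arbitrary illustrations, not quantities from the paper.

```python
# Per-batch statistics for a small pool of mini-batches (illustrative values).
batches = [0.5, 1.0, 1.5, 2.0]
f = lambda x: x * x           # some function of mini-batch X_i
g = lambda x: 3.0 * x - 1.0   # some function of mini-batch X_j

n = len(batches)
# E[f(X_i) g(X_j)] over independent draws = average over all ordered pairs.
lhs = sum(f(xi) * g(xj) for xi in batches for xj in batches) / (n * n)
# E[f(X_i)] E[g(X_j)] = product of the individual averages.
rhs = (sum(f(x) for x in batches) / n) * (sum(g(x) for x in batches) / n)

assert abs(lhs - rhs) < 1e-12
```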
$$
\begin{array}{l} \Rightarrow E _ {X _ {i}, X _ {j} \sim X} \left[ \Delta^ {2} \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2} \right] = - 2 \eta E _ {X _ {i}, X _ {j} \sim X} \left[ \boldsymbol {\Theta} (t) ^ {T} (\mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) - \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i})) \right] \\ + 2 \eta^ {2} E _ {X _ {i}, X _ {j} \sim X} \left[ \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t); X _ {j}) \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \right] \\ + 2 \eta^ {2} E _ {X _ {i}, X _ {j} \sim X} \left[ \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) ^ {T} \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) \right] \\ = - 2 \eta \boldsymbol {\Theta} (t) ^ {T} \left( E _ {X _ {j} \sim X} \left[ \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) \right] - E _ {X _ {i} \sim X} \left[ \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \right] \right) \\ + 2 \eta^ {2} \boldsymbol {\Theta} (t) ^ {T} E _ {X _ {j} \sim X} \left[ \mathbf {H} (\boldsymbol {\Theta} (t); X _ {j}) \right] E _ {X _ {i} \sim X} \left[ \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \right] \\ + 2 \eta^ {2} E _ {X _ {i} \sim X} \left[ \mathbf {d} (\boldsymbol {\Theta} (t); X _ {i}) \right] ^ {T} E _ {X _ {j} \sim X} \left[ \mathbf {d} (\boldsymbol {\Theta} (t); X _ {j}) \right] \\ = - 2 \eta \boldsymbol {\Theta} (t) ^ {T} (\mathbf {g} (\boldsymbol {\Theta} (t)) - \mathbf {g} (\boldsymbol {\Theta} (t))) + 2 \eta^ {2} \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t)) \mathbf {g} (\boldsymbol {\Theta} (t)) \\ + 2 \eta^ {2} \mathbf {g} (\boldsymbol {\Theta} (t)) ^ {T} \mathbf {g} (\boldsymbol {\Theta} (t)) \\ = 2 \eta^ {2} \left(\left\| \mathbf {g} (\boldsymbol {\Theta} (t)) \right\| ^ {2} + \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t)) \mathbf {g} (\boldsymbol {\Theta} (t))\right). \tag {21} \\ \end{array}
$$
$$
\Longrightarrow E _ {X _ {i}, X _ {j} \sim X} \left[ \frac {\Delta^ {2} \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2}}{\eta^ {2}} \right] = 2 \left(\| \mathbf {g} (\boldsymbol {\Theta} (t)) \| ^ {2} + \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t)) \mathbf {g} (\boldsymbol {\Theta} (t))\right). \tag {22}
$$
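Equation 22 can also be sanity-checked numerically in the deterministic, full-batch special case, where every gradient estimate equals $\mathbf{g}$ and the loss is quadratic, $L(\theta) = \tfrac{1}{2}\theta^T A\theta$ with $\mathbf{g} = A\theta$ and $\mathbf{H} = A$. The matrix and starting point below are illustrative; for small $\eta$ the second difference of $\|\theta\|_2^2$ over two gradient steps should match $2\eta^2(\|\mathbf{g}\|^2 + \theta^T \mathbf{H}\mathbf{g})$ up to $\mathcal{O}(\eta^3)$.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[2.0, 0.3], [0.3, 1.2]]  # Hessian of the quadratic loss (illustrative)
theta0 = [1.0, -0.5]
eta = 1e-3

# Two explicit gradient-descent steps.
g0 = matvec(A, theta0)
theta1 = [t - eta * g for t, g in zip(theta0, g0)]
g1 = matvec(A, theta1)
theta2 = [t - eta * g for t, g in zip(theta1, g1)]

# Second difference of the squared parameter norm.
delta0 = dot(theta1, theta1) - dot(theta0, theta0)
delta1 = dot(theta2, theta2) - dot(theta1, theta1)
second_diff = delta1 - delta0

# Prediction from Equation 22: 2 eta^2 (||g||^2 + theta^T H g).
predicted = 2 * eta**2 * (dot(g0, g0) + dot(theta0, matvec(A, g0)))

assert abs(second_diff - predicted) / abs(predicted) < 1e-2
```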
As shown in the main paper (see Equation 12), under gradient flow, the rate at which change in norm of model parameters changes is as follows:
$$
\frac {\partial^ {2} \| \boldsymbol {\Theta} (t) \| _ {2} ^ {2}}{\partial t ^ {2}} = 2 \left(\| \mathbf {g} (\boldsymbol {\Theta} (t)) \| ^ {2} + \boldsymbol {\Theta} (t) ^ {T} \mathbf {H} (\boldsymbol {\Theta} (t)) \mathbf {g} (\boldsymbol {\Theta} (t))\right). \tag {23}
$$
As Equation 22 and Equation 23 show, up to a first-order approximation, the expected rate at which the change in norm of model parameters changes under SGD is exactly the same as that under gradient flow. In the main paper, we use this relationship to conclude Observations 4 and 5. Thus, our analysis in this section indicates that Observations 4 and 5 are in fact valid approximations under SGD training as well.
Overall, in this section, we showed that the relationships between commonly used measures for network pruning and their expected impact on model parameters are the same for both SGD training and gradient flow, up to a first-order approximation.
# C TRAINING BASE MODELS
We analyze models trained on CIFAR-10 and CIFAR-100 datasets. All models are trained with the exact same setup. In all our experiments, we average results across 3 seeds.
Since pruning has a highly practical motivation, we argue it is important to analyze a variety of models that capture different architecture classes. To this end, we use VGG-13 (a vanilla CNN), MobileNet-V1 (a low-redundancy CNN designed specifically for efficient computation), and ResNet-56 (a residual network, to analyze the effects of pruning in the presence of skip connections). The final setup is as follows:
- Optimizer: SGD,
- Momentum: 0.9,
- Weight decay: 0.0001,
- Learning rate schedule: (0.1, 0.01, 0.001),
- Number of epochs for each learning rate: (80, 40, 40),
- Batch size: 128.
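For concreteness, the piecewise-constant learning-rate schedule above can be expressed as a small helper. The function name and structure are our own; only the rate values and epoch counts come from the list above.

```python
def learning_rate(epoch, lrs=(0.1, 0.01, 0.001), spans=(80, 40, 40)):
    """Return the learning rate for a given (0-indexed) epoch of the schedule."""
    boundary = 0
    for lr, span in zip(lrs, spans):
        boundary += span
        if epoch < boundary:
            return lr
    raise ValueError("epoch beyond the 160-epoch schedule")

assert learning_rate(0) == 0.1      # epochs 0-79 use 0.1
assert learning_rate(79) == 0.1
assert learning_rate(80) == 0.01    # epochs 80-119 use 0.01
assert learning_rate(120) == 0.001  # epochs 120-159 use 0.001
```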
# D TRAINING PRUNED MODELS
The general setup for training pruned models is the same as that for training base models. However, for a fair comparison, if the prune-and-train framework takes $n$ rounds, we subtract $n$ epochs from the number of epochs allotted to the highest learning rate. This ensures the same training budget for all models and pruning strategies. Note that even if this subtraction is not performed, the results do not vary much, as the model already converges (see Appendix F for train/test curves). The final setup is as follows:
- Optimizer: SGD,
- Momentum: 0.9,
- Weight decay: 0.0001,
- Learning rate schedule: (0.1, 0.01, 0.001),
- Number of epochs for each learning rate: (80 - number of pruning rounds, 40, 40),
- Batch size: 128.
Since we prune models in a structured manner, our target pruning ratios are chosen in terms of the number of filters to be pruned. A rough translation in terms of parameters follows:
- VGG-13: $75\%$ filters ( $\sim 94\%$ parameters) for CIFAR-10; $65\%$ filters ( $\sim 89\%$ parameters) for CIFAR-100,
- MobileNet-V1: $75\%$ filters ( $\sim 94\%$ parameters) for CIFAR-10; $65\%$ filters ( $\sim 89\%$ parameters) for CIFAR-100,
- ResNet-56: $60\%$ filters ( $\sim 88\%$ parameters) for CIFAR-10; $50\%$ filters ( $\sim 83\%$ parameters) for CIFAR-100.
The percentage of pruned parameters is much larger than the percentage of pruned filters because generally filters from deeper layers are pruned more heavily. These filters form the bulk of a model's parameters, thus resulting in high parameter percentages.
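A back-of-the-envelope calculation also makes the filter-to-parameter translation plausible: removing a filter deletes its own weights and the matching input channels of the next layer, so pruning a fraction $p$ of filters uniformly removes roughly $1 - (1-p)^2$ of a plain CNN's parameters. This is our own approximation, not the paper's calculation; the exact ratios above additionally depend on where pruning concentrates.

```python
def approx_param_fraction_pruned(filter_fraction):
    """Crude estimate: surviving params scale with (kept filters)^2."""
    kept = 1.0 - filter_fraction
    return 1.0 - kept * kept

# 75% of filters -> ~94% of parameters, matching the reported CIFAR-10 figure.
assert abs(approx_param_fraction_pruned(0.75) - 0.9375) < 1e-9
# 65% of filters -> ~88%, close to the reported ~89% for CIFAR-100.
assert abs(approx_param_fraction_pruned(0.65) - 0.8775) < 1e-9
```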
# E RANDOM/UNIFORM PRUNING
In this section, we compare various pruning measures with random and uniform pruning under the same experimental setup as used in the main paper (see Appendix D). For random pruning, we assign each filter a score drawn uniformly at random from [0, 1]. For uniform pruning, we choose a target pruning percentage and remove that percentage of filters from every layer. For example, if $50\%$ of the model is to be pruned, we randomly remove half the filters from each layer.
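The two baselines can be sketched as follows: random pruning scores every filter i.i.d. and removes the globally lowest-scored ones, while uniform pruning removes the same fraction from every layer. The layer sizes and target ratio below are illustrative, not the paper's architectures.

```python
import random

def random_prune(layer_sizes, ratio, rng):
    """Global pruning by i.i.d. uniform scores: lowest-scored filters removed."""
    scores = [(rng.random(), layer, idx)
              for layer, n in enumerate(layer_sizes) for idx in range(n)]
    n_prune = int(ratio * sum(layer_sizes))
    return {(layer, idx) for _, layer, idx in sorted(scores)[:n_prune]}

def uniform_prune(layer_sizes, ratio, rng):
    """Remove the same fraction of randomly chosen filters from every layer."""
    pruned = set()
    for layer, n in enumerate(layer_sizes):
        for idx in rng.sample(range(n), int(ratio * n)):
            pruned.add((layer, idx))
    return pruned

layers = [64, 128, 256]  # illustrative per-layer filter counts
globally = random_prune(layers, 0.5, random.Random(0))
per_layer = uniform_prune(layers, 0.5, random.Random(0))

assert len(globally) == 224           # half of the 448 filters overall
assert len(per_layer) == 32 + 64 + 128  # exactly half of each layer
```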
If the choice of importance measure were unimportant in structured pruning settings, both random and uniform pruning should perform competitively with standardly used importance measures for network pruning. However, as we show in Table 3, random and uniform pruning perform much worse than the standardly used metrics.
Table 3: Reduction in test accuracy for CIFAR-100 models pruned randomly, uniformly, and using the importance measures analyzed in the main paper. Test accuracies of the base models are reported in the table. When pruning is distributed over several rounds, it often leads to small amounts of pruning in a given round. Since only an integer number of filters can be pruned, a small ratio results in no pruning at layers with few filters, and other layers are pruned more to compensate. While this issue does not arise in the VGG-13 and MobileNet-V1 models, for ResNet-56, which has several layers with only 16 filters each, often very little pruning takes place in such layers. Due to this, later layers are pruned more aggressively than earlier layers, resulting in poor performance for both 5 and 25 rounds of random/uniform pruning on ResNet-56 (entries marked with †).
<table><tr><td>VGG-13 (65% pruned)</td><td>Base Model</td><td>Random</td><td>Uniform</td><td>Mag. Based</td><td>Loss Based</td><td>Proposed Extension</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td></tr><tr><td>1 round</td><td>69.62</td><td>-5.71</td><td>-5.60</td><td>-1.73</td><td>-1.01</td><td>-0.61</td><td>-1.11</td><td>-3.83</td><td>-0.96</td></tr><tr><td>5 rounds</td><td></td><td>-5.49</td><td>-4.57</td><td>-0.31</td><td>-1.74</td><td>-1.37</td><td>-3.22</td><td>-15.06</td><td>-1.53</td></tr><tr><td>25 rounds</td><td></td><td>-4.55</td><td>-3.19</td><td>-1.31</td><td>-0.51</td><td>-0.69</td><td>-4.44</td><td>-26.79</td><td>-1.48</td></tr><tr><td>MobileNet-V1 (65% pruned)</td><td>Base Model</td><td>Random</td><td>Uniform</td><td>Mag. Based</td><td>Loss Based</td><td>Proposed Extension</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td></tr><tr><td>1 round</td><td>69.10</td><td>-6.54</td><td>-9.71</td><td>-1.09</td><td>-1.94</td><td>-0.58</td><td>-4.58</td><td>-4.31</td><td>-1.93</td></tr><tr><td>5 rounds</td><td></td><td>-6.55</td><td>-4.66</td><td>-0.77</td><td>-1.89</td><td>-0.69</td><td>-6.08</td><td>-15.35</td><td>-1.82</td></tr><tr><td>25 rounds</td><td></td><td>-5.59</td><td>-3.56</td><td>-1.52</td><td>-1.79</td><td>-0.75</td><td>-9.96</td><td>-22.61</td><td>-1.41</td></tr><tr><td>ResNet-56 (50% pruned)</td><td>Base Model</td><td>Random</td><td>Uniform</td><td>Mag. Based</td><td>Loss Based</td><td>Proposed Extension</td><td>GraSP (T=200)</td><td>GraSP (T=1)</td><td>|GraSP| (T=1)</td></tr><tr><td>1 round</td><td>71.01</td><td>-4.00</td><td>-3.42</td><td>-4.09</td><td>-4.13</td><td>-3.91</td><td>-3.92</td><td>-3.99</td><td>-3.86</td></tr><tr><td>5 rounds</td><td></td><td>-9.05†</td><td>-7.58†</td><td>-2.97</td><td>-3.90</td><td>-3.31</td><td>-4.04</td><td>-18.03</td><td>-4.00</td></tr><tr><td>25 rounds</td><td></td><td>-27.54†</td><td>-70.01†</td><td>-2.26</td><td>-3.27</td><td>-2.39</td><td>-13.46</td><td>-70.01</td><td>-2.99</td></tr></table>
# F TRAIN/TEST PLOTS
This section further demonstrates that magnitude-based pruned models converge faster than loss-preservation based pruned models. For magnitude-based pruning, importance is defined as the $\ell_2$ norm of a filter; for loss-preservation, importance is defined as a variant of SNIP (Lee et al. (2019)) applied to an entire filter. The plots show train/test curves for VGG-13, MobileNet-V1, and ResNet-56 models for 1, 5, and 25 pruning rounds each. Also plotted are train/test curves for the proposed extension to loss-preservation, which intentionally biases loss-preservation towards removing small-magnitude parameters (see Section 4.1 for more details).

# F.1 VGG-13

Figure 4: VGG-13: 1 round of pruning. CIFAR-10 (left); CIFAR-100 (right).

Figure 5: VGG-13: 5 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).


Figure 6: VGG-13: 25 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).


# F.2 MOBILENET-V1

Figure 7: MobileNet-V1: 1 round of pruning. CIFAR-10 (left); CIFAR-100 (right).

Figure 8: MobileNet-V1: 5 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).


Figure 9: MobileNet-V1: 25 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).

# F.3 RESNET-56

Figure 10: ResNet-56: 1 round of pruning. CIFAR-10 (left); CIFAR-100 (right).


Figure 11: ResNet-56: 5 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).


Figure 12: ResNet-56: 25 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).

# G MORE RESULTS ON GRADIENT-NORM BASED PRUNING
This section provides further results on gradient-norm based pruning. The general implementation of these measures requires the calculation of Hessian-gradient products. Similar to the implementation by Wang et al. (2020), we define a constant amount of memory that stores randomly selected samples for all classes. For the original GraSP variant that increases gradient norm, we follow the original implementation and use a temperature of 200 during the calculation of Hessian-gradient product. This involves division of the model output (i.e., the logits vector) by a constant $T (= 200)$ before using softmax for classification (see Hinton et al. (2015) for further details). For the GraSP variant without large temperature, this temperature value is brought down to 1. Finally, for the gradient norm preservation measure, the GraSP variant without large temperature is used alongside an absolute value operator to remove parameters that least change gradient norm (i.e., least affect second-order model evolution dynamics).
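The temperature scaling mentioned above, following Hinton et al. (2015), divides the logits by $T$ before the softmax, so large $T$ (GraSP uses $T = 200$) flattens the output distribution and $T = 1$ recovers the ordinary softmax. A minimal pure-Python sketch, with illustrative logits:

```python
import math

def softmax_with_temperature(logits, T=1.0):
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 1.0, -2.0]
p_sharp = softmax_with_temperature(logits, T=1.0)
p_flat = softmax_with_temperature(logits, T=200.0)

assert abs(sum(p_sharp) - 1.0) < 1e-9
# A high temperature pushes the distribution toward uniform.
assert max(p_flat) - min(p_flat) < max(p_sharp) - min(p_sharp)
```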
# G.1 SCATTER PLOTS FOR $\Theta_p^T (t)\mathbf{H}(\Theta (t))\mathbf{g}(\Theta (t))$ VS. $\Theta_p^T (t)\mathbf{g}(\Theta (t))$
This subsection provides scatter plots demonstrating the highly correlated nature of $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ and $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ . The correlation is averaged over 3 seeds and plots are for 1 seed. As can be seen in the plots, the measures are highly correlated throughout model training, indicating gradient-norm increase may severely affect model loss if a partially trained or completely trained model is pruned using $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ .

(a) Initialization.

(b) 40 epochs.

(c) 160 epochs.
Figure 13: $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ versus $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ for VGG-13 models trained on CIFAR-100. Plots are shown for filters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.

(a) Initialization.

(b) 40 epochs.
Figure 14: $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ versus $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ for MobileNet-V1 models trained on CIFAR-100. Plots are shown for filters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.

(c) 160 epochs.

(a) Initialization.
Figure 15: $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ versus $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ for ResNet-56 models trained on CIFAR-100. Plots are shown for filters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.

(b) 40 epochs.

(c) 160 epochs.
# G.2 LAYER-WISE PRUNING RATIOS
In Table 2, we showed that in a few epochs of training, when reduction in loss can be attributed to training of earlier layers (Raghu et al. (2017)), GraSP without large temperature chooses to prune earlier layers aggressively. This failure mode is an inherent tendency of GraSP, since GraSP assigns least importance to parameters whose removal maximally increases model loss (see Observation 5 in Section 4.3).
To further elaborate on this comment, we first provide layer-wise pruning ratios of models pruned using different variants of gradient-norm based pruning, i.e., GraSP with large temperature (GraSP $(T = 200)$ ), gradient-norm preservation (|GraSP|), and GraSP without large temperature (GraSP $(T = 1)$ ). Figure 16 provides results for VGG-13 models; Figure 17 provides results for MobileNet-V1 models; and Figure 18, Figure 19, and Figure 20 provide results for block-1, block-2, and block-3 of ResNet-56 models, respectively. For sequential architectures (VGG-13 and MobileNet-V1), we can clearly see that earlier layers are pruned aggressively. For ResNet-56, we find that in any given residual module, the first convolutional layer is pruned much more. This is consistent with our claim that GraSP removes parameters which maximally increase the model loss, since removing parameters from the first convolutional layer in a residual module completely blocks signal propagation through that module. The remaining pruning budget can then be used to remove parameters from the first convolutional layer of other residual modules, instead of Layer-2 of the same module. Note that this behavior becomes more pronounced as more rounds of pruning are used, and invariably Layer-1's in earlier blocks are pruned more.
Overall, across all models, GraSP without large temperature prunes earlier layers very aggressively. This permanently damages the model, and thus the accuracy of low-temperature GraSP is much lower than that of other variants of gradient-norm based pruning. Even for GraSP with large temperature, when more rounds of pruning are used, the large temperature cannot satisfactorily prevent aggressive pruning of earlier layers (e.g., see Figure 17c). Only gradient-norm preservation avoids this failure mode, and does so across all models and settings.
From a more theoretical standpoint, one can again employ gradient flow to better understand this behavior. In particular, Equation 6 shows that the model loss $L(t)$ decreases over time at a rate equal to the squared gradient norm. For example, if a model has $N$ layers and $\Theta_{n}(t)$ , $\mathbf{g}(\Theta_n(t))$ denote the parameters and gradient of the $n^{th}$ layer, then the model loss decreases at the following rate:
$$
\frac {\partial L (t)}{\partial t} = - \left\| \mathbf {g} (\boldsymbol {\Theta} (t)) \right\| _ {2} ^ {2} = - \sum_ {n = 1} ^ {N} \left\| \mathbf {g} (\boldsymbol {\Theta} _ {n} (t)) \right\| _ {2} ^ {2}. \tag {24}
$$
This implies that, at any given time, a layer with a higher gradient norm contributes more to the reduction in loss. This makes the parameters of that layer prime candidates for removal by GraSP.
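The decomposition in Equation 24 is just the observation that the squared norm of the full gradient is the sum of the per-layer squared norms. A small sketch with illustrative per-layer gradient values:

```python
# Illustrative per-layer gradients; earlier layers given larger norms, as
# observed early in training (Figure 21).
layer_grads = [
    [0.9, -1.2, 0.4],   # g(Theta_1(t))
    [0.3, 0.1],         # g(Theta_2(t))
    [0.05, -0.02],      # g(Theta_3(t))
]

full_grad = [g for layer in layer_grads for g in layer]
total_sq_norm = sum(g * g for g in full_grad)
per_layer_sq_norms = [sum(g * g for g in layer) for layer in layer_grads]

# ||g(Theta)||^2 = sum_n ||g(Theta_n)||^2, as in Equation 24.
assert abs(total_sq_norm - sum(per_layer_sq_norms)) < 1e-12
# The layer with the largest gradient norm contributes most to loss reduction,
# making it the layer GraSP is most inclined to prune.
assert max(range(3), key=lambda n: per_layer_sq_norms[n]) == 0
```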
We plot the layer-wise gradient norm of different models trained on CIFAR-100 over time (see Figure 21). As can be seen, the gradient norm is highest for the earlier layers in the beginning. Later in training, especially after the learning rate is dropped (at the $80^{th}$ epoch), it is difficult to discern which part of the model has a higher gradient norm. Overall, this experiment provides further evidence that early in training, the reduction in model loss can be largely attributed to the optimization of parameters in the earlier layers, as noted by Raghu et al. (2017) through the use of SVCCA. This leads to aggressive pruning of earlier layers when GraSP is used as an importance measure. Gradient-norm preservation, however, avoids this failure mode.

(a) 1 round.

(b) 5 rounds.
Figure 16: Layer-wise pruning ratios for VGG-13 with a target ratio of $65\%$ pruning. Results are provided for pruning over (a) 1 round, (b) 5 rounds, and (c) 25 rounds. When the model is allowed to train even slightly, GraSP without temperature prunes earlier layers very aggressively.

(c) 25 rounds.

(a) 1 round.
Figure 17: Layer-wise pruning ratios for MobileNet-V1 with a target ratio of $65\%$ pruning. Results are provided for pruning over (a) 1 round, (b) 5 rounds, and (c) 25 rounds. When the model is allowed to train even slightly, GraSP without temperature prunes earlier layers very aggressively.

(b) 5 rounds.

(c) 25 rounds.

(a) Block 1.

(b) Block 2.

(c) Block 3.

Figure 18: Layer-wise pruning ratios for (a) Block 1, (b) Block 2, and (c) Block 3 of ResNet-56, with a target ratio of $50\%$ pruning over 1 round. For (a) Block 1, even layer indices are Layer-1 of a residual module and odd layer indices are Layer-2 of a residual module; for (b) Block 2 and (c) Block 3, index 0 is Layer-1, index 1 is Layer-2, and index 2 is the learned shortcut layer of the first residual module; thereafter, for (b) Block 2 and (c) Block 3 odd layer indices are Layer-1 of a residual module and even layer indices are Layer-2 of a residual module. This figure shows that when the model is at initialization, GraSP with and without large temperature have only a slight bias towards pruning earlier layers more than deeper layers.
(a) Block 1.

(b) Block 2.

(c) Block 3.

(a) Block 1.
Figure 20: Layer-wise pruning ratios for (a) Block 1, (b) Block 2, and (c) Block 3 of ResNet-56, with a target ratio of $50\%$ pruning over 25 rounds. For (a) Block 1, even layer indices are Layer-1 of a residual module and odd layer indices are Layer-2 of a residual module; for (b) Block 2 and (c) Block 3, index 0 is Layer-1, index 1 is Layer-2, and index 2 is the learned shortcut layer of the first residual module; thereafter, for (b) Block 2 and (c) Block 3 odd layer indices are Layer-1 of a residual module and even layer indices are Layer-2 of a residual module. This figure shows that when the model is allowed to train even slightly, GraSP without temperature prunes Layer-1's of residual modules in all blocks aggressively, with a clearly higher amount of pruning in the earlier blocks. GraSP with large temperature prunes all earlier layers aggressively.

(b) Block 2.

Figure 19: Layer-wise pruning ratios for (a) Block 1, (b) Block 2, and (c) Block 3 of ResNet-56, with a target ratio of $50\%$ pruning over 5 rounds. For (a) Block 1, even layer indices are Layer-1 of a residual module and odd layer indices are Layer-2 of a residual module; for (b) Block 2 and (c) Block 3, index 0 is Layer-1, index 1 is Layer-2, and index 2 is the learned shortcut layer of the first residual module; thereafter, for (b) Block 2 and (c) Block 3 odd layer indices are Layer-1 of a residual module and even layer indices are Layer-2 of a residual module. This figure shows that when the model is allowed to train even slightly, GraSP without large temperature prunes Layer-1's of residual modules in all blocks aggressively, with a clearly higher amount of pruning in the earlier blocks. GraSP with large temperature prunes all earlier layers aggressively.
(c) Block 3.

(a) VGG-13.


(c) ResNet-56: Block 1.

(b) MobileNet-V1.
(d) ResNet-56: Block 2.

(e) ResNet-56: Block 3.
Figure 21: Layer-wise gradient norm for (a) VGG-13, (b) MobileNet-V1, (c) Block 1 of ResNet-56, (d) Block 2 of ResNet-56, and (e) Block 3 of ResNet-56 models trained on CIFAR-100. For MobileNet-V1, the number of pointwise convolutional filters in a layer and the number of depthwise convolutional filters in the following layer need to be the same. Due to this, we only use pointwise convolutional filters to prune a model, and thus this figure visualizes the gradient norm for pointwise convolutional filters only. For ResNet-56 plots, bold lines indicate the first convolutional layer (Layer-1) in a residual module and dotted lines indicate the second convolutional layer (Layer-2) in that same module. Early on in training, we generally find that the gradient norm is much higher for earlier layers, and these layers are invariably the ones pruned more by GraSP (see layer-wise pruning ratios in Figure 16 for VGG-13 models; Figure 17 for MobileNet-V1 models; and Figure 18, Figure 19, and Figure 20 for ResNet-56 models, respectively).
# G.3 TRAIN/TEST CURVES FOR GRADIENT-NORM BASED PRUNING VARIANTS
This subsection provides train/test curves for the different variants of gradient-norm based pruning methods considered in this paper. These curves demonstrate that the large-temperature GraSP variant does not substantially improve convergence rate, if at all, in comparison to gradient-norm preservation. On the other hand, it does result in a significant performance loss when the model is even slightly trained.
# G.3.1 VGG-13

Figure 22: VGG-13: 1 round of pruning. CIFAR-10 (left); CIFAR-100 (right).


Figure 23: VGG-13: 5 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).

Figure 24: VGG-13: 25 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).
# G.3.2 MOBILENET-V1

Figure 25: MobileNet-V1: 1 round of pruning. CIFAR-10 (left); CIFAR-100 (right).

Figure 26: MobileNet-V1: 5 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).

Figure 27: MobileNet-V1: 25 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).
# G.3.3 RESNET-56

Figure 28: ResNet-56: 1 round of pruning. CIFAR-10 (left); CIFAR-100 (right).

Figure 29: ResNet-56: 5 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right).

Figure 30: ResNet-56: 25 rounds of pruning. CIFAR-10 (left); CIFAR-100 (right). GraSP $(T = 1)$ is not shown because it achieves random performance ( $10\%$ for CIFAR-10 and $1\%$ for CIFAR-100).
# H RESULTS ON UNSTRUCTURED PRUNING
The experiments in this paper focus on structured pruning, which enables acceleration on commodity hardware without requiring specially designed hardware or software. However, note that our theoretical analysis is independent of the pruning setup used. Thus, we revisit some of our experiments under an unstructured pruning setup, thereby demonstrating the empirical generality of our results.
# H.1 OBSERVATIONS 2/3
In Observation 2 (see Section 4.2), we show that loss-preservation based pruning techniques tend to remove filters with minimal movement in their magnitude. This leads us to Observation 3 (see Section 4.2), which shows that with the added heuristic of "train until minimal change" (You et al. (2020)), under which parameters with small magnitude and minimal change are removed, magnitude-based pruning implicitly performs loss-preservation as well. The main paper verifies this claim at a filter-level granularity (structured pruning) for models trained on CIFAR-100 (see Figure 2).
In Figure 31, we show that even when analyzed at the finer granularity of individual parameters (unstructured pruning) in convolutional filters, our claim continues to hold. In particular, we show that as training continues and the change in parameters reduces, $|\Theta_p \Delta \Theta_p|$ becomes highly correlated with loss-preservation based importance $(|\Theta_p(t) \mathbf{g}(\Theta(t))|)$ . This demonstrates that our observations relating magnitude-based pruning and loss-preservation pruning hold in an unstructured pruning setup too.
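The correlations reported here are plain Pearson correlations between the two per-parameter score vectors. A minimal sketch, with illustrative score values standing in for $|\Theta_p \Delta \Theta_p|$ and $|\Theta_p(t) \mathbf{g}(\Theta(t))|$:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

mag_change_scores = [0.1, 0.4, 0.35, 0.8, 0.02]    # |theta_p * delta theta_p|
loss_pres_scores = [0.12, 0.38, 0.33, 0.75, 0.05]  # |theta_p * g(Theta(t))_p|

r = pearson(mag_change_scores, loss_pres_scores)
assert -1.0 <= r <= 1.0
assert r > 0.95  # strongly correlated, as observed late in training
```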
(a) VGG-13.
(b) MobileNet-V1.
Figure 31: Correlation between $|\Theta_p \Delta \Theta_p|$ and loss-preservation based importance $(|\Theta_p(t) \mathbf{g}(\Theta(t))|)$ at every 10th epoch for models trained on CIFAR-100. Also plotted is the distance between pruning masks (target ratio: 20% parameters), similar to the masks used by You et al. (2020) to decide when to prune a model. In contrast to You et al., these masks operate at a parameter-level granularity, as used in unstructured pruning. We note that as the distance between pruning masks over consecutive epochs reduces, $|\Theta_p \Delta \Theta_p|$ becomes more correlated with loss-preservation importance.
(c) ResNet-56.
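The mask-distance measure plotted in Figure 31 admits a simple sketch (our illustration, not the original implementation): build a binary keep-mask from the top-scoring parameters at each epoch, then measure how much consecutive masks disagree.

```python
import numpy as np

def prune_mask(scores, keep_ratio=0.2):
    # Keep the top `keep_ratio` fraction of parameters by importance score
    k = max(1, int(round(keep_ratio * scores.size)))
    mask = np.zeros(scores.size, dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask

def mask_distance(mask_a, mask_b):
    # Normalized Hamming distance between two pruning masks
    return float(np.mean(mask_a != mask_b))
```

A small distance between the masks of consecutive epochs signals that parameter movement has stabilized, the condition under which pruning is triggered.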
# H.2 OBSERVATION 4
Observation 4 indicates that gradient-norm based pruning (i.e., GraSP; Wang et al. (2020)) removes parameters that maximally increase model loss. The main paper confirms this claim at a filter-level granularity (structured pruning) for CIFAR-100 models (see Figure 3), showing that there exists a high correlation between $\Theta_p(t)\mathbf{H}(\Theta (t))\mathbf{g}(\Theta (t))$ and $\Theta_p(t)\mathbf{g}(\Theta (t))$ in this particular setting.
This subsection generalizes this claim to an unstructured pruning setup by providing scatter plots demonstrating the highly correlated nature of $\Theta_p(t)\mathbf{H}(\Theta (t))\mathbf{g}(\Theta (t))$ and $\Theta_p(t)\mathbf{g}(\Theta (t))$ (see Figure 32 for VGG-13 models, Figure 33 for MobileNet-V1 models, and Figure 34 for ResNet-56 models). In contrast with Section G.1, where quantities are visualized at a filter-level granularity, the following plots are drawn at a parameter-level granularity, as expected in an unstructured pruning setup. As the plots show, the two measures are highly correlated throughout training, indicating that gradient-norm increase may severely affect model loss if a partially or completely trained model is pruned using $\Theta_p(t)\mathbf{H}(\Theta (t))\mathbf{g}(\Theta (t))$. This demonstrates that our observations for gradient-norm based pruning remain valid in an unstructured pruning setup as well.
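The two quantities compared in these plots can be computed without ever forming the Hessian, via a Hessian-vector product. The sketch below (our illustration using PyTorch autograd, not the paper's code) relies on the identity $\mathbf{H}\mathbf{g} = \nabla_{\Theta}\big(\tfrac{1}{2}\|\mathbf{g}\|^2\big)$:

```python
import torch

def grasp_and_preservation_scores(params, loss):
    """Element-wise theta_p * (Hg)_p (gradient-norm score) and
    theta_p * g_p (loss-preservation score), using a Hessian-vector
    product so the Hessian is never materialized."""
    g = torch.autograd.grad(loss, params, create_graph=True)
    # H g equals the gradient of (1/2)||g||^2 with respect to the parameters
    half_grad_norm_sq = 0.5 * sum((gi * gi).sum() for gi in g)
    Hg = torch.autograd.grad(half_grad_norm_sq, params)
    grasp = [p.detach() * hg for p, hg in zip(params, Hg)]
    preserve = [p.detach() * gi.detach() for p, gi in zip(params, g)]
    return grasp, preserve
```

Flattening and correlating the two returned score lists at different checkpoints reproduces the kind of scatter plots shown in Figures 32-34.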
(a) Initialization.
(b) 40 epochs.
Figure 32: $\Theta_p(t)\mathbf{H}(\Theta (t))\mathbf{g}(\Theta (t))$ versus $\Theta_p(t)\mathbf{g}(\Theta (t))$ for VGG-13 models trained on CIFAR-100. Plots are shown for parameters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.
(c) 160 epochs.
(a) Initialization.
(b) 40 epochs.
(c) 160 epochs.
Figure 33: $\Theta_p(t)\mathbf{H}(\Theta (t))\mathbf{g}(\Theta (t))$ versus $\Theta_{p}(t)\mathbf{g}(\Theta (t))$ for MobileNet-V1 models trained on CIFAR-100. Plots are shown for parameters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.
(a) Initialization.
Figure 34: $\Theta_p(t)\mathbf{H}(\Theta (t))\mathbf{g}(\Theta (t))$ versus $\Theta_p(t)\mathbf{g}(\Theta (t))$ for ResNet-56 models trained on CIFAR-100. Plots are shown for parameters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.
(b) 40 epochs.
(c) 160 epochs.
# I RESULTS ON TINY-IMAGENET
To further verify our claims, we repeat some of our experiments on Tiny-ImageNet and confirm that our claims indeed generalize to this dataset.
# I.1 OBSERVATIONS 2/3
In Observation 2 (see Section 4.2), we show that loss-preservation based pruning techniques tend to remove filters whose magnitudes change minimally during training. This leads to Observation 3 (see Section 4.2): with the added heuristic of "train until minimal change" (You et al. (2020)), under which parameters with small magnitude and minimal change are removed, magnitude-based pruning implicitly performs loss-preservation as well. While the main paper verifies this claim on CIFAR-100 (see Figure 2), in this section we replicate this behavior on Tiny-ImageNet. As shown in Figure 35, the lower the magnitude of and movement in BatchNorm scale parameters over subsequent epochs, the higher the correlation between magnitude-based pruning and loss-preservation based pruning. This demonstrates that our observations relating magnitude-based pruning and loss-preservation pruning generalize to a larger-scale dataset too.
(a) VGG-13.
(b) MobileNet-V1.
Figure 35: Correlation between $|\sigma \Delta \sigma|$ (proxy for magnitude-based pruning with the added "train until minimal change" heuristic) and loss-preservation based importance (see Equation 3) at every 10th epoch. Also plotted is the distance between pruning masks (target ratio: $20\%$ filters), as used by You et al. (2020) to decide when to prune a model. As the distance between pruning masks over consecutive epochs reduces, $|\sigma \Delta \sigma|$ becomes more correlated with loss-preservation importance.
(c) ResNet-56.
# I.2 OBSERVATION 4
Observation 4 indicates that gradient-norm based pruning (i.e., GraSP; Wang et al. (2020)) removes parameters that maximally increase model loss. The main paper confirms this claim for CIFAR-100 models (see Figure 3), showing that there exists a high correlation between $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ and $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ in this particular setting. Here, we replicate this experiment and confirm our observation on Tiny-ImageNet models (see Figure 36 for VGG-13 models, Figure 37 for MobileNet-V1 models, and Figure 38 for ResNet-56 models).
(a) Initialization.
Figure 36: $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ versus $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ for VGG-13 models trained on Tiny-ImageNet. Plots are shown for filters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.
(b) 40 epochs.
(c) 160 epochs.
(a) Initialization.
(b) 40 epochs.
(c) 160 epochs.
Figure 37: $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ versus $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ for MobileNet-V1 models trained on Tiny-ImageNet. Plots are shown for filters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.
(a) Initialization.
Figure 38: $\Theta_p^T(t)\mathbf{H}(\Theta(t))\mathbf{g}(\Theta(t))$ versus $\Theta_p^T(t)\mathbf{g}(\Theta(t))$ for ResNet-56 models trained on Tiny-ImageNet. Plots are shown for filters (a) at initialization, (b) after 40 epochs of training, and (c) after complete (160 epochs) training.
(b) 40 epochs.
(c) 160 epochs.

agradientflowframeworkforanalyzingnetworkpruning/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d40533a7437a445c50fb768b6b2f9ea66c1c9ffc34b0e519e5b620f0de69aec1
size 1850295
agradientflowframeworkforanalyzingnetworkpruning/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d4b977284c913aa82782234e6f80e2a3d71ca33669a28375047f740a7c4ff9d
size 888199
apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/77c37bb7-b3f6-4adf-8e03-75d22ddc1ca9_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e44c145d959c2f901f9067054c7a86e698e70e704040a2ae120d0f11ee5e20fb
size 101329
apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/77c37bb7-b3f6-4adf-8e03-75d22ddc1ca9_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f29fae38548e6a5a27499921d96c7d705babfa02202c8e7ba5b89fe6a8aabb4
size 120613
apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/77c37bb7-b3f6-4adf-8e03-75d22ddc1ca9_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a35c4d3520a57fa2b3f0cba08ec15495562abf16eb69cb8a2f03fe801ae23bf
size 670307
apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/full.md
ADDED
@@ -0,0 +1,290 @@

# A PANDA? NO, IT'S A SLOTH: SLOWDOWN ATTACKS ON ADAPTIVE MULTI-EXIT NEURAL NETWORK INFERENCE

Sanghyun Hong*, Yigitcan Kaya*, Ionut-Vlad Modoranu†, Tudor Dumitras

University of Maryland, College Park, USA

$^{\dagger}$ Alexandru Ioan Cuza University, Iasi, Romania

shhong@cs.umd.edu, yigitcan@cs.umd.edu, modoranu.ionut.vlad@hotmail.com, tudor@umd.edu
# ABSTRACT
Recent increases in the computational demands of deep neural networks (DNNs), combined with the observation that most input samples require only simple models, have sparked interest in input-adaptive multi-exit architectures, such as MSDNets or Shallow-Deep Networks. These architectures enable faster inferences and could bring DNNs to low-power devices, e.g., in the Internet of Things (IoT). However, it is unknown if the computational savings provided by this approach are robust against adversarial pressure. In particular, an adversary may aim to slowdown adaptive DNNs by increasing their average inference time—a threat analogous to the denial-of-service attacks from the Internet. In this paper, we conduct a systematic evaluation of this threat by experimenting with three generic multi-exit DNNs (based on VGG16, MobileNet, and ResNet56) and a custom multi-exit architecture, on two popular image classification benchmarks (CIFAR-10 and Tiny ImageNet). To this end, we show that adversarial example-crafting techniques can be modified to cause slowdown, and we propose a metric for comparing their impact on different architectures. We show that a slowdown attack reduces the efficacy of multi-exit DNNs by $90 - 100\%$ , and it amplifies the latency by $1.5 - 5 \times$ in a typical IoT deployment. We also show that it is possible to craft universal, reusable perturbations and that the attack can be effective in realistic black-box scenarios, where the attacker has limited knowledge about the victim. Finally, we show that adversarial training provides limited protection against slowdowns. These results suggest that further research is needed for defending multi-exit architectures against this emerging threat. Our code is available at https://github.com/sanghyun-hong/deepsloth.
# 1 INTRODUCTION
The inference-time computational demands of deep neural networks (DNNs) are increasing, owing to the "going deeper" (Szegedy et al., 2015) strategy for improving accuracy: as a DNN gets deeper, it progressively gains the ability to learn higher-level, complex representations. This strategy has enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012) or speech recognition (Hinton et al., 2012), at the price of costly inferences. For instance, with $4 \times$ more inference cost, a 56-layer ResNet (He et al., 2016) improved the Top-1 accuracy on ImageNet by $19\%$ over the 8-layer AlexNet. This trend continued with the 57-layer state-of-the-art EfficientNet (Tan & Le, 2019): it improved the accuracy by $10\%$ over ResNet, with $9 \times$ costlier inferences.
The accuracy improvements stem from the fact that the deeper networks fix the mistakes of the shallow ones (Huang et al., 2018). This implies that some samples, which are already correctly classified by shallow networks, do not necessitate the extra complexity. This observation has motivated research on input-adaptive mechanisms, in particular, multi-exit architectures (Teerapittayanon et al., 2016; Huang et al., 2018; Kaya et al., 2019; Hu et al., 2020). Multi-exit architectures save computation by making input-specific decisions about bypassing the remaining layers, once the model becomes confident, and are orthogonal to techniques that achieve savings by permanently modifying the
model (Li et al., 2016; Banner et al., 2018; Han et al., 2015; Taylor et al., 2018). Figure 1 illustrates how a multi-exit model (Kaya et al., 2019), based on a standard VGG-16 architecture, correctly classifies a selection of test images from 'Tiny ImageNet' before the final layer. We see that more typical samples, which have more supporting examples in the training set, require less depth and, therefore, less computation.
It is unknown if the computational savings provided by multi-exit architectures are robust against adversarial pressure. Prior research showed that DNNs are vulnerable to a wide range of attacks, which involve imperceptible input perturbations (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2016; Hu et al., 2020). Considering that a multi-exit model, on the worst-case input, does not provide any computational savings, we ask: Can the savings from multi-exit models be maliciously negated by input perturbations? As some natural inputs do require the full depth of the model, it may be possible to craft adversarial examples that delay the correct decision; it is unclear, however, how many inputs can be delayed with imperceptible perturbations. Furthermore, it is unknown if universal versions of these adversarial examples exist, if the examples transfer across models, or if existing defenses (e.g., adversarial training) are effective.
Figure 1: Simple to complex inputs. Some Tiny ImageNet images a VGG-16 model can correctly classify if computation stops at the $1^{st}$ , $5^{th}$ , and $14^{th}$ layers.
Threat Model. We consider a new threat against DNNs, analogous to the denial-of-service (DoS) attacks that have been plaguing the Internet for decades. By imperceptibly perturbing the input to trigger this worst-case, the adversary aims to slow down the inferences and increase the cost of using the DNN. This is an important threat for many practical applications, which impose strict limits on the responsiveness and resource usage of DNN models (e.g. in the Internet-of-Things (Taylor et al., 2018)), because the adversary could push the victim outside these limits. For example, against a commercial image classification system, such as Clarifai.com, a slowdown attack might waste valuable computational resources. Against a model partitioning scheme, such as Big-Little (De Coninck et al., 2015), it might introduce network latency by forcing excessive transmissions between local and remote models. A slowdown attack aims to force the victim to do more work than the adversary, e.g. by amplifying the latency needed to process the sample or by crafting reusable perturbations. The adversary may have to achieve this with incomplete information about the multi-exit architecture targeted, the training data used by the victim or the classification task (see discussion in Appendix A).
Our Contributions. To our best knowledge, we conduct the first study of the robustness of multi-exit architectures against adversarial slowdowns. To this end, we find that examples crafted by prior evasion attacks (Madry et al., 2017; Hu et al., 2020) fail to bypass the victim model's early exits, and we show that an adversary can adapt such attacks to the goal of model slowdown by modifying its objective function. We call the resulting attack DeepSloth. We also propose an efficacy metric for comparing slowdowns across different multi-exit architectures. We experiment with three generic multi-exit DNNs (based on VGG16, ResNet56 and MobileNet) (Kaya et al., 2019) and a specially-designed multi-exit architecture, MSDNets (Huang et al., 2018), on two popular image classification benchmarks (CIFAR-10 and Tiny ImageNet). We find that DeepSloth reduces the efficacy of multi-exit DNNs by $90 - 100\%$ , i.e., the perturbations render nearly all early exits ineffective. In a scenario typical for IoT deployments, where the model is partitioned between edge devices and the cloud, our attack amplifies the latency by $1.5 - 5\times$ , negating the benefits of model partitioning. We also show that it is possible to craft a universal DeepSloth perturbation, which can slow down the model on either all or a class of inputs. While more constrained, this attack still reduces the efficacy by $5 - 45\%$ . Further, we observe that DeepSloth can be effective in some black-box scenarios, where the attacker has limited knowledge about the victim. Finally, we show that a standard defense against adversarial samples—adversarial training—is inadequate against slowdowns. Our results suggest that further research will be required for protecting multi-exit architectures against this emerging security threat.
# 2 RELATED WORK
Adversarial Examples and Defenses. Prior work on adversarial examples has shown that DNNs are vulnerable to test-time input perturbations (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2017; Carlini & Wagner, 2017; Madry et al., 2018). An adversary who wants to maximize a model's error on specific test-time samples can introduce human-imperceptible perturbations to these samples. Moreover, an adversary can also exploit a surrogate model for launching the attack and still hurt an unknown victim (Athalye et al., 2018; Tramèr et al., 2017b; Inkawich et al., 2019). This transferability leads to adversarial examples in more practical black-box scenarios. Although many defenses (Kurakin et al., 2016; Xu et al., 2017; Song et al., 2018; Liao et al., 2018; Lecuyer et al., 2019) have been proposed against this threat, adversarial training (AT) has become the frontrunner (Madry et al., 2018). In Sec 5, we evaluate the vulnerability of multi-exit DNNs to adversarial slowdowns in white-box and black-box scenarios. In Sec 6, we show that standard AT and its simple adaptation to our perturbations are not sufficient for preventing slowdown attacks.
Efficient Input-Adaptive Inference. Recent input-adaptive DNN architectures have brought two seemingly distant goals closer: achieving both high predictive quality and computational efficiency. There are two types of input-adaptive DNNs: adaptive neural networks (AdNNs) and multi-exit architectures. During the inference, AdNNs (Wang et al., 2018; Figurnov et al., 2017) dynamically skip a certain part of the model to reduce the number of computations. This mechanism can be used only for ResNet-based architectures as they facilitate skipping within a network. On the other hand, multi-exit architectures (Teerapittayanon et al., 2016; Huang et al., 2018; Kaya et al., 2019) introduce multiple side branches—or early-exits—to a model. During the inference on an input sample, these models can preemptively stop the computation altogether once the stopping criteria are met at one of the branches. Kaya et al. (2019) have also identified that standard, non-adaptive DNNs are susceptible to overthinking, i.e., their inability to stop computation leads to inefficient inferences on many inputs.
Haque et al. (2020) presented attacks specifically designed for reducing the energy-efficiency of AdNNs by using adversarial input perturbations. In contrast, our work studies a new threat model in which an adversary causes slowdowns on multi-exit architectures. By imperceptibly perturbing the inputs, our attacker can (i) introduce network latency to an infrastructure that utilizes multi-exit architectures and (ii) waste the victim's computational resources. To quantify this vulnerability, we define a new metric to measure the impact of adversarial input perturbations on different multi-exit architectures (Sec 3). In Sec 5, we also study practical attack scenarios and the transferability of adversarial input perturbations crafted by our attacker. Moreover, we discuss potential defense mechanisms against this vulnerability by proposing a simple adaptation of adversarial training (Sec 6). To the best of our knowledge, our work is the first systematic study of this new vulnerability.
Model Partitioning. Model partitioning has been proposed to bring DNNs to resource-constrained devices (De Coninck et al., 2015; Taylor et al., 2018). These schemes split a multi-exit model into sequential components and deploy them in separate endpoints, e.g., a small, local on-device part and a large, cloud-based part. For bringing DNNs to the Internet of Things (IoT), partitioning is instrumental as it reduces the transmissions between endpoints, a major bottleneck. In Sec 5.1, on a partitioning scenario, we show that our attack can force excessive transmissions.
# 3 EXPERIMENTAL SETUP
Datasets. We use two datasets: CIFAR-10 (Krizhevsky et al., 2009) and Tiny-ImageNet (Tiny). For testing the cross-domain transferability of our attacks, we use the CIFAR-100 dataset.
Architectures and Hyper-parameters. To demonstrate that the vulnerability to adversarial slowdowns is common among multi-exit architectures, we experiment on two recent techniques: Shallow-Deep Networks (SDNs) (Kaya et al., 2019) and MSDNets (Huang et al., 2018). These architectures were designed for different purposes: SDNs are generic and can convert any DNN into a multi-exit model, and MSDNets are custom designed for efficiency. We evaluate an MSDNet architecture (6 exits) and three SDN architectures, based on VGG-16 (Simonyan & Zisserman, 2014) (14 exits), ResNet-56 (He et al., 2016) (27 exits), and MobileNet (Howard et al., 2017) (14 exits).
Metrics. We define the early-exit capability (EEC) curve of a multi-exit model to indicate the fraction of the test samples that exit early at a specific fraction of the model's full inference cost.
Figure 2 shows the EEC curves of our SDNs on Tiny ImageNet, assuming that the computation stops when there is a correct classification at an exit point. For example, the VGG-16-based SDN model can correctly classify $\sim 50\%$ of the samples using $\sim 50\%$ of its full cost. Note that this stopping criterion is impractical; in Sec 4, we will discuss the practical ones.
We define the early-exit efficacy, or efficacy in short, to quantify a model's ability to utilize its exit points. The efficacy of a multi-exit model is the area under its EEC curve, estimated via the trapezoidal rule. An ideal efficacy is close to 1, achieved when the computation stops very early for most input samples; models that do not use their early exits have 0 efficacy. A model with low efficacy generally exhibits a higher latency; in a partitioned model, low efficacy causes more input transmissions to the cloud, and the latency is further amplified by the network round trips. A multi-exit model's efficacy and accuracy are dictated by its stopping criteria, which we discuss in the next section. As for the classification performance, we report the Top-1 accuracy on the test data.
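To make the metric concrete, here is a minimal sketch (ours, not the authors' code): treating the EEC curve as a step function, a sample that exits at a fraction $c$ of the full inference cost contributes $(1 - c)$ to the area, so the efficacy can be accumulated directly. (The paper instead estimates the area with the trapezoidal rule on the empirical curve; the two agree for a step-function EEC.)

```python
def efficacy(exit_costs, exit_probs):
    """Area under the early-exit capability (EEC) step curve.
    exit_costs[i]: fraction of the full inference cost at exit i (in [0, 1]).
    exit_probs[i]: fraction of test samples stopping at exit i (sums to <= 1;
    samples that never exit early contribute 0 to the area)."""
    return float(sum(p * (1.0 - c) for c, p in zip(exit_costs, exit_probs)))
```

For instance, `efficacy([0.0], [1.0])` is 1 (every sample exits immediately), while a model that never exits early has efficacy 0.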
Figure 2: The EEC curves. Each curve shows the fraction of test samples a model classifies using a certain fraction of its full inference cost. 'EFCY' is short for the model's efficacy.
# 4 ATTACKING THE MULTI-EXIT ARCHITECTURES
Setting. We consider the supervised classification setting with standard feedforward DNN architectures. A DNN model consists of $N$ blocks, or layers, that process the input sample, $x \in \mathbb{R}^d$ , from beginning to end and produce a classification. A classification, $F(x, \theta) \in \mathbb{R}^m$ , is the predicted probability distribution of $x$ belonging to each label $y \in M = \{1, \dots, m\}$ . Here, $\theta$ denotes the tunable parameters, or the weights, of the model. The parameters are learned on a training set $\mathcal{D}$ that contains multiple $(x_i, y_i)$ pairs; where $y_i$ is the ground-truth label of the training sample $x_i$ . We use $\theta_i$ to denote the parameters at and before the $i^{th}$ block; i.e., $\theta_i \subset \theta_{i+1}$ and $\theta_N = \theta$ . Once a model is trained, its performance is then tested on a set of unseen samples, $\mathcal{S}$ .
Multi-Exit Architectures. A multi-exit model contains $K$ exit points—internal classifiers—attached to a model's hidden blocks. We use $F_{i}$ to denote the $i^{th}$ exit point, which is attached to the $j^{th}$ block. Using the output of the $j^{th}$ ( $j < N$ ) block on $x$ , $F_{i}$ produces an internal classification, i.e., $F_{i}(x, \theta_{j})$ , which we simply denote as $F_{i}(x)$ . In our experiments, we set $K = N$ for SDNs, i.e., one internal classifier at each block and $K = 6$ for MSDNets. Given $F_{i}(x)$ , a multi-exit model uses deterministic criteria to decide between forwarding $x$ to compute $F_{i+1}(x)$ and stopping for taking the early-exit at this block. Bypassing early-exits decreases a network's efficacy as each additional block increases the inference cost. Note that multi-exit models process each sample individually, not in batches.
Practical Stopping Criteria. Ideally, a multi-exit model stops when it reaches a correct classification at an exit point, i.e., $\operatorname{argmax}_{j\in M}F_i^{(j)}(x) = \hat{y}_i = y$; $y$ is the ground-truth label. However, for unseen samples, this is impractical as $y$ is unknown. Prior work has proposed two simple strategies to judge whether $\hat{y}_i = y$: $F_{i}(x)$'s entropy (Teerapittayanon et al., 2016; Huang et al., 2018) or its confidence (Kaya et al., 2019). Our attack (see Sec 4.3) leverages the fact that a uniform $F_{i}(x)$ has both the highest entropy and the lowest confidence. For generality, we experiment with both confidence-based—SDNs—and entropy-based—MSDNets—strategies.
A strategy selects confidence, or entropy, thresholds, $T_{i}$, that determine whether the model should take the $i^{th}$ exit for an input sample. Conservative $T_{i}$'s lead to fewer early exits, while permissive ones hurt the accuracy, as the estimate of whether $\hat{y}_{i} = y$ becomes unreliable. As utility is a major practical concern, we set $T_{i}$'s to balance efficiency and accuracy. On a holdout set, we set the thresholds to maximize a model's efficacy while keeping its relative accuracy drop (RAD) over its maximum accuracy within 5% and 15%. We refer to these two settings as RAD<5% and RAD<15%. Table 2 (first segment) shows how accuracy and efficacy change in each setting.
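A confidence-based stopping rule of this kind (as in SDNs) can be sketched as follows; this is our illustration, where `internal_logits` holds each internal classifier's logits for one sample and `thresholds` the calibrated $T_i$'s:

```python
import numpy as np

def multi_exit_predict(internal_logits, thresholds):
    """Sequential early-exit inference: stop at the first internal
    classifier whose softmax confidence clears its threshold T_i.
    Returns (predicted label, index of the exit taken)."""
    for i, (logits, T) in enumerate(zip(internal_logits, thresholds)):
        z = np.exp(logits - logits.max())   # numerically stable softmax
        probs = z / z.sum()
        if probs.max() >= T:
            return int(probs.argmax()), i
    # No threshold was met: fall through to the final classifier
    return int(probs.argmax()), len(internal_logits) - 1
```

A slowdown attack succeeds exactly when it keeps every `probs.max()` below its threshold, forcing the loop to run to the final exit.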
# 4.1 THREAT MODEL
We consider an adversary who aims to decrease the early-exit efficacy of a victim model. The attacker crafts an imperceptible adversarial perturbation, $v \in \mathbb{R}^d$ that, when added to a test-time sample $x \in S$ , prevents the model from taking early-exits.
Adversary's Capabilities. The attacker is able to modify the victim's test-time samples to apply the perturbations, e.g., by compromising a camera that collects the data for inference. To ensure imperceptibility, we focus on $\ell_{\infty}$ norm bounded perturbations as they (i) are well studied; (ii) have successful defenses (Madry et al., 2018); (iii) have a prior extension to multi-exit models (Hu et al., 2020); and (iv) are usually the most efficient to craft. We show results on $\ell_{2}$ and $\ell_{1}$ perturbations in Appendix C. In line with the prior work, we bound the perturbations as follows: for CIFAR-10, $||v||_{\infty} \leq \epsilon = 0.03$ (Madry et al., 2017), $||v||_{1} \leq 8$ (Tramèr & Boneh, 2019) and $||v||_{2} \leq 0.35$ (Chen et al., 2017); for Tiny ImageNet, $||v||_{\infty} \leq \epsilon = 0.03$ (Yang et al., 2019), $||v||_{1} \leq 16$ and $||v||_{2} \leq 0.6$.
Adversary's Knowledge. To assess the security vulnerability of multi-exit architectures, we study white-box scenarios, i.e., the attacker knows all the details of the victim model, including its $\mathcal{D}$ and $\theta$ . Further, in Sec 5.2, we study more practical black-box scenarios, i.e., the attacker crafts $v$ on a surrogate model and applies it to an unknown victim model.
Adversary's Goals. We consider three DeepSloth variants: (i) the standard, (ii) the universal and (iii) the class-universal. The adversary in (i) crafts a different $v$ for each $x \in S$; in (ii) crafts a single $v$ for all $x \in S$; in (iii) crafts a single $v$ for a target class $i \in M$. Further, although the adversary does not explicitly target it, we observe that DeepSloth usually hurts the accuracy. By modifying the objective function we describe in Sec 4.3, we also experiment with DeepSloth variants that explicitly preserve or hurt the accuracy, in addition to causing slowdowns.
# 4.2 STANDARD ADVERSARIAL ATTACKS DO NOT CAUSE DELAYS
To motivate DeepSloth, we first evaluate whether previous adversarial attacks have any effect on the efficacy of multi-exit models. These attacks add imperceptible perturbations to a victim's test-time samples to force misclassifications. We experiment with the standard PGD attack (Madry et al., 2017); PGD-avg and PGD-max variants against multi-exit models (Hu et al., 2020) and the Universal Adversarial Perturbation (UAP) attack that crafts a single perturbation for all test samples (Moosavi-Dezfooli et al., 2017). Table 1 summarizes our findings that these attacks, although they hurt the accuracy, fail to cause any meaningful decrease in efficacy. In many cases, we observe that the attacks actually increase the efficacy. These experiments help us to identify the critical elements of the objective function of an attack that decreases the efficacy.
Table 1: Impact of existing evasion attacks on efficacy. Each entry shows a model's efficacy (left) and accuracy (right) when subjected to the respective attack. The multi-exit models are trained on CIFAR-10 and use RAD $< 5\%$ as their early-exit strategy.
<table><tr><td>NETWORK</td><td>NO ATTACK</td><td>PGD-20</td><td>PGD-20 (Avg.)</td><td>PGD-20 (MAX.)</td><td>UAP</td></tr><tr><td>VGG-16</td><td>0.77 / 89%</td><td>0.79 / 29%</td><td>0.85 / 10%</td><td>0.81 / 27%</td><td>0.71 / 68%</td></tr><tr><td>RESNET-56</td><td>0.52 / 87%</td><td>0.55 / 12%</td><td>0.82 / 1%</td><td>0.70 / 6%</td><td>0.55 / 44%</td></tr><tr><td>MOBILENET</td><td>0.83 / 87%</td><td>0.85 / 14%</td><td>0.93 / 3%</td><td>0.89 / 12%</td><td>0.77 / 60%</td></tr></table>
# 4.3 THE DEEPSLOTH ATTACK
The Layer-Wise Objective Function. Figure 3 shows that attacks that only optimize for the final output, e.g., PGD or UAP, do not perturb the model's earlier layer representations. As a result, they do not bypass the early-returns, which makes these attacks ineffective for decreasing the efficacy. Therefore, we modify the objective functions of adversarial example-crafting algorithms to incorporate the outputs of all $F_{i}$, $i < K$. For crafting $\ell_{\infty}$, $\ell_{2}$ and $\ell_{1}$-bounded perturbations, we adapt the PGD (Madry et al., 2017), the DDN (Rony et al., 2019) and the SLIDE (Tramèr & Boneh, 2019) algorithms, respectively. Next, we describe how we modify the PGD algorithm—we modify the others similarly:
$$
v^{t+1} = \Pi_{||v||_{\infty} < \epsilon} \left( v^{t} + \alpha \, \mathrm{sgn}\left( \nabla_{v} \sum_{x \in \mathcal{D}'} \sum_{0 < i < K} \mathcal{L}\left(F_{i}(x + v), \bar{y}\right) \right) \right)
$$
Here, $t$ is the current attack iteration; $\alpha$ is the step size; $\Pi$ is the projection operator that enforces $||v||_{\infty} < \epsilon$ and $\mathcal{L}$ is the cross-entropy loss function. The selection of $\mathcal{D}'$ determines the type of the attack. For the standard variant: $\mathcal{D}' = \{x\}$ , i.e., a single test-time sample. For the universal variant: $\mathcal{D}' = \mathcal{D}$ , i.e., the whole training set. For the class-universal variant against the target class $i \in M$ : $\mathcal{D}' = \{(x,y) \in \mathcal{D} | y = i\}$ , i.e., the training set samples from the $i^{th}$ class. Finally, $\bar{y}$ is the target label distribution our objective pushes $F_i(x)$ towards. Next, we explain how we select $\bar{y}$ .
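To make the update concrete, the sketch below runs this PGD loop on a toy model whose exits are linear classifiers, so the gradient of the summed cross-entropy losses has the closed form softmax minus target. The model, shapes, and step size are illustrative assumptions, not the paper's networks:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def deepsloth_pgd(x_batch, exits, y_bar, eps=0.03, alpha=0.01, steps=10):
    """l_inf DeepSloth loop for toy linear exits F_i(x) = W_i @ x.

    x_batch: list of inputs (D' in the paper); a single element gives the
    standard attack, the whole training set the universal one, and the
    samples of one class the class-universal one.
    y_bar: target distribution (uniform over m classes for DeepSloth).
    """
    d = x_batch[0].size
    v = np.zeros(d)
    for _ in range(steps):
        grad = np.zeros(d)
        for x in x_batch:
            for W in exits:  # all internal exits F_i, i < K
                p = softmax(W @ (x + v))
                # d/dv of CE(softmax(W(x+v)), y_bar) is W^T (p - y_bar)
                grad += W.T @ (p - y_bar)
        # sign-gradient step, then projection onto ||v||_inf <= eps
        v = np.clip(v + alpha * np.sign(grad), -eps, eps)
    return v
```

With `y_bar = np.full(m, 1/m)`, the loop drives every exit's output towards the uniform distribution; swapping the contents of `x_batch` switches between the standard, universal, and class-universal variants.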
Pushing $F_{i}(x)$ Towards a Uniform Distribution. Despite including all $F_{i}$, attacks such as PGD-avg and PGD-max (Hu et al., 2020) still fail to decrease efficacy. How these attacks select $\bar{y}$ reflects their goal of causing misclassifications: they trigger errors in early-exits, i.e., $\operatorname{argmax}_{j \in M} F_{i}^{(j)}(x) = \bar{y} \neq y$. However, as the early-exits still have high confidence, or low entropy, the model still stops its computation early. Instead, we select $\bar{y}$ as a uniform distribution over the class labels, i.e., $\bar{y}^{(j)} = 1/m$ for each $j \in M$. This ensures that $(x + v)$ bypasses common stopping criteria, as a uniform $F_{i}(x)$ has both the lowest confidence and the highest entropy.
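The choice of a uniform $\bar{y}$ can be checked directly: among all label distributions, the uniform one minimizes the top-1 confidence and maximizes the entropy that common stopping criteria threshold on. A small NumPy check, with an illustrative confidence threshold:

```python
import numpy as np

def confidence(p):
    # Top-1 probability, as used by confidence-based early-exit criteria.
    return p.max()

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum()

m = 10
uniform = np.full(m, 1.0 / m)
peaked = np.array([0.91] + [0.01] * (m - 1))  # a confident early-exit output

# A confidence-based exit (e.g., "stop if max prob > 0.5") fires on the
# peaked output but never on the uniform one; an entropy criterion agrees,
# since log(m) is the entropy maximum, attained only by the uniform vector.
assert confidence(peaked) > 0.5 > confidence(uniform)
assert entropy(uniform) > entropy(peaked)
assert np.isclose(entropy(uniform), np.log(m))
```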
# 5 EMPIRICAL EVALUATION
Here, we present the results for $\ell_{\infty}$ DeepSloth against two SDNs—VGG-16 and MobileNet-based—and against the MSDNets. In the Appendix, we report the hyperparameters; the $\ell_1$ and $\ell_2$ attacks; the results on ResNet-56-based SDNs; the cost of the attacks; and some perturbed samples. Overall, we observe that $\ell_{\infty}$ -bounded perturbations are more effective for slowdowns. The optimization challenges might explain this, as $\ell_1$ and $\ell_2$ attacks are usually harder to optimize (Carlini & Wagner, 2017; Tramèr & Boneh, 2019). Unlike objectives for misclassifications, the objective for slowdowns involves multiple loss terms and optimizes over all the output logits.
# 5.1 WHITE-BOX SCENARIOS
Perturbations Eliminate Early-Exits. Table 2 (second segment) shows that the victim models have $\sim 0$ efficacy on the samples perturbed by DeepSloth. Across the board, the attack makes the early-exits completely ineffective and forces the victim models to forward all input samples to the end. Further, DeepSloth also drops the victim's accuracy by $75-99\%$, comparable to the PGD attack. These results answer our main research question: the multi-exit mechanisms are vulnerable and their benefits can be maliciously offset by adversarial input perturbations. In particular, although SDN modification mitigates overthinking in standard, non-adaptive DNNs (Kaya et al., 2019), DeepSloth leads SDN-based models to overthink on almost all samples by forcing extra computations.
Note that crafting a single perturbation requires multiple back-propagations through the model and more floating-point operations (FLOPs) than a forward pass. The high cost of crafting, relative to the computational damage to the victim, might make this vulnerability unattractive for the adversary. In the next sections, we highlight scenarios where this vulnerability might lead to practical exploitation. First, we show that in IoT-like scenarios, the input transmission is a major bottleneck and DeepSloth can exploit it. Second, we evaluate universal DeepSloth attacks that enable the adversary to craft the perturbation only once and reuse it on multiple inputs.
Attacking an IoT Scenario. Many IoT scenarios, e.g., health monitoring for the elderly (Park et al., 2017), require collecting data from edge devices and making low-latency inferences on this data. However, complex deep learning models are impractical for the low-power edge devices, such as an Arduino, that are common in IoT scenarios (Chen & Ran, 2019). For example, on standard hardware, an average inference with an MSDNet model on Tiny ImageNet takes 35M FLOPs and $\sim 10$ ms.
A potential solution is sending the inputs from the edge to a cloud model, which then returns the prediction. Even in our optimistic estimate with a nearby AWS EC2 instance, this back-and-forth introduces $\sim 11\mathrm{ms}$ latency per inference. Model partitioning alleviates this bottleneck by splitting a multi-exit model into two; deploying the small first part at the edge and the large second part at the cloud (De Coninck et al., 2015). The edge part sends an input only when its prediction does not meet
Table 2: The effectiveness of $\ell_{\infty}$ DeepSloth. 'RAD<5,15%' columns list the results in each early-exit setting. Each entry includes the model's efficacy (left) and accuracy (right). The class-universal attack's results are an average of 10 classes. 'TI': Tiny ImageNet and 'C10': CIFAR-10.
<table><tr><td>NETWORK</td><td colspan="2">MSDNET</td><td colspan="2">VGG16</td><td colspan="2">MOBILENET</td></tr><tr><td>SET.</td><td>RAD<5%</td><td>RAD<15%</td><td>RAD<5%</td><td>RAD<15%</td><td>RAD<5%</td><td>RAD<15%</td></tr><tr><td colspan="7">BASELINE (NO ATTACK)</td></tr><tr><td>C10</td><td>0.89 / 85%</td><td>0.89 / 85%</td><td>0.77 / 88%</td><td>0.89 / 79%</td><td>0.83 / 87%</td><td>0.92 / 79%</td></tr><tr><td>TI</td><td>0.64 / 55%</td><td>0.83 / 50%</td><td>0.39 / 57%</td><td>0.51 / 52%</td><td>0.42 / 57%</td><td>0.59 / 51%</td></tr><tr><td colspan="7">DEEPSLOTH</td></tr><tr><td>C10</td><td>0.06 / 17%</td><td>0.06 / 17%</td><td>0.01 / 13%</td><td>0.04 / 16%</td><td>0.01 / 12%</td><td>0.06 / 16%</td></tr><tr><td>TI</td><td>0.06 / 7%</td><td>0.06 / 7%</td><td>0.00 / 2%</td><td>0.01 / 2%</td><td>0.02 / 6%</td><td>0.04 / 6%</td></tr><tr><td colspan="7">UNIVERSAL DEEPSLOTH</td></tr><tr><td>C10</td><td>0.85 / 65%</td><td>0.85 / 65%</td><td>0.62 / 65%</td><td>0.86 / 60%</td><td>0.73 / 61%</td><td>0.90 / 59%</td></tr><tr><td>TI</td><td>0.58 / 46%</td><td>0.81 / 41%</td><td>0.31 / 47%</td><td>0.44 / 44%</td><td>0.33 / 47%</td><td>0.51 / 43%</td></tr><tr><td colspan="7">CLASS-UNIVERSAL DEEPSLOTH</td></tr><tr><td>C10</td><td>0.82 / 32%</td><td>0.82 / 32%</td><td>0.47 / 35%</td><td>0.78 / 33%</td><td>0.60 / 30%</td><td>0.85 / 27%</td></tr><tr><td>TI</td><td>0.41 / 21%</td><td>0.71 / 17%</td><td>0.20 / 28%</td><td>0.33 / 27%</td><td>0.21 / 27%</td><td>0.38 / 25%</td></tr></table>
the stopping criteria. For example, the first early-exit of MSDNets sends only $5\%$ and $67\%$ of all test samples, on CIFAR-10 and Tiny ImageNet, respectively. This leads to a lower average latency per inference, i.e., from 11ms down to 0.5ms and 7.4ms, respectively.
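These latency figures follow from a simple expected-value calculation: only the samples that miss the first exit's stopping criterion pay the round trip, so the average latency is roughly the transmit fraction times the round-trip cost. A back-of-the-envelope sketch, treating edge-side compute as negligible:

```python
# Expected per-inference latency under model partitioning: only inputs whose
# first-exit prediction fails the stopping criterion pay the ~11ms round trip.
ROUND_TRIP_MS = 11.0

def avg_latency_ms(fraction_sent, round_trip_ms=ROUND_TRIP_MS):
    return fraction_sent * round_trip_ms

cifar10 = avg_latency_ms(0.05)  # ~5% of CIFAR-10 samples transmitted
tiny    = avg_latency_ms(0.67)  # ~67% of Tiny ImageNet samples transmitted
assert round(cifar10, 2) == 0.55  # reported as ~0.5ms
assert round(tiny, 2) == 7.37     # reported as ~7.4ms
```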
The adversary we study uses DeepSloth perturbations to force the edge part to send all the input samples to the cloud. For the victim, we deploy MSDNet models that we split into two parts at their first exit point. Targeting the first part with DeepSloth forces it to send $96\%$ and $99.97\%$ of all test samples to the second part. This increases the average inference latency to $\sim 11\mathrm{ms}$ and invalidates the benefits of model partitioning. In this scenario, perturbing each sample takes $\sim 2\mathrm{ms}$ on a Tesla V-100 GPU, i.e., the time the adversary spends is amplified by $1.5-5\times$ in the victim's latency increase.
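The amplification factor quoted above can be reproduced the same way: the attacker pays roughly 2ms of GPU time per sample, while the victim's average latency rises from the partitioned baseline back to the full round trip. Again a rough estimate under the stated timings:

```python
CRAFT_MS = 2.0      # per-sample DeepSloth crafting cost (Tesla V-100)
ATTACKED_MS = 11.0  # nearly all samples forwarded to the cloud

def amplification(baseline_ms, attacked_ms=ATTACKED_MS, craft_ms=CRAFT_MS):
    # Victim's added latency per sample, relative to the attacker's effort.
    return (attacked_ms - baseline_ms) / craft_ms

assert round(amplification(0.5), 2) == 5.25  # CIFAR-10: upper end of ~1.5-5x
assert round(amplification(7.4), 1) == 1.8   # Tiny ImageNet: lower end
```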
Reusable Universal Perturbations. The universal attacks, although limited, are practical, as the adversary can reuse the same perturbation indefinitely to cause minor slowdowns. Table 2 (third segment) shows that they decrease the efficacy by $3-21\%$ and the accuracy by $15-25\%$, over the baselines. Having a less conservative early-exit strategy, e.g., $\mathrm{RAD} < 15\%$, increases the resilience to the attack at the cost of accuracy. Further, MSDNets are fairly resilient, with only a $3-9\%$ efficacy drop; whereas SDNs are more vulnerable, with a $12-21\%$ drop. The attack is also slightly more effective on the more complex task, Tiny ImageNet, as the early-exits become easier to bypass. Using random noise as a baseline, i.e., $v \sim U^{d}(-\epsilon, \epsilon)$, we find that at most it decreases the efficacy by $\sim 3\%$.
In the universal attack, we observe a notable phenomenon: it pushes the samples towards a small subset of all classes. For example, $\sim 17\%$ of the perturbed samples are classified into the 'bird' class of CIFAR-10, up from $\sim 10\%$ for the clean samples. Considering that certain classes are distant in the feature space, e.g., 'truck' and 'bird', we expect the class-universal variant to be more effective. The results in Table 2 (fourth segment) confirm our intuition. We see that this attack decreases the baseline efficacy by $8-50\%$ and the accuracy by $50-65\%$. We report the average results across multiple classes; however, we observe that certain classes are slightly more vulnerable to this attack.
Feature Visualization of DeepSloth. In Figure 3, to shed light on how DeepSloth differs from prior attacks, e.g., PGD and PGD-avg, we visualize a model's hidden block (layer) features on the original and perturbed test-time samples. We observe that in an earlier block (left panel), DeepSloth seems to disrupt the original features slightly more than the PGD attacks. Leaving earlier representations intact prevents PGDs from bypassing the early-exits. The behaviors of the attacks diverge in the middle blocks (middle panel). Here, DeepSloth features remain closer to the original features than prior attacks. The significant disruption of prior attacks leads to high-confidence misclassifications and fails to bypass early-exits. In the later block (right panel), we see that the divergent behavior persists.
Preserving or Hurting the Accuracy with DeepSloth. Here, we aim to answer whether DeepSloth can be applied when the adversary explicitly aims to cause or prevent misclassifications, while still

Figure 3: Visualising features against attacks using UMAP. VGG-16's 3rd (left), 8th (middle), and 14th (right) hidden block features on CIFAR-10's 'dog' class (Best viewed in color, zoomed in).


causing slowdowns. Our main threat model has no explicit goal regarding misclassifications, which hurt the user of the model, i.e., whoever consumes the model's output. Slowdowns, on the other hand, additionally hurt the executor or the owner of the model through the increased computation and latency at the cloud provider. In some ML-in-the-cloud scenarios, where these two are different actors, the adversary might aim to target only the executor, or both the executor and the user. To this end, we modify our objective function to push $F_{i}(x)$ towards a slightly non-uniform distribution, favoring either the ground truth label for preventing misclassifications or a wrong label for causing them. We test this idea on our VGG-16-based SDN model on CIFAR-10 in the RAD<5% setting. We see that DeepSloth for preserving the accuracy leads to 81% accuracy with 0.02 efficacy, and DeepSloth for hurting the accuracy leads to 4% accuracy with 0.01 efficacy—the original DeepSloth led to 13% accuracy with 0.01 efficacy. These results show the flexibility of DeepSloth and how it could be modified depending on the attacker's goals.
# 5.2 TOWARDS A BLACK-BOX ATTACK: TRANSFERABILITY OF DEEPSLOTH
Transferability of adversarial examples implies that they can still hurt a model that they were not crafted on (Tramèr et al., 2017a; Liu et al., 2017). Even though white-box attacks are important for exposing the vulnerability, black-box attacks, by requiring fewer assumptions, are more practical. Here, in four distinct scenarios, we investigate whether DeepSloth is transferable. Based on each scenario's constraints, we (i) train a surrogate model; (ii) craft the DeepSloth samples on it; and (iii) use these samples on the victim. We run these experiments on CIFAR-10 in the $\mathrm{RAD} < 5\%$ setting.
Cross-Architecture. First, we relax the assumption that the attacker knows the victim's architecture. We evaluate the transferability between a VGG-16-based SDN and an MSDNet—both trained using the same $\mathcal{D}$. We find that the samples crafted on the MSDNet can slow down the SDN: reducing its efficacy to 0.63 (from 0.77) and its accuracy to $78\%$ (from $88\%$). Interestingly, the opposite seems not to be the case: on the samples crafted against the SDN, the MSDNet still has 0.87 efficacy (from 0.89) and $73\%$ accuracy (from $85\%$). This hints that DeepSloth transfers if the adversary uses an effective multi-exit model as the surrogate.
Limited Training Set Knowledge. Second, we relax the assumption that the attacker knows the victim's training set, $\mathcal{D}$ . Here, the attacker only knows a random portion of $\mathcal{D}$ , i.e., $10\%$ , $25\%$ , and $50\%$ . We use VGG-16 architecture for both the surrogate and victim models. In the $10\%$ , $25\%$ and $50\%$ settings, respectively, the attacks reduce the victim's efficacy to 0.66, 0.5, 0.45 and 0.43 (from 0.77); its accuracy to $81\%$ , $73\%$ , $72\%$ and $74\%$ (from $88\%$ ). Overall, the more limited the adversary's $\mathcal{D}$ is, the less generalization ability the surrogate has and the less transferable the attacks are.
Cross-Domain. Third, we relax the assumption that the attacker knows the victim's task exactly. Here, the attacker uses $\mathcal{D}_f$ to train the surrogate, different from the victim's $\mathcal{D}$ altogether. We use a VGG-16 on CIFAR-100 as the surrogate and attack a VGG-16-based victim model on CIFAR-10. This transfer attack reduces the victim's efficacy to 0.63 (from 0.77) and its accuracy to $83\%$ (from $88\%$). We see that the cross-domain attack might be more effective than the limited-$\mathcal{D}$ scenarios. This makes DeepSloth particularly dangerous, as the attacker, without knowing the victim's $\mathcal{D}$, can collect a similar dataset and still slow down the victim. We hypothesize that the transferability of earlier layer features in CNNs (Yosinski et al., 2014) enables the perturbations to transfer from one domain to another, as long as the domains are similar enough.
Cross-Mechanism. Finally, we test the scenario where the victim uses a completely different mechanism than a multi-exit architecture to implement input adaptiveness, i.e., SkipNet (Wang et al., 2018). A SkipNet, a modified residual network, selectively skips convolutional blocks based on the activations of the previous layer and, therefore, does not include any internal classifiers. We use a pre-trained SkipNet on CIFAR-10 that reduces the average computation for each input sample by $\sim 50\%$ over an equivalent ResNet and achieves $\sim 94\%$ accuracy. We then feed DeepSloth samples crafted on a MSDNet to this SkipNet, which reduces its average computational saving to $\sim 32\%$ (36% less effective) and its accuracy to $37\%$ . This result suggests that the two different mechanisms have more in common than previously known and might share the vulnerability. We believe that understanding the underlying mechanisms through which adaptive models save computation is an important research question for future work.
# 6 STANDARD ADVERSARIAL TRAINING IS NOT A COUNTERMEASURE
In this section, we examine whether a defender can adapt a standard countermeasure against adversarial perturbations, adversarial training (AT) (Madry et al., 2018), to mitigate our attack. AT decreases a model's sensitivity to perturbations that significantly change the model's outputs. While this scheme is effective against adversarial examples that aim to trigger misclassifications, it is unclear whether using our DeepSloth samples for AT can also robustify a multi-exit model against slowdown attacks.
To evaluate, we train our multi-exit models as follows. We first take a base network—VGG-16—and train it on CIFAR-10 with PGD-10 adversarial examples. We then convert the resulting model into a multi-exit architecture, using the modification from (Kaya et al., 2019). During this conversion, we adversarially train the individual exit points using PGD-10, PGD-10 (avg.), PGD-10 (max.), and DeepSloth, similar to (Hu et al., 2020). Finally, we measure the efficacy and accuracy of the trained models against PGD-20, PGD-20 (avg.), PGD-20 (max.), and DeepSloth, on CIFAR-10's test set.
Table 3: Evaluating adversarial training against slowdown attacks. Each entry includes the model's efficacy score (left) and accuracy (right). Results are on CIFAR-10, in the RAD<5% setting.
<table><tr><td>ADV. TRAINING</td><td>NO ATTACK</td><td>PGD-20</td><td>PGD-20 (Avg.)</td><td>PGD-20 (MAX.)</td><td>DEEPSLOTH</td></tr><tr><td>UNDEFENDED</td><td>0.77 / 89%</td><td>0.79 / 29%</td><td>0.85 / 10%</td><td>0.81 / 27%</td><td>0.01 / 13%</td></tr><tr><td>PGD-10</td><td>0.61 / 72%</td><td>0.55 / 38%</td><td>0.64 / 23%</td><td>0.58 / 29%</td><td>0.33 / 70%</td></tr><tr><td>PGD-10 (Avg.)</td><td>0.53 / 72%</td><td>0.47 / 36%</td><td>0.47 / 35%</td><td>0.47 / 35%</td><td>0.32 / 70%</td></tr><tr><td>PGD-10 (MAX.)</td><td>0.57 / 72%</td><td>0.51 / 37%</td><td>0.54 / 30%</td><td>0.52 / 34%</td><td>0.32 / 70%</td></tr><tr><td>OURS</td><td>0.74 / 72%</td><td>0.71 / 38%</td><td>0.82 / 14%</td><td>0.77 / 21%</td><td>0.44 / 67%</td></tr><tr><td>OURS + PGD-10</td><td>0.61 / 73%</td><td>0.55 / 38%</td><td>0.63 / 23%</td><td>0.58 / 28%</td><td>0.33 / 70%</td></tr></table>
Our results in Table 3 verify that AT provides resilience against all PGD attacks. AT also provides some resilience to our attack: DeepSloth reduces the efficacy to $\sim 0.32$ on robust models vs. 0.01 on the undefended one. However, we identify a trade-off between the robustness and the efficiency of multi-exits. Compared to the undefended model, on clean samples, the robust models have lower efficacy—0.77 vs. $0.53-0.61$. We observe that the model trained only with our DeepSloth samples (Ours) can recover the efficacy on both the clean and the DeepSloth samples, but this model loses its robustness against PGD attacks. Moreover, when we train a model on both our DeepSloth samples and PGD-10 (Ours + PGD-10), the trained model suffers from low efficacy. Our results imply that a defender may require an out-of-the-box defense, such as flagging the users whose queries bypass the early-exits more often than the clean samples for which the multi-exit network was calibrated.
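One form the suggested flagging defense could take is a per-user counter of how often queries reach the final exit, compared against the bypass rate measured on clean, calibrated traffic. This is a hypothetical sketch; the 10% clean final-exit rate, the flagging factor, and the query minimum are illustrative assumptions:

```python
from collections import defaultdict

class SlowdownMonitor:
    """Flag users whose queries bypass early-exits far more often than
    the clean traffic the multi-exit network was calibrated on."""

    def __init__(self, clean_final_exit_rate=0.10, factor=3.0, min_queries=50):
        self.threshold = clean_final_exit_rate * factor
        self.min_queries = min_queries
        self.stats = defaultdict(lambda: [0, 0])  # user -> [queries, final exits]

    def record(self, user, exit_index, num_exits):
        q = self.stats[user]
        q[0] += 1
        if exit_index == num_exits - 1:  # no early-exit fired
            q[1] += 1

    def flagged(self, user):
        queries, finals = self.stats[user]
        return queries >= self.min_queries and finals / queries > self.threshold
```

A DeepSloth adversary forces nearly every query to the final exit, so their final-exit rate approaches 1.0 and quickly exceeds any threshold calibrated on clean traffic.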
# 7 CONCLUSIONS
This work exposes the vulnerability of input-adaptive inference mechanisms against adversarial slowdowns. As a vehicle for exploring this vulnerability systematically, we propose DeepSloth, an attack that introduces imperceptible adversarial perturbations to test-time inputs for offsetting the computational benefits of multi-exit inference mechanisms. We show that a white-box attack, which perturbs each sample individually, eliminates any computational savings these mechanisms provide. We also show that it is possible to craft universal slowdown perturbations, which can be
reused, and transferable samples, in a black-box setting. Moreover, adversarial training, a standard countermeasure for adversarial perturbations, is not effective against DeepSloth. Our analysis suggests that slowdown attacks are a realistic, yet under-appreciated, threat against adaptive models.
# ACKNOWLEDGMENT
We thank the anonymous reviewers for their feedback. This research was partially supported by the Department of Defense.
# REFERENCES
Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 274-283, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/athalye18a.html.
Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. Scalable methods for 8-bit training of neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 5145-5153. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7761-scalable-methods-for-8-bit-training-of-neural-networks.pdf.
N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57, 2017.
Jiasi Chen and Xukan Ran. Deep learning with edge computing: A review. Proceedings of the IEEE, 107(8): 1655-1674, 2019.
Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. Ead: elastic-net attacks to deep neural networks via adversarial examples. arXiv preprint arXiv:1709.04114, 2017.
Elias De Coninck, Tim Verbelen, Bert Vankeirsbilck, Steven Bohez, Pieter Simoens, Piet Demeester, and Bart Dhoedt. Distributed neural networks for internet of things: The big-little approach. In International Internet of Things Summit, pp. 484-492. Springer, 2015.
Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov. Spatially Adaptive Computation Time for Residual Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6572.
Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Mirazul Haque, Anki Chauhan, Cong Liu, and Wei Yang. Ilfo: Adversarial attack on adaptive neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine, 29(6): 82-97, 2012.
Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. DynaBERT: Dynamic BERT with Adaptive Width and Depth. In Advances in Neural Information Processing Systems, 2020.
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017.
C. Hu, W. Bao, D. Wang, and F. Liu. Dynamic adaptive dnn surgery for inference acceleration on the edge. In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pp. 1423-1431, 2019. doi: 10.1109/INFOCOM.2019.8737614.
Ting-Kuei Hu, Tianlong Chen, Haotao Wang, and Zhangyang Wang. Triple wins: Boosting accuracy, robustness and efficiency together by enabling input-adaptive inference. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rJgzzJHtDB.
Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Weinberger. Multiscale dense networks for resource efficient image classification. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hk2aImxAb.
Nathan Inkawhich, Wei Wen, Hai (Helen) Li, and Yiran Chen. Feature space perturbations yield more transferable adversarial examples. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Weiwen Jiang, Edwin H.-M. Sha, Xinyi Zhang, Lei Yang, Qingfeng Zhuge, Yiyu Shi, and Jingtong Hu. Achieving super-linear speedup across multi-fpga for real-time dnn inference. ACM Trans. Embed. Comput. Syst., 18(5s), October 2019. ISSN 1539-9087. doi: 10.1145/3358192. URL https://doi.org/10.1145/3358192.
Yiping Kang, Johann Hauswald, Cao Gao, Austin Rovinski, Trevor Mudge, Jason Mars, and Lingjia Tang. Neurosurgeon: Collaborative intelligence between the cloud and mobile edge. In Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS'17, pp. 615-629, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450344654. doi: 10.1145/3037697.3037698. URL https://doi.org/10.1145/3037697.3037698.
Yigitcan Kaya, Sanghyun Hong, and Tudor Dumitraş. Shallow-Deep Networks: Understanding and mitigating network overthinking. In Proceedings of the 2019 International Conference on Machine Learning (ICML), Long Beach, CA, Jun 2019.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning Multiple Layers of Features from Tiny Images. 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
M. Lecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana. Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), pp. 656-672, 2019.
En Li, Zhi Zhou, and Xu Chen. Edge intelligence: On-demand deep learning model co-inference with device-edge synergy. In Proceedings of the 2018 Workshop on Mobile Edge Communications, MECOMM'18, pp. 31-36, New York, NY, USA, 2018. Association for Computing Machinery. ISBN 9781450359061. doi: 10.1145/3229556.3229562. URL https://doi.org/10.1145/3229556.3229562.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. CoRR, abs/1608.08710, 2016. URL http://arxiv.org/abs/1608.08710.
Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=Sys6GJqxl.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJzIBfZAb.
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS P), pp. 372-387, 2016.
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506-519, 2017.
Se Jin Park, Murali Subramaniyam, Seoung Eun Kim, Seunghee Hong, Joo Hyeong Lee, Chan Min Jo, and Youngseob Seo. Development of the elderly healthcare monitoring system with iot. In Advances in Human Factors and Ergonomics in Healthcare, pp. 309-315. Springer, 2017.
|
| 211 |
+
Jérôme Rony, Luiz G Hafemann, Luiz S Oliveira, Ismail Ben Ayed, Robert Sabourin, and Eric Granger. Decoupling direction and norm for efficient gradient-based l2 adversarial attacks and defenses. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4322-4330, 2019.
|
| 212 |
+
Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014.
|
| 213 |
+
Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJUYGxbCW.
|
| 214 |
+
C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.
|
| 215 |
+
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014. URL http://arxiv.org/abs/1312.6199.
|
| 216 |
+
Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 6105-6114, Long Beach, California, USA, 09-15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/tan19a.html.
|
| 217 |
+
Ben Taylor, Vicent Sanz Marco, Willy Wolff, Yehia Elkhatib, and Zheng Wang. Adaptive deep learning model selection on embedded systems. ACM SIGPLAN Notices, 53(6):31-43, 2018.
|
| 218 |
+
S. Teerapittayanon, B. McDanel, and H. T. Kung. Branchynet: Fast inference via early exiting from deep neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 2464-2469, 2016.
|
| 219 |
+
Tiny. Tiny ImageNet Visual Recognition Challenge. http://tiny-imagenet.herokuapp.com/. Accessed: 2020-09-28.
|
| 220 |
+
Florian Tramér and Dan Boneh. Adversarial training and robustness for multiple perturbations. In Advances in Neural Information Processing Systems, pp. 5866-5876, 2019.
|
| 221 |
+
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453, 2017a.
|
| 222 |
+
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453, 2017b.
|
| 223 |
+
Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E. Gonzalez. SkipNet: Learning Dynamic Routing in Convolutional Networks. In The European Conference on Computer Vision (ECCV), September 2018.
|
| 224 |
+
Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017.
|
| 225 |
+
Yuzhe Yang, Guo Zhang, Dina Katabi, and Zhi Xu. Me-net: Towards effective adversarial robustness with matrix estimation. arXiv preprint arXiv:1905.11971, 2019.
|
| 226 |
+
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320-3328, 2014.
|
| 227 |
+
|
| 228 |
+
Li Zhou, Hao Wen, Radu Teodorescu, and David H.C. Du. Distributing deep neural networks with containerized partitions at the edge. In 2nd USENIX Workshop on Hot Topics in Edge Computing (HotEdge 19), Renton, WA, July 2019. USENIX Association. URL https://www.usenix.org/conference/hotedge19/presentation/zhou.
|
| 229 |
+
Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. BERT Loses Patience: Fast and Robust Inference with Early Exit. In Advances in Neural Information Processing Systems, 2020.
|
| 230 |
+
|
| 231 |
+
# A MOTIVATING EXAMPLES

Here, we discuss two example scenarios where an adversary can exploit slowdown attacks.

- (Case 1) Attacks on cloud-based IoT applications. In most cases, cloud-based IoT applications, such as Apple Siri, Google Now, or Microsoft Cortana, run their DNN inferences in the cloud. This cloud-only approach puts the entire computational burden on cloud servers and increases the communication between the servers and IoT devices. In consequence, recent work (Kang et al., 2017; Li et al., 2018; Zhou et al., 2019) utilizes multi-exit architectures to bring computationally expensive models in the cloud, e.g., language models (Zhou et al., 2020; Hou et al., 2020), to IoT (or mobile) devices. They split a multi-exit model into two partitions and deploy them to a server and an IoT device, respectively. Under this scheme, the cloud server only takes care of the complex inputs that the shallow partition cannot correctly classify at the edge. As a result, one can reduce the computation in the cloud and decrease the communication between the cloud and the edge. On the other hand, our adversary, by applying human-imperceptible perturbations, can convert simple inputs into complex inputs. These adversarial inputs will bypass the early exits and, as a result, reduce (or even offset) the computational and communication savings provided by prior work. Here, a defender may deploy DoS defenses such as firewalls or rate-limiting. In this setting, the attacker may not cause DoS because the defenses keep the communication between the server and IoT devices below a certain level. Nevertheless, the attacker still increases: (i) the computation at the edge (by making inputs skip the early exits) and (ii) the number of samples that cloud servers process. Recall that a VGG-16 SDN model classifies $90\%$ of clean CIFAR-10 instances correctly at the first exit. If the adversarial examples crafted by the attacker bypass only the first exit, the attacker can easily increase the computation on IoT devices and make them send more requests to the cloud.
- (Case 2) Attacks on real-time DNN inference for resource- and time-constrained scenarios. Recent work on real-time systems (Hu et al., 2019; Jiang et al., 2019) harnesses multi-exit architectures and model partitioning to optimize real-time DNN inference for resource- and time-constrained scenarios. Hu et al. (2019) showed that a real-world prototype of optimal model partitioning, evaluated on a self-driving car video dataset, can improve the latency and throughput of the partitioned models on the cloud and edge by $6.5 - 14 \times$, respectively.

However, prior work does not consider the danger of slowdown attacks; our threat model has not been discussed in the literature before. Our results in Section 5 suggest that slowdown can be induced adversarially, potentially violating real-time guarantees. For example, our attacker can force the partitioned models on the cloud and edge to use the maximal computation for inference. Further, the same adversarial examples also require the inference results from the model running on the cloud, which potentially increases the response time of the edge devices by $1.5 - 5 \times$. Our work shows that multi-exit architectures should be used with caution in real-time systems.

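The split-inference flow in Case 1 can be sketched as follows. This is a minimal simulation, not the paper's setup: the stand-in first-exit model, the confidence threshold of 0.9, and the logit margins are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_exit_probs(x):
    """Stand-in for the shallow partition's first exit: clean, easy inputs
    get a large logit margin; slowdown-perturbed inputs do not."""
    logits = rng.normal(size=10)
    if x["clean"]:
        logits[x["label"]] += 6.0  # confident early exit on easy inputs
    e = np.exp(logits - logits.max())
    return e / e.sum()

def split_inference(batch, threshold=0.9):
    """Return how many inputs the edge partition must forward to the cloud
    because the early exit is not confident enough to stop."""
    deferred = 0
    for x in batch:
        if first_exit_probs(x).max() < threshold:  # early exit unsure -> cloud round-trip
            deferred += 1
    return deferred

clean_batch = [{"clean": True, "label": i % 10} for i in range(100)]
sloth_batch = [{"clean": False, "label": i % 10} for i in range(100)]
```

On this toy model, most clean inputs stop at the first exit, while nearly all perturbed inputs are deferred to the cloud, which is exactly the cost amplification described above.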
# B HYPERPARAMETERS

In our experiments, we use the following hyperparameters to craft adversarial perturbations.

$\ell_{\infty}$ -based DeepSloth. We find that $\ell_{\infty}$ -based DeepSloth does not require careful tuning. For the standard attack, we set the total number of iterations to 30 and the step size to $\alpha = 0.002$ . For the modified attacks for hurting or preserving the accuracy, we set the total number of iterations to 75 and the step size to $\alpha = 0.001$ . We compute the standard perturbations using all 10k test-set samples in CIFAR-10 and Tiny ImageNet. For the universal variants, we set the total number of iterations to 12 and reduce the initial step size of $\alpha = 0.005$ by a factor of 10 every 4 iterations. To compute a universal perturbation, we use 250 (CIFAR-10) and 200 (Tiny ImageNet) randomly chosen training samples.

$\ell_2$ -based DeepSloth. For both the standard and universal attacks, we set the total number of iterations to 550 and the step size $\gamma$ to 0.1. Our initial perturbation has an $\ell_2$ -norm of 1.0. Here, we use the same number of samples for crafting the standard and universal perturbations as in the $\ell_{\infty}$ -based attacks.

$\ell_1$ -based DeepSloth. For our standard $\ell_1$ -based DeepSloth, we set the total number of iterations to 250, the step size $\alpha$ to 0.5, and the gradient sparsity to 99. For the universal variants, we reduce the total number of iterations to 100 and set the gradient sparsity to 90. The other hyperparameters remain the same. We use the same number of samples as in the $\ell_{\infty}$ -based attacks to craft the perturbations.

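For illustration, the $\ell_{\infty}$ update behind these hyperparameters is a projected sign-gradient iteration. The sketch below is a generic version of that loop with the step size and iteration count above; the toy objective and its `grad_fn` are stand-in assumptions, not the DeepSloth objective (which targets the early-exit confidences).

```python
import numpy as np

def linf_attack(x, grad_fn, eps=0.03, alpha=0.002, steps=30):
    """Iterated sign-gradient ascent, projected onto the l-infinity ball of
    radius eps around x and clipped to the valid input range [0, 1]."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + delta)                    # gradient of the attack objective
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
        delta = np.clip(x + delta, 0.0, 1.0) - x  # keep x + delta a valid input
    return x + delta

# Toy stand-in objective: push the input away from a fixed anchor point.
anchor = np.full(8, 0.5)
grad_fn = lambda z: z - anchor                    # gradient of 0.5 * ||z - anchor||^2
x = np.linspace(0.2, 0.8, 8)
x_adv = linf_attack(x, grad_fn)
```

With 30 steps of size 0.002, the perturbation can move at most 0.06 per coordinate before projection, so it saturates an `eps = 0.03` ball wherever the gradient sign is stable.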
# C EMPIRICAL EVALUATION OF $\ell_{1}$ AND $\ell_{2}$ DEEPSLOTH

Table 4 and Table 5 show the effectiveness of the ${\ell }_{1}$ -based and ${\ell }_{2}$ -based DeepSloth attacks, respectively.

Table 4: The effectiveness of ${\ell }_{1}$ DeepSloth. The 'RAD<5%' and 'RAD<15%' columns list the results in each early-exit setting. Each entry includes the model's efficacy score (left) and accuracy (right). The class-universal attack's results are an average over 10 classes. 'TI' is Tiny ImageNet and 'C10' is CIFAR-10.

<table><tr><td>NETWORK</td><td colspan="2">MSDNET</td><td colspan="2">VGG16</td><td colspan="2">MOBILENET</td></tr><tr><td>SET.</td><td>RAD<5%</td><td>RAD<15%</td><td>RAD<5%</td><td>RAD<15%</td><td>RAD<5%</td><td>RAD<15%</td></tr><tr><td colspan="7">BASELINE (NO ATTACK)</td></tr><tr><td>C10</td><td>0.89 / 85%</td><td>0.89 / 85%</td><td>0.77 / 89%</td><td>0.89 / 79%</td><td>0.83 / 87%</td><td>0.92 / 79%</td></tr><tr><td>TI</td><td>0.64 / 55%</td><td>0.83 / 50%</td><td>0.39 / 57%</td><td>0.51 / 52%</td><td>0.42 / 57%</td><td>0.59 / 51%</td></tr><tr><td colspan="7">DEEPSLOTH</td></tr><tr><td>C10</td><td>0.36 / 51%</td><td>0.35 / 51%</td><td>0.12 / 36%</td><td>0.34 / 45%</td><td>0.18 / 41%</td><td>0.49 / 53%</td></tr><tr><td>TI</td><td>0.23 / 37%</td><td>0.51 / 40%</td><td>0.08 / 22%</td><td>0.15 / 25%</td><td>0.08 / 33%</td><td>0.19 / 35%</td></tr><tr><td colspan="7">UNIVERSAL DEEPSLOTH</td></tr><tr><td>C10</td><td>0.89 / 83%</td><td>0.89 / 83%</td><td>0.75 / 85%</td><td>0.88 / 75%</td><td>0.82 / 85%</td><td>0.92 / 77%</td></tr><tr><td>TI</td><td>0.64 / 55%</td><td>0.83 / 50%</td><td>0.38 / 57%</td><td>0.51 / 52%</td><td>0.41 / 57%</td><td>0.59 / 51%</td></tr><tr><td colspan="7">CLASS-UNIVERSAL DEEPSLOTH</td></tr><tr><td>C10</td><td>0.88 / 73%</td><td>0.88 / 73%</td><td>0.69 / 78%</td><td>0.86 / 67%</td><td>0.76 / 74%</td><td>0.89 / 65%</td></tr><tr><td>TI</td><td>0.64 / 54%</td><td>0.83 / 49%</td><td>0.39 / 59%</td><td>0.50 / 58%</td><td>0.41 / 60%</td><td>0.58 / 53%</td></tr></table>
Table 5: The effectiveness of ${\ell }_{2}$ DeepSloth. The 'RAD<5%' and 'RAD<15%' columns list the results in each early-exit setting. Each entry includes the model's efficacy score (left) and accuracy (right). The class-universal attack's results are an average over 10 classes. 'TI' is Tiny ImageNet and 'C10' is CIFAR-10.

<table><tr><td>NETWORK</td><td colspan="2">MSDNET</td><td colspan="2">VGG16</td><td colspan="2">MOBILENET</td></tr><tr><td>SET.</td><td>RAD<5%</td><td>RAD<15%</td><td>RAD<5%</td><td>RAD<15%</td><td>RAD<5%</td><td>RAD<15%</td></tr><tr><td colspan="7">BASELINE (NO ATTACK)</td></tr><tr><td>C10</td><td>0.89 / 85%</td><td>0.89 / 85%</td><td>0.77 / 89%</td><td>0.89 / 79%</td><td>0.83 / 87%</td><td>0.92 / 79%</td></tr><tr><td>TI</td><td>0.64 / 55%</td><td>0.83 / 50%</td><td>0.39 / 57%</td><td>0.51 / 52%</td><td>0.42 / 57%</td><td>0.59 / 51%</td></tr><tr><td colspan="7">DEEPSLOTH</td></tr><tr><td>C10</td><td>0.52 / 64%</td><td>0.52 / 64%</td><td>0.22 / 60%</td><td>0.45 / 62%</td><td>0.23 / 46%</td><td>0.48 / 55%</td></tr><tr><td>TI</td><td>0.24 / 42%</td><td>0.52 / 44%</td><td>0.13 / 35%</td><td>0.21 / 36%</td><td>0.12 / 38%</td><td>0.25 / 40%</td></tr><tr><td colspan="7">UNIVERSAL DEEPSLOTH</td></tr><tr><td>C10</td><td>0.89 / 81%</td><td>0.89 / 81%</td><td>0.75 / 87%</td><td>0.88 / 76%</td><td>0.81 / 84%</td><td>0.92 / 76%</td></tr><tr><td>TI</td><td>0.63 / 54%</td><td>0.82 / 48%</td><td>0.38 / 56%</td><td>0.51 / 52%</td><td>0.41 / 56%</td><td>0.58 / 51%</td></tr><tr><td colspan="7">CLASS-UNIVERSAL DEEPSLOTH</td></tr><tr><td>C10</td><td>0.88 / 73%</td><td>0.88 / 73%</td><td>0.71 / 81%</td><td>0.86 / 70%</td><td>0.76 / 76%</td><td>0.89 / 66%</td></tr><tr><td>TI</td><td>0.64 / 53%</td><td>0.83 / 49%</td><td>0.38 / 57%</td><td>0.50 / 57%</td><td>0.41 / 58%</td><td>0.58 / 53%</td></tr></table>
Our results show that the $\ell_1$ - and $\ell_2$ -based attacks are less effective than the $\ell_{\infty}$ -based attacks. In contrast to the $\ell_{\infty}$ -based attacks, which eliminate the efficacy of the victim multi-exit models, the $\ell_1$ - and $\ell_2$ -based attacks reduce the efficacy of the same models by $0.24\sim 0.65$ . Besides, the accuracy drops caused by the $\ell_1$ - and $\ell_2$ -based attacks are in the $6\sim 21\%$ range, smaller than those of $\ell_{\infty}$ -based DeepSloth ( $75\sim 99\%$ ). Moreover, the universal variants of the $\ell_1$ - and $\ell_2$ -based attacks can barely reduce the efficacy of multi-exit models: they decrease the efficacy by up to 0.08 and the accuracy by $12\%$ .

# D EMPIRICAL EVALUATION OF DEEPSLOTH ON RESNET56

Table 6 shows the effectiveness of our DeepSloth attacks on the ResNet56-based models.

Table 6: The effectiveness of DeepSloth on the ResNet-based models. The 'RAD<5%' and 'RAD<15%' columns list the results in each early-exit setting. Each entry includes the model's efficacy score (left) and accuracy (right). The class-universal attack's results are an average over 10 classes.

<table><tr><td>NETWORK</td><td colspan="2">RESNET (l∞)</td><td colspan="2">RESNET (l1)</td><td colspan="2">RESNET (l2)</td></tr><tr><td>SET.</td><td>RAD<5%</td><td>RAD<15%</td><td>RAD<5%</td><td>RAD<15%</td><td>RAD<5%</td><td>RAD<15%</td></tr><tr><td colspan="7">BASELINE (NO ATTACK)</td></tr><tr><td>C10</td><td>0.52 / 87%</td><td>0.69 / 80%</td><td>0.52 / 87%</td><td>0.69 / 80%</td><td>0.51 / 87%</td><td>0.69 / 80%</td></tr><tr><td>TI</td><td>0.25 / 51%</td><td>0.39 / 46%</td><td>0.25 / 51%</td><td>0.39 / 46%</td><td>0.25 / 51%</td><td>0.39 / 46%</td></tr><tr><td colspan="7">DEEPSLOTH</td></tr><tr><td>C10</td><td>0.00 / 19%</td><td>0.01 / 19%</td><td>0.05 / 43%</td><td>0.18 / 47%</td><td>0.06 / 45%</td><td>0.17 / 48%</td></tr><tr><td>TI</td><td>0.00 / 7%</td><td>0.01 / 7%</td><td>0.04 / 27%</td><td>0.10 / 28%</td><td>0.05 / 34%</td><td>0.13 / 35%</td></tr><tr><td colspan="7">UNIVERSAL DEEPSLOTH</td></tr><tr><td>C10</td><td>0.35 / 63%</td><td>0.59 / 60%</td><td>0.49 / 84%</td><td>0.68 / 75%</td><td>0.48 / 85%</td><td>0.67 / 76%</td></tr><tr><td>TI</td><td>0.25 / 25%</td><td>0.34 / 37%</td><td>0.25 / 51%</td><td>0.39 / 46%</td><td>0.25 / 51%</td><td>0.38 / 46%</td></tr><tr><td colspan="7">CLASS-UNIVERSAL DEEPSLOTH</td></tr><tr><td>C10</td><td>0.23 / 33%</td><td>0.48 / 29%</td><td>0.39 / 70%</td><td>0.60 / 61%</td><td>0.39 / 71%</td><td>0.60 / 61%</td></tr><tr><td>TI</td><td>0.11 / 21%</td><td>0.23 / 18%</td><td>0.23 / 51%</td><td>0.36 / 46%</td><td>0.23 / 50%</td><td>0.36 / 46%</td></tr></table>
Our results show that ResNet56-based models are vulnerable to all of the $\ell_{\infty}$ -, $\ell_{2}$ -, and $\ell_{1}$ -based DeepSloth attacks. Using our $\ell_{\infty}$ -based DeepSloth, the attacker can reduce the efficacy of the victim models to $0.00 \sim 0.01$ and the accuracy by $39 \sim 68\%$ . Besides, the $\ell_{2}$ - and $\ell_{1}$ -based attacks also decrease the efficacy to $0.04 \sim 0.18$ and the accuracy by $11 \sim 44\%$ . Compared to the results on the MSDNet, VGG16, and MobileNet models in Tables 4 and 5, the same attacks are more effective on the ResNet56-based models. The universal variants decrease the efficacy by up to 0.21 and the accuracy by up to $24\%$ . In particular, the $\ell_{2}$ - and $\ell_{1}$ -based attacks (on CIFAR-10 models) are more effective than the same attacks on the MSDNet, VGG16, and MobileNet models.

# E COST OF CRAFTING DEEPSLOTH SAMPLES

In Table 7, we compare the cost of DeepSloth with other attack algorithms on a VGG16-based CIFAR-10 model, executed on a single Nvidia Tesla V100 GPU. For the universal DeepSloth, we measure the execution time for crafting a perturbation using one batch (250 samples) of the training set. For the other attacks, we measure the time for perturbing the whole test set of CIFAR-10. Our DeepSloth takes roughly the same time as the PGD and PGD-avg attacks and significantly less time than the PGD-max attack. Our universal DeepSloth takes only 2 seconds, roughly $20\times$ faster than per-sample DeepSloth, as it only uses 250 samples.

<table><tr><td>ATTACKS</td><td>TIME (SEC.)</td></tr><tr><td>PGD-20</td><td>38</td></tr><tr><td>PGD-20 (AVG.)</td><td>48</td></tr><tr><td>PGD-20 (MAX.)</td><td>475</td></tr><tr><td>DEEPSLOTH</td><td>44</td></tr><tr><td>UNIVERSAL DEEPSLOTH</td><td>2</td></tr></table>

Table 7: Time it takes to craft attacks.

# F ADVERSARIAL EXAMPLES FROM STANDARD ATTACKS AND DEEPSLOTH

In Figure 4, we visualize the adversarial examples from the PGD, UAP, and our DeepSloth attacks.

[Figure 4 image grid. Columns: Original; Standard Attacks: PGD, PGD (avg.), PGD (max.), UAP; DeepSloth: Per-sample, Universal, Class-specific.]

Figure 4: Adversarial examples from the standard and our DeepSloth attacks. The leftmost column shows the clean images. In the next four columns, we show adversarial examples from the PGD, PGD (avg.), PGD (max.), and UAP attacks, respectively. The last three columns include adversarial examples from the three variants of DeepSloth. Each row corresponds to one sample, and the last row contains the average $\ell_{\infty}$ -norm of the perturbations over the eight samples in each attack.

apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ab8659c0cab37232b23f510dbcbbf1c3c32fc3b10c3d4ae8f1f120adf0692e0
+size 768273

apandanoitsaslothslowdownattacksonadaptivemultiexitneuralnetworkinference/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff20657dae922fc6f25a8c953f98ad8fd34a7fa0cfb02735f561e54b2aafe254
+size 524698

areneuralrankersstilloutperformedbygradientboosteddecisiontrees/47e9890f-c34e-479a-af17-493072162936_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0793b5e75e37a2ed2225075627e64142baba7f5c21b7f742f0ec26ee101e57e7
+size 100442

areneuralrankersstilloutperformedbygradientboosteddecisiontrees/47e9890f-c34e-479a-af17-493072162936_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4dd91e083b9c28d6ac25cbf511807c1ae81ab1702dc7dfdea043508bae3d5f5
+size 120641

areneuralrankersstilloutperformedbygradientboosteddecisiontrees/47e9890f-c34e-479a-af17-493072162936_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6102ce0b6a569ca793b0756f4135fbc7513b60f23a4de081daecd61fb3eba12
+size 405979

areneuralrankersstilloutperformedbygradientboosteddecisiontrees/full.md
ADDED
|
@@ -0,0 +1,391 @@
# ARE NEURAL RANKERS STILL OUTPERFORMED BY GRADIENT BOOSTED DECISION TREES?

Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, Marc Najork

Google Research

{zhenqin,lyyanle,hlz,yitay,ramakumar,xuanhui,bemike,najork}@google.com

# ABSTRACT

Despite the success of neural models on many major machine learning problems, their effectiveness on traditional Learning-to-Rank (LTR) problems is still not widely acknowledged. We first validate this concern by showing that most recent neural LTR models are, by a large margin, inferior to the best publicly available Gradient Boosted Decision Trees (GBDT) in terms of their reported ranking accuracy on benchmark datasets. This was unfortunately overlooked in recent neural LTR papers. We then investigate why existing neural LTR models under-perform and identify several of their weaknesses. Furthermore, we propose a unified framework comprising counter strategies to ameliorate the existing weaknesses of neural models. Our models are the first to perform equally well compared with the best tree-based baseline, while outperforming recently published neural LTR models by a large margin. Our results can also serve as a benchmark to facilitate future improvement of neural LTR models.

# 1 INTRODUCTION
Neural approaches have been dominating in many major machine learning domains, such as computer vision (He et al., 2015), natural language processing (Devlin et al., 2019), and speech recognition (Hannun et al., 2014). However, the effectiveness of neural approaches in traditional Learning-to-Rank (LTR), the long-established interdisciplinary research area at the intersection of machine learning and information retrieval (Liu, 2009), is not widely acknowledged (Yang et al., 2019), especially on benchmark datasets that have only numerical features.
Historically, a series of LTR models were developed by researchers at Microsoft, starting with RankNet (Burges et al., 2005) and LambdaRank (Burges et al., 2007), both based on neural networks, and culminating in LambdaMART (Wu et al., 2010), which is based on Gradient Boosted Decision Trees (GBDT); Burges (2010) provides an overview of this evolution. There are two publicly available implementations of LambdaMART: one provided by the $\mathrm{RankLib}^1$ library that is part of the Lemur Project (henceforth referred to as $\lambda \mathrm{MART}_{\mathrm{RankLib}}$ ); and the LightGBM $^2$ implementation provided by Microsoft (Ke et al., 2017) (henceforth referred to as $\lambda \mathrm{MART}_{\mathrm{GBM}}$ ). As we will show in Section 3, $\lambda \mathrm{MART}_{\mathrm{GBM}}$ substantially outperforms $\lambda \mathrm{MART}_{\mathrm{RankLib}}$ .
There is strong and continuing interest in neural ranking models, with numerous papers published in the last few years alone. Most of these papers treat RankNet and LambdaRank as weak baselines (Pang et al., 2020; Bruch et al., 2019b) and LambdaMART as the "state-of-the-art" (Bruch et al., 2019b; Li et al., 2019; Zhu & Klabjan, 2020; Hu et al., 2019). However, when examining these papers, we note that they either acknowledge their under-performance to $\lambda \mathrm{MART}_{GBM}$ or claim state-of-the-art performance by comparing to a weaker $\lambda \mathrm{MART}_{\text{RankLib}}$ implementation. The inconsistency of performance evaluation on benchmark datasets in this field has made it difficult to measure progress (Lipton & Steinhardt, 2018). It therefore remains an open question whether neural LTR models are as effective as they claim to be, and how to improve them if that is not the case.
In this paper, we first conduct a benchmark to show that $\lambda \mathrm{MART}_{GBM}$ outperforms recently published neural models, as well as the $\lambda \mathrm{MART}_{\text{RankLib}}$ , by a large margin. While the neural paradigm is still appealing in a myriad of ways, such as being composable, flexible, and able to benefit from a plethora of new advances (Vaswani et al., 2017; Devlin et al., 2019), the research progress in neural ranking models could be hindered due to their inferior performance to tree models. It thus becomes critical to understand the pitfalls of building neural rankers and boost their performance on benchmark datasets.
Specifically, we investigate why neural LTR approaches under-perform on standard LTR datasets and identify three major weaknesses that are typically ignored by recent work. First, neural models are not as adept at performing effective feature transformations and scaling, which is one major benefit of using tree-based methods (Saberian et al., 2019). In ranking data which is typically long-tailed, this can be a prohibitive property. Second, standard feed-forward networks are ineffective in generating higher-order features as noted by recent papers (Wang et al., 2017b; Beutel et al., 2018). More effective network architectures for neural LTR models are needed. Third, recent neural LTR work on benchmark datasets does not employ high-capacity networks, a key success factor of many neural models (Devlin et al., 2019), possibly due to a small scale of training data that causes overfitting. On the other hand, there are several potential benefits of neural approaches over LambdaMART for LTR, such as their flexibility to model listwise data and the existence of many techniques to mitigate data sparsity. To that end, we propose a new framework that ameliorates the weaknesses of existing neural LTR approaches and improves almost all major network components.
In the proposed framework, we make several technical contributions: (1) We demonstrate empirical evidence that a simple log1p transformation on the input features is very helpful. (2) We use data augmentation (DA) to make the most out of high-capacity neural models, which is surprisingly the first work in the LTR literature to do so. We show that adding a simple Gaussian noise helps, but only when the model capacity is appropriately augmented (which probably explains why there is no prior work on such a simple idea). (3) We use self-attention (SA) to model the listwise ranking data as context, and propose to use latent cross (LC) to effectively generate the interaction of each item and its listwise context.
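The first two ingredients above can be sketched in a few lines. The symmetric variant of the `log1p` transform and the noise scale below are illustrative assumptions, not the exact choices tuned in the paper.

```python
import numpy as np

def log1p_transform(x):
    """Symmetric log1p: compresses long-tailed feature magnitudes while
    preserving sign and keeping zeros at zero."""
    return np.sign(x) * np.log1p(np.abs(x))

def gaussian_augment(x, sigma=0.1, rng=None):
    """Gaussian-noise data augmentation on the (already transformed) features."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return x + rng.normal(scale=sigma, size=x.shape)

# A toy query with 2 items and 3 long-tailed numerical features.
features = np.array([[0.0, 3.0, -999.0],
                     [12000.0, 1.0, 0.5]])
transformed = log1p_transform(features)
augmented = gaussian_augment(transformed)
```

Note how the transform pulls a feature value of 12000 down to roughly 9.4, so features on very different scales become comparable before any noise is added.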
We conduct experiments on three widely used public LTR datasets. Our neural models are trained with listwise ranking losses. On all datasets, our framework can outperform recent neural LTR methods by a large margin. When comparing with the strong LambdaMART implementation, $\lambda \mathrm{MART}_{GBM}$ , we are able to achieve equally good results, if not better. Our work can also serve as a benchmark for neural ranking models, which we believe can lay a fertile ground for future neural LTR research, as rigorous benchmarks on datasets such as ImageNet (Russakovsky et al., 2015) and GLUE (Wang et al., 2018a) do in their respective fields.
# 2 BACKGROUND
We provide some background on LTR, including its formulation and common metrics. We review LambdaMART and highlight its two popular implementations which are causes of the inconsistency of evaluations in the recent literature.
# 2.1 LEARNING TO RANK
LTR methods are supervised techniques and the training data can be represented as a set $\Psi = \{(\mathbf{x},\mathbf{y})\in \chi^n\times \mathbb{R}^n\}$, where $\mathbf{x}$ is a list of $n$ items $x_{i}\in \chi$ and $\mathbf{y}$ is a list of $n$ relevance labels $y_{i}\in \mathbb{R}$ for $1\leq i\leq n$. We use $\chi$ as the universe of all items. In traditional LTR problems, each $x_{i}$ corresponds to a query-item pair and is represented as a feature vector in $\mathbb{R}^k$, where $k$ is the number of feature dimensions. With a slight abuse of notation, we also use $x_{i}$ as the feature vector and say $\mathbf{x}\in \mathbb{R}^{n\times k}$. The objective is to learn a function that produces an ordering of items in $\mathbf{x}$ so that the utility of the ordered list is maximized.
Most LTR algorithms formulate the problem as learning a ranking function to score and sort the items in a list. As such, the goal of LTR boils down to finding a parameterized ranking function
$s(\cdot ;\Theta):\chi^n\to \mathbb{R}^n$ , where $\Theta$ denotes the set of parameters, to minimize the empirical loss:
$$
\mathcal{L}(s) = \frac{1}{|\Psi|} \sum_{(\mathbf{x}, \mathbf{y}) \in \Psi} l(\mathbf{y}, s(\mathbf{x})), \tag{1}
$$
where $l(\cdot)$ is the loss function on a single list. LTR algorithms differ primarily in how they parameterize $s$ and how they define $l$ .
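Concretely, Eq. (1) is a plain average of per-list losses over the training set. A minimal sketch, where the scorer `s` and list loss `l` are hypothetical stand-ins for whatever parameterization an LTR method chooses:

```python
def empirical_loss(psi, s, l):
    """Eq. (1): the empirical loss is the average per-list loss over Psi.

    psi: iterable of (x, y) pairs, where x is a list of item features and
         y is the list of relevance labels.
    s:   a scoring function mapping x to a list of scores.
    l:   a per-list loss l(y, s(x)).
    """
    return sum(l(y, s(x)) for x, y in psi) / len(psi)


# Toy usage with an identity scorer and a squared-error list loss
# (both are illustrative, not part of the paper's framework):
sq_loss = lambda y, sx: sum((a - b) ** 2 for a, b in zip(y, sx))
identity = lambda x: x
psi = [([1.0, 2.0], [1.0, 2.0]), ([0.0, 0.0], [1.0, 1.0])]
```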
There are many existing ranking metrics such as NDCG and MAP used in LTR problems. A common property of these metrics is that they are rank-dependent and place more emphasis on the top ranked items. For example, the commonly adopted NDCG metric is defined as
$$
\mathrm{NDCG}(\pi_s, \mathbf{y}) = \frac{\mathrm{DCG}(\pi_s, \mathbf{y})}{\mathrm{DCG}(\pi^*, \mathbf{y})}, \tag{2}
$$
where $\pi_s$ is a ranked list induced by the ranking function $s$ on $\mathbf{x}$ , $\pi^*$ is the ideal list (where $\mathbf{x}$ is sorted by $\mathbf{y}$ ), and $DCG$ is defined as:
$$
\mathrm{DCG}(\pi, \mathbf{y}) = \sum_{i=1}^{n} \frac{2^{y_i} - 1}{\log_2(1 + \pi(i))} = \sum_{i=1}^{n} \frac{G_i}{D_i} \tag{3}
$$
In practice, the truncated version that only considers the top-k ranked items, denoted as NDCG@k, is often used.
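A minimal NumPy sketch of Eqs. (2)-(3) and the truncated NDCG@k (function names are ours, not from any ranking library):

```python
import numpy as np

def dcg_at_k(ranked_labels, k):
    """DCG@k per Eq. (3): gain G_i = 2^{y_i} - 1, discount D_i = log2(1 + rank)."""
    gains = (2.0 ** np.asarray(ranked_labels[:k], dtype=float)) - 1.0
    discounts = np.log2(np.arange(2, len(gains) + 2))  # ranks 1..k -> log2(2..k+1)
    return float(np.sum(gains / discounts))

def ndcg_at_k(scores, labels, k):
    """NDCG@k per Eq. (2): DCG of the score-induced ordering over the ideal DCG."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=float)
    order = np.argsort(-scores)          # ranking pi_s induced by the scores
    ideal = np.sort(labels)[::-1]        # ideal ordering pi*, sorted by y
    ideal_dcg = dcg_at_k(ideal, k)
    if ideal_dcg == 0.0:
        return 0.0                       # no relevant items in the list
    return dcg_at_k(labels[order], k) / ideal_dcg
```

A perfect ranking yields NDCG@k of 1.0 by construction; any inversion among items with different labels yields a strictly smaller value.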
# 2.2 LAMBDAMART
LTR models have evolved from linear models (Joachims, 2002), to neural networks (Burges et al., 2005), and then to decision trees (Burges, 2010) over the past two decades. LambdaMART, proposed about ten years ago (Wu et al., 2010; Burges, 2010), is still treated as the "state-of-the-art" for LTR problems in recent papers (Bruch et al., 2019b; Zhu & Klabjan, 2020). It is based on Gradient Boosted Decision Trees (GBDT). During each boosting step, the loss is dynamically adjusted based on the ranking metric in consideration. For example, $\Delta$NDCG is defined as the absolute difference between the NDCG values when two documents $i$ and $j$ swap their positions in the list ranked by the ranking function obtained so far:
$$
\Delta \mathrm{NDCG}(i, j) = |G_i - G_j| \cdot \left| \frac{1}{D_i} - \frac{1}{D_j} \right|. \tag{4}
$$
Then LambdaMART uses a pairwise logistic loss and adapts the loss by re-weighting each item pair in each iteration, with $s(\mathbf{x})|_i$ being the score for item $i$ and $\alpha$ being a hyperparameter:
$$
l(\mathbf{y}, s(\mathbf{x})) = \sum_{y_i > y_j} \Delta \mathrm{NDCG}(i, j) \log_2\left(1 + e^{-\alpha (s(\mathbf{x})|_i - s(\mathbf{x})|_j)}\right) \tag{5}
$$
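The $\Delta$NDCG re-weighting of Eq. (4) and the resulting pairwise loss of Eq. (5) can be sketched as follows. This is an illustrative $O(n^2)$ NumPy implementation of the loss value only, not the boosted-tree training loop; the function names are ours:

```python
import numpy as np

def delta_ndcg(labels, ranks, i, j):
    """|G_i - G_j| * |1/D_i - 1/D_j| per Eq. (4), with G = 2^y - 1, D = log2(1 + rank)."""
    gain = lambda y: 2.0 ** y - 1.0
    disc = lambda r: np.log2(1.0 + r)
    return abs(gain(labels[i]) - gain(labels[j])) * abs(
        1.0 / disc(ranks[i]) - 1.0 / disc(ranks[j]))

def lambdamart_loss(labels, scores, alpha=1.0):
    """Pairwise logistic loss re-weighted by DeltaNDCG, per Eq. (5)."""
    scores_arr = np.asarray(scores, dtype=float)
    # ranks[i] = 1-based position of item i when sorted by descending score
    ranks = np.empty(len(scores_arr), dtype=int)
    ranks[np.argsort(-scores_arr)] = np.arange(1, len(scores_arr) + 1)
    loss = 0.0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:  # only pairs where i is more relevant
                w = delta_ndcg(labels, ranks, i, j)
                loss += w * np.log2(1.0 + np.exp(-alpha * (scores_arr[i] - scores_arr[j])))
    return loss
```

Scores that agree with the labels should incur a much smaller loss than inverted scores, since both the logistic term and the $\Delta$NDCG weight penalize misordered pairs near the top.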
There are two popular public implementations of LambdaMART, namely $\lambda \mathrm{MART}_{\mathrm{GBM}}$ and $\lambda \mathrm{MART}_{\mathrm{RankLib}}$ . $\lambda \mathrm{MART}_{\mathrm{GBM}}$ is more recent than $\lambda \mathrm{MART}_{\mathrm{RankLib}}$ and has more advanced features by leveraging novel data sampling and feature bundling techniques (Ke et al., 2017). However, recent neural LTR papers either use the weaker implementation of $\lambda \mathrm{MART}_{\mathrm{RankLib}}$ (Pang et al., 2020; Wang et al., 2017a; Ai et al., 2018; 2019), or acknowledge the inferior performance of neural models when compared with $\lambda \mathrm{MART}_{\mathrm{GBM}}$ (Bruch et al., 2019b). Such an inconsistency makes it hard to determine whether neural models are indeed more effective than the tree-based models.
# 3 BENCHMARKING EXISTING METHODS
To resolve the inconsistency, we perform a benchmark on three popular LTR benchmark datasets to show that: 1) there is a large gap between the two implementations of tree-based LambdaMART $\lambda \mathrm{MART}_{\mathrm{GBM}}$ and $\lambda \mathrm{MART}_{\mathrm{RankLib}}$ ; 2) Recent neural LTR methods are generally significantly worse than the stronger implementation. Then we discuss several weaknesses of recent neural LTR approaches, and point out promising directions, which lay the foundation of our proposed framework.
Table 1: The statistics of the three largest public benchmark datasets for LTR models.
<table><tr><td></td><td>#features</td><td colspan="3">#queries</td><td colspan="3">#docs</td></tr><tr><td></td><td></td><td>training</td><td>validation</td><td>test</td><td>training</td><td>validation</td><td>test</td></tr><tr><td>Web30K</td><td>136</td><td>18,919</td><td>6,306</td><td>6,306</td><td>2,270,296</td><td>747,218</td><td>753,611</td></tr><tr><td>Yahoo</td><td>700</td><td>19,944</td><td>2,994</td><td>6,983</td><td>473,134</td><td>71,083</td><td>165,660</td></tr><tr><td>Istella</td><td>220</td><td>20,901</td><td>2,318</td><td>9,799</td><td>6,587,822</td><td>737,803</td><td>3,129,004</td></tr></table>
Table 2: All numbers are significantly worse than the corresponding number from $\lambda {\mathrm{{MART}}}_{GBM}$ at the $p < {0.05}$ level using a two-tailed $t$ -test. Best performing numbers are bold.
<table><tr><td rowspan="2">Models</td><td rowspan="2">Rerank</td><td colspan="3">Web30K NDCG@k</td><td colspan="3">Yahoo NDCG@k</td><td colspan="3">Istella NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>λMARTRankLib</td><td>X</td><td>45.35</td><td>44.59</td><td>46.46</td><td>68.52</td><td>70.27</td><td>74.58</td><td>65.71</td><td>61.18</td><td>65.91</td></tr><tr><td>λMARTGBM</td><td>X</td><td>50.73</td><td>49.66</td><td>51.48</td><td>71.88</td><td>74.21</td><td>78.02</td><td>74.92</td><td>71.24</td><td>76.07</td></tr><tr><td>RankSVM</td><td>X</td><td>30.10</td><td>33.50</td><td>36.50</td><td>63.70</td><td>67.40</td><td>72.60</td><td>52.69</td><td>50.41</td><td>55.29</td></tr><tr><td>GSF</td><td>X</td><td>41.29</td><td>41.51</td><td>43.74</td><td>64.29</td><td>68.38</td><td>73.16</td><td>62.24</td><td>59.68</td><td>65.08</td></tr><tr><td>ApproxNDCG</td><td>X</td><td>46.64</td><td>45.38</td><td>47.31</td><td>69.63</td><td>72.32</td><td>76.77</td><td>65.81</td><td>62.32</td><td>67.09</td></tr><tr><td>DLCM</td><td>✓</td><td>46.30</td><td>45.00</td><td>46.90</td><td>67.70</td><td>69.90</td><td>74.30</td><td>65.58</td><td>61.94</td><td>66.80</td></tr><tr><td>SetRank</td><td>X</td><td>42.90</td><td>42.20</td><td>44.28</td><td>67.11</td><td>69.60</td><td>73.98</td><td>67.33</td><td>62.78</td><td>67.37</td></tr><tr><td>SetRankre</td><td>✓</td><td>45.91</td><td>45.15</td><td>46.96</td><td>68.22</td><td>70.29</td><td>74.53</td><td>67.60</td><td>63.45</td><td>68.34</td></tr></table>
# 3.1 DATASETS
The three datasets we use in our experiments are public benchmark datasets widely adopted by the research community: the LETOR dataset from Microsoft (Qin & Liu, 2013), Set1 from the Yahoo! LTR challenge (Chapelle & Chang, 2011), and Istella (Dato et al., 2016). We call them Web30K, Yahoo, and Istella respectively. All of them are web search ranking datasets and the largest publicly available datasets for LTR algorithms. The relevance labels of documents for each query are rated by humans in the form of multilevel graded relevance. See Qin & Liu (2013) for an example list of features, such as the number of URL clicks or the BM25 scores of different page sections. An overview of these three datasets is shown in Table 1.
# 3.2 COMPARISON
We compare a comprehensive list of methods in Table 2. $\lambda \mathrm{MART}_{\mathrm{GBM}}$ (Ke et al., 2017) and $\lambda \mathrm{MART}_{\mathrm{RankLib}}$ are the two LambdaMART implementations. RankSVM (Joachims, 2006) is a classic pairwise learning-to-rank model built on SVM. GSF (Ai et al., 2019) is a neural model using a groupwise scoring function and fully connected layers. ApproxNDCG (Bruch et al., 2019b) is a neural model with fully connected layers and a differentiable loss that approximates NDCG (Qin et al., 2010). DLCM (Ai et al., 2018) is an RNN-based neural model that uses listwise context information to rerank a list of documents initially ranked by $\lambda \mathrm{MART}_{\mathrm{RankLib}}$, as in the original paper. SetRank (Pang et al., 2020) is a neural model using self-attention to encode the entire list and perform joint scoring. SetRank$^{re}$ (Pang et al., 2020) is SetRank plus ordinal embeddings based on the initial document ranking generated by $\lambda \mathrm{MART}_{\mathrm{RankLib}}$, as in the original paper.
We choose to compare these methods because they are either popular or recent. The neural models already leverage advanced neural techniques, such as modeling the entire ranking list, which is difficult for tree-based models to achieve. We reproduced results for $\lambda \mathrm{MART}_{\mathrm{RankLib}}$, $\lambda \mathrm{MART}_{\mathrm{GBM}}$, RankSVM, GSF, and ApproxNDCG with extensive hyperparameter tuning; more details are in Appendix A. Results for the DLCM and SetRank methods are from their respective papers, where the authors did their own tuning. Note that the test set is fixed for all datasets, so the numbers are comparable.
From Table 2, we can see the following. 1) $\lambda \mathrm{MART}_{\mathrm{GBM}}$ is the more appropriate "state-of-the-art" LambdaMART baseline, as it significantly outperforms $\lambda \mathrm{MART}_{\mathrm{RankLib}}$. 2) Recent neural LTR methods, though they sometimes outperform $\lambda \mathrm{MART}_{\mathrm{RankLib}}$, are inferior to $\lambda \mathrm{MART}_{\mathrm{GBM}}$ by a large margin,
sometimes by as much as $15\%$ in relative terms. These results show the inconsistency of existing methods and validate the concerns about the current practice of neural LTR models<sup>3</sup>.
# 4 NEURAL LTR MODELS
A natural question is: why do neural models under-perform on LTR benchmark datasets compared with LambdaMART, despite their success in many machine learning research areas? We first identify a few weaknesses of the neural LTR models and then propose our methods to address them.
# 4.1 WEAKNESSES
By reviewing recent papers and the strength of tree-based models, we give the following hypotheses:
Feature transformation. Neural networks are sensitive to input feature scales and transformations (Saberian et al., 2019). LTR datasets consist of features of diverse scales with long-tail distributions, such as the number of clicks of an item. Tree-based models are known to partition the feature space effectively, which is beneficial for datasets (such as LTR datasets) with only numeric features. Some recent work already shows the benefits of better input feature transformations than Gaussian normalization (Saberian et al., 2019; Zhuang et al., 2020). Unfortunately, neither the pioneering neural LTR papers (Burges et al., 2005; 2007) nor the most recent ones discuss the impact of feature transformation.
Network architecture. Unless the focus is the neural architecture, neural LTR papers typically use a standard feed-forward network that consists of a stack of fully connected layers. However, fully connected layers are known to be ineffective in generating higher-order feature interactions. The problem has been widely studied in areas such as ads prediction (Wang et al., 2017b) and recommender systems (Beutel et al., 2018), but has not received enough attention for LTR.
Data sparsity. Recent neural LTR models are small and do not employ high-capacity networks (Bruch et al., 2019b; Pang et al., 2020), possibly due to overfitting issues. While large datasets are a key factor in many recent successes of neural models in other domains (He et al., 2015; Devlin et al., 2019), the publicly available LTR datasets are comparatively small. Techniques such as data augmentation are commonly used to mitigate overfitting in high-capacity networks in other areas (Perez & Wang, 2017). But it is less intuitive how to do data augmentation for LTR datasets, compared with, e.g., rotating a cat image in computer vision.
# 4.2 IMPROVEMENTS
We introduce our proposed neural LTR framework that tries to address the above mentioned concerns. Figure 1 summarizes our DASALC framework, which stands for Data Augmented Self-Attentive Latent Cross ranking network.
# 4.2.1 EXPLICIT FEATURE TRANSFORMATION AND DATA AUGMENTATION
Features in LTR datasets are diverse and can be of different scales. Of the three datasets we consider, only the Yahoo dataset has been normalized (we leave it untransformed). It is well known that neural networks are sensitive to input data scale, so we apply a simple "log1p" transformation to every element of $\mathbf{x}$ and empirically find it works well for the Web30K and Istella datasets:
$$
\mathbf{x} = \log_e(1 + |\mathbf{x}|) \odot \operatorname{sign}(\mathbf{x}), \tag{6}
$$
where $\odot$ is the element-wise multiplication operator.
We use a very simple data augmentation technique on LTR datasets. We add a random Gaussian noise independently to every element of input vector $\mathbf{x}$ :
$$
\mathbf{x} = \mathbf{x} + \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}) \tag{7}
$$

Figure 1: An illustration of the DASALC. FC is fully connected layer, ReLU is ReLU activation, and BN indicates batch normalization. Log1p Transform is applied when applicable. Softmax loss is short for softmax output with cross-entropy loss.
where $\sigma$ is a scalar hyperparameter. The random noise is added after the log1p transformation in an online fashion during training (i.e. different perturbations will be added to the same data point seen in different batches). A single scalar $\sigma$ for every feature is reasonable because the feature distributions are normalized by $\log 1p$ . Also data augmentation is added after input Batch Normalization (BN) when applicable. Note that the random noise is added independently to every element so (later) BN will not cancel it away. We find such a simple data augmentation technique works well in our framework, but as shown in experiments, it only works when the capacity of the network is properly augmented as described in the next section.
For notation simplicity, we combine the $\log 1p$ feature transformation and data augmentation into a single function $\mathbf{f}:\mathbb{R}^{n\times k}\to \mathbb{R}^{n\times k}$ :
$$
\mathbf{f} = \log_e(1 + |\mathbf{x}|) \odot \operatorname{sign}(\mathbf{x}) + \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}) \tag{8}
$$
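A sketch of the combined transform $\mathbf{f}$ of Eq. (8). The function name and signature are ours; in actual training the noise would be redrawn for every batch, so the same example is perturbed differently each time it is seen:

```python
import numpy as np

def f(x, sigma, rng):
    """Eq. (8): symmetric log1p transform (Eq. 6) plus Gaussian noise (Eq. 7).

    A single scalar sigma for all features is reasonable here because the
    log1p transform roughly equalizes the feature scales first.
    """
    x = np.asarray(x, dtype=float)
    x = np.log1p(np.abs(x)) * np.sign(x)             # Eq. (6), sign-preserving
    return x + rng.normal(0.0, sigma, size=x.shape)  # Eq. (7), i.i.d. per element
```

With `sigma=0` this reduces to the pure log1p transform, which maps a long-tailed value like $10^6$ down to about $13.8$ while preserving the sign of negative features.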
# 4.2.2 LEVERAGING LISTWISE CONTEXT
For the LTR problem, the list of documents can be leveraged in neural models. This is the key basis for enhancing the network architecture for LTR. We leverage the multi-head self-attention (MHSA) mechanism (Vaswani et al., 2017) to encode ranking list information. More specifically, we generate a contextual embedding $\mathbf{a}_i$ for each item $i$, considering the similarity between document $i$ and every document in the list. For the multi-head self-attention mechanism, we have the input $\mathbf{f} \in \mathbb{R}^{n \times k}$, and project $\mathbf{f}$ into a query (in the context of the attention mechanism) matrix $Q = \mathbf{f}W^{Q}$, a key matrix $K = \mathbf{f}W^K$, and a value matrix $V = \mathbf{f}W^V$ with trainable projection matrices $W^Q$, $W^K$, and $W^V \in \mathbb{R}^{k \times z}$, where $z$ is the attention head size. Then a self-attention (SA) head computes the weighted sum of the transformed values $V$ as,
$$
\operatorname{SA}(\mathbf{f}) = \operatorname{Softmax}(S(\mathbf{f})) V, \tag{9}
$$
where the similarity matrix between $Q$ and $K$ is defined as $S(\mathbf{f}) = \frac{QK^T}{\sqrt{z}}$. For each layer, the outputs from the $H$ heads are concatenated to form the output of multi-head self-attention:
$$
\operatorname{MHSA}(\mathbf{f}) = \operatorname{concat}_{h \in [H]}\left[\operatorname{SA}_h(\mathbf{f})\right] W_{\mathrm{out}} + b_{\mathrm{out}}, \tag{10}
$$
where $W_{\mathrm{out}} \in \mathbb{R}^{Hz \times z}$ and $b_{\mathrm{out}} \in \mathbb{R}^{n \times z}$ are trainable parameters. We apply $L \geq 1$ layers of multi-head self-attention followed by a layer normalization (Ba et al., 2016) similarly to (Vaswani et al., 2017).
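A NumPy sketch of Eqs. (9)-(10) for a single layer (illustrative only; we broadcast the bias per item here so the layer stays permutation equivariant, cf. Proposition 1, and the function names are ours):

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sa_head(F, Wq, Wk, Wv):
    """One self-attention head per Eq. (9), with S(f) = Q K^T / sqrt(z)."""
    Q, K, V = F @ Wq, F @ Wk, F @ Wv
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def mhsa(F, heads, W_out, b_out):
    """Eq. (10): concatenate the H head outputs, then apply an output projection."""
    concat = np.concatenate([sa_head(F, *hw) for hw in heads], axis=-1)
    return concat @ W_out + b_out
```

Permuting the rows of the input permutes the output rows identically, which is the core of the permutation equivariance property discussed below.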
By treating $\mathbf{a}_i$ as the listwise contextual embedding for item $i$ , we further leverage the simple latent cross idea (Beutel et al., 2018) to effectively generate feature interactions:
$$
h_i^{\mathrm{cross}} = (1 + \mathbf{a}_i) \odot h_{\mathrm{out}}(x_i), \tag{11}
$$
where $\odot$ is the element-wise multiplication operator ( $\mathbf{a}_i$ will go through a linear projection when the dimensions do not match, omitted in the equation), and $h_{\mathrm{out}}(x_i)$ is the output of the final hidden layer of regular network.
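A minimal sketch of the latent cross of Eq. (11); the optional projection matrix `W_proj` is our hypothetical stand-in for the linear projection omitted in the equation:

```python
import numpy as np

def latent_cross(a_i, h_i, W_proj=None):
    """Eq. (11): element-wise gate (1 + a_i) * h_out(x_i).

    a_i:    contextual embedding of item i from the self-attention stack.
    h_i:    output of the final hidden layer of the regular network, h_out(x_i).
    W_proj: optional projection applied to a_i when dimensions do not match.
    """
    a_i = np.asarray(a_i, dtype=float)
    h_i = np.asarray(h_i, dtype=float)
    if W_proj is not None:
        a_i = a_i @ W_proj  # align the context width with the hidden width
    return (1.0 + a_i) * h_i
```

The `1 +` term means a zero context leaves the item representation unchanged, so the listwise context acts as a multiplicative correction rather than a replacement.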
Learning to rank can be seen as learning to induce an order over a set of items. One desirable property for ranking approaches that use listwise context is permutation equivariance: applying a permutation to the input items leads to the equivalent permutation of the output scores. DASALC satisfies this permutation equivariance property.
Proposition 1. Let $\pi$ be a permutation of indices of $[1,..,n]$ and $\pmb{x} \in \mathbb{R}^{n \times k}$ be the input item representation. DASALC is permutation equivariant for scores generated over input items, i.e., $s_{DASALC}(\pi(\pmb{x})) = \pi(s_{DASALC}(\pmb{x}))$ . See proof at Appendix C.
# 4.3 REMARKS
We compared several popular pointwise, pairwise, and listwise ranking losses. We report all results based on the softmax cross entropy loss $l(\mathbf{y}, s(\mathbf{x})) = -\sum_{i=1}^{n} y_i \log_e \frac{e^{s_i}}{\sum_j e^{s_j}}$ since it is simple and empirically robust in general, as demonstrated in Appendix B.2.
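The softmax cross-entropy loss above can be sketched as follows (a numerically stable version; the function name is ours):

```python
import numpy as np

def softmax_cross_entropy_loss(y, s):
    """Listwise softmax loss: l(y, s) = -sum_i y_i * log(e^{s_i} / sum_j e^{s_j})."""
    y = np.asarray(y, dtype=float)
    s = np.asarray(s, dtype=float)
    # log-softmax via the log-sum-exp trick to avoid overflow for large scores
    log_p = s - (s.max() + np.log(np.exp(s - s.max()).sum()))
    return float(-(y * log_p).sum())
```

The loss is smallest when the score distribution concentrates on the items with the highest relevance labels, which is what makes it a listwise ranking loss despite its simple form.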
We provided a general framework that can enhance neural LTR models in many components. For each component, we purposefully use simple or well-known techniques for enhancement because the scope of the current research is to identify the possible reasons why neural LTR is under-performing when compared with the best traditional tree-based methods. Clearly, each component can use more advanced techniques, such as learning a more flexible data transformation (Zhuang et al., 2020) or using data augmentation policy (Cubuk et al., 2019), which we leave as future work.
# 5 EXPERIMENTS
We conduct experiments on the three LTR datasets (introduced in Sec 3.1) with our proposed framework and compare with some methods in Sec 3. For all our experiments using neural network approaches, we implemented them using the TF-Ranking (Pasumarthi et al., 2019) library.
We use two variants of our proposed approach. DASALC is a model trained in our proposed framework. DASALC-ens is an ensemble of DASALC. Since LambdaMART is itself an ensemble method based on boosting, we leverage the randomness of neural model training and simply use the average score of 3-5 models (tuned on the validation set) from different runs as the final score in DASALC-ens.
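The ensembling step is deliberately simple; a sketch (the function name is ours):

```python
import numpy as np

def ensemble_scores(runs):
    """DASALC-ens: the final score of each document is the plain average of
    scores from independently trained models (3-5 runs in our experiments)."""
    return np.mean(np.asarray(runs, dtype=float), axis=0)
```

Because NDCG depends only on the ordering induced by the scores, averaging acts purely as a variance-reduction step across random initializations.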
Main result. The results are summarized in Table 3. We focus on the comparison with $\lambda \mathrm{MART}_{GBM}$ and also include SetRank to highlight the difference with recent neural LTR models. Readers can refer to Table 2 for more results. We tune hyperparameters on the validation sets, with more details in Appendix A. We have the following observations and discussions: (1) DASALC can sometimes achieve comparable or better results than $\lambda \mathrm{MART}_{GBM}$, and outperforms recent neural LTR methods by a large margin. (2) DASALC-ens, though simple, achieves neutral or significantly better results than $\lambda \mathrm{MART}_{GBM}$ on all datasets and metrics. (3) The results on the Yahoo dataset are weaker than on the other two datasets. One thing to note is that the Yahoo dataset is already normalized upon release. Given the importance of input feature transformation we noted, the provided normalization may not be ideal for neural models; releasing LTR datasets with raw feature values should therefore be encouraged.
Table 3: Result on the Web30K, Yahoo, and Istella datasets. $\uparrow$ means significantly better result, performed against $\lambda \mathrm{MART}_{GBM}$ at the $p < 0.05$ level using a two-tailed $t$ -test. Last row is relative difference of DASALC-ens over $\lambda \mathrm{MART}_{GBM}$ .
<table><tr><td rowspan="2">Models</td><td colspan="3">Web30K NDCG@k</td><td colspan="3">Yahoo NDCG@k</td><td colspan="3">Istella NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>λMARTGBM</td><td>50.73</td><td>49.66</td><td>51.48</td><td>71.88</td><td>74.21</td><td>78.02</td><td>74.92</td><td>71.24</td><td>76.07</td></tr><tr><td>SetRankre</td><td>45.91</td><td>45.15</td><td>46.96</td><td>68.22</td><td>70.29</td><td>74.53</td><td>67.60</td><td>63.45</td><td>68.34</td></tr><tr><td>DASALC</td><td>50.95</td><td>50.92↑</td><td>52.88↑</td><td>70.98</td><td>73.76</td><td>77.66</td><td>72.77</td><td>70.06</td><td>75.30</td></tr><tr><td>DASALC-ens (Relative diff)</td><td>51.89↑ (+2.29%)</td><td>51.72↑ (+4.15%)</td><td>53.73↑ (+4.37%)</td><td>71.24 (-0.89%)</td><td>74.07 (-0.18%)</td><td>77.97 (-0.06%)</td><td>74.40 (-0.69%)</td><td>71.32 (+0.11%)</td><td>76.44↑ (+0.49%)</td></tr></table>
Table 4: NDCG@5 on Istella when different components are added.
<table><tr><td>Model</td><td>DNN</td><td>+log1p</td><td>+SA</td><td>+LC</td><td>+DA</td><td>+ens</td></tr><tr><td>NDCG@5</td><td>64.72</td><td>67.09</td><td>68.32</td><td>68.80</td><td>70.06</td><td>71.32</td></tr></table>
Ablation study. We provide some ablation study results in Table 4 to highlight the effectiveness of each component in our framework. Each component is added cumulatively from left to right in the table. We can see that each component helps and the best performance is achieved when all components are combined. More detailed ablation study is provided in Appendix B. Appendix B.1 gives more results on the effect of the log1p transformation. Appendix B.2 compares different loss functions and shows that listwise ranking loss performs better. Appendix B.3 shows the benefit of effective listwise context modeling. Appendix B.4 shows the effect of data augmentation in different model architectures.
# 6 RELATED WORK
We focus on traditional LTR problems where only numeric features and human ratings are available. Some works (Mitra & Craswell, 2018; Nogueira et al., 2019; Han et al., 2020) on document matching and ranking leverage neural components such as word2vec and BERT when raw text is available; there the major benefit comes from semantic modeling of highly sparse input, and tree-based methods become less relevant due to their limitations in handling sparse features.
The pioneering neural LTR models are RankNet (Burges et al., 2005) and LambdaRank (Burges et al., 2007). They use feed-forward networks on dense features as their scoring functions and became less favored than the tree-based LambdaMART (Burges, 2010). Recent neural LTR models have explored new model architectures (Pang et al., 2020; Qin et al., 2020b), differentiable losses (Bruch et al., 2019b), and leveraging more auxiliary information (Ai et al., 2018). However, there is little work that specifically understands and addresses the weaknesses of neural LTR, and a benchmark against strong tree-based baselines is missing. In this work, we show that relatively simple components that aim to address weaknesses of neural models can outperform recent methods significantly.
The idea of generating new data for LTR has been explored in a few recent works, but their focus is on training more discriminative ranking models, not on mitigating the data sparsity problem for high-capacity neural models. For example, Yu & Lam (2019) use a separate autoencoder model to generate data that are then fed into tree-based models. Such work can be treated as orthogonal to our data augmentation technique.
Several LTR papers have leveraged neural sequence modeling based on LSTMs (Ai et al., 2018) or self-attention (Pang et al., 2020; Pasumarthi et al., 2020), which is not easy for tree-based approaches to model. We also leverage listwise context via self-attention to show that neural LTR models are easily extendable. The combination of self-attention based listwise context and latent cross in our work, used specifically to mitigate the ineffectiveness of neural models in generating higher-order feature interactions, has not been explored in the literature.
Our work is mostly orthogonal to another line of LTR research, namely unbiased learning to rank from implicit feedback data, such as clicks (Joachims et al., 2017; Hu et al., 2019; Qin et al., 2020a; Zhuang et al., 2021). There are also papers that try to reproduce tree models using neural architectures for tabular data (Saberian et al., 2019; Lee & Jaakkola, 2020). Our motivation is different in that our goal is to identify and mitigate weaknesses of neural approaches in general.
# 7 CONCLUSION AND DISCUSSION
In this paper, we first showed the inconsistency of performance comparisons between neural rankers and GBDT models, and verified the inferior performance of neural models. We then identified weaknesses in multiple components of neural rankers and proposed methods to address them. Our proposed framework performs competitively with the strong tree-based baselines. We believe our general framework and rigorous benchmarking provide a critical contribution to facilitating future neural LTR research. In particular, neural models are powerful in modeling complex relations (e.g., via the attention mechanism (Vaswani et al., 2017)) and raw text features (e.g., via BERT (Devlin et al., 2019)). Also, the active research on neural networks in other domains continuously advances neural techniques (e.g., optimizers (Kingma & Ba, 2014)). All these can be studied in the LTR setting, and our work paves the way to avoid pitfalls when leveraging these techniques.
# REFERENCES

Qingyao Ai, Keping Bi, Jiafeng Guo, and W. Bruce Croft. Learning a deep listwise context model for ranking refinement. In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 135-144, 2018.

Qingyao Ai, Xuanhui Wang, Sebastian Bruch, Nadav Golbandi, Michael Bendersky, and Marc Najork. Learning groupwise multivariate scoring functions using deep neural networks. In Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, pp. 85-92, 2019.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

Alex Beutel, Paul Covington, Sagar Jain, Can Xu, Jia Li, Vince Gatto, and Ed H. Chi. Latent cross: Making use of context in recurrent recommender systems. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 46-54, 2018.

Sebastian Bruch, Xuanhui Wang, Michael Bendersky, and Marc Najork. An analysis of the softmax cross entropy loss for learning-to-rank with binary relevance. In Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval, pp. 75-78, 2019a.

Sebastian Bruch, Masrour Zoghi, Michael Bendersky, and Marc Najork. Revisiting approximate metric optimization in the age of deep neural networks. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1241-1244, 2019b.

Sebastian Bruch, Shuguang Han, Michael Bendersky, and Marc Najork. A stochastic treatment of learning to rank scoring functions. In Proceedings of the 13th International Conference on Web Search and Data Mining, pp. 61-69, 2020.

Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, pp. 89-96, 2005.

Christopher J. Burges, Robert Ragno, and Quoc V. Le. Learning to rank with nonsmooth cost functions. In B. Scholkopf, J. C. Platt, and T. Hoffman (eds.), Advances in Neural Information Processing Systems 19, pp. 193-200, 2007.

Christopher J. C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11(23-581):81, 2010.

Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning, pp. 129-136, 2007.

Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the Learning to Rank Challenge, pp. 1-24, 2011.

Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. AutoAugment: Learning augmentation strategies from data. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 113-123, 2019.

Domenico Dato, Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, and Rossano Venturini. Fast ranking with additive ensembles of oblivious and non-oblivious regression trees. ACM Trans. Inf. Syst., 2016.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, 2019.

Aditya Grover, Eric Wang, Aaron Zweig, and Stefano Ermon. Stochastic optimization of sorting networks via continuous relaxations. arXiv preprint arXiv:1903.08850, 2019.

Shuguang Han, Xuanhui Wang, Mike Bendersky, and Marc Najork. Learning-to-rank with BERT in TF-Ranking. arXiv preprint arXiv:2004.08476, 2020.

Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, and Andrew Y. Ng. Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Ziniu Hu, Yang Wang, Qu Peng, and Hang Li. Unbiased LambdaMART: An unbiased pairwise learning-to-rank algorithm. In The Web Conference, pp. 2830-2836, 2019.

Thorsten Joachims. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 133-142, 2002.

Thorsten Joachims. Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 217-226, 2006.

Thorsten Joachims, Adith Swaminathan, and Tobias Schnabel. Unbiased learning-to-rank with biased feedback. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pp. 781-789, 2017.
|
| 242 |
+
Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 3149-3157, 2017.
|
| 243 |
+
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
|
| 244 |
+
Guang-He Lee and Tommi S. Jaakkola. Oblique decision trees from derivatives of relu networks. In 8th International Conference on Learning Representations, 2020.
|
| 245 |
+
Pan Li, Zhen Qin, Xuanhui Wang, and Donald Metzler. Combining decision trees and neural networks for learning-to-rank in personal search. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2032-2040, 2019.
|
| 246 |
+
Zachary C Lipton and Jacob Steinhardt. Troubling trends in machine learning scholarship. arXiv preprint arXiv:1807.03341, 2018.
|
| 247 |
+
Tie-Yan Liu. Learning to rank for information retrieval. Found. Trends Inf. Retr., 2009.
|
| 248 |
+
Bhaskar Mitra and Nick Craswell. An introduction to neural information retrieval. Foundations and Trends in Information Retrieval, 13(1):1-126, 2018.
|
| 249 |
+
Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. Multi-stage document ranking with bert. arXiv preprint arXiv:1910.14424, 2019.
|
| 250 |
+
|
| 251 |
+
Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xueqi Cheng, and Jirong Wen. Setrank: Learning a permutation-invariant ranking model for information retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 499-508, 2020.
|
| 252 |
+
Rama Kumar Pasumarthi, Sebastian Bruch, Xuanhui Wang, Cheng Li, Michael Bendersky, Marc Najork, Jan Pfeifer, Nadav Golbandi, Rohan Anil, and Stephan Wolf. TF-Ranking: Scalable tensorflow library for learning-to-rank. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2970-2978, 2019.
|
| 253 |
+
Rama Kumar Pasumarthi, Honglei Zhuang, Xuanhui Wang, Michael Bendersky, and Marc Najork. Permutation equivariant document interaction network for neural learning to rank. In Proceedings of the 2020 ACM SIGIR on International Conference on Theory of Information Retrieval, pp. 145-148, 2020.
|
| 254 |
+
Luis Perez and Jason Wang. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621, 2017.
|
| 255 |
+
Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, and Andrey Gulin. Catboost: unbiased boosting with categorical features. In Advances in neural information processing systems, pp. 6638-6648, 2018.
|
| 256 |
+
Tao Qin and Tie-Yan Liu. Introducing LETOR 4.0 datasets. CoRR, abs/1306.2597, 2013.
|
| 257 |
+
Tao Qin, Tie-Yan Liu, and Hang Li. A general approximation framework for direct optimization of information retrieval measures. Information retrieval, 13(4):375-397, 2010.
|
| 258 |
+
Zhen Qin, Suming J. Chen, Donald Metzler, Yongwoo Noh, Jingzheng Qin, and Xuanhui Wang. Attribute-based propensity for unbiased learning in recommender systems: Algorithm and case studies. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2359-2367, 2020a.
|
| 259 |
+
Zhen Qin, Zhongliang Li, Michael Bendersky, and Donald Metzler. Matching cross network for learning to rank in personal search. In The Web Conference, pp. 2835-2841, 2020b.
|
| 260 |
+
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015.
|
| 261 |
+
Mohammad Saberian, Pablo Delgado, and Yves Raimond. Gradient boosted decision tree neural network. arXiv preprint arXiv:1910.09340, 2019.
|
| 262 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.
|
| 263 |
+
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018a.
|
| 264 |
+
Jun Wang, Lantao Yu, Weinan Zhang, Yu Gong, Yinghui Xu, Benyou Wang, Peng Zhang, and Dell Zhang. Irgan: A minimax game for unifying generative and discriminative information retrieval models. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 515-524, 2017a.
|
| 265 |
+
Ruoxi Wang, Bin Fu, Gang Fu, and Mingliang Wang. Deep & cross network for ad click predictions. In Proceedings of the ADKDD'17, pp. 1-7. 2017b.
|
| 266 |
+
Xuanhui Wang, Cheng Li, Nadav Golbandi, Michael Bendersky, and Marc Najork. The lambdaoss framework for ranking metric optimization. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 1313-1322, 2018b.
|
| 267 |
+
|
| 268 |
+
Qiang Wu, Christopher J. C. Burges, Krysta M. Svore, and Jianfeng Gao. Adapting boosting for information retrieval measures. Information Retrieval Journal, 13:254-270, 2010.
|
| 269 |
+
Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. Critically examining the "neural hype": Weak baselines and the additivity of effectiveness gains from neural ranking models. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1129-1132, 2019.
|
| 270 |
+
Qian Yu and Wai Lam. Data augmentation based on adversarial autoencoder handling imbalance for learning to rank. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 411-418, 2019.
|
| 271 |
+
Xiaofeng Zhu and Diego Klabjan. Listwise learning to rank by exploring unique ratings. In Proceedings of the 13th International Conference on Web Search and Data Mining, 2020.
|
| 272 |
+
Honglei Zhuang, Xuanhui Wang, Michael Bendersky, and Marc Najork. Feature transformation for neural ranking models. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1649-1652, 2020.
|
| 273 |
+
Honglei Zhuang, Zhen Qin, Xuanhui Wang, Mike Bendersky, Xinyu Qian, Po Hu, and Chary Chen. Cross-positional attention for debiasing clicks. In The Web Conference, 2021.
|
| 274 |
+
|
| 275 |
+
# APPENDIX
# A HYPERPARAMETER TUNING
For $\lambda \mathrm{MART}_{GBM}$ , we do a grid search for number of trees $\in \{300,500,1000\}$ , number of leaves $\in \{200,500,1000\}$ , and learning rate $\in \{0.01,0.05,0.1,0.5\}$ .
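The grid above expands to 3 × 3 × 4 = 36 configurations. A minimal sketch of the enumeration (our own illustration, not the paper's tuning code; the configuration dictionaries are hypothetical):

```python
from itertools import product

# Hyperparameter grid for LambdaMART (LightGBM), as described above.
num_trees_grid = [300, 500, 1000]
num_leaves_grid = [200, 500, 1000]
learning_rate_grid = [0.01, 0.05, 0.1, 0.5]

# Enumerate every combination; each dict would parameterize one training run.
configs = [
    {"num_trees": t, "num_leaves": l, "learning_rate": lr}
    for t, l, lr in product(num_trees_grid, num_leaves_grid, learning_rate_grid)
]
print(len(configs))  # 3 * 3 * 4 = 36 configurations
```

In practice each configuration would be trained and the one with the best validation NDCG kept.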
For our neural models, the main hyperparameters are the hidden layer size $\in$ $\{256, 512, 1024, 2048, 3072, 4096\}$ and the number of layers $\in$ $\{3, 4, 5, 6\}$ for the regular DNN, the data augmentation noise $\in$ $[0, 5.0]$ tuned by binary search with step 0.1, the number of attention layers $\in$ $\{3, 4, 5, 6\}$, and the number of attention heads $\in$ $\{2, 3, 4, 5\}$. The same parameter sweep is applied to the baselines when applicable. One noticeable difference from existing work is that we tried hidden layer sizes as large as 4096 and found that larger models generally work better when data augmentation is enabled. We are in the process of releasing the code and trained models in an open-source software package.
# B ABLATION STUDIES AND ANALYSIS
# B.1 EFFECT OF LOG1P INPUT TRANSFORMATION
We first show that the simple log1p transform can improve performance on the Web30K and Istella datasets (the Yahoo dataset is already normalized). Results in Table 5 are based on regular DNN models trained with the softmax cross-entropy loss; the trends are similar for other configurations. We also note that the results are in general slightly better than with Gaussian normalization, due to the long-tail nature of LTR dataset features; we omit those numbers here.
<table><tr><td rowspan="2">Method</td><td colspan="3">Web30K NDCG@k</td><td colspan="3">Istella NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>Without log1p</td><td>44.60</td><td>44.99</td><td>47.22</td><td>67.19</td><td>64.72</td><td>70.12</td></tr><tr><td>With log1p</td><td>48.30</td><td>48.22</td><td>50.35</td><td>69.78</td><td>67.09</td><td>72.49</td></tr></table>
Table 5: Results on the Web30K and Istella datasets with and without the log1p input transformation.
We can see that such a simple transformation brings meaningful gains. In all following sections, we use the log1p transformation by default.
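As a sketch of this preprocessing (our own illustration, not the paper's released code), a common symmetric variant of log1p for long-tailed LTR features is $\mathrm{sign}(x)\log(1+|x|)$; the sign-preserving form is an assumption here, since the text only says "log1p":

```python
import numpy as np

def log1p_transform(x):
    """Apply log(1 + |x|) while preserving the sign of each feature value.

    Compresses the long tail of raw LTR features while keeping
    negative values meaningful (sign-preserving form is our assumption).
    """
    x = np.asarray(x, dtype=np.float64)
    return np.sign(x) * np.log1p(np.abs(x))

features = np.array([0.0, 9.0, -9.0, 1e6])
print(log1p_transform(features))
```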
# B.2 RANKING LOSSES
Much recent progress in neural LTR concerns ranking losses, especially listwise ranking losses (Bruch et al., 2019b;a; 2020; Grover et al., 2019). For example, it is attractive to devise differentiable versions of ranking losses for end-to-end learning. Here we benchmark different ranking losses with regular DNN models on different datasets to show that: (1) listwise ranking losses are superior to the pointwise or pairwise losses normally used for non-neural LTR models; (2) the performances of state-of-the-art listwise ranking losses are comparable; (3) the softmax cross-entropy loss is a simple but robust choice.
We consider the following ranking losses:
- SigmoidCrossEntropy: a widely used pointwise loss: $l(\mathbf{y}, s(\mathbf{x})) = \sum_{i=1}^{n} -y_i s_i + \log_e(1 + e^{s_i})$ .
- RankNet (Burges et al., 2005): a popular pairwise loss: $l(\mathbf{y}, s(\mathbf{x})) = \sum_{y_i > y_j} \log_e(1 + e^{s_j - s_i})$ .
- LambdaRank (Burges et al., 2007; Wang et al., 2018b): the pairwise loss with $\Delta$ NDCG weight, which is a direct implementation of the LambdaMART loss in Eq. (5).
- Softmax (Cao et al., 2007; Bruch et al., 2019a): a popular listwise loss: $l(\mathbf{y}, s(\mathbf{x})) = -\sum_{i=1}^{n} y_i \log_e \frac{e^{s_i}}{\sum_j e^{s_j}}$ .
- ApproxNDCG (Qin et al., 2010; Bruch et al., 2019b): a listwise loss that is a differentiable approximation of NDCG metric: $l(\mathbf{y}, s(\mathbf{x})) = -\frac{1}{DCG(\pi^*, \mathbf{y})} \sum_{i=1}^{n} \frac{2^{y_i} - 1}{\log_2(1 + \pi_s(i))}$ , where $\pi_s(i) = \frac{1}{2} + \sum_j \text{sigmoid}\left(\frac{s_j - s_i}{T}\right)$ with $T$ a smooth parameter.
- GumbelApproxNDCG (Bruch et al., 2019b; 2020): a listwise loss with a stochastic treatment of ApproxNDCG: the scores $s_i$ in the above loss are replaced by $s_i + g_i$, with Gumbel noise $g_i = -\log_e(-\log_e U_i)$ and $U_i$ uniformly sampled in $[0, 1]$.
- NeuralSortNDCG (Grover et al., 2019): a listwise loss that approximates the NDCG metric with the NeuralSort trick: $l(\mathbf{y}, s(\mathbf{x})) = -\frac{1}{DCG(\pi^*, \mathbf{y})} \sum_{i, r=1}^{n} \frac{(2^{y_i} - 1)P_{ir}^s}{\log_2(1 + r)}$, where $P_{ir}^s$ is an approximate permutation matrix obtained by the NeuralSort trick: $P_{ir}^s = \text{softmax}\left[\left((n + 1 - 2i)s_r - \sum_j |s_r - s_j|\right)/T\right]$, with $T$ a smoothing parameter.
- GumbelNeuralSortNDCG: a listwise loss with a stochastic treatment of NeuralSortNDCG, replacing the scores $s_i$ in the NeuralSort permutation matrix with $s_i + g_i$, where $g_i$ is again sampled from the Gumbel distribution. This is new in the literature but not a major focus of this work.
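To make the listwise losses above concrete, here is a minimal numpy sketch of the softmax cross-entropy loss for a single query, $l(\mathbf{y}, s) = -\sum_i y_i \log(e^{s_i}/\sum_j e^{s_j})$. This is our own illustration, not the TF-Ranking implementation:

```python
import numpy as np

def softmax_cross_entropy_loss(y, s):
    """Listwise softmax loss: -sum_i y_i * log(softmax(s)_i)."""
    s = s - np.max(s)                          # stabilize the exponentials
    log_softmax = s - np.log(np.sum(np.exp(s)))
    return float(-np.sum(y * log_softmax))

y = np.array([1.0, 0.0, 0.0])   # relevance labels for one query's documents
s = np.array([2.0, 1.0, 0.0])   # predicted scores for the same documents
print(softmax_cross_entropy_loss(y, s))
```

Scoring the relevant document higher drives the loss toward zero, which is what makes this loss a convenient listwise training signal.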
<table><tr><td rowspan="2">Ranking loss</td><td colspan="3">Web30K NDCG@k</td><td colspan="3">Istella NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>SigmoidCrossEntropy</td><td>47.65</td><td>46.85</td><td>48.47</td><td>67.62</td><td>64.46</td><td>69.51</td></tr><tr><td>RankNet</td><td>46.05</td><td>46.67</td><td>48.98</td><td>69.09</td><td>66.04</td><td>71.81</td></tr><tr><td>LambdaRank</td><td>45.87</td><td>46.55</td><td>48.85</td><td>68.18</td><td>65.22</td><td>70.88</td></tr><tr><td>Softmax</td><td>48.30</td><td>48.22</td><td>50.35</td><td>69.78</td><td>67.09</td><td>72.49</td></tr><tr><td>ApproxNDCG</td><td>49.31</td><td>47.87</td><td>49.49</td><td>70.05</td><td>66.08</td><td>70.76</td></tr><tr><td>GumbelApproxNDCG</td><td>49.53</td><td>48.07</td><td>49.75</td><td>71.78</td><td>67.33</td><td>71.79</td></tr><tr><td>NeuralSortNDCG</td><td>48.66</td><td>47.19</td><td>48.83</td><td>68.92</td><td>64.27</td><td>69.03</td></tr><tr><td>GumbelNeuralSortNDCG</td><td>49.74</td><td>48.40</td><td>50.22</td><td>70.96</td><td>67.26</td><td>71.92</td></tr></table>
Table 6: Results on the Web30K and Istella datasets with a standard feed-forward network architecture.
The results are summarized in Table 6. For each ranking loss, we perform a grid search over optimizers and learning rates: for the Adam optimizer, we scan learning rates $\in$ $\{10^{-4}, 10^{-3}, 10^{-2}\}$; for the Adagrad optimizer, we scan learning rates $\in$ $\{0.01, 0.1, 0.5\}$. When the smoothing parameter $T$ is applicable, we also scan it over $\{0.1, 1, 10\}$. We report the results based on the best NDCG@5 for each loss.
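For reference, the NDCG@k metric reported throughout can be computed as follows. This is a standard sketch using the $2^{y}-1$ gain and $\log_2$ discount, not the exact evaluation code used in the paper:

```python
import numpy as np

def dcg_at_k(labels, k):
    """Discounted cumulative gain of a label list truncated at rank k."""
    labels = np.asarray(labels, dtype=np.float64)[:k]
    discounts = np.log2(np.arange(2, labels.size + 2))  # log2(1 + rank)
    return float(np.sum((2.0 ** labels - 1.0) / discounts))

def ndcg_at_k(labels, scores, k):
    """NDCG@k for one query: DCG of the score-sorted list over the ideal DCG."""
    order = np.argsort(scores)[::-1]        # rank documents by predicted score
    ideal = np.sort(labels)[::-1]           # ideal ordering by relevance label
    ideal_dcg = dcg_at_k(ideal, k)
    if ideal_dcg == 0.0:
        return 0.0
    return dcg_at_k(np.asarray(labels, dtype=np.float64)[order], k) / ideal_dcg

labels = [3, 0, 1, 2]                 # graded relevance for four documents
scores = [0.9, 0.8, 0.4, 0.1]         # an imperfect predicted ordering
print(ndcg_at_k(labels, scores, k=4))
```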
As stated above, we find that: (1) Models trained with listwise losses perform significantly better than models trained with pointwise or pairwise losses. (2) Different listwise losses are generally comparable, and the softmax cross-entropy loss performs consistently well across different models and datasets; it is thus used in our main results and the following sections. (3) LambdaRank does not work well for neural models. On the other hand, previous work (Bruch et al., 2019a) shows that tree-based models with the softmax loss are not as good as LambdaMART, demonstrating that tree-based and neural LTR models behave differently under different loss functions. This encourages future work on ranking losses designed specifically for neural LTR.
# B.3 EFFECT OF LISTWISE CONTEXT
We study the effect of leveraging listwise context with self-attention, with and without latent cross (concatenation of item features and context features) (Pasumarthi et al., 2020), on the Web30K and Istella datasets. Results are shown in Table 7. We can see that using a neural approach to model listwise context, which is difficult for tree-based models, is quite beneficial. Latent cross, though simple, helps leverage the listwise context more effectively.
<table><tr><td rowspan="2">Method</td><td colspan="3">Web30K NDCG@k</td><td colspan="3">Istella NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>DNN</td><td>48.30</td><td>48.22</td><td>50.35</td><td>69.78</td><td>67.09</td><td>72.49</td></tr><tr><td>Self-attention</td><td>49.89</td><td>50.15</td><td>52.18</td><td>71.77</td><td>68.32</td><td>73.72</td></tr><tr><td>Self-attention and LC</td><td>50.19</td><td>50.49</td><td>52.47</td><td>72.19</td><td>68.80</td><td>74.21</td></tr></table>
Table 7: Results on the Web30K and Istella datasets using self-attention and latent cross.

# B.4 EFFECT OF DATA AUGMENTATION

One of the technical findings in this work is that using simple Gaussian noise as data augmentation can help neural LTR models. Below we add Gaussian noise of varying strength ($\sigma$) to both the DNN model and the DASALC framework, with results shown in Table 8. We can see that the performance of the DNN starts to drop as soon as we add noise. However, for DASALC, data augmentation helps, and the performance is robust across different noise levels, peaking around $\sigma = 1.5$. The optimal $\sigma$ needs tuning for each dataset, but the general trends are similar on other datasets. We leave the study of the exact mechanism by which data augmentation helps DASALC, and the application of more sophisticated data augmentation techniques, to future work.

<table><tr><td rowspan="2">Method (σ)</td><td colspan="3">Web30K NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>DNN(0.0)</td><td>48.30</td><td>48.22</td><td>50.35</td></tr><tr><td>DNN(0.1)</td><td>48.10</td><td>48.01</td><td>50.14</td></tr><tr><td>DNN(1.0)</td><td>46.39</td><td>46.15</td><td>48.18</td></tr><tr><td>DASALC(0.0)</td><td>50.19</td><td>50.49</td><td>52.47</td></tr><tr><td>DASALC(0.1)</td><td>50.38</td><td>50.61</td><td>52.56</td></tr><tr><td>DASALC(1.5)</td><td>50.95</td><td>50.92</td><td>52.88</td></tr><tr><td>DASALC(2.0)</td><td>50.65</td><td>50.78</td><td>52.68</td></tr></table>

Table 8: Results on the Web30K dataset using different architectures and random noise strengths.

We also tried adding noise to $\lambda \mathrm{MART}_{GBM}$ and observed results similar to the DNN. The results on the Yahoo dataset are shown in Table 9; adding noise leads to worse accuracy.

<table><tr><td rowspan="2">Method (σ)</td><td colspan="3">Yahoo NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>λMARTGBM(0.0)</td><td>71.88</td><td>74.21</td><td>78.02</td></tr><tr><td>λMARTGBM(0.1)</td><td>70.10</td><td>72.60</td><td>77.19</td></tr><tr><td>λMARTGBM(1.0)</td><td>64.96</td><td>67.28</td><td>72.60</td></tr><tr><td>λMARTGBM(1.5)</td><td>64.39</td><td>66.84</td><td>72.27</td></tr></table>

Table 9: Results on the Yahoo dataset using different random noise strengths.
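The augmentation studied above amounts to perturbing input features with zero-mean Gaussian noise at training time. A minimal sketch (our illustration, with a hypothetical `augment` helper, not the paper's code):

```python
import numpy as np

def augment(features, sigma, rng):
    """Add zero-mean Gaussian noise of strength sigma (training time only)."""
    if sigma == 0.0:
        return features
    return features + rng.normal(0.0, sigma, size=features.shape)

rng = np.random.default_rng(0)
batch = np.zeros((4, 3))                 # toy batch: 4 items x 3 features
noisy = augment(batch, sigma=1.5, rng=rng)
print(noisy.shape)
```

At evaluation time `sigma=0.0` would be used, so inference sees clean features.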
# B.5 PERFORMANCE ON CATBOOST
We mainly compared with $\lambda \mathrm{MART}_{\mathrm{RankLib}}$ and $\lambda \mathrm{MART}_{\mathrm{GBM}}$ in the main content, since they are the most popular baselines in recent papers. Other GBDT implementations can also be used for the LTR task. CatBoost (Prokhorenkova et al., 2018) is a recently popular GBDT implementation for various tasks, so we also evaluate its performance on the three LTR datasets. Note that CatBoost is not specific to ranking and, to the best of our knowledge, does not have a standard LambdaMART implementation. We try both the QueryRMSE and YetiRank losses, which are the best-performing losses on most existing CatBoost benchmarks. The results are reported in Table 10.
Table 10: Comparison of CatBoost with other methods on the Web30K, Yahoo, and Istella datasets.
<table><tr><td rowspan="2">Models</td><td colspan="3">Web30K NDCG@k</td><td colspan="3">Yahoo NDCG@k</td><td colspan="3">Istella NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>λMARTRankLib</td><td>45.35</td><td>44.59</td><td>46.46</td><td>68.52</td><td>70.27</td><td>74.58</td><td>67.71</td><td>61.18</td><td>65.91</td></tr><tr><td>λMARTGBM</td><td>50.73</td><td>49.66</td><td>51.48</td><td>71.88</td><td>74.21</td><td>78.02</td><td>74.92</td><td>71.24</td><td>76.07</td></tr><tr><td>Catboost-QueryRMSE</td><td>50.07</td><td>50.04</td><td>51.97</td><td>70.50</td><td>74.25</td><td>78.31</td><td>69.91</td><td>67.73</td><td>72.18</td></tr><tr><td>Catboost-YetiRank</td><td>48.92</td><td>49.10</td><td>51.31</td><td>69.86</td><td>74.00</td><td>78.11</td><td>72.06</td><td>69.97</td><td>74.12</td></tr><tr><td>DASALC-ens</td><td>51.89</td><td>51.72</td><td>53.73</td><td>71.24</td><td>74.07</td><td>77.97</td><td>74.40</td><td>71.32</td><td>76.44</td></tr></table>
We can see that CatBoost produces very decent results, clearly outperforming $\lambda \mathrm{MART}_{\mathrm{RankLib}}$, while its comparison with $\lambda \mathrm{MART}_{\mathrm{GBM}}$ is mixed. We encourage researchers to also consider implementations such as CatBoost in future LTR work.
# B.6 LAMBDAMART ENSEMBLE
We showed that a simple ensemble of neural rankers brings meaningful gains, leveraging the stochastic nature of neural network training. On the other hand, LambdaMART is itself an ensemble algorithm based on boosting, but it is still interesting to examine the effect of ensembling multiple LambdaMART models. We conduct additional experiments on this front using $\lambda \mathrm{MART}_{GBM}$ and make two major observations: (1) running LambdaMART multiple times with the same configuration generates very similar results, so ensembling in this setting does not help, whereas neural rankers benefit even from such a simple setting; (2) Table 11 shows the result of ensembling LambdaMART models with different configurations (e.g., different numbers of trees, numbers of leaves, and learning rates) on the Istella dataset, where we ensemble five LambdaMART models chosen on the validation set. The results on other datasets are similar.
<table><tr><td rowspan="2">Method</td><td colspan="3">Istella NDCG@k</td></tr><tr><td>@1</td><td>@5</td><td>@10</td></tr><tr><td>λMARTGBM</td><td>74.92</td><td>71.24</td><td>76.07</td></tr><tr><td>λMARTGBM-ens</td><td>75.04</td><td>71.40</td><td>76.28</td></tr></table>
Table 11: Results on the Istella dataset using LambdaMART ensembles.
We can see that the improvement from ensembling LambdaMART models is smaller than that for neural rankers (see Table 3). Our hypothesis is that model ensembles tend to be more effective for neural rankers because of their stronger stochastic nature; exploring advanced ensembling methods for neural rankers is an interesting future direction.
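The neural-ranker ensemble above can be realized by simply averaging the per-document scores of independently trained models before ranking; a sketch under that assumption (score averaging is a common choice, but the exact combination rule is not specified in this section):

```python
import numpy as np

def ensemble_scores(model_scores):
    """Average per-document scores from several independently trained rankers.

    model_scores: array-like of shape (num_models, num_docs) for one query.
    """
    return np.mean(np.asarray(model_scores, dtype=np.float64), axis=0)

# Scores from three hypothetical rankers for the same five documents.
scores = [
    [0.9, 0.2, 0.5, 0.1, 0.4],
    [0.8, 0.3, 0.6, 0.2, 0.3],
    [1.0, 0.1, 0.4, 0.3, 0.5],
]
avg = ensemble_scores(scores)
ranking = np.argsort(avg)[::-1]   # final ranking from the averaged scores
print(ranking)
```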
# C PERMUTATION EQUIVARIANCE ANALYSIS
For any general scoring function $s(\mathbf{x}) : \mathbb{R}^{n \times k} \to \mathbb{R}^n$ and a permutation $\pi$ over the indices $[1, \dots, n]$, we call $s$ permutation equivariant iff
$$
s(\pi(\mathbf{x})) = \pi(s(\mathbf{x})) \tag{12}
$$
The scoring function of the proposed approach, DASALC, can be written as a combination of the feature transformation and data augmentation function $\mathbf{f}$, the output of the multi-headed self-attention $\mathbf{a} := \mathbf{MHSA}^L(\mathbf{f})$, and the output of the final layer of the regular network, $h_{out}(\mathbf{x})$:
$$
s_{DASALC}(\mathbf{x}) = W_{FC}^{T} \, \mathrm{ReLU}\big((1 + \mathbf{a}(\mathbf{x})) \odot h_{out}(\mathbf{x})\big) \tag{13}
$$
Note that per-item transformations, which we refer to as univariate transformations, are trivially permutation equivariant. The composition of two permutation equivariant functions is also permutation equivariant, since the permutation operator commutes with permutation equivariant functions. Hence the linear projection, the ReLU activation, and $\mathbf{f}$ (as a function of $\mathbf{x}$) are permutation equivariant. Multi-headed self-attention is also known to be permutation equivariant (Pang et al., 2020). Hence, applying a permutation $\pi$ to the proposed scoring function, we see that it satisfies the permutation equivariance property:
$$
\begin{array}{l}
\pi(s_{DASALC}(\mathbf{x})) = \pi\big(W_{FC}^{T}\,\mathrm{ReLU}\big((1 + \mathbf{a}(\mathbf{x})) \odot h_{out}(\mathbf{x})\big)\big) \\
\quad = W_{FC}^{T}\,\mathrm{ReLU}\big(\pi\big((1 + \mathbf{a}(\mathbf{x})) \odot h_{out}(\mathbf{x})\big)\big) \\
\quad = W_{FC}^{T}\,\mathrm{ReLU}\big((1 + \pi(\mathbf{a}(\mathbf{x}))) \odot \pi(h_{out}(\mathbf{x}))\big) \\
\quad = W_{FC}^{T}\,\mathrm{ReLU}\big((1 + \mathbf{a}(\pi(\mathbf{x}))) \odot h_{out}(\pi(\mathbf{x}))\big) \\
\quad = s_{DASALC}(\pi(\mathbf{x}))
\end{array}
$$
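This property can also be checked numerically. Below is a toy stand-in, assuming a random per-item ReLU layer for $h_{out}$ and a softmax self-attention for $\mathbf{a}$; these are illustrative substitutes for the actual DASALC components:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 5, 4, 8                      # docs, input features, hidden size
W1 = rng.normal(size=(k, d))           # per-item ("univariate") projection
Wq = rng.normal(size=(k, d))           # toy attention parameters
Wk = rng.normal(size=(k, d))
Wv = rng.normal(size=(k, d))
w_fc = rng.normal(size=d)              # final fully connected layer

def h_out(x):
    return np.maximum(x @ W1, 0.0)     # per-item ReLU layer

def attention(x):
    q, key, v = x @ Wq, x @ Wk, x @ Wv
    logits = q @ key.T / np.sqrt(d)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v                 # permutation-equivariant self-attention

def score(x):
    # Eq. (13): W_FC^T ReLU((1 + a(x)) ⊙ h_out(x)), here as a dot product.
    return np.maximum((1.0 + attention(x)) * h_out(x), 0.0) @ w_fc

x = rng.normal(size=(n, k))
pi = rng.permutation(n)
# Permutation equivariance: s(pi(x)) == pi(s(x)) up to float error.
print(np.allclose(score(x[pi]), score(x)[pi]))
```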
areneuralrankersstilloutperformedbygradientboosteddecisiontrees/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3124fcaad5e8b84a1af20f6bdd02b7ac6805b6730f4fedc5357f8634fb82f5a4
+size 599138
areneuralrankersstilloutperformedbygradientboosteddecisiontrees/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8adde85414497fd52e18dfffb58929adf33617adc2848594e2654ea60b7183de
+size 518951
asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/03151e5d-dbcd-4f79-89ec-5aacfe80c593_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fa32f99076886a9c741cb6f47db15c785cd2bbe0193a1338dd638b1291b4c87
+size 81644
asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/03151e5d-dbcd-4f79-89ec-5aacfe80c593_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2426320202efa1f8cb883883aab8a8fc6007b9b3f4bc64314121f2983a83046c
+size 104913
asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/03151e5d-dbcd-4f79-89ec-5aacfe80c593_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb9bd3da8619542e3d508c096677c4363132446e653528ee736baac639b4c9db
+size 23496148
asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/full.md
ADDED
@@ -0,0 +1,370 @@
# ASYNC-RED: A PROVABLY CONVERGENT ASYNCHRONOUS BLOCK PARALLEL STOCHASTIC METHOD USING DEEP DENOISING PRIORS
|
| 2 |
+
|
| 3 |
+
Yu Sun $^{1}$ , Jiaming Liu $^{1}$ , Yiran Sun $^{1}$ , Brendt Wohlberg $^{2}$ , Ulugbek S. Kamilov $^{1}$
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Washington University in St. Louis, <sup>2</sup>Los Alamos National Laboratory
|
| 6 |
+
|
| 7 |
+
{sun.yu, jiaming.liu, yiran.s, kamilov}@wustl.edu, brendt@ieee.org
|
| 8 |
+
|
| 9 |
+
# ABSTRACT
|
| 10 |
+
|
| 11 |
+
Regularization by denoising (RED) is a recently developed framework for solving inverse problems by integrating advanced denoisers as image priors. Recent work has shown its state-of-the-art performance when combined with pre-trained deep denoisers. However, current RED algorithms are inadequate for parallel processing on multicore systems. We address this issue by proposing a new asynchronous RED (ASYNC-RED) algorithm that enables asynchronous parallel processing of data, making it significantly faster than its serial counterparts for large-scale inverse problems. The computational complexity of ASYNC-RED is further reduced by using a random subset of measurements at every iteration. We present complete theoretical analysis of the algorithm by establishing its convergence under explicit assumptions on the data-fidelity and the denoiser. We validate ASYNC-RED on image recovery using pre-trained deep denoisers as priors.
|
| 12 |
+
|
| 13 |
+
# 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Imaging inverse problems seek to recover an unknown image $\pmb{x} \in \mathbb{R}^n$ from its noisy measurements $\pmb{y} \in \mathbb{R}^m$ . Such problems arise in many fields, ranging from low-level computer vision to biomedical imaging. Since many imaging inverse problems are ill-posed, it is common to regularize the solution by using prior information on the unknown image. Widely-adopted image priors include total variation, low-rank penalties, and transform-domain sparsity (Rudin et al., 1992; Figueiredo & Nowak, 2001; 2003; Hu et al., 2012; Elad & Aharon, 2006).
|
| 16 |
+
|
| 17 |
+
There has been considerable recent interest in plug-and-play priors (PnP) (Venkatakrishnan et al., 2013; Sreehari et al., 2016) and regularization by denoising (RED) (Romano et al., 2017), as frameworks for exploiting image denoisers as priors for image recovery. The popularity of deep learning has led to a wide adoption of deep denoisers within PnP/RED, leading to their state-of-the-art performance in a variety of applications, including image restoration (Mataev et al., 2019), phase retrieval (Metzler et al., 2018), and tomographic imaging (Wu et al., 2020). Their empirical success has also prompted follow-up theoretical work clarifying the existence of explicit regularizers (Reehorst & Schniter, 2019), providing new interpretations based on fixed-point projections (Cohen et al., 2020), and analyzing their coordinate/online variants (Sun et al., 2019a; Wu et al., 2020). Nonetheless, current PnP/RED algorithms are inherently serial. As illustrated in Fig. 1, this makes them suboptimal on multicore systems that are often required for processing large-scale datasets (Recht et al., 2011), such as those involving biomedical (Ong et al., 2020) and astronomical images (Akiyama et al., 2019).
|
| 18 |
+
|
| 19 |
+
We address this gap by proposing a novel asynchronous RED (ASYNC-RED) algorithm. The algorithm decomposes the inference problem into a sequence of partial (block-coordinate) updates on $\mathbf{x}$ executed asynchronously in parallel over a multicore system. ASYNC-RED leads to a more efficient usage of available cores by avoiding synchronization of partial updates. ASYNC-RED is also scalable in terms of the number of measurements, since it processes only a small random subset of $\mathbf{y}$ at every iteration. We present two new theoretical results on the convergence of ASYNC-RED based on a unified set of explicit assumptions on the data-fidelity and the denoiser. Specifically, we establish its fixed-point convergence in the batch setting and extend this analysis to the randomized minibatch scenario. Our results extend recent work on serial block-coordinate RED (BC-RED) (Sun et al.,
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Figure 1: Visual illustration of serial and parallel image recovery on a multicore system. (a) Serial processing uses only one core of the system for every iteration. (b) Synchronous parallel processing has to wait for the slowest core to finish before starting the next iteration. (c) Asynchronous parallel processing can continuously iterate using all the cores without waiting. (d) Asynchronous parallel processing using the stochastic gradient leads to additional flexibility. (a), (b), and (c) use all the corresponding measurements at every iteration, while (d) uses only a small random subset at a time. ASYNC-RED adopts the schemes shown in (c) and (d).
|
| 23 |
+
|
| 24 |
+

|
| 25 |
+
|
| 26 |
+

|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
|
| 30 |
+
2019a) and are fully consistent with the traditional asynchronous parallel optimization methods (Lian et al., 2015; Sun et al., 2017). We numerically validate ASYNC-RED on image recovery from linear and noisy measurements using pre-trained deep denoisers as image priors.
|
| 31 |
+
|
| 32 |
+
# 2 BACKGROUND
|
| 33 |
+
|
| 34 |
+
Inverse problems. Inverse problems are traditionally formulated as a composite optimization problem
|
| 35 |
+
|
| 36 |
+
$$
|
| 37 |
+
\widehat {\boldsymbol {x}} = \underset {\boldsymbol {x} \in \mathbb {R} ^ {n}} {\arg \min } g (\boldsymbol {x}) + h (\boldsymbol {x}), \tag {1}
|
| 38 |
+
$$
|
| 39 |
+
|
| 40 |
+
where $g$ is the data-fidelity term that ensures consistency of $\mathbf{x}$ with the measured data $\mathbf{y}$ and $h$ is the regularizer that infuses the prior knowledge on $\mathbf{x}$ . For example, consider the smooth $\ell_2$ -norm data-fidelity term $g(\mathbf{x}) = \| \mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2$ , which assumes a linear observation model $\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{e}$ , and the nonsmooth TV regularizer $h(\mathbf{x}) = \tau \| D\mathbf{x}\|_1$ , where $\tau > 0$ is the regularization parameter and $D$ is the image gradient (Rudin et al., 1992).
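As a concrete instance of this example, the least-squares data fidelity and its gradient $\nabla g(\pmb{x}) = 2\pmb{A}^\top(\pmb{A}\pmb{x} - \pmb{y})$ can be sketched in a few lines of numpy and checked against finite differences (a minimal illustration with random data, not the paper's implementation):

```python
import numpy as np

def g(x, A, y):
    # Least-squares data fidelity g(x) = ||y - A x||_2^2
    r = A @ x - y
    return float(r @ r)

def grad_g(x, A, y):
    # Gradient of g: 2 A^T (A x - y)
    return 2.0 * A.T @ (A @ x - y)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
y = rng.standard_normal(8)
x0 = rng.standard_normal(5)

# Central finite-difference check of one gradient coordinate
eps, i = 1e-6, 2
e = np.zeros(5)
e[i] = eps
fd = (g(x0 + e, A, y) - g(x0 - e, A, y)) / (2 * eps)
assert abs(fd - grad_g(x0, A, y)[i]) < 1e-4
```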
|
| 41 |
+
|
| 42 |
+
Regularization by denoising (RED). RED is a recent methodology for imaging inverse problems that seeks vectors $\boldsymbol{x}^{*} \in \mathbb{R}^{n}$ satisfying
|
| 43 |
+
|
| 44 |
+
$$
|
| 45 |
+
\mathrm{G}\left(\boldsymbol{x}^{*}\right) = \nabla g\left(\boldsymbol{x}^{*}\right) + \tau\left(\boldsymbol{x}^{*} - \mathrm{D}_{\sigma}\left(\boldsymbol{x}^{*}\right)\right) = 0 \quad \Leftrightarrow \quad \boldsymbol{x}^{*} \in \mathsf{zer}(\mathrm{G}) := \left\{\boldsymbol{x} \in \mathbb{R}^{n} : \mathrm{G}(\boldsymbol{x}) = 0\right\} \tag{2}
|
| 46 |
+
$$
|
| 47 |
+
|
| 48 |
+
where $\nabla g$ denotes the gradient of the data-fidelity term and $\mathsf{D}_{\sigma}:\mathbb{R}^n\to \mathbb{R}^n$ is an image denoiser parameterized by $\sigma >0$ . Under additional technical assumptions, the solutions $x^{*}\in \mathsf{zer}(G)$ can be associated with an explicit objective function of form (1). Specifically, when $\mathsf{D}_{\sigma}$ is locally homogeneous and has a symmetric Jacobian satisfying strong passivity (Romano et al., 2017; Reehorst & Schniter, 2019), $\mathsf{H}(\pmb {x})\coloneqq \pmb {x} - \mathsf{D}_{\sigma}(\pmb {x})$ corresponds to the gradient of a convex regularizer
|
| 49 |
+
|
| 50 |
+
$$
|
| 51 |
+
h (\boldsymbol {x}) = \frac {1}{2} \boldsymbol {x} ^ {\mathsf {T}} (\boldsymbol {x} - \mathrm {D} _ {\sigma} (\boldsymbol {x})). \tag {3}
|
| 52 |
+
$$
|
| 53 |
+
|
| 54 |
+
A simple strategy, known as GM-RED, for computing $x^{*} \in \mathrm{zer}(\mathsf{G})$ is based on the first-order fixed-point iteration
|
| 55 |
+
|
| 56 |
+
$$
|
| 57 |
+
\boldsymbol{x}^{t} = \boldsymbol{x}^{t-1} - \gamma\mathrm{G}(\boldsymbol{x}^{t-1}), \quad \text{with} \quad \mathrm{G}: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}, \tag{4}
|
| 58 |
+
$$
|
| 59 |
+
|
| 60 |
+
where $\gamma > 0$ denotes the stepsize. In this paper, we extend this first-order RED algorithm to design ASYNC-RED. Since many denoisers do not satisfy the assumptions necessary for having an explicit objective (Reehorst & Schniter, 2019), our theoretical analysis considers a broader setting where $D_{\sigma}$ does not necessarily correspond to any explicit regularizer. The benefit of our analysis is that it accommodates powerful deep denoisers (such as DnCNN (Zhang et al., 2017a)) that have been shown to achieve the state-of-the-art performance (Sun et al., 2019a; Wu et al., 2020; Cohen et al., 2020).
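The fixed-point iteration (4) can be sketched in a few lines; the linear "denoiser" below is a toy nonexpansive stand-in for a trained deep denoiser, and the quadratic data fidelity $g(\pmb{x}) = \|\pmb{y} - \pmb{A}\pmb{x}\|_2^2$ is assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10)) / np.sqrt(20)
y = A @ rng.standard_normal(10)
tau = 0.5
D = lambda x: 0.9 * x                    # toy nonexpansive "denoiser"

def G(x):
    # RED operator (2): gradient of ||y - A x||^2 plus the denoiser residual
    return 2.0 * A.T @ (A @ x - y) + tau * (x - D(x))

L = 2.0 * np.linalg.norm(A.T @ A, 2)     # Lipschitz constant of grad g
gamma = 1.0 / (L + 2.0 * tau)            # stepsize in a stable range
x = np.zeros(10)
for _ in range(300):
    x = x - gamma * G(x)                 # fixed-point iteration (4)

# The residual G(x) shrinks toward zero as x approaches zer(G)
assert np.linalg.norm(G(x)) < 1e-2 * np.linalg.norm(G(np.zeros(10)))
```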
|
| 61 |
+
|
| 62 |
+
Plug-and-play priors (PnP) and other related work. There are other lines of work that combine iterative methods with advanced denoisers. One closely related framework is known as deep mean-shift priors (Bigdeli et al., 2017), which develops an implicit regularizer whose gradient is specified by a denoising autoencoder. Another well-known framework is PnP, which generalizes proximal methods by replacing the proximal map with an image denoiser (Venkatakrishnan et al., 2013). Applications and theoretical analyses of PnP are widely studied in (Sreehari et al., 2016; Zhang et al., 2017b; Sun et al., 2019; Zhang et al., 2019; Ahmad et al., 2020; Wei et al., 2020) and (Chan et al., 2017; Meinhardt et al., 2017; Buzzard et al., 2018; Sun et al., 2019b; Tirer & Giryes, 2019; Teodoro et al., 2019; Ryu et al., 2019; Xu et al., 2020), respectively. In particular, Buzzard et al. (2018) proposed a parallel extension of PnP called Consensus Equilibrium (CE), which enables synchronous parallel updates of $\pmb{x}$. Note that while we developed ASYNC-RED as a variant of RED, our framework and analysis can also potentially be applied to PnP/CE. The plug-in strategy can also be applied to another family of algorithms known as approximate message passing (AMP) (Metzler et al., 2016a;b; Fletcher et al., 2018). AMP-based algorithms are known to be nearly optimal for random measurement matrices, but are generally unstable for a general $\pmb{A}$ (Rangan et al., 2014; 2015).
|
| 63 |
+
|
| 64 |
+
Asynchronous parallel optimization. There are two main lines of work in asynchronous parallel optimization: one involving asynchrony in coordinate updates (Iutzeler et al., 2013; Liu et al., 2015; Peng et al., 2016; Bianchi et al., 2015; Sun et al., 2017; Hannah & Yin, 2018; Hannah et al., 2019), and the other focusing on various asynchronous stochastic gradient methods (Recht et al., 2011; Lian et al., 2015; Liu et al., 2018; Zhou et al., 2018; Lian et al., 2018).
|
| 65 |
+
|
| 66 |
+
Our work contributes to the area by developing a novel deep-regularized asynchronous parallel method with provable convergence guarantees.
|
| 67 |
+
|
| 68 |
+
# 3 ASYNCHRONOUS RED
|
| 69 |
+
|
| 70 |
+
ASYNC-RED allows efficient processing of data by simultaneously considering asynchronous partial updates of the solution $\pmb{x}$ and the use of a randomized subset of the measurements $\pmb{y}$. In this section, we introduce the algorithmic details of our method. We start with the basic batch formulation of ASYNC-RED (ASYNC-RED-BG), followed by its minibatch variant (ASYNC-RED-SG).
|
| 71 |
+
|
| 72 |
+
# 3.1 ASYNC-RED USING BATCH GRADIENT
|
| 73 |
+
|
| 74 |
+
When the gradient uses all the measurements $\mathbf{y} \in \mathbb{R}^m$ , ASYNC-RED-BG is the asynchronous extension of the recent block-coordinate RED (BC-RED) algorithm (Sun et al., 2019a). Consider the decomposition of the variable space $\mathbb{R}^n$ into $b \geq 1$ blocks
|
| 75 |
+
|
| 76 |
+
$$
|
| 77 |
+
\boldsymbol{x} = \left(\boldsymbol{x}_{1}, \dots, \boldsymbol{x}_{b}\right) \in \mathbb{R}^{n_{1}} \times \dots \times \mathbb{R}^{n_{b}} = \mathbb{R}^{n} \quad \text{with} \quad n = n_{1} + n_{2} + \dots + n_{b},
|
| 78 |
+
$$
|
| 79 |
+
|
| 80 |
+
For each $i \in \{1, \dots, b\}$ , we introduce the operator $\mathsf{U}_i: \mathbb{R}^{n_i} \to \mathbb{R}^n$ that injects a vector in $\mathbb{R}^{n_i}$ into $\mathbb{R}^n$ and its transpose $\mathsf{U}_i^\top$ that extracts the $i$ th block from a vector in $\mathbb{R}^n$ . This directly implies that
|
| 81 |
+
|
| 82 |
+
$$
|
| 83 |
+
\mathbf{I} = \mathsf{U}_{1}\mathsf{U}_{1}^{\top} + \dots + \mathsf{U}_{b}\mathsf{U}_{b}^{\top} \quad \text{and} \quad \|\boldsymbol{x}\|_{2}^{2} = \|\boldsymbol{x}_{1}\|_{2}^{2} + \dots + \|\boldsymbol{x}_{b}\|_{2}^{2} \quad \text{with} \quad \boldsymbol{x}_{i} = \mathsf{U}_{i}^{\top}\boldsymbol{x}. \tag{5}
|
| 84 |
+
$$
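Concretely, for contiguous blocks the injection $\mathsf{U}_i$ and the extraction $\mathsf{U}_i^\top$ reduce to zero-padding and slicing; a small numpy sketch verifying identity (5) under that assumption:

```python
import numpy as np

n = 12
bounds = [0, 4, 8, 12]                  # three contiguous blocks of size 4

def U(i, v):
    # Inject a block vector v in R^{n_i} into R^n (zeros elsewhere)
    out = np.zeros(n)
    out[bounds[i]:bounds[i + 1]] = v
    return out

def Ut(i, x):
    # Extract the i-th block of x in R^n
    return x[bounds[i]:bounds[i + 1]]

x = np.arange(n, dtype=float)
# Identity (5): the injected blocks sum back to x,
# and the blockwise squared norms sum to ||x||_2^2
recon = sum(U(i, Ut(i, x)) for i in range(3))
assert np.allclose(recon, x)
assert np.isclose(sum(Ut(i, x) @ Ut(i, x) for i in range(3)), x @ x)
```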
|
| 85 |
+
|
| 86 |
+
In analogy to the RED operator $G$ in (2), we define the block-coordinate operator $G_i$ as
|
| 87 |
+
|
| 88 |
+
$$
|
| 89 |
+
\mathsf{G}_{i}(\boldsymbol{x}) := \mathsf{U}_{i}\mathsf{U}_{i}^{\top}\mathsf{G}(\boldsymbol{x}), \quad \text{with} \quad \boldsymbol{x} \in \mathbb{R}^{n} \quad \text{and} \quad \mathsf{G}_{i}: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}. \tag{6}
|
| 90 |
+
$$
|
| 91 |
+
|
| 92 |
+
Due to the asynchrony in the block updates, the iterate might be updated several times by different cores during a single update cycle of one core, which means that the evaluation of $\pmb{x}^{k + 1}$ relies on a stale iterate $\widetilde{\pmb{x}}^k$:
|
| 93 |
+
|
| 94 |
+
$$
|
| 95 |
+
\boldsymbol{x}^{k+1} \leftarrow \boldsymbol{x}^{k} - \gamma\mathsf{G}_{i_{k}}\left(\widetilde{\boldsymbol{x}}^{k}\right), \quad \text{with} \quad \widetilde{\boldsymbol{x}}^{k} = \boldsymbol{x}^{k} + \sum_{s=k-\Delta_{k}}^{k-1}\left(\boldsymbol{x}^{s} - \boldsymbol{x}^{s+1}\right), \quad \Delta_{k} \leq \lambda. \tag{7}
|
| 96 |
+
$$
|
| 97 |
+
|
| 98 |
+
Here, we assume that the stale iterate $\widetilde{\pmb{x}}^k$ exists as a state of $\pmb{x}$ in the shared memory, and that the delay between them is bounded by a finite number $\lambda \in \mathbb{Z}_{+}$ . These two assumptions are often referred to as consistent read (Recht et al., 2011) and bounded delay (Liu & Wright, 2015) in traditional asynchronous block-coordinate optimization. Although we implement the consistent read in ASYNC-RED, the algorithm never imposes a global lock on $\pmb{x}^k$ . We refer to Supplement A for a related discussion.
|
| 99 |
+
|
| 100 |
+
Algorithm 1 ASYNC-RED-BG
|
| 101 |
+
|
| 102 |
+
1: input: $\pmb{x}^0 \in \mathbb{R}^n$ , $\gamma > 0, \tau > 0$ .
|
| 103 |
+
2: setup: A multicore system with one shared memory storing $x$ and global iteration $k$ .
|
| 104 |
+
3: for global $k = 1,2,3,\ldots$ do
|
| 105 |
+
4: $\widetilde{\pmb{x}}^k\gets \mathrm{read}(\pmb {x})$
|
| 106 |
+
5: $\mathsf{G}_{i_k}(\widetilde{\boldsymbol{x}}^k)\gets \mathsf{U}_{i_k}\mathsf{U}_{i_k}^\top \mathsf{G}(\widetilde{\boldsymbol{x}}^k)$ with random $i_k\in \{1,\ldots ,b\}$ ▷ Block Operation
|
| 107 |
+
6: $\pmb{x}^k \gets \operatorname{read}(\pmb{x})$
|
| 108 |
+
7: $\pmb{x}^{k + 1}\gets \pmb{x}^k -\gamma \mathsf{G}_{i_k}(\widetilde{\pmb{x}}^k)$
|
| 109 |
+
8: update $\pmb{x}$ in the shared memory using $\pmb{x}^{k+1}$
|
| 110 |
+
9: end for
|
| 111 |
+
|
| 112 |
+
The first variant, ASYNC-RED-BG, is summarized in Algorithm 1, where read( $\cdot$ ) reads a block from the shared memory to the local memory. When the algorithm is run on a single-core system without parallelization (that is, $\widetilde{\boldsymbol{x}}^k = \boldsymbol{x}^k$ ), it reduces to the serial BC-RED algorithm. Hence, our analysis is also applicable to BC-RED.
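As a sketch of Algorithm 1, the serial special case ($\lambda = 0$, so $\widetilde{\pmb{x}}^k = \pmb{x}^k$, i.e., BC-RED) can be written in a few lines of numpy; the quadratic data fidelity and the scalar-shrinkage "denoiser" are toy stand-ins, and a real implementation would compute only the selected block of $\mathsf{G}$ rather than masking the full operator:

```python
import numpy as np

rng = np.random.default_rng(2)
n, b = 12, 3                             # three coordinate blocks of size 4
A = rng.standard_normal((32, n)) / np.sqrt(32)
y = A @ rng.standard_normal(n)
tau, gamma = 1.0, 0.1
D = lambda v: 0.9 * v                    # toy nonexpansive denoiser stand-in

def G(x):
    return 2.0 * A.T @ (A @ x - y) + tau * (x - D(x))

def G_block(x, i):
    # G_i(x) = U_i U_i^T G(x): zero everywhere except the i-th block
    # (only for illustration; computing just the block is cheaper)
    out = np.zeros(n)
    sl = slice(i * (n // b), (i + 1) * (n // b))
    out[sl] = G(x)[sl]
    return out

x = np.zeros(n)
for _ in range(1000):
    i = int(rng.integers(b))             # uniform i.i.d. block selection
    x = x - gamma * G_block(x, i)        # block update with no staleness

assert np.linalg.norm(G(x)) < 1e-2 * np.linalg.norm(G(np.zeros(n)))
```

An asynchronous variant would run this loop on several threads sharing `x`, which is where the bounded-delay analysis becomes necessary.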
|
| 113 |
+
|
| 114 |
+
We specifically consider the random block selection strategy in ASYNC-RED-BG, namely that every block index $i_k$ is selected as an i.i.d. random variable uniformly distributed over $\{1, \dots, b\}$ . Such a strategy is commonly adopted to simplify the convergence analysis. Nevertheless, our method and analysis can be generalized to the scenario where $i_k$ follows an arbitrary probability distribution $P(i_k = i) = p_i$ specified by the user.
|
| 115 |
+
|
| 116 |
+
Compared with serial RED algorithms, ASYNC-RED-BG enjoys considerable scalability by dividing the computation of the full operator $\mathsf{G}$ into $b$ parallel evaluations of $\mathsf{G}_i$ distributed across all cores. Thus, without any modification to the algorithmic design, one can easily improve the performance of the algorithm by simply integrating more cores into the system. In Section 5, we experimentally demonstrate the significant speed-up and scale-up in the context of image recovery.
|
| 117 |
+
|
| 118 |
+
# 3.2 ASYNC-RED USING STOCHASTIC GRADIENT
|
| 119 |
+
|
| 120 |
+
The scale of the measurements is another important factor influencing the computational complexity of large-scale inference tasks. ASYNC-RED-SG improves the applicability of ASYNC-RED to these cases by further considering the decomposition of the measurement space $\mathbb{R}^m$ into $\ell \geq 1$ blocks
|
| 121 |
+
|
| 122 |
+
$$
|
| 123 |
+
\boldsymbol{y} = \left(\boldsymbol{y}_{1}, \dots, \boldsymbol{y}_{\ell}\right) \in \mathbb{R}^{m_{1}} \times \dots \times \mathbb{R}^{m_{\ell}} = \mathbb{R}^{m} \quad \text{with} \quad m = m_{1} + m_{2} + \dots + m_{\ell}.
|
| 124 |
+
$$
|
| 125 |
+
|
| 126 |
+
Hence, ASYNC-RED-SG considers the following data-fidelity $g$ and its gradient $\nabla g$
|
| 127 |
+
|
| 128 |
+
$$
|
| 129 |
+
g (\boldsymbol {x}) = \frac {1}{\ell} \sum_ {j = 1} ^ {\ell} g _ {j} (\boldsymbol {x}) \quad \Rightarrow \quad \nabla g (\boldsymbol {x}) = \frac {1}{\ell} \sum_ {j = 1} ^ {\ell} \nabla g _ {j} (\boldsymbol {x}), \tag {8}
|
| 130 |
+
$$
|
| 131 |
+
|
| 132 |
+
where each $g_{j}$ is evaluated on the subset $\mathbf{y}_{j}\in \mathbb{R}^{m_{j}}$ of the full $\mathbf{y}$ . From (8), we know that the cost of computing $\nabla g(\pmb {x})$ is proportional to the total number of blocks $\ell$ . To reduce the per-iteration cost, we follow the idea of stochastic optimization and approximate the batch gradient by a stochastic gradient that relies on a minibatch of $w\ll \ell$ measurements
|
| 133 |
+
|
| 134 |
+
$$
|
| 135 |
+
\widehat {\nabla} g (\boldsymbol {x}) = \frac {1}{w} \sum_ {s = 1} ^ {w} \nabla g _ {j _ {s}} (\boldsymbol {x}), \tag {9}
|
| 136 |
+
$$
|
| 137 |
+
|
| 138 |
+
where each $j_{s}$ is picked from the set $\{1,\dots ,\ell \}$ as an i.i.d. uniform random variable. Based on the minibatch gradient, we define the block stochastic operator $\widehat{\mathsf{G}}_i:\mathbb{R}^n\to \mathbb{R}^n$ as
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
\widehat{\mathsf{G}}_{i}(\boldsymbol{x}) := \mathsf{U}_{i}\mathsf{U}_{i}^{\top}\widehat{\mathsf{G}}(\boldsymbol{x}), \quad \text{with} \quad \widehat{\mathsf{G}}(\boldsymbol{x}) := \widehat{\nabla}g(\boldsymbol{x}) + \tau(\boldsymbol{x} - \mathrm{D}_{\sigma}(\boldsymbol{x})), \quad \widehat{\mathsf{G}}: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}. \tag{10}
|
| 142 |
+
$$
|
| 143 |
+
|
| 144 |
+
Note that the computation of $\widehat{\mathsf{G}}_i$ now depends on the minibatch size $w$ , which is adjustable to the computational resources at hand. ASYNC-RED-SG is summarized in Algorithm 2.
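The unbiasedness of the minibatch estimator (9) can be checked numerically; the sketch below assumes scalar measurement blocks and verifies empirically that averaging many minibatch gradients approaches the batch gradient (8):

```python
import numpy as np

rng = np.random.default_rng(3)
ell, n = 40, 6
# One scalar measurement per block: g_j(x) = (y_j - a_j . x)^2
a = rng.standard_normal((ell, n))
y = rng.standard_normal(ell)

def grad_gj(x, j):
    return 2.0 * a[j] * (a[j] @ x - y[j])

def full_grad(x):                        # batch gradient (8)
    return sum(grad_gj(x, j) for j in range(ell)) / ell

def minibatch_grad(x, w):                # stochastic estimate (9)
    js = rng.integers(ell, size=w)       # w i.i.d. uniform indices
    return sum(grad_gj(x, int(j)) for j in js) / w

x = rng.standard_normal(n)
# Averaging many minibatch estimates approaches the batch gradient,
# consistent with the unbiasedness of (9)
est = np.mean([minibatch_grad(x, 4) for _ in range(4000)], axis=0)
assert np.linalg.norm(est - full_grad(x)) < 0.25 * np.linalg.norm(full_grad(x))
```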
|
| 145 |
+
|
| 146 |
+
The operation minibatchG( $\cdot$ ) computes the estimate of G based on $w$ randomly selected measurements. We clarify the difference between ASYNC-RED-SG and ASYNC-RED-BG via a specific example.
|
| 147 |
+
|
| 148 |
+
Algorithm 2 ASYNC-RED-SG
|
| 149 |
+
1: input: $\pmb{x}^0 \in \mathbb{R}^n, \gamma > 0, \tau > 0$ .
|
| 150 |
+
2: setup: A multicore system with one shared memory storing $\pmb{x}$ and global iteration $k$ .
|
| 151 |
+
3: for global $k = 1, 2, 3, \ldots$ do
|
| 152 |
+
4: $\widetilde{\pmb{x}}^k \gets \text{read}(\pmb{x})$
|
| 153 |
+
5: $\widehat{\mathsf{G}}(\widetilde{\pmb{x}}^k) \gets \text{minibatchG}(\widetilde{\pmb{x}}^k, w)$ with random $j_s \in \{1, \dots, \ell\}$, $s = 1, \dots, w$
|
| 154 |
+
6: $\widehat{\mathsf{G}}_{i_k}(\widetilde{\pmb{x}}^k) \gets \mathsf{U}_{i_k} \mathsf{U}_{i_k}^{\top} \widehat{\mathsf{G}}(\widetilde{\pmb{x}}^k)$ with random $i_k \in \{1, \dots, b\}$
|
| 155 |
+
7: $\pmb{x}^k \gets \text{read}(\pmb{x})$
|
| 156 |
+
8: $\pmb{x}^{k+1} \gets \pmb{x}^k - \gamma \widehat{\mathsf{G}}_{i_k}(\widetilde{\pmb{x}}^k)$
|
| 157 |
+
9: update $\pmb{x}$ in the shared memory using $\pmb{x}^{k+1}$
|
| 158 |
+
10: end for
|
| 159 |
+
|
| 160 |
+
Consider the least-squares $g$ with a block-friendly operator $\pmb{A}$ and a block-efficient denoiser $\mathrm{D}_{\sigma}$ . We can write the update of ASYNC-RED-BG for a single iteration as
|
| 161 |
+
|
| 162 |
+
$$
|
| 163 |
+
\mathbf {G} _ {i} (\widetilde {\boldsymbol {x}}) = \boldsymbol {A} _ {i} ^ {\top} \left(\boldsymbol {A} _ {i} \widetilde {\boldsymbol {x}} - \boldsymbol {y} _ {i}\right) + \tau \left(\widetilde {\boldsymbol {x}} _ {i} - \mathrm {D} \left(\widetilde {\boldsymbol {x}} _ {i}\right)\right), \tag {11}
|
| 164 |
+
$$
|
| 165 |
+
|
| 166 |
+
where $\widetilde{\pmb{x}}$ is the delayed iterate for $\pmb{x}$ , and $A_{i} \in \mathbb{R}^{m \times n_{i}}$ is the submatrix of $\pmb{A}$ consisting of the columns corresponding to the $i$th block. Although the per-iteration complexity is reduced by roughly a factor of $b = n / n_{i}$ by working with $A_{i}$ instead of $\pmb{A}$ , ASYNC-RED-BG still needs to process all the measurements $\pmb{y}_{i}$ related to the $i$th block at every iteration. Consider the corresponding update of ASYNC-RED-SG with one measurement used at a time
|
| 167 |
+
|
| 168 |
+
$$
|
| 169 |
+
\widehat {\mathbf {G}} _ {i} (\widetilde {\boldsymbol {x}}) = \boldsymbol {A} _ {j i} ^ {\top} \left(\boldsymbol {A} _ {j i} \widetilde {\boldsymbol {x}} - \boldsymbol {y} _ {j i}\right) + \tau \left(\widetilde {\boldsymbol {x}} _ {i} - \mathrm {D} \left(\widetilde {\boldsymbol {x}} _ {i}\right)\right), \tag {12}
|
| 170 |
+
$$
|
| 171 |
+
|
| 172 |
+
where $\mathbf{y}_{ji}$ denotes the $j$th measurement of $\mathbf{x}_i$ , and $A_{ji} \in \mathbb{R}^{m_j \times n_i}$ is the submatrix formed by the rows and columns corresponding to the $j$th measurement block and the $i$th coordinate block. This indicates that the reduction in per-iteration complexity from ASYNC-RED-BG to ASYNC-RED-SG can be up to a factor of $\ell = m / m_j$ . In practice, it is common to use $w > 1$ measurements at a time to optimize the total runtime. Note that if $\mathsf{U} = \mathsf{U}^{\top} = \mathsf{I}$ , ASYNC-RED-SG becomes the asynchronous stochastic RED algorithm. In the next section, we present a complete analysis of ASYNC-RED and theoretically discuss its connection to related algorithms.
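For intuition, a numpy sketch of the block update (12) in the aligned block-diagonal case ($b = \ell$, so the $j$th measurement block depends only on the $i = j$th coordinate block, as in the experiments of Section 5); the scalar-shrinkage denoiser is a toy stand-in, and we take $g = \frac{1}{2}\|\pmb{y} - \pmb{A}\pmb{x}\|_2^2$ so $\nabla g$ carries no factor of 2:

```python
import numpy as np

rng = np.random.default_rng(4)
b = 3                                    # aligned case: b = ell = 3
Bs = [rng.standard_normal((4, 4)) / 2.0 for _ in range(b)]
A = np.zeros((12, 12))
for i, Bi in enumerate(Bs):              # block-diagonal measurement matrix
    A[4 * i:4 * (i + 1), 4 * i:4 * (i + 1)] = Bi
y = A @ rng.standard_normal(12)
tau = 0.5
D = lambda v: 0.9 * v                    # toy denoiser, applied blockwise

def G_hat_i(xt, i):
    # Update (12): the i-th block touches only A_ii, y_i, and x_i
    sl = slice(4 * i, 4 * (i + 1))
    out = np.zeros(12)
    out[sl] = Bs[i].T @ (Bs[i] @ xt[sl] - y[sl]) + tau * (xt[sl] - D(xt[sl]))
    return out

def G_full(xt):
    # Full operator with g = (1/2)||y - A x||^2, so grad g = A^T(A x - y)
    return A.T @ (A @ xt - y) + tau * (xt - D(xt))

xt = rng.standard_normal(12)
# For block-diagonal A, the cheap block update matches the full operator
assert np.allclose(G_hat_i(xt, 1)[4:8], G_full(xt)[4:8])
```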
|
| 173 |
+
|
| 174 |
+
# 4 CONVERGENCE ANALYSIS OF ASYNC-RED
|
| 175 |
+
|
| 176 |
+
The proposed analysis is based on the following explicit assumptions. Note that these assumptions serve as sufficient conditions for the convergence.
|
| 177 |
+
|
| 178 |
+
Assumption 1. We assume a bounded maximal delay $\lambda < \infty$ . Hence, during any update cycle of a core, the estimate $\pmb{x}$ in the shared memory is updated at most $\lambda \in \mathbb{Z}_{+}$ times by other cores.
|
| 179 |
+
|
| 180 |
+
The value of $\lambda$ is often dependent on the number of cores involved in the computation (Wright, 2015). If every core takes a similar amount of time to compute its update, $\lambda$ is expected to be a multiple of the number of cores. Related work has investigated the convergence with unbounded maximal delays in the context of traditional optimization (Hannah & Yin, 2018; Peng et al., 2019; Zhou et al., 2018).
|
| 181 |
+
|
| 182 |
+
Assumption 2. The operator $\mathsf{G}$ is such that $\mathsf{zer}(\mathsf{G})\neq \emptyset$ , and the distance of the initial $\pmb{x}^0\in \mathbb{R}^n$ to any element of $\mathsf{zer}(\mathsf{G})$ is bounded, that is, $\| \pmb {x}^0 -\pmb{x}^*\| \leq R_0$ for all $\pmb{x}^{*}\in \mathsf{zer}(\mathsf{G})$ with $R_0 < \infty$ .
|
| 183 |
+
|
| 184 |
+
This assumption ensures the existence of a solution for the RED problem and is related to the existence of minimizers in traditional coordinate minimization (Nesterov, 2012; Beck & Tetruashvili, 2013).
|
| 185 |
+
|
| 186 |
+
Assumption 3. (a) Every component function $g_{j}$ is convex and differentiable, and has a Lipschitz continuous gradient with constant $L_{j} > 0$ . (b) At every update, the stochastic gradient is an unbiased estimator of $\nabla g$ with a bounded variance:
|
| 187 |
+
|
| 188 |
+
$$
|
| 189 |
+
\mathbb{E}\left[\widehat{\nabla}g(\boldsymbol{x})\right] = \nabla g(\boldsymbol{x}), \quad \mathbb{E}\left[\|\widehat{\nabla}g(\boldsymbol{x}) - \nabla g(\boldsymbol{x})\|^{2}\right] \leq \frac{\nu^{2}}{w}, \quad \boldsymbol{x} \in \mathbb{R}^{n}, \quad \nu > 0.
|
| 190 |
+
$$
|
| 191 |
+
|
| 192 |
+
The first part of the assumption implies that $g$ is also convex and has a Lipschitz continuous gradient with constant $L = \max \{L_1, \ldots, L_\ell\}$ . The second part is a standard assumption on the unbiasedness
|
| 193 |
+
|
| 194 |
+

|
| 195 |
+
|
| 196 |
+

|
| 197 |
+
|
| 198 |
+

|
| 199 |
+
|
| 200 |
+

|
| 201 |
+
Figure 2: Convergence of ASYNC-RED-BG for different numbers of accessible cores $n_c \in \{1, 2, 4, 6, 8\}$ . The left figure plots the average normalized distance to $\text{zer}(\mathsf{G})$ against the iteration number; the middle and right figures plot the distance and the SNR against the actual runtime in seconds. The shaded areas represent the range of values attained over the test images.
|
| 202 |
+
|
| 203 |
+
and variance of the stochastic gradient (Lian et al., 2015; Ghadimi & Lan, 2016). Our final assumption is related to the deep denoiser used in ASYNC-RED.
|
| 204 |
+
|
| 205 |
+
Assumption 4. The denoiser $\mathsf{D}_{\sigma}$ is a nonexpansive operator: $\| \mathsf{D}_{\sigma}(\pmb {x}) - \mathsf{D}_{\sigma}(\pmb {y})\| \leq \| \pmb {x} - \pmb {y}\|$ for all $\pmb{x}, \pmb{y} \in \mathbb{R}^n$ .
|
| 206 |
+
|
| 207 |
+
Compared with the conditions stated in Section 2 (namely, that it is locally homogeneous with a symmetric Jacobian), our requirement on the denoiser is milder. One can train a nonexpansive $\mathrm{D}_{\sigma}$ by constraining the Lipschitz constant of $\mathrm{D}_{\sigma}$ via the spectral normalization, which is an active area of research in deep learning (Miyato et al., 2018; Sedghi et al., 2019; Anil et al., 2019; Terris et al., 2020).
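A minimal sketch of checking nonexpansiveness empirically; the one-layer map below (a spectrally normalized weight followed by ReLU) is a toy stand-in for a Lipschitz-constrained deep denoiser:

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.standard_normal((16, 16))
W *= 0.95 / np.linalg.norm(W, 2)         # spectral normalization: ||W||_2 = 0.95

def D(x):
    # One-layer "denoiser": ReLU is 1-Lipschitz, so D is 0.95-Lipschitz
    # and hence nonexpansive
    return np.maximum(W @ x, 0.0)

# Empirical check of ||D(u) - D(v)|| <= ||u - v|| on random pairs
for _ in range(100):
    u, v = rng.standard_normal(16), rng.standard_normal(16)
    assert np.linalg.norm(D(u) - D(v)) <= np.linalg.norm(u - v) + 1e-12
```

A trained DnCNN requires the constraint to be enforced during training rather than verified after the fact, which is what the cited spectral-normalization literature addresses.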
|
| 208 |
+
|
| 209 |
+
We can now state the theorems on ASYNC-RED.
|
| 210 |
+
|
| 211 |
+
Theorem 1. Let Assumptions 1-4 hold. Run ASYNC-RED-BG for $t > 0$ iterations with uniform i.i.d. block selection using a fixed step-size $\gamma \in (0,1 / ((1 + 2\lambda)(L + 2\tau)))$ . Then, the iterates of the algorithm satisfy
|
| 212 |
+
|
| 213 |
+
$$
|
| 214 |
+
\min_{0 \leq k \leq t-1} \mathbb{E}\left[\|\mathrm{G}\left(\boldsymbol{x}^{k}\right)\|^{2}\right] \leq \left[\frac{D}{b} + 2\right] \frac{(L + 2\tau)b}{\gamma t} R_{0}^{2}, \tag{13}
|
| 215 |
+
$$
|
| 216 |
+
|
| 217 |
+
where $D = 2\lambda^2 /(1 + \lambda)^2$ is a constant.
|
| 218 |
+
|
| 219 |
+
Theorem 1 establishes the convergence of ASYNC-RED-BG to the fixed-point set $\text{zer}(\mathbf{G})$ at the rate of $O(1/t)$ . Our result is consistent with the existing results in the literature. In particular, when the algorithm adopts serial block updates, that is, $\lambda = 0$ and $\widetilde{x}^k = x^k$ , the recovered convergence is nearly the same as that of BC-RED (Sun et al., 2019a) up to a constant factor. On the other hand, our convergence rate $O(1/t)$ is also consistent with the rate proved for asynchronous block coordinate descent in nonconvex optimization (Sun et al., 2017).
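For intuition, the admissible stepsize range and the right-hand side of (13) are straightforward to evaluate; all constants below are hypothetical placeholders, not values from the paper's experiments:

```python
# Evaluating the stepsize bound of Theorem 1 and the right-hand side of (13)
# for hypothetical constants (illustrative values only)
L_g, tau, lam = 4.0, 1.0, 8              # Lipschitz constant, tau, max delay
b, R0, t = 9, 10.0, 5000                 # blocks, initial distance bound, iters

gamma_max = 1.0 / ((1 + 2 * lam) * (L_g + 2 * tau))
gamma = 0.9 * gamma_max                  # any stepsize inside the open range
D_const = 2 * lam**2 / (1 + lam) ** 2    # the constant D in Theorem 1
bound = (D_const / b + 2) * (L_g + 2 * tau) * b / (gamma * t) * R0**2
assert 0 < gamma < gamma_max and bound > 0
```

Note how a larger maximal delay $\lambda$ shrinks the admissible stepsize, which in turn inflates the $1/(\gamma t)$ factor in the bound.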
|
| 220 |
+
|
| 221 |
+
Theorem 2. Let Assumptions 1-4 hold. Run ASYNC-RED-SG for $t > 0$ iterations with uniform i.i.d. selection of blocks and measurements using a fixed step-size $\gamma \in (0,1 / ((1 + 2\lambda)(L + 2\tau))]$ . Then, the iterates of the algorithm satisfy
|
| 222 |
+
|
| 223 |
+
$$
|
| 224 |
+
\min _ {0 \leq k \leq t - 1} \mathbb {E} \left[ \| \mathbf {G} \left(\boldsymbol {x} ^ {k}\right) \| ^ {2} \right] \leq \left[ \frac {D}{b} + 2 \right] \frac {(L + 2 \tau) b}{\gamma t} R _ {0} ^ {2} + \left[ \frac {2 D}{b} + 2 \right] \frac {\gamma}{w} C \tag {14}
|
| 225 |
+
$$
|
| 226 |
+
|
| 227 |
+
where $C = (L + 2\tau)(1 + \lambda)\nu^2$ and $D = 2\lambda^2 /(1 + \lambda)^2$ are constants.
|
| 228 |
+
|
| 229 |
+
Theorem 2 states that ASYNC-RED-SG approximates the solution obtained by ASYNC-RED-BG up to a finite error that decreases for larger values of the minibatch size $w$ . This relationship is consistent with the recent theoretical results on the online PnP and RED algorithms (Sun et al., 2019b; Wu et al., 2020). In practice, the selection of $w$ must balance the actual memory capacity of the system and the desired runtime for obtaining a reasonable solution. Our numerical evaluation in Section 5 demonstrates the excellent approximation of ASYNC-RED-SG to the batch-gradient solution by using a small subset of data.
|
| 230 |
+
|
| 231 |
+
By carefully choosing the stepsize $\gamma$ , we can state the following remark on Theorem 2.
|
| 232 |
+
|
| 233 |
+
Remark 1. Set the stepsize to be $\gamma = 1 / \sqrt{wt}$ . If the maximal delay satisfies $\lambda \leq (1 / 2)[\sqrt{wt} /(L + 2\tau) - 1]$ , then after $t > 0$ iterations we have
|
| 234 |
+
|
| 235 |
+
$$
|
| 236 |
+
\min _ {0 \leq k \leq t - 1} \mathbb {E} \left[ \| \mathrm {G} \left(\boldsymbol {x} ^ {k}\right) \| ^ {2} \right] \leq \left[ \frac {D}{b} + 2 \right] \frac {(L + 2 \tau) b}{\sqrt {w t}} R _ {0} ^ {2} + \left[ \frac {2 D}{b} + 2 \right] \frac {C}{\sqrt {w t}}. \tag {15}
|
| 237 |
+
$$
|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
|
| 241 |
+

|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
|
| 245 |
+

|
| 246 |
+
Figure 3: Left: Evolution of the convergence accuracy of ASYNC-RED-SG as the minibatch size $w$ increases. The average distance is plotted against the number of iterations with the shaded areas representing the range of values attained over the test images. Middle & Right: Comparison of convergence speed between ASYNC-RED-BG/SG and other baselines. The right table summarizes the total runtime and the speed-up compared with GM-RED for all algorithms.
|
| 247 |
+
|
| 248 |
+

|
| 249 |
+
|
| 250 |
+
Table 1: Speed-up of ASYNC-RED compared with GM-RED

| Method | SNR | time | speed-up |
| --- | --- | --- | --- |
| GM-RED (1-core) | 29.01 dB | 1.8 hr | - |
| SYNC-RED (8-core) | 29.00 dB | 38.9 min | 2.8× |
| ASYNC-RED-BG (8-core) | 29.01 dB | 17.9 min | 6.1× |
| ASYNC-RED-SG (8-core) | 28.08 dB | 13.0 min | 8.4× |
|
| 251 |
+
|
| 252 |
+
This establishes the fixed-point convergence to the set $\text{zer}(\mathbf{G})$ at the rate of $O(1 / \sqrt{wt})$ under specific conditions. If we treat the entire $\pmb{x}$ as a single block, namely $\mathsf{U} = \mathsf{U}^{\mathsf{T}} = \mathsf{I}$ and $b = 1$ , ASYNC-RED-SG becomes the asynchronous stochastic RED algorithm, and the remark immediately holds true for the latter. Note that our convergence rate $O(1 / \sqrt{wt})$ is consistent with the rate proved for the serial (Nemirovski et al., 2009) and parallel (Dekel et al., 2012; Lian et al., 2015) stochastic gradient methods.
|
| 253 |
+
|
| 254 |
+
All the proofs are presented in the supplement. We note that the analysis above does not assume the existence of an explicit regularizer associated with the operator $\mathsf{D}_{\sigma}$ . Moreover, it does not require $\mathsf{D}_{\sigma}$ to be a Gaussian denoiser. Our analysis is hence applicable to all nonexpansive operators, such as the traditional proximal operators or the more recent artifact-removal operators (Zhang et al., 2019).
|
| 255 |
+
|
| 256 |
+
# 5 NUMERICAL VALIDATION
|
| 257 |
+
|
| 258 |
+
We now present a numerical validation of ASYNC-RED. Our goals are first to validate the theorems proposed in Section 4 and then to demonstrate the effectiveness and efficiency of our algorithm on large-scale problems. We consider two image recovery tasks of the form $\boldsymbol{y} = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{e}$ , where the measurement matrix $\boldsymbol{A}$ corresponds to either a random matrix in compressive sensing (CS) or the Radon transform in computed tomography (CT), and the noise $\boldsymbol{e}$ is assumed to be additive white Gaussian (AWGN). In particular, the random matrix is implemented with the block-diagonal structure $\boldsymbol{A} = \mathrm{diag}([A_1, \dots, A_b])$ for fast validation, while the Radon transform is used in its full matrix form to demonstrate the effectiveness of ASYNC-RED for overcoming the computational bottleneck. Our deep neural net prior adapts the DnCNN architecture (Zhang et al., 2017a). We used the signal-to-noise ratio (SNR, in dB) to quantify the quality of the reconstructed images. For each experiment, we selected the denoiser that achieves the best SNR performance from the ones corresponding to five noise levels $\sigma \in \{5, 10, 15, 20, 25\}$ . The value of $\sigma$ is fixed across all iterations of the algorithm. The supplement provides additional technical details.
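The paper quantifies reconstruction quality by SNR in dB without restating the formula; a common definition in this literature (and the one assumed here) is:

```python
import numpy as np

def snr_db(x_hat, x_true):
    # SNR (dB) of a reconstruction x_hat against the ground truth x_true
    err = np.linalg.norm(x_true - x_hat)
    return 20.0 * np.log10(np.linalg.norm(x_true) / err)

x = np.ones(100)
assert np.isclose(snr_db(0.9 * x, x), 20.0)   # 10% relative error -> 20 dB
```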
|
| 259 |
+
|
| 260 |
+
# 5.1 CONVERGENCE BEHAVIOR
|
| 261 |
+
|
| 262 |
+
We validate our theorems on the CS task with 6 test images selected from the Set12 dataset (Zhang et al., 2017a). Each test image is rescaled to the size of $240 \times 240$ pixels (see Fig. 6 in the supplement for a visualization). The block-diagonal matrix $\mathbf{A}$ is set to consist of 9 submatrices, corresponding to a $3 \times 3$ grid of blocks of size $80 \times 80$ pixels in every image. The elements of $\mathbf{A}$ are i.i.d. zero-mean Gaussian random variables with variance $1/m$ , and the compression ratio is set to $m/n = 0.7$ , which means that the total number of measurements is 4480 for each block. We obtain the measurements by multiplying $\mathbf{A}$ with each vectorized image and adding noise corresponding to an input SNR of $30~\mathrm{dB}$ . Finally, we use the normalized distance $\| \mathsf{G}(\boldsymbol{x}^k) \|_2^2 / \| \mathsf{G}(\boldsymbol{x}^0) \|_2^2$ to quantify the fixed-point convergence, with $b$ block updates grouped as one iteration. The distance is expected to approach zero as the algorithm converges to a fixed point. The average performance of all methods is obtained by running a single trial for each image.
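The measurement model for a single $80 \times 80$ block can be sketched as follows; the image block `x` is a random stand-in for a vectorized image patch, and the noise level is derived from the 30 dB input SNR:

```python
import numpy as np

rng = np.random.default_rng(6)
n_blk = 80 * 80                          # one 80x80 image block
m_blk = int(0.7 * n_blk)                 # m/n = 0.7 -> 4480 measurements
A = rng.standard_normal((m_blk, n_blk), dtype=np.float32) / np.sqrt(m_blk)
x = rng.random(n_blk, dtype=np.float32)  # stand-in for a vectorized block

z = A @ x
input_snr_db = 30.0
# Noise std chosen so that 20 log10(||z|| / ||e||) is about 30 dB
sigma = np.linalg.norm(z) / np.sqrt(m_blk) * 10.0 ** (-input_snr_db / 20.0)
y = z + sigma * rng.standard_normal(m_blk).astype(np.float32)

assert m_blk == 4480
assert abs(20 * np.log10(np.linalg.norm(z) / np.linalg.norm(y - z)) - 30) < 0.5
```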
|
| 263 |
+
|
| 264 |
+
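As a concrete check of the dimensions above, the per-block sensing model can be sketched as follows (variable names are ours, and the additive noise term is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 80 * 80               # pixels per 80x80 block
m = int(0.7 * n)          # m/n = 0.7  ->  4480 measurements per block

# i.i.d. zero-mean Gaussian sensing matrix with variance 1/m for one block
A_block = rng.normal(0.0, np.sqrt(1.0 / m), size=(m, n))

x_block = rng.random(n)       # one vectorized image block
y_block = A_block @ x_block   # noiseless measurements for this block
```

The full measurement operator is then the block-diagonal stack of such $A_i$ , one per block of the $3 \times 3$ grid.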
Theorem 1 establishes the convergence of ASYNC-RED-BG to the fixed-point set $\mathrm{zer}(\mathsf{G})$ . This is illustrated in Fig. 2 for four different numbers of accessible cores $n_c \in \{2, 4, 6, 8\}$ . In the left figure, the average normalized distance is plotted against the iteration number, while the middle and


Figure 4: CT reconstruction with a time budget of 1 hour by ASYNC-RED-BG/SG and GM-RED. The colormap is adjusted for the best visual quality. Panels: FBP (15.56 dB), ASYNC-RED-SG (23.29 dB), ASYNC-RED-BG (22.25 dB), GM-RED (21.45 dB).
right figures plot the corresponding distance and SNR values against the actual runtime in seconds. The shaded areas represent the range of values attained across all test images. We also plot the results of serial BC-RED using a dashed line as a reference. ASYNC-RED-BG is implemented to run asynchronously on multiple cores, while BC-RED can only use one core to perform the computation. The left figure highlights the per-iteration fixed-point convergence of ASYNC-RED-BG for different $n_c$ , with all variants agreeing with the serial BC-RED. Because ASYNC-RED-BG uses more cores, the middle and right figures demonstrate its significantly faster in-time convergence to the same SNR value compared with BC-RED. Specifically, BC-RED takes 1.8 hours to achieve 29.00 dB, while ASYNC-RED-BG ( $n_c = 8$ ) takes only 17.9 minutes to obtain the same value, corresponding to a $6 \times$ improvement in computation time.
Theorem 2 establishes the convergence of ASYNC-RED-SG to $\mathrm{zer}(\mathsf{G})$ up to an error term that is inversely proportional to the minibatch size $w$ . This is illustrated in Fig. 3 (left) for three different minibatch sizes $w \in \{1120, 2240, 3360\}$ . As before, we plot the average distance against the iteration number, with the shaded area representing the variance. Note that the log scale of the y-axis highlights the change for smaller values. Fig. 3 demonstrates the improved convergence of ASYNC-RED-SG to $\mathrm{zer}(\mathsf{G})$ for larger $w$ , which is consistent with our theoretical analysis. Fig. 3 (middle) compares the convergence speed of ASYNC-RED-BG/SG, gradient-method RED (GM-RED), and synchronous parallel RED (SYNC-RED). For ASYNC-RED-SG, we use $w = 1120$ . In particular, ASYNC-RED-SG requires less total runtime (13.0 min vs. 17.9 min) to obtain a similar result (28.03 dB vs. 29.01 dB) and achieves an $8.4 \times$ speedup over GM-RED. The table in Fig. 3 summarizes the detailed results.
# 5.2 EFFECTIVENESS FOR COMPUTATIONAL IMAGING
We additionally demonstrate the effectiveness of our algorithm by reconstructing an $800 \times 800$ CT image from its 180 projections. For block-parallel updates, the image is decomposed into 16 blocks, each of size $200 \times 200$ pixels. The Radon matrix used in the experiment corresponds to 180 angles with 1131 detectors, and the noise level is set to $70~\mathrm{dB}$ . We refer to Supplement D.2 for additional technical details. Fig. 4 shows the images reconstructed by ASYNC-RED-BG/SG and GM-RED. Each algorithm starts from the filtered back-projection (FBP) of the measurements and runs for 1 hour. Here, ASYNC-RED-SG randomly uses one-third of the total measurements at every iteration. Given the same amount of time, ASYNC-RED-BG/SG successfully mitigates the noise artifacts, while the result of GM-RED is still noisy. In particular, the per-iteration time costs of ASYNC-RED-BG, ASYNC-RED-SG, and GM-RED are 5.23, 3.21, and 19.19 seconds, respectively. This experiment clearly illustrates the fast processing speed of the asynchronous procedure.
# 6 CONCLUSION
Asynchronous parallel methods have gained increasing importance in optimization for solving large-scale imaging inverse problems. We have introduced ASYNC-RED as an extension of the recent RED framework and theoretically analyzed its convergence in both batch and stochastic settings. We have validated its convergence guarantees and demonstrated its effectiveness in CT image reconstruction. We note that this work is complementary to traditional acceleration strategies, such as Nesterov acceleration and variance reduction, commonly used in optimization. Future work will investigate ASYNC-RED with Nesterov acceleration (as was done by Hannah et al. (2019) for traditional asynchronous block-coordinate algorithms) and variance reduction (as was done by Johnson & Zhang (2013) for the traditional stochastic gradient method) to better understand the tradeoffs between acceleration and scalability in multicore systems. We will additionally investigate the theoretical limits of ASYNC-RED in the unbounded-maximal-delay setting.
# ACKNOWLEDGEMENT
Research presented in this article was supported by NSF award CCF-1813910 and the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 20200061DR.
# REFERENCES
R. Ahmad, C. A. Bouman, G. T. Buzzard, S. Chan, S. Liu, E. T. Reehorst, and P. Schniter. Plug-and-play methods for magnetic resonance imaging: Using denoisers for image recovery. IEEE Signal Processing Magazine, 37(1):105-116, 2020.

K. Akiyama, A. Alberdi, W. Alef, K. Asada, R. Azulay, A. K. Baczko, D. Ball, M. Balokovic, J. Barrett, D. Bintley, et al. First M87 Event Horizon Telescope results. IV. Imaging the central supermassive black hole. The Astrophysical Journal Letters, 875(1):L4, 2019.

C. Anil, J. Lucas, and R. Grosse. Sorting out Lipschitz function approximation. In Proc. 36th Int. Conf. Machine Learning (ICML), pp. 291-301, Long Beach, California, USA, 09-15 Jun 2019.

H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 2 edition, 2017.

A. Beck and L. Tetruashvili. On the convergence of block coordinate descent type methods. SIAM J. Optim., 23(4):2037-2060, October 2013.

P. Bianchi, W. Hachem, and F. Iutzeler. A coordinate descent primal-dual algorithm and application to distributed asynchronous optimization. IEEE Transactions on Automatic Control, 61(10):2947-2957, 2015.

S. A. Bigdeli, M. Jin, P. Favaro, and M. Zwicker. Deep mean-shift priors for image restoration. In Proc. Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, Dec 2017.

S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2004.

G. T. Buzzard, S. H. Chan, S. Sreehari, and C. A. Bouman. Plug-and-play unplugged: Optimization-free reconstruction using consensus equilibrium. SIAM J. Imaging Sci., 11(3):2001-2020, September 2018.

S. H. Chan, X. Wang, and O. A. Elgendy. Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comp. Imag., 3(1):84-98, March 2017.

R. Cohen, M. Elad, and P. Milanfar. Regularization by denoising via fixed-point projection (RED-PRO). arXiv:2008.00226 [eess.IV], 2020.

O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13(1):165-202, 2012.

M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process., 15(12):3736-3745, December 2006.

M. A. T. Figueiredo and R. D. Nowak. Wavelet-based image estimation: An empirical Bayes approach using Jeffreys' noninformative prior. IEEE Trans. Image Process., 10(9):1322-1331, September 2001.

M. A. T. Figueiredo and R. D. Nowak. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process., 12(8):906-916, August 2003.

A. K. Fletcher, P. Pandit, S. Rangan, S. Sarkar, and P. Schniter. Plug-in estimation in high-dimensional linear inverse problems: A rigorous analysis. In Advances in Neural Information Processing Systems 31, pp. 7451-7460, Montreal, QC, Canada, Dec. 2018.

S. Ghadimi and G. Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Math. Program. Ser. A, 156(1):59-99, March 2016.

R. Hannah and W. Yin. On unbounded delays in asynchronous parallel fixed-point algorithms. Journal of Scientific Computing, 76(1):299-326, Jul 2018.

R. Hannah, F. Feng, and W. Yin. A2BCD: Asynchronous acceleration with optimal complexity. In International Conference on Learning Representations, 2019.

Y. Hu, S. G. Lingala, and M. Jacob. A fast majorize-minimize algorithm for the recovery of sparse and low-rank matrices. IEEE Trans. Image Process., 21(2):742-753, February 2012.

F. Iutzeler, P. Bianchi, P. Ciblat, and W. Hachem. Asynchronous distributed optimization using a randomized alternating direction method of multipliers. In 52nd IEEE Conference on Decision and Control, pp. 3671-3676. IEEE, 2013.

R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, volume 26, pp. 315-323, 2013.

D. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015. arXiv:1412.6980 [cs.LG].

X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. In Advances in Neural Information Processing Systems 28, pp. 2737-2745, Montreal, QC, Canada, 2015.

X. Lian, W. Zhang, C. Zhang, and J. Liu. Asynchronous decentralized parallel stochastic gradient descent. In Proc. 35th Int. Conf. Machine Learning (ICML), pp. 3043-3052, Stockholm, Sweden, 10-15 Jul 2018.

J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. SIAM Journal on Optimization, 25(1):351-376, 2015.

J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic coordinate descent algorithm. J. Mach. Learn. Res., 16(1):285-322, January 2015.

T. Liu, S. Li, J. Shi, E. Zhou, and T. Zhao. Towards understanding acceleration tradeoff between momentum and asynchrony in nonconvex stochastic optimization. In Advances in Neural Information Processing Systems 31, pp. 3682-3692, Montreal, QC, Canada, Dec 2018.

D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. IEEE Int. Conf. Comp. Vis. (ICCV), pp. 416-423, Vancouver, Canada, July 7-14, 2001.

G. Mataev, P. Milanfar, and M. Elad. DeepRED: Deep image prior powered by RED. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, Oct 2019.

T. Meinhardt, M. Moeller, C. Hazirbas, and D. Cremers. Learning proximal operators: Using denoising networks for regularizing inverse imaging problems. In Proc. IEEE Int. Conf. Comp. Vis. (ICCV), pp. 1799-1808, Venice, Italy, October 22-29, 2017.

C. Metzler, P. Schniter, A. Veeraraghavan, and R. Baraniuk. prDeep: Robust phase retrieval with a flexible deep network. In Proc. 35th Int. Conf. Machine Learning (ICML), pp. 3501-3510, Stockholm, Sweden, 10-15 Jul 2018.

C. A. Metzler, A. Maleki, and R. Baraniuk. BM3D-PRGAMP: Compressive phase retrieval based on BM3D denoising. In Proc. IEEE Int. Conf. Image Proc. (ICIP), pp. 2504-2508, Phoenix, AZ, USA, September 25-28, 2016a.

C. A. Metzler, A. Maleki, and R. G. Baraniuk. From denoising to compressed sensing. IEEE Trans. Inf. Theory, 62(9):5117-5144, September 2016b.

T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.

A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. Optim., 19(4):1574-1609, 2009.

Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004.

Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim., 22(2):341-362, 2012.

F. Ong, X. Zhu, J. Y. Cheng, K. M. Johnson, P. E. Z. Larson, S. S. Vasanawala, and M. Lustig. Extreme MRI: Large-scale volumetric dynamic imaging from continuous non-gated acquisitions. Magnetic Resonance in Medicine, 84(4):1763-1780, 2020.

Z. Peng, Y. Xu, M. Yan, and W. Yin. ARock: An algorithmic framework for asynchronous parallel coordinate updates. SIAM Journal on Scientific Computing, 38(5):A2851-A2879, 2016.

Z. Peng, Y. Xu, M. Yan, and W. Yin. On the convergence of asynchronous parallel iteration with unbounded delays. Journal of the Operations Research Society of China, 7(1):5-42, 2019.

S. Rangan, P. Schniter, and A. Fletcher. On the convergence of approximate message passing with arbitrary matrices. In Proc. IEEE Int. Symp. Information Theory, pp. 236-240, Honolulu, HI, USA, June 29-July 4, 2014.

S. Rangan, A. K. Fletcher, P. Schniter, and U. S. Kamilov. Inference for generalized linear models via alternating directions and Bethe free energy minimization. In Proc. IEEE Int. Symp. Information Theory, pp. 1640-1644, Hong Kong, June 14-19, 2015.

B. Recht, C. Re, S. Wright, and F. Niu. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems 24, pp. 693-701, Granada, Spain, Dec 2011.

E. T. Reehorst and P. Schniter. Regularization by denoising: Clarifications and new interpretations. IEEE Trans. Comput. Imag., 5(1):52-67, March 2019.

R. T. Rockafellar and R. J-B Wets. Variational Analysis. Springer, 1998.

Y. Romano, M. Elad, and P. Milanfar. The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci., 10(4):1804-1844, 2017.

L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D, 60(1-4):259-268, November 1992.

E. K. Ryu and S. Boyd. A primer on monotone operator methods. Appl. Comput. Math., 15(1):3-43, 2016.

E. K. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, and W. Yin. Plug-and-play methods provably converge with properly trained denoisers. In Proc. 36th Int. Conf. Machine Learning (ICML), pp. 5546-5557, 2019.

H. Sedghi, V. Gupta, and P. M. Long. The singular values of convolutional layers. In International Conference on Learning Representations, 2019.

S. Sreehari, S. V. Venkatakrishnan, B. Wohlberg, G. T. Buzzard, L. F. Drummy, J. P. Simmons, and C. A. Bouman. Plug-and-play priors for bright field electron tomography and sparse interpolation. IEEE Trans. Comput. Imaging, 2(4):408-423, December 2016.

T. Sun, R. Hannah, and W. Yin. Asynchronous coordinate descent under more realistic assumption. In Advances in Neural Information Processing Systems 30, pp. 6183-6191, Long Beach, California, USA, Dec. 2017.

Y. Sun, J. Liu, and U. S. Kamilov. Block coordinate regularization by denoising. In Advances in Neural Information Processing Systems 32, pp. 380-390, Vancouver, BC, Canada, Dec. 2019a.

Y. Sun, B. Wohlberg, and U. S. Kamilov. An online plug-and-play algorithm for regularized image reconstruction. IEEE Trans. Comput. Imaging, 2019b.

Y. Sun, S. Xu, Y. Li, L. Tian, B. Wohlberg, and U. S. Kamilov. Regularized Fourier ptychography using an online plug-and-play algorithm. In Proc. IEEE Int. Conf. Acoustics, Speech and Signal Process. (ICASSP), pp. 7665-7669, Brighton, UK, May 12-17, 2019.

A. M. Teodoro, J. M. Bioucas-Dias, and M. A. T. Figueiredo. A convergent image fusion algorithm using scene-adapted Gaussian-mixture-based denoising. IEEE Trans. Image Process., 28(1):451-463, January 2019.

M. Terris, A. Repetti, J. Pesquet, and Y. Wiaux. Building firmly nonexpansive convolutional neural networks. In Proc. IEEE Int. Conf. Acoustics, Speech and Signal Process. (ICASSP), pp. 8658-8662, Barcelona, Spain, May 4-8, 2020.

T. Tirer and R. Giryes. Image restoration by iterative denoising and backward projections. IEEE Trans. Image Process., 28(3):1220-1234, Mar. 2019.

S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg. Plug-and-play priors for model based reconstruction. In Proc. IEEE Global Conf. Signal Process. and Inf. Process. (GlobalSIP), pp. 945-948, Austin, TX, USA, December 3-5, 2013.

K. Wei, A. Aviles-Rivero, J. Liang, Y. Fu, C.-B. Schönlieb, and H. Huang. Tuning-free plug-and-play proximal algorithm for inverse imaging problems. In Proc. 37th Int. Conf. Machine Learning (ICML), 2020.

E. Williams, J. Moore, S. W. Li, G. Rustici, A. Tarkowska, A. Chessel, S. Leo, B. Antal, R. K. Ferguson, U. Sarkans, et al. Image Data Resource: a bioimage data integration and publication platform. Nature Methods, 14(8):775-781, 2017.

S. J. Wright. Coordinate descent algorithms. Math. Program., 151(1):3-34, June 2015.

Z. Wu, Y. Sun, A. Matlock, J. Liu, L. Tian, and U. S. Kamilov. SIMBA: Scalable inversion in optical tomography using deep denoising priors. IEEE Journal of Selected Topics in Signal Processing, pp. 1-1, 2020.

X. Xu, Y. Sun, J. Liu, B. Wohlberg, and U. S. Kamilov. Provable convergence of plug-and-play priors with MMSE denoisers. IEEE Signal Processing Letters, 27:1280-1284, 2020.

K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process., 26(7):3142-3155, July 2017a.

K. Zhang, W. Zuo, S. Gu, and L. Zhang. Learning deep CNN denoiser prior for image restoration. In Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 3929-3938, Honolulu, USA, July 21-26, 2017b.

K. Zhang, W. Zuo, and L. Zhang. Deep plug-and-play super-resolution for arbitrary blur kernels. In Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1671-1681, Long Beach, CA, USA, June 2019.

Z. Zhou, P. Mertikopoulos, N. Bambos, P. Glynn, Y. Ye, L. Li, and F. Li. Distributed asynchronous optimization with unbounded delays: How slow can you go? In Proc. 35th Int. Conf. Machine Learning (ICML), pp. 5970-5979, Stockholm, Sweden, 10-15 Jul 2018.
asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f06bb79778a5d18090c6463913f28dad8ff37dd3731e09a30ca2a0ea2428b95
size 375327

asyncredaprovablyconvergentasynchronousblockparallelstochasticmethodusingdeepdenoisingpriors/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c0479d47d702b3d43860d9f05fef53676e19ab57dd2b10bd9f129a614b4dbbf0
size 535419

autoregressiveentityretrieval/3a88b05e-473a-4de5-8481-47a0e538d6d4_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9aee726fef315ce89011e1dc5dbdeffd1a1b0fd8d2d9ef8ee95de1728fc7b1b5
size 113795

autoregressiveentityretrieval/3a88b05e-473a-4de5-8481-47a0e538d6d4_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:562111dab6310aa54b18286c361d04fb37cdc51dbe5d002af0609d2e5762acb8
size 138365

autoregressiveentityretrieval/3a88b05e-473a-4de5-8481-47a0e538d6d4_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bedf6b5e2aa32154152a28cbd8927533ecbeed6cc7c51e8da9ef7c3fceac7840
size 875323

autoregressiveentityretrieval/full.md
ADDED
@@ -0,0 +1,355 @@
# AUTOREGRESSIVE ENTITY RETRIEVAL
Nicola De Cao$^{1,2*}$, Gautier Izacard$^{2,3,4}$, Sebastian Riedel$^{2,5}$, Fabio Petroni$^{2}$

$^{1}$University of Amsterdam, $^{2}$Facebook AI Research
$^{3}$ENS, PSL University, $^{4}$Inria, $^{5}$University College London

nicola.decao@gmail.com, {gizacard,sriedel,fabiopetroni}@fb.com
# ABSTRACT
Entities are at the center of how we represent and aggregate knowledge. For instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. One way to understand current approaches is as classifiers among atomic labels, one for each entity. Their weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. This approach leads to several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions between the two; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion and conditioned on the context. This enables us to mitigate the aforementioned technical issues since: (i) the autoregressive formulation allows us to directly capture relations between context and entity name, effectively cross encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the exact softmax loss can be efficiently computed without the need to subsample negative data. We show the efficacy of the approach, experimenting with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their unambiguous name. Code and pre-trained models at https://github.com/facebookresearch/GENRE.
# 1 INTRODUCTION
The ability to retrieve the correct entity from large Knowledge Bases (KBs) given a textual input is a fundamental building block for several applications (Ferrucci, 2012; Slawski, 2015; Yang et al., 2018a). Most commercial recommendation systems, for instance, include in their pipelines components that detect and disambiguate entity mentions in open text, in order to isolate relevant concepts from non-meaningful data (Slawski, 2015; Yang et al., 2018a). Other examples are chatbots and question answering systems, which are often equipped with retrieval components to surface specific KB entries (e.g., Wikipedia articles) to find knowledge for sustaining a conversation or answering a question (Ferrucci, 2012; Chen et al., 2017; Lewis et al., 2020b; Roller et al., 2020).
Although there has been extensive previous work on entity retrieval (e.g. Hoffart et al., 2011; Piccinno & Ferragina, 2014; Huang et al., 2015; Le & Titov, 2018; Logeswaran et al., 2019; Broscheit, 2019; Wu et al., 2020, to name just a few) there is a common design choice to most current solutions: entities are associated with a unique atomic label and the retrieval problem can be interpreted as multi-class classification across these labels. The match between input and label is calculated through a bi-encoder (Wu et al., 2020; Karpukhin et al., 2020): a dot product between dense vector encodings of the input and the entity's meta information (such as title and description).

|
| 21 |
+
Figure 1: Examples of entities correctly retrieved from GENRE (we show only the top-3 rank). On the top three entity disambiguation instances and on the bottom three document retrieval instances, two for open-domain question answering and one for fact checking. All of them are cast as sequence-to-sequence problems while inference is done using constrained beam search. Gold entities in **bold**. Sub-captions indicate the type of interaction between the input context and the entity names required.
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
|
| 27 |
+
Critically, this formulation enables sub-linear search using modern maximum-inner-product-search libraries (Johnson et al., 2019) and hence supports retrieving from large entity databases.
Unfortunately, the classifier approach to entity retrieval also has several shortcomings. First, unless a costly cross-encoder is used for re-ranking (Wu et al., 2020), the dot-product can miss fine-grained interactions between input and entity meta information (Humeau et al., 2020). Second, storing dense vectors for the whole KB requires a large memory footprint, especially in real-world scenarios (i.e., $\sim 24\mathrm{GB}$ to store 1024-dimensional vectors for all of the $\sim 6\mathrm{M}$ Wikipedia pages), and the size linearly grows with the addition of new entities. Third, computing an exact softmax over all entities is very expensive, hence current solutions need to subsample negative data (Logeswaran et al., 2019; Karpukhin et al., 2020) at training time. Tuning an appropriately hard set of negative instances can be challenging and time-consuming. Finally, existing systems can suffer from a cold-start problem since they cannot represent entities about which they have not yet gathered sufficient information, in the form, for instance, of a textual description or a set of relations with the existing entities.
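The $\sim 24\mathrm{GB}$ figure follows from a simple byte count (assuming float32 storage, which is our assumption):

```python
num_entities = 6_000_000   # ~number of Wikipedia pages
dim = 1024                 # embedding dimension
bytes_per_float = 4        # float32

# Total storage for one dense vector per entity, in gigabytes
gb = num_entities * dim * bytes_per_float / 1e9  # ~24.6 GB
```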
The treatment of entity identifiers as atomic labels in a classifier ignores the fact that we often have unambiguous, highly structured and compositional entity names. Wikipedia, for instance, associates unique titles to articles,$^{1}$ which may be the name of the subject or a description of its topic, as well as potential distinctive information for disambiguation$^{2}$ (see Figure 1 for some examples). These entity names often interact with mention contexts in a predictable and regular fashion. For example, entity names are often identical to the mention strings that refer to them (e.g., Fig. 1f). When this is not possible, they might be composed of tokens in the context (e.g., Fig. 1b), include a type specification that can be inferred (e.g., Fig. 1a), be the translation of the mention string (e.g., Fig. 1c), require 'normalization' such as referring to the correct alias of a mention (e.g., Fig. 1d), or require factual knowledge that might be stored in the parameters of a model (e.g., Fig. 1e). These observations suggest that inputs could be translated into unique entity names, word by word, instead of being classified among a huge set of options.
In this paper, we propose GENRE (for Generative Entity Retrieval), the first entity retriever that exploits a sequence-to-sequence architecture to generate entity names in an autoregressive fashion conditioned on the context. Concretely, GENRE uses a transformer-based architecture, pre-trained with a language modeling objective (i.e., we use BART weights from Lewis et al. (2020a)) and fine-tuned to generate entity names. This architecture has been shown to retain factual knowledge to some extent (Petroni et al., 2019) and language translation skills (Radford et al., 2019), among other things, both desirable properties for an entity retriever. Naturally, the generated output might not always be a valid entity name. To solve this problem, GENRE employs a constrained decoding strategy that forces each generated name to be in a predefined candidate set.
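The candidate-set constraint can be implemented with a prefix trie over tokenized entity names, so that at each decoding step only tokens that extend some valid name are allowed. A minimal sketch (data structure and function names are ours, not GENRE's actual API):

```python
def build_trie(names):
    """Build a prefix trie over tokenized entity names (nested dicts)."""
    trie = {}
    for tokens in names:
        node = trie
        for tok in tokens:
            node = node.setdefault(tok, {})
    return trie

def allowed_next(trie, prefix):
    """Tokens that may follow `prefix` so the output stays a valid name."""
    node = trie
    for tok in prefix:
        node = node.get(tok, {})
    return set(node)

trie = build_trie([["New", "York"], ["New", "Jersey"], ["Newark"]])
allowed_next(trie, ["New"])  # {"York", "Jersey"}
```

During beam search, the scores of all tokens outside `allowed_next` would be masked out before each decoding step.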
The autoregressive formulation allows us to directly capture the aforementioned relations between context and entity name, effectively cross encoding both. Also, the memory footprint required is orders of magnitude smaller than current systems, since the parameters of a sequence-to-sequence model scale linearly with the vocabulary size, not entity count. Moreover, the exact softmax can be computed efficiently for each output token (i.e., all non-gold tokens are considered negative), thereby eliminating the need for negative data downsampling. Finally, our model never accesses any explicit meta-information about the entity beyond their title, hence new entities can be added by simply appending their unambiguous name to the candidate set (e.g., Fig. 1b refers to an entity added after training).
We empirically evaluate the performance of GENRE on more than 20 datasets, spanning three families of tasks: (i) entity disambiguation, using popular datasets and settings (both in and out-of-domain); (ii) end-to-end entity linking, with the GERBIL benchmarking tool (Röder et al., 2018), by using a novel dynamically markup-constrained decoding strategy; (iii) document retrieval, with the recently proposed KILT benchmark (Petroni et al., 2020b) which spans 5 different sub-tasks. Our models achieve state-of-the-art or very competitive results on nearly all datasets, often with substantial improvement (+13.7 precision points on KILT for retrieval on average). Further, we show that compared with recent models, GENRE requires substantially less memory ( $\sim$ 20 times smaller footprint on average). Finally, we demonstrate that our model can be applied in scenarios where the only entity information available is its name.
We organize the paper as follows: in Section 2 we describe our problem formulation. Then, in Section 3 we present GENRE and, finally, in Section 4 we extensively evaluate our method in the aforementioned settings. We will release code and pre-processed data to reproduce our experiments.
# 2 ENTITY RETRIEVAL
We assume a collection of entities $\mathcal{E}$ (e.g., Wikipedia articles) where each entity is an entry in a Knowledge Base (KB) such as Wikipedia. We want to approach the following retrieval problem: given a textual input source $x$ (e.g., a question), a model has to return the most relevant entities from $\mathcal{E}$ with respect to $x$. We assume that each $e \in \mathcal{E}$ is assigned a unique textual representation (i.e., its name): a sequence of tokens $y$ (e.g., Wikipedia pages are identified by their titles).
A particular instance of this problem is Entity Disambiguation (ED) (see Figure 1 for an example) where an input $x$ is annotated with a mention and a system has to select either its corresponding entity from $\mathcal{E}$ , or to predict that there is no corresponding entry in the KB. Another instance is page-level Document Retrieval (DR) where the input $x$ is intended as a query and $\mathcal{E}$ as a collection of documents identified by their unique titles (e.g., Wikipedia articles).
# 3 METHOD
We address the retrieval problem with a sequence-to-sequence model that generates textual entity identifiers (i.e., entity names). Concretely, GENRE ranks each $e \in \mathcal{E}$ by computing a score with an autoregressive formulation: $\text{score}(e|x) = p_{\theta}(y|x) = \prod_{i=1}^{N} p_{\theta}(y_i | y_{<i}, x)$ where $y$ is the sequence of $N$ tokens in the identifier of $e$, and $\theta$ the parameters of the model. We fine-tune the pre-trained BART language model (Lewis et al., 2020a). We train GENRE with a standard seq2seq objective, i.e., maximizing the output sequence likelihood with teacher forcing (Sutskever et al., 2011; 2014), regularized with dropout (Srivastava et al., 2014) and label smoothing (Szegedy et al., 2016). Concretely, we use the objective typically used for neural machine translation (NMT, Wu et al., 2016): maximizing $\log p_{\theta}(y|x)$ with respect to the model's parameters $\theta$, which, due to the factorized formulation, can be calculated exactly. We do not need negative sampling to approximate the loss normalizer.
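The factorized score can be illustrated in a few lines. The following sketch uses a hypothetical toy conditional distribution in place of the BART decoder (the `TOY_LM` probabilities are made up purely for illustration, not values from the paper):

```python
import math

# Toy next-token distributions p(y_i | y_<i, x) for one fixed input x.
# In GENRE these come from a fine-tuned BART decoder; here they are
# hypothetical numbers that only serve to illustrate the scoring formula.
TOY_LM = {
    (): {"Leonardo": 0.6, "Mona": 0.4},
    ("Leonardo",): {"da": 0.9, "<eos>": 0.1},
    ("Leonardo", "da"): {"Vinci": 1.0},
    ("Leonardo", "da", "Vinci"): {"<eos>": 1.0},
    ("Mona",): {"Lisa": 1.0},
    ("Mona", "Lisa"): {"<eos>": 1.0},
}

def score(tokens):
    """log p(y|x) = sum_i log p(y_i | y_<i, x), computed exactly."""
    logp = 0.0
    for i, tok in enumerate(tokens):
        logp += math.log(TOY_LM[tuple(tokens[:i])][tok])
    return logp

# Ranking candidate entity names by their sequence log-likelihood.
candidates = [["Leonardo", "da", "Vinci", "<eos>"], ["Mona", "Lisa", "<eos>"]]
ranked = sorted(candidates, key=score, reverse=True)
```

Because every non-gold token simply contributes zero probability mass at its step, the per-token softmax is exact and no negative sampling is involved.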
Figure 2: Example of dynamically constrained Markup decoding for entity linking using "In 1503, Leonardo began painting the Mona Lisa." as input. There are 3 cases: when we are outside a mention/entity (a), inside a mention generation step (b), and inside an entity link generation step (c). The model is supposed to output the input source annotating mentions and pointing them to the respective entities: "In 1503, [Leonardo](Leonardo da Vinci) began painting the [Mona Lisa](Mona Lisa)".

(a) Outside: we can either continue to generate the input or start a new mention.

(b) Inside a mention: we can either continue to generate the input or end the current mention.

(c) Inside an entity link: we can either generate from the entities prefix trie or close if the generated prefix is a valid entity.
# 3.1 INFERENCE WITH CONSTRAINED BEAM SEARCH
Naturally, at test time, we could compute a score for every element in $\mathcal{E}$ and then sort them. Unfortunately, this might be prohibitively expensive when $\mathcal{E}$ is very large (e.g., Wikipedia has $\sim$6M entities). Hence, we exploit Beam Search (BS, Sutskever et al., 2014), an established approximate decoding strategy, to efficiently navigate the search space. Instead of explicitly scoring all entities in $\mathcal{E}$, we search for the top-$k$ entities in $\mathcal{E}$ by decoding from our model using BS with $k$ beams. Note that using BS implies that the time cost of our retriever does not depend on the size of $\mathcal{E}$, but only on the number of beams and the average length of entity representations, as we do autoregressive generation. The average length of entity representations is small (e.g., Wikipedia titles have 6 BPE tokens on average) and we follow standard NMT settings where $k$ is small (e.g., 10).
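A minimal beam search sketch makes the cost argument concrete: each step expands at most $k$ hypotheses, so work grows with the beam size and sequence length, never with $|\mathcal{E}|$. Here `next_token_probs` is a hypothetical stand-in for querying the decoder (the numbers are invented for illustration):

```python
import math

def next_token_probs(prefix):
    # Hypothetical stand-in for p(. | y_<i, x); a real system would
    # query the seq2seq decoder here.
    table = {
        (): {"Mona": 0.5, "Leonardo": 0.5},
        ("Mona",): {"Lisa": 0.8, "Vale": 0.2},
        ("Mona", "Lisa"): {"<eos>": 1.0},
        ("Mona", "Vale"): {"<eos>": 1.0},
        ("Leonardo",): {"da": 1.0},
        ("Leonardo", "da"): {"Vinci": 1.0},
        ("Leonardo", "da", "Vinci"): {"<eos>": 1.0},
    }
    return table[prefix]

def beam_search(k, max_len=8):
    beams = [((), 0.0)]          # (prefix, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        expansions = []
        for prefix, lp in beams:
            for tok, p in next_token_probs(prefix).items():
                cand = (prefix + (tok,), lp + math.log(p))
                (finished if tok == "<eos>" else expansions).append(cand)
        # Keep only the k best open hypotheses: the per-step cost depends
        # on k and the sequence length, not on the number of entities.
        beams = sorted(expansions, key=lambda c: c[1], reverse=True)[:k]
        if not beams:
            break
    return sorted(finished, key=lambda c: c[1], reverse=True)
```

With $k=2$ the search recovers the highest-likelihood complete name without ever enumerating the full candidate set.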
Since we want to output only entities from $\mathcal{E}$, we cannot use traditional BS while decoding: allowing any token from the vocabulary at every decoding step might lead the model to generate output strings that are not valid identifiers. Hence, we resort to Constrained BS, forcing the model to decode only valid entity identifiers. BS only considers one step ahead during decoding, so we can only constrain the generation of the single next token conditioned on the previous ones. Thus, we define our constraint in terms of a prefix tree $\mathcal{T}$ (aka trie) (Cormen et al., 2009) where nodes are annotated with tokens from the vocabulary. For each node $t\in \mathcal{T}$, its children indicate all the allowed continuations of the prefix defined by traversing the trie from the root to $t$.
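The trie can be sketched as nested dictionaries mapping each prefix to its allowed continuations. This is a simplified word-level sketch (the real trie is built over the BPE tokens of $\sim$6M titles):

```python
def build_trie(sequences):
    """Build a prefix tree from token sequences (entity names)."""
    root = {}
    for seq in sequences:
        node = root
        for tok in seq:
            node = node.setdefault(tok, {})
        node["<eos>"] = {}          # mark a complete entity name
    return root

def allowed_tokens(trie, prefix):
    """Valid continuations of `prefix`: the children of the node reached
    by walking from the root. These are the only tokens whose
    log-probabilities are NOT masked at this decoding step."""
    node = trie
    for tok in prefix:
        node = node[tok]
    return sorted(node.keys())

entities = [["Mona", "Lisa"], ["Mona", "Vale"], ["Leonardo", "da", "Vinci"]]
trie = build_trie(entities)
```

At each beam search step the decoder's distribution is masked down to `allowed_tokens(trie, prefix)`, so every completed hypothesis is guaranteed to be a valid identifier.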
See Figure 9 in Appendix C for an example of a trie. When the number of allowed outputs is tractable (e.g., generating a Wikipedia title among $\sim 6\mathrm{M}$), the trie is relatively small and can be pre-computed and stored in memory (e.g., constraining on Wikipedia titles using the BART tokenizer produces a trie with $\sim 6\mathrm{M}$ leaves and $\sim 17\mathrm{M}$ internal nodes that occupies $\sim 600\mathrm{MB}$ of disk space). We apply the constraints by masking the log-probabilities of the invalid tokens, not their logits (i.e., we do not re-normalize the probability over the vocabulary).<sup>3</sup>
# 3.2 AUTOREGRESSIVE END-TO-END ENTITY LINKING
We additionally extend our autoregressive framework to address end-to-end Entity Linking (EL), where, given a document, a system has to both detect entity mentions and link them to their respective KB entities. In this setting, we train the model to predict the source input again, but with annotated spans. We use a Markup annotation where span boundaries are flagged with special tokens and accompanied by their corresponding entity identifiers.
Differently from a setting where the output space is relatively small (e.g., a pre-defined set $\mathcal{E}$ ), the space of annotated outputs is exponentially large. Hence, it is intractable to pre-compute a trie for decoding, and we compute it dynamically instead. Figure 2 shows an example. At each generation step, the decoder is either generating a mention span, generating a link for a mention, or continuing to copy from the input source. When outside a mention/entity, the decoder has only two options: (i) continue by copying the next token from the input source, or (ii) generate the start-of-mention token (i.e., "["), which makes the decoder enter the mention-generating phase. While generating a mention, the decoder either continues with the next token in the input source or generates the end-of-mention token (i.e., "]"), which makes the decoder enter the entity-generating phase. Finally, when generating an entity, the decoder employs the entities trie so that it can only output a valid entity identifier, as in the Constrained Beam Search explained above.
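The three decoding states can be sketched as a function that, given the output generated so far, returns the allowed next tokens. This is a simplified word-level sketch: `[`, `]`, `(`, `)` stand in for the special boundary tokens, and `trie_next` for the entities-trie lookup (all names here are illustrative, not the paper's implementation):

```python
def allowed_next(generated, source, trie_next):
    """Dynamic markup constraint. `generated`: output tokens so far;
    `source`: input tokens; `trie_next(prefix)`: allowed continuations
    of an entity-name prefix in the entities trie."""
    if generated and generated[-1] == "]":     # a closed mention must be linked
        return ["("]
    if generated.count("(") > generated.count(")"):
        # (c) Inside an entity link: constrained by the entities trie.
        start = len(generated) - 1 - generated[::-1].index("(")
        return trie_next(tuple(generated[start + 1:]))
    # Count how many source tokens were copied so far (markers and the
    # entity names inside "(...)" are extra output, not copies).
    copied, in_link = 0, False
    for tok in generated:
        if tok == "(":
            in_link = True
        elif tok == ")":
            in_link = False
        elif tok not in ("[", "]") and not in_link:
            copied += 1
    nxt = [source[copied]] if copied < len(source) else []
    if generated.count("[") > generated.count("]"):
        return nxt + ["]"]                     # (b) inside a mention
    return nxt + ["["]                         # (a) outside a mention

source = ["In", "1503", "Leonardo", "began"]

def trie_next(prefix):
    # Hypothetical mini entities-trie holding only "Leonardo da Vinci".
    table = {(): ["Leonardo"], ("Leonardo",): ["da"],
             ("Leonardo", "da"): ["Vinci"], ("Leonardo", "da", "Vinci"): [")"]}
    return table[prefix]
```

For instance, after emitting `In 1503 [ Leonardo` the only options are copying `began` or closing the mention with `]`, matching case (b) in Figure 2.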
# 4 EXPERIMENTS
We extensively evaluate GENRE on more than 20 datasets across 3 tasks: Entity Disambiguation, end-to-end Entity Linking (EL), and page-level Document Retrieval. We describe the experimental settings in Section 4.1 and discuss results in Section 4.2. All experiments are in English.
# 4.1 SETTINGS
Entity Disambiguation (ED) We reproduce the setting of Le & Titov (2018) using the same candidate sets and in-domain and out-of-domain datasets, evaluating with InKB micro-$F_{1}$. We train GENRE by feeding it each document with a single mention flagged by two special start and end tokens; the target output is the textual representation of the corresponding entity. At test time, we decode using constrained beam search with a trie obtained from the provided candidate set (i.e., a subset of $\mathcal{E}$). As large generative models benefit from large amounts of data, we first pre-train GENRE on the BLINK data (Wu et al., 2020), i.e., 9M unique document-mention-entity triples from Wikipedia. Then, for the in-domain scenario, we fine-tune on the AIDA-CoNLL dataset (Hoffart et al., 2011). For the out-of-domain scenario, we evaluate on five test sets: MSNBC, AQUAINT, ACE2004, WNED-CWEB (CWEB) and WNED-WIKI (WIKI) (Gabrilovich et al., 2013; Guo & Barbosa, 2018). More task details and hyperparameter settings are reported in Appendix A.1.
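Constructing an ED training instance amounts to plain string formatting. In the sketch below the marker strings `[START_ENT]`/`[END_ENT]` are illustrative placeholders, not necessarily the exact special tokens used in the paper:

```python
def flag_mention(tokens, start, end,
                 open_tok="[START_ENT]", close_tok="[END_ENT]"):
    """Wrap the mention span tokens[start:end] with boundary markers.
    The training target is then simply the entity's name string."""
    return " ".join(tokens[:start] + [open_tok] + tokens[start:end]
                    + [close_tok] + tokens[end:])

src = flag_mention("In 1503 Leonardo began painting".split(), 2, 3)
# src == "In 1503 [START_ENT] Leonardo [END_ENT] began painting"
target = "Leonardo da Vinci"
```

The model thus sees the full document context around the flagged mention and learns to emit the disambiguated entity name as its output sequence.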
End-to-End Entity Linking (EL) For EL, we reproduce the setting of Kolitsas et al. (2018) using the same in-domain and out-of-domain datasets, evaluating InKB micro-$F_{1}$ on the GERBIL benchmark platform (Röder et al., 2018). Similarly to the ED setting, we first pre-train our model on all abstract sections from Wikipedia<sup>4</sup>, enriched with a string-matching heuristic to resolve co-references (i.e., if a string matches another hyperlink exactly, we also add it to the dataset as a mention/entity pair). Then, for the in-domain scenario, we fine-tune on the AIDA-CoNLL dataset. We evaluate on seven out-of-domain test sets: MSNBC, Derczynski (Der) (Derczynski et al., 2015), KORE 50 (K50) (Hoffart et al., 2012), N3-Reuters-128 (R128), N3-RSS-500 (R500) (Röder et al., 2014), and OKE challenge 2015 and 2016 (OKE15 and OKE16) (Nuzzolese et al., 2015). More task details and hyperparameter settings are reported in Appendix A.2.
Page-level Document Retrieval (DR) For this setting, we test GENRE on all the KILT benchmark tasks (Petroni et al., 2020b). Here, the whole of Wikipedia is used as the candidate set and we evaluate using R-precision (Beitzel et al., 2009). KILT consists of five tasks that use the same Wikipedia dump as a knowledge source: fact checking with FEVER (Thorne et al., 2018); open-domain question answering with Natural Questions (Kwiatkowski et al., 2019), HotpotQA (Yang et al., 2018b), TriviaQA (Joshi et al., 2017), and ELI5 (Fan et al., 2019); slot filling with T-REx (Elsahar et al., 2018) and Zero Shot RE (Levy et al., 2017); entity disambiguation on AIDA CoNLL-YAGO, WNED-WIKI and WNED-CWEB; and dialogue with Wizard of Wikipedia (Dinan et al., 2019). We train GENRE on BLINK and all KILT data simultaneously with a single model. More details on the hyperparameter settings are reported in Appendix A.3.
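R-precision itself is a few lines: for a query with $R$ gold-relevant pages, it is the fraction of those pages that appear in the top-$R$ retrieved results. A minimal sketch (the example lists are invented):

```python
def r_precision(retrieved, relevant):
    """Fraction of the R gold-relevant items found in the top-R retrieved."""
    r = len(relevant)
    return len(set(retrieved[:r]) & set(relevant)) / r

# Toy query with 2 relevant pages; one of them appears in the top 2.
score = r_precision(["Mona Lisa", "Louvre", "Leonardo da Vinci"],
                    ["Mona Lisa", "Leonardo da Vinci"])
```

The metric is then averaged over all queries of a dataset.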
# 4.2 RESULTS
Overall, GENRE achieves very competitive results in all three settings and is the best performing system on average across all of them. See Appendix C for examples of inputs, ground truth, and model predictions for all three tasks. In the following, we discuss how GENRE compares to SOTA systems and present quantitative analyses of its memory footprint, of how it exploits the structure of the entity name space, and of how it behaves in a cold-start scenario where new unseen entities are added to the KB (with the descriptions of those entities unobserved).

<table><tr><td rowspan="2">Method</td><td colspan="2">In-domain</td><td colspan="5">Out-of-domain</td></tr><tr><td>AIDA</td><td>MSNBC</td><td>AQUAINT</td><td>ACE2004</td><td>CWEB</td><td>WIKI*</td><td>Avg.</td></tr><tr><td>Ganea & Hofmann (2017)</td><td>92.2</td><td>93.7</td><td>88.5</td><td>88.5</td><td>77.9</td><td>77.5</td><td>86.4</td></tr><tr><td>Guo & Barbosa (2018)</td><td>89</td><td>92</td><td>87</td><td>88</td><td>77</td><td>84.5</td><td>86.2</td></tr><tr><td>Yang et al. (2018a)</td><td>95.9</td><td>92.6</td><td>89.9</td><td>88.5</td><td>81.8</td><td>79.2</td><td>88.0</td></tr><tr><td>Shahbazi et al. (2019)</td><td>93.5</td><td>92.3</td><td>90.1</td><td>88.7</td><td>78.4</td><td>79.8</td><td>87.1</td></tr><tr><td>Yang et al. (2019)</td><td>93.7</td><td>93.8</td><td>88.2</td><td>90.1</td><td>75.6</td><td>78.8</td><td>86.7</td></tr><tr><td>Le & Titov (2019)</td><td>89.6</td><td>92.2</td><td>90.7</td><td>88.1</td><td>78.2</td><td>81.7</td><td>86.8</td></tr><tr><td>Fang et al. (2019)</td><td>94.3</td><td>92.8</td><td>87.5</td><td>91.2</td><td>78.5</td><td>82.8</td><td>87.9</td></tr><tr><td>BLINK w/o candidate set**</td><td>79.6</td><td>80.0</td><td>80.3</td><td>82.5</td><td>64.2</td><td>75.5</td><td>77.0</td></tr><tr><td>GENRE</td><td>93.3</td><td>94.3</td><td>89.9</td><td>90.1</td><td>77.3</td><td>87.4</td><td>88.8</td></tr><tr><td colspan="8">Ablations</td></tr><tr><td>GENRE only AIDA data</td><td>88.6</td><td>88.1</td><td>77.1</td><td>82.3</td><td>71.9</td><td>71.7</td><td>80.0</td></tr><tr><td>GENRE only BLINK data</td><td>89.3</td><td>93.3</td><td>90.9</td><td>91.1</td><td>76.0</td><td>87.9</td><td>88.1</td></tr><tr><td>GENRE w/o candidate set</td><td>91.2</td><td>86.9</td><td>87.2</td><td>87.5</td><td>71.1</td><td>86.4</td><td>85.1</td></tr><tr><td>GENRE w/o constraints</td><td>86.4</td><td>80.0</td><td>81.7</td><td>82.1</td><td>66.0</td><td>81.1</td><td>79.6</td></tr></table>

Table 1: Micro $F_{1}$ (InKB) on the in-domain test set and five out-of-domain test sets for the named entity disambiguation task. Bold indicates best model and underline indicates second best (not for ablations). *WIKI is usually considered out-of-domain, but note that all methods use part of Wikipedia for training. **Results taken from https://github.com/facebookresearch/BLINK and normalized to accommodate entities not in the KB.
Comparing GENRE to SOTA systems In ED, the difference in average $F_{1}$ score between GENRE and the second best performing system is small (i.e., +0.8); however, ED is an established task with more than a decade of research benchmarking on these datasets. Indeed, all systems reported in Table 1 achieve high and similar results, even those dating back three years.
The improvements on EL are more evident. GENRE is the best in-domain system on AIDA, while also performing remarkably well in the out-of-domain setting (e.g., $+13$ $F_{1}$ points on Derczynski and $+4.7$ on KORE50). Notably, our model performs poorly on two datasets (OKE15 and OKE16). However, these datasets are annotated with coreference (pronouns and common nouns are linked to entities), while our model was not trained for that. Conversely, most of the other systems have a mention detection component in their pipelines that can be trained or biased to also handle these cases. We consider additional training and evaluation on coreference beyond the scope of this work and leave it for future work.
On page-level DR, the superiority of GENRE is remarkable. Our model is the best performing system across all 5 KILT tasks and all datasets except Natural Questions, where it is the second best. We achieve +13.7 R-precision points on average with respect to the best performing baseline. In Table 3 we compare GENRE against all methods reported in the public leaderboard: DPR (Karpukhin et al., 2020), DPR+BERT (Devlin et al., 2019), DPR+BART, tf-idf (Leskovec et al., 2014), RAG (Lewis et al., 2020b), and BLINK+flair (Wu et al., 2020; Akbik et al., 2019). No model except ours was trained on the entire KILT dataset at the same time: a separate RAG model was trained for every single task, as was DPR+BERT. Note that this gives RAG and DPR+BERT the advantage of specializing on single tasks, whereas we have a single model that solves all of them and still performs better. We speculate that multi-task training could have helped, since all tasks share the common objective of retrieving entities. Neither DPR nor BLINK+flair was trained specifically on KILT; however, DPR was trained on several QA datasets, which include Natural Questions and TriviaQA. In Appendix B we report additional results where we do not pre-train or fine-tune our models for the ED and retrieval settings in Table 1 and 8, respectively. When we train GENRE only on the DPR or BLINK data, our model still outperforms them.
<table><tr><td rowspan="2">Method</td><td colspan="2">In-domain</td><td colspan="7">Out-of-domain</td></tr><tr><td>AIDA</td><td>MSNBC</td><td>Der</td><td>K50</td><td>R128</td><td>R500</td><td>OKE15*</td><td>OKE16*</td><td>Avg.</td></tr><tr><td>Hoffart et al. (2011)</td><td>72.8</td><td>65.1</td><td>32.6</td><td>55.4</td><td>46.4</td><td>42.4</td><td>63.1</td><td>0.0</td><td>47.2</td></tr><tr><td>Steinmetz & Sack (2013)</td><td>42.3</td><td>30.9</td><td>26.5</td><td>46.8</td><td>18.1</td><td>20.5</td><td>46.2</td><td>46.4</td><td>34.7</td></tr><tr><td>Moro et al. (2014)</td><td>48.5</td><td>39.7</td><td>29.8</td><td>55.9</td><td>23.0</td><td>29.1</td><td>41.9</td><td>37.7</td><td>38.2</td></tr><tr><td>Kolitsas et al. (2018)</td><td>82.4</td><td>72.4</td><td>34.1</td><td>35.2</td><td>50.3</td><td>38.2</td><td>61.9</td><td>52.7</td><td>53.4</td></tr><tr><td>Broscheit (2019)</td><td>79.3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td></td></tr><tr><td>Martins et al. (2019)</td><td>81.9</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td></td></tr><tr><td>van Hulst et al. (2020)†</td><td>80.5</td><td>72.4</td><td>41.1</td><td>50.7</td><td>49.9</td><td>35.0</td><td>63.1</td><td>58.3</td><td>56.4</td></tr><tr><td>GENRE</td><td>83.7</td><td>73.7</td><td>54.1</td><td>60.7</td><td>46.7</td><td>40.3</td><td>56.1</td><td>50.0</td><td>58.2</td></tr></table>
Table 2: Micro $F_{1}$ (InKB) on the in-domain test set and seven out-of-domain test sets for the entity linking task. Bold indicates best model and underline indicates second best. *Annotated with coreference (note that we do not train/evaluate our model to link pronouns and common nouns). †Results from the Wikipedia 2019 setting as opposed to the 2014 setting (older dump and fewer entities).
Memory Footprint GENRE not only performs better than other SOTA models on DR but also has a significantly smaller memory footprint (disk space). In Table 4 we compare the number of model/index parameters against DPR, RAG, and BLINK. GENRE uses orders of magnitude fewer parameters (millions instead of billions) to store the entity index because it only needs a prefix tree of the entity names, as opposed to a dense vector for each entity. Concretely, GENRE occupies 14 times less memory than BLINK and 34 times less memory than DPR.
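The gap follows from back-of-the-envelope arithmetic. The sketch below assumes illustrative numbers: a dense retriever stores one $d$-dimensional vector per entity (1024 is a plausible encoder dimensionality, chosen here only because 6M × 1024 lands near the ~6B index parameters reported for BLINK in Table 4), while the trie stores roughly one token id per node:

```python
# Dense index: ~6M entities, one 1024-dim float vector each
# (illustrative dimensionality for a BLINK/DPR-style bi-encoder).
dense_params = 6_000_000 * 1024      # ~6.1B index parameters

# GENRE's trie over Wikipedia titles: ~17M internal nodes,
# each holding a single token id.
trie_params = 17_000_000             # ~17M index parameters

# The trie needs hundreds of times fewer index parameters.
ratio = dense_params / trie_params
```

Under these assumptions the dense index carries over 300 times more parameters than the trie, which is why the index shrinks from billions to millions of entries.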
Exploiting the Structured Name Space We investigated some properties of GENRE by comparing two variants of our model to BLINK on the ED task (using the WNED-KILT validation set): one trained to generate entity names and another trained to generate numerical identifiers (IDs). All models are trained on the same data and we report results in Table 5. When there is an exact match between a mention and its entity name, both BLINK and GENRE almost always make an accurate prediction. The case of partial and no match is different: GENRE's performance is much higher, suggesting that our model uses the context more effectively, as the autoregressive formulation allows us to cross-encode mention context and entity candidates, directly capturing fine-grained interactions between the two. Moreover, when we switch to predicting IDs, the performance drops drastically (-20.3 points on average), indicating that it is important that entity names are meaningful, structured, and compositional (as they are in Wikipedia), as opposed to atomic IDs. Surprisingly, when there is no overlap between a mention-entity pair, performance is still relatively high with IDs. This suggests that the model is good at memorizing and recalling identifiers, even numeric ones.
Ablation study Here we discuss an ablation study on the entity disambiguation task (see Table 1); due to space limitations, an ablation study on document retrieval is presented in Appendix B.2. In Table 1, GENRE only AIDA/BLINK data indicates the ablation in which we train on only one of the two datasets (i.e., only fine-tuning). GENRE (full) is used with constrained decoding (see Section 3) and in combination with a candidate set (as provided by Le & Titov, 2018). GENRE w/o candidate set ablates the provided (and small) candidate set, using all entities in the KB (in our case Wikipedia) as candidates. GENRE w/o constraints ablates constrained decoding, which implies both no use of the provided candidate set and unconstrained generation (i.e., the model may generate entity names that are not in the KB). Overall, using constrained generation and exploiting the candidate sets proved useful. Training only on AIDA data is insufficient to reach high $F_{1}$ (but AIDA is quite small compared to the 9M datapoints of the BLINK data).
Entity frequency The performance of a model naturally depends on how many times entities appear in the training data. We show the distribution of mention-entity pair frequency in Figure 3. Most of the pairs appear in Wikipedia (10,931 / 13,354) while 2,423 do not (first bin). The average accuracy is $82.5\%$ and, noticeably, it is higher for mention-entity pairs that are more frequent (right side of the plot). The accuracy for pairs that do not appear in Wikipedia is lower than the average, suggesting that those are harder cases (the very end of the tail of the distribution); still, the degradation in performance is minimal, indicating that our model is good at predicting rare entities.
<table><tr><td rowspan="2">Model</td><td rowspan="2">Fact Check. FEV</td><td colspan="3">Entity Disambiguation</td><td colspan="2">Slot Filling</td><td colspan="4">Open Domain QA</td><td rowspan="2">Dial. WoW</td><td rowspan="2">Avg.</td></tr><tr><td>AY2</td><td>WnWi</td><td>WnCw</td><td>T-REx</td><td>zsRE</td><td>NQ</td><td>HoPo</td><td>TQA</td><td>ELI5</td></tr><tr><td>DPR + BERT</td><td>72.9</td><td>-</td><td>-</td><td>-</td><td>-</td><td>40.1</td><td>60.7</td><td>25.0</td><td>43.4</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DPR</td><td>55.3</td><td>1.8</td><td>0.3</td><td>0.5</td><td>13.3</td><td>28.9</td><td>54.3</td><td>25.0</td><td>44.5</td><td>10.7</td><td>25.5</td><td>23.6</td></tr><tr><td>tfidf</td><td>50.9</td><td>3.7</td><td>0.24</td><td>2.1</td><td>44.7</td><td>60.8</td><td>28.1</td><td>34.1</td><td>46.4</td><td>13.7</td><td>49.0</td><td>30.5</td></tr><tr><td>DPR + BART</td><td>55.3</td><td>75.5</td><td>45.2</td><td>46.9</td><td>13.3</td><td>28.9</td><td>54.3</td><td>25.0</td><td>44.4</td><td>10.7</td><td>25.4</td><td>38.6</td></tr><tr><td>RAG</td><td>61.9</td><td>72.6</td><td>48.1</td><td>47.6</td><td>28.7</td><td>53.7</td><td>59.5</td><td>30.6</td><td>48.7</td><td>11.0</td><td>57.8</td><td>47.3</td></tr><tr><td>BLINK + flair</td><td>63.7</td><td>81.5</td><td>80.2</td><td>68.8</td><td>59.6</td><td>78.8</td><td>24.5</td><td>46.1</td><td>65.6</td><td>9.3</td><td>38.2</td><td>56.0</td></tr><tr><td>GENRE</td><td>83.6</td><td>89.9</td><td>87.4</td><td>71.2</td><td>79.4</td><td>95.8</td><td>60.3</td><td>51.3</td><td>69.2</td><td>15.8</td><td>62.9</td><td>69.7</td></tr></table>
Table 3: R-Precision for page-level retrieval on KILT test data. Bold indicates the best model and underline indicates the second best. For our model, we indicated what datasets we used for training.
<table><tr><td>Model</td><td>Memory</td><td>Param.</td><td>Index</td></tr><tr><td>DPR</td><td>70.9GB</td><td>220M</td><td>15B</td></tr><tr><td>RAG</td><td>40.4GB</td><td>626M</td><td>15B</td></tr><tr><td>BLINK</td><td>30.1GB</td><td>680M</td><td>6B</td></tr><tr><td>GENRE</td><td>2.1GB</td><td>406M</td><td>17M</td></tr></table>
<table><tr><td>Type (support)</td><td>BLINK</td><td>GENRE</td><td>IDs*</td></tr><tr><td>Exact match (1543)</td><td>97.8</td><td>96.6</td><td>76.0</td></tr><tr><td>Partial match (1531)</td><td>70.7</td><td>86.9</td><td>63.8</td></tr><tr><td>No match (322)</td><td>49.4</td><td>59.9</td><td>55.0</td></tr><tr><td>Total (3396)</td><td>81.0</td><td>88.8</td><td>68.5</td></tr></table>
Table 4: Comparison between retrieval models on memory (disk space) footprint and number of model/index parameters.
Table 5: Different types of matches between mentions and their entity names on the WNED-KILT. *indicates GENRE trained on numerical identifiers.
Cold-start We manually collected 50 Wikipedia articles created in $2020^6$ to simulate a cold-start setting where new entities are added to the KB and the only entity information available is their names. To create ED instances, we resort to hyperlinks pointing to those entities in other Wikipedia articles. 19 out of 50 mentions have an exact match with their respective entity names, and all of them were correctly classified by GENRE. In combination with the results from Table 5, we can conclude that GENRE has a bias towards copying the mention exactly, and this helps on unseen data. GENRE also correctly classified 14/31 of the remaining mentions $(45.2\%)$. This demonstrates the ability of our solution to be applied in scenarios where entity metadata is unavailable (apart from the name), a setting where, to the best of our knowledge, no existing system is capable of operating.
We additionally test how GENRE performs on unseen mention-entity pairs with the WikilinksNED Unseen-Mentions data (Onoe & Durrett, 2020) and report all results in Table 6 in Appendix B.1. Surprisingly, GENRE performs almost the same for seen and unseen entity pairs (64.4 vs 63.2 accuracy). However, in the Onoe & Durrett (2020) setting we cannot guarantee that entity descriptions have not been seen by BART during pre-training (given that its training data contains Wikipedia).
# 5 RELATED WORK
Casting NLP tasks with a structured input or output into sequence-to-sequence problems has been explored for several problems, including semantic parsing (Rongali et al., 2020), semantic role labelling (Daza & Frank, 2018), discourse representation structure parsing (Liu et al., 2018), generation of fluent natural language responses from structured semantic representations (Balakrishnan et al., 2019), and generation and parsing of abstract meaning representations (Konstas et al., 2017). In these works, a structured representation, a tree or a graph for instance, is linearized into a sequence of symbols compatible with a seq2seq architecture. To the best of our knowledge, we are the first to cast entity retrieval as a sequence-to-sequence problem, decoding with an autoregressive formulation during inference.
Related to our constrained generation mechanism, Daza & Frank (2018) and Rongali et al. (2020) use a copying mechanism to limit lexical deviations between the input and output strings. In these tasks, as in our problem, a copying mechanism is natural given the proximity of the input and the output. A different type of constraint, a structural constraint, is used in Balakrishnan et al. (2019) to maintain a valid tree structure. Our constrained beam search encompasses both aspects: a copying mechanism that restrains the vocabulary and a structural constraint to obtain a well-formed annotated output. Beyond these tasks with closely related input and output, mechanisms to guide the output of neural networks have been explored in various settings. Lexically constrained decoding has been used to force the inclusion of pre-specified words for machine translation (Hokamp & Liu, 2017; Post & Vilar, 2018) and image captioning (Anderson et al., 2017). To the best of our knowledge, we are the first to exploit constrained generation for entity disambiguation, end-to-end entity linking, and query-based entity retrieval.



Figure 3: Accuracy per mention-entity pair frequency (in Wikipedia) on the validation sets of all Entity Disambiguation tasks in KILT.
Nogueira et al. (2020) propose to use a sequence-to-sequence model to re-rank documents: given a query and a document, the model is trained to output the word "true" or "false" depending on whether the document is relevant or not. Differently from our approach to entity retrieval, it requires a limited list of candidate documents, obtained with BM25 for instance, in order to be computationally feasible. Massarelli et al. (2019) and Petroni et al. (2020a) explore the idea of using an autoregressive language model as a neural retriever, exploiting the implicit knowledge stored in its parameters to generate relevant sentences given a query. While intriguing, such solutions still lag behind retrievers with explicit knowledge access (e.g., an explicit Wikipedia index). The idea of using a generative model for entity disambiguation was proposed in Petroni et al. (2020b), who trained both BART and T5 in a seq2seq fashion on all KILT tasks (including ED). We expand on that intuition, generalizing to multiple tasks (end-to-end EL and page-level retrieval) as well as introducing constrained decoding for efficient and effective search.
# 6 CONCLUSIONS
In this work, we propose GENRE, a novel paradigm to address entity retrieval: generating entity names autoregressively. Entity names have several properties that can help (even humans) retrieve them, including a compositional structure and a predictable interaction with the context. The autoregressive formulation allows us to directly capture some of these properties, leading to several advantages over current solutions, including an efficient way to cross-encode mention context and entity candidates, a much smaller memory footprint, and the ability to compute an exact softmax without the need to subsample negative data. We empirically show that these characteristics, combined with constrained decoding strategies, lead to state-of-the-art performance on a plethora of entity retrieval datasets, spanning entity disambiguation, end-to-end entity linking, and page-level document retrieval, while resulting in systems with a remarkably contained memory footprint (a space reduction by a factor of twenty on average). We additionally demonstrate that new entities can be effectively added to our system by simply appending their unambiguous names to the candidate set.
# ACKNOWLEDGMENTS
The authors thank Patrick Lewis, Aleksandra Piktus, Michael Schlichtkrull, Ivan Titov, Jean Maillard, Edouard Grave, Sergio De Cao, and Luisa Quarta for helpful discussions and technical support.
# REFERENCES
Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. FLAIR: An easy-to-use framework for state-of-the-art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pp. 54-59, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-4010. URL https://www.aclweb.org/anthology/N19-4010.
Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Guided open vocabulary image captioning with constrained beam search. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 936-945, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1098. URL https://www.aclweb.org/anthology/D17-1098.
Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. Constrained decoding for neural NLG from compositional representations in task-oriented dialogue. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 831-844, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1080. URL https://www.aclweb.org/anthology/P19-1080.
Steven M. Beitzel, Eric C. Jensen, and Ophir Frieder. Average R-Precision, pp. 195-195. Springer US, Boston, MA, 2009. ISBN 978-0-387-39940-9. doi: 10.1007/978-0-387-39940-9_491. URL https://doi.org/10.1007/978-0-387-39940-9_491.
Samuel Broscheit. Investigating entity knowledge in BERT with simple neural end-to-end entity linking. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pp. 677-685, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/K19-1063. URL https://www.aclweb.org/anthology/K19-1063.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1870-1879, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1171. URL https://www.aclweb.org/anthology/P17-1171.
Shuang Chen, Jinpeng Wang, Feng Jiang, and Chin-Yew Lin. Improving entity linking by modeling latent entity type information. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 7529-7537, 2020.
Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press, 2009.
Angel Daza and Anette Frank. A sequence-to-sequence model for semantic role labeling. In Proceedings of The Third Workshop on Representation Learning for NLP, pp. 207-216, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-3027. URL https://www.aclweb.org/anthology/W18-3027.
Leon Derczynski, Diana Maynard, Giuseppe Rizzo, Marieke Van Erp, Genevieve Gorrell, Raphaël Troncy, Johann Petrak, and Kalina Bontcheva. Analysis of named entity recognition and linking for tweets. Information Processing & Management, 51(2):32-49, 2015.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://www.aclweb.org/anthology/N19-1423.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. Wizard of Wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Elena Simperl, and Frederique Laforest. T-REx: A large scale alignment of natural language with knowledge base triples. In LREC, 2018.
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. ELI5: Long form question answering. In Anna Korhonen, David R. Traum, and Lluís Màrquez (eds.), Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 - August 2, 2019, Volume 1: Long Papers, pp. 3558-3567. Association for Computational Linguistics, 2019. doi: 10.18653/v1/p19-1346. URL https://doi.org/10.18653/v1/p19-1346.
Zheng Fang, Yanan Cao, Qian Li, Dongjie Zhang, Zhenyu Zhang, and Yanbing Liu. Joint entity linking with deep reinforcement learning. In The World Wide Web Conference, pp. 438-447. ACM, 2019.
David A Ferrucci. Introduction to "This is Watson". IBM Journal of Research and Development, 56(3.4):1-1, 2012.
Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. FACC1: Freebase annotation of ClueWeb corpora, version 1 (release date 2013-06-26, format version 1, correction level 0), 2013. URL http://lemurproject.org/clueweb09/FACC1/.
Octavian-Eugen Ganea and Thomas Hofmann. Deep joint entity disambiguation with local neural attention. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2619-2629, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1277. URL https://www.aclweb.org/anthology/D17-1277.
Zhaochen Guo and Denilson Barbosa. Robust named entity disambiguation with random walks. Semantic Web, 9(4):459-479, 2018.
Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pp. 782-792, Edinburgh, Scotland, UK., July 2011. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/D11-1072.
Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. KORE: Keyphrase Overlap Relatedness for Entity Disambiguation. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12, pp. 545-554, New York, NY, USA, 2012. Association for Computing Machinery. ISBN 9781450311564. doi: 10.1145/2396761.2396832. URL https://doi.org/10.1145/2396761.2396832.
Chris Hokamp and Qun Liu. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1535-1546, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1141. URL https://www.aclweb.org/anthology/P17-1141.
Hongzhao Huang, Larry Heck, and Heng Ji. Leveraging deep neural networks and knowledge graphs for entity disambiguation. arXiv preprint arXiv:1504.07678, 2015.
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SkxgnnNFvH.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 2019.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601-1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1147. URL https://www.aclweb.org/anthology/P17-1147.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6769-6781, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.550. URL https://www.aclweb.org/anthology/2020.emnlp-main.550.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2014.
Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. End-to-end neural entity linking. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pp. 519-529, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/K18-1050. URL https://www.aclweb.org/anthology/K18-1050.
Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. Neural AMR: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 146-157, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1014. URL https://www.aclweb.org/anthology/P17-1014.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466, March 2019. doi: 10.1162/tacl_a_00276. URL https://www.aclweb.org/anthology/Q19-1026.
Phong Le and Ivan Titov. Improving entity linking by modeling latent relations between mentions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1595-1604, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1148. URL https://www.aclweb.org/anthology/P18-1148.
Phong Le and Ivan Titov. Boosting entity linking performance by leveraging unlabeled documents. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 1935-1945, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1187. URL https://www.aclweb.org/anthology/P19-1187.
Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman. Mining of Massive Datasets. Cambridge University Press, USA, 2nd edition, 2014. ISBN 1107077230.
Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pp. 333-342, Vancouver, Canada, August 2017. Association for Computational Linguistics. doi: 10.18653/v1/K17-1034. URL https://www.aclweb.org/anthology/K17-1034.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7871-7880, Online, July 2020a. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.703. URL https://www.aclweb.org/anthology/2020.acl-main.703.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. arXiv preprint arXiv:2005.11401, 2020b.
Jiangming Liu, Shay B. Cohen, and Mirella Lapata. Discourse representation structure parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 429-439, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1040. URL https://www.aclweb.org/anthology/P18-1040.
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. Zero-shot entity linking by reading entity descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 3449-3460, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1335. URL https://www.aclweb.org/anthology/P19-1335.
Pedro Henrique Martins, Zita Marinho, and André F. T. Martins. Joint learning of named entity recognition and entity linking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pp. 190-196, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-2026. URL https://www.aclweb.org/anthology/P19-2026.
Luca Massarelli, Fabio Petroni, Aleksandra Piktus, Myle Ott, Tim Rocktäschel, Vassilis Plachouras, Fabrizio Silvestri, and Sebastian Riedel. How decoding strategies affect the verifiability of generated text. arXiv preprint arXiv:1911.03587, 2019.
Andrea Moro, Alessandro Raganato, and Roberto Navigli. Entity linking meets word sense disambiguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231-244, 2014. doi: 10.1162/tacl_a_00179. URL https://www.aclweb.org/anthology/Q14-1019.
Isaiah Onando Mulang', Kuldeep Singh, Chaitali Prabhu, Abhishek Nadgeri, Johannes Hoffart, and Jens Lehmann. Evaluating the impact of knowledge graph context on entity disambiguation models. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 2157-2160, 2020.
Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. Document ranking with a pretrained sequence-to-sequence model. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 708-718, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.63. URL https://www.aclweb.org/anthology/2020.findings-emnlp.63.
Andrea Giovanni Nuzzolese, Anna Lisa Gentile, Valentina Presutti, Aldo Gangemi, Dario Garigliotti, and Roberto Navigli. Open knowledge extraction challenge. In Semantic Web Evaluation Challenges, pp. 3-15. Springer, 2015.
Yasumasa Onoe and Greg Durrett. Fine-grained entity typing for domain independent entity linking. In AAAI, pp. 8576-8583, 2020.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pp. 48-53, 2019.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2463-2473, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1250. URL https://www.aclweb.org/anthology/D19-1250.
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. How context affects language models' factual predictions. In Automated Knowledge Base Construction, 2020a. URL https://openreview.net/forum?id=025X0zPfn.
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vassilis Plachouras, Tim Rocktäschel, et al. KILT: a Benchmark for Knowledge Intensive Language Tasks. arXiv preprint arXiv:2009.02252, 2020b.
Francesco Piccinno and Paolo Ferragina. From TagME to WAT: A new entity annotator. In Proceedings of the First International Workshop on Entity Recognition and Disambiguation, ERD '14, pp. 55-62, New York, NY, USA, 2014. Association for Computing Machinery. ISBN 9781450330237. doi: 10.1145/2633211.2634350. URL https://doi.org/10.1145/2633211.2634350.
Matt Post and David Vilar. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1314-1324, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1119. URL https://www.aclweb.org/anthology/N18-1119.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Technical report, OpenAI, 2019.
Jonathan Raiman and Olivier Raiman. DeepType: multilingual entity linking by neural type system evolution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
Michael Röder, Ricardo Usbeck, Sebastian Hellmann, Daniel Gerber, and Andreas Both. $\mathrm{N}^3$ - a collection of datasets for named entity recognition and disambiguation in the NLP Interchange Format. In LREC, pp. 3529-3533, 2014.
Michael Röder, Ricardo Usbeck, and Axel-Cyrille Ngonga Ngomo. GERBIL - benchmarking named entity recognition and linking consistently. Semantic Web, 9(5):605-625, 2018.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. Recipes for building an open-domain chatbot. arXiv preprint arXiv:2004.13637, 2020.
Subendhu Rongali, Luca Soldaini, Emilio Monti, and Wael Hamza. Don't parse, generate! A sequence to sequence architecture for task-oriented semantic parsing. In Proceedings of The Web Conference 2020, WWW '20, pp. 2962-2968. Association for Computing Machinery, 2020.
Hamed Shahbazi, Xiaoli Z Fern, Reza Ghaeini, Rasha Obeidat, and Prasad Tadepalli. Entity-aware ELMo: Learning Contextual Entity Representation for Entity Disambiguation. arXiv preprint arXiv:1908.05762, 2019.
Bill Slawski, Sep 2015. URL https://www.seobythesea.com/2015/09/disambiguate-entities-in-queries-and-pages/.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(56):1929-1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html.
Nadine Steinmetz and Harald Sack. Semantic multimedia information retrieval based on contextual descriptions. In Philipp Cimiano, Oscar Corcho, Valentina Presutti, Laura Hollink, and Sebastian Rudolph (eds.), The Semantic Web: Semantics and Big Data, pp. 382-396, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg. ISBN 978-3-642-38288-8.
Ilya Sutskever, James Martens, and Geoffrey E Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML), pp. 1017-1024, 2011.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818-2826, 2016.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 809-819, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1074. URL https://www.aclweb.org/anthology/N18-1074.
Johannes M. van Hulst, Faegheh Hasibi, Koen Dercksen, Krisztian Balog, and Arjen P. de Vries. REL: An entity linker standing on the shoulders of giants. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '20, pp. 2197-2200, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450380164. doi: 10.1145/3397271.3401416. URL https://doi.org/10.1145/3397271.3401416.
Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 6397-6407, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.519. URL https://www.aclweb.org/anthology/2020.emnlp-main.519.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pp. 250-259, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/K16-1025. URL https://www.aclweb.org/anthology/K16-1025.
Xiyuan Yang, Xiaotao Gu, Sheng Lin, Siliang Tang, Yueting Zhuang, Fei Wu, Zhigang Chen, Guoping Hu, and Xiang Ren. Learning dynamic context augmentation for global entity linking. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 271-281, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1026. URL https://www.aclweb.org/anthology/D19-1026.
Yi Yang, Ozan Irsoy, and Kazi Shefaet Rahman. Collective entity disambiguation with structured gradient tree boosting. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 777-786, New Orleans, Louisiana, June 2018a. Association for Computational Linguistics. doi: 10.18653/v1/N18-1071. URL https://www.aclweb.org/anthology/N18-1071.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369-2380, Brussels, Belgium, October-November 2018b. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://www.aclweb.org/anthology/D18-1259.
# A EXPERIMENTAL DETAILS
We implemented, trained, and evaluated our model using the fairseq library (Ott et al., 2019). For every task, we trained GENRE using Adam (Kingma & Ba, 2014) with a learning rate of $3 \cdot 10^{-5}$, a linear warm-up for 500 steps, and then linear decay. The objective is a sequence-to-sequence categorical cross-entropy loss with label smoothing of 0.1.
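The warm-up/decay schedule amounts to a simple step-dependent learning rate. Below is a minimal sketch, not fairseq's actual scheduler; the 200k-step horizon is taken from the training sections that follow, and `lr_at` is a hypothetical helper name:

```python
def lr_at(step, base_lr=3e-5, warmup=500, total_steps=200_000):
    """Learning rate under linear warm-up followed by linear decay.

    A sketch of the schedule described in the text; fairseq's scheduler
    may differ in details (e.g., the learning rate reached at the end).
    """
    if step < warmup:
        # Linear warm-up from 0 to base_lr over the first `warmup` steps.
        return base_lr * step / warmup
    # Linear decay from base_lr (at `warmup`) down to 0 (at `total_steps`).
    return base_lr * max(total_steps - step, 0) / (total_steps - warmup)
```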
# A.1 NAMED ENTITY DISAMBIGUATION
Setting Given a document $d_{j}$ (e.g., a sentence) containing a set of entity mentions $m_{1},\ldots ,m_{N}$, a system has to assign to each mention $m_{i}$ either a KB entity (i.e., $e_i\in \mathcal{E}$) or predict that there is no corresponding entry in the KB (i.e., $e_i = \mathrm{NIL}$). Moreover, a restricted candidate set $C_i = \{\hat{e}_{i1},\dots,\hat{e}_{iK}\} \subseteq \mathcal{E}\cup \{\mathrm{NIL}\}$ is provided for each mention $m_{i}$.
Training We pre-trained GENRE on BLINK data for 200k steps and then performed model selection on the validation set. Afterward, we fine-tuned on AIDA for 10k steps, without resetting the learning rate or the optimizer statistics, and again performed model selection on the validation set. Following previous work (Yamada et al., 2016; Ganea & Hofmann, 2017; Le & Titov, 2018), we considered only mentions that have entities in the KB (i.e., Wikipedia). Training was done on 32 GPUs (each with 32GB of memory) and completed in $\sim 24\mathrm{h}$, for a total of $\sim 32$ GPU-days.
Inference At test time, we use Constrained Beam Search with 10 beams and a maximum of 15 decoding steps. We restrict the input sequence to be at most 384 tokens, truncating the left, the right, or both sides of the context around a mention. We normalize the log-probabilities by sequence length.
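Constrained Beam Search only allows continuations that keep the generated prefix a valid entity name. A common way to implement this is a prefix trie over the tokenized entity names; the sketch below uses assumed names and plain strings for illustration, whereas GENRE's actual implementation operates on BPE token IDs:

```python
def build_trie(names):
    """Map each prefix of every tokenized entity name to the set of
    tokens allowed next. End-of-sequence handling (marking where a
    name may legally terminate) is omitted for brevity."""
    trie = {}
    for tokens in names:
        for i in range(len(tokens)):
            trie.setdefault(tuple(tokens[:i]), set()).add(tokens[i])
    return trie


def allowed_next(trie, prefix):
    """Tokens that extend `prefix` towards some valid entity name."""
    return trie.get(tuple(prefix), set())


# Hypothetical entity names, pre-split into tokens for illustration.
names = [["New", "York"], ["New", "York", "City"], ["Boston"]]
trie = build_trie(names)
```

At each decoding step, the beam search masks out every token not in `allowed_next(trie, prefix)`; completed candidates are then ranked by their sequence log-probability normalized by length, as described above.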
# A.2 ENTITY LINKING
Setting Given a document $d_{j}$ (e.g., a sentence), a system has to return a set of tuples $\langle m_i, e_i \rangle$ where $m_{i}$ is an entity mention (a span contained in $d_{j}$) and $e_{i} \in \mathcal{E}$ is its corresponding entity in the KB. Following Kolitsas et al. (2018), we considered only mentions that have entities in the KB (i.e., Wikipedia), and we used their candidate sets with the addition of the table computed by Hoffart et al. (2011).
Training We pre-trained GENRE for 200k steps on all abstract sections from Wikipedia<sup>7</sup>, enriched with a string-matching heuristic to resolve co-references (i.e., if a string matches exactly the text of an existing hyperlink, we add it to the dataset as an additional mention/entity pair). We then performed model selection on the validation set. Afterward, we fine-tuned on AIDA for 10k steps, resetting the learning rate and the optimizer statistics, and again performed model selection on the validation set. Again following previous work (Kolitsas et al., 2018), we considered only mentions that have entities in Wikipedia. Training was done on 64 GPUs (each with 32GB of memory) and completed in $\sim 30\mathrm{h}$, for a total of $\sim 80$ GPU-days.
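The co-reference heuristic can be sketched as follows; `expand_mentions` is a hypothetical helper operating on plain text, whereas the actual pipeline processes Wikipedia markup:

```python
def expand_mentions(text, hyperlinks):
    """For each known (mention, entity) hyperlink pair, find every
    other exact occurrence of the mention string in `text` and add it
    as an additional pair, returned as (offset, mention, entity).

    A sketch of the string-matching heuristic described above."""
    pairs = []
    for mention, entity in hyperlinks:
        start = 0
        while True:
            idx = text.find(mention, start)
            if idx == -1:
                break
            pairs.append((idx, mention, entity))
            start = idx + len(mention)
    return pairs
```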
Inference At test time, we use Constrained Beam Search with 6 beams and a maximum of 384 decoding steps. When the input sequence is too long, we split it into multiple chunks of equal size. We normalize the log-probabilities by sequence length.
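The chunking step can be sketched as follows (an assumed implementation; the paper does not specify how chunk boundaries relate to sentence boundaries):

```python
import math


def split_into_chunks(tokens, max_len=384):
    """Split a token sequence into the fewest chunks of at most
    `max_len` tokens, keeping chunk sizes as equal as possible."""
    if not tokens:
        return [tokens]
    n_chunks = math.ceil(len(tokens) / max_len)
    size = math.ceil(len(tokens) / n_chunks)
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]
```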
# A.3 PAGE-LEVEL DOCUMENT RETRIEVAL
Setting Given a query $q$ (e.g., a question) and a collection of documents $\mathcal{D}$ (Wikipedia pages, in the case of KILT), a system has to rank the documents in $\mathcal{D}$ based on their relevance to $q$.
Training We trained GENRE on all KILT data simultaneously for 200k steps and performed model selection on the validation sets, averaging the score across tasks. Training was done on 128 GPUs (each with 32GB of memory) and completed in $\sim 33\mathrm{h}$, for a total of $\sim 176$ GPU-days.
Inference At test time, we use Constrained Beam Search with 10 beams. For the ED sub-task, we restrict the input sequence to be at most 384 tokens, truncating the left, the right, or both sides of the context around a mention. We normalize the log-probabilities by sequence length.
# B ADDITIONAL RESULTS
# B.1 NAMED ENTITY DISAMBIGUATION
Table 6 reports the evaluation of GENRE on the WikilinksNED Unseen-Mentions data (Onoe & Durrett, 2020). We also report additional results on AIDA from the literature in Table 7.
<table><tr><td></td><td>Seen</td><td>Unseen</td><td>Total</td></tr><tr><td>Exact match</td><td>87.48 (751)</td><td>70.36 (2227)</td><td>74.68 (2978)</td></tr><tr><td>Partial match</td><td>56.39 (1566)</td><td>61.47 (4838)</td><td>60.23 (6404)</td></tr><tr><td>No match</td><td>41.46 (205)</td><td>45.04 (413)</td><td>43.85 (618)</td></tr><tr><td>Total</td><td>64.43 (2522)</td><td>63.21 (7478)</td><td>63.52 (10k)</td></tr></table>
Table 6: Evaluation of GENRE on the WikilinksNED Unseen-Mentions data (Onoe & Durrett, 2020). We train on the provided train set and report accuracy scores (i.e., precision at 1) alongside the number of supporting datapoints. We report scores splitting the test set into seen and unseen entities, as well as by three different types of match between a mention and its gold entity.
<table><tr><td>Methods</td><td>micro-F1</td></tr><tr><td>Guo & Barbosa (2018)</td><td>89</td></tr><tr><td>Le & Titov (2019)</td><td>89.6</td></tr><tr><td>Yamada et al. (2016)</td><td>91.5</td></tr><tr><td>Ganea & Hofmann (2017)</td><td>92.2</td></tr><tr><td>Shahbazi et al. (2019)</td><td>93.5</td></tr><tr><td>Chen et al. (2020)</td><td>93.5</td></tr><tr><td>Yang et al. (2019)</td><td>93.7</td></tr><tr><td>Fang et al. (2019)</td><td>94.3</td></tr><tr><td>Raiman & Raiman (2018)</td><td>94.9</td></tr><tr><td>Mulang’ et al. (2020)</td><td>94.9</td></tr><tr><td>Yang et al. (2018a)</td><td>95.9</td></tr><tr><td>GENRE</td><td>93.3</td></tr></table>
Table 7: Additional results on AIDA. We report Micro InKB $F_{1}$ on the test set.
# B.2 DOCUMENT RETRIEVAL
Table 8 extends Table 3 with additional results (i.e., training GENRE on numerical identifiers) and an ablation study on the document retrieval task. The purpose of the experiment is to test whether GENRE benefits from entity names being meaningful as well as compositional; numerical IDs have neither property. In both cases the model relies on its memorization capabilities, but with IDs the performance is significantly lower. Indeed, with IDs the model has no way to generalize or to exploit the "implicit knowledge" acquired during unsupervised pre-training. We also ablate the training data. DPR data corresponds to training only on Natural Questions (NQ) and TriviaQA (TQA), as DPR was trained only for QA tasks, on those datasets plus two extra ones. Note that training on BLINK data corresponds to training only for entity disambiguation. However, every other task shares similarities with entity disambiguation, and thus the model is also capable of addressing the other tasks with non-zero performance. For the ablations, underlined cells indicate the results on the task a model was trained for (i.e., GENRE only BLINK data was trained only for ED, whereas GENRE only DPR data was trained only for QA). The data ablation suggests that it is beneficial to train on all tasks simultaneously. GENRE without constraints denotes ablating
constrained decoding, which implies unconstrained generation (i.e., the model may generate entity names that are not in the KB).
<table><tr><td rowspan="2">Model</td><td rowspan="2">Fact Check. FEV</td><td colspan="3">Entity Disambiguation</td><td colspan="3">Slot Filling</td><td colspan="4">Open Domain QA</td><td rowspan="2">Dial. WoW</td><td rowspan="2">Avg.</td></tr><tr><td>AY2</td><td>WnWi</td><td>WnCw</td><td>T-REx</td><td>zsRE</td><td>NQ</td><td>HoPo</td><td>TQA</td><td>ELI5</td><td></td></tr><tr><td>DPR + BERT</td><td>72.9</td><td>-</td><td>-</td><td>-</td><td>-</td><td>40.1</td><td>60.7</td><td>25.0</td><td>43.4</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>DPR</td><td>55.3</td><td>1.8</td><td>0.3</td><td>0.5</td><td>13.3</td><td>28.9</td><td>54.3</td><td>25.0</td><td>44.5</td><td>10.7</td><td>25.5</td><td>23.6</td><td></td></tr><tr><td>tfidf</td><td>50.9</td><td>3.7</td><td>0.24</td><td>2.1</td><td>44.7</td><td>60.8</td><td>28.1</td><td>34.1</td><td>46.4</td><td>13.7</td><td>49.0</td><td>30.5</td><td></td></tr><tr><td>DPR + BART</td><td>55.3</td><td>75.5</td><td>45.2</td><td>46.9</td><td>13.3</td><td>28.9</td><td>54.3</td><td>25.0</td><td>44.4</td><td>10.7</td><td>25.4</td><td>38.6</td><td></td></tr><tr><td>RAG</td><td>61.9</td><td>72.6</td><td>48.1</td><td>47.6</td><td>28.7</td><td>53.7</td><td>59.5</td><td>30.6</td><td>48.7</td><td>11.0</td><td>57.8</td><td>47.3</td><td></td></tr><tr><td>BLINK + flair</td><td>63.7</td><td>81.5</td><td>80.2</td><td>68.8</td><td>59.6</td><td>78.8</td><td>24.5</td><td>46.1</td><td>65.6</td><td>9.3</td><td>38.2</td><td>56.0</td><td></td></tr><tr><td>GENRE only BLINK IDs</td><td>1.8</td><td>65.0</td><td>63.5</td><td>58.6</td><td>0.1</td><td>0.2</td><td>0.4</td><td>0.3</td><td>5.4</td><td>0.3</td><td>13.3</td><td>19.0</td><td></td></tr><tr><td>GENRE only DPR data</td><td>70.8</td><td>9.7</td><td>1.9</td><td>7.3</td><td>60.0</td><td>79.7</td><td>58.3</td><td>40.3</td><td>69.6</td><td>13.2</td><td>52.6</td><td>42.1</td><td></td></tr><tr><td>GENRE only BLINK data</td><td>28.1</td><td>82.5</td><td>88.1</td><td>69.9</td><td>44.8</td><td>66.1</td><td>15.0</td><td>16.4</td><td>25.6</td><td>6.8</td><td>38.7</td><td>43.8</td><td></td></tr><tr><td>GENRE w/o constraints</td><td>78.9</td><td>87.2</td><td>83.2</td><td>36.5</td><td>74.4</td><td>93.6</td><td>53.3</td><td>45.2</td><td>63.7</td><td>14.3</td><td>62.7</td><td>63.0</td><td></td></tr><tr><td>GENRE full</td><td>83.6</td><td>89.9</td><td>87.4</td><td>71.2</td><td>79.4</td><td>95.8</td><td>60.3</td><td>51.3</td><td>69.2</td><td>15.8</td><td>62.9</td><td>69.7</td><td></td></tr></table>
|
| 287 |
+
|
| 288 |
+
Table 8: Ablation study on KILT retrieval. We report R-Precision. GENRE only BLINK $IDs$ denotes training on BLINK (Wu et al., 2020) data where, instead of using the textual entity representation as the target, we used a numerical ID.
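Table 8 reports R-Precision: the fraction of the top-$R$ retrieved items that are relevant, where $R$ is the number of gold provenance items for the query. A minimal sketch of the metric (the function name and toy data are illustrative, not from the paper):

```python
def r_precision(retrieved, relevant):
    """R-Precision: fraction of the top-R retrieved items that are
    relevant, where R is the number of relevant (gold) items."""
    r = len(relevant)
    if r == 0:
        return 0.0
    top_r = retrieved[:r]
    return sum(1 for doc in top_r if doc in relevant) / r

# Toy example: 2 gold pages, one of which is ranked in the top 2.
print(r_precision(["Spain", "France", "Madrid"], {"Spain", "Madrid"}))  # 0.5
```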
|
| 289 |
+
|
| 290 |
+

|
| 291 |
+
Figure 4: Accuracy per number of BPE tokens of the Wikipedia title to generate on the validation sets of all KILT datasets except ELI5 (as it is fundamentally different from the others). We also show the distribution of title token lengths in the data. Most titles have fewer than 15 BPE tokens, and the mode of the distribution is 5. GENRE has an average accuracy of $78.6\%$, which is higher for short titles (e.g., $<10$ tokens) and lower for long titles (e.g., $\geq 10$). The degradation in performance does not directly follow the data distribution of token lengths: even though long titles are rare, performance is not heavily affected (e.g., for length $>15$).
|
| 292 |
+
|
| 293 |
+

|
| 294 |
+
Figure 5: Accuracy per number of incoming links in Wikipedia on the validation sets of all KILT datasets except ELI5 (as it is fundamentally different from the others). We also show the data distribution of the number of incoming links. Intuitively, a page/entity with few incoming links has been observed less often than highly connected pages/entities. Indeed, for pages/entities that are never linked (first bin on the left) the average accuracy is $20\%$ lower than the global average $(78.6\%)$. However, for pages/entities linked at least once it is above the global average. This indicates that GENRE is effective at linking rare entities.
|
| 295 |
+
Figure 6: Example of a GENRE prediction for named entity disambiguation on KILT WNED. The input is plain text where a mention is flagged with two special start and end tokens, [STARTENT] and [ENDENT]. The output is a ranked list of entities (we also report the log-likelihood of each).
|
| 296 |
+
Figure 7: Example of GENRE predictions for the retrieval task on KILT. The input is a query and the output is a ranked list of Wikipedia article titles (we also report the log-likelihood of the solutions).
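As in Figures 6 and 7, candidates are ranked by the log-likelihood the autoregressive model assigns to their token sequences, i.e., the sum of per-token log-probabilities. A minimal sketch (candidate names echo Figure 6; the per-token probabilities are made up for illustration):

```python
import math

def rank_candidates(token_logprobs):
    """Rank candidate titles by total sequence log-likelihood
    (sum of per-token log-probabilities), highest first."""
    scored = {title: sum(lps) for title, lps in token_logprobs.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

cands = {
    "Metropolis_(comics)": [math.log(0.95), math.log(0.96)],
    "Superman_(comic_book)": [math.log(0.50), math.log(0.44)],
}
print(rank_candidates(cands)[0][0])  # 'Metropolis_(comics)'
```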
|
| 297 |
+
|
| 298 |
+
# C EXAMPLES
|
| 299 |
+
|
| 300 |
+
```txt
|
| 301 |
+
1 ID: '87d95287-707e-4bd9-9633-ca0c611a4a3a_World_Without_Superma:8'
|
| 302 |
+
2 inputs: '[...] When Superman leaves Earth for New Krypton, he appoints, newly freed from the Phantom Zone, to take his place as guardian of [STARTENT] Metropolis [ENDENT] . Mon-El assumes the secret identity of Johnathan Kent as a tribute to Clark \'s adoptive father, posing as Clark \'s cousin. [...]'
|
| 303 |
+
3 gold_output: 'Metropolis (comics)'
|
| 304 |
+
4 predicted_outputs: [
|
| 305 |
+
5 ('Metropolis_(comics)', -0.09),
|
| 306 |
+
6 ('Themyscira_(DC_Comics)', -1.09),
|
| 307 |
+
7 ('Metropolis_(disambiguation)', -1.27),
|
| 308 |
+
8 ('Superman_(comic_book)', -1.51),
|
| 309 |
+
9 ('Superman_(Earth-Two)', -1.52)
|
| 310 |
+
10 ]
|
| 311 |
+
```
|
| 312 |
+
|
| 313 |
+
```txt
1 ID: 'sfq-18245'
2 inputs: "Which Florentine painter 1535-1607 used the name Bronzino after the death of his 'uncle'?"
3 gold_output: 'Bronzino'
4 predicted_outputs: [
5 ('Florence', -0.37),
6 ('Bronzino', -0.62),
7 ('Niccolo_Machiavelli', -0.64),
8 ('Giorgio_de_Chirico', -0.71),
9 ('Vitruvian_Man', -0.73)
10 ]
```

```txt
1 ID: '4713'
2 inputs: 'Tool has won three Oscars.'
3 gold_output: 'Tool (band)'
4 predicted_outputs: [
5 ('Tool_(band)', -0.08),
6 ('Tool_(disambiguation)', -1.59),
7 ('Machine_Head_(band)', -1.73),
8 ('Language_Arts_(album)', -1.97),
9 ('Machine_Gun_(band)', -2.12)
10 ]
```
|
| 327 |
+
|
| 328 |
+
(a) TriviaQA (open domain question answering).
|
| 329 |
+
|
| 330 |
+
(b) FEVER (fact checking).
|
| 331 |
+
|
| 332 |
+
```txt
1 ID: '1106testa_SOCCER'
2 inputs: 'SOCCER - RESULT IN SPANISH FIRST DIVISION . MADRID 1996-08-31 Result of game played in the Spanish first division on Saturday : Deportivo Coruna 1 Real Madrid 1 .'
3 gold_output: 'SOCCER - RESULT IN [SPANISH](Spain) FIRST DIVISION . [MADRID](Madrid) 1996-08-31 Result of game played in the [Spanish](Spain) first division on Saturday : Deportivo Coruna 1 [Real Madrid](Real Madrid C.F.) 1 .'
4 predicted_output: 'SOCCER - RESULT IN [SPANISH](Spain) FIRST DIVISION . [MADRID](Madrid) 1996-08-31 Result of game played in the [Spanish](Spain) first division on Saturday : [Deportivo](Deportivo de La Coruna) Coruna 1 [Real Madrid](Real Madrid C.F.) 1 .'
5 gold_spans: [
6   [19, 7, 'Spain'],
7   [44, 6, 'Madrid'],
8   [91, 7, 'Spain'],
9   [147, 11, 'Real_Madrid_C.F.']
10 ]
11 predicted_spans: [
12   [19, 7, 'Spain'],
13   [44, 6, 'Madrid'],
14   [91, 7, 'Spain'],
15   [128, 9, 'Deportivo_de_La_Coruna'],
16   [147, 11, 'Real_Madrid_C.F.']
17 ]
18 Micro-precision: 0.80
19 Micro-recall: 1.00
20 Micro-F1: 0.88
```
|
| 352 |
+
|
| 353 |
+

|
| 354 |
+
Figure 8: Example of a GENRE prediction for end-to-end entity linking on AIDA. The input is plain text and the output is a markup string where the links are Wikipedia titles. Spans are in the format $\langle s_i, l_i, t_i \rangle$: start of the mention, length of the mention, and title, respectively.
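The micro metrics at the bottom of the example can be recomputed from the span lists. A sketch assuming exact-match scoring over $\langle s_i, l_i, t_i \rangle$ triples (the function is illustrative, not the official KILT scorer):

```python
def micro_prf(gold, pred):
    """Micro-averaged precision/recall/F1 over (start, length, title)
    spans, counting a predicted span as correct only on an exact match."""
    gold_set, pred_set = set(map(tuple, gold)), set(map(tuple, pred))
    tp = len(gold_set & pred_set)
    p = tp / len(pred_set) if pred_set else 0.0
    r = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Spans from the AIDA example above: the prediction adds one extra span.
gold = [(19, 7, 'Spain'), (44, 6, 'Madrid'), (91, 7, 'Spain'),
        (147, 11, 'Real_Madrid_C.F.')]
pred = gold + [(128, 9, 'Deportivo_de_La_Coruna')]
print(micro_prf(gold, pred)[:2])  # (0.8, 1.0)
```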
|
| 355 |
+
Figure 9: Example of the prefix tree (trie) structure where the allowed entity identifiers are 'English language', 'English literature' and 'France'. Note that the root is the start-of-sequence token SOS and all leaves are end-of-sequence tokens EOS. Since more than one sequence shares the same prefix (i.e., 'English'), this prefix ends up being an internal node whose branches are the possible continuations.
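The trie of Figure 9 can be sketched with nested dictionaries: each node maps a token to a child node, and querying a prefix returns the tokens the constrained decoder may generate next (a simplified illustration; GENRE operates on BPE token ids rather than whole words):

```python
def build_trie(sequences):
    """Prefix tree over tokenized entity names. Each node maps a token to
    a child node; the special token '</s>' (EOS) marks a complete name."""
    root = {}
    for seq in sequences:
        node = root
        for tok in seq + ["</s>"]:
            node = node.setdefault(tok, {})
    return root

def allowed_tokens(trie, prefix):
    """Tokens the constrained decoder may generate after `prefix`."""
    node = trie
    for tok in prefix:
        node = node[tok]
    return sorted(node)

trie = build_trie([["English", "language"],
                   ["English", "literature"],
                   ["France"]])
print(allowed_tokens(trie, []))           # ['English', 'France']
print(allowed_tokens(trie, ["English"]))  # ['language', 'literature']
```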
|
autoregressiveentityretrieval/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:af672f8aa91cc17b045988884fa25fdaa83a14ead3dda0a85d3919a694a41558
|
| 3 |
+
size 697388
|
autoregressiveentityretrieval/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5ae4c832b315e9843739db985056ab24bce0acb677f21072831c8cb71a9a39b3
|
| 3 |
+
size 483283
|
behavioralcloningfromnoisydemonstrations/71732701-928a-470e-8bdc-6e5162239933_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3a04db7686fede40a7a93ba14f6a99cf0618897d113c7b4d04d2542489a6b0ee
|
| 3 |
+
size 86611
|
behavioralcloningfromnoisydemonstrations/71732701-928a-470e-8bdc-6e5162239933_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:833dc3f8560ead078f3abb861217548344b95937c3df428e26979d8c5533209e
|
| 3 |
+
size 102003
|
behavioralcloningfromnoisydemonstrations/71732701-928a-470e-8bdc-6e5162239933_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f55cf55b28c20003e21359ce5b533e66ee6fdeac3bfa866f3e87704c230159f2
|
| 3 |
+
size 1907687
|
behavioralcloningfromnoisydemonstrations/full.md
ADDED
|
@@ -0,0 +1,314 @@
| 1 |
+
# BEHAVIORAL CLONING FROM NOISY DEMONSTRATIONS
|
| 2 |
+
|
| 3 |
+
Fumihiro Sasaki & Ryota Yamashina
|
| 4 |
+
|
| 5 |
+
Ricoh Company, Ltd.
|
| 6 |
+
|
| 7 |
+
{fumihiro.fs.sasaki,ryohta.yamashina}@jp.ricoh.com
|
| 8 |
+
|
| 9 |
+
# ABSTRACT
|
| 10 |
+
|
| 11 |
+
We consider the problem of learning an optimal expert behavior policy given noisy demonstrations that contain observations from both optimal and non-optimal expert behaviors. Popular imitation learning algorithms, such as generative adversarial imitation learning, assume that (clean) demonstrations are given from optimal expert policies but not the non-optimal ones, and thus often fail to imitate the optimal expert behaviors given the noisy demonstrations. Prior works that address the problem require (1) learning policies through environment interactions in the same fashion as reinforcement learning, and (2) annotating each demonstration with confidence scores or rankings. However, such environment interactions and annotations in real-world settings take impractically long training time and a significant human effort. In this paper, we propose an imitation learning algorithm to address the problem without any environment interactions and annotations associated with the non-optimal demonstrations. The proposed algorithm learns ensemble policies with a generalized behavioral cloning (BC) objective function where we exploit another policy already learned by BC. Experimental results show that the proposed algorithm can learn behavior policies that are much closer to the optimal policies than ones learned by BC.
|
| 12 |
+
|
| 13 |
+
# 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Imitation learning (IL) has become a widely used approach to obtain autonomous robotics control systems. IL is often more applicable to real-world problems than reinforcement learning (RL) since providing expert demonstrations is often easier than designing the appropriate rewards that RL requires. There have been several IL methods that involve RL (Ziebart et al., 2008; Ng et al., 2000; Abbeel & Ng, 2004; Ho & Ermon, 2016). Those IL methods inherit sample complexity from RL in terms of environment interactions during training. This complexity restricts their applicability to real-world problems, since environment interactions in real-world settings often take a long time and can cause damage to the robot or the environment. Therefore, we are interested in IL methods that do not require environment interactions, such as behavioral cloning (BC) (Pomerleau, 1991), which learns an expert policy in a supervised fashion.
|
| 16 |
+
|
| 17 |
+
BC, as well as popular IL methods such as generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016), assumes the expert demonstrations are optimal. Unfortunately, it is often difficult to obtain optimal demonstrations for many real-world tasks because the expert who operates the robot to achieve a task often makes mistakes for various reasons, such as the difficulty of the task, difficulty in handling the controller, limited observability of the environment, or the presence of distractions. The mistakes include unnecessary and/or incorrect operations. Given such noisy expert demonstrations, which contain records of both optimal and non-optimal behavior, BC and the popular IL methods fail to imitate the optimal policy due to the optimality assumption on the demonstrations, as shown in (Wu et al., 2019).
|
| 18 |
+
|
| 19 |
+
A naive solution to cope with noisy demonstrations is to discard the non-optimal demonstrations among the ones already collected. This screening process is often impractical because it involves significant human effort. Most recent IL works suppose settings where a very limited number of clean expert demonstrations, composed only of optimal behavior records, are available. Those methods are also vulnerable to noisy demonstrations due to the optimality
|
| 20 |
+
|
| 21 |
+
assumption on the demonstrations. Thus, they implicitly suppose such an impractical screening process when applied to real-world problems, where many noisy demonstrations beyond the clean ones can easily be obtained. There have been IL methods addressing noisy demonstrations. Instead of the screening process, they require annotating each demonstration with confidence scores (Wu et al., 2019) or rankings (Brown et al., 2019). Even though they cope well with noisy demonstrations to obtain the optimal behavior policies, such annotation costs significant human effort, just as the screening does. Hence, we desire IL methods that can cope well with noisy demonstrations, which can be easily obtained in real-world settings, without any screening or annotation processes associated with the non-optimal behaviors.
|
| 22 |
+
|
| 23 |
+
In this paper, we propose a novel imitation learning algorithm to address the noisy demonstrations. The proposed algorithm does not require (1) any environment interactions during training, and (2) any screening and annotation processes associated with the non-optimality of the expert behaviors. Our algorithm learns ensemble policies with a generalized BC objective function where we exploit another policy already learned by BC. Experimental results show that the proposed algorithm can learn policies that are much closer to the optimal than ones learned by BC.
|
| 24 |
+
|
| 25 |
+
# 2 RELATED WORKS
|
| 26 |
+
|
| 27 |
+
A wide variety of IL methods have been proposed in the last few decades. BC (Pomerleau, 1991) is the simplest IL method among those, and thus could be the first IL option when enough clean demonstrations are available. Ross & Bagnell (2010) theoretically pointed out a downside of BC referred to as compounding error: the small errors of learners trained by BC can compound over time and degrade their performance. On the other hand, experimental results in (Sasaki et al., 2018) show that BC given sufficient amounts of clean demonstrations can easily obtain the optimal behavior even for complex continuous control tasks. Hence, the effect of the compounding error is negligible in practice if the amount of clean demonstrations is sufficient. However, even if the amount of demonstrations is large, BC cannot obtain the optimal policy given noisy demonstrations, due to the optimality assumption on the demonstrations. Other widely used IL approaches are inverse reinforcement learning (IRL) (Ziebart et al., 2008; Ng et al., 2000; Abbeel & Ng, 2004) and adversarial imitation learning (AIL) (Ho & Ermon, 2016). Since those approaches also assume the optimality of the demonstrations, they too are unable to obtain the optimal policy given noisy demonstrations, as shown in (Wu et al., 2019). As we will show in Section 6, our algorithm can successfully learn near-optimal policies if sufficient amounts of noisy demonstrations are given.
|
| 28 |
+
|
| 29 |
+
There have been several works that address noisy demonstrations (Wu et al., 2019; Brown et al., 2019; Tangkaratt et al., 2019; Kaiser et al., 1995; Grollman & Billard, 2012; Kim et al., 2013). Those works handle the noisy demonstrations by either screening the non-optimal demonstrations with heuristic non-optimality assessments (Kaiser et al., 1995), annotations associated with the non-optimality (Wu et al., 2019; Brown et al., 2019; Grollman & Billard, 2012), or training through environment interactions (Kim et al., 2013; Wu et al., 2019; Brown et al., 2019; Tangkaratt et al., 2019). Our algorithm requires no screening processes, no annotations associated with the non-optimality, and no environment interactions during training.
|
| 30 |
+
|
| 31 |
+
Offline RL methods (Lange et al., 2012; Fujimoto et al., 2019; Kumar et al., 2020) train the learner agents without any environment interactions and allow the training dataset to contain non-optimal trajectories, as in our problem setting. A drawback of offline RL methods in real-world applications is the requirement to design reward functions, which often involves significant human effort, since those methods assume the reward for each state-action pair is known. Our algorithm does not require designing reward functions, as in standard IL methods.
|
| 32 |
+
|
| 33 |
+
Disagreement-regularized imitation learning (DRIL) (Brantley et al., 2019) is a state-of-the-art IL algorithm which employs an ensemble of policies, as our algorithm does. The aim of employing the ensemble differs between DRIL and our algorithm. DRIL uses the disagreement in predictions made by policies in the ensemble to evaluate whether the states observed while training the learner are ones observed in the expert demonstrations. On the other hand, our algorithm uses the ensemble to encourage the learner to take optimal actions in each state, as described in 5.3. In addition, DRIL fundamentally requires environment interactions during training whereas our algorithm does not.
|
| 34 |
+
|
| 35 |
+
# 3 PRELIMINARIES AND PROBLEM SETUP
|
| 36 |
+
|
| 37 |
+
In this work, we consider an episodic fixed-horizon Markov decision process (MDP) which is formalized as a tuple $\{S, \mathcal{A}, \mathcal{P}, R, d_0, T\}$ , where $\mathcal{S}$ is a set of states, $\mathcal{A}$ is a set of possible actions agents can take, $\mathcal{P}: S \times \mathcal{A} \times S \to [0,1]$ is a transition probability, $R: S \times \mathcal{A} \to [0,1]$ is a reward function, $d_0: S \to [0,1]$ is a distribution over initial states, and $T$ is an episode horizon. The agent's behavior is defined by a stochastic policy $\pi: S \times \mathcal{A} \to [0,1]$ and $\Pi$ denotes a set of the stochastic policies. The expected one-step immediate reward for a policy $\pi$ given a state $s$ is defined as $R^{\pi}(s) = \mathbb{E}_{a \sim \pi(\cdot | s)}[R(s, a)]$ .
|
| 38 |
+
|
| 39 |
+
Let $d_t^\pi$ and $d^\pi = \frac{1}{T} \sum_{t=1}^T d_t^\pi$ denote the distribution over states at time step $t$ and the average distribution over $T$ time steps induced by $\pi$ , respectively. The distributions $d_1^\pi$ at the first step correspond to $d_0$ for any $\pi$ . When following a policy $\pi$ throughout an episode, the expected one-step immediate reward at time step $t$ and the expected $T$ -step reward are defined as $R_t^\pi = \mathbb{E}_{s \sim d_t^\pi, a \sim \pi(\cdot|s)}[R(s, a)] = \mathbb{E}_{s \sim d_t^\pi}[R^\pi(s)]$ and $\mathcal{J}(\pi, R) = \sum_{t=1}^T R_t^\pi = T\mathbb{E}_{s \sim d^\pi}[R^\pi(s)]$ , respectively. We refer to $\mathcal{J}(\pi, R)$ as on-policy expected $T$ -step reward. We also consider another $T$ -step reward defined by $\mathcal{J}_\beta(\pi, R) = T\mathbb{E}_{s \sim d^\beta}[R^\pi(s)]$ , which we call off-policy expected $T$ -step reward, where $\beta \in \Pi$ is a policy that can differ from $\pi$ .
|
| 40 |
+
|
| 41 |
+
In our problem setting, the function $R$ is not given. Instead, we observe noisy demonstrations. We refer to the agent that generates the noisy demonstrations as the noisy expert. The decision process becomes $\mathrm{MDP} \backslash \{R\}$ as in common imitation learning settings, and our problem can be formalized as finding an optimal policy in $\mathrm{MDP} \backslash \{R\}$. Here we refer to the true expert policy $\pi_{e}^{*}$ as one that takes the optimal (thus not noisy) behavior in episodic tasks. We make the following four assumptions to further formalize our problem setting:
|
| 42 |
+
|
| 43 |
+
Assumption 1. The $T$ -step expected reward of $\pi_e^*$ satisfies $\mathcal{J}(\pi, R) \leq \mathcal{J}(\pi_e^*, R)$ ; $\mathcal{J}_{\beta}(\pi, R) \leq \mathcal{J}_{\beta}(\pi_e^*, R)$ ; and $\mathcal{J}_{\beta}(\pi_e^*, R) \leq \mathcal{J}(\pi_e^*, R)$ for any non-optimal policies $\pi, \beta \in \Pi \setminus \{\pi_e^*\}$ .
|
| 44 |
+
|
| 45 |
+
Assumption 2. With small probability $\epsilon$ , which we call non-optimal probability, the policies $\pi_e$ the noisy experts follow during demonstrations are sampled at each time step as $\pi_e = \pi \sim p_{\Pi}$ if $\epsilon \geq z \sim \mathcal{U}(0,1)$ , otherwise $\pi_e = \pi_e^*$ , where $p_{\Pi}$ is an unknown distribution over the set of policies, $z$ is a random variable, and $\mathcal{U}(0,1)$ is a uniform distribution with range $[0,1]$ .
|
| 46 |
+
|
| 47 |
+
Assumption 3. The reward $R_{t}^{\pi_{e}}$ is at least zero if the noisy expert has followed a policy $\pi \in \Pi \backslash \{\pi_{e}^{*}\}$ once or more so far, otherwise $R_{t}^{\pi_{e}} = \mathbb{E}_{s \sim d_{t}^{\pi_{e}}} \left[ \epsilon \mathbb{E}_{\pi \sim p_{\Pi}} [R^{\pi}(s)] + (1 - \epsilon) R^{\pi_{e}^{*}}(s) \right]$ .
|
| 48 |
+
|
| 49 |
+
Assumption 4. The sequence $\{R_1^{\pi_e}, \dots, R_T^{\pi_e}\}$ has the monotonically decreasing property $R_{t}^{\pi_{e}} \geq R_{t+1}^{\pi_{e}}$.
|
| 50 |
+
|
| 51 |
+
Assumption 1 indicates that both the on-policy and off-policy expected $T$-step rewards following $\pi_e^*$ are always greater than or equal to those following any other policy. In other words, we assume the true expert policy is an optimal one in the MDP, and the agent following it behaves so that the expected immediate rewards at all states are maximized. Under Assumption 1, the problem we would like to solve in this work can be stated as learning a parameterized policy $\pi_{\theta}$ that maximizes its on-policy expected $T$-step reward $\mathcal{J}(\pi_{\theta}, R)$ toward $\mathcal{J}(\pi_{e}^{*}, R)$. Assumption 2 indicates that the noisy expert occasionally adopts non-optimal policies, which results in the noisy demonstrations, due to random events, such as the presence of distractions, associated with the random variable $z$. The noisy expert will visit states that would never be visited by the true expert if the noisy expert followed non-optimal policies even once. Assumption 3 indicates that those states are less rewarded and their rewards are at least zero. Assumption 3 also indicates that the noisy demonstrations contain a number of episodes in which the noisy expert has reached the same state $s$ at which it has adopted both $\pi_{e}^{*}$ and $\pi \in \Pi \setminus \{\pi_{e}^{*}\}$ with probability $\epsilon$. Assumption 4 indicates that, since the probability that the noisy expert consecutively follows $\pi_{e}^{*}$ decreases as the time step increases according to Assumption 2, the divergence between $d_t^{\pi_e}$ and $d_t^{\pi_e^*}$ grows as the time step $t$ increases, and thus the one-step expected immediate reward $R_{t}^{\pi_{e}} = \mathbb{E}_{s \sim d_{t}^{\pi_{e}}, a \sim \pi_{e}(\cdot | s)}[R(s, a)]$ decreases as $t$ increases.
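Assumption 2's demonstration process can be simulated directly: at each step the noisy expert follows $\pi_e^*$ unless a draw from $\mathcal{U}(0,1)$ falls below $\epsilon$, in which case a non-optimal policy is sampled. A minimal sketch (the uniform choice over non-optimal policies stands in for the unknown $p_\Pi$):

```python
import random

def noisy_expert_action(state, optimal_policy, non_optimal_policies, eps, rng):
    """Per Assumption 2: with probability eps sample a non-optimal policy
    (here uniformly, standing in for the unknown p_Pi); otherwise follow
    the true expert policy pi_e^*."""
    if rng.random() < eps:
        policy = rng.choice(non_optimal_policies)
    else:
        policy = optimal_policy
    return policy(state)

rng = random.Random(0)
optimal = lambda s: "good"
bad = [lambda s: "bad"]
# With eps = 0 the noisy expert always matches the true expert.
print(noisy_expert_action(0, optimal, bad, 0.0, rng))  # 'good'
```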
|
| 52 |
+
|
| 53 |
+
# 4 ANALYSIS OF PERFORMANCE DETERIORATION
|
| 54 |
+
|
| 55 |
+
In this section, we firstly describe BC objective in 4.1. Then, we analyze why the learner trained by BC deteriorates its performance when using the noisy demonstrations from the expected $T$ -step reward maximization and KL-divergence minimization perspectives in 4.2 and 4.3, respectively.
|
| 56 |
+
|
| 57 |
+
# 4.1 BEHAVIORAL CLONING OBJECTIVE
|
| 58 |
+
|
| 59 |
+
Let $\pi_{\theta} \in \Pi$ be a learner policy parameterized by $\theta$ to be optimized by IL algorithms. The common BC objective is as follows:
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\underset {\theta} {\arg \max } \mathbb {E} _ {s \sim d ^ {\pi_ {e}}, a \sim \pi_ {e} (\cdot | s)} [ \log \pi_ {\theta} (a | s) ]. \tag {1}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
The objective (1) aims to mimic the expert behavior, which follows $\pi_e$. It can be interpreted that (1) maximizes the expected one-step immediate reward $R^{\pi_{\theta}}(s)$ toward $R^{\pi_e}(s)$ at each state $s \sim d^{\pi_e}$. Since the state distribution $d^{\pi_e}$ is not induced by $\pi_{\theta}$, it can also be said that (1) maximizes the off-policy expected $T$-step reward $\mathcal{J}_{\pi_e}(\pi_\theta, R)$ toward $\mathcal{J}(\pi_e, R)$.
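For a tabular policy, objective (1) has a closed-form maximizer: the empirical action frequency in each state. A minimal sketch (illustrative, not the paper's algorithm) showing that plain BC reproduces the noise rate rather than the optimal action:

```python
from collections import Counter, defaultdict

def bc_tabular(demos):
    """Maximize (1) for a tabular policy: the maximum-likelihood
    pi_theta(a|s) is the empirical frequency of action a in state s
    within the demonstrations."""
    counts = defaultdict(Counter)
    for s, a in demos:
        counts[s][a] += 1
    return {s: {a: c / sum(ctr.values()) for a, c in ctr.items()}
            for s, ctr in counts.items()}

# A noisy expert that took the optimal action 3 out of 4 times in state 's0':
pi = bc_tabular([("s0", "opt"), ("s0", "opt"), ("s0", "opt"), ("s0", "bad")])
print(pi["s0"]["opt"])  # 0.75 -- BC reproduces the noise instead of the optimum
```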
|
| 66 |
+
|
| 67 |
+
# 4.2 THE EXPECTED T-STEP REWARD MAXIMIZATION
|
| 68 |
+
|
| 69 |
+
We obtain the lower bound of the on-policy expected $T$-step reward for the noisy expert policy in almost the same way as Theorem 2.1 in (Ross & Bagnell, 2010), where they showed the lower bound for learner policies given "clean" expert demonstrations.
|
| 70 |
+
|
| 71 |
+
Theorem 1. If the Assumptions 1 - 4 hold, $\mathcal{J}(\pi_e,R)$ has the following lower bound:
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
\mathcal {J} \left(\pi_ {e}, R\right) \geq \left\{\frac {1}{T} \sum_ {t = 0} ^ {T - 1} (1 - \epsilon) ^ {t} \right\} \cdot \mathbb {E} _ {\pi \sim p _ {\Pi}} \left[ \mathcal {J} _ {\pi_ {e}} (\pi , R) \right]. \tag {2}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
The detailed derivation can be found in Appendix A.1. Assume that the learner policy $\pi_{\theta}$ has a probability of non-optimal behavior $\hat{\epsilon} = \epsilon + \zeta$ at most as the result of BC, where $\zeta \in [0, 1 - \epsilon]$ is an additional probability of non-optimal behavior due to the remaining loss in (1). Note that $\zeta$ may be greater than zero, due to the difficulty of optimizing (1), even if $\epsilon = 0$. The learner following $\pi_{\theta}$ with $\hat{\epsilon}$ can be deemed another noisy expert who samples a policy at each time step $\pi_{\theta} = \pi \sim p_{\pi_{\theta}}$ if $\hat{\epsilon} \geq z \sim \mathcal{U}(0, 1)$, otherwise $\pi_{\theta} = \pi_{e}^{*}$, where $p_{\pi_{\theta}}$ is a (special) distribution from which the same policy is always sampled. By substituting $\hat{\epsilon}$ and $p_{\pi_{\theta}}$ for $\epsilon$ and $p_{\Pi}$ in Theorem 1, we obtain the following corollary.
|
| 78 |
+
|
| 79 |
+
Corollary 1. If the Assumptions 1 - 4 hold and the policy $\pi_{\theta}$ has a probability of non-optimal behavior $\hat{\epsilon} = \epsilon + \zeta$ , $\mathcal{J}(\pi_{\theta}, R)$ has the following lower bound:
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
\mathcal {J} (\pi_ {\theta}, R) \geq \left\{\frac {1}{T} \sum_ {t = 0} ^ {T - 1} (1 - \hat {\epsilon}) ^ {t} \right\} \cdot \mathcal {J} _ {\pi_ {e}} (\pi_ {\theta}, R). \tag {3}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
Recall that the BC objective (1) maximizes $\mathcal{J}_{\pi_e}(\pi_\theta, R)$. If $\hat{\epsilon} = 0$, Corollary 1 indicates that the on-policy expected $T$-step reward $\mathcal{J}(\pi_\theta, R)$, which corresponds to the actual learner performance, is boosted by maximizing $\mathcal{J}_{\pi_e}(\pi_\theta, R)$ through the optimization of the BC objective (1). On the other hand, if $\epsilon > 0$ and thus $\hat{\epsilon} > 0$, the first factor on the RHS in (3) shrinks rapidly as $\epsilon$ grows. Corollary 1 thus shows that the probability of non-optimal behavior $\epsilon$ of the noisy expert significantly negates the improvement of learner performance $\mathcal{J}(\pi_\theta, R)$ by BC, even if $\zeta$ can be sufficiently minimized through the optimization. Hence, the learner trained by BC is not able to boost its performance enough when given noisy demonstrations.
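The shrinking factor in the bounds (2) and (3) can be evaluated numerically to see how quickly a non-zero $\hat{\epsilon}$ erodes the guarantee:

```python
def bound_factor(eps, T):
    """The factor (1/T) * sum_{t=0}^{T-1} (1 - eps)^t that multiplies
    the off-policy reward in the lower bounds (2) and (3)."""
    return sum((1 - eps) ** t for t in range(T)) / T

T = 20
print(bound_factor(0.0, T))  # 1.0: the full off-policy reward carries over
print(bound_factor(0.3, T))  # ~0.17: most of the guarantee is lost
```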
|
| 86 |
+
|
| 87 |
+
# 4.3 KL DIVERGENCE MINIMIZATION
|
| 88 |
+
|
| 89 |
+
Let $S^{\pi_e}$ be the set of states observed in the noisy demonstrations. $S^{\pi_e}$ can be thought of as the domain of the (empirical) state distribution $d^{\pi_e}$. $S^{\pi_e}$ can be partitioned into two sets of states as $S^{\pi_e} = S_{e}^{\pi_e} \cup S_{e + *}^{\pi_e}$, where $S_{e}^{\pi_{e}}$ contains states that are observed if the noisy expert has followed a policy $\pi \in \Pi \setminus \{\pi_{e}^{*}\}$ once or more so far in the episode, and $S_{e + *}^{\pi_e}$ contains states
|
| 90 |
+
|
| 91 |
+
at which the noisy expert follows a policy $\pi \in \Pi \setminus \{\pi_{e}^{*}\}$ for the first time in the episode. Under Assumption 3, the rewards $R_{t}^{\pi_{e}}$ for the states $s \in S_{e}^{\pi_{e}}$ are at least zero whereas $R_{t}^{\pi_{e}} = \mathbb{E}_{s \sim d_{t}^{\pi_{e}}} \left[ \epsilon \mathbb{E}_{\pi \sim p_{\Pi}} [R^{\pi}(s)] + (1 - \epsilon) R^{\pi_{e}^{*}}(s) \right]$ for the states $s \in S_{e+*}^{\pi_{e}}$. Note that the noisy expert adopts $\pi \in \Pi \setminus \{\pi_{e}^{*}\}$ with probability $\epsilon$ at the states $s \in S_{e+*}^{\pi_{e}}$. Let $d_{e}^{\pi_{e}}$ and $d_{e+*}^{\pi_{e}}$ be the state distributions the noisy expert policy induces over $S_{e}^{\pi_{e}}$ and $S_{e+*}^{\pi_{e}}$, respectively. Then we can define $d^{\pi_{e}}$ as a mixture of those distributions:
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
d ^ {\pi_ {e}} (s) = \alpha d _ {e} ^ {\pi_ {e}} (s) + \beta d _ {e + *} ^ {\pi_ {e}} (s), \tag {4}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
where $\alpha$ and $\beta$ are the ratios at which the noisy expert entered states belonging to $S_{e}^{\pi_{e}}$ and $S_{e + *}^{\pi_{e}}$ during demonstrations, respectively, and $\alpha + \beta = 1$ is satisfied. Using Equation (4), the upper bound of the objective function in Equation (1) is derived as follows:
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\mathbb {E} _ {s \sim d ^ {\pi_ {e}}, a \sim \pi_ {e} (\cdot | s)} [ \log \pi_ {\theta} (a | s) ] \leq - \alpha \Omega_ {e} (\theta) - \beta \Omega_ {e + *} (\theta), \tag {5}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
\begin{aligned} \Omega_{e}(\theta) &= \mathbb{E}_{s \sim d_{e}^{\pi_{e}}} \left[ D_{KL} \left[ \pi_{e}(\cdot | s) \,\|\, \pi_{\theta}(\cdot | s) \right] \right], \qquad (6) \\ \Omega_{e+*}(\theta) &= \epsilon \, \mathbb{E}_{s \sim d_{e+*}^{\pi_{e}}, \pi \sim p_{\Pi}} \left[ D_{KL} \left[ \pi(\cdot | s) \,\|\, \pi_{\theta}(\cdot | s) \right] \right] + (1 - \epsilon) \, \mathbb{E}_{s \sim d_{e+*}^{\pi_{e}}} \left[ D_{KL} \left[ \pi_{e}^{*}(\cdot | s) \,\|\, \pi_{\theta}(\cdot | s) \right] \right], \qquad (7) \end{aligned}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
where $D_{KL}$ is the forward Kullback-Leibler (KL) divergence. The full derivation can be found in Appendix A.2. The inequality (5) shows that the BC objective (1) with noisy demonstrations minimizes a weighted sum of KL divergences. The first term on the RHS in (7) leads the learner to imitate non-optimal behaviors, whereas the second term is to learn $\pi_e^*$ on the same states. The optimization to minimize $\Omega_{e+*}(\theta)$ in (7) is difficult because minimizing KL divergences with different target distributions at the same time is difficult in general. The first term on the RHS in (7) thus works as a "noisy" regularizer with coefficient $\epsilon$ that confuses the learner when learning $\pi_e^*$. The difficulty in the optimization due to this noisy regularizer increases $\zeta$ as $\epsilon$ increases.
|
| 108 |
+
|
| 109 |
+
As mentioned in 4.1 and 4.2, BC aims to maximize $\mathcal{J}_{\pi_e}(\pi_\theta, R)$ towards $\mathcal{J}(\pi_e, R)$. Hence, minimizing $\Omega_e(\theta)$ in (6) corresponds to maximizing $\mathbb{E}_{s \sim d_e^{\pi_e}}[R^{\pi_\theta}(s)]$ towards $\mathbb{E}_{s \sim d_e^{\pi_e}}[R^{\pi_e}(s)]$. Since the rewards $R^{\pi_e}(s)$ are at least zero for the states $s \sim d_e^{\pi_e}$ according to Assumption 3 and the definition of $S_e^{\pi_e}$, $\mathbb{E}_{s \sim d_e^{\pi_e}}[R^{\pi_\theta}(s)]$ becomes at least zero by minimizing $\Omega_e(\theta)$. Hence, $\mathcal{J}_{\pi_e}(\pi_\theta, R)$ becomes at least zero as the rate $\alpha$ increases, while $\alpha$ in turn increases as the probability of non-optimal behavior $\epsilon$ increases. Thus, the larger the probability $\epsilon$ is, the more difficult it is to boost the learner performance by BC.

If the influence of the noisy regularizer can be reduced, the probability that the learner follows $\pi_e^*$ at states $s \in S_{e+*}^{\pi_e}$ will increase. In addition, as this probability increases, the rate (corresponding to $\alpha$) of visits to states $s \in S_{e}^{\pi_e}$ will decrease. Thus, the more often the learner follows $\pi_e^*$ at states $s \in S_{e+*}^{\pi_e}$, the more rewards $R^{\pi_e^*}(s)$ it obtains according to Assumption 3. To summarize the above analysis, reducing the influence of the noisy regularizer on states $s \in S_{e+*}^{\pi_e}$, which leads the learner to imitate non-optimal behaviors, might boost the learner performance.

# 5 ALGORITHM
The analyses in Section 4 show that the performance of a learner trained by standard BC deteriorates when noisy demonstrations are given. Based on the analyses in 4.2 and 4.3, the learner performance will be boosted if the learner imitates the optimal policy $\pi_e^*$ but not the non-optimal ones $\pi \in \Pi \setminus \{\pi_e^*\}$ for states $s \in S_{e+*}^{\pi_e}$. In other words, the learner performance will be boosted if $\hat{\epsilon}$ of the learner can be reduced. In this section, we first propose our algorithm, which avoids learning $\pi \in \Pi \setminus \{\pi_e^*\}$ while learning $\pi_e^*$, in 5.1. Then we describe how our algorithm avoids learning $\pi \in \Pi \setminus \{\pi_e^*\}$ from mode-seeking and reward-maximization perspectives in 5.2 and 5.3, respectively. We lastly discuss limitations of our algorithm in 5.4.

# 5.1 PROPOSED ALGORITHM
We consider a generalization of the BC objective as follows:
$$
\underset {\theta} {\arg \max } \mathbb {E} _ {s \sim d ^ {\pi_ {e}}, a \sim \pi_ {e} (\cdot | s)} [ \log \pi_ {\theta} (a | s) \cdot \hat {R} (s, a) ], \tag {8}
$$
Algorithm 1 Behavioral Cloning from Noisy Demonstrations
1: Given the expert demonstrations $\mathcal{D}$
2: Set $\hat{R}(s, a) = 1$ for $\forall (s, a) \in \mathcal{D}$ .
3: Split $\mathcal{D}$ into $K$ disjoint sets $\{\mathcal{D}^1, \mathcal{D}^2, \dots, \mathcal{D}^K\}$ .
4: for iteration $= 1$ , $M$ do
5: for $k = 1, K$ do
6: Initialize parameters $\theta^k$ .
7: for $l = 1, L$ do
8: Sample a random minibatch of $N$ state-action pairs $(s_n, a_n)$ from $\mathcal{D}^k$ .
9: Calculate a sampled gradient $\frac{1}{N} \sum_{n=1}^{N} \nabla_{\theta^k} \log \pi_{\theta^k}(a_n | s_n) \cdot \hat{R}(s_n, a_n)$.

10: Update $\theta^k$ by gradient ascent using the sampled gradient.
11: end for
12: end for
13: Copy $\pi_{\theta_{old}} \gets \pi_\theta$ .
14: Set $\hat{R}(s, a) = \pi_{\theta_{old}}(a|s)$ for $\forall (s, a) \in \mathcal{D}$ .
15: end for
16: return $\pi_\theta$ .
where $\hat{R}: S \times \mathcal{A} \to [0,1]$ denotes an arbitrary function, which may differ from $R$. If $\hat{R}(s,a) = 1$ for all $(s,a) \in S \times \mathcal{A}$, the objective (8) reduces to the BC objective (1). If $\int_{\mathcal{A}} \hat{R}(s,a)\, da = 1$ holds for all $s \in S$, $\hat{R}(s,a)$ can be interpreted as weights for the action samples in the demonstrations, so that actions are sampled according to their relative weights. The objective (8) can also be viewed as that of the off-policy actor-critic (Off-PAC) algorithm<sup>1</sup> (Degris et al., 2012) with reward function $\hat{R}(s,a)$ and a zero discount factor.

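The role of $\hat{R}$ in the generalized objective (8) can be sketched in a few lines of NumPy. The function name and the Monte-Carlo form below are our own illustration, not part of the paper:

```python
import numpy as np

def weighted_bc_loss(log_probs, weights):
    """Negative Monte-Carlo estimate of objective (8): demonstrated
    log-likelihoods log pi_theta(a_n|s_n), each scaled by a weight
    R_hat(s_n, a_n) in [0, 1]."""
    log_probs = np.asarray(log_probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return -np.mean(log_probs * weights)

# With all weights equal to 1 this reduces to the standard BC loss (1);
# setting a weight to 0 simply drops that state-action pair.
```

Minimizing this loss with uniform weights is exactly BC; non-uniform weights rebalance which demonstrated actions the learner fits.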
Let $\pi_{\theta^1}, \pi_{\theta^2}, \dots, \pi_{\theta^K}$ be $K$ parameterized policies with different initial parameters $\theta^1, \theta^2, \dots, \theta^K$ , and $\pi_\theta(a|s) = \sum_{k=1}^{K} \pi_{\theta^k}(a|s)/K$ denotes an ensemble of the parameterized policies with parameters $\theta = \{\theta^1, \theta^2, \dots, \theta^K\}$ . Let $\pi_{\theta_{old}}$ be a parameterized policy with $\theta_{old}$ which was already optimized with the noisy demonstrations. The main idea of our algorithm is to reuse the old policy $\pi_{\theta_{old}}$ as $\hat{R}(s, a)$ in the generalized BC objective (8).
$$
\underset {\theta} {\arg \max } \mathbb {E} _ {s \sim d ^ {\pi_ {e}}, a \sim \pi_ {e} (\cdot | s)} [ \log \pi_ {\theta} (a | s) \cdot \pi_ {\theta_ {o l d}} (a | s) ]. \tag {9}
$$
The overview of our algorithm is described in Algorithm 1.
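As a concrete, runnable illustration, the following is a tabular sketch of Algorithm 1 on a discrete toy problem. The class and function names, the softmax parameterization, and the learning-rate constants are our assumptions; the paper's implementation uses neural networks (Appendix A.5):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class TabularPolicy:
    """Softmax policy over discrete states and actions; stands in for pi_theta^k."""
    def __init__(self, n_states, n_actions):
        self.logits = rng.normal(0.0, 0.1, size=(n_states, n_actions))

    def prob(self, s, a):
        return softmax(self.logits[s])[a]

    def update(self, batch, weights, lr=0.5):
        # Gradient ascent on log pi(a_n|s_n) * R_hat(s_n, a_n) (steps 9-10).
        for (s, a), w in zip(batch, weights):
            p = softmax(self.logits[s])
            grad = -p
            grad[a] += 1.0           # d log pi(a|s) / d logits[s]
            self.logits[s] += lr * w * grad

def bcnd(demos, n_states, n_actions, K=3, M=3, L=50):
    """Sketch of Algorithm 1: Behavioral Cloning from Noisy Demonstrations."""
    r_hat = {pair: 1.0 for pair in demos}            # step 2
    chunks = [demos[k::K] for k in range(K)]         # step 3: K disjoint sets
    ensemble = []
    for _ in range(M):                               # step 4
        ensemble = [TabularPolicy(n_states, n_actions) for _ in range(K)]
        for k, chunk in enumerate(chunks):           # steps 5-12
            for _ in range(L):
                idx = rng.integers(len(chunk), size=len(chunk))
                batch = [chunk[i] for i in idx]
                ensemble[k].update(batch, [r_hat[p] for p in batch])
        for (s, a) in r_hat:                         # steps 13-14: R_hat <- pi_old
            r_hat[(s, a)] = float(np.mean([pi.prob(s, a) for pi in ensemble]))
    return ensemble
```

On demonstrations where a majority action plays the role of $\pi_e^*$, the reweighting in steps 13-14 progressively concentrates the ensemble on that action across the $M$ outer iterations.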
# 5.2 WEIGHTED ACTION SAMPLING FOR $\pi_e^*$ MODE SEEKING
Since $\pi_{\theta_{old}}$ satisfies $\int_{\mathcal{A}} \pi_{\theta_{old}}(a|s)\, da = 1$ for all $s \in S$, $\pi_{\theta_{old}}$ can be interpreted as weights for weighted action sampling. We explain below the weighted action sampling procedure of our algorithm on $S_{e+*}^{\pi_e}$. Figure 1 depicts a toy example of the sampling procedure. The distribution of the noisy expert actions on $S_{e+*}^{\pi_e}$ is a mixture of two distributions, as shown in Equation (7). If $\epsilon$ is sufficiently small, $\pi_\theta$ is optimized so that its mode is closer to that of $\pi_e^*$ than to those of $\pi \in \Pi \setminus \{\pi_e^*\}$, according to the mode-seeking properties of the forward KL divergence (Ghasemipour et al., 2020). Given the sampling weights $\pi_{\theta_{old}}(a|s) = \pi_\theta(a|s)$ for the empirical action samples, the weighted action distribution is distorted so that its mode also gets closer to the mode of $\pi_e^*$. By iteratively distorting the weighted action distribution with the same procedure, its mode fits near the mode of $\pi_e^*$. The weights for actions sampled from $\pi \in \Pi \setminus \{\pi_e^*\}$ eventually become much smaller, and thus the learner will not learn $\pi \in \Pi \setminus \{\pi_e^*\}$. The mode-seeking procedure of our algorithm is analogous to the mean shift algorithm (Fukunaga & Hostetler, 1975): the mode of $\pi_\theta$ shifts towards that of $\pi_e^*$ by minimizing the KL divergence between $\pi_\theta$ and the weighted action distribution.

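The mode-seeking effect of this iterative reweighting can be reproduced in one dimension. Below, a single Gaussian is repeatedly refit to empirical actions weighted by the previous fit, mimicking $\hat{R} = \pi_{\theta_{old}}$; all constants (modes at $\pm 2$, $\epsilon = 0.4$) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy-expert actions on a state in S_{e+*}: a two-mode mixture where the
# optimal mode (around +2) carries weight 1 - eps and a non-optimal mode
# (around -2) carries weight eps = 0.4.
a = np.where(rng.random(5000) < 0.4,
             rng.normal(-2.0, 0.3, 5000),
             rng.normal(2.0, 0.3, 5000))

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Iteration 0: a plain (unweighted) maximum-likelihood fit, i.e. standard BC.
mu, sigma = a.mean(), a.std()
for _ in range(10):
    # Refit against samples weighted by the previous fit (R_hat = pi_theta_old).
    w = gauss_pdf(a, mu, sigma)
    w /= w.sum()
    mu = float(np.sum(w * a))
    sigma = max(float(np.sqrt(np.sum(w * (a - mu) ** 2))), 1e-3)
```

The fitted mean starts near the mixture mean (pulled down by the non-optimal mode) and drifts toward the majority mode at $+2$, echoing the mean-shift analogy above.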

Figure 1: A toy example of the weighted action sampling procedure at each iteration in our algorithm when given a state $s \in S_{e+*}^{\pi_e}$. On both rows, the horizontal lines are the action domains. The left and right dotted lines on the top row describe $\pi \in \Pi \setminus \{\pi_e^*\}$ and $\pi_e^*(a|s)$, respectively. The dotted lines on the bottom row describe the mixture distribution $\pi_e(a|s) = \epsilon \pi(a|s) + (1 - \epsilon) \pi_e^*(a|s)$ with $\epsilon = 0.4$. The solid lines on the top row describe $\pi_\theta(a|s)$ optimized with objective (8) at each iteration. The solid lines on the bottom row describe the distributions from which actions, already drawn by $\pi_e(a|s)$ in the noisy demonstrations, are resampled according to the current importance weight $\pi_\theta(a|s)$ at each iteration. $\pi_\theta(a|s)$ is optimized at each iteration so that the weighted distribution at the previous iteration is the target distribution.

# 5.3 REWARD MAXIMIZATION
As an Off-PAC objective, the objective (9) maximizes the expected (one-step) reward $\hat{R}(s,a) = \pi_{\theta_{old}}(a|s)$. Recall that the learner policy $\pi_{\theta}(a|s) = \sum_{k=1}^{K} \pi_{\theta^k}(a|s)/K$ is an ensemble of parameterized policies in our algorithm. Following Perrone (1993), we obtain

$$
\frac {1}{K} \sum_ {k = 1} ^ {K} \mathbb {E} _ {s \sim d ^ {\pi_ {e}}, a \sim \pi_ {e} (\cdot | s)} [ \log \pi_ {\theta^ {k}} (a | s) \cdot \hat {R} (s, a) ] \leq \mathbb {E} _ {s \sim d ^ {\pi_ {e}}, a \sim \pi_ {e} (\cdot | s)} \left[ \log \pi_ {\theta} (a | s) \cdot \hat {R} (s, a) \right], \tag {10}
$$
where we use Jensen's inequality with the concavity of the logarithm: $\frac{1}{K}\sum_{k = 1}^{K}\log \pi_{\theta^k}(a|s)\leq \log \pi_{\theta}(a|s)$. The inequality (10) indicates that the ensemble of policies $\pi_{\theta^1},\pi_{\theta^2},\dots,\pi_{\theta^K}$, each of which was learned with (8), attains an objective value in (8) at least as large as the average over the individual policies. As mentioned in 5.2, $\hat{R}(s,a) = \pi_{\theta_{old}}(a|s)$ becomes higher near the mode of $\pi_e^*$. Thus, making $\pi_{\theta}$ an ensemble further encourages the learner to shift its mode to that of $\pi_e^*$ and to avoid learning $\pi \in \Pi \setminus \{\pi_e^*\}$.

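Inequality (10) is easy to check numerically for nonnegative weights $\hat{R}$; the member-policy probabilities below are synthetic stand-ins for $\pi_{\theta^k}(a_n|s_n)$:

```python
import numpy as np

rng = np.random.default_rng(2)

K, N = 5, 100
probs = rng.uniform(0.05, 1.0, size=(K, N))   # pi_theta^k(a_n|s_n), synthetic
r_hat = rng.uniform(0.0, 1.0, size=N)         # nonnegative weights R_hat(s_n, a_n)

# Left-hand side of (10): average over members of the weighted log-likelihood.
avg_of_logs = np.mean(np.mean(np.log(probs), axis=0) * r_hat)
# Right-hand side of (10): weighted log-likelihood of the ensemble mixture.
log_of_avg = np.mean(np.log(probs.mean(axis=0)) * r_hat)

assert avg_of_logs <= log_of_avg   # Jensen: mean of logs <= log of the mean
```

The inequality holds pointwise for every $(s_n, a_n)$, so it survives any nonnegative weighting $\hat{R}$.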
# 5.4 LIMITATIONS
Our algorithm has three limitations. First, it requires $K \times M$ times the computational cost of BC, where $M$ is the number of iterations in Algorithm 1. Second, the compounding error due to the probability of non-optimal behavior $\zeta$ still remains unless sufficient amounts of demonstrations are given. Lastly, $\pi_{\theta}$ fits $\pi \in \Pi \setminus \{\pi_e^*\}$ rather than $\pi_e^*$ if the major mode of $\epsilon \pi(a|s) + (1 - \epsilon) \pi_e^*(a|s)$ is nearer to the mode of $\pi(a|s)$ than to that of $\pi_e^*$. This can occur when $\pi(a|s)$ has higher kurtosis or $\epsilon$ takes large values.

# 6 EXPERIMENTS
In our experiments, we aim to answer the following three questions:
Q1. Does our algorithm improve the learner performance more than BC given the noisy demonstrations?
Q2. Can the compounding error due to $\zeta$ be reduced as the number of noisy demonstrations increases?

Q3. Is our algorithm competitive to the existing IL methods if both annotations associated with the non-optimality and environment interactions are allowed?
# 6.1 SETUP
To answer Q1 and Q2, we evaluated our algorithm against BC on four continuous control tasks simulated with the MuJoCo physics simulator (Todorov et al., 2012). We train an agent on each task by the proximal policy optimization (PPO) algorithm (Schulman et al., 2017) using the rewards defined in OpenAI Gym (Brockman et al., 2016), and use the resulting stochastic policy as the true expert policy $\pi_{e}^{*}$. We generate the noisy expert demonstrations using $\pi_{e}^{*}$ while randomly adopting non-optimal policies $\pi$ with probability of non-optimal behavior $\epsilon$. The non-optimal policies $\pi$ are selected from uniform distributions $a \sim \mathcal{U}(-u, u)$, Gaussian distributions $a \sim \mathcal{N}(a^{*}, I)$ with $a^{*} \sim \pi_{e}^{*}(\cdot | s)$, or a deterministic policy $a = 0$, where $u \in \mathbb{R}^{|A|}$ denotes the all-ones vector and $I \in \mathbb{R}^{|A| \times |A|}$ denotes the identity matrix. $\epsilon$ is selected from $\{0.0, 0.1, 0.2, 0.3, 0.4, 0.5\}$. The noisy expert takes actions following $\pi_{e}^{*}$ if $z \geq \epsilon$ and otherwise follows $\pi$, which is fixed to the selected one throughout an episode, where $z \sim \mathcal{U}(0, 1)$. Each noisy demonstration with the selected $\epsilon$ consists of $N$ state-action pairs, where $N$ is selected from $\{5000, 10000, 50000, 100000\}$. We then run our algorithm as well as BC to train the learners using each noisy demonstration. We also conducted the same experiments on four low-dimensional discrete control tasks (see Appendix A.4).

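The noisy-expert generation described above can be sketched as follows. Here `env_step` and `pi_star` are hypothetical callables standing in for the simulator dynamics and the PPO expert, and we assume the per-step reading of the $z \geq \epsilon$ rule with the non-optimal policy fixed for the episode:

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_expert_episode(env_step, s0, pi_star, eps, act_dim, T=1000):
    """Roll out one noisy-expert episode: at each step the expert follows
    pi_star if z >= eps (z ~ U(0,1)), otherwise a non-optimal policy that
    is fixed for the whole episode (here: uniform on [-1, 1]^|A|)."""
    non_optimal = lambda s: rng.uniform(-1.0, 1.0, size=act_dim)
    s, pairs = s0, []
    for _ in range(T):
        a = pi_star(s) if rng.random() >= eps else non_optimal(s)
        pairs.append((s, a))
        s = env_step(s, a)
    return pairs
```

With `eps=0.0` this degenerates to clean expert demonstrations; swapping `non_optimal` for a Gaussian around $a^*$ or the constant $a = 0$ reproduces the other noise models.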
To answer Q3, we evaluated our algorithm against IC-GAIL (Wu et al., 2019), 2IWIL (Wu et al., 2019), T-REX (Brown et al., 2019), GAIL, and DRIL on three continuous control tasks. IC-GAIL, 2IWIL, and T-REX require both annotations associated with the non-optimality and environment interactions. GAIL and DRIL require environment interactions for training but do not address the noisy demonstration problem. The true expert policy $\pi_e^*$ is obtained in the same way as mentioned above. The non-optimal policy $\pi$ is fixed to $a \sim \mathcal{U}(-u, u)$. We generate noisy expert demonstrations consisting of 10000 state-action pairs for each $\epsilon \in \{0.05, 0.1, 0.15, \dots, 1.0\}$. We then run our algorithm and the baselines using all noisy demonstrations. A detailed description of this experimental setup can be found in Appendix A.3.

In both experiments, the performance of the learners is measured by the cumulative reward earned in an episode. The cumulative reward is normalized using those earned by $\pi_e^*$ and a random policy $a\sim \mathcal{U}(-u,u)$, so that 1.0 and 0.0 indicate the performance of $\pi_e^*$ and the random policy, respectively. We run five experiments on each task and setup, and measure the mean and standard deviation of the normalized cumulative rewards for each learner over the five runs. In all experiments, we set the number of policies in the ensemble learner policy $\pi_{\theta}$ to $K = 5$ and the number of iterations to $M = 5$. The implementation details of our algorithm can be found in Appendix A.5.

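The normalization above is an affine rescaling between the two anchors; a one-line sketch (function name ours):

```python
def normalized_return(ret, expert_ret, random_ret):
    """Map a cumulative reward so that the random policy scores 0.0 and
    the true expert pi_e* scores 1.0; values above 1.0 mean the learner
    out-earned the expert on that episode."""
    return (ret - random_ret) / (expert_ret - random_ret)
```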
# 6.2 RESULTS
Figure 2 depicts the experimental results against BC. Over all tasks, our algorithm obtains much better learner performance than BC-Single, which is a single (thus not an ensemble) policy learned by BC. This suggests that the policies learned by our algorithm are closer to $\pi_e^*$ than those learned by BC. The compounding error due to $\zeta$ is expected to be reduced as the number of demonstrations increases. Whereas BC-Ensemble, which denotes the ensemble of policies learned by BC, yields significant performance gains against BC-Single, increasing the number of noisy demonstrations has little effect on the performance of the learner trained by BC-Ensemble, as shown in Figure 2-(D). This indicates that BC-Ensemble cannot reduce the compounding error due to $\epsilon$. On the other hand, our algorithm can boost the learner performance up to that of $\pi_e^*$ as the number of demonstrations increases. This suggests that our algorithm can reduce the compounding error due to both $\epsilon$ and $\zeta$ if sufficient amounts of the noisy expert demonstrations are given, as is the case for BC with clean expert demonstrations. The results with the deterministic non-optimal policy $\pi \in \Pi \setminus \{\pi_e^*\}$, which always takes the action $a = 0$, are worse than those with the other non-optimal policies. This corresponds to the limitation of our algorithm mentioned in 5.4, since the major mode of $\epsilon \pi(a|s) + (1 - \epsilon)\pi_e^*(a|s)$ might be around $a = 0$. We also conducted ablation experiments where the number of policies $K$ is selected from $\{1, 5\}$ in our algorithm; see Appendix A.6 for details. The ablation results show that the learner obtains better performance as $K$ increases. In addition, the performance of the learner trained by our algorithm is significantly better than that of BC-Single even when $K = 1$. This suggests that our algorithm improves the learner performance not only through the ensemble approach but also by using the old policies $\pi_{\theta_{old}}$.

Table 1 shows the experimental results against IC-GAIL, 2IWIL, T-REX, GAIL and DRIL. Over all tasks, 2IWIL and our algorithm can successfully obtain the true expert performance while others can

Figure 2: (A)-(C) The performance of policies vs. $\epsilon$ given 50000 state-action pairs of the noisy expert demonstrations where the non-optimal policies $\pi \in \Pi \setminus \{\pi_e^*\}$ are (A) $\mathcal{U}(-u,u)$ , (B) $\mathcal{N}(a^*,I)$ with $a\sim \pi_{e}^{*}(\cdot |s)$ , and (C) the deterministic one $a = 0$ , respectively. (D) The performance of policies vs. the number of state-action pairs $N$ of the noisy demonstrations with $\epsilon = 0.3$ where $\pi (a|s) = \mathcal{U}(-u,u)$ . BC-Single is a policy learned by BC. BC-Ensemble is an ensemble of policies, each of which was learned by BC. Shaded regions indicate the standard deviation over five experiments.
not. This suggests that our algorithm obtains results competitive with those of existing IL methods even though neither annotations nor environment interactions are used.

Table 1: The experimental results against IL methods that require the environment interactions.
<table><tr><td></td><td>IC-GAIL</td><td>2IWIL</td><td>T-REX</td><td>GAIL</td><td>DRIL</td><td>Ours</td></tr><tr><td>Ant-v2</td><td>0.631 ± 0.162</td><td>1.042 ± 0.021</td><td>0.586 ± 0.124</td><td>0.003 ± 0.004</td><td>1.071 ± 0.023</td><td>1.055 ± 0.053</td></tr><tr><td>HalfCheetah-v2</td><td>0.941 ± 0.103</td><td>1.024 ± 0.059</td><td>0.001 ± 0.113</td><td>0.106 ± 0.003</td><td>0.065 ± 0.006</td><td>1.093 ± 0.092</td></tr><tr><td>Hopper-v2</td><td>1.233 ± 0.152</td><td>1.223 ± 0.135</td><td>0.441 ± 0.219</td><td>0.000 ± 0.001</td><td>0.910 ± 0.099</td><td>1.003 ± 0.045</td></tr></table>
# 7 CONCLUSION
In this paper, we proposed an imitation learning algorithm to cope with noisy expert demonstrations. Experimental results showed that our algorithm can learn behavior policies that are much closer to the true expert policies than those learned by BC. Since our algorithm copes well with the noisy expert demonstrations while requiring neither environment interactions nor annotations associated with the non-optimal demonstrations, it is more applicable to real-world problems than the prior works. Although our algorithm has a few limitations as mentioned in 5.4, we believe that the analysis of performance deterioration detailed in Section 4 is a step forward in solving the noisy demonstration problem. In future work, we will consider the setting where the probability of non-optimal behavior is state-dependent, which occurs in the real world more often than the state-independent case that we have considered in this paper.

# REFERENCES
Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 1, 2004.
Kianté Brantley, Wen Sun, and Mikael Henaff. Disagreement-regularized imitation learning. In International Conference on Learning Representations, 2019.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

Daniel S Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. arXiv preprint arXiv:1904.06387, 2019.
Thomas Degris, Martha White, and Richard S Sutton. Off-policy actor-critic. arXiv preprint arXiv:1205.4839, 2012.
Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pp. 2052-2062, 2019.
Keinosuke Fukunaga and Larry Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on information theory, 21(1):32-40, 1975.
Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning, pp. 1259-1277. PMLR, 2020.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256, 2010.
Daniel H Grollman and Aude G Billard. Robot learning from failed demonstrations. International Journal of Social Robotics, 4(4):331-342, 2012.
Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in neural information processing systems, pp. 4565-4573, 2016.
Michael Kaiser, Holger Friedrich, and Rudiger Dillmann. Obtaining good performance from a bad teacher. In *Programming by Demonstration vs. Learning from Examples Workshop at ML*, volume 95, 1995.
Beomjoon Kim, Amir-massoud Farahmand, Joelle Pineau, and Doina Precup. Learning from limited demonstrations. In Advances in Neural Information Processing Systems, pp. 2859-2867, 2013.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. arXiv preprint arXiv:2006.04779, 2020.
Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement learning, pp. 45-73. Springer, 2012.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforcement learning. In ICML, volume 1, pp. 2, 2000.

Michael Peter Perrone. Improving Regression Estimation: Averaging Methods for Variance Reduction with Extensions to General Convex Measure Optimization. PhD thesis, Brown University, 1993.
Dean A Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural computation, 3(1):88-97, 1991.
Stéphane Ross and Drew Bagnell. Efficient reductions for imitation learning. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 661-668, 2010.
Fumihiro Sasaki, Tetsuya Yohira, and Atsuo Kawaguchi. Sample efficient imitation learning for continuous control. In International Conference on Learning Representations, 2018.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, and Masashi Sugiyama. VILD: Variational imitation learning with diverse-quality demonstrations. arXiv preprint arXiv:1909.06769, 2019.

Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.

Yueh-Hua Wu, Nontawat Charoenphakdee, Han Bao, Voot Tangkaratt, and Masashi Sugiyama. Imitation learning from imperfect demonstration. arXiv preprint arXiv:1901.09387, 2019.
Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, volume 8, pp. 1433-1438. Chicago, IL, USA, 2008.

# A APPENDIX
# A.1 DETAILED DERIVATION OF THEOREM 1

Proof. Let $q_{t} = (1 - \epsilon)^{t}$ denote the probability that the noisy expert consecutively follows $\pi_e^*$ in the first $t$ steps, and let $\chi = \sum_{t=1}^{T} q_{t-1}$ denote the sum of $q_{t-1}$ over time steps. Then we obtain:

$$
\begin{array}{l} \mathcal{J}\left(\pi_{e}, R\right) \geq \sum_{t = 1}^{T} q_{t - 1} R_{t}^{\pi_{e}} + \left(1 - q_{t - 1}\right) \cdot 0 \quad (11) \\ \geq T \left\{\frac{1}{T} \sum_{t = 1}^{T} q_{t - 1} \right\} \left\{\frac{1}{T} \sum_{t = 1}^{T} R_{t}^{\pi_{e}} \right\} \quad (12) \\ = \frac{\chi}{T} \left\{\sum_{t = 1}^{T} \mathbb{E}_{s \sim d_{t}^{\pi_{e}}} \left[ \epsilon \, \mathbb{E}_{\pi \sim p_{\Pi}} \left[ R^{\pi}(s) \right] + (1 - \epsilon) R^{\pi_{e}^{*}}(s) \right] \right\} \\ = \frac{\chi}{T} \left\{\epsilon \, \mathbb{E}_{\pi \sim p_{\Pi}} \left[ \mathcal{J}_{\pi_{e}}(\pi, R) \right] + (1 - \epsilon) \, \mathcal{J}_{\pi_{e}}\left(\pi_{e}^{*}, R\right) \right\} \\ \geq \frac{\chi}{T} \left\{\epsilon \, \mathbb{E}_{\pi \sim p_{\Pi}} \left[ \mathcal{J}_{\pi_{e}}(\pi, R) \right] + (1 - \epsilon) \, \mathbb{E}_{\pi \sim p_{\Pi}} \left[ \mathcal{J}_{\pi_{e}}(\pi, R) \right] \right\} \quad (13) \\ = \left\{\frac{1}{T} \sum_{t = 0}^{T - 1} (1 - \epsilon)^{t} \right\} \cdot \mathbb{E}_{\pi \sim p_{\Pi}} \left[ \mathcal{J}_{\pi_{e}}(\pi, R) \right] \end{array}

$$
The first inequality (11) follows from Assumptions 2 and 3. The second inequality (12) follows from Chebyshev's sum inequality with the monotonically decreasing properties according to Assumption 4. The third inequality (13) follows from Assumption 1: $\mathcal{J}_{\beta}(\pi ,R)\leq \mathcal{J}_{\beta}(\pi_{e}^{*},R)$ for any $\pi ,\beta \in \Pi \setminus \{\pi_e^*\}$.

# A.2 DETAILED DERIVATION OF THE KL DIVERGENCES
From the definition of (4), we obtain:
$$
\begin{array}{l} \mathbb{E}_{s \sim d^{\pi_{e}}, a \sim \pi_{e}(\cdot | s)} [ \log \pi_{\theta}(a | s) ] = \alpha \, \mathbb{E}_{s \sim d_{e}^{\pi_{e}}, a \sim \pi_{e}(\cdot | s)} [ \log \pi_{\theta}(a | s) ] \quad (14) \\ \qquad + \, \beta \, \mathbb{E}_{s \sim d_{e + *}^{\pi_{e}}, a \sim \pi_{e}(\cdot | s)} [ \log \pi_{\theta}(a | s) ] \quad (15) \end{array}

$$
The forward Kullback-Leibler (KL) divergence $D_{KL}$ between $\pi_e$ and $\pi_{\theta}$ over a state distribution $d^{\pi_e}$ satisfies $\mathbb{E}_{s\sim d^{\pi_e}}[D_{KL}(\pi_e(\cdot |s)||\pi_\theta (\cdot |s))] = -\mathbb{E}_{s\sim d^{\pi_e}}[\mathbb{E}_{a\sim \pi_e(\cdot |s)}[\log \pi_\theta (a|s)] + H[\pi_e(\cdot |s)]]$, where $H$ denotes the entropy. Since $H[\pi_e(\cdot |s)]$ always takes a positive value and does not depend on $\theta$, we obtain the inequality $\mathbb{E}_{s\sim d^{\pi_e},a\sim \pi_e(\cdot |s)}[\log \pi_\theta (a|s)]\leq -\mathbb{E}_{s\sim d^{\pi_e}}[D_{KL}(\pi_e(\cdot |s)||\pi_\theta (\cdot |s))]$. The same applies to (14):

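The identity and the resulting bound are easy to verify numerically on a small discrete distribution (the probabilities below are arbitrary examples):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])   # pi_e(.|s)
q = np.array([0.6, 0.1, 0.3])   # pi_theta(.|s)

expected_logq = np.sum(p * np.log(q))   # E_{a ~ pi_e}[log pi_theta(a|s)]
kl = np.sum(p * np.log(p / q))          # D_KL(pi_e || pi_theta)
entropy = -np.sum(p * np.log(p))        # H[pi_e(.|s)]

assert np.isclose(expected_logq, -(kl + entropy))   # the identity above
assert expected_logq <= -kl             # the bound, since entropy >= 0 here
```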
$$
\alpha \, \mathbb{E}_{s \sim d_{e}^{\pi_{e}}, a \sim \pi_{e}(\cdot | s)} [ \log \pi_{\theta}(a | s) ] \leq - \alpha \, \mathbb{E}_{s \sim d_{e}^{\pi_{e}}} [ D_{KL}(\pi_{e}(\cdot | s) || \pi_{\theta}(\cdot | s)) ]. \tag {16}

$$
Since $\pi_e$ adopts both $\pi_e^*$ and $\pi \in \Pi \setminus \{\pi_e^*\}$ according to the probability $\epsilon$, the term (15) can be expanded as:

$$
\begin{array}{l} \beta \, \mathbb{E}_{s \sim d_{e + *}^{\pi_{e}}, a \sim \pi_{e}(\cdot | s)} [ \log \pi_{\theta}(a | s) ] = \beta \, \mathbb{E}_{s \sim d_{e + *}^{\pi_{e}}} \Big\{ \epsilon \, \mathbb{E}_{\pi \sim p_{\Pi}, a \sim \pi(\cdot | s)} [ \log \pi_{\theta}(a | s) ] \\ \qquad + \, (1 - \epsilon) \, \mathbb{E}_{a \sim \pi_{e}^{*}(\cdot | s)} [ \log \pi_{\theta}(a | s) ] \Big\} \\ \leq - \beta \Big\{ \epsilon \, \mathbb{E}_{s \sim d_{e + *}^{\pi_{e}}, \pi \sim p_{\Pi}} [ D_{KL}(\pi(\cdot | s) \,\|\, \pi_{\theta}(\cdot | s)) ] \\ \qquad + \, (1 - \epsilon) \, \mathbb{E}_{s \sim d_{e + *}^{\pi_{e}}} [ D_{KL}(\pi_{e}^{*}(\cdot | s) \,\|\, \pi_{\theta}(\cdot | s)) ] \Big\} \quad (17) \end{array}

$$
# A.3 DETAILED DESCRIPTION OF THE EXPERIMENTAL SETUP
We annotate confidence scores for the noisy demonstrations so that the confidence is one if the demonstrations are obtained with $\epsilon = 0$ and zero otherwise. The confidence scores are used by IC-GAIL as well as 2IWIL. We use publicly available code $^2$ for the implementation of both IC-GAIL and


|
| 277 |
+
Noisy Expert
|
| 278 |
+
|
| 279 |
+

|
| 280 |
+
Probability of Non-Optimal Behavior $\epsilon$
|
| 281 |
+
BC-Single
|
| 282 |
+
|
| 283 |
+

|
| 284 |
+
BC-Ensemble
|
| 285 |
+
|
| 286 |
+

|
| 287 |
+
Ours
|
| 288 |
+
Figure 3: The performance of policies vs. $\epsilon$ given 50000 state-action pairs of the noisy expert demonstrations where the non-optimal policies $\pi \in \Pi \setminus \{\pi_e^*\}$ select actions uniformly at random. BC-Single is a policy learned by BC. BC-Ensemble is an ensemble of policies, each of which was learned by BC. Shaded regions indicate the standard deviation over five experiments.
|
| 289 |
+
|
| 290 |
+
2IWIL. We follow the training procedure of both methods as described in Section 5 in (Wu et al., 2019).
We annotate rankings for the noisy demonstrations so that smaller $\epsilon$ corresponds to a higher ranking. Then, we train the learner by T-REX given the ranked demonstration data. We use publicly available code $^3$ for the implementation of T-REX.

For training the learner with GAIL and DRIL, we use all noisy demonstrations without any screening process. We use publicly available code $^{4}$ for the implementation of GAIL and DRIL.
# A.4 EXPERIMENTAL RESULTS ON DISCRETE CONTROL TASKS
Figure 3 shows the experimental results on four discrete control tasks. Over all tasks, our algorithm obtains much better results than BC.

# A.5 IMPLEMENTATION DETAILS OF OUR ALGORITHM
We implement our algorithm using $K$ neural networks with two hidden layers to represent policies $\pi_{\theta^1}, \pi_{\theta^2}, \dots, \pi_{\theta^K}$ in the ensemble. The input of the networks is vector representations of the state. Each neural network has 100 hidden units in each hidden layer followed by hyperbolic tangent nonlinearity, and the dimensionality of its final output corresponds to that of action space. The final output is followed by softmax function in the discrete control tasks. As for the continuous control tasks, the final output represents the mean of a Gaussian policy as $\pi_{\theta^k} = \mathcal{N}(\mu_{\theta^k}(s), \sigma_{\theta^k}^2)$ , where $\sigma_{\theta^k}^2$ is implemented as a trainable independent vector from the networks. The neural network architecture for the policy trained by BC is the same as the ones for a single policy in our algorithm. We employ Adam (Kingma & Ba, 2014) for learning parameters with a learning rate of $\eta * 10^{-4}$ where $\eta = K / \sum_{k=1}^{K} \pi_{\theta_{old}^k}(\mu_{\theta_{old}^k}(s)|s)$ is a scaling parameter. The parameter $\eta$ plays a role in scaling $\hat{R} = \pi_{\theta_{old}}(a|s)$ to avoid the training being slow due to $\pi_{\theta_{old}}(a|s)$ of small values.
The parameters in all layers are initialized by Xavier initialization (Glorot & Bengio, 2010). The mini-batch size and the number of training epochs are 128 and 500, respectively.
# A.6 ABLATION EXPERIMENTS
We conducted ablation experiments to evaluate how the number of policies $K$ in the ensemble policy $\pi_{\theta}$ and the number of policies $K_{old}$ used in the old ensemble policy $\pi_{\theta_{old}}$ affect the performance. Table 2 summarizes the ablation experimental results. Even when our algorithm uses $K = 1$, as BC-Single does, it obtains better results than BC. It indicates that the
weighted action sampling described in Section 5.2 helps avoid learning the non-optimal policies without relying on the ensemble approach. The same holds for $K = 5$: our algorithm with $K = 5$ and $K_{old} = 1$ obtains much better performance than BC-Ensemble with $K = 5$, which further supports that the weighted action sampling works. The learner performance with fixed $K$ increases as $K_{old}$ increases, and similarly, the performance with fixed $K_{old}$ increases as $K$ increases. This suggests that both $K$ and $K_{old}$ affect the performance of our algorithm.
Table 2: The performance of policies in the ablation experiment. The number of state-action pairs in the noisy expert demonstrations is $N = 50000$ . The non-optimal policies $\pi \in \Pi \backslash \{\pi_e^*\}$ are $\mathcal{U}(0, I)$ . BC-Single is a policy learned by BC. BC-Ensemble is an ensemble of five policies, each of which was learned by BC. $K$ denotes the number of policies in the ensemble policy $\pi_{\theta}$ . $K_{old}$ denotes the number of policies used in the old ensemble policy $\pi_{\theta_{old}}$ . The mean and standard deviation of the normalized cumulative rewards over three experiments are reported.
<table><tr><td></td><td>Ant-v2</td><td>HalfCheetah-v2</td><td>Hopper-v2</td><td>Walker2d-v2</td><td>Average</td></tr><tr><td>BC-Single</td><td>0.149 ± 0.001</td><td>0.305 ± 0.006</td><td>0.258 ± 0.017</td><td>0.071 ± 0.004</td><td>0.196 ± 0.105</td></tr><tr><td>BC-Ensemble(K=5)</td><td>0.664 ± 0.043</td><td>0.459 ± 0.014</td><td>0.352 ± 0.028</td><td>0.279 ± 0.039</td><td>0.438 ± 0.167</td></tr><tr><td>Ours(K=1,Kold=1)</td><td>0.272 ± 0.184</td><td>0.505 ± 0.279</td><td>0.405 ± 0.206</td><td>0.281 ± 0.306</td><td>0.366 ± 0.111</td></tr><tr><td>Ours(K=5,Kold=1)</td><td>0.903 ± 0.048</td><td>1.057 ± 0.008</td><td>0.602 ± 0.130</td><td>0.345 ± 0.049</td><td>0.731 ± 0.320</td></tr><tr><td>Ours(K=1,Kold=5)</td><td>0.517 ± 0.015</td><td>0.907 ± 0.007</td><td>0.778 ± 0.093</td><td>0.414 ± 0.085</td><td>0.654 ± 0.227</td></tr><tr><td>Ours(K=5,Kold=5)</td><td>0.995 ± 0.053</td><td>1.058 ± 0.053</td><td>0.573 ± 0.079</td><td>0.364 ± 0.044</td><td>0.747 ± 0.334</td></tr></table>
behavioralcloningfromnoisydemonstrations/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a071f3dba76e4a03754ffb75f51cd6d5b6a0c7a7b531ec159f7e1b108e37f87b
size 381143

behavioralcloningfromnoisydemonstrations/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:67d32672a51fc11615388b16ef81f8f99dcd94f7efd3bde578c0c4a8ba2c8571
size 635242

benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/18ea42be-5b92-4b13-a3b3-551b4d5fdd6b_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98a1ccfec3d8b4ccb7ea655d6911e3fb8f3e9ff42dbdd9509b5a03ff320df117
size 156707

benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/18ea42be-5b92-4b13-a3b3-551b4d5fdd6b_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78b4ad240d988f30db6193be269f5cef90ba53a74f26eaff6e04da47cc3fa9db
size 185837

benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/18ea42be-5b92-4b13-a3b3-551b4d5fdd6b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d410d13c9dd10cc1a8363d25aab04de69ca1a1159df18daf99336a6a8e82abe
size 449863

benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/full.md
ADDED
@@ -0,0 +1,714 @@
# BENEFIT OF DEEP LEARNING WITH NON-CONVEX NOISY GRADIENT DESCENT: PROVABLE EXCESS RISK BOUND AND SUPERIORITY TO KERNEL METHODS
# Taiji Suzuki

Graduate School of Information Science and Technology, The University of Tokyo, Japan
Center for Advanced Intelligence Project, RIKEN, Japan

E-mail: taiji@mist.i.u-tokyo.ac.jp

# Shunta Akiyama

Graduate School of Information Science and Technology, The University of Tokyo, Japan
E-mail: akiyama@mist.i.u-tokyo.ac.jp

# ABSTRACT
Establishing a theoretical analysis that explains why deep learning can outperform shallow learning such as kernel methods is one of the biggest issues in the deep learning literature. Towards answering this question, we evaluate the excess risk of a deep learning estimator trained by noisy gradient descent with ridge regularization on a mildly overparameterized neural network, and discuss its superiority to a class of linear estimators that includes the neural tangent kernel approach, the random feature model, other kernel methods, the $k$ -NN estimator and so on. We consider a teacher-student regression model, and eventually show that any linear estimator can be outperformed by deep learning in the sense of the minimax optimal rate, especially in a high dimensional setting. The obtained excess risk bounds are so-called fast learning rates, which are faster than the $O(1 / \sqrt{n})$ rate obtained by the usual Rademacher complexity analysis. This discrepancy is induced by the non-convex geometry of the model, and we show that the noisy gradient descent used for neural network training provably reaches a near-global optimal solution even though the loss landscape is highly non-convex. Although the noisy gradient descent does not employ any explicit or implicit sparsity-inducing regularization, it shows a preferable generalization performance that dominates linear estimators.
# 1 INTRODUCTION
In the deep learning theory literature, clarifying the mechanism by which deep learning can outperform shallow approaches has attracted much attention for a long time. In particular, it is quite important to show that a tractable algorithm for deep learning can provably achieve better generalization performance than shallow methods. Towards that goal, we study the rate of convergence of the excess risk of both deep and shallow methods in the setting of a nonparametric regression problem. One of the difficulties in showing the generalization ability of deep learning with certain optimization methods is that the solution is likely to get stuck in a bad local minimum, which prevents us from showing its preferable performance. Recent studies tackled this problem by considering optimization on overparameterized networks as in the neural tangent kernel (NTK) (Jacot et al., 2018; Du et al., 2019a) and mean field analysis (Nitanda & Suzuki, 2017; Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018; 2019; Mei et al., 2018; 2019), or by analyzing noisy gradient descent such as stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011; Raginsky et al., 2017; Erdogdu et al., 2018).
The NTK analysis deals with a relatively large scale initialization so that the model is well approximated by the tangent space at the initial solution, and eventually, all analyses can be reduced to those of kernel methods (Jacot et al., 2018; Du et al., 2019b; Allen-Zhu et al., 2019; Du et al., 2019a; Arora et al., 2019; Cao & Gu, 2019; Zou et al., 2020). Although this regime is useful to show its global
convergence, the obtained estimator loses much of the advantage of deep learning because its estimation ability reduces to that of the corresponding kernel method. To overcome this issue, there are several "beyond-kernel" type analyses. For example, Allen-Zhu & Li (2019; 2020) showed the benefit of depth by analyzing ResNet-type networks. Li et al. (2020) showed global optimality of gradient descent by reducing the optimization problem to a tensor decomposition problem for a specific regression problem, and showed that the "ideal" estimator on a linear model has worse dependency on the input dimensionality. Bai & Lee (2020) considered a second order Taylor expansion and showed that the sample complexity of deep approaches has better dependency on the input dimensionality than kernel methods. Chen et al. (2020) also derived a similar conclusion by considering a hierarchical representation. The analyses mentioned above indeed show some superiority of deep learning, but all of these bounds are essentially $\Omega(1/\sqrt{n})$ where $n$ is the sample size, which is not optimal for regression problems with the squared loss (Caponnetto & de Vito, 2007). The reason why only such a sub-optimal rate is obtained is that these analyses mostly bound the generalization gap via the Rademacher complexity of the set in which the estimators live. However, to derive a tight excess risk bound instead of the generalization gap, we need to evaluate the so-called local Rademacher complexity (Mendelson, 2002; Bartlett et al., 2005; Koltchinskii, 2006) (see Eq. (2) for the definition of the excess risk). Moreover, some of the existing analyses change the target function class as the sample size $n$ increases, for example, by increasing the input dimensionality with the sample size, which makes it difficult to see how the rate of convergence is affected by the choice of estimators.
Another promising approach is the mean field analysis. There is also some work in this regime showing the superiority of deep learning over kernel methods. Ghorbani et al. (2019) showed that, when the dimensionality $d$ of the input grows polynomially with respect to $n$ , kernel methods are outperformed by neural network approaches. Although the setting of increasing $d$ reflects modern high dimensional situations well, it blurs the rate of convergence. Actually, we can show the superiority of deep learning even in a fixed dimension setting.
There are several studies on the approximation abilities of deep and shallow models. Ghorbani et al. (2020) showed adaptivity of kernel methods to the intrinsic dimensionality in terms of the approximation error and discussed differences between deep and kernel methods. Yehudai & Shamir (2019) showed that the random feature method requires a number of nodes exponential in the input dimension to obtain a good approximation of a single-neuron target function. These results concern only approximation errors; estimation errors are not compared.
Recently, the superiority of deep learning over kernel methods has also been discussed in the nonparametric statistics literature, where the minimax optimality of deep learning in terms of the excess risk is shown. In particular, it is shown that deep learning achieves a better rate of convergence than linear estimators in several settings (Schmidt-Hieber, 2020; Suzuki, 2019; Imaizumi & Fukumizu, 2019; Suzuki & Nitanda, 2019; Hayakawa & Suzuki, 2020). Here, the linear estimators are a general class of estimators that includes kernel ridge regression, $k$ -NN regression and the Nadaraya-Watson estimator. Although these analyses give a clear statistical characterization of the estimation ability of deep learning, they are not compatible with tractable optimization algorithms.
In this paper, we give a theoretical analysis that unifies these analyses and shows the superiority of a deep learning method trained by a tractable noisy gradient descent algorithm. We evaluate the excess risks of the deep learning approach and linear estimators in a nonparametric regression setting, and show that the minimax optimal convergence rate of the linear estimators can be dominated by the noisy gradient descent on neural networks. In our analysis, the model is fixed and no explicit sparse regularization is employed. Our contributions can be summarized as follows:
- A refined analysis of excess risks for a fixed model with a fixed input dimension is given to compare deep and shallow estimators. Although several studies pointed out that the curse of dimensionality is a key factor that separates shallow and deep approaches, we point out that such a separation appears even in a rather low dimensional setting, and more importantly, that the non-convexity of the model essentially makes the two regimes different.
- A lower bound of the excess risk which is valid for any linear estimator is derived. The analysis is considerably general because the class of linear estimators includes kernel ridge regression with any kernel and thus it also includes estimators in the NTK regime.
- All derived convergence rates are fast learning rates, faster than $O(1 / \sqrt{n})$ . We show that simple noisy gradient descent on a sufficiently wide two-layer neural network achieves a fast
learning rate by using the fact that the solution converges to a Bayes estimator with a Gaussian process prior, and the derived convergence rate can be faster than that of linear estimators. This is quite different from existing work that compared only constant factors of bounds with the same rate of convergence with respect to the sample size $n$ .
Other related work Bach (2017) analyzed the model capacity of neural networks and their corresponding reproducing kernel Hilbert space (RKHS), and showed that the RKHS is much larger than the neural network model. However, separation of the estimation abilities of shallow and deep methods was not proven. Moreover, the analyzed algorithm is basically a Frank-Wolfe-type method, which is not typically used in practical deep learning. The same technique was also employed by Barron (1993). The Frank-Wolfe algorithm is a kind of sparsity-inducing algorithm that is effective for estimating a function in a model with an $L_{1}$ -norm constraint. It has been shown that explicit or implicit sparse regularization such as $L_{1}$ -regularization is beneficial for obtaining better performance of deep learning in certain situations (Chizat & Bach, 2020; Chizat, 2019; Gunasekar et al., 2018; Woodworth et al., 2020; Klusowski & Barron, 2016). For example, E et al. (2019b;a) showed that the approximation error of a linear model suffers from the curse of dimensionality in a setting where the target function is in the Barron class (Barron, 1993), and showed that an $L_{1}$ -type regularization avoids the curse of dimensionality. However, our analysis goes in a different direction, where a sparse regularization is not required.
# 2 PROBLEM SETTING AND MODEL
In this section, we give the problem setting and notations that will be used in the theoretical analysis. We consider the standard nonparametric regression problem where data are generated from the following model for an unknown true function $f^{\mathrm{o}}: \mathbb{R}^{d} \to \mathbb{R}$ :
$$
y_i = f^{\mathrm{o}}(x_i) + \epsilon_i \quad (i = 1, \dots, n), \tag{1}
$$
where $x_{i}$ is independently and identically distributed from $P_{X}$ whose support is included in $\Omega = [0,1]^d$ , and $\epsilon_{i}$ is an observation noise that is independent of $x_{i}$ and satisfies $\mathrm{E}[\epsilon_i] = 0$ and $\epsilon_{i} \in [-U,U]$ almost surely. The $n$ i.i.d. observations are denoted by $D_{n} = (x_{i},y_{i})_{i = 1}^{n}$ . We want to estimate the true function $f^{\mathrm{o}}$ from the training data $D_{n}$ . To this end, we employ the squared loss $\ell (y,f(x)) = (y - f(x))^2$ and accordingly define the expected and empirical risks as $\mathcal{L}(f)\coloneqq \operatorname{E}_{Y,X}[\ell (Y,f(X))]$ and $\widehat{\mathcal{L}} (f)\coloneqq \frac{1}{n}\sum_{i = 1}^{n}\ell (y_i,f(x_i))$ , respectively. Throughout this paper, we are interested in the excess (expected) risk of an estimator $\widehat{f}$ defined by
$$
(\text{Excess risk of } \widehat{f}) \quad \mathcal{L}(\widehat{f}) - \inf_{f: \text{measurable}} \mathcal{L}(f). \tag{2}
$$
Since the loss function $\ell$ is the squared loss, the infimum $\inf_{f:\text{measurable}}\mathcal{L}(f)$ is achieved by $f^{\mathrm{o}}$ : $\inf_{f:\text{measurable}}\mathcal{L}(f) = \mathcal{L}(f^{\mathrm{o}})$ . The population $L_2(P_X)$ -norm is denoted by $\| f\|_{L_2(P_X)} := \sqrt{\mathrm{E}_{X\sim P_X}[f(X)^2]}$ and the sup-norm on the support of $P_X$ is denoted by $\| f\|_{\infty} := \sup_{x\in \operatorname{supp}(P_X)}|f(x)|$ . We can easily check that, for an estimator $\widehat{f}$ , the squared $L_2$ -distance $\| \widehat{f} -f^{\mathrm{o}}\|_{L_2(P_X)}^2$ between the estimator $\widehat{f}$ and the true function $f^{\mathrm{o}}$ is identical to the excess risk: $\mathcal{L}(\widehat{f}) - \mathcal{L}(f^{\mathrm{o}}) = \| \widehat{f} -f^{\mathrm{o}}\|_{L_2(P_X)}^2$ . Note that the excess risk is different from the generalization gap $\mathcal{L}(\widehat{f}) - \widehat{\mathcal{L}} (\widehat{f})$ . Indeed, the generalization gap typically converges at the rate $O(1 / \sqrt{n})$ , which is optimal in a typical setting (Mohri et al., 2012). On the other hand, the excess risk can converge faster than $O(1 / \sqrt{n})$ , which is known as a fast learning rate (Mendelson, 2002; Bartlett et al., 2005; Koltchinskii, 2006; Giné & Koltchinskii, 2006).
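For completeness, the identity between the excess risk and the squared $L_2(P_X)$-distance follows by expanding the squared loss for a fixed estimator $\widehat{f}$; the cross term vanishes because $\epsilon$ is independent of $X$ with $\mathrm{E}[\epsilon] = 0$:

```latex
\begin{align*}
\mathcal{L}(\widehat{f}) - \mathcal{L}(f^{\mathrm{o}})
&= \mathrm{E}\big[(f^{\mathrm{o}}(X) + \epsilon - \widehat{f}(X))^2\big]
 - \mathrm{E}\big[\epsilon^2\big] \\
&= \mathrm{E}\big[(f^{\mathrm{o}}(X) - \widehat{f}(X))^2\big]
 + 2\,\mathrm{E}\big[\epsilon\,(f^{\mathrm{o}}(X) - \widehat{f}(X))\big] \\
&= \big\|\widehat{f} - f^{\mathrm{o}}\big\|_{L_2(P_X)}^2 .
\end{align*}
```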
# 2.1 MODEL OF TRUE FUNCTIONS
To analyze the excess risk, we need to specify a function class (in other words, a model) in which the true function $f^{\mathrm{o}}$ is included. In this paper, we consider only a two layer neural network model, whereas the techniques adopted in this paper can be directly extended to deeper neural network models. We consider a teacher-student setting, that is, the true function $f^{\mathrm{o}}$ can be represented by a neural network defined as follows. For $w \in \mathbb{R}$ , let $\bar{w}$ be a "clipping" of $w$ defined as $\bar{w} :=$
$R \times \tanh(w / R)$ where $R \geq 1$ is a fixed constant, and let $[x;1] := [x^\top, 1]^\top$ for $x \in \mathbb{R}^d$ . Then, the teacher network is given by
$$
f_W(x) = \sum_{m=1}^{\infty} a_m \bar{w}_{2,m} \sigma_m(w_{1,m}^{\top} [x; 1]),
$$
where $w_{1,m} \in \mathbb{R}^{d + 1}$ and $w_{2,m} \in \mathbb{R}$ ( $m \in \mathbb{N}$ ) are the trainable parameters (where $W = (w_{1,m}, w_{2,m})_{m=1}^{\infty}$ ), $a_m \in \mathbb{R}$ ( $m \in \mathbb{N}$ ) is a fixed scaling parameter, and $\sigma_m : \mathbb{R} \to \mathbb{R}$ is an activation function for the $m$ -th node. The clipping operation on the second-layer parameters is applied purely for a technical reason: to ensure convergence of the Langevin dynamics. In practical situations the dynamics stays bounded with high probability, and the boundedness condition could be removed given further theoretical development of infinite-dimensional Langevin dynamics.
Let $\mathcal{H}$ be the set of parameters $W$ with bounded squared norm: $\mathcal{H} := \{W = (w_{1,m}, w_{2,m})_{m=1}^{\infty} \mid \sum_{m=1}^{\infty} (\|w_{1,m}\|^2 + w_{2,m}^2) < \infty\}$ . Define $\|W\|_{\mathcal{H}} := [\sum_{m=1}^{\infty} (\|w_{1,m}\|^2 + w_{2,m}^2)]^{1/2}$ for $W \in \mathcal{H}$ . Let $(\mu_m)_{m=1}^{\infty}$ be a sequence of regularization parameters such that $\mu_m \searrow 0$ . Accordingly, for a given $\gamma > 0$ , we define $\mathcal{H}_{\gamma} := \{W \in \mathcal{H} \mid \|W\|_{\mathcal{H}_{\gamma}} < \infty\}$ where $\|W\|_{\mathcal{H}_{\gamma}} := [\sum_{m=1}^{\infty} \mu_m^{-\gamma} (\|w_{1,m}\|^2 + w_{2,m}^2)]^{1/2}$ . Throughout this paper, we analyze an estimation problem in which the true function is included in the following model:
$$
\mathcal{F}_{\gamma} = \left\{ f_W \mid W \in \mathcal{H}_{\gamma},\ \|W\|_{\mathcal{H}_{\gamma}} \leq 1 \right\}.
$$
This is basically a two layer neural network with infinite width. As assumed later, $a_{m}$ decreases as $m\to \infty$ , and its decay rate controls the capacity of the model. If the first layer parameters $(w_{1,m})_m$ were fixed, this model could be regarded as a variant of the unit ball of some reproducing kernel Hilbert space (RKHS) with basis functions $a_{m}\sigma_{m}(w_{1,m}^{\top}[x;1])$ . However, since the first layer $(w_{1,m})$ is also trainable, a significant difference appears between deep and kernel approaches. The Barron class (Barron, 1993; E et al., 2019b) is relevant to this function class. Indeed, it is defined as the convex hull of $w_{2}\sigma (w_{1}^{\top}[x;1])$ with norm constraints on $(w_{1},w_{2})$ , where $\sigma$ is an activation function. On the other hand, we will put an explicit decay rate on $a_{m}$ and the parameter $W$ has an $L_{2}$ -norm constraint, which makes the model $\mathcal{F}_{\gamma}$ smaller than the Barron class.
# 3 ESTIMATORS
We consider two classes of estimators and discuss their differences: linear estimators and deep learning estimator with noisy gradient descent (NGD).
Linear estimator A class of linear estimators, which we consider as a representative of "shallow" learning approach, consists of all estimators that have the following form:
$$
\widehat{f}(x) = \sum_{i=1}^{n} y_i \varphi_i(x_1, \dots, x_n, x).
$$
Here, $(\varphi_{i})_{i = 1}^{n}$ can be any measurable functions (that are $L_{2}(P_{X})$ -integrable so that the excess risk is well defined). Thus, they could be selected as the "optimal" ones so that the corresponding linear estimator minimizes the worst case excess risk. Even for such an optimal choice, the worst case excess risk is lower bounded by our lower bound given in Theorem 1. It should be noted that a linear estimator does not necessarily correspond to a "linear model." The most relevant linear estimator in the machine learning literature is kernel ridge regression: $\widehat{f} (x) = Y^{\top}(K_X + \lambda I)^{-1}\mathbf{k}(x)$ where $K_{X} = (k(x_{i},x_{j}))_{i,j = 1}^{n}\in \mathbb{R}^{n\times n}$ , $\mathbf{k}(x) = [k(x,x_1),\ldots ,k(x,x_n)]^\top \in \mathbb{R}^n$ and $Y = [y_{1},\dots,y_{n}]^{\top}\in \mathbb{R}^{n}$ for a kernel function $k:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$ . Therefore, the ridge regression estimator in the NTK regime or the random feature model is also included in the class of linear estimators. The solution obtained by an early stopping criterion instead of regularization in the NTK regime under the squared loss is also a linear estimator. Other examples include the $k$ -NN estimator and the Nadaraya-Watson estimator. None of them trains the basis functions in a nonlinear way, which distinguishes them from the deep learning approach. In the nonparametric statistics literature, linear estimators have been studied for estimating a wavelet series model. Donoho et al. (1990; 1996) showed that a wavelet shrinkage estimator can outperform any linear estimator by proving suboptimality of linear estimators. Suzuki (2019) utilized such an argument to show the superiority of deep learning but did not present any tractable optimization algorithm.
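As a concrete instance of the linear-estimator form, the following sketch implements kernel ridge regression with a Gaussian (RBF) kernel; the kernel, bandwidth, and toy target function are illustrative choices, not from the paper. The point it demonstrates is that the fitted function is linear in the observed labels $y$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from the regression model y_i = f^o(x_i) + eps_i on [0, 1]^d.
n, d = 200, 2
X = rng.uniform(0.0, 1.0, size=(n, d))
f_true = lambda x: np.sin(2 * np.pi * x[:, 0]) * x[:, 1]
y = f_true(X) + 0.1 * rng.uniform(-1, 1, size=n)   # bounded noise

def rbf_kernel(A, B, bandwidth=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

# Kernel ridge regression: f_hat(x) = Y^T (K_X + lam I)^{-1} k(x).
lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n), y)    # (K_X + lam I)^{-1} Y

def f_hat(X_new):
    return rbf_kernel(X_new, X) @ alpha            # linear in the labels y

# Linearity in y: doubling all labels doubles every prediction.
X_test = rng.uniform(0.0, 1.0, size=(5, d))
pred = f_hat(X_test)
alpha2 = np.linalg.solve(K + lam * np.eye(n), 2 * y)
pred2 = rbf_kernel(X_test, X) @ alpha2
print(np.allclose(pred2, 2 * pred))
```

The weight functions $\varphi_i$ here are $\varphi_i(x_1,\dots,x_n,x) = [(K_X + \lambda I)^{-1}\mathbf{k}(x)]_i$: they depend on the inputs in an arbitrary (nonlinear) way, but the labels enter only linearly, which is exactly what the lower bound in Theorem 1 exploits.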
Noisy Gradient Descent with regularization As for the neural network approach, we consider a noisy gradient descent algorithm. Basically, we minimize the following regularized empirical risk:
$$
\widehat{\mathcal{L}}(f_W) + \frac{\lambda}{2} \|W\|_{\mathcal{H}_1}^2 .
$$
Here, we employ the $\mathcal{H}_1$ -norm as the regularizer. We note that the constant $\gamma$ controls the relative complexity of the true function $f^{\mathrm{o}}$ compared to the typical solution obtained under this regularization. We define a linear operator $A$ by $\lambda \| W\|_{\mathcal{H}_1}^2 = W^\top AW$ , that is, $AW = (\lambda \mu_m^{-1}w_{1,m},\lambda \mu_m^{-1}w_{2,m})_{m = 1}^{\infty}$ . The regularized empirical risk can be minimized by noisy gradient descent: $W_{k + 1} = W_k - \eta \nabla (\widehat{\mathcal{L}} (f_{W_k}) + \frac{\lambda}{2}\| W_k\|_{\mathcal{H}_1}^2) + \sqrt{\frac{2\eta}{\beta}}\xi_k$ , where $\eta >0$ is a step size and $\xi_{k} = (\xi_{k,(1,m)},\xi_{k,(2,m)})_{m = 1}^{\infty}$ is an infinite-dimensional Gaussian noise, i.e., $\xi_{k,(1,m)}$ and $\xi_{k,(2,m)}$ are independently and identically distributed from the standard normal distribution (Da Prato & Zabczyk, 1996). Here, $\nabla \widehat{\mathcal{L}} (f_W) = \frac{1}{n}\sum_{i = 1}^{n}2(f_W(x_i) - y_i)(\bar{w}_{2,m}a_m[x_i;1]\sigma_m'(w_{1,m}^\top [x_i;1]),a_m\tanh ' (w_{2,m} / R)\sigma_m(w_{1,m}^\top [x_i;1]))_{m = 1}^{\infty}$ . However, since $\nabla \| W_{k}\|_{\mathcal{H}_1}^2$ is unbounded, which makes it difficult to show convergence, we employ the semi-implicit Euler scheme defined by
$$
W_{k+1} = W_k - \eta \nabla \widehat{\mathcal{L}}(f_{W_k}) - \eta A W_{k+1} + \sqrt{\frac{2\eta}{\beta}} \xi_k
\iff
W_{k+1} = S_{\eta}\left( W_k - \eta \nabla \widehat{\mathcal{L}}(f_{W_k}) + \sqrt{\frac{2\eta}{\beta}} \xi_k \right), \tag{3}
$$
where $S_{\eta} \coloneqq (\mathrm{I} + \eta A)^{-1}$ . It is easy to check that this is equivalent to the following update rule: $W_{k} = W_{k - 1} - \eta S_{\eta}\left(\nabla \widehat{\mathcal{L}}(f_{W_{k - 1}}) + A W_{k - 1}\right) + S_{\eta}\sqrt{\frac{2\eta}{\beta}}\,\xi_{k - 1}$ . Therefore, the semi-implicit Euler scheme can be seen as a naive noisy gradient descent for minimizing the empirical risk with a slightly modified ridge regularization. This can be interpreted as a discrete time approximation of the following infinite dimensional Langevin dynamics:
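To make the update concrete, here is a finite-dimensional sketch of the semi-implicit scheme in Eq. (3), assuming a diagonal regularization operator $A$ (one scalar coordinate per node, collapsing $w_{1,m}$ and $w_{2,m}$ for brevity) and a toy quadratic objective standing in for $\widehat{\mathcal{L}}$; none of the numerical values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Semi-implicit Euler update W_{k+1} = S_eta (W_k - eta grad + noise), where
# A is diagonal with entries lambda / mu_m and S_eta = (I + eta A)^{-1}.
M = 50                                        # truncation level (network width)
lam, eta, beta = 1e-2, 1e-1, 1e3
mu = 1.0 / np.arange(1, M + 1) ** 2           # mu_m ~ m^{-2} (cf. Assumption 1)
A_diag = lam / mu                             # grows like m^2 (unbounded operator)
S_eta = 1.0 / (1.0 + eta * A_diag)            # resolvent, applied coordinatewise

def grad_empirical_risk(W):
    # Placeholder for grad L_hat(f_W); a simple quadratic toy objective here.
    return W - 1.0

W = rng.standard_normal(M)
for _ in range(2000):
    xi = rng.standard_normal(M)               # one step of Gaussian noise
    W = S_eta * (W - eta * grad_empirical_risk(W)
                 + np.sqrt(2 * eta / beta) * xi)

# Low-index coordinates settle near the toy optimum, while high-index
# coordinates are strongly shrunk by the growing eigenvalues of A.
print(abs(W[0]), abs(W[-1]))
```

Applying $S_\eta$ after the explicit gradient-plus-noise step is what keeps the iteration stable: the unbounded eigenvalues $\lambda/\mu_m$ of $A$ never multiply the state directly, they only appear inside the contraction $(1 + \eta\lambda/\mu_m)^{-1}$.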
$$
\mathrm{d}W_t = -\nabla \left( \widehat{\mathcal{L}}(f_{W_t}) + \frac{\lambda}{2} \|W_t\|_{\mathcal{H}_1}^2 \right) \mathrm{d}t + \sqrt{2/\beta}\, \mathrm{d}\xi_t, \tag{4}
$$
where $(\xi_{t})_{t\geq 0}$ is the so-called cylindrical Brownian motion (see Da Prato & Zabczyk (1996) for the details). Its application and analysis for machine learning problems with non-convex objectives have been recently studied by, for example, Muzellec et al. (2020); Suzuki (2020).
The above-mentioned algorithm is executed on an infinite-dimensional parameter space. In practice, we have to deal with a finite-width network. To do so, we approximate the solution by a finite-dimensional one: $W^{(M)} = (w_{1,m}, w_{2,m})_{m=1}^{M}$ , where $M$ corresponds to the width of the network. We identify $W^{(M)}$ with the "zero-padded" infinite-dimensional one, $W = (w_{1,m}, w_{2,m})_{m = 1}^{\infty}$ with $w_{1,m} = 0$ and $w_{2,m} = 0$ for all $m > M$ . Accordingly, we use the notation $f_{W^{(M)}}$ to indicate $f_{W}$ with the zero-padded vector $W$ . Then, the finite-dimensional version of the update rule is given by $W_{k+1}^{(M)} = S_{\eta}^{(M)} \left( W_{k}^{(M)} - \eta \nabla \widehat{\mathcal{L}}(f_{W_{k}^{(M)}}) + \sqrt{\frac{2\eta}{\beta}} \xi_{k}^{(M)} \right)$ , where $\xi_{k}^{(M)}$ is the Gaussian noise vector obtained by projecting $\xi_{k}$ onto the first $M$ components and $S_{\eta}^{(M)}$ is obtained in the same way.
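The finite-width update can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's experimental setup: the concrete choices $\mu_m = m^{-2}$, $a_m = \mu_m^{\alpha_1}$, the sigmoid activation, the bounded reparametrization $\bar{w}_{2,m} = R\tanh(w_{2,m}/R)$, and all constants are assumptions made for the sketch.

```python
import numpy as np

# Minimal sketch of the finite-width noisy gradient descent (NGD) update
# W_{k+1}^{(M)} = S_eta^{(M)} (W_k^{(M)} - eta * grad L_hat + sqrt(2 eta / beta) xi_k^{(M)}),
# with illustrative choices mu_m = m^{-2}, a_m = mu_m^{alpha1}, sigmoid activations,
# and the bounded second layer wbar_2 = R * tanh(w_2 / R).

rng = np.random.default_rng(0)
M, d, n = 16, 3, 32                          # width, input dim, sample size
alpha1, lam, beta, eta, R = 1.0, 1e-2, 100.0, 1e-2, 1.0

mu = np.arange(1, M + 1) ** (-2.0)           # decay weights mu_m
a = mu ** alpha1                             # coefficients a_m
X = np.hstack([rng.uniform(size=(n, d)), np.ones((n, 1))])  # rows [x_i; 1]
y = rng.normal(size=n)

W1 = rng.normal(size=(M, d + 1)) * 0.1       # first-layer weights w_{1,m}
W2 = rng.normal(size=M) * 0.1                # second-layer weights w_{2,m}

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def predict(W1, W2):
    H = sigmoid(X @ W1.T)                    # (n, M): sigma(w_{1,m}^T [x_i; 1])
    return H @ (a * R * np.tanh(W2 / R)), H

for _ in range(100):
    f, H = predict(W1, W2)
    r = 2.0 * (f - y) / n                    # residual factor of the squared loss
    wbar2 = R * np.tanh(W2 / R)
    # gradients of the empirical risk w.r.t. w_{1,m} and w_{2,m}
    G1 = ((r[:, None] * H * (1 - H)).T @ X) * (a * wbar2)[:, None]
    G2 = (H.T @ r) * a * (1.0 - np.tanh(W2 / R) ** 2)
    noise = np.sqrt(2 * eta / beta)
    # semi-implicit step: S_eta = (I + eta*A)^{-1} acts diagonally with A = lam / mu_m
    S = 1.0 / (1.0 + eta * lam / mu)
    W1 = S[:, None] * (W1 - eta * G1 + noise * rng.normal(size=W1.shape))
    W2 = S * (W2 - eta * G2 + noise * rng.normal(size=W2.shape))

print(np.mean((predict(W1, W2)[0] - y) ** 2))  # training loss after 100 steps
```

Note that the ridge-type regularization never appears as an explicit gradient term; it is absorbed entirely into the diagonal shrinkage factor $S_\eta$.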
# 4 CONVERGENCE RATE OF ESTIMATORS
In this section, we present excess risk bounds for linear estimators and the deep learning estimator. For the linear estimators, we give a lower bound, while for the deep learning approach we give an upper bound. To obtain these results, we set up some assumptions on the model.
# Assumption 1.
(i) There exists a constant $c_{\mu}$ such that $\mu_m \leq c_\mu m^{-2} (m \in \mathbb{N})$ .
(ii) There exists $\alpha_{1} > 1 / 2$ such that $a_{m}\leq \mu_{m}^{\alpha_{1}}$ $(m\in \mathbb{N})$ .
(iii) The activation functions $(\sigma_{m})_{m}$ are bounded as $\| \sigma_{m}\|_{\infty}\leq 1$ . Moreover, they are three times differentiable and their derivatives up to the third order are uniformly bounded: $\exists C_{\sigma}$ such that $\| \sigma_{m}\|_{1,3}:= \max \{\| \sigma_{m}^{\prime}\|_{\infty},\| \sigma_{m}^{\prime \prime}\|_{\infty},\| \sigma_{m}^{\prime \prime \prime}\|_{\infty}\} \leq C_{\sigma}$ ( $\forall m\in \mathbb{N}$ ).
The first assumption (i) controls the strength of the regularization, and combined with the second assumption (ii) and the definition of the model $\mathcal{F}_{\gamma}$ , the complexity of the model is controlled. If $\alpha_{1}$ and $\gamma$ are large, the model is less complex. Indeed, the convergence rate of the excess risk becomes faster if these parameters are large, as seen later. The decay rate $\mu_{m} \leq c_{\mu} m^{-2}$ can be generalized to $m^{-p}$ with $p > 1$ , but we employ this setting for technical simplicity to ensure convergence of the Langevin dynamics. The third assumption (iii) is satisfied by several activation functions such as the sigmoid function and the hyperbolic tangent. The assumption $\| \sigma_{m} \|_{\infty} \leq 1$ could be replaced by another one like $\| \sigma_{m} \|_{\infty} \leq C$ , but we fix this scaling for simplicity of presentation.
# 4.1 MINIMAX LOWER BOUND FOR LINEAR ESTIMATORS
Here, we analyze a lower bound on the excess risk of linear estimators, and eventually show that any linear estimator suffers from the curse of dimensionality. To show this rigorously, we consider the following minimax excess risk over the class of linear estimators:
$$
R_{\mathrm{lin}}(\mathcal{F}_{\gamma}) := \inf_{\widehat{f}:\,\mathrm{linear}} \; \sup_{f^{\mathrm{o}} \in \mathcal{F}_{\gamma}} \mathrm{E}_{D_{n}}\big[\|\widehat{f} - f^{\mathrm{o}}\|_{L_{2}(P_{X})}^{2}\big],
$$
where the infimum is taken over all linear estimators and the expectation $\mathrm{E}_{D_n}[\cdot ]$ is taken with respect to the training data $D_{n}$ . This expresses the best achievable worst-case error over the class of linear estimators for estimating a function in $\mathcal{F}_{\gamma}$ . To evaluate it, we additionally assume the following condition.
Assumption 2. We assume that $\mu_{m} = m^{-2}$ and $a_{m} = \mu_{m}^{\alpha_{1}}$ ( $m \in \mathbb{N}$ ) (and hence $c_{\mu} = 1$ ). There exists a monotonically decreasing sequence $(b_{m})_{m=1}^{\infty}$ and $s \geq 3$ such that $b_{m} = \mu_{m}^{\alpha_{2}}$ ( $\forall m$ ) with $\alpha_{2} > \gamma/2$ and $\sigma_{m}(u) = b_{m}^{s}\sigma(b_{m}^{-1}u)$ ( $u \in \mathbb{R}$ ) where $\sigma$ is the sigmoid function: $\sigma(u) = 1/(1 + e^{-u})$ .
Intuitively, the parameter $s$ controls the "resolution" of each basis function $\sigma_{m}$ , and the relation between the parameters $\alpha_{1}$ and $\alpha_{2}$ controls the magnitude of the coefficient of each basis $\sigma_{m}$ . Note that the condition $s \geq 3$ ensures that $\| \sigma_{m}\|_{1,3}$ is uniformly bounded and $0 < b_{m} \leq 1$ ensures $\| \sigma_{m}\|_{\infty} \leq 1$ . Our main strategy to obtain the lower bound is to make use of the so-called convex-hull argument. That is, it is known that, for a function class $\mathcal{F}$ , the minimax risk $R_{\mathrm{lin}}(\mathcal{F})$ over the class of linear estimators is identical to that for the convex hull of $\mathcal{F}$ (Hayakawa & Suzuki, 2020; Donoho et al., 1990):
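These boundedness claims can be checked numerically. The sketch below uses illustrative values $b = 0.5$ and $s = 3$ (not values fixed by the paper), together with the closed-form derivatives of the sigmoid.

```python
import numpy as np

# Numerical check of the remarks after Assumption 2: with sigma_m(u) = b^s * sigmoid(u/b),
# 0 < b <= 1 and s >= 3 keep sigma_m and its first three derivatives uniformly bounded.
# b and s below are hypothetical example values.
b, s = 0.5, 3
u = np.linspace(-50, 50, 200001)
sig = 1.0 / (1.0 + np.exp(-u / b))
sigma_m = b ** s * sig
# closed-form sigmoid derivatives, rescaled by the chain rule (one b^{-1} per derivative)
d1 = b ** (s - 1) * sig * (1 - sig)
d2 = b ** (s - 2) * sig * (1 - sig) * (1 - 2 * sig)
d3 = b ** (s - 3) * sig * (1 - sig) * (1 - 6 * sig + 6 * sig ** 2)
assert np.max(np.abs(sigma_m)) <= 1.0                    # ||sigma_m||_inf <= 1
assert all(np.max(np.abs(g)) <= 1.0 for g in (d1, d2, d3))  # bounded since s >= 3
print("Assumption 2 scaling check passed")
```

Each derivative picks up a factor $b^{-1}$, so the worst case is the third derivative, whose prefactor $b^{s-3}$ stays bounded exactly when $s \geq 3$.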
$$
R_{\mathrm{lin}}(\mathcal{F}) = R_{\mathrm{lin}}(\overline{\mathrm{conv}}(\mathcal{F})),
$$
where $\mathrm{conv}(\mathcal{F}) = \{\sum_{i=1}^{N}\lambda_if_i\mid f_i\in \mathcal{F}, \sum_{i=1}^{N}\lambda_i = 1, \lambda_i\geq 0, N\in \mathbb{N}\}$ and $\overline{\mathrm{conv}}(\cdot)$ is the closure of $\mathrm{conv}(\cdot)$ with respect to the $L_2(P_X)$ -norm. Intuitively, since a linear estimator is linear in the observations $(y_i)_{i=1}^n$ of the outputs, a simple application of Jensen's inequality yields that its worst-case error on the convex hull of the function class $\mathcal{F}$ does not increase compared with that on the original one $\mathcal{F}$ (see Hayakawa & Suzuki (2020) for the details). This indicates that linear estimators cannot distinguish the original hypothesis class $\mathcal{F}$ from its convex hull. Therefore, if the class $\mathcal{F}$ is highly non-convex, then linear estimators suffer from a much slower convergence rate because the convex hull $\overline{\mathrm{conv}}(\mathcal{F})$ becomes much "fatter" than the original one $\mathcal{F}$ . To make use of this argument, for each sample size $n$ , we pick an appropriate $m_n$ and consider the subset generated by the basis function $\sigma_{m_n}$ , i.e., $\mathcal{F}_{\gamma}^{(n)} := \{a_{m_n}\bar{w}_{2,m_n}\sigma_{m_n}(w_{1,m_n}^\top[x;1])\in \mathcal{F}_{\gamma}\}$ . By applying the convex-hull argument to this set, we obtain the relation $R_{\mathrm{lin}}(\mathcal{F}_{\gamma})\geq R_{\mathrm{lin}}(\mathcal{F}_{\gamma}^{(n)}) = R_{\mathrm{lin}}(\overline{\mathrm{conv}}(\mathcal{F}_{\gamma}^{(n)}))$ . Since $\mathcal{F}_{\gamma}^{(n)}$ is highly non-convex, its convex hull $\overline{\mathrm{conv}}(\mathcal{F}_{\gamma}^{(n)})$ is much larger than the original set $\mathcal{F}_{\gamma}^{(n)}$ , and thus the minimax risk over the linear estimators would be much larger than that over all estimators including deep learning. More intuitively, linear estimators do not adaptively select the basis functions and thus must prepare a redundantly large class of basis functions to approximate functions in the target function class. The following theorem gives the lower bound of the minimax excess risk over the class of linear estimators.
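The convexity step behind the identity $R_{\mathrm{lin}}(\mathcal{F}) = R_{\mathrm{lin}}(\overline{\mathrm{conv}}(\mathcal{F}))$ can be sketched in one line; this is a compressed version of the Jensen-type argument of Hayakawa & Suzuki (2020), stated under the simplifying view that the estimator is linear in $(y_i)_i$ .

```latex
% For a fixed linear estimator \hat f, the risk
%   \Phi_{\hat f}(f) := \mathrm{E}_{D_n}\big[\|\hat f - f\|_{L_2(P_X)}^2\big]
% is convex in f: \hat f is linear in (y_i)_i and y_i = f(x_i) + \epsilon_i,
% so \hat f - f is affine in f, and a squared norm of an affine map is convex.
% Hence for any convex combination \sum_j \lambda_j f_j with f_j \in \mathcal{F}:
\Phi_{\hat f}\Big(\textstyle\sum_j \lambda_j f_j\Big)
  \le \sum_j \lambda_j \, \Phi_{\hat f}(f_j)
  \le \sup_{f \in \mathcal{F}} \Phi_{\hat f}(f),
% so the supremum over \overline{\mathrm{conv}}(\mathcal{F}) equals that over \mathcal{F},
% and taking the infimum over linear \hat f yields
% R_{\mathrm{lin}}(\mathcal{F}) = R_{\mathrm{lin}}(\overline{\mathrm{conv}}(\mathcal{F})).
```

The same convexity fails for nonlinear estimators such as deep learning, which is why they can exploit the thinness of $\mathcal{F}$ itself.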
Theorem 1. Suppose that $\operatorname{Var}(\epsilon) > 0$ , $P_X$ is the uniform distribution on $[0,1]^d$ , and Assumption 2 is satisfied. Let $\tilde{\beta} = \frac{\alpha_1 + (s + 1)\alpha_2}{\alpha_2 - \gamma / 2}$ . Then, for arbitrarily small $\kappa' > 0$ , we have that
$$
R_{\mathrm{lin}}\left(\mathcal{F}_{\gamma}\right) \gtrsim n^{-\frac{2\tilde{\beta} + d}{2\tilde{\beta} + 2d}}\, n^{-\kappa'}. \tag{5}
$$
The proof is in Appendix A. We utilized the Irie-Miyake integral representation (Irie & Miyake, 1988; Hornik et al., 1990) to show that there exists a "complicated" function in the convex hull, and then adopted the technique of Zhang et al. (2002) to show the lower bound. The lower bound is characterized by the decay rate ( $\alpha_{1}$ ) of $a_{m}$ relative to that ( $\alpha_{2}$ ) of the scaling factor $b_{m}$ . Indeed, the faster $a_{m}$ decays with increasing $m$ , the faster the rate of the minimax lower bound becomes.
We can see that the minimax rate of linear estimators is quite sensitive to the dimension $d$ . Indeed, in relatively high-dimensional settings, this lower bound becomes close to the slow rate $\Omega(1/\sqrt{n})$ , which corresponds to the curse of dimensionality.
It has been pointed out that the sample complexity of kernel methods suffers from the curse of dimensionality while deep learning can avoid it with tractable algorithms (e.g., Ghorbani et al. (2019); Bach (2017)). Among them, Ghorbani et al. (2019) showed that if the dimensionality $d$ is polynomial in $n$ , then the excess risk of kernel methods is bounded away from 0 for all $n$ . On the other hand, our analysis can be applied to any linear estimator including kernel methods, and it shows that even if the dimensionality $d$ is fixed, the convergence rate of their excess risk suffers from the curse of dimensionality. This is accomplished thanks to a careful analysis of the rate of convergence. Bach (2017) derived an upper bound on the Rademacher complexity of the unit ball of the RKHS corresponding to a neural network model. However, it is just an upper bound and there is still a large gap from excess risk estimates. Allen-Zhu & Li (2019; 2020); Bai & Lee (2020); Chen et al. (2020) also analyzed lower bounds on the sample complexity of kernel methods. However, their lower bounds are not for the excess risk of the squared loss. Eventually, the sample complexities of all methods including deep learning take the form $O(C / \sqrt{n})$ , and the dependency of the coefficient $C$ on the dimensionality or other factors, such as the magnitude of residual components, is compared. On the other hand, our lower bound properly involves the properties of the squared loss such as strong convexity and smoothness, and the bound shows that the curse of dimensionality occurs even in the rate of convergence instead of just the coefficient. Finally, we would like to point out that several existing works (e.g., Ghorbani et al. (2019); Allen-Zhu & Li (2019)) considered a situation where the target function class changes as the sample size $n$ increases. In contrast, our analysis reveals that the separation between deep and shallow occurs even if the target function class $\mathcal{F}_{\gamma}$ is fixed.
# 4.2 UPPER BOUND FOR DEEP LEARNING
Here, we analyze the excess risk of deep learning trained by NGD and its algorithmic convergence rate. Our analysis heavily relies on the weak convergence of the discrete-time gradient Langevin dynamics to the stationary distribution of the continuous-time one (Eq. (4)). Under some assumptions, the continuous-time dynamics has a stationary distribution (Da Prato & Zabczyk, 1992; Maslowski, 1989; Sowers, 1992; Jacquot & Royer, 1995; Shardlow, 1999; Hairer, 2002). If we denote the probability measure on $\mathcal{H}$ corresponding to the stationary distribution by $\pi_{\infty}$ , then it is given by
$$
\frac{\mathrm{d}\pi_{\infty}}{\mathrm{d}\nu_{\beta}}(W) \propto \exp\left(-\beta\,\widehat{\mathcal{L}}(f_{W})\right),
$$
where $\nu_{\beta}$ is the Gaussian measure in $\mathcal{H}$ with mean 0 and covariance $(\beta A)^{-1}$ (see Da Prato & Zabczyk (1996) for the rigorous definition of a Gaussian measure on a Hilbert space). Remarkably, this can be seen as the Bayes posterior for the prior distribution $\nu_{\beta}$ and the "likelihood" function $\exp(-\beta \widehat{\mathcal{L}}(f_W))$ . Through this viewpoint, we can obtain an excess risk bound for the solution $W_{k}$ . The proofs of all theorems in this section are in Appendix B.
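This Gibbs-type characterization can be illustrated on a one-dimensional toy problem, where the invariant measure is an explicit Gaussian. All constants below are hypothetical, and the loss is a stand-in quadratic rather than the paper's empirical risk.

```python
import numpy as np

# Toy 1-D illustration of the stationary distribution: for L_hat(w) = (w - 1)^2 / 2
# and regularizer (lam_a/2) * w^2, the invariant measure of the dynamics is
#   pi ∝ exp(-beta * ((w - 1)^2/2 + lam_a * w^2/2)),
# a Gaussian with mean 1/(1 + lam_a) and variance 1/(beta * (1 + lam_a)).
rng = np.random.default_rng(0)
beta, eta, lam_a = 4.0, 0.05, 1.0
w, samples = 0.0, []
for k in range(202000):
    xi = rng.normal()
    # semi-implicit step: divide by (1 + eta*lam_a) instead of subtracting eta*lam_a*w
    w = (w - eta * (w - 1.0) + np.sqrt(2 * eta / beta) * xi) / (1.0 + eta * lam_a)
    if k >= 2000:                             # discard burn-in
        samples.append(w)
samples = np.asarray(samples)
print(samples.mean(), samples.var())          # close to 0.5 and 0.125
```

For this linear drift, the semi-implicit chain preserves the stationary mean exactly, so the empirical mean and variance match the Gaussian target up to Monte Carlo error.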
Under Assumption 1, the distribution of $W_{k}$ derived by the discrete-time gradient Langevin dynamics satisfies the following weak convergence property toward the stationary distribution $\pi_{\infty}$ . This convergence rate analysis builds on the techniques of Bréhier & Kopec (2016); Muzellec et al. (2020).
Proposition 1. Assume that Assumption 1 holds and $\beta >\eta$ . Then, there exist spectral gaps $\Lambda_{\eta}^{*}$ and $\Lambda_0^*$ (defined in Eq. (10) of Appendix B.1) and a constant $C_0$ such that, for any $0 < a < 1 / 4$ , the following convergence bound holds for almost every realization of the observations $D_{n}$ :
$$
\left|\mathrm{E}_{W_{k}}\left[\mathcal{L}\left(f_{W_{k}}\right) \mid D_{n}\right] - \mathrm{E}_{W \sim \pi_{\infty}}\left[\mathcal{L}\left(f_{W}\right) \mid D_{n}\right]\right| \leq C_{0}\exp\left(-\Lambda_{\eta}^{*}\eta k\right) + C_{1}\frac{\sqrt{\beta}}{\Lambda_{0}^{*}}\eta^{1/2 - a} =: \Xi_{k}, \tag{6}
$$
where $C_1$ is a constant depending only on $c_{\mu}, R, \alpha_1, C_{\sigma}, U, a$ (independent of $\eta, k, \beta, \lambda, n$ ).
This proposition indicates that the expected risk of $W_{k}$ can be almost identical to that of the "Bayes posterior solution" obeying $\pi_{\infty}$ after sufficiently many iterations $k$ with a sufficiently small step size $\eta$ , even though $\widehat{\mathcal{L}}(f_{W})$ is not convex. The definition of $\Lambda_{\eta}^{*}$ can be found in Eq. (10). We should note that its dependency on $\beta$ is exponential. Thus, if we take $\beta = \Omega(n)$ , then the computational cost required to reach a sufficiently small error could be exponential with respect to the sample size $n$ . The same convergence also holds for the finite-dimensional iterate $W_{k}^{(M)}$ with a modified stationary distribution. The constants appearing in the bound are independent of the model size $M$ (see the proof of Proposition 1 in Appendix B). In particular, the convergence is guaranteed even if $W$ is infinite dimensional. This is quite different from usual finite-dimensional analyses (Raginsky et al., 2017; Erdogdu et al., 2018; Xu et al., 2018), which require an exponential dependency on the dimension; thanks to the regularization term, we obtain a model-size-independent convergence rate. Xu et al. (2018) also analyzed a finite-dimensional gradient Langevin dynamics and obtained a similar bound in which $O(\eta)$ appears in place of the second term $\eta^{1/2-a}$ , which corresponds to the time discretization error. In our setting the regularization term is $\| W\|_{\mathcal{H}_1}^2 = \sum_m (\| w_{1,m}\|^2 + w_{2,m}^2) / \mu_m$ with $\mu_m \lesssim m^{-2}$ , but if we employ $\| W\|_{\mathcal{H}_{p/2}}^2 = \sum_m (\| w_{1,m}\|^2 + w_{2,m}^2) / \mu_m^{p/2}$ for $p > 1$ , then the time discretization error term would be modified to $\eta^{(p-1)/p-a}$ (Andersson et al., 2016). We can interpret the finite-dimensional setting as the limit $p \to \infty$ , which leads to $\eta^{(p-1)/p} \to \eta$ and recovers the finite-dimensional result $O(\eta)$ shown by Xu et al. (2018).
In addition to the above algorithmic convergence, we also have the following convergence rate for the excess risk of the finite-dimensional solution $W_{k}^{(M)}$ .
Theorem 2. Assume that Assumption 1 holds, $\eta < \beta \leq \min \{n / (2U^2), n\}$ , and $0 < \gamma < 1/2 + \alpha_1$ . Then, if the width satisfies $M \geq \min \left\{\lambda^{1/4\gamma (\alpha_1 + 1)}\beta^{1/2\gamma}, \lambda^{-1/2(\alpha_1 + 1)}, n^{1/2\gamma}\right\}$ , the expected excess risk of $W_k^{(M)}$ is bounded as
$$
\mathrm{E}_{D^{n}}\Big[\mathrm{E}_{W_{k}^{(M)}}\big[\|f_{W_{k}^{(M)}} - f^{\mathrm{o}}\|_{L_{2}(P_{X})}^{2} \mid D_{n}\big]\Big] \leq C\max\big\{(\lambda\beta)^{\frac{1/\gamma}{1 + 1/2\gamma}} n^{-\frac{1}{1 + 1/2\gamma}},\; \lambda^{-\frac{1}{2(\alpha_{1} + 1)}}\beta^{-1},\; \lambda^{\frac{\gamma}{1 + \alpha_{1}}}\big\} + \Xi_{k},
$$
where $C$ is a constant independent of $n, \beta, \lambda, \eta$ , and $k$ . In particular, if we set $\beta = \min \{n / (2U^2), n\}$ and $\lambda = \beta^{-1}$ , then for $M \geq n^{1/2(\alpha_1 + 1)}$ , we obtain
$$
\mathrm{E}_{D^{n}}\Big[\mathrm{E}_{W_{k}^{(M)}}\big[\|f_{W_{k}^{(M)}} - f^{\mathrm{o}}\|_{L_{2}(P_{X})}^{2} \mid D_{n}\big]\Big] \lesssim n^{-\frac{\gamma}{\alpha_{1} + 1}} + \Xi_{k}.
$$
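As a sanity check of how the maximum in Theorem 2 collapses to the stated rate under $\beta = \Theta(n)$ and $\lambda = \beta^{-1}$ , the exponents of $n$ can be compared directly. The reading of the exponent $1/2\gamma$ as $1/(2\gamma)$ and the values of $\gamma$ and $\alpha_1$ below are illustrative assumptions.

```python
# With beta = Theta(n) and lam = beta^{-1} = Theta(1/n), each term of the max in
# Theorem 2 becomes a power of n; the largest (slowest-decaying) exponent wins.
gamma, alpha1 = 1.0, 1.0                   # satisfies 0 < gamma < 1/2 + alpha1
t1 = -1.0 / (1.0 + 1.0 / (2.0 * gamma))    # first term: (lam*beta) = 1, n^{-1/(1+1/(2 gamma))}
t2 = 1.0 / (2.0 * (alpha1 + 1.0)) - 1.0    # second term: lam^{-1/(2(alpha1+1))} * beta^{-1}
t3 = -gamma / (alpha1 + 1.0)               # third term: lam^{gamma/(1+alpha1)}
assert max(t1, t2, t3) == t3               # dominant term, giving n^{-gamma/(alpha1+1)}
print(t1, t2, t3)
```

For these values the exponents are roughly $-0.67$, $-0.75$, and $-0.5$, so the third term dominates, matching the displayed rate $n^{-\gamma/(\alpha_1+1)}$.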
In addition to this theorem, if we further assume Assumption 2, we obtain a refined bound as follows.
Corollary 1. Assume Assumptions 1 and 2 hold and $\eta < \beta$ , and let $\beta = \min \{n/(2U^2), n\}$ and $\lambda = \beta^{-1}$ . Suppose that there exists $0 \leq q \leq s - 3$ such that $0 < \gamma < 1/2 + \alpha_1 + q\alpha_2$ . Then, the excess risk bound of $W_k^{(M)}$ for $M \geq n^{1/2(\alpha_1 + q\alpha_2 + 1)}$ can be refined as
$$
\mathrm{E}_{D^{n}}\Big[\mathrm{E}_{W_{k}^{(M)}}\big[\|f_{W_{k}^{(M)}} - f^{\mathrm{o}}\|_{L_{2}(P_{X})}^{2} \mid D_{n}\big]\Big] \lesssim n^{-\frac{\gamma}{\alpha_{1} + q\alpha_{2} + 1}} + \Xi_{k}. \tag{7}
$$
This theorem and corollary show that the tractable NGD algorithm achieves a fast convergence rate for the excess risk. Indeed, if $q$ is chosen so that $\gamma > (\alpha_{1} + q\alpha_{2} + 1)/2$ , then the excess risk bound converges faster than $O(1/\sqrt{n})$ . Remarkably, the convergence rate is not affected by the input dimension $d$ , which creates the discrepancy from linear estimators. The bound of Theorem 2 is tightest when $\gamma$ is close to $1/2 + \alpha_{1}$ ( $\gamma \approx 1/2 + \alpha_{1} + 3\alpha_{2}$ for Corollary 1), and a smaller $\gamma$ yields a looser bound. This relation between $\gamma$ and $\alpha_{1}$ reflects misspecification of the "prior" distribution. When $\gamma$ is small, the regularization $\lambda \| W\|_{\mathcal{H}_{1}}^{2}$ is not strong enough, so the variance of the posterior distribution becomes unnecessarily large for estimating the true function $f^{\mathrm{o}} \in \mathcal{F}_{\gamma}$ . Therefore, the best achievable bound is obtained when the regularization is correctly specified. The analysis of the fast rate is in contrast to some existing work (Allen-Zhu & Li, 2019; 2020; Li et al., 2020; Bai & Lee, 2020) that basically evaluated the Rademacher complexity; this is because we essentially evaluated a local Rademacher complexity instead.
# 4.3 COMPARISON BETWEEN LINEAR ESTIMATORS AND DEEP LEARNING
Here, we compare the convergence rates of the excess risks of the linear estimators and the neural network method trained by NGD, using the bounds obtained in Theorem 1 and Corollary 1, respectively. We write the lower bound (5) on the minimax excess risk of linear estimators as $R_{\mathrm{lin}}^{*}$ and the excess risk bound (7) of the neural network approach as $R_{\mathrm{NN}}^{*}$ . To make the discussion concise, we consider a specific situation where $s = 3$ and $\alpha_{1} = \gamma = \frac{1}{4}\alpha_{2}$ . In this case, $\tilde{\beta} = 17/3 \approx 5.667$ , which gives
$$
R_{\mathrm{lin}}^{*} \gtrsim n^{-\left(1 + \frac{d}{2\tilde{\beta} + d}\right)^{-1}} n^{-\kappa'} \approx n^{-\left(1 + \frac{d}{11.3 + d}\right)^{-1}} n^{-\kappa'}.
$$
On the other hand, by setting $q = 0$ , we have
$$
R_{\mathrm{NN}}^{*} \lesssim n^{-\frac{\alpha_{1}}{\alpha_{1} + 1}} = n^{-\left(1 + \frac{1}{\alpha_{1}}\right)^{-1}}.
$$
Thus, as long as $\alpha_{1} > 11.3 / d + 1\approx 2\tilde{\beta} /d + 1$ , we have that
$$
R_{\mathrm{lin}}^{*} \gtrsim R_{\mathrm{NN}}^{*}, \quad \text{and} \quad \lim_{n \to \infty} \frac{R_{\mathrm{NN}}^{*}}{R_{\mathrm{lin}}^{*}} = 0.
$$
In particular, as $d$ gets larger, $R_{\mathrm{lin}}^{*}$ approaches $\Omega(n^{-1/2})$ while $R_{\mathrm{NN}}^{*}$ is not affected by $d$ and gets close to $O(n^{-1})$ as $\alpha_{1}$ gets larger. Moreover, the inequality $\alpha_{1} > 11.3/d + 1$ can be satisfied in a relatively low-dimensional setting; for example, $d = 10$ is sufficient when $\alpha_{1} = 3$ . As $\alpha_{1}$ becomes large, the model becomes "simpler" because $(a_m)_{m=1}^{\infty}$ decays faster. However, linear estimators cannot take advantage of this information whereas deep learning can. By the convex-hull argument, this discrepancy stems from the non-convexity of the model. We also note that the superiority of deep learning is shown without sparse regularization, while several existing works showed favorable estimation properties of deep learning through sparsity-inducing regularization (Bach, 2017; Chizat, 2019; Hayakawa & Suzuki, 2020). Our analysis indicates that sparse regularization is not necessary as long as the model has non-convex geometry, i.e., sparsity is just one sufficient condition for non-convexity but not a necessary one. The parameter setting above is just a sufficient condition and the lower bound $R_{\mathrm{lin}}^{*}$ would not be tight; the superiority of deep learning would hold in much wider situations.
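The comparison can be made concrete with the values used in this subsection, taking the stated $\tilde{\beta} = 17/3$ as given; $\alpha_1 = 3$ and $d = 10$ are the example values from the text.

```python
# Numerical illustration of the rate comparison, taking tilde_beta = 17/3 as given.
tilde_beta = 17 / 3
alpha1, d = 3.0, 10.0
exp_lin = 1 / (1 + d / (2 * tilde_beta + d))   # R_lin* >~ n^{-exp_lin} (up to n^{-kappa'})
exp_nn = alpha1 / (alpha1 + 1)                 # R_NN* <~ n^{-exp_nn}
threshold = 2 * tilde_beta / d + 1             # condition alpha_1 > 2*tilde_beta/d + 1
assert alpha1 > threshold                      # threshold ~ 2.13, so alpha_1 = 3 suffices
assert exp_nn > exp_lin                        # deep learning converges strictly faster
print(exp_lin, exp_nn)
```

Here the linear exponent is about $0.68$ while the network exponent is $0.75$, so the neural network rate is strictly faster and the ratio of the two risks vanishes as $n \to \infty$.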
# 5 CONCLUSION
In this paper, we studied the excess risks of linear estimators, as representatives of shallow methods, and of a neural network estimator trained by noisy gradient descent, where the model is fixed and no sparsity-inducing regularization is imposed. Our analysis revealed that deep learning can outperform any linear estimator even in a relatively low-dimensional setting. Essentially, the non-convexity of the model induces this difference, and the curse of dimensionality for linear estimators is a consequence of the fact that the geometry of the model becomes more "non-convex" as the input dimension gets higher. All derived bounds are fast rates because the analyses concern the excess risk with the squared loss, which made it possible to compare rates of convergence. The fast learning rate of the deep learning approach is derived from the fact that the noisy gradient descent behaves like a Bayes estimator with a model-size-independent convergence rate.
# ACKNOWLEDGMENTS
TS was partially supported by JSPS Kakenhi (18K19793, 18H03201, and 20H00576), Japan Digital Design and JST-CREST.
# REFERENCES
|
| 238 |
+
|
| 239 |
+
Z. Allen-Zhu and Y. Li. What can ResNet learn efficiently, going beyond kernels? In Advances in Neural Information Processing Systems 32, pp. 9017-9028. Curran Associates, Inc., 2019.
|
| 240 |
+
Z. Allen-Zhu and Y. Li. Backward feature correction: How deep learning performs deep learning. arXiv preprint arXiv:2001.04413, 2020.
|
| 241 |
+
Z. Allen-Zhu, Y. Li, and Z. Song. A convergence theory for deep learning via over-parameterization. In Proceedings of International Conference on Machine Learning, pp. 242-252, 2019.
|
| 242 |
+
A. Andersson, R. Kruse, and S. Larsson. Duality in refined Sobolev-Malliavin spaces and weak approximation of SPDE. Stochastics and Partial Differential Equations Analysis and Computations, 4(1):113-149, 2016.
|
| 243 |
+
S. Arora, S. S. Du, W. Hu, Z. Li, and R. Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. arXiv preprint arXiv:1901.08584, 2019.
|
| 244 |
+
|
| 245 |
+
F. Bach. Breaking the curse of dimensionality with convex neural networks. Journal of Machine Learning Research, 18(19):1-53, 2017.
|
| 246 |
+
Y. Bai and J. D. Lee. Beyond linearization: On quadratic and higher-order approximation of wide neural networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkllGyBFPH.
|
| 247 |
+
A. R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information theory, 39(3):930-945, 1993.
|
| 248 |
+
P. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. The Annals of Statistics, 33:1487-1537, 2005.
|
| 249 |
+
C.-E. Brehier and M. Kopec. Approximation of the invariant law of SPDEs: error analysis using a Poisson equation for a full-discretization scheme. IMA Journal of Numerical Analysis, 37(3): 1375-1410, 07 2016.
|
| 250 |
+
Y. Cao and Q. Gu. A generalization theory of gradient descent for learning over-parameterized deep ReLU networks. arXiv preprint arXiv:1902.01384, 2019.
|
| 251 |
+
A. Caponnetto and E. de Vito. Optimal rates for regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331-368, 2007.
|
| 252 |
+
M. Chen, Y. Bai, J. D. Lee, T. Zhao, H. Wang, C. Xiong, and R. Socher. Towards understanding hierarchical learning: Benefits of neural representations. Advances in Neural Information Processing Systems, 33, 2020.
|
| 253 |
+
L. Chizat. Sparse optimization on measures with over-parameterized gradient descent. arXiv preprint arXiv:1907.10300, 2019.
|
| 254 |
+
L. Chizat and F. Bach. A note on lazy training in supervised differentiable programming. arXiv preprint arXiv:1812.07956, 2018.
|
| 255 |
+
L. Chizat and F. Bach. Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss. arXiv preprint arXiv:2002.04486, 2020.
|
| 256 |
+
G. Da Prato and J. Zabczyk. Non-explosion, boundedness and ergodicity for stochastic semilinear equations. Journal of Differential Equations, 98:181-195, 1992.
|
| 257 |
+
G. Da Prato and J. Zabczyk. *Ergodicity for Infinite Dimensional Systems*. London Mathematical Society Lecture Note Series. Cambridge University Press, 1996.
|
| 258 |
+
D. L. Donoho, R. C. Liu, and B. MacGibbon. Minimax risk over hyperrectangles, and implications. The Annal of Statistics, 18(3):1416-1437, 09 1990. doi: 10.1214/aos/1176347758.
|
| 259 |
+
D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard. Density estimation by wavelet thresholding. The Annals of Statistics, 24(2):508-539, 1996.
|
| 260 |
+
S. Du, J. Lee, H. Li, L. Wang, and X. Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pp. 1675-1685, 2019a.
|
| 261 |
+
S. S. Du, X. Zhai, B. Poczos, and A. Singh. Gradient descent provably optimizes over-parameterized neural networks. International Conference on Learning Representations 7, 2019b.
|
| 262 |
+
W. E, C. Ma, and L. Wu. A priori estimates of the population risk for two-layer neural networks. Communications in Mathematical Sciences, 17(5):1407-1425, 2019a.
|
| 263 |
+
W. E, C. Ma, and L. Wu. A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics. Science China Mathematics, pp. 1-24, 2019b.
|
| 264 |
+
M. A. Erdogdu, L. Mackey, and O. Shamir. Global non-convex optimization with discretized diffusions. In Advances in Neural Information Processing Systems 31, pp. 9671–9680. 2018.
|
| 265 |
+
|
| 266 |
+
B. Ghorbani, S. Mei, T. Misiakiewicz, and A. Montanari. Linearized two-layers neural networks in high dimension. arXiv preprint arXiv:1904.12191, 2019.
|
| 267 |
+
B. Ghorbani, S. Mei, T. Misiakiewicz, and A. Montanari. When do neural networks outperform kernel methods? arXiv preprint arXiv:2006.13409, 2020.
|
| 268 |
+
E. Giné and V. Koltchinskii. Concentration inequalities and asymptotic results for ratio type empirical processes. The Annals of Probability, 34(3):1143-1216, 2006.
|
| 269 |
+
S. Gunasekar, J. D. Lee, D. Soudry, and N. Srebro. Implicit bias of gradient descent on linear convolutional networks. In Advances in Neural Information Processing Systems, pp. 9482-9491, 2018.
|
| 270 |
+
M. Hairer. Exponential mixing properties of stochastic PDEs through asymptotic coupling. Probab. Theory Related Fields, 124(3):345-380, 2002.
|
| 271 |
+
S. Hayakawa and T. Suzuki. On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces. Neural Networks, 123:343-361, 2020. ISSN 0893-6080.
|
| 272 |
+
K. Hornik, M. Stinchcombe, and H. White. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. *Neural Networks*, 3(5):551-560, 1990.
|
| 273 |
+
M. Imaizumi and K. Fukumizu. Deep neural networks learn non-smooth functions effectively. In K. Chaudhuri and M. Sugiyama (eds.), Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pp. 869-878. PMLR, 16-18 Apr 2019.
|
| 274 |
+
B. Irie and S. Miyake. Capabilities of three-layered perceptrons. In IEEE 1988 International Conference on Neural Networks, pp. 641-648, 1988.
|
| 275 |
+
A. Jacot, F. Gabriel, and C. Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems 31, pp. 8580-8589, 2018.
|
| 276 |
+
S. Jacquot and G. Royer. Ergodicité d'une classe d'équations aux dérivées partielles stochastiques. Comptes Rendus de l'Académie des Sciences. Série I. Mathématique, 320(2):231-236, 1995.
|
| 277 |
+
J. M. Klusowski and A. R. Barron. Risk bounds for high-dimensional ridge function combinations including neural networks. arXiv preprint arXiv:1607.01434, 2016.
|
| 278 |
+
V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34:2593-2656, 2006.
|
| 279 |
+
Y. Li, T. Ma, and H. R. Zhang. Learning over-parametrized two-layer neural networks beyond ntk. volume 125 of Proceedings of Machine Learning Research, pp. 2613-2682. PMLR, 09-12 Jul 2020.
|
| 280 |
+
B. Maslowski. Strong Feller property for semilinear stochastic evolution equations and applications. In Stochastic systems and optimization (Warsaw, 1988), volume 136 of Lect. Notes Control Inf. Sci., pp. 210-224. Springer, Berlin, 1989.
|
| 281 |
+
S. Mei, A. Montanari, and P.-M. Nguyen. A mean field view of the landscape of two-layer neural networks. Proceedings of the National Academy of Sciences, 115(33):E7665-E7671, 2018. doi: 10.1073/pnas.1806579115.
|
| 282 |
+
S. Mei, T. Misiakiewicz, and A. Montanari. Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. In A. Beygelzimer and D. Hsu (eds.), Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, pp. 2388–2464, Phoenix, USA, 25–28 Jun 2019. PMLR.
|
| 283 |
+
S. Mendelson. Improving the sample complexity using global data. IEEE Transactions on Information Theory, 48:1977-1991, 2002.
|
| 284 |
+
M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
|
| 285 |
+
B. Muzellec, K. Sato, M. Massias, and T. Suzuki. Dimension-free convergence rates for gradient Langevin dynamics in RKHS. arXiv preprint arXiv:2003.00306, 2020.

A. Nitanda and T. Suzuki. Stochastic particle gradient descent for infinite ensembles. arXiv preprint arXiv:1712.05438, 2017.

M. Raginsky, A. Rakhlin, and M. Telgarsky. Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis. arXiv preprint arXiv:1702.03849, 2017.

G. Rotskoff and E. Vanden-Eijnden. Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks. In Advances in Neural Information Processing Systems 31, pp. 7146-7155. Curran Associates, Inc., 2018.

G. M. Rotskoff and E. Vanden-Eijnden. Trainability and accuracy of neural networks: An interacting particle system approach. arXiv preprint arXiv:1805.00915, 2019.

W. Rudin. Real and Complex Analysis (third edition). Mathematics series. McGraw-Hill, 1987.

J. Schmidt-Hieber. Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 48(4), 2020.

T. Shardlow. Geometric ergodicity for stochastic PDEs. Stochastic Analysis and Applications, 17(5):857-869, 1999.

R. Sowers. Large deviations for the invariant measure of a reaction-diffusion equation with non-Gaussian perturbations. Probab. Theory Related Fields, 92(3):393-421, 1992. ISSN 0178-8051.
T. Suzuki. Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H1ebTsActm.

T. Suzuki. Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics. In Advances in Neural Information Processing Systems 33. Curran Associates, Inc., 2020.

T. Suzuki and A. Nitanda. Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space. arXiv preprint arXiv:1910.12799, 2019.

M. Welling and Y.-W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, pp. 681-688, 2011.

B. Woodworth, S. Gunasekar, J. D. Lee, E. Moroshko, P. Savarese, I. Golan, D. Soudry, and N. Srebro. Kernel and rich regimes in overparametrized models. In Conference on Learning Theory, volume 125 of Proceedings of Machine Learning Research, pp. 3635-3673. PMLR, 09-12 Jul 2020.

P. Xu, J. Chen, D. Zou, and Q. Gu. Global convergence of Langevin dynamics based algorithms for nonconvex optimization. In Advances in Neural Information Processing Systems, volume 31, pp. 3122-3133. Curran Associates, Inc., 2018.

G. Yehudai and O. Shamir. On the power and limitations of random features for understanding neural networks. In Advances in Neural Information Processing Systems 32, pp. 6598-6608. Curran Associates, Inc., 2019.

S. Zhang, M.-Y. Wong, and Z. Zheng. Wavelet threshold estimation of a regression function with random design. Journal of Multivariate Analysis, 80(2):256-284, 2002.

D. Zou, Y. Cao, D. Zhou, and Q. Gu. Gradient descent optimizes over-parameterized deep ReLU networks. Machine Learning, 109(3):467-492, 2020.

# A PROOF OF THEOREM 1

We combine the "convex hull argument" with the minimax optimality analysis for linear estimators developed by Zhang et al. (2002).

Zhang et al. (2002) essentially showed the following statement in their Theorem 1.
Proposition 2 (Theorem 1 of Zhang et al. (2002)). Let $\mu$ be the Lebesgue measure. Suppose that the space $\Omega$ admits an even partition $\mathcal{A}$ with $|\mathcal{A}| = 2^K$ for an integer $K \in \mathbb{N}$: each $A \in \mathcal{A}$ has equal measure $\mu(A) = 2^{-K}$, and $\mathcal{A}$ is indeed a partition of $\Omega$, i.e., $\cup_{A \in \mathcal{A}} A = \Omega$ and $A \cap A' = \emptyset$ for $A, A' \in \mathcal{A}$ with $A \neq A'$. If $K$ is chosen so that $n^{-\gamma_1} \leq 2^{-K} \leq n^{-\gamma_2}$ for constants $\gamma_1, \gamma_2 > 0$ that are independent of $n$, then there exists an event $\mathcal{E}$ such that, for a constant $C' > 0$,

$$
P(\mathcal{E}) \geq 1 - o(1) \quad \text{and} \quad \left|\{x_i \mid x_i \in A\ (i \in \{1, \dots, n\})\}\right| \leq C' n / 2^K \quad (\forall A \in \mathcal{A}).
$$
Moreover, suppose that, for a class $\mathcal{F}^{\circ}$ of functions on $\Omega$, there exists $\Delta > 0$ that satisfies the following conditions:

1. There exists $F > 0$ such that, for any $A \in \mathcal{A}$, there exists $g \in \mathcal{F}^{\circ}$ that satisfies $g(x) \geq \frac{1}{2}\Delta F$ for all $x \in A$.
2. There exist $K'$ and $C'' > 0$ such that $\frac{1}{n}\sum_{i=1}^{n} g(x_i)^2 \leq C'' \Delta^2 2^{-K'}$ for any $g \in \mathcal{F}^{\circ}$ on the event $\mathcal{E}$.
Then, there exists a constant $F_1$ such that at least one of the following inequalities holds:

$$
\frac{F^2}{4 F_1 C''} \frac{2^{K'}}{n} \leq R_{\mathrm{lin}}\left(\mathcal{F}^{\circ}\right), \tag{8a}
$$

$$
\frac{F^3}{32} \Delta^2 2^{-K} \leq R_{\mathrm{lin}}\left(\mathcal{F}^{\circ}\right), \tag{8b}
$$

for sufficiently large $n$.
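As a quick numerical illustration of the event $\mathcal{E}$ in Proposition 2 (an illustration only, not part of the proof), the sketch below draws $n$ uniform points on $[0,1]$, partitions the interval into $2^K$ equal cells with $n^{-\gamma_1} \leq 2^{-K} \leq n^{-\gamma_2}$, and checks that no cell receives more than $C'n/2^K$ points; the values of $n$, $K$, the constant $C' = 2$, and the random seed are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000
K = 10                 # 2^K = 1024 cells; 2^-K lies between n^-1 and n^-0.25
cells = 2 ** K

# draw n uniform design points and count how many fall in each dyadic cell
x = rng.uniform(0.0, 1.0, size=n)
counts = np.bincount((x * cells).astype(int), minlength=cells)

C_prime = 2.0          # illustrative constant C'

assert counts.sum() == n                          # the cells form a partition
assert counts.max() <= C_prime * n / cells        # event E: no cell is overloaded
```

With these sizes the average cell occupancy is about $n/2^K \approx 98$ points, and the maximum over cells concentrates within a small constant factor of that average, matching the role of $C'$ in the proposition.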
Before we show the main assertion, we prepare some additional lemmas. For a sigmoid function $\sigma$ , let $\tilde{\mathcal{F}}_{C,\tau}^{(\sigma)} \coloneqq \{x \in \mathbb{R}^d \mapsto a\sigma(\tau(w^\top x + b)) \mid |a| \leq 2C, \|w\| \leq 1, |b| \leq 2\ (a, b \in \mathbb{R}, w \in \mathbb{R}^d)\}$ for $C > 0, \tau > 0$ .
Lemma 1. Let $\psi(x) = \frac{1}{2} (\sigma(x + 1) - \sigma(x - 1))$ and let $\hat{\psi}$ be its Fourier transform: $\hat{\psi}(\omega) := (2\pi)^{-1} \int e^{-\mathrm{i}\omega x} \psi(x) \, \mathrm{d}x$ . Let $h > 0$ and $D_w > 0$ . Then, by setting $\tau = h^{-1}(2\sqrt{d} + 1) D_w$ and $C = \frac{(2\sqrt{d} + 1) D_w}{\pi h |\hat{\psi}(1)|}$ , the Gaussian RBF kernel can be approximated as

$$
\inf_{\check{g} \in \overline{\operatorname{conv}}(\tilde{\mathcal{F}}^{(\sigma)}_{C,\tau})} \sup_{x \in [0,1]^d} \left| \check{g}(x) - \exp\left(-\frac{\|x - c\|^2}{2h^2}\right) \right| \leq \frac{4}{|2\pi\hat{\psi}(1)|} \left[ C_d D_w^{2(d-2)} \exp(-D_w^2/2) + \exp(-D_w) \right]
$$

for any $c \in [0,1]^d$ , where $C_d$ is a constant depending only on $d$ . In particular, the right-hand side is $O(\exp(-n^\kappa))$ if $D_w = n^\kappa$ .
Proof of Lemma 1. Let $\psi_h(x) = \psi(h^{-1}x)$ . Suppose that, for $f \in L_1(\mathbb{R}^d)$ , its Fourier transform $\hat{f}(\omega) = (2\pi)^{-d}\int e^{-\mathrm{i}\omega^\top x}f(x)\mathrm{d}x$ ( $\omega \in \mathbb{R}^d$ ) satisfies the inversion formula

$$
\int_{\mathbb{R}^d} \exp(\mathrm{i} w^\top x) \hat{f}(w) \, \mathrm{d}w = f(x)
$$

for every $x \in \mathbb{R}^d$ . Then the Irie-Miyake integral representation (Irie & Miyake (1988); see also the proof of Theorem 3.1 in Hornik et al. (1990)) gives
$$
f(x) = \int_{a \in \mathbb{R}^d} \int_{b \in \mathbb{R}} \psi\left(a^\top x + b\right) \mathrm{d}\nu(a, b) \quad (a.e.),
$$

where $\mathrm{d}\nu(a,b)$ is given by

$$
\mathrm{d}\nu(a, b) = \operatorname{Re}\left(\frac{|\omega|^d e^{-\mathrm{i}\omega b}}{2\pi \hat{\psi}(\omega)}\right) \hat{f}(\omega a) \, \mathrm{d}a \, \mathrm{d}b
$$
for any $\omega \neq 0$ . Since the characteristic function of the multivariate normal distribution gives that

$$
\int_{\mathbb{R}^d} \exp(\mathrm{i} w^\top (x - c)) \underbrace{\sqrt{\frac{h^{2d}}{(2\pi)^d}} \exp\left(-\frac{h^2 \|w\|^2}{2}\right)}_{= \hat{f}(w)} \mathrm{d}w = \exp\left(-\frac{\|x - c\|^2}{2h^2}\right) =: f(x) \quad (\forall x \in \mathbb{R}^d),
$$
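This characteristic-function identity can be checked numerically in one dimension ($d = 1$): with $\hat{f}(w) = \sqrt{h^2/(2\pi)}\, e^{-h^2 w^2/2}$, the integral $\int e^{\mathrm{i}w(x-c)}\hat{f}(w)\,\mathrm{d}w$ reproduces $e^{-(x-c)^2/(2h^2)}$. A minimal sketch (grid quadrature; the grid sizes and the values of $h$, $c$ are arbitrary choices):

```python
import numpy as np

h, c = 0.7, 0.3
w = np.linspace(-60.0, 60.0, 200_001)   # quadrature grid for the w-integral
dw = w[1] - w[0]

# hat{f}(w) = sqrt(h^2 / (2*pi)) * exp(-h^2 w^2 / 2) for the 1D Gaussian
f_hat = np.sqrt(h**2 / (2 * np.pi)) * np.exp(-h**2 * w**2 / 2)

for x in [0.0, 0.3, 1.0]:
    lhs = np.sum(np.exp(1j * w * (x - c)) * f_hat) * dw   # int e^{iw(x-c)} f_hat(w) dw
    rhs = np.exp(-(x - c) ** 2 / (2 * h**2))
    assert abs(lhs.real - rhs) < 1e-6 and abs(lhs.imag) < 1e-9
```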
we have that

$$
\exp\left(-\frac{\|x - c\|^2}{2h^2}\right) = \int_{a \in \mathbb{R}^d} \int_{b \in \mathbb{R}} \psi_h(a^\top (x - c) + b) \operatorname{Re}\left(\frac{e^{-\mathrm{i}\omega b}}{2\pi \hat{\psi}_h(\omega)}\right) \sqrt{\frac{|\omega h|^{2d}}{(2\pi)^d}} \exp\left(-\frac{(\omega h)^2 \|a\|^2}{2}\right) \mathrm{d}a \, \mathrm{d}b,
$$
for all $x \in \mathbb{R}^d$ . Since $\psi_h(\cdot) = \psi(h^{-1} \cdot)$ and $\hat{\psi}_h(\cdot) = h \hat{\psi}(h \cdot)$ by definition, the right-hand side equals

$$
\int_{a \in \mathbb{R}^d} \int_{b \in \mathbb{R}} \psi(h^{-1} [a^\top (x - c) + b]) \operatorname{Re}\left(\frac{e^{-\mathrm{i}\omega b}}{2\pi h \hat{\psi}(h\omega)}\right) \sqrt{\frac{|\omega h|^{2d}}{(2\pi)^d}} \exp\left(-\frac{(\omega h)^2 \|a\|^2}{2}\right) \mathrm{d}a \, \mathrm{d}b.
$$
Here, we set $\omega = h^{-1}$ . Let $N_{\sigma^2}$ be the probability measure of the multivariate normal distribution with mean $0$ and covariance $\sigma^2 \mathrm{I}$ , and let $A_D \coloneqq \{w \in \mathbb{R}^d \mid \|w\| \leq D\}$ . Let $D_a > 0$ and $D_b = (2\sqrt{d} + 1) D_a$ , and define

$$
f_{D_a}(x) := \frac{1}{2 D_b N_1(A_{D_a})} \int_{\|a\| \leq D_a, |b| \leq D_b} \psi\left(h^{-1}\left[a^\top (x - c) + b\right]\right) \operatorname{Re}\left(\frac{e^{-\mathrm{i} b/h}}{2\pi h \hat{\psi}(1)}\right) \sqrt{\frac{1}{(2\pi)^d}} \exp\left(-\frac{\|a\|^2}{2}\right) \mathrm{d}a \, \mathrm{d}b.
$$
Then, we can see that, for any $x \in [0,1]^d$ , it holds that

$$
\begin{aligned}
&\left| \frac{1}{2 D_b N_1(A_{D_a})} f(x) - f_{D_a}(x) \right| \\
&\leq \frac{1}{2 D_b N_1(A_{D_a}) |2\pi h \hat{\psi}(1)|} \left[ N_1(A_{D_a}^c) \int 2 \exp(-h^{-1}|x|) \, \mathrm{d}x + \int_{|b| > D_b} 2 \exp\left(-h^{-1}(|b| - 2\sqrt{d} D_a)\right) \mathrm{d}b \right] \\
&\leq \frac{1}{2 D_b N_1(A_{D_a}) |2\pi h \hat{\psi}(1)|} \left[ 4 h N_1(A_{D_a}^c) + 4 h \exp(-D_a) \right] \\
&\leq \frac{4 h}{2 D_b N_1(A_{D_a}) |2\pi h \hat{\psi}(1)|} \left[ C_d D_a^{2(d-2)} \exp(-D_a^2/2) + \exp(-D_a) \right],
\end{aligned}
$$
where $C_d > 0$ is a constant depending only on $d$ , and we used $|a^\top(x - c) + b| \geq |b| - |a^\top(x - c)| \geq |b| - 2\sqrt{d} D_a$ and $\psi(x) \leq 2\exp(-|x|)$ . Note that if $D_a = n^\kappa$ , then the right-hand side is $O(h\exp(-n^\kappa))$ . Therefore, since $N_1(A_{D_a}) \leq 1$ , by setting $\tau = h^{-1}D_b$ and $C = \frac{D_b}{\pi h|\hat{\psi}(1)|}$ , we have that

$$
\inf_{\check{g} \in \overline{\operatorname{conv}}(\tilde{\mathcal{F}}_{C,\tau}^{(\sigma)})} \sup_{x \in [0,1]^d} \left| \check{g}(x) - \exp\left(-\frac{\|x - c\|^2}{2h^2}\right) \right| \leq \frac{4}{|2\pi\hat{\psi}(1)|} \left[ C_d D_a^{2(d-2)} \exp(-D_a^2/2) + \exp(-D_a) \right].
$$
Hence, by rewriting $D_w \gets D_a$ , we obtain the assertion. As noted above, the right-hand side is $O(\exp(-n^\kappa))$ if $D_a = n^\kappa$ .
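As a sanity check on the integral representation underlying this proof (an illustration, not part of the argument), the double integral can be discretized for $d = 1$, $h = 1$, $\omega = 1$, $c = 0$ with the logistic sigmoid, giving $e^{-x^2/2} = \int\!\!\int \psi(ax + b)\,\operatorname{Re}\!\big(e^{-\mathrm{i}b}/(2\pi\hat{\psi}(1))\big)\,(2\pi)^{-1/2}e^{-a^2/2}\,\mathrm{d}a\,\mathrm{d}b$. The grid sizes and tolerance below are ad hoc choices:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def psi(t):
    # psi(x) = (sigma(x+1) - sigma(x-1)) / 2, as in Lemma 1
    return 0.5 * (sigmoid(t + 1.0) - sigmoid(t - 1.0))

# hat{psi}(1) = (2*pi)^{-1} int cos(x) psi(x) dx  (psi is even, so its FT is real)
xg = np.linspace(-60.0, 60.0, 120_001)
psihat1 = np.sum(np.cos(xg) * psi(xg)) * (xg[1] - xg[0]) / (2 * np.pi)

a = np.linspace(-8.0, 8.0, 801)
b = np.linspace(-40.0, 40.0, 8_001)
da, db = a[1] - a[0], b[1] - b[0]
wa = np.exp(-a**2 / 2) / np.sqrt(2 * np.pi)     # Gaussian weight on a
wb = np.cos(b) / (2 * np.pi * psihat1)          # Re(e^{-ib}) / (2*pi*psihat(1))

for x in [0.0, 0.5, 1.5]:
    inner = (psi(a[:, None] * x + b[None, :]) * wb[None, :]).sum(axis=1) * db
    approx = np.sum(inner * wa) * da
    assert abs(approx - np.exp(-x**2 / 2)) < 1e-2
```

The quadrature reproduces the Gaussian to within the stated tolerance, illustrating how convex combinations of scaled sigmoid differences recover the RBF kernel.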
Proof of Theorem 1. For a sample size $n$ , we fix $m_n$ , which will be determined later, and use Proposition 2 with $\mathcal{F}^{\circ} = \mathcal{F}_{\gamma}^{(n)}$ . If $w_{2,m_n} = b\sqrt{\mu_{m_n}^{\gamma}/2}$ with $|b| \leq 1$ and $w_{1,m_n} = \mu_{m_n}^{\gamma/2}[u; -u^\top c]/\sqrt{2(d+1)}$ for $u \in \mathbb{R}^d$ with $\|u\| \leq 1$ and $c \in [0,1]^d$ , then $\|(w_{1,m_n}, w_{2,m_n})\|^2 \leq \mu_{m_n}^{\gamma}(1/2 + (1 + |u^\top c|^2)/2(d+1)) \leq \mu_{m_n}^{\gamma}$ . Therefore, $\tilde{\varphi}_{u,c}(x) = a_{m_n}\bar{w}_{2,m_n}\sigma_{m_n}(w_{1,m_n}^\top[x;1]) = \mu_{m_n}^{\alpha_1}(b\mu_{m_n}^{\gamma/2}/\sqrt{2})\mu_{m_n}^{s\alpha_2}\sigma\left(\mu_{m_n}^{-\alpha_2+\gamma/2}u^\top(x-c)/\sqrt{2(d+1)}\right) \in \mathcal{F}_{\gamma}^{(n)} \subset \mathcal{F}_{\gamma}$ for all $b \in \mathbb{R}$ with $|b| \leq 1$ , $u \in \mathbb{R}^d$ with $\|u\| \leq 1$ , and $c \in [0,1]^d$ . In other words, $\mu_{m_n}^{\alpha_1+\gamma/2+s\alpha_2}(2C)^{-1}\tilde{\mathcal{F}}_{C,\tau}^{(\sigma)} \subset \mathcal{F}_{\gamma}^{(n)}$ for any $C > 0$ and $\tau = \frac{1}{\sqrt{2(d+1)}}\mu_{m_n}^{-\alpha_2+\gamma/2}$ .
Therefore, by setting $C = (2\sqrt{d}+1)D_w / (\pi h|\hat{\psi}(1)|)$ for $D_w > 0$ , Lemma 1 yields that for any $c \in [0,1]^d$ and given $h > 0$ , there exists $g \in \overline{\operatorname{conv}}(\mathcal{F}_\gamma^{(n)})$ such that

$$
\begin{aligned}
&\left\| \mu_{m_n}^{\alpha_1 + \gamma/2 + s\alpha_2} \left(\frac{2(2\sqrt{d}+1)D_w}{\pi h |\hat{\psi}(1)|}\right)^{-1} \exp\left(-\frac{\|\cdot - c\|^2}{2h^2}\right) - g \right\|_\infty \\
&\leq \mu_{m_n}^{\alpha_1 + \gamma/2 + s\alpha_2} \left(\frac{2(2\sqrt{d}+1)D_w}{\pi h |\hat{\psi}(1)|}\right)^{-1} \frac{4}{|2\pi\hat{\psi}(1)|} \left[ C_d D_w^{2(d-2)} \exp(-D_w^2/2) + \exp(-D_w) \right] \\
&= \mu_{m_n}^{\alpha_1 + \gamma/2 + s\alpha_2} \frac{h}{(2\sqrt{d}+1)D_w} \left[ C_d D_w^{2(d-2)} \exp(-D_w^2/2) + \exp(-D_w) \right].
\end{aligned}
$$
We let $D_w = n^\kappa$ for any $\kappa > 0$ and choose $\mu_{m_n}$ so that $\tau \simeq \mu_{m_n}^{-\alpha_2 + \gamma/2} = D_w h^{-1} = h^{-1} n^\kappa$ . We write $\Delta := \mu_{m_n}^{\alpha_1 + \gamma/2 + s\alpha_2}(2C)^{-1} \simeq h^{\frac{\alpha_1 + s\alpha_2 + \gamma/2}{\alpha_2 - \gamma/2} + 1} n^{-\kappa\left(\frac{\alpha_1 + s\alpha_2 + \gamma/2}{\alpha_2 - \gamma/2} + 1\right)}$ . Then, it holds that

$$
\left\| \Delta \exp\left(-\frac{\|\cdot - c\|^2}{2h^2}\right) - g \right\|_\infty \lesssim \Delta \exp(-n^\kappa). \tag{9}
$$
Here, we set $h$ as $h = 2^{-k}$ with a positive integer $k$ . Accordingly, we define a partition $\mathcal{A}$ of $\Omega$ so that any $A \in \mathcal{A}$ can be represented as $A = [2^{-k}j_1, 2^{-k}(j_1 + 1)] \times \dots \times [2^{-k}j_d, 2^{-k}(j_d + 1)]$ by non-negative integers $0 \leq j_i \leq 2^k - 1$ ( $i = 1, \ldots, d$ ). Note that $|\mathcal{A}| = 2^{dk} = h^{-d}$ .

For each $A \in \mathcal{A}$ , we define $c_A$ as $c_A = (2^{-k}(j_1 + 1/2), \ldots, 2^{-k}(j_d + 1/2))^\top$ , where $(j_1, \ldots, j_d)$ is the set of indices that satisfies $A = [2^{-k}j_1, 2^{-k}(j_1 + 1)] \times \dots \times [2^{-k}j_d, 2^{-k}(j_d + 1)]$ . For each $A \in \mathcal{A}$ , we define $g_A \in \overline{\operatorname{conv}}(\mathcal{F}_\gamma^{(n)})$ as a function that satisfies Eq. (9) for $c = c_A$ .
Now, we apply Proposition 2 with $\mathcal{F}^{\circ} = \overline{\operatorname{conv}}(\mathcal{F}_{\gamma}^{(n)})$ and $K = K' = dk$ . Let $R^* := R_{\mathrm{lin}}(\overline{\operatorname{conv}}(\mathcal{F}_{\gamma}^{(n)}))$ . First, we can see that there exists a constant $F > 0$ such that

$$
g_A(x) \geq F \Delta \quad (\forall x \in A),
$$

where we used $\exp(-n^\kappa) \ll 1$ .
Second, on the event $\mathcal{E}$ introduced in the statement of Proposition 2, there exists $C$ such that $|\{i \in \{1,\ldots,n\} \mid x_i \in A'\}| \leq C n / 2^{dk}$ for all $A' \in \mathcal{A}$ . In this case, we can check that

$$
\frac{1}{n} \sum_{i=1}^{n} \left[ \Delta \exp\left(-\frac{\|x_i - c_A\|^2}{2h^2}\right) \right]^2 \lesssim \Delta^2 h^d = \Delta^2 2^{-kd},
$$
by the uniform continuity of the Gaussian RBF. Therefore, we also have

$$
\begin{aligned}
\frac{1}{n} \sum_{i=1}^{n} g_A(x_i)^2 &\leq \frac{2}{n} \sum_{i=1}^{n} \left[ \Delta \exp\left(-\frac{\|x_i - c_A\|^2}{2h^2}\right) \right]^2 + c \Delta^2 \exp(-2n^\kappa) \\
&\lesssim \Delta^2 \left( h^d + \exp(-2n^\kappa) \right),
\end{aligned}
$$

where $c > 0$ is a constant. Thus, as long as $h$ is polynomial in $n$ , i.e., $h = \Theta(n^{-a})$ for some $a > 0$ , the right-hand side is $O(\Delta^2 h^d)$ .
Now, if we write

$$
\tilde{\beta} = \frac{\alpha_1 + s\alpha_2 + \gamma/2}{\alpha_2 - \gamma/2} + 1 = \frac{\alpha_1 + (s+1)\alpha_2}{\alpha_2 - \gamma/2},
$$

then we have $\Delta \simeq h^{\tilde{\beta}} n^{-\kappa\tilde{\beta}}$ by its definition.
Here, we choose $k$ as the maximum integer that satisfies $\frac{F^3}{32}\Delta^2 2^{-dk} > R^*$ . In this situation, it holds that

$$
h^{2\tilde{\beta} + d} n^{-2\kappa\tilde{\beta}} \simeq R^*.
$$
Since Eq. (8b) is not satisfied, Eq. (8a) must hold, and hence we have

$$
n^{-1} h^{-d} \lesssim R^* \simeq h^{2\tilde{\beta} + d} n^{-2\kappa\tilde{\beta}}
\;\Rightarrow\; h \simeq n^{-\frac{1 - 2\kappa\tilde{\beta}}{2\tilde{\beta} + 2d}}.
$$
Therefore, we obtain that

$$
R^* \gtrsim n^{-\frac{2\tilde{\beta} + d}{2\tilde{\beta} + 2d}} n^{-\frac{2\kappa d \tilde{\beta}}{2\tilde{\beta} + 2d}} \geq n^{-\frac{2\tilde{\beta} + d}{2\tilde{\beta} + 2d}} n^{-\kappa'},
$$
by setting $\kappa' = \kappa \frac{2d\tilde{\beta}}{2\tilde{\beta} + 2d}$ . This gives the assertion.
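The exponent bookkeeping in the last few displays can be verified mechanically with exact rational arithmetic; the numeric values of $\alpha_1, \alpha_2, s, \gamma, d, \kappa$ below are arbitrary admissible test choices, not quantities fixed by the proof.

```python
from fractions import Fraction as F

# arbitrary admissible test values
a1, a2, s, g, d, kap = F(1), F(2), F(1), F(1, 2), F(3), F(1, 100)

# tilde beta: the two expressions in the text coincide
tb = (a1 + s * a2 + g / 2) / (a2 - g / 2) + 1
assert tb == (a1 + (s + 1) * a2) / (a2 - g / 2)

# with h ~ n^{-(1 - 2*kap*tb)/(2*tb + 2*d)}, compute the exponent of n in n^{-1} h^{-d}
e_h = -(1 - 2 * kap * tb) / (2 * tb + 2 * d)      # h = n^{e_h}
lhs = -1 - d * e_h                                 # n^{-1} h^{-d} = n^{lhs}
rhs = -(2 * tb + d) / (2 * tb + 2 * d) - (2 * kap * d * tb) / (2 * tb + 2 * d)
assert lhs == rhs

# kappa' = kap * 2*d*tb / (2*tb + 2*d) matches the second factor in the final bound
assert (2 * kap * d * tb) / (2 * tb + 2 * d) == kap * (2 * d * tb) / (2 * tb + 2 * d)
```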
# B PROOFS OF PROPOSITION 1, THEOREM 2 AND COROLLARY 1

Proposition 1, Theorem 2 and Corollary 1 can be shown by using Propositions 3 and 4 given in Appendix B.1 below.

Let $T^{\alpha}W = (\mu_m^{\alpha}w_{1,m}, \mu_m^{\alpha}w_{2,m})_{m=1}^{\infty}$ for $W = (w_{1,m}, w_{2,m})_{m=1}^{\infty}$ and $\alpha > 0$ , and consider the model $h_W \coloneqq f_{T^{-\alpha/2}W}$ . Then, the training error can be rewritten as
$$
\widehat{\mathcal{L}}(f_W) = \widehat{\mathcal{L}}(h_{T^{\alpha/2}W}).
$$

For notational simplicity, we let $\widehat{\mathcal{L}}(W) \coloneqq \widehat{\mathcal{L}}(f_W)$ .
Let $\mathcal{H}^{(M)}$ be $\{W^{(M)} = (w_{1,m}, w_{2,m})_{m=1}^{M} \mid w_{1,m} \in \mathbb{R}^{d+1}, w_{2,m} \in \mathbb{R}, 1 \leq m \leq M\}$ and let $\iota : \mathcal{H}^{(M)} \to \mathcal{H}$ be the zero padding of $W^{(M)}$ , that is, $\iota(W^{(M)}) = (w_{1,m}', w_{2,m}')_{m=1}^{\infty} \in \mathcal{H}$ satisfies $w_{1,m}' = w_{1,m}$ , $w_{2,m}' = w_{2,m}$ ( $m \leq M$ ) and $w_{1,m}' = 0$ , $w_{2,m}' = 0$ ( $m > M$ ). Moreover, we define $\iota^* : \mathcal{H} \to \mathcal{H}^{(M)}$ as the map that extracts the first $M$ components. By abuse of notation, we write $f_{W^{(M)}}$ for $W^{(M)} \in \mathcal{H}^{(M)}$ to indicate $f_{\iota(W^{(M)})}$ . Finally, let $A^{(M)} : \mathcal{H}^{(M)} \to \mathcal{H}^{(M)}$ be a linear operator such that $A^{(M)}W^{(M)} = \iota^*(A\iota(W^{(M)}))$ , which is just a truncation of $A$ . Similarly, let $T_M^a W^{(M)}$ for $W^{(M)} \in \mathcal{H}^{(M)}$ be the operator corresponding to $T^a W$ for $W \in \mathcal{H}$ , i.e., $T_M^a W^{(M)} = \iota^*(T^a \iota(W^{(M)}))$ .
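The padding map $\iota$, the truncation $\iota^*$, and the coordinate-wise scaling $T^a$ are straightforward to realize on finite coordinate arrays. A minimal sketch (plain NumPy; one coordinate per index $m$ instead of the paper's pairs $(w_{1,m}, w_{2,m})$, and the eigenvalue sequence $\mu_m = m^{-2}$, are simplifying illustrative choices consistent with Assumption 3-(i)):

```python
import numpy as np

N = 12          # ambient truncation standing in for the infinite-dimensional H
M = 5           # dimension of H^(M)
mu = np.arange(1, N + 1, dtype=float) ** -2.0   # mu_m = m^{-2} (illustration)

def iota(w_M):
    """Zero padding H^(M) -> H: keep the first M coordinates, pad with zeros."""
    out = np.zeros(N)
    out[: len(w_M)] = w_M
    return out

def iota_star(w):
    """Truncation H -> H^(M): extract the first M components."""
    return w[:M]

def T(a, w):
    """Coordinate-wise scaling (T^a W)_m = mu_m^a w_m."""
    return mu[: len(w)] ** a * w

rng = np.random.default_rng(1)
w_M = rng.standard_normal(M)

# iota* after iota recovers the finite-dimensional vector
assert np.allclose(iota_star(iota(w_M)), w_M)
# T_M^a agrees with truncating T^a applied after padding, as in the definition
assert np.allclose(T(0.5, w_M), iota_star(T(0.5, iota(w_M))))
```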
# B.1 AUXILIARY LEMMAS

First, we show some key propositions used to prove the main results. To do so, we utilize results by Muzellec et al. (2020) and Suzuki (2020).

Assumption 3.
(i) There exists a constant $c_\mu$ such that $\mu_m \leq c_\mu m^{-2}$ .

(ii) There exist $B, L > 0$ such that the following two inequalities hold for some $a \in (1/4, 1)$ almost surely:

$$
\begin{aligned}
\|\nabla \widehat{\mathcal{L}}(W)\|_{\mathcal{H}} &\leq B \quad (\forall W \in \mathcal{H}), \\
\|\nabla \widehat{\mathcal{L}}(W) - \nabla \widehat{\mathcal{L}}(W')\|_{\mathcal{H}} &\leq L \|W - W'\|_{\mathcal{H}_{-a}} \quad (\forall W, W' \in \mathcal{H}).
\end{aligned}
$$
(iii) For any data $D_n$ , $\widehat{\mathcal{L}}$ is three times differentiable. Let $\nabla^3\widehat{\mathcal{L}}(W)$ be the third-order derivative of $\widehat{\mathcal{L}}(W)$ . This can be identified with a third-order linear form, and $\nabla^3\widehat{\mathcal{L}}(W) \cdot (h, k)$ denotes the Riesz representative of $l \in \mathcal{H} \mapsto \nabla^3\widehat{\mathcal{L}}(W) \cdot (h, k, l)$ . There exist $\alpha' \in [0,1)$ and $C_{\alpha'} \in (0, \infty)$ such that, for all $W, h, k \in \mathcal{H}$ , $\|\nabla^3\widehat{\mathcal{L}}(W) \cdot (h, k)\|_{\mathcal{H}_{-\alpha'}} \leq C_{\alpha'} \|h\|_{\mathcal{H}} \|k\|_{\mathcal{H}}$ and $\|\nabla^3\widehat{\mathcal{L}}(W) \cdot (h, k)\|_{\mathcal{H}} \leq C_{\alpha'} \|h\|_{\mathcal{H}_{\alpha'}} \|k\|_{\mathcal{H}}$ (a.s.).

Remark 1. In the analysis of Brehier & Kopec (2016), Muzellec et al. (2020) and Suzuki (2020), Assumption 3-(iii) is imposed on every finite dimensional projection $\mathcal{L}(W^{(M)})$ as a function on $\mathcal{H}^{(M)}$ for all $M \geq 1$ , instead of on $\mathcal{L}(W)$ as a function on $\mathcal{H}$ . However, the condition on $\mathcal{L}(W)$ gives a sufficient condition for any finite dimensional projection in our setting. Thus, we employ the current version.
Assumption 4. For the loss function $\ell(y, f(x)) = (y - f(x))^2$ , the following conditions hold:

(i) There exists $C > 0$ such that, for any $f_W$ ( $W \in \mathcal{H}$ ), it holds that
$$
\mathrm{E}_{X,Y}\left[ \left( \ell(Y, f_W(X)) - \ell(Y, f^*(X)) \right)^2 \right] \leq C \left( \mathcal{L}(f_W) - \mathcal{L}(f^*) \right).
$$
(ii) $\beta > 0$ is chosen so that, for any $h: \mathbb{R}^d \to \mathbb{R}$ and $x \in \operatorname{supp}(P_X)$ , it holds that

$$
\mathrm{E}_{Y|X=x}\left[ \exp\left( -\frac{\beta}{n} (\ell(Y, h(x)) - \ell(Y, f^*(x))) \right) \right] \leq 1.
$$
(iii) There exists $L_h > 0$ such that $\|\nabla_W \ell(Y, h_W(X)) - \nabla_{W'} \ell(Y, h_{W'}(X))\|_{\mathcal{H}} \leq L_h \|W - W'\|_{\mathcal{H}}$ ( $\forall W, W' \in \mathcal{H}$ ) almost surely.

(iv) There exists $C_h$ such that $\|h_W - h_{W'}\|_\infty \leq C_h \|W - W'\|_{\mathcal{H}}$ ( $\forall W, W' \in \mathcal{H}$ ).
Proposition 3. Assume that Assumption 3 holds and $\beta > \eta$ . Suppose that there exists $\bar{R} > 0$ such that $0 \leq \ell(Y, f_W(X)) \leq \bar{R}$ for any $W \in \mathcal{H}$ (a.s.). Let $\rho = \frac{1}{1 + \lambda\eta/\mu_1}$ and $b = \frac{\mu_1}{\lambda} B + \frac{c_\mu}{\beta\lambda}$ . Accordingly, let $\bar{b} = \max\{b, 1\}$ , $\kappa = \bar{b} + 1$ and $\bar{V} = 4\bar{b} / \left( \sqrt{(1 + \rho^{1/\eta})/2} - \rho^{1/\eta} \right)$ . Then, the spectral gap of the dynamics is given by
$$
\Lambda_\eta^* = \frac{\min\left( \frac{\lambda}{2\mu_1}, \frac{1}{2} \right)}{4 \log(\kappa(\bar{V} + 1)/(1 - \delta))} \delta, \tag{10}
$$
where $0 < \delta < 1$ is a real number satisfying $\delta = \Omega(\exp(-\Theta(\mathrm{poly}(\lambda^{-1})\beta)))$ . We define $\Lambda_0^* = \lim_{\eta \to 0} \Lambda_\eta^*$ (i.e., $\bar{V}$ is replaced by $4\bar{b} / \left( \sqrt{(1 + \exp(-\frac{\lambda}{\mu_1}))/2} - \exp(-\frac{\lambda}{\mu_1}) \right)$ ). We also define $C_{W_0} = \kappa[\bar{V} + 1] + \frac{\sqrt{2}(\bar{R} + b)}{\sqrt{\delta}}$ . Then, for any $0 < a < 1/4$ , the following convergence bound holds for almost every observation $D_n$ : for either $L = \mathcal{L}$ or $L = \widehat{\mathcal{L}}$ ,
$$
\left| \mathrm{E}_{W_k}[L(W_k) \mid D_n] - \mathrm{E}_{W \sim \pi_\infty}[L(W) \mid D_n] \right| \tag{11}
$$

$$
\leq C_1 \left[ C_{W_0} \exp(-\Lambda_\eta^* \eta k) + \frac{\sqrt{\beta}}{\Lambda_0^*} \eta^{1/2 - a} \right] =: \Xi_k', \tag{12}
$$

where $C_1$ is a constant depending only on $c_\mu, B, L, C_{\alpha'}, a, \bar{R}$ (independent of $\eta, k, \beta, \lambda$ ).
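For intuition (an illustration, not the paper's exact algorithm), one discrete step of a Langevin-type update of the form analyzed here, $W_{k+1} = S_\eta(W_k - \eta\nabla\widehat{\mathcal{L}}(W_k) + \sqrt{2\eta/\beta}\,\xi_k)$, can be sketched in finite dimensions. The quadratic toy loss and the choice of $S_\eta$ as the coordinate-wise shrinkage $(\mathrm{I} + \eta\lambda/\mu_m)^{-1}$ are assumptions made only for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8
mu = np.arange(1, M + 1, dtype=float) ** -2.0   # illustrative eigenvalues mu_m
lam, eta, beta = 0.1, 1e-3, 1e3

def grad_loss(w):
    # stand-in smooth empirical loss gradient (quadratic toy objective with optimum 1)
    return w - 1.0

def step(w):
    noise = np.sqrt(2 * eta / beta) * rng.standard_normal(M)   # sqrt(2*eta/beta) * xi_k
    proposal = w - eta * grad_loss(w) + noise
    return proposal / (1.0 + eta * lam / mu)    # S_eta: coordinate-wise shrinkage

w = np.zeros(M)
for _ in range(5000):
    w = step(w)

# the iterates stay bounded near the regularized optimum (each coordinate below 1)
assert np.all(np.abs(w) < 2.0)
```

The shrinkage step damps the high-index (small $\mu_m$) coordinates more strongly, which is the mechanism that keeps the infinite-dimensional dynamics well posed in the analysis above.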
Proposition 4. Assume that Assumptions 3 and 4 hold. Let $\tilde{\alpha} := 1/\{2(\alpha + 1)\}$ for a given $\alpha > 0$ and let $\theta$ be an arbitrary real number satisfying $0 < \theta < 1 - \tilde{\alpha}$ . Assume that the true function $f^{\mathrm{o}}$ can be represented as $h_{W^*} = f^{\mathrm{o}}$ for some $W^* \in \mathcal{H}_{\theta(\alpha+1)}$ . Then, if $M \geq \min\left\{\lambda^{\tilde{\alpha}/2[\theta(\alpha+1)]}\beta^{1/2[\theta(\alpha+1)]}, \lambda^{-1/2(\alpha+1)}, n^{1/2[\theta(\alpha+1)]}\right\}$ , the expected excess risk is bounded as
$$
\mathrm{E}_{D^n}\left[ \mathrm{E}_{W_k^{(M)}}\left[ \mathcal{L}\left(h_{T_M^{\alpha/2} W_k^{(M)}}\right) \mid D_n \right] - \mathcal{L}(f^{\mathrm{o}}) \right] \leq C \max\left\{ (\lambda\beta)^{\frac{2\tilde{\alpha}/\theta}{1 + \tilde{\alpha}/\theta}} n^{-\frac{1}{1 + \tilde{\alpha}/\theta}}, \lambda^{-\tilde{\alpha}}\beta^{-1}, \lambda^{\theta}, 1/n \right\} + \Xi_k', \tag{13}
$$

where $C$ is a constant independent of $n, \beta, \lambda, \eta, k$ .
Proof. Repeating the same argument as in Proposition 1 and using the same notation, Proposition 3 gives

$$
\left| \mathrm{E}_{W_k^{(M)}}[\mathcal{L}(W_k^{(M)}) \mid D_n] - \mathrm{E}_{W \sim \pi_\infty^{(M)}}[\mathcal{L}(W) \mid D_n] \right| \leq \Xi_k',
$$

for any $1 \leq M \leq \infty$ . Therefore, we just need to bound the following quantity: $\left| \mathrm{E}_{D^n}\left[ \mathrm{E}_{W^{(M)} \sim \pi_\infty^{(M)}}\left[ \mathcal{L}\left(h_{T_M^{\alpha/2} W^{(M)}}\right) \mid D_n \right] \right] - \mathcal{L}(f^{\mathrm{o}}) \right|$ .
We define $\|W^{(M)}\|_{\mathcal{H}^{(M)}} \coloneqq \|\iota(W^{(M)})\|_{\mathcal{H}}$ for $W^{(M)} \in \mathcal{H}^{(M)}$ . For $a > 0$ , we define $\mathcal{H}_a^{(M)}$ as the projection of $\mathcal{H}_a$ onto the first $M$ components, $\mathcal{H}_a^{(M)} = \{\iota^*(W) \mid W \in \mathcal{H}_a\}$ , and we define $\|W^{(M)}\|_{\mathcal{H}_a^{(M)}} \coloneqq \|\iota(W^{(M)})\|_{\mathcal{H}_a}$ (note that since $\mathcal{H}_a^{(M)}$ is a finite dimensional linear space, it is the same as $\mathcal{H}^{(M)}$ as a set). Let $\nu_\beta^{(M)}$ be the Gaussian measure on $\mathcal{H}^{(M)}$ with mean $0$ and covariance $(\beta A^{(M)})^{-1}$ , and let $\tilde{\nu}_\beta^{(M)}$ be the Gaussian measure corresponding to the random variable $T_M^{\alpha/2} W^{(M)}$ with $W^{(M)} \sim \nu_\beta^{(M)}$ . Let the concentration function be
$$
\phi_{\beta,\lambda}^{(M)}(\epsilon) := \inf_{\substack{W \in \mathcal{H}_{\alpha+1}^{(M)}: \\ \mathcal{L}(h_W) - \mathcal{L}(f^{\mathrm{o}}) \leq \epsilon^2}} \beta\lambda \|W\|_{\mathcal{H}_{\alpha+1}^{(M)}}^2 - \log \tilde{\nu}_\beta^{(M)}(\{W \in \mathcal{H}^{(M)} : \|W\|_{\mathcal{H}^{(M)}} \leq \epsilon\}) + \log(2),
$$
where, if there exists no $W \in \mathcal{H}_{\alpha+1}^{(M)}$ that satisfies the constraint in the infimum, we define $\phi_{\beta,\lambda}^{(M)}(\epsilon) = \infty$ . Let $\epsilon^* > 0$ be

$$
\epsilon^* := \max\left\{ \inf\{\epsilon > 0 \mid \phi_{\beta,\lambda}^{(M)}(\epsilon) \leq \beta\epsilon^2\}, 1/n \right\}.
$$
Then, Suzuki (2020) showed the following bound:

$$
\left| \mathrm{E}_{D^n}\left[ \mathrm{E}_{W^{(M)} \sim \pi_\infty^{(M)}}\left[ \mathcal{L}\left(h_{T_M^{\alpha/2} W^{(M)}}\right) \mid D_n \right] - \mathcal{L}(f^{\mathrm{o}}) \right] \right| \leq C \max\left\{ \epsilon^{*2}, \left( \frac{\beta}{n}\epsilon^{*2} + n^{-\frac{1}{1 + \tilde{\alpha}/\theta}} (\lambda\beta)^{\frac{2\tilde{\alpha}/\theta}{1 + \tilde{\alpha}/\theta}} \right), \frac{1}{n} \right\}. \tag{14}
$$
They also showed that, for $M = \infty$ , it holds that

$$
\epsilon^{*2} \lesssim \max\left\{ (\lambda\beta)^{-\tilde{\alpha}} \beta^{-(1-\tilde{\alpha})}, \lambda^\theta, n^{-1} \right\} = \max\left\{ \lambda^{-\tilde{\alpha}}\beta^{-1}, \lambda^\theta, n^{-1} \right\}.
$$
Substituting this bound on $\epsilon^*$ into Eq. (14), we obtain Eq. (13) for $M = \infty$ . Moreover, in their proof, if $M \geq (\epsilon^*)^{-1/[\theta(\alpha+1)]}$ , then

$$
\inf_{\substack{W \in \mathcal{H}_{\alpha+1}^{(M)}: \\ \mathcal{L}(h_W) - \mathcal{L}(f^{\mathrm{o}}) \leq \epsilon^{*2}}} \beta\lambda \|W\|_{\mathcal{H}_{\alpha+1}^{(M)}}^2 \lesssim \beta (\epsilon^*)^2.
$$
Finally, since $\tilde{\nu}_\beta^{(M)}$ is a marginal distribution of $\tilde{\nu}_\beta^{(\infty)}$ , it holds that

$$
-\log \tilde{\nu}_\beta^{(M)}\left(\{W \in \mathcal{H}^{(M)} : \|W\|_{\mathcal{H}^{(M)}} \leq \epsilon\}\right) \leq -\log \tilde{\nu}_\beta^{(\infty)}\left(\{W \in \mathcal{H} : \|W\|_{\mathcal{H}} \leq \epsilon\}\right).
$$
Therefore, as long as $M \geq (\epsilon^{*})^{-1 / [\theta(\alpha + 1)]}$ , the rate of $\epsilon^{*}$ is not deteriorated from $M = \infty$ . In other words, if $M \geq \min \left\{\lambda^{\tilde{\alpha} / 2[\theta(\alpha + 1)]} \beta^{1/2[\theta(\alpha + 1)]}, \lambda^{-\theta / 2[\theta(\alpha + 1)]}, n^{1/2[\theta(\alpha + 1)]}\right\}$ , the bound (13) holds.
Remark 2. Suzuki (2020) showed Proposition 4 under the condition $\alpha > 1/2$. However, this condition is used only to ensure Assumption 3. In our setting, Assumption 3 can be shown to hold directly, and thus the condition $\alpha > 1/2$ may be omitted.

# B.2 PROOFS OF PROPOSITION 1, THEOREM 2 AND COROLLARY 1

Here, we give the proofs of Proposition 1 and Theorem 2 simultaneously.

Proof of Proposition 1 and Theorem 2. Let $\bar{R} = (2\sum_{m=1}^{\infty} a_m R + U)^2$. Then, we can easily check that $(y_i - f_W(x_i))^2 \leq \bar{R}$. As stated above, we use Propositions 3 and 4 to show the statements.

First, we show Proposition 1 for the dynamics of $W_k^{(M)}$ for any $1 \leq M \leq \infty$. It suffices to show the statement only for $M = \infty$ because the finite dimensional version can be seen as a specific case of the infinite dimensional one. Indeed, the dynamics of $W_k^{(M)}$ is the same as that of $\iota(\tilde{W}_k)$, where $\tilde{W}_k \in \mathcal{H}$ obeys the following dynamics:

$$
\tilde{W}_{k+1} = S_\eta\left( \tilde{W}_k - \eta \nabla \widehat{\mathcal{L}}(f_{\iota(\tilde{W}_k)}) + \sqrt{\frac{2\eta}{\beta}} \xi_k \right).
$$

This is because $f_{\iota(\tilde{W}_k)}$ is determined only by the first $M$ components $\iota(\tilde{W}_k)$, because $\iota(\nabla \widehat{\mathcal{L}}(f_{\iota(\tilde{W}_k)})) = \nabla_{W^{(M)}} \widehat{\mathcal{L}}(f_{W^{(M)}})|_{W^{(M)} = \iota(\tilde{W}_k)}$, and because $S_\eta$ is a diagonal operator. Since the components of $\tilde{W}_k$ with indices higher than $M$ do not affect the objective, the smoothness of the objective is not lost. The stationary distribution $\pi_\infty^{(M)}$ of the continuous dynamics corresponding to $W^{(M)}$ is a probability measure on $\mathcal{H}^{(M)}$ that satisfies

$$
\frac{\mathrm{d}\pi_\infty^{(M)}}{\mathrm{d}\nu_\beta^{(M)}}(W^{(M)}) \propto \exp(-\beta \widehat{\mathcal{L}}(f_{W^{(M)}})),
$$

where $\nu_\beta^{(M)}$ is the Gaussian measure on $\mathbb{R}^{M \times (d+2)}$ with mean 0 and covariance $(\beta A^{(M)})^{-1}$. This is the marginal distribution of the stationary distribution of the continuous time counterpart of $\tilde{W}_k$: $\mathrm{d}\tilde{\pi}_\infty(\tilde{W}) \propto \exp(-\beta \widehat{\mathcal{L}}(f_{\iota(\tilde{W})})) \mathrm{d}\nu_\beta$. Therefore, we only need to consider the infinite dimensional dynamics. For this reason, we show the convergence for the original infinite dimensional dynamics $(W_k)_{k=1}^\infty$; the convergence of the finite dimensional one $(W_k^{(M)})_{k=1}^\infty$ can be shown in the same manner by the argument above.

To show Proposition 1, we use Proposition 3, which requires verifying Assumption 3. Assumption 3-(i) is ensured by Assumption 1. Next, we check Assumption 3-(ii). The boundedness of the gradient can be shown as follows:

$$
\begin{array}{l} \| \nabla \widehat{\mathcal{L}}(f_W) \|_{\mathcal{H}}^2 \\ = \sum_{m=1}^{\infty} \left( \left\| \frac{1}{n} \sum_{i=1}^{n} 2 (f_W(x_i) - y_i) \bar{w}_{2,m} a_m [x_i; 1] \sigma_m'\left(w_{1,m}^\top [x_i; 1]\right) \right\|^2 \right. \\ \quad \left. + \left| \frac{1}{n} \sum_{i=1}^{n} 2 (f_W(x_i) - y_i) a_m \tanh'\left(w_{2,m}/R\right) \sigma_m\left(w_{1,m}^\top [x_i; 1]\right) \right|^2 \right) \\ \leq \sum_{m=1}^{\infty} \left( 4 \bar{R} R^2 a_m^2 (d+1) C_\sigma^2 + 4 \bar{R} a_m^2 \right) \quad \left( \because (f_W(x_i) - y_i)^2 \leq \bar{R},\ \| \sigma_m' \|_\infty \leq C_\sigma,\ \| \tanh' \|_\infty \leq 1 \right) \\ \leq 4 \bar{R} \left[ R^2 C_\sigma^2 (d+1) + 1 \right] \sum_{m=1}^{\infty} a_m^2 < \infty. \\ \end{array}
$$

Similarly, we can show the Lipschitz continuity of the gradient as

$$
\begin{array}{l} \left\| \nabla \widehat{\mathcal{L}}(f_W) - \nabla \widehat{\mathcal{L}}(f_{W'}) \right\|_{\mathcal{H}}^2 \\ \leq \sum_{m=1}^{\infty} \mu_m^{-2\alpha_1} \mu_m^{2\alpha_1} \left\{ 4 \bar{R} a_m^2 (d+1) C_\sigma^2 \left[ (w_{2,m} - w_{2,m}')^2 + R^2 \| w_{1,m} - w_{1,m}' \|^2 \right] \right. \\ \quad \left. + 4 \bar{R} a_m^2 \left[ (w_{2,m} - w_{2,m}')^2 / R^2 + C_\sigma^2 (d+1) \| w_{1,m} - w_{1,m}' \|^2 \right] \right\} \quad (\because \| \tanh'' \|_\infty \leq 1) \\ \leq 4 \bar{R} \left[ (d+1) C_\sigma^2 (1 + R^2) + 1/R^2 + C_\sigma^2 (d+1) \right] \max_{m \in \mathbb{N}} \left\{ \mu_m^{-2\alpha_1} a_m^2 \right\} \sum_{m=1}^{\infty} \mu_m^{2\alpha_1} \left[ (w_{2,m} - w_{2,m}')^2 + \| w_{1,m} - w_{1,m}' \|^2 \right] \\ \lesssim \| W - W' \|_{\mathcal{H}_{-\alpha_1}}^2. \\ \end{array}
$$

We can also verify Assumption 3-(iii) in a similar way, which completes the verification of Assumption 3. Therefore, we may apply Proposition 3, which yields Proposition 1.

Next, we show Theorem 2 by using Proposition 4. For that purpose, we need to verify Assumption 4. The first condition can be verified as follows, using $\mathrm{E}[\epsilon \mid X] = 0$ and $\epsilon^2 \leq U^2$:

$$
\begin{array}{l} \mathrm{E}_{X,Y}\left[ \left( (Y - f_W(X))^2 - (Y - f^{\mathrm{o}}(X))^2 \right)^2 \right] \\ = \mathrm{E}_{X,\epsilon}\left[ \left( (f^{\mathrm{o}}(X) + \epsilon - f_W(X))^2 - \epsilon^2 \right)^2 \right] \\ = \mathrm{E}_{X,\epsilon}\left[ \left( (f^{\mathrm{o}}(X) - f_W(X))^2 + 2\epsilon (f^{\mathrm{o}}(X) - f_W(X)) \right)^2 \right] \\ = \mathrm{E}_X\left[ (f^{\mathrm{o}}(X) - f_W(X))^4 \right] + 4\, \mathrm{E}_{X,\epsilon}\left[ \epsilon^2 (f^{\mathrm{o}}(X) - f_W(X))^2 \right] \\ \leq \| f^{\mathrm{o}} - f_W \|_\infty^2\, \mathrm{E}_X\left[ (f^{\mathrm{o}}(X) - f_W(X))^2 \right] + 4 U^2\, \mathrm{E}_X\left[ (f^{\mathrm{o}}(X) - f_W(X))^2 \right] \\ \leq 4 \bar{R}\, \mathrm{E}_X\left[ (f^{\mathrm{o}}(X) - f_W(X))^2 \right] = 4 \bar{R} \left( \mathcal{L}(f_W) - \mathcal{L}(f^{\mathrm{o}}) \right) \quad \left( \because \| f^{\mathrm{o}} - f_W \|_\infty^2 + 4U^2 \leq 4\bar{R} \right). \\ \end{array}
$$

The second condition can be checked as follows. Note that

$$
\begin{array}{l} \mathrm{E}_{Y \mid X = x}\left( \exp\left\{ -\frac{\beta}{n} \left[ (Y - f_W(x))^2 - (Y - f^{\mathrm{o}}(x))^2 \right] \right\} \right) \\ = \mathrm{E}_\epsilon\left( \exp\left\{ -\frac{\beta}{n} (f^{\mathrm{o}}(x) - f_W(x))^2 + \frac{2\beta}{n} \epsilon (f_W(x) - f^{\mathrm{o}}(x)) \right\} \right) \\ = \exp\left[ -\frac{\beta}{n} (f^{\mathrm{o}}(x) - f_W(x))^2 \right] \mathrm{E}_\epsilon\left\{ \exp\left[ \frac{2\beta}{n} \epsilon (f_W(x) - f^{\mathrm{o}}(x)) \right] \right\} \\ \leq \exp\left[ -\frac{\beta}{n} (f^{\mathrm{o}}(x) - f_W(x))^2 \right] \exp\left[ \frac{1}{8} \cdot \frac{4\beta^2}{n^2} \cdot 4 U^2 (f_W(x) - f^{\mathrm{o}}(x))^2 \right], \\ \end{array}
$$
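The last inequality above is Hoeffding's lemma for the bounded, mean-zero noise $\epsilon \in [-U, U]$: $\mathrm{E}[\exp(t\epsilon)] \leq \exp(t^2 (2U)^2 / 8)$. A quick Monte Carlo sanity check of this bound (the uniform noise distribution and the grid of $t$ values are illustrative choices, not from the paper):

```python
import numpy as np

U = 1.5                                      # noise bound, illustrative
rng = np.random.default_rng(0)
# Any mean-zero noise supported on [-U, U] works; uniform is one choice.
eps = rng.uniform(-U, U, size=200_000)

for t in [0.1, 0.5, 1.0, 2.0]:
    mgf = np.exp(t * eps).mean()             # Monte Carlo estimate of E[exp(t*eps)]
    bound = np.exp(t**2 * (2 * U) ** 2 / 8)  # Hoeffding's lemma bound
    assert mgf <= bound
```

In the proof, $t = 2\beta (f_W(x) - f^{\mathrm{o}}(x)) / n$, which gives exactly the exponent $\frac{1}{8} \cdot \frac{4\beta^2}{n^2} \cdot 4U^2 (f_W(x) - f^{\mathrm{o}}(x))^2$.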
Thus, under the condition $\beta \leq n/(2U^2)$, the right-hand side can be upper bounded by

$$
\exp\left[ -\frac{\beta}{n} \left( 1 - \frac{2U^2\beta}{n} \right) (f_W(x) - f^{\mathrm{o}}(x))^2 \right] \leq 1.
$$
Next, we check the third and fourth conditions. Noting that

$$
\begin{array}{l} \nabla_W h_W(X) \\ = \Big( a_m \overline{\left( \mu_m^{-\alpha/2} w_{2,m} \right)}\, \mu_m^{-\alpha/2} [X; 1]\, \sigma_m'\left( \mu_m^{-\alpha/2} w_{1,m}^\top [X; 1] \right), \\ \quad a_m \mu_m^{-\alpha/2} \tanh'\left( \mu_m^{-\alpha/2} w_{2,m} / R \right) \sigma_m\left( \mu_m^{-\alpha/2} w_{1,m}^\top [X; 1] \right) \Big)_{m=1}^{\infty}, \\ \end{array}
$$

we have that

$$
\begin{array}{l} \| \nabla_W h_W(X) \|_{\mathcal{H}}^2 \\ \leq \sum_{m=1}^{\infty} a_m^2 \mu_m^{-\alpha} \left[ (d+1) R^2 C_\sigma^2 + 1 \right] \\ \leq \left[ (d+1) R^2 C_\sigma^2 + 1 \right] \sum_{m=1}^{\infty} \mu_m^{-\alpha + 2\alpha_1} \\ \leq \left[ (d+1) R^2 C_\sigma^2 + 1 \right] c_\mu^{-\alpha + 2\alpha_1} \sum_{m=1}^{\infty} m^{-2(-\alpha + 2\alpha_1)} =: C_1 < \infty \quad \left( \because -\alpha + 2\alpha_1 = \alpha_1 > 1/2 \right), \\ \end{array}
$$

and

$$
\begin{array}{l} \| \nabla_W h_W(X) - \nabla_W h_{W'}(X) \|_{\mathcal{H}}^2 \\ \leq \sum_{m=1}^{\infty} a_m^2 \mu_m^{-\alpha} (d+1) \left[ \mu_m^{-\alpha} (w_{2,m} - w_{2,m}')^2 + R^2 \mu_m^{-\alpha} \| w_{1,m} - w_{1,m}' \|^2 \right] \\ \quad + a_m^2 \mu_m^{-\alpha} \left[ \mu_m^{-\alpha} (w_{2,m} - w_{2,m}')^2 / R^2 + C_\sigma^2 (d+1) \mu_m^{-\alpha} \| w_{1,m} - w_{1,m}' \|^2 \right] \\ \leq \sum_{m=1}^{\infty} a_m^2 \mu_m^{-2\alpha} \left[ (d+1)(1 + R^2) + 1/R^2 + C_\sigma^2 (d+1) \right] \left[ \| w_{1,m} - w_{1,m}' \|^2 + (w_{2,m} - w_{2,m}')^2 \right] \\ \leq c_\mu^{2\alpha_1} \max_m \left\{ \mu_m^{2(\alpha_1 - \alpha)} \right\} \left[ (d+1)(1 + R^2) + 1/R^2 + C_\sigma^2 (d+1) \right] \| W - W' \|_{\mathcal{H}}^2 =: C_2 \| W - W' \|_{\mathcal{H}}^2, \\ \end{array}
$$

for a constant $0 < C_2 < \infty$. Therefore, it holds that

$$
\left| h_W(X) - h_{W'}(X) \right|^2 \leq C_1 \| W - W' \|_{\mathcal{H}}^2,
$$

which yields the fourth condition, and we also have

$$
\begin{array}{l} \| \nabla_W \ell(Y, h_W(X)) - \nabla_W \ell(Y, h_{W'}(X)) \|_{\mathcal{H}}^2 \\ = \| 2 (h_W(X) - Y) \nabla_W h_W(X) - 2 (h_{W'}(X) - Y) \nabla_W h_{W'}(X) \|_{\mathcal{H}}^2 \\ \leq 2 \| 2 (h_W(X) - Y) \left( \nabla_W h_W(X) - \nabla_W h_{W'}(X) \right) \|_{\mathcal{H}}^2 + 2 \| 2 (h_W(X) - h_{W'}(X)) \nabla_W h_{W'}(X) \|_{\mathcal{H}}^2 \\ \leq 8 \bar{R} C_2 \| W - W' \|_{\mathcal{H}}^2 + 8 C_1^2 \| W - W' \|_{\mathcal{H}}^2 \lesssim \| W - W' \|_{\mathcal{H}}^2, \\ \end{array}
$$

which yields the third condition.
Since $f^{\mathrm{o}} \in \mathcal{F}_\gamma$, there exists $W^* \in \mathcal{H}_\gamma$ such that $f^{\mathrm{o}} = f_{W^*}$. Therefore, applying Proposition 4 with $\alpha = \alpha_1$ (so that $\tilde{\alpha} = 1/[2(\alpha_1 + 1)]$) and $\theta = \gamma/(1 + \alpha_1)$ (since $\gamma < 1/2 + \alpha_1$, the condition $\theta < 1 - \tilde{\alpha}$ is satisfied), we obtain that for $M \geq \min\left\{ \lambda^{1/[4\gamma(\alpha_1 + 1)]} \beta^{1/(2\gamma)},\ \lambda^{-1/[2(\alpha_1 + 1)]},\ n^{1/(2\gamma)} \right\}$, the following excess risk bound holds:

$$
\mathrm{E}_{D^n}\left[ \mathrm{E}_{W_k^{(M)}}\left[ \mathcal{L}(W_k^{(M)}) \mid D_n \right] - \mathcal{L}(f^*) \right] \lesssim \max\left\{ (\lambda\beta)^{\frac{2\tilde{\alpha}/\theta}{1+\tilde{\alpha}/\theta}} n^{-\frac{1}{1+\tilde{\alpha}/\theta}},\ \lambda^{-\tilde{\alpha}} \beta^{-1},\ \lambda^{\theta},\ 1/n \right\} + \Xi_k.
$$

Finally, by noting $\mathcal{L}(W_k^{(M)}) - \mathcal{L}(f^*) = \| f_{W_k^{(M)}} - f^* \|_{L_2(P_X)}^2$, we obtain the assertion.

It remains to give the proof of Corollary 1.

Proof of Corollary 1. Note that
$$
\begin{array}{l} f_W(x) \\ = \sum_{m=1}^{\infty} a_m \bar{w}_{2,m} \sigma_m\left( w_{1,m}^\top [x; 1] \right) \\ = \sum_{m=1}^{\infty} \mu_m^{\alpha_1} \bar{w}_{2,m} \mu_m^{q\alpha_2} \mu_m^{-q\alpha_2} \mu_m^{s\alpha_2} \sigma\left( \mu_m^{-\alpha_2} w_{1,m}^\top [x; 1] \right) \quad \left( \because a_m = \mu_m^{\alpha_1},\ b_m = \mu_m^{\alpha_2} \right) \\ = \sum_{m=1}^{\infty} \mu_m^{\alpha_1 + q\alpha_2} \bar{w}_{2,m} \mu_m^{-(s-q)\alpha_2} \sigma\left( \mu_m^{-\alpha_2} w_{1,m}^\top [x; 1] \right). \\ \end{array}
$$

Therefore, we may redefine $\alpha_1' \gets \alpha_1 + q\alpha_2$ and $s' \gets s - q$ to obtain another representation of the model $\mathcal{F}_\gamma$:

$$
\mathcal{F}_\gamma = \left\{ f_W(x) = \sum_{m=1}^{\infty} \mu_m^{\alpha_1'} \bar{w}_{2,m} \check{\sigma}_m\left( w_{1,m}^\top [x; 1] \right) \,\Big|\, W \in \mathcal{H}_\gamma,\ \| W \|_{\mathcal{H}_\gamma} \leq 1 \right\},
$$

where $\check{\sigma}_m(\cdot) = \mu_m^{-s'\alpha_2} \sigma(\mu_m^{-\alpha_2} \cdot)$. Note that the condition $0 \leq q \leq s - 3$ gives $s - q \geq 3$. Hence, Assumptions 3 and 4 remain valid for the redefined parameters $\alpha_1'$, $s'$ and $\check{\sigma}_m$ in place of $\alpha_1$, $s$ and $\sigma_m$. Therefore, we can apply Theorem 2 by simply replacing $\alpha_1$ with $\alpha_1' = \alpha_1 + q\alpha_2$.
benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe22589dc818a40c8a51a70f9cba48aba148b990520dd776786502c3ca1a0f56
size 797469

benefitofdeeplearningwithnonconvexnoisygradientdescentprovableexcessriskboundandsuperioritytokernelmethods/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c5b1dd57bae579e34a82c6fedf737211825a7b7753da79f13a05c219bdd3aef
size 1093610

beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/27f95abb-7e15-43ae-913b-746319f62b32_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b1983163d9e4a464ea105ddbfab3ae12a78722cde7fc2751b64e8e4175189e9f
size 81273

beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/27f95abb-7e15-43ae-913b-746319f62b32_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e60fe16966b39b6abfd009505772b1866e29547c48611ec794182be9842d7c12
size 95836

beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/27f95abb-7e15-43ae-913b-746319f62b32_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d440c5325d099247504e35167c4411700a41157c32505e8563eb5eb540058a92
size 1208619

beyondfullyconnectedlayerswithquaternionsparameterizationofhypercomplexmultiplicationswith1nparameters/full.md
ADDED
@@ -0,0 +1,338 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# BEYOND FULLY-CONNECTED LAYERS WITH QUATERNIONS: PARAMETERIZATION OF HYPERCOMPLEX MULTIPLICATIONS WITH $1/n$ PARAMETERS

Aston Zhang†, Yi Tay‡, Shuai Zhang, Alvin Chan, Anh Tuan Luu, Siu Cheung Hui, Jie Fu

$\dagger$ Amazon Web Services AI
$\ddagger$ Google Research
ETH Zürich
NTU, Singapore
VinAI
Mila, Université de Montréal

az@astonzhang.com

# ABSTRACT

Recent works have demonstrated reasonable success of representation learning in hypercomplex space. Specifically, "fully-connected layers with quaternions" (quaternions are 4D hypercomplex numbers), which replace real-valued matrix multiplications in fully-connected layers with Hamilton products of quaternions, enjoy parameter savings with only $1/4$ learnable parameters and achieve comparable performance in various applications. However, one key caveat is that hypercomplex space only exists at very few predefined dimensionalities (4D, 8D, and 16D). This restricts the flexibility of models that leverage hypercomplex multiplications. To this end, we propose parameterizing hypercomplex multiplications, allowing models to learn multiplication rules from data regardless of whether such rules are predefined. As a result, our method not only subsumes the Hamilton product, but also learns to operate on any arbitrary $n$D hypercomplex space, providing more architectural flexibility with only $1/n$ learnable parameters compared with the fully-connected layer counterpart. Experiments applying our approach to the LSTM and transformer models on natural language inference, machine translation, text style transfer, and subject-verb agreement demonstrate its architectural flexibility and effectiveness.
# 1 INTRODUCTION

A quaternion is a 4D hypercomplex number with one real component and three imaginary components. The Hamilton product is the hypercomplex multiplication of two quaternions. Recent works on quaternion space and Hamilton products have demonstrated reasonable success (Parcollet et al., 2018b; 2019; Tay et al., 2019). Notably, the Hamilton product enjoys a parameter saving with $1/4$ learnable parameters as compared with the real-valued matrix multiplication. It also enables effective representation learning by modeling interactions between real and imaginary components.

One of the attractive properties of quaternion models is their high applicability and universal usefulness to one of the most ubiquitous layers in deep learning, i.e., the fully-connected (or feedforward) layer. Specifically, "fully-connected layers with quaternions" replace real-valued matrix multiplications in fully-connected layers with Hamilton products of quaternions, enjoying parameter savings with only $1/4$ learnable parameters while achieving comparable performance with their fully-connected layer counterparts (Parcollet et al., 2018b; 2019; Tay et al., 2019).

The fully-connected layer is one of the most dominant components in the existing deep learning literature (Goodfellow et al., 2016; Zhang et al., 2020). Its pervasiveness cannot be overstated, given its centrality to many core building blocks in neural network research. Given the widespread adoption of fully-connected layers, e.g., within LSTM networks (Hochreiter & Schmidhuber, 1997) and transformer models (Vaswani et al., 2017), the flexibility to balance parameter savings against effectiveness could be extremely useful in many real-world applications.

Unfortunately, hypercomplex space only exists at 4D (quaternions), 8D (octonions), and 16D (sedenions), which generalize the 2D complex space (Rishiyur, 2006). Moreover, custom operators are required at each hypercomplex dimensionality. For instance, the Hamilton product is the hypercomplex multiplication in 4D hypercomplex space. Thus, no operator in such predefined hypercomplex spaces is suitable for applications that prefer reducing parameters to $1/n$, where $n \neq 4, 8, 16$.

In view of the architectural limitation due to the very few choices of existing hypercomplex spaces, we propose the parameterization of hypercomplex multiplications, i.e., learning the real and imaginary component interactions from data in a differentiable fashion. Essentially, our method can operate on an arbitrary $n$D hypercomplex space, aside from subsuming the predefined hypercomplex multiplication rules, facilitating the use of only $1/n$ learnable parameters for arbitrary $n$ while maintaining expressiveness. In practice, the hyperparameter $n$ can be flexibly specified or tuned by users based on applications.

Concretely, our prime contribution is a new module that parameterizes and generalizes the hypercomplex multiplication by learning the real and imaginary component interactions, i.e., multiplication rules, from data. Our method, which we call the parameterized hypercomplex multiplication layer, is characterized by a sum of Kronecker products that generalize the vector outer products to higher dimensions in real space. To demonstrate applicability, we equip two well-established models (the LSTM and transformer) with our proposed method. We conduct extensive experiments on different tasks, i.e., natural language inference for LSTM networks and machine translation for transformer models. Additionally, we perform further experiments on text style transfer and subject-verb agreement tasks. All in all, our method demonstrates architectural flexibility across different experimental settings, where it generally uses a fraction of the learnable parameters with minimal degradation or even slight improvement in performance.

The overall contributions of this work are summarized as follows:

- We propose a new parameterization of hypercomplex multiplications: the parameterized hypercomplex multiplication (PHM) layer. This layer has $1/n$ learnable parameters compared with its fully-connected layer counterpart, where $n$ can be flexibly specified by users. The key idea behind PHM layers is to learn the interactions between real and imaginary components, i.e., multiplication rules, from data using a sum of Kronecker products.
- We demonstrate the applicability of PHM layers by leveraging them in two dominant neural architectures: the LSTM and transformer models.
- We empirically show the architectural flexibility and effectiveness of PHM layers by conducting extensive experiments on five natural language inference tasks and seven machine translation datasets, together with text style transfer and subject-verb agreement tasks.
# 2 BACKGROUND ON QUATERNIONS AND HAMILTON PRODUCTS

We begin by introducing the background for the rest of the paper. Concretely, we describe quaternion algebra along with Hamilton products, which are at the heart of our proposed approach.

Quaternion A quaternion $Q \in \mathbb{H}$ is a hypercomplex number with one real component and three imaginary components:

$$
Q = Q_r + Q_x \mathbf{i} + Q_y \mathbf{j} + Q_z \mathbf{k}, \tag{2.1}
$$

where $\mathbf{ijk} = \mathbf{i}^2 = \mathbf{j}^2 = \mathbf{k}^2 = -1$. In (2.1), the noncommutative multiplication rules hold: $\mathbf{ij} = \mathbf{k}, \mathbf{jk} = \mathbf{i}, \mathbf{ki} = \mathbf{j}, \mathbf{ji} = -\mathbf{k}, \mathbf{kj} = -\mathbf{i}, \mathbf{ik} = -\mathbf{j}$. Here, $Q_r$ is the real component, and $Q_x, Q_y, Q_z$ are real numbers representing the imaginary components of the quaternion $Q$.

Addition The addition of two quaternions is defined as

$$
Q + P = Q_r + P_r + (Q_x + P_x)\mathbf{i} + (Q_y + P_y)\mathbf{j} + (Q_z + P_z)\mathbf{k},
$$

where $Q$ and $P$ with subscripts denote the real and imaginary components of the quaternions $Q$ and $P$.

Scalar Multiplication Any scalar $\alpha$ multiplies across all the components:

$$
\alpha Q = \alpha Q_r + \alpha Q_x \mathbf{i} + \alpha Q_y \mathbf{j} + \alpha Q_z \mathbf{k}.
$$

Hamilton Product The Hamilton product, which represents the multiplication of two quaternions $Q$ and $P$, is defined as

$$
\begin{array}{l} Q \otimes P = \left( Q_r P_r - Q_x P_x - Q_y P_y - Q_z P_z \right) + \left( Q_x P_r + Q_r P_x - Q_z P_y + Q_y P_z \right) \mathbf{i} \\ \quad + \left( Q_y P_r + Q_z P_x + Q_r P_y - Q_x P_z \right) \mathbf{j} + \left( Q_z P_r - Q_y P_x + Q_x P_y + Q_r P_z \right) \mathbf{k}. \tag{2.2} \\ \end{array}
$$

The multiplication rule in (2.2) forges interactions between the real and imaginary components of $Q$ and $P$. The benefits of Hamilton products have been demonstrated in recent works where the matrix multiplication in fully-connected layers is replaced with the Hamilton product: this reduces the number of parameters by $75\%$ with comparable performance (Parcollet et al., 2018b; 2019; Tay et al., 2019).
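As a concrete reference, (2.2) can be transcribed directly into code. The helper below is an illustrative sketch (the function name and tuple encoding are ours, not from the paper), representing a quaternion as the tuple $(Q_r, Q_x, Q_y, Q_z)$:

```python
def hamilton(q, p):
    """Hamilton product of quaternions q = (r, x, y, z) and p, per (2.2)."""
    qr, qx, qy, qz = q
    pr, px, py, pz = p
    return (
        qr * pr - qx * px - qy * py - qz * pz,  # real part
        qx * pr + qr * px - qz * py + qy * pz,  # i part
        qy * pr + qz * px + qr * py - qx * pz,  # j part
        qz * pr - qy * px + qx * py + qr * pz,  # k part
    )

# The noncommutative rules fall out of (2.2): ij = k, ji = -k, i^2 = -1.
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert hamilton(i, j) == k
assert hamilton(j, i) == (0, 0, 0, -1)
assert hamilton(i, i) == (-1, 0, 0, 0)
```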
# 3 PARAMETERIZATION OF HYPERCOMPLEX MULTIPLICATIONS

In the following, we introduce our proposed parameterized hypercomplex multiplication layer and elaborate on how it parameterizes and generalizes multiplications in hypercomplex space, such as subsuming the multiplication rules of Hamilton products in (2.2).

# 3.1 FULLY-CONNECTED (FC) LAYERS

Before we delve into our proposed method, recall that the fully-connected (FC) layer transforms an input $\mathbf{x} \in \mathbb{R}^d$ into an output $\mathbf{y} \in \mathbb{R}^k$ by

$$
\mathbf{y} = \operatorname{FC}(\mathbf{x}) = \mathbf{W}\mathbf{x} + \mathbf{b}, \tag{3.1}
$$

where $\mathbf{W} \in \mathbb{R}^{k \times d}$ is the weight matrix of parameters and $\mathbf{b} \in \mathbb{R}^k$ is the bias vector of parameters. The FC layer in (3.1) is fundamental to many modern and traditional neural network architectures. Note that the degree of freedom of the weight parameters $\mathbf{W}$ in (3.1) is $kd$. Since $\mathbf{W}$ dominates the parameterization, the parameter size of the FC layer in (3.1) is $\mathcal{O}(kd)$.
# 3.2 PARAMETERIZED HYPERCOMPLEX MULTIPLICATION (PHM) LAYERS

We propose the parameterized hypercomplex multiplication (PHM) layer, which transforms an input $\mathbf{x}$ into an output $\mathbf{y}$ by

$$
\mathbf{y} = \operatorname{PHM}(\mathbf{x}) = \mathbf{H}\mathbf{x} + \mathbf{b}, \tag{3.2}
$$

where the same notation as in (3.1) is used, but the parameter $\mathbf{H} \in \mathbb{R}^{k \times d}$ is instead constructed by a sum of Kronecker products. For context, the Kronecker product is a generalization of the vector outer product to higher dimensions in real space. For any matrices $\mathbf{X} \in \mathbb{R}^{m \times n}$ and $\mathbf{Y} \in \mathbb{R}^{p \times q}$, the Kronecker product $\mathbf{X} \otimes \mathbf{Y}$ is the block matrix

$$
\mathbf{X} \otimes \mathbf{Y} = \left[ \begin{array}{ccc} x_{11}\mathbf{Y} & \ldots & x_{1n}\mathbf{Y} \\ \vdots & \ddots & \vdots \\ x_{m1}\mathbf{Y} & \ldots & x_{mn}\mathbf{Y} \end{array} \right] \in \mathbb{R}^{mp \times nq},
$$

where $x_{ij}$ is the element of $\mathbf{X}$ at its $i^{\text{th}}$ row and $j^{\text{th}}$ column. Note that the symbol $\otimes$ between two matrices denotes the Kronecker product, while the same symbol between two quaternions denotes the Hamilton product.
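This block-matrix construction is implemented, for instance, by NumPy's `np.kron`; a small check of the definition:

```python
import numpy as np

X = np.array([[1, 2],
              [3, 4]])
Y = np.array([[0, 1],
              [1, 0]])

# Each element x_ij of X scales a full copy of Y in the block matrix.
K = np.kron(X, Y)  # shape (m*p, n*q) = (4, 4)
assert K.shape == (4, 4)
assert np.array_equal(K[:2, :2], 1 * Y)  # top-left block is x_11 * Y
assert np.array_equal(K[:2, 2:], 2 * Y)  # top-right block is x_12 * Y
assert np.array_equal(K[2:, 2:], 4 * Y)  # bottom-right block is x_22 * Y
```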
Now let us revisit (3.2) to explain $\mathbf{H}$. Suppose that both $k$ and $d$ are divisible by a user-defined hyperparameter $n \in \mathbb{Z}_{>0}$. For $i = 1, \dots, n$, let the parameter matrices be $\mathbf{A}_i \in \mathbb{R}^{n \times n}$ and $\mathbf{S}_i \in \mathbb{R}^{\frac{k}{n} \times \frac{d}{n}}$. The parameter $\mathbf{H}$ in (3.2) is a sum of $n$ Kronecker products:

$$
\mathbf{H} = \sum_{i=1}^{n} \mathbf{A}_i \otimes \mathbf{S}_i. \tag{3.3}
$$

Figure 1: Illustration of the PHM layer. It uses a sum of Kronecker products of matrices $\mathbf{A}_i$ and $\mathbf{S}_i$ $(i = 1, 2)$ to construct $\mathbf{H}$ in (3.2) (here $n = 2$, $k = 6$, $d = 8$). Best viewed in color.
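As a sketch of the construction in (3.2)-(3.3), the following NumPy snippet builds $\mathbf{H}$ for the Figure 1 setting ($n = 2$, $k = 6$, $d = 8$); the random initialization is only illustrative, not the paper's training setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 2, 6, 8               # the setting illustrated in Figure 1

# Parameter matrices: A_i is n x n, S_i is (k/n) x (d/n).
A = [rng.standard_normal((n, n)) for _ in range(n)]
S = [rng.standard_normal((k // n, d // n)) for _ in range(n)]

# H = sum_i A_i ⊗ S_i, as in (3.3).
H = sum(np.kron(A_i, S_i) for A_i, S_i in zip(A, S))
assert H.shape == (k, d)

# PHM forward pass (3.2): y = Hx + b.
x = rng.standard_normal(d)
b = np.zeros(k)
y = H @ x + b
assert y.shape == (k,)

# Degrees of freedom: k*d/n + n^3 learnable entries, versus k*d for an FC layer.
assert sum(Si.size for Si in S) + sum(Ai.size for Ai in A) == k * d // n + n ** 3
```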
(a) Learning rotations in 3D real space

(b) Learning Hamilton products in quaternion space

Figure 2: PHM layers can learn to perform rotations in 3D real space and Hamilton products in quaternion space on artificial datasets.

As illustrated in Figure 1, it is the parameter matrices $\mathbf{A}_i$ and $\mathbf{S}_i$ ($i = 1, \dots, n$) that determine the degrees of freedom of $\mathbf{H}$, which amount to $kd/n + n^3$. Since $\mathbf{H}$ dominates parameterization, the parameter size of the PHM layer in (3.2) is $\mathcal{O}(kd/n)$, where $kd \gtrsim n^4$ is assumed: this condition is mild for real-world problems, such as in our experiments (e.g., $d = 512$, $k = 2048$, $n = 2, 4, 8, 16$). Thus, for the same input and output sizes, the parameter size of a PHM layer is approximately $1/n$ of that of an FC layer under mild assumptions.
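To make the counting concrete, here is a quick check of $kd/n + n^3$ against $kd$ for the experimental setting quoted above ($d = 512$, $k = 2048$):

```python
# Learnable entries of H (k*d/n + n^3) versus a plain FC weight (k*d).
d, k = 512, 2048
fc = k * d                               # 1,048,576 parameters in W

for n in (1, 2, 4, 8, 16):
    phm = k * d // n + n ** 3            # degrees of freedom of H
    print(f"n={n:2d}  params={phm:8d}  ratio={phm / fc:.4f}")

# The ratio tracks 1/n closely because kd >> n^4 in this regime.
assert k * d // 16 + 16 ** 3 == 69632
```

Even at $n = 16$ the correction term $n^3 = 4096$ is small next to $kd/n = 65536$, which is why the $\mathcal{O}(kd/n)$ claim holds.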
The parameterization reduction of PHM layers comes from reusing elements of both parameter matrices $\mathbf{A}_i$ and $\mathbf{S}_i$ in the Kronecker products. As an alternative perspective, we can equivalently reconstruct $\mathbf{H}$ in (3.3) by reusing parameter matrices in real-valued matrix multiplications, followed by more operations. Due to limited space, this more complicated perspective is offered in Appendix A. Though simply setting $\mathbf{H} = \mathbf{A}_1 \otimes \mathbf{S}_1$ can further save parameters, it does not generalize hypercomplex multiplications and hence is out of scope.

To show that PHM layers can learn to perform predefined multiplication-related operations in practice, we perform experiments to learn rotations in 3D real space using the PHM layer. Using a rotation matrix $\mathbf{W} \in \mathbb{R}^{3 \times 3}$, we create an artificial dataset $\{(\mathbf{x}_i \in \mathbb{R}^3, \mathbf{y}_i \in \mathbb{R}^3)\}$, where $\mathbf{y}_i$ is generated via the 3D rotation of the input: $\mathbf{y}_i = \mathbf{W}\mathbf{x}_i$. Figure 2(a) shows that the loss converges to zero: the PHM layer can learn a single rotation of an object in 3D real space.
In the following, we show how the proposed PHM layer subsumes and generalizes both hypercomplex multiplications and real-valued matrix multiplications.

# 3.3 SUBSUMING HYPERCOMPLEX MULTIPLICATIONS

First, we explore how the PHM layer connects to the hypercomplex multiplication. For the sake of illustration, let us take the Hamilton product of two quaternions $Q$ and $P$ in (2.2) as an example, which can be rewritten as
$$
\left[ \begin{array}{cccc} Q_r & -Q_x & -Q_y & -Q_z \\ Q_x & Q_r & -Q_z & Q_y \\ Q_y & Q_z & Q_r & -Q_x \\ Q_z & -Q_y & Q_x & Q_r \end{array} \right] \left[ \begin{array}{c} P_r \\ P_x \\ P_y \\ P_z \end{array} \right], \tag{3.4}
$$

where the 4 output elements are the real values for the quaternion unit basis $[1, \mathbf{i}, \mathbf{j}, \mathbf{k}]^{\top}$. Note that for models leveraging Hamilton products of quaternions (Parcollet et al., 2018b; 2019; Tay et al., 2019), the components $Q_r, Q_x, Q_y, Q_z$ of (3.4) are learnable parameters, while the components $P_r, P_x, P_y, P_z$ are the layer inputs. In practice, such a layer usually has more than 4 inputs ($d > 4$). To apply the Hamilton product, all the inputs are evenly split into the 4 segments ($P_r, P_x, P_y, P_z$) of the right input vector of (3.4). Then each component in the left matrix of (3.4) can be a block matrix (i) where all the elements take the same value; (ii) whose shape is aligned with the input length $d$ and the output length $k$ of the layer. It is noteworthy that the left $4 \times 4$ matrix of (3.4) can be rewritten as a sum of 4 Kronecker products:
$$
\underbrace{\left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]}_{\mathbf{A}_1} \otimes \underbrace{\left[ Q_r \right]}_{\mathbf{S}_1} + \underbrace{\left[ \begin{array}{cccc} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{array} \right]}_{\mathbf{A}_2} \otimes \underbrace{\left[ Q_x \right]}_{\mathbf{S}_2} + \underbrace{\left[ \begin{array}{cccc} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{array} \right]}_{\mathbf{A}_3} \otimes \underbrace{\left[ Q_y \right]}_{\mathbf{S}_3} + \underbrace{\left[ \begin{array}{cccc} 0 & 0 & 0 & -1 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right]}_{\mathbf{A}_4} \otimes \underbrace{\left[ Q_z \right]}_{\mathbf{S}_4}. \tag{3.5}
$$

According to (3.5), when $n = 4$, the PHM layer can learn to express the Hamilton product of quaternions. Specifically, matrices $\mathbf{A}_1, \ldots, \mathbf{A}_4$ in (3.3) parameterize the four matrices composed of $-1, 0, 1$ in (3.5) that reflect interactions between real and imaginary components of quaternions, which constitute the rule of the Hamilton product. The single-element "matrices" $\mathbf{S}_1, \ldots, \mathbf{S}_4$ in (3.3) are equal to the learnable components $Q_r, Q_x, Q_y, Q_z$ in (3.4). Figure 2(b) shows that PHM layers can learn the rule of Hamilton products on artificial data. Likewise, hypercomplex multiplications of octonions or sedenions can also be learned by the PHM layer when $n$ is set to 8 or 16.
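The decomposition (3.5) can be verified numerically. The snippet below, an illustrative check with arbitrary quaternion components, confirms that the sum of the four Kronecker products reproduces the left matrix of (3.4):

```python
import numpy as np

# The four fixed "rule" matrices A_1..A_4 from (3.5).
A1 = np.eye(4)
A2 = np.array([[0, -1, 0, 0],
               [1,  0, 0, 0],
               [0,  0, 0, -1],
               [0,  0, 1, 0]])
A3 = np.array([[0,  0, -1, 0],
               [0,  0, 0, 1],
               [1,  0, 0, 0],
               [0, -1, 0, 0]])
A4 = np.array([[0, 0, 0, -1],
               [0, 0, -1, 0],
               [0, 1, 0, 0],
               [1, 0, 0, 0]])

Qr, Qx, Qy, Qz = 1.0, 2.0, 3.0, 4.0   # arbitrary quaternion components

# Sum of Kronecker products with 1x1 matrices S_i = [Q_*], as in (3.5).
H = sum(np.kron(Ai, np.array([[q]]))
        for Ai, q in zip((A1, A2, A3, A4), (Qr, Qx, Qy, Qz)))

# Direct Hamilton-product matrix from (3.4).
M = np.array([[Qr, -Qx, -Qy, -Qz],
              [Qx,  Qr, -Qz,  Qy],
              [Qy,  Qz,  Qr, -Qx],
              [Qz, -Qy,  Qx,  Qr]])

assert np.allclose(H, M)
```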
# 3.4 SUBSUMING REAL-VALUED MATRIX MULTIPLICATIONS

Next, we show how the PHM layer subsumes the matrix multiplication in real space. In other words, the PHM layer is a generalization of the FC layer via the hyperparameter $n$. To explain, referring to (3.2), when $n = 1$, $\mathbf{H} = \mathbf{A}_1 \otimes \mathbf{S}_1 = a\mathbf{S}_1$, where the scalar $a$ is the single element of the $1 \times 1$ matrix $\mathbf{A}_1$ and $\mathbf{S}_1 \in \mathbb{R}^{k \times d}$. Since learning $a$ and $\mathbf{S}_1$ separately is equivalent to learning their product jointly, the scalar $a$ can be dropped, which amounts to learning the single weight matrix of an FC layer. Therefore, a PHM layer degenerates to an FC layer when $n = 1$.
# 3.5 GENERALIZING HYPERCOMPLEX MULTIPLICATIONS

Though parameter reusing by component-wise partitioning in quaternion space has demonstrated success (Parcollet et al., 2018b; Zhu et al., 2018; Parcollet et al., 2019; Tay et al., 2019), one key problem is that hypercomplex space only exists at very few predefined dimensionalities, such as 4D (quaternions), 8D (octonions), and 16D (sedenions). Within the context of hypercomplex space, specialized multiplication rules, such as the Hamilton product, have to be devised and encoded in the network as a fixed inductive bias. As described in Section 1, the very few choices over existing hypercomplex space restrict the flexibility of networks that leverage hypercomplex multiplication.

In sharp contrast to relying on predefined mathematical rules over limited dimensionality choices, the PHM layer treats the dimensionality $n$ (the number of Kronecker products) as a tunable hyperparameter and learns such specialized multiplication rules from data, as manifested in the parameterized matrices $\mathbf{A}_i$ ($i = 1, \dots, n$) in (3.3). On one hand, the PHM layer can express hypercomplex multiplications when the $\mathbf{A}_i$ are set to reflect those predefined multiplication rules in hypercomplex space. On the other hand, the PHM layer can be seen as a trainable and parameterized form of $n$D hypercomplex multiplications, where $n$ can take values other than 4, 8, or 16. Thus, the PHM layer generalizes multiplications in hypercomplex space. Since $n$ can be 1, the PHM layer also offers a neat way to bridge multiplication in both real space and hypercomplex space.
# 4 NEURAL MODELS WITH PHM LAYERS

To demonstrate the applicability of PHM layers, we develop the PHM-LSTM and the PHM-transformer by equipping two popular neural network models, LSTMs and transformers, with PHM layers.

# 4.1 PHM-LSTM

Recurrent neural networks such as LSTMs (Hochreiter & Schmidhuber, 1997) are gated recurrent networks where the gating functions are parameterized by linear transformations. We introduce the PHM-LSTM, which replaces such linear transformations in LSTMs with PHM layers:
$$
\begin{array}{l} \mathbf{y}_t = \operatorname{PHM}(\mathbf{x}_t) + \operatorname{PHM}(\mathbf{h}_{t-1}) + \boldsymbol{b} \\ \mathbf{f}_t, \mathbf{i}_t, \mathbf{o}_t, \mathbf{x}_t^{\prime} = \phi(\mathbf{y}_t) \\ \mathbf{c}_t = \sigma_s(\mathbf{f}_t) \odot \mathbf{c}_{t-1} + \sigma_s(\mathbf{i}_t) \odot \sigma_t(\mathbf{x}_t^{\prime}) \\ \mathbf{h}_t = \mathbf{o}_t \odot \mathbf{c}_t, \end{array}
$$

where $\sigma_s$ is the sigmoid activation function, $\sigma_t$ is the tanh activation function, $\phi: \mathbb{R}^{1 \times d} \to \mathbb{R}^{4 \times \frac{d}{4}}$ is a four-way split on the last dimension, and $\mathbf{c}_t, \mathbf{h}_t$ are the cell state and the hidden state of the PHM-LSTM unit at any time step $t$.
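One time step of the gating above can be sketched as follows, a minimal NumPy illustration with assumed sizes and random parameters rather than the paper's implementation; note that, following the equations as written, the output gate $\mathbf{o}_t$ is applied without a sigmoid:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def phm(x, A, S):
    """y = (sum_i A_i ⊗ S_i) x, as in (3.2)-(3.3), without the bias."""
    H = sum(np.kron(Ai, Si) for Ai, Si in zip(A, S))
    return H @ x

n, d = 4, 8                      # assumed hidden size d, divisible by 4

def make_params():
    # Output size is 4*d: one slice per gate (f, i, o, x').
    A = [rng.standard_normal((n, n)) * 0.1 for _ in range(n)]
    S = [rng.standard_normal((4 * d // n, d // n)) * 0.1 for _ in range(n)]
    return A, S

Ax, Sx = make_params()           # input-to-hidden PHM
Ah, Sh = make_params()           # hidden-to-hidden PHM
b = np.zeros(4 * d)

x_t = rng.standard_normal(d)
h_prev, c_prev = np.zeros(d), np.zeros(d)

y_t = phm(x_t, Ax, Sx) + phm(h_prev, Ah, Sh) + b
f_t, i_t, o_t, xp_t = np.split(y_t, 4)              # the four-way split φ
c_t = sigmoid(f_t) * c_prev + sigmoid(i_t) * np.tanh(xp_t)
h_t = o_t * c_t                                     # elementwise gating ⊙
assert h_t.shape == (d,)
```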
# 4.2 PHM-TRANSFORMER

The transformer is a stacked neural network architecture that aggressively exploits linear transformations (Vaswani et al., 2017). Each self-attention layer comprises $\mathbf{Q}$ (query), $\mathbf{K}$ (key), and $\mathbf{V}$ (value) linear transformations, along with multiple heads. Each transformer block also has a position-wise feed-forward network composed of two FC layers. Since a large majority of the transformer parameters stem from linear transformations or FC layers, we introduce the PHM-transformer, which replaces all the linear transformations and FC layers with PHM layers. The single-head self-attention module is rewritten as:

$$
\begin{array}{l} \mathbf{Q}, \mathbf{K}, \mathbf{V} = \Phi(\operatorname{PHM}(\mathbf{X})) \\ \mathbf{A} = \operatorname{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d_k}}\right) \mathbf{V}, \end{array}
$$

where $d_k$ is the key dimension, $\Phi: \mathbb{R}^{1 \times d} \to \mathbb{R}^{3 \times \frac{d}{3}}$ is a three-way split on the last dimension, $\mathbf{X}$ is the input sequence, and $\mathbf{A}$ is the self-attentive representation. For multi-head attention, using PHM layers also enables weight sharing not only among the linear transformations of $\mathbf{Q}, \mathbf{K}, \mathbf{V}$ but also among the linear transformations of multiple heads:
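A minimal single-head sketch of $\mathbf{Q}, \mathbf{K}, \mathbf{V} = \Phi(\operatorname{PHM}(\mathbf{X}))$ followed by scaled dot-product attention, with assumed toy sizes and random parameters (the `phm` helper applies $\mathbf{H}$ row-wise across the sequence):

```python
import numpy as np

rng = np.random.default_rng(2)

def phm(X, A, S):
    """Apply y = Hx to each row of X, with H = sum_i A_i ⊗ S_i."""
    H = sum(np.kron(Ai, Si) for Ai, Si in zip(A, S))
    return X @ H.T

n, d, L = 2, 12, 5               # model dim d divisible by both n and 3
dk = d // 3                      # key dimension after the three-way split Φ

A = [rng.standard_normal((n, n)) * 0.1 for _ in range(n)]
S = [rng.standard_normal((d // n, d // n)) * 0.1 for _ in range(n)]

X = rng.standard_normal((L, d))  # sequence of L token vectors
Z = phm(X, A, S)                 # one shared PHM projection, then split
Q, K, V = np.split(Z, 3, axis=-1)

scores = Q @ K.T / np.sqrt(dk)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
Attn = weights @ V
assert Attn.shape == (L, dk)
```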
$$
\mathbf{X} = \operatorname{PHM}([\mathbf{H}_1; \dots; \mathbf{H}_{N_h}]),
$$

where $N_h$ is the number of heads and $(;)$ is the column-wise concatenation. Finally, the position-wise feed-forward network is now defined as

$$
\mathbf{Y} = \operatorname{PHM}(\operatorname{ReLU}(\operatorname{PHM}(\mathbf{X}))),
$$

which transforms $\mathbf{X}$ with two PHM layers.
# 5 EXPERIMENTS

For context, in the field of representation learning using hypercomplex multiplications, quaternion convolutional neural networks (Zhu et al., 2018), quaternion recurrent neural networks (Parcollet et al., 2018a), and quaternion transformers (Tay et al., 2019) have all compared themselves with only real-valued counterparts. Therefore, to be consistent with the rest of the literature, we evaluate PHM-LSTMs and PHM-transformers that are equipped with PHM layers, and compare them with quaternion LSTMs, quaternion transformers, real-valued LSTMs, and real-valued transformers. Both quaternion LSTMs and quaternion transformers replace linear transformations with Hamilton products of quaternions.

Table 1: Experimental results of natural language inference (accuracy) on five different datasets. The PHM-LSTM reduces the parameters of the standard LSTM model and improves or partially matches performance on four out of five datasets.

<table><tr><td>Model</td><td>#Params</td><td>MNLI</td><td>QNLI</td><td>SNLI</td><td>DNLI</td><td>SciTail</td></tr><tr><td>LSTM</td><td>721K</td><td>71.82 / 71.89</td><td>84.44</td><td>84.18</td><td>85.16</td><td>74.36</td></tr><tr><td>Quaternion LSTM</td><td>180K (-75.0%)</td><td>71.57 / 72.19</td><td>84.73</td><td>84.21</td><td>86.45</td><td>75.58</td></tr><tr><td>PHM-LSTM (n = 2)</td><td>361K (-49.9%)</td><td>71.82 / 72.08</td><td>84.39</td><td>84.38</td><td>85.77</td><td>77.47</td></tr><tr><td>PHM-LSTM (n = 5)</td><td>146K (-79.7%)</td><td>71.80 / 71.77</td><td>83.87</td><td>84.58</td><td>86.47</td><td>74.64</td></tr><tr><td>PHM-LSTM (n = 10)</td><td>81K (-88.7%)</td><td>71.59 / 71.59</td><td>84.25</td><td>84.40</td><td>86.21</td><td>77.84</td></tr></table>
To demonstrate the architectural flexibility and effectiveness, we evaluate different settings of PHM-LSTMs and PHM-transformers to show that allowing for flexible choices of the hyperparameter $n$ in the PHM layer may lead to more effective performance. Details of the setup for the experiments are provided in Appendix B.

# 5.1 NATURAL LANGUAGE INFERENCE

The task of natural language inference is to determine the logical relationship between two text sequences (MacCartney, 2009). It is a fundamental task pertaining to language understanding. To this end, natural language inference tasks serve as a suitable benchmark for evaluating recurrent models.
We run experiments on five datasets: (i) MultiNLI (Williams et al., 2017), (ii) QNLI (Quora) (Wang et al., 2017), (iii) SNLI (Bowman et al., 2015), (iv) Dialogue NLI (Welleck et al., 2018), and (v) SciTail (Science Entailment) (Khot et al., 2018). Table 1 reports the results on all these datasets. All in all, such results show that the PHM layer can not only reduce the parameters but also improve performance with flexible choices of $n$ (four out of five datasets show reasonable improvement or partial matches). The only exception is the QNLI dataset, where the performance drop is marginal ($< 1\%$). This is still decent considering the parameter savings: the parameterization cost of the PHM-LSTM is on the order of $\mathcal{O}(1/n)$ of that of the standard LSTM, where the settings $n = 5$ and $n = 10$ are not powers of 2. As detailed in Appendix B, since we use the 300D GloVe (Pennington et al., 2014) embeddings to represent input tokens, we choose multiples of 5 instead of 4 for ease of divisibility. It is also noteworthy that on the SNLI, Dialogue NLI, and SciTail datasets, all the PHM-LSTM variants outperform the standard LSTM model. We think that the element-reusing properties of the Kronecker product operation, in addition to learning to share such reused components amongst recurrent gating functions, may contribute to both effective and efficient representations.

# 5.2 MACHINE TRANSLATION
Machine translation is concerned with translating between source-target language pairs. To this end, sequence transduction models are central to this problem domain. In this experiment, the key goal is to compare PHM-transformers against the standard and quaternion transformer models.

We run experiments on seven datasets: (i) IWSLT'15 English-Vietnamese (En-Vi), (ii) IWSLT'17 English-Indonesian (En-Id), (iii) IWSLT'14 German-English (De-En), (iv) IWSLT'14 Romanian-English (Ro-En), (v) WMT'18 English-Estonian (En-Et), (vi) Setimes English-Macedonian (En-Mk), and (vii) WMT'18 English-Romanian (En-Ro).
Table 2: Experimental results of machine translation (BLEU) on seven different datasets. Symbol $\dagger$ represents re-scaling the parameters with a factor of 2 by doubling the hidden size. The PHM-transformer does not lose much performance despite enjoying parameter savings. Re-scaling can lead to improvement in performance.

<table><tr><td>Model</td><td>#Params</td><td>En-Vi</td><td>En-Id</td><td>De-En</td><td>Ro-En</td><td>En-Et</td><td>En-Mk</td><td>En-Ro</td></tr><tr><td>Transformer (Tm)</td><td>44M</td><td>28.43</td><td>47.40</td><td>36.68</td><td>34.60</td><td>14.17</td><td>13.96</td><td>22.79</td></tr><tr><td>Quaternion Tm</td><td>11M (-75.0%)</td><td>28.00</td><td>42.22</td><td>32.83</td><td>30.53</td><td>13.10</td><td>13.67</td><td>18.50</td></tr><tr><td>PHM-Tm n = 2</td><td>22M (-50.0%)</td><td>29.25</td><td>46.32</td><td>35.52</td><td>33.40</td><td>14.98</td><td>13.60</td><td>21.73</td></tr><tr><td>PHM-Tm n = 4</td><td>11M (-75.0%)</td><td>29.13</td><td>44.13</td><td>35.53</td><td>32.74</td><td>14.11</td><td>13.01</td><td>21.19</td></tr><tr><td>PHM-Tm n = 8</td><td>5.5M (-87.5%)</td><td>29.34</td><td>40.81</td><td>34.16</td><td>31.88</td><td>13.08</td><td>12.95</td><td>21.66</td></tr><tr><td>PHM-Tm n = 16</td><td>2.9M (-93.4%)</td><td>29.04</td><td>33.48</td><td>33.89</td><td>31.53</td><td>12.15</td><td>11.97</td><td>19.63</td></tr><tr><td>PHM-Tm† n = 2</td><td>44M</td><td>29.54</td><td>49.05</td><td>34.32</td><td>33.88</td><td>14.05</td><td>14.41</td><td>22.18</td></tr><tr><td>PHM-Tm† n = 4</td><td>22M (-50.0%)</td><td>29.17</td><td>46.24</td><td>34.86</td><td>33.80</td><td>14.43</td><td>13.78</td><td>21.91</td></tr><tr><td>PHM-Tm† n = 8</td><td>11M (-75.0%)</td><td>29.47</td><td>43.49</td><td>34.71</td><td>32.59</td><td>13.75</td><td>13.78</td><td>21.43</td></tr></table>

Table 2 reports our results on the machine translation tasks. Overall, these empirical results with different settings demonstrate the architectural flexibility and effectiveness of the hypercomplex multiplication parameterization. First and foremost, across six out of seven benchmarks, the PHM-transformer at $n = 4$ makes reasonable gains over the quaternion transformer, signifying that parameterizing hypercomplex multiplications by learning from data can be more effective than predefining Hamilton product rules mathematically. Second, though increasing $n$ leads to more parameter savings, we observe that increasing $n$ all the way to 16 does not cause significant degradation in performance on datasets such as En-Vi. Third, for most datasets, even with significant parameter savings, we find that the decrease in the BLEU score is mostly manageable ($\approx 1$-$3$ BLEU points). However, we also note a rare occurrence where $n = 16$ results in a significant decrease in the BLEU score, such as on the En-Id dataset. Fourth, on several datasets, the PHM-transformer model improves the performance of the standard transformer model. For example, on datasets such as En-Vi and En-Et, the PHM-transformer model enjoys a performance boost of about 0.8 BLEU point with $n = 2$. Finally, by re-scaling with a factor of 2 (doubling the hidden size), we are able to improve the performance on three datasets: En-Vi, En-Id, and En-Mk.

Table 3: Training time (seconds per 100 steps) and inference time (seconds to decode test sets) with beam size of 4 and length penalty of 0.6 on the IWSLT'14 German-English dataset.

<table><tr><td>Model</td><td>Transformer (Tm)</td><td>Quaternion Tm</td><td>PHM-Tm (n = 4)</td><td>PHM-Tm (n = 8)</td></tr><tr><td>Training time</td><td>7.61</td><td>8.11</td><td>7.92</td><td>7.70</td></tr><tr><td>Inference time</td><td>336</td><td>293</td><td>299</td><td>282</td></tr></table>
Table 3 reports the training and inference time for transformer variants. We observe that the PHM-transformer with $n = 8$ has the fastest inference speed amongst all the variants, primarily due to a significant reduction of parameters. The training speed is also approximately comparable across variants. This ascertains that the PHM layer does not add much computational cost in practice.
# 5.3 TEXT STYLE TRANSFER

We continue to experiment with sequence transduction for text style transfer. The goal of this task is to convert text of a certain style into another style. We use the Modern $\rightarrow$ Shakespeare corpus<sup>1</sup> in the experiments. Table 4 reports the results on this text style transfer task. We observe that the best performance is achieved with the PHM-transformer ($n = 4$). Notably, all variants except $n = 16$ increase or match the performance of the standard transformer model. This ascertains the architectural flexibility and effectiveness of the proposed PHM layer, which not only enables parameter savings but also improves the performance of the transformer.
# 5.4 SUBJECT-VERB AGREEMENT

We conduct additional experiments on the subject-verb agreement task (Linzen et al., 2016). The task is to predict whether a sentence, e.g., 'The keys to the cabinet ____.', is followed by a plural or singular verb. The dataset is available online (Linzen et al., 2016). Table 5 reports the results on the subject-verb agreement task. Results are promising, demonstrating that all variants with PHM layers outperform the standard and quaternion transformer models. The best performance peaks at $n = 8$, despite a parameter saving of up to $1/8$.
Table 4: Experimental results of text style transfer. The PHM-transformer may reduce the parameters of the standard transformer model and improve performance.

<table><tr><td>Model</td><td>#Params</td><td>BLEU</td></tr><tr><td>Transformer (Tm)</td><td>44M</td><td>11.65</td></tr><tr><td>PHM-Tm (n = 2)</td><td>22M (-50.0%)</td><td>12.20</td></tr><tr><td>PHM-Tm (n = 4)</td><td>11M (-75.0%)</td><td>12.42</td></tr><tr><td>PHM-Tm (n = 8)</td><td>5.5M (-87.5%)</td><td>11.66</td></tr><tr><td>PHM-Tm (n = 16)</td><td>2.9M (-93.4%)</td><td>10.76</td></tr></table>

Table 5: Experimental results of subject-verb agreement. The PHM-transformer may reduce the parameters of the standard transformer model and improve performance.

<table><tr><td>Model</td><td>#Params</td><td>Acc</td></tr><tr><td>Transformer (Tm)</td><td>400K</td><td>94.80</td></tr><tr><td>Quaternion Tm</td><td>100K</td><td>94.70</td></tr><tr><td>PHM-Tm (n = 2)</td><td>200K (-50.0%)</td><td>95.14</td></tr><tr><td>PHM-Tm (n = 4)</td><td>101K (-74.8%)</td><td>95.05</td></tr><tr><td>PHM-Tm (n = 8)</td><td>56K (-86.0%)</td><td>95.62</td></tr></table>
# 6 RELATED WORK

While neural networks have been a well-established line of research, progress on hypercomplex representations for deep learning is still in its infancy and most works on this topic are new (Gaudet & Maida, 2017; Parcollet et al., 2018a;b; Zhu et al., 2018; Tay et al., 2019). The hypercomplex Hamilton product provides a greater extent of expressiveness, similar to the complex multiplication, albeit with a 4-fold increase in interactions between real and imaginary components. In the case of quaternion representations, due to parameter savings in the Hamilton product, models also enjoy a $75\%$ reduction in the parameter size (Parcollet et al., 2018a; Tay et al., 2019). A striking caveat is that all quaternions are fundamentally limited to 4D hypercomplex space, which restricts architectural flexibility. The other options would be to scale to octonion (8D) or sedenion (16D) space, given the predefined multiplication rules in such spaces. To the best of our knowledge, there is no work that attempts to generalize arbitrary $n$D hypercomplex multiplications to allow for architectural flexibility, where $n$ can be specified or tuned by users.
Our work can also be interpreted as a form of soft parameter sharing, albeit learned from data. Quaternion networks (Zhu et al., 2018; Parcollet et al., 2018b; 2019) are known to possess weight-sharing properties via the Hamilton product operation and have demonstrated reasonable success despite having fewer parameters. To the best of our knowledge, there has been no work that attempts to parameterize the hypercomplex Hamilton product for neural networks, i.e., enabling end-to-end learning of real and imaginary component interactions from data.

# 7 CONCLUSION

We proposed parameterized hypercomplex multiplication (PHM) layers that learn and generalize hypercomplex multiplications. In practice, the PHM layer has approximately $1/n$ of the learnable parameters of its fully-connected layer counterpart, where $n$ can be flexibly specified by users. PHM layers are applicable to dominant models such as LSTMs and transformers. We evaluated models equipped with PHM layers on comprehensive tasks to demonstrate the architectural flexibility and effectiveness of the hypercomplex multiplication parameterization.
Acknowledgements. We thank the anonymous reviewers for the insightful comments on this paper. This work was partially supported by the Ministry of Education (MoE) of Singapore under the Academic Research Fund (AcRF) Tier 1 Grant RG135/18.
# REFERENCES

Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015.

Chase Gaudet and Anthony Maida. Deep quaternion networks. arXiv preprint arXiv:1712.04604, 2017.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Tushar Khot, Ashish Sabharwal, and Peter Clark. SciTail: A textual entailment dataset from science question answering. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521-535, 2016.

Bill MacCartney. Natural Language Inference. PhD thesis, Stanford University, 2009.

Titouan Parcollet, Mirco Ravanelli, Mohamed Morchid, Georges Linares, Chiheb Trabelsi, Renato De Mori, and Yoshua Bengio. Quaternion recurrent neural networks. In International Conference on Learning Representations, 2018a.

Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linares, Renato De Mori, and Yoshua Bengio. Quaternion convolutional neural networks for end-to-end automatic speech recognition. arXiv preprint arXiv:1806.07789, 2018b.

Titouan Parcollet, Mohamed Morchid, and Georges Linares. Quaternion convolutional neural networks for heterogeneous image processing. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8514-8518. IEEE, 2019.

Ankur P Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933, 2016.

Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, 2014.

Adityan Rishiyur. Neural networks with complex and quaternion inputs. arXiv preprint cs/0607090, 2006.

Yi Tay, Aston Zhang, Luu Anh Tuan, Jinfeng Rao, Shuai Zhang, Shuohang Wang, Jie Fu, and Siu Cheung Hui. Lightweight and efficient neural natural language processing with quaternion networks. arXiv preprint arXiv:1906.04393, 2019.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.

Zhiguo Wang, Wael Hamza, and Radu Florian. Bilateral multi-perspective matching for natural language sentences. arXiv preprint arXiv:1702.03814, 2017.

Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. Dialogue natural language inference. arXiv preprint arXiv:1811.00671, 2018.

Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.

Aston Zhang, Zachary C. Lipton, Mu Li, and Alexander J. Smola. Dive into Deep Learning. 2020. https://d2l.ai.

Xuanyu Zhu, Yi Xu, Hongteng Xu, and Changjian Chen. Quaternion convolutional neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 631-647, 2018.
Figure 3: Illustration of reconstructing $\mathbf{H}$ in (3.2) by reusing parameter matrices $\mathbf{B}_i$ ($i = 1, 2$) and $\mathbf{T}_j$ ($j = 1, \ldots, 4$) in real-valued matrix multiplications, followed by more operations (here $n = 2$, $k = 6$, $d = 8$). Best viewed in color.
# A RECONSTRUCTING THE PARAMETER MATRIX

In the paper, the parameter matrix $\mathbf{H}$ in (3.2) is constructed by a sum of $n$ Kronecker products. In the following, we provide an alternative perspective and show how to equivalently reconstruct $\mathbf{H}$ by reusing parameter matrices in real-valued matrix multiplications, followed by more operations.

# A.1 METHOD

The key idea is to operate on partitioned weight blocks and learn a dynamic diffusion of weights. There are two key parameter blocks, $\mathbf{B}$ and $\mathbf{T}$, that are central to our approach. Intuitively, $\mathbf{B} \in \mathbb{R}^{n \times n \times n}$ controls the weight diffusion process and learns the soft interactions between $\mathbf{T}$ partitions. Here, $n$ is a user-defined hyperparameter.
|
| 289 |
+
|
| 290 |
+
Suppose that both $d$ and $k$ are divisible by $n \in \mathbb{Z}_{>0}$ . For $i = 1, \dots, n$ and $j = 1, \dots, \frac{d}{n}$ , denote by each partitioned parameter block $\mathbf{T}_j \in \mathbb{R}^{n \times \frac{k}{n}}$ , and $\mathbf{B}_i \in \mathbb{R}^{n \times n}$ is the weight diffusion matrix assigned to each partitioned parameter block via real-valued matrix multiplication $\mathbf{B}_i \mathbf{T}_j$ . The parameter $\mathbf{H}$ in (3.2) is now constructed by column-wise concatenation;
|
| 291 |
+
|
| 292 |
+
$$
\mathbf{H} = \left[ s\left(\mathbf{B}_1\right); s\left(\mathbf{B}_2\right); \dots ; s\left(\mathbf{B}_n\right) \right], \tag{A.1}
$$
where each segment $s(\mathbf{B}_i)$ is also formed by column-wise concatenation:
$$
s\left(\mathbf{B}_i\right) = \left[ \psi\left(\mathbf{B}_i \cdot \mathbf{T}_1\right); \psi\left(\mathbf{B}_i \cdot \mathbf{T}_2\right); \dots ; \psi\left(\mathbf{B}_i \cdot \mathbf{T}_{\frac{d}{n}}\right) \right]. \tag{A.2}
$$
In (A.2), the function $\psi : \mathbb{R}^{p \times q} \to \mathbb{R}^{pq}$ flattens the matrix $\mathbf{X} \in \mathbb{R}^{p \times q}$ by concatenating its rows, then transposes the concatenated row vector into a column vector of dimension $pq$. It is easy to see that $\psi(\mathbf{B}_i \mathbf{T}_j) \in \mathbb{R}^k$ and $s(\mathbf{B}_i) \in \mathbb{R}^{k \times \frac{d}{n}}$, thus $\mathbf{H} \in \mathbb{R}^{k \times d}$.
It is the partitioned parameter blocks $\mathbf{B}_i$ ($i = 1, \dots, n$) and $\mathbf{T}_j$ ($j = 1, \dots, \frac{d}{n}$) that determine the degrees of freedom of $\mathbf{H}$, namely $kd/n + n^3$. As illustrated in Figure 3, reusing the parameter matrices $\mathbf{B}_1, \dots, \mathbf{B}_n$ and $\mathbf{T}_1, \dots, \mathbf{T}_{\frac{d}{n}}$ in the real-valued matrix multiplications in (A.2) may reduce the degrees of freedom of $\mathbf{H}$.
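As a concrete sketch (not from the paper; `psi`, `reconstruct_H`, and the random blocks are illustrative), the construction in (A.1)–(A.2) can be written in a few lines of NumPy for the Figure 3 setting $n = 2$, $k = 6$, $d = 8$:

```python
import numpy as np

def psi(X):
    # Flatten X row by row, yielding a vector of length p*q, as in (A.2).
    return X.reshape(-1)

def reconstruct_H(B, T):
    # B: (n, n, n) stack of weight diffusion matrices B_1, ..., B_n
    # T: (d/n, n, k/n) stack of partitioned parameter blocks T_1, ..., T_{d/n}
    segments = []
    for B_i in B:                                # one segment s(B_i) per B_i
        cols = [psi(B_i @ T_j) for T_j in T]     # each psi(B_i T_j) is in R^k
        segments.append(np.stack(cols, axis=1))  # s(B_i) is in R^{k x d/n}
    return np.concatenate(segments, axis=1)      # H is in R^{k x d}

n, k, d = 2, 6, 8                                # the setting in Figure 3
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n, n))
T = rng.standard_normal((d // n, n, k // n))
H = reconstruct_H(B, T)
print(H.shape)          # (6, 8), i.e. (k, d)
print(B.size + T.size)  # 32 = kd/n + n^3, versus kd = 48 for a full matrix
```

The shape check makes the parameter saving concrete: the $n^3 + kd/n = 32$ stored entries generate all $kd = 48$ entries of $\mathbf{H}$.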
# A.2 SUBSUMING HYPERCOMPLEX MULTIPLICATIONS
Similarly, we show how the PHM layer with the reconstructed $\mathbf{H}$ in (A.1) also subsumes the hypercomplex multiplication. Taking the Hamilton product of two quaternions $Q$ and $P$ as an example, it can be rewritten as
$$
\left(\underbrace{\left[\begin{array}{llll} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]}_{\mathbf{B}_1} \underbrace{\left[\begin{array}{l} Q_r \\ Q_x \\ Q_y \\ Q_z \end{array}\right]}_{\mathbf{T}_1}; \underbrace{\left[\begin{array}{llll} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{array}\right]}_{\mathbf{B}_2} \underbrace{\left[\begin{array}{l} Q_r \\ Q_x \\ Q_y \\ Q_z \end{array}\right]}_{\mathbf{T}_1}; \underbrace{\left[\begin{array}{llll} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array}\right]}_{\mathbf{B}_3} \underbrace{\left[\begin{array}{l} Q_r \\ Q_x \\ Q_y \\ Q_z \end{array}\right]}_{\mathbf{T}_1}; \underbrace{\left[\begin{array}{llll} 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array}\right]}_{\mathbf{B}_4} \underbrace{\left[\begin{array}{l} Q_r \\ Q_x \\ Q_y \\ Q_z \end{array}\right]}_{\mathbf{T}_1}\right) \left[\begin{array}{l} P_r \\ P_x \\ P_y \\ P_z \end{array}\right], \tag{A.3}
$$
where the 4 output elements are the real values for the quaternion unit basis $[1, \mathbf{i}, \mathbf{j}, \mathbf{k}]^{\top}$ . According to (A.3), when $n = 4$ , the PHM layer with the reconstructed parameter matrix can also be learned to exactly express the Hamilton product of quaternions. Likewise, hypercomplex multiplications of octonions or sedenions can also be learned by the PHM layer when $n$ is set to 8 or 16.
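This subsumption can be checked numerically. The following sketch (assuming NumPy; `hamilton` is an illustrative helper, not from the paper) builds the $4 \times 4$ matrix from the fixed $\mathbf{B}_i$ in (A.3) and compares it against a direct implementation of the Hamilton product:

```python
import numpy as np

# The four fixed weight diffusion matrices B_1, ..., B_4 from (A.3).
B = np.array([
    [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
    [[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]],
    [[0, 0, -1, 0], [0, 0, 0, -1], [1, 0, 0, 0], [0, 1, 0, 0]],
    [[0, 0, 0, -1], [0, 0, 1, 0], [0, -1, 0, 0], [1, 0, 0, 0]],
], dtype=float)

def hamilton(q, p):
    """Direct Hamilton product of quaternions q = (r, x, y, z) and p."""
    qr, qx, qy, qz = q
    pr, px, py, pz = p
    return np.array([
        qr * pr - qx * px - qy * py - qz * pz,
        qr * px + qx * pr + qy * pz - qz * py,
        qr * py - qx * pz + qy * pr + qz * px,
        qr * pz + qx * py - qy * px + qz * pr,
    ])

rng = np.random.default_rng(1)
q, p = rng.standard_normal(4), rng.standard_normal(4)
# Concatenating the columns psi(B_i q) yields the 4 x 4 matrix in (A.3).
H = np.stack([B_i @ q for B_i in B], axis=1)
assert np.allclose(H @ p, hamilton(q, p))
```

Because the $\mathbf{B}_i$ here are fixed rather than learned, this is the special case the PHM layer can recover when $n = 4$.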
# A.3 SUBSUMING REAL-VALUED MATRIX MULTIPLICATIONS
Now we show how the PHM layer with the reconstructed $\mathbf{H}$ in (A.1) also subsumes matrix multiplication in real space. Referring to (3.2), when $n = 1$, $\mathbf{H} = b\mathbf{W}$, where the scalar $b$ is the single element of the $1 \times 1$ matrix $\mathbf{B}_1$ and the elements of $\mathbf{W} \in \mathbb{R}^{k \times d}$ come from the concatenation of $\mathbf{T}_1, \ldots, \mathbf{T}_d \in \mathbb{R}^{1 \times k}$. Since learning $b$ and $\mathbf{W}$ separately is equivalent to learning their product jointly, the scalar $b$ can be dropped, which amounts to learning the single weight matrix of an FC layer. Therefore, a PHM layer degenerates to an FC layer when $n = 1$.
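A minimal numerical check of this degeneration (variable names are illustrative, not from the paper):

```python
import numpy as np

# With n = 1, each B_i is a scalar b and each T_j is a 1 x k row, so the
# reconstruction in (A.1)-(A.2) collapses to H = b * W, a plain FC weight.
k, d = 3, 5
rng = np.random.default_rng(2)
b = rng.standard_normal()                    # the single 1x1 block B_1
T = rng.standard_normal((d, 1, k))           # T_1, ..., T_d, each 1 x k
H = np.stack([(b * T_j).reshape(-1) for T_j in T], axis=1)  # psi(B_1 T_j) cols
W = np.concatenate(list(T), axis=0).T        # stack the rows T_j, transpose
assert np.allclose(H, b * W)                 # PHM with n = 1 is b times an FC weight
```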
# B SETUP FOR EXPERIMENTS
We describe the setup for the experiments as follows.
# B.1 NATURAL LANGUAGE INFERENCE
We implement 300D unidirectional encoders with shared parameters for both premises and hypotheses. We take the concatenation of max and mean pooled representations as the input to a two-layer 300D multilayer perceptron for prediction. Our model is trained with the Adam optimizer with a learning rate of 0.0004 and a batch size of 256. Word embeddings are initialized with GloVe (Pennington et al., 2014) and are fixed. No cross sentence attention (Parikh et al., 2016) is used, mainly to observe the effectiveness of standalone encoders. For PHM-LSTM, we use $n = \{2, 5, 10\}$ . Note that in this task, since word embeddings are 300D, we select multiples of 5 instead of 4 for ease of divisibility.
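As a quick check, not from the paper, of which candidate values of $n$ evenly divide a 300-dimensional hidden size:

```python
# Both d and k must be divisible by n. With 300D embeddings, 4 itself divides
# 300, but the larger powers of 2 (8, 16) used elsewhere do not, so a sweep
# over multiples of 5 stays divisible throughout.
hidden = 300
candidates = [2, 4, 5, 8, 10, 16]
valid = [n for n in candidates if hidden % n == 0]
print(valid)  # [2, 4, 5, 10]
```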
# B.2 MACHINE TRANSLATION
For the IWSLT'15 English-Vietnamese (En-Vi), IWSLT'17 English-Indonesian (En-Id), IWSLT'14 German-English (De-En), and IWSLT'14 Romanian-English (Ro-En) datasets, we train for 50K steps, while for the WMT'18 English-Estonian (En-Et), Setimes English-Macedonian (En-Mk), and WMT'18 English-Romanian (En-Ro) datasets, models are trained for 100K steps. For the En-Vi, En-Id, En-Et, En-Mk, and En-Ro datasets, transformers have 4 layers, 8 heads, and a hidden size of 512. For the De-En and Ro-En datasets, transformers have 2 layers, 4 heads, and a hidden size of 256. We use a beam size of 5 and $\alpha = 0.6$ (length penalty) for decoding. For all PHM models, we benchmark several settings of the hyperparameter $n = \{2,4,8,16\}$.
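The per-dataset settings above can be summarized in a small lookup; the dictionary names are illustrative, with the values taken from the text:

```python
# Per-dataset training steps and transformer sizes, as described above.
STEPS = {"En-Vi": 50_000, "En-Id": 50_000, "De-En": 50_000, "Ro-En": 50_000,
         "En-Et": 100_000, "En-Mk": 100_000, "En-Ro": 100_000}
SMALL = ("De-En", "Ro-En")  # 2 layers, 4 heads, hidden 256; others are larger
ARCH = {ds: {"layers": 2, "heads": 4, "hidden": 256} if ds in SMALL
        else {"layers": 4, "heads": 8, "hidden": 512}
        for ds in STEPS}
# Every benchmarked n in {2, 4, 8, 16} divides both hidden sizes evenly.
assert all(cfg["hidden"] % n == 0 for cfg in ARCH.values() for n in (2, 4, 8, 16))
```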
# B.3 TEXT STYLE TRANSFER
For the Modern $\rightarrow$ Shakespeare corpus$^2$ used in the experiments, the goal is to convert modern English into Shakespearean writing. The dataset comprises 18,395 parallel sentences for training, 1,218 parallel sentences for evaluation (development set), and 1,462 parallel sentences for testing. As before, transformers have 4 layers, 8 heads, and a hidden size of 512. As in machine translation, we experiment with $n = \{2,4,8,16\}$. We train all models for 10K steps.
# B.4 SUBJECT-VERB AGREEMENT
In contrast to the previous experimental settings, we use a smaller transformer architecture with 10K training steps. Specifically, transformers here have 2 layers, 4 heads, and a hidden size of 128. Since the hidden size is smaller than in the previous experimental settings, we experiment with $n = \{2,4,8\}$.